Episode 60

Ethically Exploiting Vulnerabilities with John Hammond

EPISODE SUMMARY

As thousands of vulnerabilities are discovered yearly, how can security teams prioritize mitigation responses? John Hammond, content creator and Senior Security Researcher at Huntress, joins the podcast to explain the importance of recreating exploits, how to determine potential impacts, and the fine line between hackers and attackers.


Chloé Messdaghi, Co-host of the 401 Access Denied Podcast

Chloé is the Chief Impact Officer at Cybrary where she develops impactful strategies to enhance trust, mitigate risk, and be purpose-driven.


Joseph Carson:
Hello everyone. Welcome to another episode of the 401 Access Denied podcast, brought to you by both Delinea and Cybrary. I'm one of your hosts for the show, Joseph Carson, Chief Security Scientist, and I'm joined again by Chloé. Chloé?
 
Chloé Messdaghi:
Hi, my name is Chloé and I am the Chief Impact Officer over at Cybrary.
 
Joseph Carson:
Awesome. And we've got an awesome guest for today's episode. We've actually brought John Hammond in to tell us all about... well, what do you do? Tell us about yourself.
 
John Hammond:
Oh, hey, thanks so much. Yeah. Hello, my name's John Hammond. I'm a senior security researcher at a company called Huntress. So I do a little bit of threat hunting analysis, malware investigation, all that fun stuff. And then off on the side, I have a cheesy YouTube channel, where I showcase a lot of cybersecurity content and education, hopefully just for training, just for learning, all the good stuff.
 
Joseph Carson:
On the content side, when you're doing the content, do you enjoy doing it? Is it the thing that you enjoy?
 
John Hammond:
Yes.
 
Joseph Carson:
Is it what's going to get you excited?
 
John Hammond:
No, absolutely. If that wasn't clear, the YouTube channel is really a passion. It's a labor of love. It's something that's fun to play with and something to explore on the side. And honestly, I think, hey, that's grown and blossomed into something very, very cool. But I'm just grateful, hey, we can bring more education and more training to more people.
 
Joseph Carson:
Absolutely. For me, I mean, your content is awesome.
 
John Hammond:
Thank you.
 
Joseph Carson:
It is fantastic. And I think for those who create content out there, it is so valuable for the industry, because it's giving back, and I'm always looking for ways to give back, because we've benefited for many years from our mentors and from being in the industry. And I think what's really changing is bringing new people in and getting people excited, and somewhat bringing the fun back. So when I'm listening to you or IppSec or STÖK or other content creators, even Ben, I'm just like, you make it fun.
You bring some of the entertainment back. And I think that's what's been a little bit missing: bringing the excitement back, and the way that you deliver it gets everyone excited. So I think, for me, that's definitely what we need to be doing, bringing the excitement back into security. And by doing that, you get new people wanting to get into the industry. And I think that's one of the things we need to be doing.
So the topic for today is not going to be just about content creation; the topic for today is going to be vulnerabilities. Because I know you've covered a lot of vulnerabilities over the years and really gone into detail about how exploitable they are, how to create payloads, how to use them, and really shown the risk. In the past year or so, what are some memorable vulnerabilities that you've experienced that got you excited?
 
John Hammond:
Oh, yeah, well, thank you. So honestly, there are so many. Hey, we know that's-
 
Joseph Carson:
Never ending.
 
John Hammond:
The treadmill of cybersecurity is that, hey, there's always another one coming next. There's something else on the horizon, whatever incident, whatever breach, whatever attack, whatever zero-day making a lot of noise in the news. I think I try to be very selective about which vulnerabilities we really need to scream and shout about, because there is a certain conversation and a little bit of a balancing act around the doom and gloom, alarmist, fear, uncertainty and doubt stuff. And I really want to be cognizant of avoiding that. I don't want to chase an ambulance if it's just another ambulance going down, right?
So the ones that have stuck out in my mind are the ones where, okay, we really need to be vocal about this. And I think those are the big-name ones that folks tend to know. Okay, PrintNightmare certainly was worth a commotion. Log4j, Log4Shell, certainly worth the commotion. Some folks probably remember HAFNIUM, the ProxyShell and ProxyLogon stuff that was targeting Microsoft Exchange servers. And now most recently, as of the time of us getting together to chat, is Follina, the Microsoft Support Diagnostic Tool or MSDT one. It's funny, I can rattle off the CVE numbers and IDs off the top of my head. But I don't mean to be that nerdy on you guys.
 
Joseph Carson:
Which is okay, because many of the audience are somewhere in that thought leadership space, just wanting to get the latest hot topics. But we have a lot of techies out there, a lot of nerds like myself. Even when I see the CVEs, I'm like, "Oh, yeah, that's that one."
 
Chloé Messdaghi:
Yeah. I actually started my career in cyber security through vulnerability management.
 
John Hammond:
Oh, goodness.
 
Chloé Messdaghi:
So I'm like, okay, let's talk here. So how do you factor in what to prioritize? Because I always find that's the number one issue. And it's like, "Oh, well, we can use scanners." And then we have other things that we can layer on top of scanners to know what to prioritize. But how do you pick what to prioritize?
 
John Hammond:
Yeah. So coming from a little bit more of that offensive background or adversarial perspective, red teaming and pen testing, the stuff that really helps speedrun a threat actor and gives them a shortcut is absolutely what's attractive to them. With that said, remote code execution and the critical, high-impact stuff is what they're going to go after. And the easier that is, the lower the barrier to entry is for a specific exploit or for a specific technique or vulnerability, the more impactful it becomes. Oh, okay, we found this thing where we can get code execution without authentication. Okay, without authorization, you don't need user credentials, that sounds like a smoking gun. That sounds like a big, bad weapon that could be used.
The thing is, when the reporters and the journalists come out of the woodwork and they say, "Hey, who's the target here? Are they going after country X, Y, Z, region A, B, C?" or any of that noise, I think the scariest thing that we come to is there is no target. It's literally just spray and pray across the internet, because the vulnerability is just so easy and so accessible, and the exploit can be so widespread. Those are the things we need to maybe sound the alarm for.
 
Joseph Carson:
Yeah, the ones that are opportunistic. They're just like, "I can just target everybody and hopefully I'll catch at least one," and then you exploit it, you take advantage of it, whether the motivation is financial or data disclosure. Absolutely. Those opportunistic ones are the ones I'd start with. With the targeted ones, you always have to be much more analytical and really look at the metrics, and try to make it difficult for them, so they create noise in the network that can give you some visibility.
But the ones that scare me are the ones that just go on. They can target anybody and anyone that's connected to the internet. And if those services are exposed, it makes it more difficult, more challenging. I almost think there are two types. You talk about remote code execution, which for me is always the one, whether privileges are needed or not, whether you have access or not. The other one that I always see is, well, sometimes you've got those who already have some initial access and they're just lying in wait, waiting for the local privilege escalation ones.
PrintNightmare was also capable of doing that. How do you evaluate those as well, when all of a sudden, maybe the threat actor's got initial access and they're just waiting? And all of a sudden, when that zero-day comes out and they can elevate up to a local administrator or a domain administrator or higher, how risky is that?
 
John Hammond:
Totally. There are so many things to unpack with that one, because you're absolutely right. Hey, a threat actor could be lurking and dwelling in the environment, just waiting to strike. That's a reality. Oftentimes you see more of a smash-and-grab operation, but again, it's the more sophisticated or advanced persistent threats, APTs, that will just wait for the moment of opportunity. So when a vulnerability like PrintNightmare, as you mentioned, comes along, that opens the door.
PrintNightmare was pretty scary, because that could certainly be used for remote code execution from a distance, but then even on a box, on a target, for local privilege escalation. Hey, we take our Joe Schmo user and then we bring them up to root or administrator, et cetera. How I tend to analyze or prioritize those is, again, okay, is this based off of an inherent, native feature or functionality that's core to the operating system? And that's absolutely why PrintNightmare made such a splash, I think, because that's Windows, that is the computer.
 
Joseph Carson:
It's built in. It's on every computer. That's the issue: how widespread is it? Even Log4Shell, once you get into it, is so widespread. There are so many systems using it, and you even get into things like Java vulnerabilities as well. It's on almost every computer. And that, for me, is the impact, and then how easy and exploitable is it?
 
John Hammond:
And I think it's really interesting... If it's all right, I know it's sometimes a spicy, polarizing topic, but when the community comes together in the response to these headlines and zero-day and vulnerability stuff shaking up the scene, I really love to see how we can come together. We've got detection engineers that do great work, we've got other threat hunters and cyber threat analysts and intelligence folks that are trying to determine, okay, are there any entities or threat actor groups already using this? We also have some of those red team individuals that are like, "Hey, let's create a proof of concept. Let's recreate that with a benign payload, like popping open the calculator or notepad," right?
I'll admit I, again, come from the hey, let's have some fun, let's hack away side, and I try to recreate an exploit. And there are conversations, which I'm sure some folks might raise, like, "Hey, aren't you just enabling more cybercrime?" Yes, I realize this is a can of worms and I'm trying to be very careful in dancing around this. But I think it's important for the community. And yes, okay, there are bad people that might use these, but the good people can very well use this just as well. That's how you can detect this, that's how you can work and validate against this, with the patch or workaround or mitigation efforts. And if you know what the adversary is going to use, you're better off than not knowing what they would have.
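
To make the "benign payload" idea concrete, here is a minimal, purely illustrative Python sketch of the pattern John describes: when recreating an exploit as a proof of concept, the payload itself does nothing harmful, it just pops the calculator so that code execution can be verified and detections can be validated. The function name and the non-Windows fallback are assumptions for illustration, not taken from any particular proof of concept.

```python
# A minimal sketch of a "benign payload" used in proof-of-concept work: visible,
# harmless evidence that code execution was achieved, suitable for validating
# detections. Names here are hypothetical, not from any specific PoC.
import platform
import subprocess

def benign_payload() -> None:
    """Pop a calculator as a harmless, visible marker of code execution."""
    if platform.system() == "Windows":
        subprocess.Popen(["calc.exe"])   # the classic "pop calc" proof
    else:
        # Rough non-Windows stand-in; fall back to a print if xcalc is absent.
        try:
            subprocess.Popen(["xcalc"])
        except FileNotFoundError:
            print("benign payload executed")

if __name__ == "__main__":
    # In a real PoC this would be reached through the vulnerability being
    # demonstrated; here it is simply called directly.
    benign_payload()
```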
 
Chloé Messdaghi:
Make it very realistic. At the end of the day, if you want to get that taken care of, you want to learn, you've got to make it as realistic as possible. And yeah, it is scary for many people: "Wait, but aren't you just training people to do devious things?" It's like, well, think about it this way: if I'm going to do it ethically and stay within scope and everything, and I'm doing it for learning purposes, not to take advantage of someone, then I'm doing the right thing. Once again, get permission.
 
Joseph Carson:
Yeah, always make sure.
 
Chloé Messdaghi:
Always get permission.
 
Joseph Carson:
Yeah, stay legal, do it with authorization.
 
John Hammond:
Yeah. Totally a balancing act. But I think the communication and the transparency is paramount on that.
 
Joseph Carson:
Yeah. So absolutely, I agree, because you need to think like an attacker. You need to understand it. And if you go through the steps... I mean, the reason I went through some of the previous content you've created is for me to understand it, so I can look at what things along the way I can actually identify as indicators of compromise, or make it more difficult, or at least maybe sandbox it or limit the ability for attackers to exploit it. So for me, knowing and being transparent is more important than hiding it and not talking about it, because I think that makes it more dangerous.
 
John Hammond:
I love that you mentioned that. It's a fascinating conversation when you get into, okay, malware and threat actors that might use and abuse these new vulnerabilities. And I harken back to, I know it sounds a little bit dated now, but WannaCry is a fine example. Ransomware, hey, targeting some of the EternalBlue, SMB version 1 stuff, the protocol abuse there. The reason that WannaCry was able to be, okay, "taken down," is because we looked into it, because you had researchers that would take it apart, that would disassemble it, work through it in a debugger and understand, "Oh, if we just had a domain name registered, we could turn this thing off." I don't know, I think there should always be scrutiny and understanding of what is the threat here? What's the vulnerability? What's being taken advantage of? And we all need to be in it together. At the end of the day, hey, it takes a village.
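
For context on the "domain name registered" point: researchers found that WannaCry tried to reach a hard-coded, unregistered web domain and stopped running if the request succeeded, so registering that domain effectively acted as a kill switch. Below is a minimal, hypothetical Python sketch of that pattern; the real sample made an HTTP request rather than a bare DNS lookup, and the domain shown is a placeholder, not the actual one.

```python
# Sketch of a WannaCry-style kill-switch check (simplified): if the hard-coded
# domain becomes reachable, the sample halts instead of spreading further.
# The domain below is a placeholder; the real sample used an HTTP request.
import socket

KILL_SWITCH_DOMAIN = "example-killswitch-domain.invalid"  # placeholder only

def kill_switch_tripped() -> bool:
    """Return True if the kill-switch domain resolves, i.e. someone registered it."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    if kill_switch_tripped():
        print("Domain resolves: a WannaCry-style sample would stop here.")
    else:
        print("Domain does not resolve: the sample would keep going.")
```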
 
Joseph Carson:
Yeah. When you're looking at these different vulnerabilities, how important is it for you to also look at the mitigations? How can you reduce the risk? Is it patchable? For me, I always look at it as: how easy is this to get patched? How easy is it for me to reduce the risk or to harden the box so that people can't take advantage of it? When you're going through that, is that something you consider and put into the content as well, how people can actually reduce the risk from these?
 
John Hammond:
Absolutely. Sure, creating the cool proof of concept and the exploit is nice and fancy, but you have to be able to, again, validate and verify that workarounds are working, that a mitigation is in place while we're still waiting for a patch to become available, which I think has become a very unfortunate norm. I've got to be honest, guys, I don't know if we even have a patch yet for this Follina thing, the Microsoft Support Diagnostic Tool one. And we were waiting a couple of times for PrintNightmare. Okay, that got patched, what, a second or third time? Mitigations and workarounds are, unfortunately, what we have to turn to more often than not now. Oftentimes, that has its own consequences. Like, hey, the "solution" for PrintNightmare was, so just stop printing, right? That doesn't work.
 
Joseph Carson:
Disable printing for... We're done.
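
For readers who want the concrete version of "just stop printing": the widely published interim workaround for PrintNightmare was to stop and disable the Windows Print Spooler service (or restrict remote printing via Group Policy) until patches were applied. The snippet below is a rough sketch of that idea, not an official remediation script; in practice this is usually done with Group Policy or PowerShell rather than Python.

```python
# Rough sketch of the published PrintNightmare workaround: stop the Windows
# Print Spooler service and keep it from starting again. Must be run from an
# elevated (administrator) context; printing will not work until re-enabled.
import subprocess

def disable_print_spooler() -> None:
    # Stop the service if it is running (ignore the error if it is already stopped),
    # then set its startup type to disabled so it does not come back on reboot.
    subprocess.run(["sc.exe", "stop", "Spooler"], check=False)
    subprocess.run(["sc.exe", "config", "Spooler", "start=", "disabled"], check=True)

if __name__ == "__main__":
    disable_print_spooler()
    print("Print Spooler stopped and disabled.")
```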
 
John Hammond:
Yeah. And if I could offer whatever advice or words of wisdom: whenever there's a new vulnerability or something new to sound the alarm about, you really have to assess your own threat model. You have to know your own risk. You have to know what's appropriate for your business, your organization, your operations, because that's variable.
Before we jumped on live to record this segment, we were talking about CVSS, the severity score for how you rate the impact of a vulnerability and how you categorize it. Sure, you can go by the numbers, and the higher it is, the more impact it has. But keep in mind, that's totally subjective, that's totally relative and variable, dependent on your worldview. You decide the priority of a threat or vulnerability.
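
To illustrate the "totally relative and variable" point, here is a small, purely hypothetical Python sketch of how a team might blend a CVSS base score with its own context, such as internet exposure, known exploitation, and asset criticality, to decide priority. The fields and weights are made up for illustration; they are not part of CVSS or any particular vendor's methodology.

```python
# Hypothetical prioritization sketch: start from the CVSS base score, then
# adjust for environment-specific factors. The weights are arbitrary examples.
from dataclasses import dataclass

@dataclass
class VulnContext:
    cve_id: str
    cvss_base: float          # published base score, 0.0 - 10.0
    internet_facing: bool     # is the vulnerable service exposed to the internet?
    exploited_in_wild: bool   # known active exploitation or public exploit code?
    asset_criticality: float  # 0.0 (lab box) to 1.0 (crown jewels), set by the org

def priority_score(v: VulnContext) -> float:
    score = v.cvss_base
    if v.internet_facing:
        score += 2.0
    if v.exploited_in_wild:
        score += 3.0
    # The same CVE on different assets gets a different urgency.
    score *= 0.5 + v.asset_criticality
    return round(score, 1)

if __name__ == "__main__":
    findings = [
        VulnContext("CVE-XXXX-0001", 9.8, True, True, 1.0),    # exposed, exploited, critical asset
        VulnContext("CVE-XXXX-0002", 9.8, False, False, 0.2),  # same base score, closed network
    ]
    for v in sorted(findings, key=priority_score, reverse=True):
        print(v.cve_id, priority_score(v))
```

The point of the sketch is simply that two findings with an identical base score can land in very different places once an organization's own threat model is applied.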
 
Joseph Carson:
Absolutely. This is where you started off.
 
Chloé Messdaghi:
Yeah.
 
John Hammond:
Totally. I'm sorry. I didn't mean to go off on a tangent.
 
Chloé Messdaghi:
Oh, no.
 
Joseph Carson:
I mean, from your side of things, what did you focus on? Was it more on discovery or mitigations, or was it more of-
... reporting it back to the business and trying to figure out, what priority is it?
 
Chloé Messdaghi:
I think the first thing was just getting people to understand that you can't fully rely on scanners. That was the first thing I focused on: it is very, very challenging to just overly rely on scanners. And that's the reason why I went to bug bounty next, because, oh wait, we've also seen the research and it shows scanners are great to use, but with bug bounty, you're going to find those vulnerabilities that the scanner didn't pick up on. And that's so important because, once again, we're sharing knowledge. And whenever there's a vulnerability that just drops, it always seems like suddenly the entire bug bounty scene, everyone's submitting that one. And then the triage teams for all the bug bounty platforms are like, "Oh, God, it's going to be a long couple of days."
 
John Hammond:
There's a Bat-Signal in the sky and the Avengers assemble.
 
Joseph Carson:
Yeah, yeah, because my-
 
Chloé Messdaghi:
And everyone's like, "Oh, yes, going to get that pay."
 
Joseph Carson:
Yeah. So my background, I came from the patch management side of things. And it just felt like a never-ending story. Patch Tuesday comes out, and then you get into Wednesday, where you actually try to understand it. And then it was proof-of-concept Thursday, deploy Friday, cry on Saturday, and roll it back on Sunday-
 
John Hammond:
Oh goodness.
 
Joseph Carson:
... try again Monday. And you're already into Tuesday again, and it's just like, repeat; you just kept getting into that cycle. And it almost became a never-ending story. And I think, to your point, you have to be very subjective to really understand it... Because you can just get into that never-ending loop, and all you do is deploy patches without really thinking about how impactful it is to the business. Does it really apply? Is my environment unique or different in a way that makes this more or less exploitable? So I think organizations really need to have...
You can use the scanning tools, which ultimately just give you basically the same CVE results. And you can look at the scores and determine, is this something I should patch? But you really have to understand: what is the impact to the business? Is this connected to the public internet, which makes it much more critical? Is it something that's on a closed network? I think organizations really still...
I don't think they've got to the point where they really understand that, at least to a point where they become efficient. And I think it's important for organizations to have people like you to really start to give them that subjective view. That's why I think it's really important for organizations to work with bug bounty companies, to help them understand how it's impactful to their business as well. Do you think organizations can do this alone or do they need help?
 
John Hammond:
I think I side with you in that there isn't an easy button for this, which is probably why this gets pushed away or isn't at the forefront of folks' minds, because, yeah, it's going to take a little bit of effort to know and understand and really see, okay, what is this threat, what is this vulnerability? Whether it's Log4j, Follina, PrintNightmare, et cetera, we can enumerate here and there, but it takes the knowledge being spread and the information being made available, and it takes the organization to actually read that and consume it and understand it.
Could an organization do this alone? I think so. Maybe I'm optimistic. It obviously helps. I'm glad we were chatting about bug bounty, because at the end of the day, yes, it's incredible, it's a great initiative. Always continue to patch vulnerabilities. Keep patching. It's a tried-and-true method: just patch and have those updates. I realize, hey, we can put whatever asterisk and disclaimer on that says, "Oh, what if that breaks something else?" Trust me, I think you're going to see more benefit in being safe than sorry. And I think I had one more nugget in there that I was trying to think of, but there's more to uncover here, for sure.
 
Joseph Carson:
Okay. So what's next, or are there areas of improvement we can make around this? I remember, it was interesting, I was on a panel years ago with a bunch of ethical hackers, and then we had law enforcement, and it was always that fine balance between how far is going too far? How do you stay legal? So where are the boundaries for... What are the boundaries for you to make sure you're always doing it in a legal manner? Are there certain things that you want to make sure you're not getting into, where you'd be crossing that line?
Because I remember I used to do things like taking vulnerability disclosures or database dumps of password hashes and then correlating them together. And then the police are going, "You can't do that, because you're theoretically creating a new vulnerability or you're creating a new data breach." So they always said it's always a fine line. What areas do you say are crossing the line when you get into this?
 
John Hammond:
I think we're suddenly getting closer to the hot topics of, oh, okay, offensive security tooling and the release and the timing of that, right? I do not, and I do not like to share... Okay, obviously, I release some proof-of-concept code to be able to recreate the exploit against some vulnerabilities. But I have the guilt and shame again: okay, am I enabling more evil? Okay, there are already exploits out there. There are already exploits in the wild. This is known; I'm not adding to the noise any more than I would have if that were not the case, right? I've had some conversations about that, like, should there ever be, like when a researcher discloses a vulnerability to an organization, there's a 90-day waiting period-
 
Joseph Carson:
Yeah, the disclosure window.
 
John Hammond:
... disclosure window? Yeah. So part of me wonders, okay, if a new vulnerability's out there, should pen testers and red teamers wait 90 days before dropping the exploit scripts? I don't know, it's an interesting thing to think about. As long as we're communicating, as long as we're transparent, as long as we are doing this all for the reasons of, hey, we're bettering security, I think that's the most important thing. But I know for a fact that, yeah, okay, sometimes we're making it easier on the adversary than we need to. The PrintNightmare proof of concept that I had written with a good friend of mine is in the Conti ransomware gang playbook. I don't know, hey, you can see the leak, but there's our name and our GitHub repository, and that's crazy and wild and sometimes super scary. I don't know, but some things to be cognizant of when you work with vulnerabilities.
 
Joseph Carson:
But at the same time, though, attackers are using things like PsExec, they're using GMER.
 
John Hammond:
Totally.
 
Joseph Carson:
When I was going through incident response and looking, they're using a lot of the tools that we use for everyday system administration. GMER, they were using it to try to find out what antivirus and anti-malware software is running in the kernel.
 
John Hammond:
Oh, wow.
 
Joseph Carson:
So, if you don't see an agent, they're running GMER to see the AV, so they can do obfuscation and avoid it. For me, I think the right thing to do is to be transparent and let the world know. We had Casey Ellis on and we had Katie Moussouris on, so we actually did a whole episode on vulnerability disclosure. That 90-day window is a ballpark for me. It all comes down to: is this something that we need to know about? Is this something that we need to be thinking about and already starting to look for the indicators of compromise, because it's already been exploited? I actually met with Chris last night. So Chris Krebs, I was chatting with him.
 
John Hammond:
Oh, turning jealous.
 
Joseph Carson:
And one of the things, so I said to him, I was like, when you came on board and you took over CISA, basically the biggest thing, the big change I saw, was that a security researcher's role used to be sending information in one direction. And the big change since Chris came on board was that it became a two-way information street.
 
John Hammond:
Totally.
 
Joseph Carson:
And it completely changed. We started seeing government agencies becoming more proactive and actually sending out notifications and alerts that sometimes they might've kept to themselves for a long time. But they started actually working in the communities, working with researchers. And I think that's a big difference. And I think, for us, it makes it much easier moving forward if we have that two-way communication. And with Jen Easterly, I've definitely seen that continue. I think the government is now becoming more part of the solution than it was maybe four, five years before, when it was basically maybe part of the problem.
 
John Hammond:
Yeah. Obviously, we're not trying to speak with any ill intention or throw shade one way or the other, but this is the best cybersecurity team that we've ever had on the field.
 
Joseph Carson:
Absolutely.
 
John Hammond:
Without a doubt. I think we're taking a more proactive defense and defend-forward stance, and I am so happy to see it.
 
Joseph Carson:
Yeah, absolutely. When I met him, I was like, I have to give you the feedback, because I've been doing this for years, and I was like, you made a big difference in working with the industry, which had never happened before. I was actually talking with the FBI as well. And they were like, "We've been listening and monitoring for so many years, it's hard to share information. It's a whole change in culture for us." And it was always ironic. It's like, I get it. So what's next for you? What content are you looking to create going forward?
 
John Hammond:
Oh, yeah. No, thank you. Hey, on the vulnerability end and for some day job stuff in the classic Huntress sense, whatever else comes up, new things and winds are brewing. But for content creation over on the personal side, I have been having a lot of fun with creating a small Active Directory environment, because I know there are a lot of folks that are super interested. And again, oh, that pen tester, red team world, it's like, what about domain controller environments? What are we going to do with BloodHound? How are we cracking into stuff with Impacket? So I want to showcase both the building perspective of that, and then pivoting and bouncing back and forth to breaking different things. So it's been a lot of fun and I'm excited for that to come out and come to life.
 
Joseph Carson:
Awesome. Any timeframe?
 
John Hammond:
It's here. I released the first video today, or just yesterday, admittedly, and I'm trying to sprinkle them out now. It's funny, because I try to record and get stuff in the backlog so, okay, I don't have to stress about staying up really late to record something, and I can just slowly sprinkle it out. But then I'll have to go back to the grind and record again. So I have about 10 ready, and I'll sprinkle them out slowly.
 
Joseph Carson:
Awesome.
 
Chloé Messdaghi:
Nice.
 
Joseph Carson:
Any thoughts from you? What can we do? And what can we do better?
 
Chloé Messdaghi:
I mean, just overall, I've seen your videos. We've chatted about CTFs all the time, and it's really good to just keep on doing what you're doing. You're giving back to the community for them to learn. And anyone who's like, "Oh, aren't you training the bad guys with all that good stuff and all that?" Forget them. They don't understand.
 
John Hammond:
Forget it.
 
Chloé Messdaghi:
At the end of the day, they don't understand. Because there is a fine line between an attacker and a hacker.
 
John Hammond:
Agreed.
 
Chloé Messdaghi:
As long as we just keep on teaching people that, I think that's important.
 
Joseph Carson:
Absolutely. I mean, for me, hacking's not a crime; it's the motive and how you use it that basically differentiates doing criminal work from doing work that's good for the world. So keep on hacking, because what you're doing is making the world a safer place for all of us. And I think that's, for me, the big difference. So, for me, it's all about how you use your knowledge, how you apply it, and the difference it makes. Because I would rather know about a vulnerability than not. And I'd rather know what to look for and have that knowledge beforehand than have somebody already in my environment and be trying to clean it up, because it's more difficult to clean it up than it is to prevent it. And I think that makes the difference. So absolutely, keep on hacking, keep on making the world a safer place. You're definitely one of the most awesome people in the community and one of the best content creators.
 
John Hammond:
Thank you.
 
Joseph Carson:
And I think there's a lot of new, young talent coming into the industry that you're attracting. You're solving our skills gap in a very unique way. So thank you, and it's awesome.
 
John Hammond:
Thank you. I'm very flattered, and this has been a real treat. Thank you so much for letting me spend some time with y'all.
 
Joseph Carson:
Absolutely. It's been a pleasure.
 
Chloé Messdaghi:
Yep, totally.
 
Joseph Carson:
I've been pestering you for a long time now, so I'm so happy to get you on the show. So everyone, that was John Hammond, along with Chloé and Joe Carson on the 401 Access Denied podcast. Again, tune in every two weeks. Stay safe, enjoy, and see you soon. Thank you.