Joseph Carson:
Hi everyone, welcome to another episode of 401 Access Denied. I'm your cohost today, Joe Carson, joining you from Tallinn, Estonia, where it's pretty dark and cold. I'm the Chief Security Scientist at Thycotic, and chief advisory CISO for several companies out there.
Joseph Carson:
And I'm joined with some awesome guests here. Really excited about today's conversation, because I think it's really important. I think this is something in the industry we really have to bring forward. We all have to find a way to work together, and I think this is a discussion where that will actually come to light. I'm joined again with my awesome cohost, Mike Gruen. Mike, do you want to give us a little bit background on what we're planning to do?
Mike Gruen:
Yep. Mike Gruen. VP of Engineering and CISO at Cybrary. Yeah. Today, we're going to be talking to Katie and Casey about vulnerability disclosure programs and bug bounty programs and all sorts of good stuff, and why, if you're not doing it, you're doing it wrong. So, yeah. I'll let Katie introduce herself, and then Casey.
Katie Moussouris:
All right. Well, thanks for having me. I'm Katie Moussouris. I'm founder and CEO of Luta Security, and we provide governments and large, complex organizations a positive roadmap and assessment of how they can build sustainable vuln disclosure programs and bug bounty programs. I'm really happy to be here today, talking with you and my good old friend, Casey.
Casey Ellis:
Absolutely. So, yeah, my name is Casey Ellis. I'm the founder, chairman and CTO of Bugcrowd. We didn't invent vuln disclosure or bug bounty programs, they predated Bugcrowd's existence, but we pioneered the idea of taking the hard parts and actually delivering those over a platform, back in 2011, 2012.
Casey Ellis:
So, yeah, it's fantastic to be chatting. Obviously, good to be catching up with you again, Katie. It feels like a recurring conversation that you and I have had over the past eight years. And there's been progress, but there's still more work to do, so this will be a fun chat.
Joseph Carson:
Absolutely. I think 2020 has been a year that's been ... It's had goods and bads and lessons learned. I think we've seen a lot of different disclosures and different things happening. And, again, I think that 2021 is going to be pretty much a continuation of that, but I am hoping that we've learned a lot of lessons.
Joseph Carson:
So, Katie, you've been, let's say, one of the innovators and really at the forefront of vulnerability disclosure. Can you tell us a little bit of the background? Where do we start? And how did we get to where we are? You've been involved in it for a long time, so can you give us a little bit of a background on what's been happening up to today?
Katie Moussouris:
Yeah. I mean, I can definitely give a little bit of a quick history of vuln disclosure. So, what a lot of organizations fail to realize is that the concept of vulnerability disclosure originated with hackers. They think that it originated with companies who started vuln disclosure programs early because they were forced to; Microsoft is a great example.
Katie Moussouris:
But actually, the history is something different. Hackers basically wanted to alert the public that they were vulnerable to things. And as a courtesy, maybe give the vendor a few days to fix it. The original vuln disclosure policies were five day disclosure deadlines.
Katie Moussouris:
So, we've come a long way from the earliest vuln disclosure policies to where we are today, where we have a couple of ISO standards that govern not just the external components of a vuln disclosure program, but more importantly, the internal digestive system that has to process all potential vulnerabilities, whether you found them yourself or somebody outside your organization has found them.
Katie Moussouris:
As a hacker myself, it shocked me more than anyone when Microsoft, when I worked there, asked me to help with those ISO standards. And I got there saying, "Why would hackers ever want to be ISO compliant?" So, we managed to scope it down to what vendors should do when they're preparing to receive vulnerability reports and process them. And we got a couple of ISO standards out of it, which were the basis of some of the biggest programs that we've seen out there.
Katie Moussouris:
I launched Microsoft's first bug bounty program in 2013. That was really the first time a major vendor, besides Google, had offered a big bug bounty. And that was one of the big sea changes that happened; it made it possible for complex organizations to think this through.
Katie Moussouris:
Now, luckily, my friend, Casey, was around and had just started this company. And so, folks were catching on to this idea that hackers weren't necessarily your enemies. Hackers could be confrontational, but hackers are trying to do the right thing for the most part, because all humans are pretty much trying to do the right thing, for the most part.
Katie Moussouris:
Last time I checked, hackers are still human beings. So, kind of wrapping up the quick history lesson and fast forwarding to 2016, that's when I helped the Pentagon launch the very first DOD program embracing hackers. That was called Hack the Pentagon, the first bug bounty program of the US Department of Defense.
Katie Moussouris:
The momentum has been growing. It's been great to see. I've seen a few folks and organizations kind of getting a little ahead of themselves in terms of trying to roll out these programs before they've really set their digestive system up appropriately. You don't want to get bug indigestion by any means.
Katie Moussouris:
I think overall, the work of all of us in this community has been contributing to the acceptance of hackers, and the idea that we can be helpful. And especially if you pay attention to what we're telling you, we don't necessarily need to get paid every time, although that's good and appropriate. But really, it's about communication and getting those bugs fixed. And that seems to be the most important thing to all of us. So, there you go. Short history of vuln disclosure, and you...
Mike Gruen:
And I think just picking up on that a little bit, on the bug indigestion, I mean, I think that's a great segue into Casey and Bugcrowd. Cybrary is a customer of Bugcrowd, and they definitely help us so that we're not choking on all of the constant reports, repeat reports, and reports of things that aren't actually problems. So, yeah, Casey, do you want to maybe jump in and talk a little bit about that?
Casey Ellis:
Yeah, for sure. That was a fantastic synopsis of where we've come from and where we are today. Where we fit into that: I think there were really two things that triggered me to start Bugcrowd. One was this awareness of the fact that cyber security is a human problem. We're talking about humans being good and hackers being humans too. Humans also make mistakes.
Casey Ellis:
I think there's this idea that the internet, collectively, is becoming more aware of: the fact that vulnerabilities are going to happen, in spite of all of your best efforts. And those efforts vary across organizations, but it's just a part of human nature as well.
Casey Ellis:
So, all right, if you've got this helpful group of people at the table with the ability to identify risk as a byproduct of that, who are trying to give you information for you to use, that seems to sort out some of the issues that the cyber security industry has with the lack of talent, and the lack of ability to really get the right people to answer these questions. So, a big piece of what I wanted to solve with Bugcrowd was to be able to plug that latent potential together with the unmet demand, to try to make our security roll forward in a better way.
Casey Ellis:
The other part of it is just keeping my friends out of jail; that's the other part of the origin story. I think finding a vulnerability in someone's stuff and telling them about it is an inherently adversarial process, usually the first time around, especially if the recipient hasn't prepared for it. And the person who's doing the talking might be inexperienced and going in a little hot. That's a thing that happens a lot. So, being able to standardize and normalize the conversation a little bit, in order to make it smoother: that's really what we were trying to get sorted out with Bugcrowd.
Casey Ellis:
And when it comes to bug indigestion and all those sorts of things, triage is one part of it. I think it's the part of it that gets talked about the most in context of a public program, because you're trying to listen to the entire internet, and it's a noisy place. So, trying to get to the signal through the noise is very definitely a part of what needs to get done there.
Casey Ellis:
I think where I've had a lot of respect for the work that Katie has done was ISO 30111 and those different pieces, and what she does with Luta as well. Most organizations don't realize, for starters, how vulnerable they are, and second, how ill-equipped they are to actually deal with information that's coming in from the outside on a reactive basis and integrate that into their process of building their company.
Casey Ellis:
Because they didn't start the company to handle bug reports. They started it to do whatever they do. So, finding a way to fit that in and have it all roll forward is a very important aspect of it, which is something we help particularly smaller organizations with, as well as the larger ones that we've been working with for longer. But there's a lot of different moving pieces to the puzzle, I guess, is really the point here.
Joseph Carson:
Absolutely. For the audience, I just want to make sure we clarify something, because I'm always beating that drum: most hackers out there are misrepresented in the media. Most hackers are using their skills for good. They're there to help. We're good citizens out there trying to make things transparent, trying to get people to step up and be accountable.
Joseph Carson:
So, I always feel there's a misrepresentation in the media and the news that hackers are bad. And I want to make sure that ... I myself am an ethical hacker. I'm always looking to make sure that I do things in order to make the world a safer place. And you're using your skills to help organizations and help identify those issues.
Joseph Carson:
So, for the audience, when we talk about hackers, unless there's more context, we're talking about good citizens who are using their skills to help you improve and provide a better service to your customers. And I think what we're really talking about here ... Go ahead.
Casey Ellis:
Sorry, just to jump in, and I'm fully mindful of the fact that I'm sitting in front of the usual suspects. There is this element of offensive security that is ... One of the things that got me into security in the first place is I really enjoy thinking like a criminal. I've just got absolutely no desire to be one.
Casey Ellis:
I think that's been one of the triggers, or the sources, of some of the misunderstandings out there, but it's a really good point that you've raised. Hacker, as a phrase, became synonymous with the bad version. To me, hacking is actually amoral. It's a thing that you do. It's a mindset, it's a set of activities, a set of interests that people have. It doesn't actually have any inherent moral loading. You can use it for good or you can use it for bad. We do the same thing. We try to use the word hacker purely in the good context. If we're talking about the bad version, it's malicious attacker or cyber attacker or something like that.
Joseph Carson:
We always have to make sure we put it in the right context: it's criminal, it's malicious.
Casey Ellis:
Words are hard.
Mike Gruen:
Just criminal is fine. Criminal.
Joseph Carson:
We've had discussions. I work in a lot of incident response, I do a lot of penetration testing, and I will always make sure that when we're talking about ... I've been working on a ransomware case for a number of weeks. The digital thieves are criminals, and we have to make sure that we use the right context, because otherwise, what we end up doing is putting them on a pedestal. We put the criminals up there as elite, as sophisticated, and they like that, they embrace it.
Joseph Carson:
So, what we're doing is encouraging them to do more, but we want to make sure we actually call it what it is. It's criminal activity. It's crime, it's digital crime, and we have to make sure that we get the media to pick that up and actually use it as the headline. It doesn't make it cool. It doesn't make a headline. But we have to get to reality and call it what it really is.
Joseph Carson:
I have a question for you, Casey. One thing is that I'm definitely a very big promoter of security researchers and finding vulnerabilities. What's the standpoint when they weaponize it? There's a point of actually finding a bug, but then weaponizing it and making that available. Where are the ethics? Where are the boundaries for staying within the legal side of things? What's your advice when you do find something, but then you make an exploit that actually will take advantage of it?
Casey Ellis:
Yeah. It's a challenging question to give a single answer to, because half the time, weaponization in a good faith hacking context is really about explaining the nature of the problem to the recipient. Engineers don't necessarily see an alert(1) popping on a website as a POC and automatically understand the importance of that. So, sometimes you've got to do a bit of extra work.
Casey Ellis:
I think when it comes to drawing the line around ethics as a finder, and as a submitter, it really does come down to, firstly, understanding what the expectations are. This whole idea of standardizing vulnerability disclosure brief language and all of those different things is about making as much of that as possible an expectation-setting exercise for both sides.
Casey Ellis:
As a finder, if I submit to this program, this is what they expect me to do. I'm not forced into that, because I'm on the Internet and I can do whatever I want, but if I'm engaging, it's going to be most productive if I engage in these sorts of ways. And probably more importantly, this is what I can expect in response from the recipient.
Casey Ellis:
So, if I'm going and making sure that I've weaponized my exploit, to use your parlance there, in order to explain it, I know that the recipient is not going to misinterpret that or take it the wrong way. It's all those sorts of things. I mean, we're talking about unintended consequences as a service here. So, it's very difficult to give a one size fits all answer to that. That's as close as I could get. I'd be interested to see if Katie has thoughts on that too, because she's obviously seen a lot of this.
Katie Moussouris:
Well, yeah. I mean, definitely, creating a proof of concept that demonstrates the severity is one way to interpret your question, in terms of proving it to folks. Sometimes, even with a very, very strong proof of concept exploit that you've developed, they still misunderstand the root cause, and they only understand that one vector that you showed them. So, they'll fix that one vector.
Katie Moussouris:
I mean, I think it's an important piece. And in fact, the Microsoft bug bounties, which I created initially, still have this criterion: in order to qualify for the highest bounty amount, you have to produce working, reliable proof of concept code.
Katie Moussouris:
So, the point of doing that is actually showing, because Microsoft's defenses have evolved over time and they're quite sophisticated in the latest operating systems, that you could actually leverage that vulnerability to do harm. And that's part of what the reward is paying for: they want you to do that extra validation step for them, so that it's really fast for them to say, "Yep, that's definitely an issue," and get to work on fixing it.
Katie Moussouris:
As opposed to offloading that work to the receiving team, to be able to understand it in the first place and get to the root cause, and also to be able to address it comprehensively. So, I think it's an important piece.
Joseph Carson:
Yep. Oh, I completely agree. For me, I remember, it was quite a few years ago, probably about six years ago, I was doing a bit of research. And I remember one of the things I was doing was basically taking data from some previous breaches and correlating it together.
Joseph Carson:
In the EU, well, that is actually considered creating a new data breach, because of GDPR and data protection. So, from a legal perspective, it meant getting a slap on the wrist from Europol. At least they just informed me not to be doing it.
Joseph Carson:
But when you get into those things as well, especially when you're working across borders and different countries, what do you recommend for security researchers? Because when I go from country to country, I have to choose which laptop to take, or which hard disk or SSD to pop out of my laptop, so we're not breaking laws.
Joseph Carson:
How do we deal with this when it comes to crossing borders, especially for companies that have office locations across the world? How does that factor into this challenge?
Katie Moussouris:
So, I can take the first pass at this one, because I helped renegotiate some of the export controls around intrusion software and intrusion software technology as part of the Wassenaar Arrangement. The Wassenaar Arrangement, for those people who don't know, and thank goodness you don't have to know, is basically an export control agreement among 42 countries. It was originally 41, but they added India in the last couple of years. So, it's a total of 42 countries.
Katie Moussouris:
The issue is that at the Wassenaar level, all of these countries decided that they would have people fill out export control forms in order to bring their tools across borders, et cetera. In some cases, depending on the country, even moving tools from one corporate office to another office of the same company in a different country meant dealing with export controls.
Katie Moussouris:
So, one of the most important things for me to get accomplished as part of the official delegation to renegotiate that was to make sure that incident responders and people trying to do vuln disclosure would not have to bother with export control forms and waiting and delays and whatnot. Now, that being said, that doesn't mean that every single country that's part of Wassenaar hasn't also implemented its own more restrictive controls.
Katie Moussouris:
Let's just talk about France and Germany for a second here. They have some of the most restrictive controls, especially around tools. There was a very famous moment when our good friend in the hacker community, Halvar Flake, also known by his real name, Thomas Dullien, was trying to come and bring training to Black Hat and was basically denied entry to the country because of what was on his laptop, and so he had to miss the training.
Katie Moussouris:
Now, fun fact, everybody and their grandpa and grandma was trying to impersonate Halvar to get into the Microsoft party in Vegas that year. Just as an aside. As if we had never seen him before.
Casey Ellis:
That totally checks out.
Katie Moussouris:
Yeah. Exactly. So, these damn export controls almost got a bunch of gatecrashers into the Microsoft party in Vegas. But really, I mean, on a very serious level, I was working abroad last year, back when travel was a reality, and I can't even get my head around it now.
Joseph Carson:
The old normal?
Katie Moussouris:
Right.
Casey Ellis:
Yeah, the old normal. Yeah.
Katie Moussouris:
But last year, yeah, we were doing an exploitation contest similar to Pwn2Own, and we were doing it in the United Arab Emirates. And so, that was a whole thing where we had to make sure that essentially the exploits themselves were contained to just the researchers and the receiving party. We had to have no devices in the room. We had to basically do an impromptu SCIF to be able to try and contain these exploits, because I knew that we weren't protected by the export control exemptions in the place where we happened to be.
Katie Moussouris:
The exemptions only work if you are the reporter, if you are the receiving vendor, or if you're the coordinator that is going to kind of work that arrangement. Now, we were none of those things. We were judges who had set up a contest. So, it was really, really tricky, and I'm just glad we all ...
Katie Moussouris:
To Casey's point, we all stayed out of jail. Nobody got fined. Nobody got arrested when they landed back in their home countries. And actually, the State Department reached out to me, because I did a little talk about it at Summercon this year about how crazy it was. They reached out and said, "We saw your talk," and they didn't say anything beyond that. So, I guess they were watching, and I guess it was fine.
Katie Moussouris:
But yeah, this is super tricky. The last thing I'll say about it is that we're never going to get to a true safe harbor for researchers until we get normalization across the globe on hacking laws, anti-cybercrime laws, and export controls. We're going to be very old people, Casey, by the time this is settled. I mean, we're going to be radically old.
Mike Gruen:
I think you're very optimistic.
Casey Ellis:
I don't know about you, but I've aged 10 years in the last 12 months. There is that. I completely agree with that. I don't know that we'll ever necessarily see this get solved short of changing the anti-hacking laws, and even things like the DMCA and anti-circumvention laws: all of the different things that get brought in either to legitimately prosecute or, more frequently, to chill security research.
Casey Ellis:
Until those get basically made an afterthought or an addendum to a more traditional crime, that to me seems to be like a rejigging of the legal construct that could work. But that's a ways off, because we're still in this place where things are very vulnerable. People don't necessarily ... There's not consensus or unified understanding of the role of good faith hacking as a part of the overall security of the internet.
Casey Ellis:
There's definitely a lot of good intentions, and I feel like there's been a lot of progress that's directionally correct. Particularly over the past 12 months, but really over the past 10 years. Yeah. I mean, to Katie's point, I think there's a long way to go with that.
Casey Ellis:
What I would add to it as well, just real quick: we've been talking about physical transit and mostly potential criminal or state-level legal risk. On the civil side, this is a part of what we've been really pushing forward on with the folks that are involved in the ... project. It's like, "How do we standardize, as best as possible, the kind of things that need to be written in a policy to clearly indicate to the finder that if they don't basically do criminal stuff, then they're going to be okay? They're not going to be pursued from a civil standpoint."
Casey Ellis:
And that's not perfect, but I think the more consensus there is, and the more adoption there is of those types of terms, the higher the tide rises on that. And I do think, as well, through convos with people like the EFF and ACLU and so on, that for organizations that go to the effort of actually enumerating the different things they're going to allow from an authorization or access or exception standpoint, the more of that they've done, the less likely they are to actually kind of prosecute by mistake, if that makes sense.
Casey Ellis:
So, the likelihood of civil cases being brought under 1030, for example, where there has been a decently written vulnerability disclosure brief, is very low, because that organization has actually put the time into figuring out what that means. So, I think there is an element of that as well: people actually just stopping for a second, working out what the implications of all this are, and moving forward like that, kind of from the bottom up.
Joseph Carson:
I totally agree. For me, it's all about intent. Whether your intention is for good or for bad, that's ultimately what the definition is. Sometimes we make mistakes.
Casey Ellis:
Ideally. I don't know that the law has quite figured out how to wrap its arms around that, and if you want evidence of that, just watch the Van Buren briefings at the Supreme Court.
Joseph Carson:
I remember those.
Casey Ellis:
A month ago.
Joseph Carson:
Yeah, the Starbucks double-spending one with the rewards card, going through that. And even the guy who did the PlayStation mod. All of those are kind of thinking outside the box about how you can do something. I think companies should look at that as feedback. For me, it's...
Casey Ellis:
Yes. Ideally.
Joseph Carson:
A form of whistleblowing, in my view. And there should be some type of protection, like whistleblower protection.
Casey Ellis:
If I can touch on that for a second, because I agree with the dynamic, but there's still this history of it being a very adversarial relationship. How I like to explain vulnerability disclosure in particular, and I've heard Katie use this as well, as well as others, is more like the idea of neighborhood watch.
Casey Ellis:
You've got stuff out on the internet. The internet is this gigantic neighborhood. There's people that can identify risk, and potentially want to tell you where you might have an issue. Really, what you're doing is saying, "I'm open to that feedback, and I will interpret that feedback as positive."
Casey Ellis:
Yeah, you can sort of see the difference, because neighborhood watch is a form of whistleblowing too, but it's couched in friendlier terms. Really, what that does is actually establish the true nature of what's going on, in a way that helps ameliorate some of the sins of the past.
Joseph Carson:
Yeah. It's a metaphor. Again, it is simply your neighbor coming up to your door and saying you left a window open. That's the form of what it is. And the problem is, ultimately, if the response is, "I'm going to sue you, because you shouldn't have come to my door."
Mike Gruen:
As opposed to the neighbor who comes in through the window to say you left your window open, right?
Joseph Carson:
Exactly.
Katie Moussouris:
Yeah. You know what, that's where we get into some ambiguity. A lot of times, the way the scoping of some vuln disclosure programs and bug bounty programs is written confuses the hackers, and I don't blame them. So, it will say things like, "Do your best to show the impact of this vulnerability." And they're like, "So, I've got some credentials. What can I do with them?"
Katie Moussouris:
And then they start pivoting all through your network, and that's not what you meant, right? So, that would be the equivalent of them thinking they're doing neighborhood watch stuff and it's all authorized, but actually, you've just found them rooting around your underwear drawer, and they're saying, "What? You told me to tell you what I could possibly do with this access." And I'm like, "Get out of my house." You know what I mean?
Katie Moussouris:
I think a lot of organizations are feeling invaded when they haven't really thought this thing through. I mean, honestly, that's a lot of the work that my company does with organizations: they understand conceptually what they want in terms of scope, but they have never thought through some of the scenarios that are fully reasonable and fairly common, where a researcher will accidentally or deliberately go out of scope, but still not with bad intent.
Katie Moussouris:
And they have to kind of figure out how they're going to make those decisions, and how they're going to behave, essentially, when something like that happens. I mean, a great example is the DOJ put out guidelines for how you're supposed to think these things through, and part of that was because of the work we had done with Hack the Pentagon, and helping to create the ongoing vuln disclosure program that sat outside of any particular time-limited bug bounty challenge.
Katie Moussouris:
If you see something, say something. You're supposed to come forward to the DOD. But we did run into people who were definitely out of scope and had to coach the DOD and DOJ in like, "Nope, this is not it. You're not going after these folks right now."
Katie Moussouris:
But that's why DOJ put out those guidelines to help organizations think that through. And a great example of where they always fall down on the first pass is they say, "Oh, well, we want no data exfiltration whatsoever." And I'm like, "Yep, but hackcidents happen. So, what are you going to do when somebody actually tells you, 'I didn't mean to, but I did see data I wasn't supposed to'?" So, that's just a classic example.
Casey Ellis:
No data exfiltration whatsoever, please provide a clear POC.
Katie Moussouris:
Right.
Casey Ellis:
Where's the line between those two? Because dumping all of the records versus capturing just enough information to demonstrate that you've actually found something, there's no real established standard on where to draw that line, which is part of why it becomes a case-by-case thing. And I think on the flip side of that, a common clause you see is, "Please don't use automated tools." It's like, "What? Like a computer?" That's kind of the whole point.
Casey Ellis:
So, what exactly are we talking about here? What that clause is actually saying is, don't do aggressive scanning where you've not inserted your own creativity. A, we've already done that, and B, we don't want the traffic, right?
Mike Gruen:
Right. And we don't want that traffic multiplied by all of you.
Casey Ellis:
Exactly. It's such an ambiguously phrased term that you end up in a position where you've got folks that are ESL, you've got folks that definitely don't necessarily have a legal background, trying to read through and interpret all this stuff to work out what's okay and what's not.
Casey Ellis:
I like that there's a lot of work going into standardizing this stuff, like better preparing the companies on Katie's side with Luta, or all of that. On our side, sitting in the middle to be a translation layer and effectively a broker between these two groups of people that really need to talk but don't really have a great history of being able to do that. So, it's all heading in the right direction, but there's still a long way to go.
Joseph Carson:
Scoping is the most critical thing. We're probably talking about an entire process of making sure people understand what is okay. When I'm doing scoping, the top of my scope for the whole process is: do no harm, cause no harm. Everything else can be accidental. Don't cause the business to lose money as a result of my activity. If I see data, it's one thing, but if I leak data, it's another thing. My goal is to not cause any harm to the business. Everything else is kind of a byproduct of what we're doing.
Joseph Carson:
One of the things also going on in this topic, when you're talking about automated tools and the ethics side of things, is that there's a lot of cross-border work; you get a lot of people in India doing this as well. And I loved, last year, listening to the talk, when we're talking about scoping, about the guys who did the courthouse test and ended up getting arrested and so forth.
Katie Moussouris:
Although that was in scope, though. That was totally in scope.
Casey Ellis:
As an example of it being in scope, but all of that still happening, that was a really good one, I think.
Katie Moussouris:
I mean, honestly, that's a good example of why I don't think scoping is the most important thing in this process at all, because it's having the organizational maturity to, one, get behind doing something good with vulnerability reports.
Katie Moussouris:
Two, in that particular case, that was just political infighting happening, where one branch said they had legal authority, and another branch of law enforcement said, "No, you don't." And it was their infighting that caused those arrests to happen and everything, so for me ...
Katie Moussouris:
Yeah. We do a vulnerability maturity assessment of organizations. And it's five different capability areas, and only one of them is engineering, because it's this whole picture internally.
Katie Moussouris:
I've seen organizations trying to skip to the scoping step. They're like, "Okay, first thing we need to do." And this is my big problem that I have with the existing guidance that came out of DHS CISA for the binding operational directive.
Katie Moussouris:
I love that they have this as a major initiative for the US government saying, "Thou shalt have some way to get in touch with you to report a vulnerability." I love that. What I hated was they said, "Step one, decide what's in scope." And I'm like, "No, step one is decide how are you fixing vulnerabilities that you already know about. What is your capacity?"
Mike Gruen:
Oh, wait. Can I jump in there, because ...
Katie Moussouris:
We need to add capacity.
Mike Gruen:
I think the number one thing is to figure out how you want to actually receive that information. To me, that was the thing. Security researchers sometimes come in through the window or the other way because they have no other way to get your attention. They don't know how to actually approach you and how to get you that information. So to me, the most important thing is, "How do I, as a security researcher, provide this information to you so that you can then act on it?" I'm happy to be wrong.
Katie Moussouris:
As a security researcher, sure, that would be the most important thing. But in terms of what security researchers actually want, which is responsiveness, adequate responsiveness, competence, understanding of what they were trying to do, and action, right? So, all that stuff that a researcher actually wants after they find the point of contact, that is the digestive system.
Katie Moussouris:
So, here's the thing. Nobody actually gets from "I don't receive any vuln reports from the outside" to a functional vuln disclosure program without doing some of those steps. If they do, it is a trial by fire, a completely unnecessary, painful moment where they have to basically struggle. They've basically put out a menu for a restaurant that they've got no kitchen staff for. You know what I mean?
Katie Moussouris:
It's like they can do it, they can pull off the shifts, they could maybe feed some people. They can maybe get this done, but it's excessively painful. And there's no reason for it, because we literally have two ISO standards. Two ISO standards that have been out since 2014, and numerous other examples and ways to get ready.
Katie Moussouris:
And one thing I love about Casey's approach with his company is that I don't get a lot of, let's just say, bug bounty refugees from Bugcrowd, if that makes sense. Folks who just got in over their heads and were hurting and everything. So, I feel like Casey's team does a good job of making sure that there's no bug bounty Botox going on over there. And that's something that's really important. You can put up a front door, but if nobody is listening, I mean, it's a Hollywood ghost town set.
Casey Ellis:
I do appreciate that, Katie.
Katie Moussouris:
No liquor in the saloon.
Casey Ellis:
It has been a focus for us. The way that I characterized why I started Bugcrowd in the first place is that it looks more like a crowdsourcing problem than a disclosure problem. And I think part of the challenge around where we're up to now is that a lot of those things are conflated together. It's all a bug bounty, or it's all a VDP. Or a VDP is all about engaging people to do work. It's not, because you're not paying them, so by definition, it's not actually work you're commissioning. You're actually putting out basically a policy and a way to listen to the internet when it's got something to tell you. That's what a VDP is.
Casey Ellis:
But there's all this kind of term confusion around the different ways of, coming back to the founding of Bugcrowd, getting access to this pool of people that want to help out in a variety of different ways. What we focused on, because we observed this very quickly, is, "Okay, people that build software, that deploy enterprise networks, are generally way more vulnerable than they think they are." And when you get the people with the right kind of talent into the mix and activate them, you tend to figure that out.
Casey Ellis:
So, okay, if too much of that happens too quickly for the organization, they're going to become overwhelmed. They're not going to be in a position to step back and think about like, "What kind of frameworks do we need to change? Do we need to implement a proper SDLC? What's our risk management, our approach to vulnerability risk management within the organization, or risk based vulnerability management, for example?"
Casey Ellis:
If they haven't had the opportunity to do that, because they're so busy swatting bugs, then we're not really ... We've given them information that's valuable, but it could be way more helpful than that. So, that's been a big part of why we've always tried to take that crawl, walk, run approach.
Casey Ellis:
The other is to make sure the researcher gets looked after. Because if a bunch of people get a commitment made to them, and all of a sudden the essence of what's happening on the other side changes because the organization has gotten overwhelmed, that's a bum steer for the research community. It becomes huge overhead for us to be able to keep everyone's expectations in line, and it's just a bad time.
Casey Ellis:
That's the context behind that. I will say with VDP as well, this whole distinction between scope versus not scope, I do put the root cause of that down to term confusion around what these things are. A VDP is different from a crowdsourced security assessment, or what we call next-gen pen test, crowdsourced pen testing, those sorts of things. The latter, the crowdsourcing, is against targets. So, it's effectively a different way of engaging or encouraging information, and you scope that.
Casey Ellis:
For a VDP, it's against you as an entity. So, you're basically saying, "I want to know about all of it. This is me wanting to hear, reactively, what the Internet is discovering about my risk posture, so that I can do something about it." And that kind of better reflects, I think, the fact that attackers don't read scope in the first place. This whole idea of like ...
Mike Gruen:
Yeah.
Casey Ellis:
Yeah.
Joseph Carson:
I've got one thing for you, Casey. I don't see it as crowdsourcing, I see it as skill sourcing, because I think we're applying the skills. It's not about the volume of people, I think it's about getting the right people.
Casey Ellis:
That's difficult too. Crowdsourcing triggers particular things in people's minds. So, yeah.
Joseph Carson:
For me, that's skill sourcing, because we can't be the experts in everything. A developer who's writing code is not the expert in security, they're the expert in being able to build a module that will actually serve the purpose of the business, and going and getting the right skills.
Joseph Carson:
Kind of moving on, I've got two questions. One is, the world is very much cloud. And for a lot of organizations, it's shared services, it's shared resources, it's infrastructure services. I remember working with a lot of companies where I did penetration testing in maritime. And one of the things I did was going onto a ship and into a power station. The problem I had was that the company owns the engine, but they don't own the data. You have a car today, you're buying a car contract, you own the freaking ...
Casey Ellis:
Cars are where you see this play out. Yeah.
Joseph Carson:
Yeah. You own the car, but the data in the car is not owned by you. You're actually there to provide that to the manufacturer. Same as TVs. Everything we're moving to this multiple contracts type of scenario, and cloud is definitely one of those areas.
Joseph Carson:
I remember, the company might be hosting it with whatever hosting provider, cloud, whatever it might be. So, how do you do proper testing in that situation, especially with shared resources? And the second part of it as well ...
Casey Ellis:
I'm sorry. Go ahead.
Joseph Carson:
I have a two part question. I'll let you answer the first one, and then I'll move to the second one after.
Casey Ellis:
Okay. So, I think, honestly, cloud is almost easier. This is one of the things that's a big feature in the ISO standards Katie mentioned before, and it's been this kind of tough nut to crack behind the scenes with a lot of what the platforms have done since we came on the scene: this idea of multi-vendor coordination, or multi-party coordination of response.
Casey Ellis:
A car is, effectively, most of the time, a collection of OEM components. It's been assembled into a unit and then sold to a customer, and then you've got the data, and then you've got all the other things that you were just talking about there. The average piece of home networking kit often has a lot of aspects like that: supply chains that make up the physical things we interact with that have a cyber component to them. They're there the whole time.
Casey Ellis:
So, trying to figure out how to coordinate vulnerability disclosure down that supply chain. Obviously, we're recording this at a point in time in 2020 where supply chain is now very much top of mind. I swear, I'm not doing the buzzword thing right now, but it's true. That's just sort of how it works.
Casey Ellis:
Like with cloud, it's a little easier, because it tends to be a dynamic target that you're assessing. And it's in one place, and it's either there or it's not. So, who's hosting it and what else is behind it? That's not to say that doesn't factor in; it still factors in heavily. But I do think it tends to be more obvious in terms of who owns what.
Casey Ellis:
And honestly, going back to the convo around VDP and scope and all that stuff before, the sale of Expanse this year and the rise of attack surface management as a category are evidence of something that we've known in security for a really long time: people don't know where this stuff is.
Casey Ellis:
And ultimately, that's really what it comes down to. This is not a surprise for anyone who's been in the space, because it's always been a difficult thing to solve. But then cloud has gone and made that happen at the speed of caffeine and internet. You throw Docker and whatever else on top of that, and you've now suddenly got everything everywhere.
Casey Ellis:
That potentially comes back to me as an entity, because I am the owner of it, but I don't really know where it is. So, when it comes to defining scope and who's responsible for what, that's actually usually a fair bit of work.
Joseph Carson:
You've got me laughing here. As part of the disclosure program, they should at least tell me where my data really is, because I don't know where it is.
Casey Ellis:
That happens. S3: a lot of these breaches you hear about are databases being left in cloud provider storage buckets. That's essentially that. It's like, "Where is my data? I forgot." Someone put it there, probably well-intentioned, as part of a thing they were being asked to do for work, but they didn't consider the security implications of it. Nor did they integrate that particular thing that they bought on a credit card into the ISMS of the organization, such that it could be managed going forward. I can see that problem actually getting more difficult over time, because 2020 is like the great Zero Trust experiment, and we're all cloud native now.
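For illustration, here is a minimal sketch of the kind of check that surfaces those forgotten buckets, assuming the AWS boto3 SDK and credentials for the account in question; the helper name is hypothetical, and a flagged bucket is only a candidate for review and inclusion in the ISMS Casey mentions, not a proven exposure.

```python
import boto3
from botocore.exceptions import ClientError

def find_unblocked_buckets():
    """Hypothetical helper: flag S3 buckets lacking a complete public-access block."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            conf = s3.get_public_access_block(Bucket=name)
            settings = conf["PublicAccessBlockConfiguration"]
            # Any of the four settings left off means the block is incomplete.
            if not all(settings.values()):
                flagged.append(name)
        except ClientError as err:
            # No configuration at all is treated like an incomplete one.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in find_unblocked_buckets():
        print(f"Review public-access settings for bucket: {name}")
```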
Joseph Carson:
I'm not a big fan of Zero Trust. I think it's the balance. I'm for building trust, yes. Zero Trust is not effective for the business. It's always finding that balance. So, Katie, I have a question for you.
Katie Moussouris:
Yeah, but I was going to jump in on the supply chain vulnerability coordination stuff. So, I started Microsoft Vulnerability Research in 2008. And part of that was to do research on third-party software that affected Microsoft customers. And a major driver of that was doing multi-party and supply chain vulnerability coordination, because it's essentially a vital cousin of the first-party vulnerability coordination, and it needed dedicated attention.
Katie Moussouris:
This is at Microsoft, obviously a major operating system company, the biggest software company in the world. But the concept of organizations having that capability in-house is literally a dozen years old. So, I want people to gain some perspective when they're thinking, "Oh, we have to be able to handle all this stuff." We're still dealing with most of the internet not adequately handling first-party vulnerability disclosure: one bug, one vendor, and settling on that.
Katie Moussouris:
So, I don't want to underestimate the importance of thinking those things through, and that is literally what we help companies do: help them understand not just the tactical questions of where are my assets, where are my bugs, but how do we even deal with a situation where we've got a complex supply chain and we're somewhere in the middle of it. Because usually, organizations are. They have dependencies upstream and downstream in the supply chain.
Mike Gruen:
Just to jump in there. The first real disclosure that we got through Bugcrowd was actually not against a first-party thing. It was against a SaaS that we use for authentication. And I was like, "Oh, great. Now, how do I communicate this to them?" And we lucked out, because they also happen to be a Bugcrowd customer.
Mike Gruen:
So, that made it super easy to get the security researcher connected to the right people over at the other company, and it went fairly well. But I do think that's going to be the big challenge moving forward. As I get more of those, how do I make sure that I can effectively get the right people communicating with the right people without being an inefficient middleman?
Katie Moussouris:
That's exciting that you were able to do that with Casey's company's help, with Bugcrowd's help. I have definitely seen others, not Bugcrowd, failing miserably at that, to the point where, I'm not kidding you, I have been embedded at a shared customer with a different bug bounty platform provider, and two groups at the same company couldn't coordinate bugs, and they were both using those other folks.
Katie Moussouris:
And so, that just kind of goes to show that I think this skill set in general, and even the concept of what's in scope for them to help you with, is still developing. And I like that we've got folks who have lived the consultant's and pen tester's life before, and Casey, a real hacker, who started that company, Bugcrowd. I just feel like there's a lot of your-mileage-may-vary in terms of getting the right help for this type of thing.
Casey Ellis:
It's very kind of you to say that, Katie. And I completely agree. Your mileage may vary in terms of one size fits all, like the different approaches that third-parties have taken. But also that every company is a snowflake. Every system is a snowflake. Every vulnerability is a snowflake. So, you've got so many snowflakes at this point that it really becomes more around the underlying approach the organization has to this, as opposed to what is the security thing that we're going to bolt on top of our existing security program.
Casey Ellis:
To me, that's the true value of this. It's not a better pen test or a vuln scan or a particular thing, it's really this idea of integrating builder-and-breaker feedback loops into your organization in a way that becomes a part of design, becomes a part of how you structure your organization itself. How do you negotiate a third-party supply chain contract to accommodate the fact that that upstream provider is a part of your problem when it comes to risk as it relates to your customer?
Casey Ellis:
If you haven't seen this stuff as an organization, you're less inclined to even have that thought, in the first place. So, I think that's where the true value of this starts to come out. This idea of builders and breakers not thinking the same, but if you can get them talking to each other, and have them basically exchanging, Vulcan mind melding as best they can, then good things come out of that.
Joseph Carson:
Absolutely. So, this leads me into my next question. When are we going to see Fixcrowd?
Katie Moussouris:
Oh, yeah. Well, my company does a lot of that stuff, where we help with internal staff augmentation, because, yeah, the growing volume ... It's not just the volume of bugs, but it's the skill sets required to understand them, to guide the existing developers into a better understanding, to point them towards secure development lifecycle practices that will reduce the overall number and severity of vulnerabilities over time and, ideally, increase the complexity. You want the low-hanging fruit eliminated by you. That's what you want. You don't want it hanging out there for the bug bounty Botox cowboys out there.
Katie Moussouris:
Yeah, exactly. So, yeah, it's making sure that there's that kind of efficiency. We've been doing that kind of work. We did it with Zoom, and we flattened their bug curve, because they got a big spike of bugs when they got really popular.
Katie Moussouris:
And we helped flatten their curve by 37% in 10 weeks, which was, trust me, if you knew the raw numbers, that was significant. Yeah, it is really about making sure that the organization inside, if they don't have the right people, the right tools, the right skills, that we've got to prop them up. So, I wouldn't say fix bounty per se, because that makes it too transactional.
Katie Moussouris:
Internally, you have to have organizational memory that carries why you made a certain security decision, why you punted something, and that's knowledge of your product lifecycle, your support lifecycle, how long are you going to even keep that product under support. You may make a different prioritization decision. So, it's about getting embedded and understanding all of these internal company needs and our company helps a lot with that. So, yeah. Thanks for asking.
Joseph Carson:
That's awesome.
Katie Moussouris:
We didn't even talk about this. That's great.
Casey Ellis:
What Katie is talking about there, as well as other things: if you're working on those, you're addressing the problem closer to the root cause at that point. And you're in a position where the cost of having to catch things later on down the timeline is lower, because you're getting it done sooner. So, I completely agree with that.
Casey Ellis:
In terms of what we've done and what we've seen, there are the transactional fixes that are possible for mitigating specific risks, and there are areas of that which Bugcrowd has already applied and that we work on with customers.
Casey Ellis:
I think in general, in the middle, there's almost a bridge between the part that Katie is talking about, in terms of the institutional knowledge, and the part that crowdsourcing and the crowd addresses, which is more where I play: where can trends be identified? Are there particular frameworks or particular types of software that, as an organization, you're more systemically having issues with, that indicate an underlying pattern that might be addressed by developer training?
Katie Moussouris:
Or eliminating ... in your life.
Casey Ellis:
Or a shift in framework. Or getting rid of PHP, yeeting it into the sun. There's all these sorts of things where ... And it's almost like the way that we've just kind of framed that. We didn't talk about this ahead of time, but there's everything from the discrete issue that gets found by an individual all the way down to the root cause, like the organizational, cultural decision that probably happened 10 years ago that led to it.
Casey Ellis:
And we're really talking about addressing different parts of that chain, because it is ultimately a spectrum. Those things are all tied together. If you just work on one piece, then you will not be able to work on the other. And that's where I think being able to approach that mindfully and not just do one part of it and say job done is really important. Those are some of the ways I see those things tying together.
Joseph Carson:
Yeah. I really think it's important that we reward the right process. I remember a long time ago doing security awareness training, and we ended up having this incident response plan where we rewarded employees with money when they reported incidents. The reward and the motivation was to get money, so they reported everything. We realized then that that wasn't the right motivation. It wasn't the right kind of thing. That's not what we were trying to achieve.
Katie Moussouris:
Right. That's the classic Dilbert cartoon from 1995, where they're saying ...
Casey Ellis:
Cobra farming.
Katie Moussouris:
Yeah. Exactly. Cobra farming. So, there's something important, though, about reversing the polarity of what you're paying for. Of paying for fixes transactionally and stuff. So, there's a good story here.
Katie Moussouris:
The European Commission authorized a bug bounty program for all open source software that's commonly used in the European government. This was a few years ago. There's this idea that if you just find the bugs and throw them over the fence, that that's just unequivocally a good thing.
Casey Ellis:
Job done.
Katie Moussouris:
Open source is different. Open source is different. There are sole maintainers, or people who are part-time on an open source project that might be quite popular. We saw this with OpenSSL right before it got the resuscitation investments from the Linux Foundation.
Katie Moussouris:
So, here's the thing. I asked the Apache server core developers, "Hey, if we were to structure this bounty in such a way that would be most helpful to you, would it be helpful to ask for a solution, since you're open source?" And they said, "Absolutely not. Please don't offer money for that."
Katie Moussouris:
I was very confused. And I was like, "Wouldn't this be helpful?" And they said, "Yep, most of our work right now, in terms of accepting or not accepting security fixes, is arguing with people about breaking changes. And we're the core maintainers for a reason."
Katie Moussouris:
So, basically, they saw this as a danger, not only of overwhelming them with more people wanting to get their code committed, which would increase their workload in a bad way without increasing security. The other thing was that open source relies on volunteer maintainers. And if suddenly the people who are fixing issues for free are moved into this transactional model, it leaves less room for them to really identify the folks that would become the next generation of maintainers.
Katie Moussouris:
And to give you one scary fact: for OpenBSD, the average core maintainer age is 55 years old. So, just let that sink in for a minute. That's the average age. So, we need new blood doing the core maintenance of some of these packages. And these sorts of transactional incentives, it turns out, are not going to work in terms of getting them better fixes and a better pipeline for who's going to take over these projects. So, I just wanted to put that out there: it's not like a one-to-one, pay for the bugs, pay for the fixes, that's going to solve our problems here.
Mike Gruen:
Right. Actually, it's very reminiscent of ... I remember my first software engineering job, where the idea was, "Oh, we're going to give bonuses to the QA team for finding bugs." And it's like, "But I'm the engineer. Okay, cool. So, I'll just collude with them. I'll put some bugs in, and we'll split the bonuses 50-50." It's unintended consequences. I understand what you're trying to do. But by offering money for this stuff, you're going to create some perverse incentives and potentially disrupt things that were working fine.
Katie Moussouris:
Yeah.
Casey Ellis:
Unintentional economic forces that you introduce to things. And it comes back to the whole crawl, walk, run piece we were talking about before. Starting this off in a smaller context, and actually getting that feedback. But then, as an organization, making sure that you're actually taking that on board beyond just fixing the bug, so to speak. And thinking through, "Okay, if we were to scale this out, how would we do ingestion of offered code changes at scale?"
Casey Ellis:
We had people offering us web application firewall rules for us to integrate in front of a dynamic platform, for example. How do we validate those? You can't just go and take someone else's code and slap it on top of your organization. That's unwise. All these different things, and there's a million of those, and they're different for every single organization.
Casey Ellis:
So, this whole idea of saying, "Okay, how much of that are you going to do based on your baseline? And then how do you think about this as it scales out? What are the next steps that you need to take to make sure that all of those kind of foundational elements have been put in place?"
Casey Ellis:
And then that ultimately ends up ... the closer it can be to a balance of forces from an economic and an incentive standpoint. Ultimately, when you see bug bounty programs that are actually kind of awesome, they're at the end of a lot of that.
Casey Ellis:
It's this balancing of forces and setting these different things up in the organization that's progressed and scaled up over time to the point where it's at a maturity level where you can just plug it into the Internet and it works. That doesn't happen by mistake. So, I think that's kind of the key thing. Yeah.
Katie Moussouris:
Oh, by the way, that's Scott Adams' Dilbert with the perverse incentives. That got anonymously plastered on my office door at Microsoft right after I announced those bounty programs, so just so you know.
Casey Ellis:
Code me a minivan.
Katie Moussouris:
Yeah, "I'm going to code me a minivan" was right on the door, and I left it there.
Casey Ellis:
I had it as a Twitter header for a while. Yeah, it's good.
Katie Moussouris:
Yeah, I know. I just left it there. I was like, "Yeah, okay. We'll see how this goes." And it turns out, it went great, because last year, I think they spent $14 million on bug bounties. Microsoft did. So, from 2013 to now. Yeah.
Casey Ellis:
On that piece, because this is a question I get asked, and you probably see it all the time as well. It's like, "How do you avoid that? How do you avoid cobra farming?" That's the economic reference for it, going back to some stuff that people can look up if they're interested.
Casey Ellis:
How do I avoid that? The reality of it is, yeah, there is the risk of perverse incentives and things kind of looping back around like that. Anecdotally, we've not seen any of that so far. And I think that's largely going back to some of the stuff we talked about earlier around people being good.
Katie Moussouris:
Oh, I have though. Absolutely.
Casey Ellis:
Okay. I'm sure it is a thing that has happened. I'm not saying that it's impossible and it's nonexistent. But the reality is that getting caught doing that is actually pretty easy too. So, as someone who sits within an organization who potentially wants to code themselves a minivan, you're also aware of git blame, and all of the different ways that you could end up going to jail for doing that, which becomes...
Katie Moussouris:
Yeah. And now, it's not even that that sort of insider collusion is the only potential collusion, or cheating of the system, the intent of the system really. We've seen triage people who are under contract steal vuln reports from the bug bounty program or vuln disclosure program they're triaging, and then copy and paste that exact same bug report to another vendor who's vulnerable to the same thing and collect the reward. And that's essentially them abusing their triage visibility into incoming bugs.
Katie Moussouris:
And this, I think, is a problem. This is a huge problem, and this is literally just in the past few years where there have been bug bounty platforms, where you guys have the same hiring constraints that everybody else does while trying to scale appropriately. So, you're going to bring on contractors for a period before they're full-timers and whatnot. So, we've actually seen this manifesting as another threat in this ecosystem.
Casey Ellis:
Yeah. No, definitely. Actually, the risk of that, or the potential for that, is another one of those things that we kind of saw coming over the hill. So, in terms of how Bugcrowd resources the triage team, it's primarily in-house. If they're contractors, it's usually because they're full-time contractors in a part of the world where we're not necessarily headquartered yet.
Casey Ellis:
Really firm and very strict rules on what's okay and what's not okay, to maintain that Chinese wall, in effect, within the organization. It's like, "All right, if you're able to see these sorts of things, here are all the other things that you're not allowed to do, on condition of you basically retaining your employment."
Casey Ellis:
And it's one of those things. It feels draconian, because it's a lot of like, "Oh, we're doing hacker stuff, and it's heaps of fun and all of that," and here are some really hardcore rules that you also have to follow. But it's important, because ultimately, this whole dynamic relies on trust, and it relies on expectations being aligned and kept as the process plays out.
Katie Moussouris:
It was Popular Science magazine in 2008 that called it "Microsoft security grunt," which was kind of a conglomeration of job descriptions. The way they described it was the people who have to answer your email when you email Microsoft about a security issue and deal with that. So, this was kind of a broad, painted picture, but they literally ... Popular Science named it as one of the top 10 worst jobs in science, and it was literally between elephant vasectomist and whale feces researcher. So, you've got to take that in.
Casey Ellis:
I'm seeing a trend there.
Katie Moussouris:
Yeah, and everything. And by the way, I got to say that on an official federal advisory board call just last month, and I was so pleased. I'm certain I'm the first person to go on federal record saying ...
Joseph Carson:
You should have put the salaries.
Katie Moussouris:
Yeah, exactly. Elephant vasectomist and whale feces researcher. But here's the thing ...
Casey Ellis:
They've all got us really well, but they also got not so much.
Katie Moussouris:
No, that triage job sucks, and it does suck, and when you're good at it, you're going to be good at it, and you're not going to want to do it for the rest of your life. So, the fact of the matter is, that's a very important piece of understanding the ...
Casey Ellis:
Yeah. Team management behind that.
Katie Moussouris:
Yeah, whereas bug hunters may be happy to hunt bugs for many years longer than that. It's those folks that become the shock absorbers on the inside where you start to see a degeneration, and shorter and shorter times where they're willing to even do that work. So, you're kind of in a little puppy mill of having to recruit, train, and make sure that they're very efficient. Yeah.
Casey Ellis:
Yeah. And creating a career pathway and doing all those sorts of...
Joseph Carson:
I was just going to say, I think, that's the best part of that is that ...
Casey Ellis:
That's the upside of it.
Mike Gruen:
Right. The upside is that there is this entry level into security, where you can get this job, you can start understanding the problem, and you can start to see a career path that develops from there. I think that's the upside, that this is one of the few.
Mike Gruen:
People ask me all the time, "How do you get into cyber security?" Cybrary exists, we provide training and career development. But saying I want to get into cyber security is like saying I want to be a doctor or a lawyer. There are so many jobs, and not that many are entry level. I think that's a really good, solid entry level job where you get a good taste for what things are like, and a solid understanding and foundation.
Joseph Carson:
A lot of people came from the support industry, because ultimately, you listen to all the problems every single day and try to fix them as well.
Mike Gruen:
And for us, I mean, there's no difference. It's partially because of the type of organization that we run. We're very small, with a small engineering team, and I'm VP of Engineering and CISO, right? There's not as much back and forth between the security team and the engineering team. We're all one team.
Joseph Carson:
The QA.
Mike Gruen:
Right. QA is part of my ... But the idea is that a vulnerability or security problem is just another bug. That's how they're treated. We look at it like any other bug. What's the risk? What's the business value? What happens if we don't fix it? And so, it goes through the same sort of product triaging, and we have that benefit.
Mike Gruen:
I think a lot of companies, and I'm curious, Katie, you probably see this, are way further apart. It's like you have the security team, you have whoever is deciding what's going to get done, and the people doing it. They're just so far apart that it's almost impossible to get these things that are security problems fixed. In your experience ... Yeah.
Katie Moussouris:
It is absolutely like that in a lot of organizations and especially because time to market with whatever it is that you're building is so essential. We all know this as entrepreneurs and everything that you have to build the thing and focus on building the thing. And so, security and those security teams are often hired after the fact.
Katie Moussouris:
So, there's a well established development culture that exists. And then the security team comes in. And often, they're seen as the "no" people. Just Captain No over there telling us we can't do this thing that we were free to do before the security team got here.
Katie Moussouris:
So, there's often just this family counseling that we end up having to do to be quite honest, where we're like, "Okay. Now, we're going to get together, and we're going to count your bugs. Okay, everybody got the same count. All right. Now, we're going to figure out ..."
Casey Ellis:
These are the features that you didn't intend to put in the product.
Mike Gruen:
Exactly. They're unintended features, right?
Katie Moussouris:
I mean, literally getting into organizations where they want to fight Jira math wars, saying, "That shouldn't count as a security bug, because it was this other thing." And it's like, "Well, this one should count as a separate bug from this other one, even though they're the same root cause, because they're on different endpoints." You get into these religious wars over literally labeling in your bug databases.
Joseph Carson:
Many are labeled "by design."
Mike Gruen:
I mean, the same is true when you talk about features, and bugs, and so on, and so forth. I've run into that sometimes as well. It's like, "Look, all I really care about is how much new work are we doing and how much rework are we doing. And I don't really care what's causing the rework, until the amount of rework that we're doing is so large, that we're not getting actual new stuff done."
Mike Gruen:
Then it's like, "Well, is it because the product isn't defining the requirements well enough? Is it because the engineers are trying to get too much done too quickly, and they're putting all these bugs in unintentionally?"
Casey Ellis:
With that, because in my mind, based on observation, there's almost this post-Facebook and pre-Facebook kind of line in the sand, in terms of an organization's natural tendency to be able to even understand what we're talking about right now.
Casey Ellis:
Organizations that are older struggle with this whole idea of, "Oh, how are you going to fit it in?" The conversation we're having is more natively compatible with folks that are agile first, cloud first, CI/CD, all those sorts of things. Or at least think about that as a business, because you have to do that too.
Casey Ellis:
When you're retrofitting this stuff over an organization that's been around from a technology standpoint for 30 or 40 years or more, it's a lot of work, because the whole idea of, "Oh, well, how are we going to prioritize? How are we pushing the pipeline forward to insert this work that's coming off the wire?" They don't have a muscle group that does that yet, necessarily.
Katie Moussouris:
I think a lot of organizations get pushed into it because they have an existential threat to their business bottom line. That's what pushed Microsoft into the Trustworthy Computing initiative, where they did a code freeze and said, "Every developer is getting trained now on writing more secure code." And then they started their Security Development Lifecycle.
Katie Moussouris:
Same thing, Zoom went for a 90-day feature freeze, except for security features, because they were experiencing that existential threat that they could not go on with business as usual without addressing it in a very serious way.
Katie Moussouris:
We hope that most organizations don't have to have that sort of shock to their system to get them to start investing. But I definitely have seen a pattern where, to your point, Casey, it's interesting where the companies that seem to understand that they are in over their heads are these older companies that are coming forward and saying, like, "Don't tell anyone, we're over 100 years old, and we only got into computers like five years ago."
Casey Ellis:
I know.
Katie Moussouris:
Yeah. We're like, "It's very obvious. You don't need to be shy about that." I think it's these companies that already have huge sprawling infrastructures that need kind of the most TLC in how to get to a place where they can be responsive. This is a new work item engine for them that they need to get that rhythm right.
Casey Ellis:
Even the idea of being able to admit the fact that there's likely to be a problem in the first place. To me, that's becoming tantamount to security maturity. The idea that like, "No, I know that somewhere, at some time over the past existence I've had on the internet, one of my developers has made some sort of mistake that's created a security risk." That's just mathematically probably true.
Casey Ellis:
And the companies that are comfortable with that are the ones that end up being in a really strong position to be able to integrate this feedback and just have it be a part of how they operate. But also, I think they ultimately end up being the ones that are more trusted by the consumer base as well.
Casey Ellis:
So, it will be interesting to see how that plays out over time, because that's something that I see the older organizations struggling with, just because of 40 years of history of saying there's nothing wrong.
Katie Moussouris:
Oh, yeah.
Joseph Carson:
And companies that have a waterfall approach, with two, three year life cycles and code freezes, are probably going to struggle. They're probably going to take a long time to change.
Katie Moussouris:
Yeah. In some ways, but in other ways, there are certain things about those companies that it makes them a lot more deliberate about things, which if you are talking about needing to hire resources and plan for them internally, those old fashioned waterfall companies, that's kind of their bread and butter of how they plan releases and plan engagement.
Katie Moussouris:
But I do think that the companies that are smaller, more agile, more compact, they can get things done faster. The real danger there is those companies are usually moving so fast. And remember, the turnover rate in our industry is high, and especially for jobs that touch this area. Whale feces, elephant vasectomists, that kind of thing.
Katie Moussouris:
So, where we see smaller companies falling down is in failing to capture some of that magic that was simply living in the heads of the people in place at a given point in time. So, what we do see is a very responsive organization, and then key personnel leave and take the institutional knowledge of how to make it a very good, responsive organization with them.
Katie Moussouris:
And the org itself suffers the wound of having to relearn that operational capacity. So, it's not a fixed point in time where we see you're mature or you're not. It's like this group in your organization is highly mature, but that person is about to leave the company, and you're going to be dropped down into the relatively immature levels of the rest of the company. So, we see all of these kinds of mixed modes going on.
Joseph Carson:
So, I think we might be able to solve the whole world's problems on this call today.
Casey Ellis:
I feel like we've fixed it.
Joseph Carson:
I think we've fixed it. I think for everyone else who's listening to this, it will be very clear, and everything is going to be fixed. The sun is starting to rise here in Estonia, so I thought ...
Katie Moussouris:
Oh, god.
Joseph Carson:
From the two perspectives: for the companies that are thinking about this, or want to really address it, for anyone who's going to take this journey, anyone who's listening in, what's your recommendation? And the second part of it as well: for security researchers who want to do the right thing, what do you recommend for them?
Joseph Carson:
So, it's a two part question. For companies that are thinking about this, what's your recommendation for the path they take, and where's a good place to start? And for security researchers, the same.
Casey Ellis:
Wow. For security researchers, I think just getting to be a part of a community, plug in as much as you possibly can. It's not just about learning to hack or learning how to do whatever the thing is that you're wanting to do. I actually do think that we grow as almost ...
Casey Ellis:
I've got this picture on the office wall in San Francisco. It's a swarm of birds. And I kind of think about the community in a similar way. If we're together, then the whole becomes far greater than the sum of the parts. So, I think for researchers to be able to do that, we've got Discords, there's forums.
Casey Ellis:
At Bugcrowd, we were doing virtual conferences before it was cool, virtual security conferences, partly because we wanted to create opportunities for connection, and to be able to get educational information into the hands of people to see if it matches with their curiosity so they can move forward. I think that happens in community. I'm a huge believer in that.
Casey Ellis:
For organizations, really, my biased, but fairly accurate, recommendation is give us a call, just in terms of being able to sit down and understand where you're up to and what are the things that you're trying to get done.
Casey Ellis:
If you're coming and saying, "We need to start a public bug bounty program, so we could do a huge press release on Friday." And it's Wednesday and you've not done anything, we're likely to say, "No, don't do that." And then we can have a conversation around how you get your goals met in a way that's more sane. And that kind of fits in with what you're trying to get done as an organization.
Casey Ellis:
I think especially for the larger organizations that might be earlier on in this process, reaching out to Katie and the Luta crew as well. All of this stuff around how you mature yourself as an organization to be in a position where you are, in 2021, at or near the top of the pack as it relates to cyber security is a lot of work. And honestly, every organization is in that boat together. So, to be able to get assistance from organizations like hers, from a consulting standpoint, I think is really valuable as well.
Joseph Carson:
So, for both, don't go it alone.
Casey Ellis:
Don't go it alone. 100%. Yeah, totally.
Katie Moussouris:
Thanks, Casey. I appreciate that. We teamed up on the UK government. So, Luta was in there first, and we were helping to get them ready and mature. And this is a government-wide initiative where they wanted to assess their operational maturity, find out what people, processes, and tools were missing, so that we don't get a bad case of bug indigestion when we start opening the front door.
Katie Moussouris:
And then we coupled up with Bugcrowd who provided that initial service of making sure it ran smoothly, and that hacker expectations were met. They were getting good advice on how to fix some of these issues. So, yeah, I think that it's important to be able to do that organizational assessment, think about what your goals are.
Katie Moussouris:
Definitely, neither Bugcrowd nor Luta Security is into you just getting a press release out of it, because ultimately, it will come back and bite you, whether that's in the form of perverse incentives, or even making it more difficult for you to hire internal folks. If you're focusing so much energy on your external bug bounty programs when there are skill sets that you could actually hire for internally, and you haven't gotten to that sophistication level where you can't possibly afford the pairs of eyes that you would need internally, I think that's super important to get right.
Katie Moussouris:
And then, from the security researchers' standpoint, I would advise them, definitely, belonging to a community is super important. But ultimately, however you choose to spend your time, whether it's hobbyist hacking time or professional hacking time, come to an understanding with yourself about what your goals are. Are your goals to learn? Then definitely, you have a broad availability of targets.
Katie Moussouris:
If your goal is to make money, though, the thing I advise people to do is: go for a company that already offers a bug bounty program. You will not believe how many researchers come to me and say, "How can I make such and such company pay a bug bounty to me for this vuln disclosure report?" I'm like, "Do they have a bug bounty program?" And they say, "No, how can I make them have one?"
Katie Moussouris:
How about you not spend any of your time trying to make people do something they're not ready for? Instead, just go to the companies that have advertised that they're ready for it. So, for researchers, if your goal is to make money, choose your targets wisely. Calculate your hourly rate: "If I get the highest bounty, how many hours is it worth it to me to spend arguing back and forth? Or am I ready to set it and forget it, just send it off, and if I get paid, great; if not, whatever?" But really, be ruthless about your time if you are trying to make a living using bug bounty programs as part of that living. That's my advice for researchers.
Joseph Carson:
Well said for both of you. And Mike, any closing thoughts? Anything?
Mike Gruen:
Yeah. I think one of the important parts is also just understanding the mechanisms that you already have in place and trying to leverage them. For developers, rather than making it adversarial internally, figuring out how to make it so that it's just part of the developer's job, like any other bug fix, is one of those areas that gets overlooked a lot.
Mike Gruen:
I think if you're a newer company, and you're doing continuous delivery, or agile. I'm not a big agile fan. But continuous delivery, or whatever methodology you're using, how do you plug this into your CI/CD? How do you make these programs just work like any other part of the business, like feature development or bug fixes?
Mike Gruen:
I think if you approach it from that perspective, things just go a lot smoother. And I think when you're one of the larger ones, that's where other companies ... You need external help in bridging those gaps, but if you're ...
Casey Ellis:
In moving to that place. I think it's a really good place to land it. Because ultimately, this is what the future of development and business and being on the internet looks like. It's distributed, it's feature-rich, it's being constantly updated.
Casey Ellis:
Everyone is at various stages of maturity on that journey, but I believe that's ultimately kind of what the end state looks like for basically everyone. Okay, where are you with respect to that? And what steps do you take next moving forward?
Mike Gruen:
And I think you have to be honest with yourself about where you are, right?
Katie Moussouris:
Yeah.
Mike Gruen:
Yeah.
Katie Moussouris:
No, we published a free guide called the Vulnerability Coordination Maturity Model, and it's on our website. It's lutasecurity.com/VCMM. And you can just download the slides. We're not tracking you with cookies. We're not asking for your email address. It's literally as free as free comes on the internet.
Katie Moussouris:
We're just making that available for free for people to just look at and try to get a sense of where they are maturity-wise. Obviously, when we do a maturity assessment, we go much further in depth than what you see on the website there, but it's a really good framework. If people want to self assess, like, "Am I ready for even a vuln disclosure program?" Let alone a bug bounty program.
Katie Moussouris:
They can literally take a look at those slides and, super easily, tell themselves, "Is this realistic? Or do we have other work where we need to invest further in our internal processes before we take on this other work stream that is very demanding and is a work stream that we don't entirely control?"
Katie Moussouris:
I think that's the big transition: you're adding in another work stream whose rhythm your company can't necessarily control. So, it's being prepared almost like a customer support organization, but a customer support organization that deals with the people who can hack you out of business is really what it is.
Joseph Carson:
So, I'm going to try and summarize everything from the closing statements. Going to Mike first. Mike, what you're really telling me is not to do it as a checkbox and not to do it as a special project. You want to actually build it into your existing process. You want to make it something that is part of your job and part of basically the entire lifecycle process. Not special, not a checkbox; try to get it into the existing workflow.
Joseph Carson:
And from Katie, I think, it's really setting the goals, and really understanding about what your intentions are, and making sure you actually put your time and the resources into the right places. And from Casey, it's really about don't go it alone, get help, be part of the community.
Joseph Carson:
I think that really sums it up. For anyone in our audience really looking to take on this path, I do highly recommend reaching out to Katie. Katie is one of the world experts in this area, has been doing it for a long time, and really started it off. I think it was James who received one of the first bug bounty payments you worked on.
Joseph Carson:
And if you're definitely looking to be part of the community, reach out to Casey, because that's where you can build your network. That's where you can bounce your knowledge off others. We might come in with a specific set of skills, but being part of a community will help you round out those skills and become a much better skilled person.
Joseph Carson:
So, that's really where I kind of think it's coming down to that don't go it alone, get help, reach out to Katie and Casey, they'll definitely be there to help you and direct you in the right path and journey to success.
Joseph Carson:
So, again, it was a real pleasure having you on the show. Really awesome conversation. I think it's probably one of the longest episodes we'll ever have, but that's not a bad thing. I think the more knowledge we share, the more we talk, the better it is for the world. The more resources and the more knowledge people will gain.
Joseph Carson:
So, many thanks, Katie, Casey. Mike, as always, I'm the first person you speak to during the day and the last person you speak to, so I'm not sure how that goes. But for the audience tuning in every two weeks for an episode of 401 Access Denied: subscribe, get in touch with us, share your feedback, and let us know what you'd like to hear. So, out there, stay safe, stay secure, and keep learning. Thank you.