Skip to content
 
Episode 94

Crowdsourced Security & Vulnerability Disclosure with Casey Ellis

EPISODE SUMMARY

Join host Joseph Carson for a compelling discussion with Bugcrowd founder Casey Ellis on the evolution of coordinated vulnerability disclosure. Ellis’ pioneering work connects ethical hackers with organizations to enhance their cyber resilience. He shares his experiences and unique insights into disclosure trends, including how changing regulations and emerging AI considerations are having an impact. Don't miss this engaging dialogue to learn how the next generation of builders and breakers can take the lead and collaborate for better security.

Watch the video or scroll down to listen to the podcast:

 

Subscribe or listen now:  Apple Podcasts   Spotify   iHeartRadio

Joseph Carson:

Hello everyone. Welcome back to another episode of the 401 Access Denied Podcast. I'm the host of the episode, Joe Carson, Chief Security Scientist and Advisory CISO at Delinea, and I'm really excited today to basically bring back a returning guest to the episode. So it's awesome to have Casey on the show today. So I'm going to pass over to Casey to introduce himself, and I know you've been on the show before, but for the new audience and new people, I think it'll be good to give a bit of background, who you are, what you do, and some fun things about yourself.

Casey Ellis:

Absolutely. Yeah, thanks for having me back on, Joe. It's been a while, but it's been good to be catching up at conferences, and doing all that sort of stuff lately, and getting back in to have a chat. So yeah, my name's Casey Ellis. I'm the founder and Chief Strategy Officer for Bugcrowd. I'm also the co-founder of The disclose.io Project. Really, I think the best description of me is that I basically pioneered this idea of crowdsourced security as a service. So Bugcrowd didn't invent vulnerability disclosure, or bug bounty programs, those were prior art, but this idea of basically building out a platform to connect all of the latent potential that exists in the white hat hacker community with as many problems as we can on the defender side. We were the first to break ground on that, and that's coming up on 11 years ago now. So, been at it for quite some time. Yeah, it's crazy, right? It's like-

Joseph Carson:

That's pretty impressive.

Casey Ellis:

It feels like a billion years ago and yesterday all at the same time. But it's such a big, and kind of complex, and frankly kind of interesting and fun problem to solve. It actually doesn't seem like that long ago sometimes, as well. But yeah, on the policy piece, with disclose.io, that's really a vulnerability disclosure standardization project, and its main goal is to ease the adoption of vulnerability disclosure policies for anyone, right? Bugcrowd helps people run these programs. disclose.io is basically tools that anyone can use regardless of whether they're a customer of Bugcrowd's or not. And yeah, a big reason for getting into that is really just to change the operating environment for people to hack in good faith. I think our backdrop in all of this is that we've been historically assumed to be criminal, and a lot of the...

Joseph Carson:

At least when it started, it wasn't. It was curiosity. Now, the media have made it criminalized.

Casey Ellis:

Exactly.

Joseph Carson:

Everything you see in the media is all about the criminal hackers, not so much the good people.

Casey Ellis:

Yeah. I think part of the reason for that is that bad news travels faster, right? So, you're right, hacking started off as a morally agnostic thing back in the kind of the '80s, but then got co-opted into being synonymous with crime through the '90s and forward from there. And really, we're at a point right now where hackers are pretty vital, and I think actually critical as a part of the Internet's immune system, and we've had the equivalent of an autoimmune deficiency this entire time. So, a big part of what we're trying to do there is just accelerate adoption of that conversation between builders and breakers. Have companies get to the point of humility where they say, "Oh, we're not perfect. Sometimes things are going to come in from the outside world. Let's actually prepare for that." And in the process, have an impact on laws like the CFAA, like DMCA, like the Computer Misuse Acts in the UK, and other places, because the other side of it-

Joseph Carson:

Those laws are always quite confusing. I had a big discussion. One of the great things actually when I attended a Bugcrowd party at Black Hat in DEF CON this year, which was fantastic because they had a long discussion with the EFF, so...

Casey Ellis:

There you go.

Joseph Carson:

And that was really interesting, because one of the things we talked about was the Computer Misuse Act, and how it's evolving. We're providing a little bit more safety for security researchers, and those who are breaking things, where in the past, most of it was you had to prove yourself innocent. You're already assumed guilty. And now it's got to the point where they had to prove malicious intent, which is the right direction. So, listening, and having that discussion was fantastic. So, I was really happy that Bugcrowd made that, facilitated that for me, that discussion was great, to go into those details.

Casey Ellis:

And I think it's not just me, and it's not just Bugcrowd, it's an army of dozens, and I think at this point in time, we're up into hundreds, or even thousands of people that are actually putting their hand to the plow to influence policy in the right direction when it comes to security research. But it's been a long road. The charging law changes that the DOJ passed down around CFAA in I think it was May of last year, basically just did exactly what you just said. Instead of, if you've broken into a computer, if you've exceeded authorized access in some way, that's not automatically a crime. If you happen to be using that to commit a crime, then yeah, you're in trouble, right?

Joseph Carson:

Correct. It's the proof.

Casey Ellis:

But if you happen to be doing that as a part of good faith security research, then the assumption is that you've got good intent, and like you have to prove that there's a crime in process to actually prosecute that, which takes a lot of-

Joseph Carson:

And it goes back to...

Casey Ellis:

... Chilling effect away from security research.

Joseph Carson:

Yeah, because really, for those who are in the security research area, it can be quite a challenging time. When you look back at the journalists who basically found, what was it, the F11 was it?

Casey Ellis:

F12.

Joseph Carson:

And ultimately... Yeah, F12. F12.

Casey Ellis:

F12, they found out. Yeah. Yeah.

Joseph Carson:

So viewing source of the browser, and ultimately then basically going through this whole public debate with the politicians in the state. Getting into that scenario, it's quite challenging, and I think it's good here where the law's at least getting into where you had to prove that you're doing something properly criminal, not just looking at the source of a webpage.

Casey Ellis:

And then coming back to disclose.io, coming back to Bugcrowd's involvement, and things like the Hacking Policy Council, and the Election Security Research Forum, we're a part of all of those as well, and I'm personally involved in a lot of that stuff. It really comes down to driving standardization, where we can kind of provide it. Because the other side of this that technologists often overlook is that it's pretty confusing. You've got policy makers that are, I believe, most of the time, honestly trying to do the right thing, but they're not thinking about the same things that we are on a day-to-day basis, right?

So, the fact that you've got a state law in Missouri that makes F12 potentially a felony crime, at the same time, the CFAA's being amended by the Department of Justice, like yeah, that's the thing that happens. That kind of disconnect, and disunification across different laws is actually not that uncommon. So trying to harmonize that stuff is ultimately a goal as well.

Joseph Carson:

Yeah, absolutely. I think that's one of the things in the US is definitely... Every state has a different law. So even you have to even know the specific state laws to make sure you actually-

Casey Ellis:

It gets tricky.

Joseph Carson:

... stay in line with them.

Casey Ellis:

Yeah, 100%. Yeah. Yeah.

Joseph Carson:

So one of the things I wanted to kind of get, what's some of the big trends that you've seen this year? What's some of the big movements? What's the direction, specifically for vulnerability disclosure? What's, let's say, some of the news headlines? What's some of the things that government, you're seeing, the regulations, compliance and stuff? Where's the direction? What's the big topics of this year, that you've seen?

Casey Ellis:

Yeah. Look, I think just what we just talked about, so the changes in CFAA have started to be kind of understood. They were implemented as a charging rule change last year, in May, but I think it kind of went under the rug a little bit in terms of people actually talking about it, and understanding the implications of that. Really what it does is it changes this default threatening environment for security researchers into one that's default permissive. And there's a few things that have to happen in order to actually catch up with that.

So, there's been definitely a kind of a groundswell of security research, people actually understanding what the implications of a change like that mean, both on the defender, and the white hat hacker side of things. And moving on to the next piece, it's like, "All right, how do we, particularly in North America, harmonize as best we can some of the state laws, so that there isn't a disconnect between the goals that we've been pursuing at the federal level?" Some of it-

Joseph Carson:

Does it need to be at the federal level, or can the states do something like a standard? Like one state basically takes the precedence. Like California Consumer Privacy Act did with GDPR, do we need to get some state that's going to say, "We're going to take this forward as the baseline?"

Casey Ellis:

Well, there are those who... like California actually has done this, and there are other states that have as well. The thing that's interesting about North American policy is you've got the 10th Amendment, which is basically states' rights. So, in these sorts of issues, the Fed can't mandate what the states do, and you end up oftentimes with this kind of weird disconnect between federal law, and state law. So, instead of kind of going top down, often the best thing to do is actually go bottom up, and do exactly what you just said. If you've got enough precedent, then all of a sudden the states start to get together, and they all decide it's a good idea, and things start to harmonize behind that. But it's not just North America. There's CMA, there's things happening in Australia.

Joseph Carson:

In EU as well. Australia's having-

Casey Ellis:

I've been there.

Joseph Carson:

Every company... Singapore is going down it. All of Japan also have different paths. And I've seen this. I tried exactly what you're doing. I tried probably about seven years ago with digital identity. I went to the federal level, and was going down this path about, "Let's get everyone in the US going down this digitalized identity signature and so forth to make services much easier to interact with the government."

Casey Ellis:

Yep.

Joseph Carson:

And the federal government said, "Fantastic. We love it, but we can't do it," because the states own the identity, and that's what's...

Casey Ellis:

Yeah. And they know their rights. Yep, exactly right.

Joseph Carson:

And that was the challenge. And then you had to go to deal with every single state individually, and they got different states going down one path, and then other states doing it completely something different, and it was always challenging. And that got to the point where it got really confusing.

Casey Ellis:

Yeah. I know you wanted to talk about disclos.io a bit later, so I'll come back to kind of the strategy that we've used to actually bypass that. But yeah, with respect to trends in vulnerability disclosure in general, like basically, I can't think of a western country at this point in time that isn't reviewing this pretty actively as a part of their kind of top-down policy from a crime standpoint, but also integrating it into their kind of national cybersecurity strategies as well. I think we're at this point where, at that level, it's pretty well-recognized that like vulnerabilities happen. It's not because we're stupid, or bad, or whatever else, it's because building computer-

Joseph Carson:

We're human.

Casey Ellis:

Yeah. Like building computer things is kind of hard, and we're human, and humans make mistakes. So, there's a maturity I think that comes with that sort of posture. And I've been preaching about this for a while, like the idea of that tying back into Kerckhoffs's principle in cryptography, like the enemy knows the system, right? If you try to maintain a posture of security by pretending vulnerabilities don't exist, eventually you're going to get called out, and whatever security controls you've got built on top of that thing are going to fall apart at that point in time. So, the anti-fragile approach is to accept the fact that you're not perfect, be transparent about that, and accept, and integrate as much input as you possibly can, in how you do-

Joseph Carson:

And especially when you think about a lot of the technology we use today is shared in many cases. We've got basically lots of supply chains, your depending on different hardware vendors, cloud services, and there's certain things that you can control maybe in your own area, but the moment something like an OS vulnerability comes out, or some type of shared code vulnerability comes out, like Log4j, those types of scenarios, that's something that you had to be prepared for.

Casey Ellis:

Yeah, no, definitely. Definitely. And the other part with this one, just to tap on it quickly, because I do think the topic of election security, particularly in North America, but it's a big election year for a lot of western countries next year, the idea of there being this convergence between information warfare, disinformation, all those different things, and security is ultimately kind of the way that we keep the plumbing that gets information safe from A to B, yeah, how we actually keep that safe and secure, right? So, full disclosure ties into that as well. It's something that we started working on back in 2018, ahead of the 2020 elections here, the idea that you can actually explain what... It's like, I'm not going to be able to, as a poll worker, with a concerned voter standing in front of me on election day, explain how EDR works, or what ASLR in a voting machine looks like, or any of those different things.

But what I can do is explain the concept of Neighborhood Watch for these systems, and the fact that there are people that have these skills, that are actually trying to help make democracy safer going forward, and that the manufacturers and the states are actually listening to that and integrating that input. That's a tool that you can actually use to increase... It's not a silver bullet. We're not saying everything's secure, but when we're talking about the things that we're trying to do to make the process more transparent and more secure, it's actually a really effective tool for us to use in election stuff. Yeah.

Joseph Carson:

Yeah. So interesting in that topic, one of the things, when I was recently in Singapore, I was there for Singapore International Cyber Week, and we had GovWare and all those, and one of the biggest topics was around... And this comes back to some of the EU regulation laws that's coming in play, and also laws coming out across Singapore, Japan, other countries, which is around the labeling, security labeling and stuff like that.

Casey Ellis:

Yeah.

Joseph Carson:

I wonder what your opinion is. Should organizations look, not just about, "Here's the security updates, and here's the security process." It's interesting that should vulnerability disclosure also be tied into labeling, and classification as well, because that's an interesting area.

Casey Ellis:

Yeah, I mean, I'm biased on this one. It's a tricky one, because obviously vulnerability disclosure programs, and the ability to run those, and have those be efficient for organizations, that is a product that Bugcrowd sells, so I'll disclose an inherent kind of conflict of interest there. But I do genuinely believe that the idea of that transparency, and that kind of anti fragility that I was talking about before, it is a proxy indicator of security maturity within an organization.

And again, what we're not saying is everything's perfectly fine. But if you're looking for markers from the outside in that an organization is doing its best basically to stay on top of security and keep its users safe, then it's a pretty easy thing to find for starters, but also I think it's a pretty proactive... Still at the point... We're still at a point of adoption where the folks that are doing that are quite proactive in their approach to security. So, seeing that sort of feed into things like rating systems, and we've seen the presence or absence of a VDP, or a bug bounty program used as a cyber risk indicator for insurance companies. We've seen it pop up in different places like that. It's logical.

Joseph Carson:

That's really interesting, is I recently did a big talk in cyber insurance, and one thing as I'm seeing is absolutely cyber insurance is starting to have a lot more requirements for organizations because they've been at a loss for many years, and they're looking to make sure that they minimize. And they're getting to the point where they're starting... I would say this is the year of cyber insurance maturity, where the insurers are starting to better understand risk quantification when it comes to cyber attacks, and cyber incidents.

Casey Ellis:

They've been buying data for long enough now, it's about time they turn it into an actual business, I think.

Joseph Carson:

Exactly, exactly. I think we've had enough incidents over the last five years, that there's enough data there. Probably even more data than what they've collected over natural disasters over the entire lifetime of the insurance industry, think.

Casey Ellis:

I'd say you'd be, yeah.

Joseph Carson:

I think we've had enough cyber incidents to exceed that, and to the point where they're now having that better quantification, and I think they're really starting to realize what risk is when it comes to cyber, and how to better reduce it. And I see a lot of things where compliance is becoming mandatory, and definitely what security programs are doing, whether it being security awareness training, whether it being their PETS management program and vulnerability disclosures, all part of that. So, it's really showing, to your point, is that these organizations that really take these different strategies to top priority is they're doing the right things to reduce the risk, and this brings it into a side topic-

Casey Ellis:

Just on that real quick, and they're selecting... They're not making it fragile, because I've been involved in a lot of the different kind of conversations around labeling through policy council and different things like that, and the tendency is always to try to make it very technical, and very kind of brittle as a result. Like, "You need to do this thing, that thing, the other thing," and the technical controls that might not necessarily be relevant to every organization that gets that label.

And also, fast-forward three years, maybe the Internet's changed a fair bit by that point. So, the whole idea of going back to these kind of core fundamentals, like security awareness training within your organization, secure code training, things like vulnerability disclosure, so if you get it, if there is an issue, you've got the ability to receive that input from the outside world, these kind of core design feeders. I think I'm seeing a lot of folk reorient around just having those as the core principles and trying to build from that instead of the other way around, which is good. I think that's ultimately our...

Joseph Carson:

Absolutely, because at the same time, you want to make sure that you're actually focusing on the business value.

Casey Ellis:

Yeah.

Joseph Carson:

And ultimately it's a business strategy at the end of the day, to make the business more resilient, to make the business reduce on the risk. Ultimately, it all has to tie back into the business outcomes. And this gets into one of the kind of interesting side topics, the recent news around the SolarWinds CISO, who's now being sued by the SEC. Of course, I guess the whole reason is falsely misrepresenting their security capabilities, or posture. What's your thoughts and opinion around that? Because it's sent a bit of shock waves around the world, in regards to when it comes to... I guess, we've been pushing for CISOs to be more represented at the board level. Is this exposing them a bit more, as a result of that push?

Casey Ellis:

Well, yeah. I guess the thing that's a little quirky about, I guess, my point of view on that is having worked in cybersecurity, and having been a board member, and a chair for the past 10, 11 years through Bugcrowd, I can understand through the lens of board liability, and the risk associated with that, as well as frankly the challenges of trying to run an effective security program. The thing that's... There's an increasing... We've got a trend starting to form here. You've got Joe Sullivan with the stuff that happened with Uber, you've got now this whole thing with SolarWinds, and there's other cases that are kind of bubbling away in the background. So it's very clear that this issue of liability, we are in the boardroom now, which means, okay, SEC's-

Joseph Carson:

Which means your signature means a lot when you sign something.

Casey Ellis:

Yeah, yeah.

Joseph Carson:

And I think this comes down to-

Casey Ellis:

Time for big boy pants, right?

Joseph Carson:

Yes. And it comes down to... I've been in situations where I will not sign anything until I have a basically very clear understanding about what I'm signing. Especially when you're at that level. And is this a case where maybe the CISOs aren't maybe yet really understanding accountability at that level? Or is it something that maybe that they're being pressured to make sure that they're focusing on getting things done, and moving forward?

Casey Ellis:

Yeah, that's an interesting question, because I will caveat this and say that I've not dove all the way into the specifics of the SolarWinds stuff yet, so I'm speaking in general terms here, but I tend to think it actually goes in both directions. I think for CISOs, and for security leaders, we're not used to being accountable. Kind of the extent of our accountability to the business is whether or not we get breached, right?

Joseph Carson:

Yep.

Casey Ellis:

Yes or no. And until it happens, that accountability is a fairly loose coupling, because the board historically hasn't really understood what the hell we actually do.

Joseph Carson:

There's been very few boards that actually have very well-educated security professionals on the board.

Casey Ellis:

Yeah. Yeah. So there's this impedance mismatch between our language and theirs, and I think really the part that we're going through right now is a journey of understanding in both directions. You look at the SEC proposed rule changes that came out last year, where it's basically you have to provide enumeration of the cybersecurity expertise that you've got on your board, for publicly listed companies. That's the SEC trying to basically get us out of the nerd corner, and put us square in the middle of just normal risk management and governance, which is what a board does. So, we're in kind of a bumpy season, ...normalizing right now, I think.

Joseph Carson:

To your point, it's a bit of a translation issue as well. Sometimes I find as I've been looking through, for example, recently in the cyber insurance industry, I find the same things, is that the language that we speak in security, and the language that is spoken at the board and in cyber insurers are not the same. Which gets confusing, because you might be talking about one thing, and assuming, making that assumption, but to the other person on the other side, it means something completely different. One big example was at the security level, when we talk about data backup, and recovery, we're thinking about recovering from a good backup, but sometimes in the insurance industry, that translates into paying the ransom, and getting the key back to recover the data, because they see that still as a legitimate recovery operation as well. So, just making sure they're always on the same page.

Casey Ellis:

100%. Well, I do think... Yeah, so there's that, in terms of the impedance mismatch, and the different kind of priorities, and languages being used, and all that kind of thing. I do foresee, when you look at some of the lean of policymaking legislation, that's been the trend over the past 12 to 18 months. It's very focused on the user, and the thing that's best for the security for the user is not always necessarily aligned with the thing that's best for the security financially of the shareholder.

Joseph Carson:

Yep.

Casey Ellis:

So okay, when there's any kind of divergence on that level, that's going to be a pretty interesting thing to try to reconcile as we go forward. And I think for the short term, the shareholder is going to win out, because they're paying the bills, and that's kind of how capitalism works.

Joseph Carson:

It's the business decision, ultimately.

Casey Ellis:

But at the same time, the US is trying to understand, and try to figure out different ways to roll out privacy legislation, in a way that is kind of reflective of CCPA in California at the state level, and some of the stuff in the EU. That's going to force that conversation, and force a point of convergence. So, it's a really interesting time to be a CISO. You kind of asked for my just general hot take. There's a lot of conversation around E&O insurance, actually viewing the liability of being in that position in a different way.

Joseph Carson:

I've actually seen even-

Casey Ellis:

The fact that we just haven't really been doing that much, up until this point.

Joseph Carson:

Interesting, is to that point, I was at a CISO summit just a few weeks ago, even prior to all of this SolarWinds happening, and the big topic of the discussion was about CISO liability, and how they're taking insurance for themselves, to cover themselves from personal liability, and that was a big topic, so it was really interesting.

Casey Ellis:

Yep.

Joseph Carson:

Another area I want to get into as well is a big topic's been over the last couple of years, and this is also something that I've seen numerous times. There was a great talker this year at BSides San Francisco around software bill of materials, and vulnerability disclosure. What's your view around... Because ultimately, we're in the supply chain. And this again, gets into where company A has a vulnerability, how much do they get into disclosing that further down into their supply chains, that ultimately get impacted, and what's happening in that space around the SBOMs and vulnerability disclosure?

Casey Ellis:

Yeah, yeah, look, I mean, speaking of SolarWinds, like as an attack, SolarWinds was kind of an education process for people outside of security that the Internet is literally built on a gigantic stack of turtles, and we've got dependencies all the way down, and that can be a vendor like SolarWinds in that case, or it can be open source libraries, it can be all those different things. The stuff that we interact with on a daily basis is so incredibly complex in how it's been assembled at this point in time. You've got to think about what are those components? Where does liability exist for a mediation, if something goes wrong? Coming back to the last question, that's, I think, starting to come into the mix as well. And so, there was this kind of presidential EO around the creation of SBOMs, which by the way, was... I know there was some different opinions around all of that, but I've watched Alan Friedman in particular.

Joseph Carson:

Yeah. I'm just seeing his Italian vacation pictures on his bicycle, which is also cool.

Casey Ellis:

Yeah, yeah. Yeah. Like Josh Coleman, Beau Woods. There's been a bunch of people that have been really pushing this forward. I think Alan's kind of the face of it, and I've watched him annoy everyone talking about SBOM for the better part of 10 years now, and all of a sudden he's living his best life, because that is legitimately a problem that we need to solve. So, that part I think is really good. The downside of it is that no one really knows what to do with an SBOM yet. We've all got the ability to generate them, and we've got different mandates that actually require that now, but cool, now I've got a phone book sitting on my desk, what do I do with it?

Joseph Carson:

What would you do with it?

Casey Ellis:

How do I actually operationalize that? Yeah.

Joseph Carson:

So it's interesting we talk about it, because this is a big lesson I learned, and I think that we're going to go down the same process that I had at least 15 years ago, when I had this discussion. I went in, back 15 years ago, I was talking about software defined networks. And I went to the Estonian government. I was like, "Oh, this is the greatest thing. We have to talk about software defined networks." Now, I feel it's very... The similar thing that has been done with the SBOM as well, is that we're focusing on software, and all of the components that builds into it. And one of the things in Estonia when the government came back and said, "No, we don't do that, software defined networks, we do services." And I was like, "Huh, okay, that makes more sense." And what they... Rather than looking at it from a software perspective, they said, "What does all this together make as a service? And what's all the components?"

Casey Ellis:

Yeah, there you go.

Joseph Carson:

Not just the software, but because the hardware, it's the people, it's the process, it's the data hosting, it's the communication, and ultimately that defines the service, and therefore then they go through that, and do vulnerability assessments, risk assessments. And I kind of feel we're down the same path that we did with software defined networks in SBOMs, and ultimately, ultimately what is it you're providing? What is the end product?

Casey Ellis:

I think that's right. We definitely had a crash course in a similar sort of thing, when we started working a lot with the automotive industry back in 2015. Charlie -

Joseph Carson:

It is exactly a very comparison industry that... Yeah, it's the same.

Casey Ellis:

Yeah, it's like car manufacturers don't actually kind of originate as an OEM, like that many products that end up forming the thing that they go off and sell on the showroom floor. So like, okay, what's the chain of responsibility there? How do you secure all of that when you've got such a deep supply chain? Are there opportunities for cross pollination between the different OEMs, all that kind of stuff? I think those same principles apply to ultimately how we'll end up solving some of this supply chain stuff. But as it relates to-

Joseph Carson:

I think even the airline industry also is a really good example of similar... Yeah.

Casey Ellis:

Yeah, agriculture like ICS, pretty much the older and more physical an industry is, the more it has this particular challenge from a supply standpoint. But yeah, software's no exception to that, because we've been around, doing this for a long time now, and we're kind following similar patterns, I think. So I think from a vuln disclosure, and from a security research standpoint, it's definitely... I keep on coming back to the idea of how do we operationalize the insight that an SBOM can give us? And there's vendors doing that. Like the SPM space is pretty good at that, the pure... legit, different kind of platforms that do that part.

But what I've seen from a security research standpoint is folk actually starting to think through that lens. So when they're attacking a particular thing as a white hat, they're not just thinking about the code that's sitting on the surface of the target. They're thinking about all of the manufacturers that have fed up into it. Are there other points of insertion, other consistent... Any patterns that could potentially come up and create a vulnerability that might not be intuitive? All those different things. And you start to get a risk-based view. Ultimately, to me that's what SBOMs are probably going to end up being most useful for is actually kind of making risks sexy again in some ways and saying, "Okay, we've got all of this stuff that we can go off and fix. How are we going to prioritize that? Let's actually stack rank the risk priority and then kind of work-"

Joseph Carson:

In the end is how is that going to be used in real life, versus when you get company... So, when you get companies that said, "I can't reproduce it," you're going, "Ah."

Casey Ellis:

Yeah.

Joseph Carson:

Because you're not basically using it in the way it's physically going to be used by the end user.

Casey Ellis:

100%. And to me, there's science and there's math that can be brought into that, but I think at the end of the day, it is a fairly creative process that does require an attacker mindset to resolve the questions. So this idea of security researchers being a part of what feeds the outcome, that allows that, I'm starting to see things move in that direction, and I think that's probably going to continue.

Joseph Carson:

That's fantastic. So, we... Back into disclose.io, and the whole concept behind it, because I definitely... What's great is that yes, when you look at different vulnerability programs and the different types, and consistency, and getting a proper classification, what was the whole idea behind disclose.io? And can you tell us a little bit more about detail into... When you founded it, and then also the great discussion, the great talk was done at BSides San Francisco this year on it was fantastic.

Casey Ellis:

Yeah, yeah. No, definitely. That was-

Joseph Carson:

It was an eye-opener for me, because it wasn't something I... I knew it was happening over here, but it really opened my eyes into the challenges, and what needed to be done.

Casey Ellis:

Yeah, I mean that talk, it was funny, because I was sitting with a couple of folk that I've been working with in the space, and around disclose.io for a long time now, and frankly I hadn't really been worded up on what the talk was going to be about. I'm sort of sitting there borderline tearing up, and they're all like, "Oh my God, we were a part of this." I'm like, "Yeah, I'm changing..." And it's cool. I think there's so much more work to do in this space that it's very easy just to look at that part. It's like, "Okay, the magnitude of the problem that we're trying to solve here is huge." And I think as hackers, we mostly tend to see the broken stuff. So to actually stop, and pause, and look back on what's changed over the past 10 years, that was very cool.

But yeah, I mean look, the origin of it really was kind of early on in a bug bounty, involved disclosure in a Bugcrowd context, we realized that lawyers don't know how to write, and it's not because lawyers are dumb, it's because they just haven't thought about this before. They hadn't thought about, how do you write a policy that allows, at that point in time someone that they consider to be probably malicious, because the idea of good faith hacking, that sort of didn't really exist in the popular mindset 10 years ago. It does now, but it didn't then. How do you get them to write a policy that encourages, and accepts the input of people that are operating in good faith, but also doesn't give carte blanche to bad actors in the process? So, that was a real tripping hazard for lawyers, and a nervous lawyer tends to be pretty verbose, so you'd end up with War and Peace. They tried to figure out every age case, and whatever else.

Joseph Carson:

When you get the terms of services, and ULAs, and it just gets so enormous that you can't do anything with it.

Casey Ellis:

And honestly, I don't fault them for that. They're trying to do their job, and be as thorough as possible. But the other side of that problem is that you've got security researchers that aren't lawyers, that oftentimes are English as a second language, for example, and who are probably... Some of them do go through and try to fully digest and understand this stuff so that they know that they're not breaking the law, or doing the wrong thing, and from a corporate or a criminal standpoint, but most of them don't. So how do you simplify that, and collapse it in a way that makes it easier for both sides to understand, and ingest, with the ultimate goal of reducing risk for the hacker side of the house, but also to improve the speed of adoption on the defender side?

So that's kind of how it got started, and yeah, we put together with a law firm called Cipher Law, back in the day, a version, like a boilerplate piece of this, and then Amit started working on legal bug bounty through UC Berkeley, and that was kind of when we connected up, and like, "We should join forces on this," and actually created its own thing, that becomes almost like this lightning rod for innovation, and really, it's the idea of just what I said before, how do we make the environment safer for people that are hacking in good faith, so we can reduce the chilling effect on their work?

And then on the defender side, how do we make this as easy to adopt as possible, in the form of the language itself, but then tooling around it as well? All those different things. We've got a policymaker it now where people can literally just go, and plug stuff in, and it'll populate it. They can drag and drop it onto their website if they want to, or they can hand it off to their legal team if they're a larger company, and go through those processes. But the whole-

Joseph Carson:

Making it more flexible. Absolutely. It makes it more flexible for those who are... Because I always go back to the majority of hackers in the world are here to make the world a safer place, and the criminals don't read ULAs, and terms of services, and policy.

Casey Ellis:

Well, if they do, they don't follow them. Right?

Joseph Carson:

Exactly. Exactly. So, who are those really intended for? And what's the purpose, and how do we make sure that ultimately, companies are not abusing it, in regards to... When ultimately they're getting something in return. They're getting information about a problem that could be used maliciously by a criminal. So, I think it's really important to make sure the attention is right.

Casey Ellis:

100%. The other piece... Yeah, exactly. The other piece of that as well, is the adoption side's really important, because really the goal, and this is going back to what we were talking about earlier with state laws eventually gaining precedent, and then rolling up into federal laws. If you're having trouble with that part, then go another layer down to the constituents of that state, and just populate the commerce population with a particular approach, and a particular set of language. Ultimately what that does is builds up momentum and establishes precedent. That was one of the big tools that we used for... DOJ have been fantastic, and I'd give them all the credit where the credit's due, but in terms of prodding them in that direction, that was one of the big tools that we used, because you've got whatever it is at this point in time, tens of thousands of people using this policy, and then you've got AWS adopting it, and starting to give input from their legal team, you've got OpenAI.

Joseph Carson:

That's fantastic, because that was always-

Casey Ellis:

You've got... It just kind of grows from there.

Joseph Carson:

But cloud providers, that was always a challenge. Cloud providers, that was always the big one, because ultimately if you're doing... Even not just bug bounties and so forth, if you're doing vulnerability, or pen testing, or any type of thing, and you're doing it against shared services, that was always the biggest... That was scary. I avoided those like the plague, because I was like, "I'm not going down there." If you don't own it, and you're renting it, it's a bad idea.

Casey Ellis:

Exactly, and you've got to be able to establish what the rules of the road are, and have those things be clear. So that's what it's been, and it's separate from Bugcrowd in the sense that it's an open source project. We're actually in the process of establishing 501(c)(3), and want to do a lot more of that going forward. But the relationship to Bugcrowd obviously is that this sort of came from problems that we were thinking about, and trying to solve within Bugcrowd that others were as well. And from a commercial standpoint, it kind of clears the snow off the road. The easier it is to have a conversation with a company about hackers actually being useful, and not always harmful, that makes it easier for Bugcrowd as a company to position what we do as a platform, from a service standpoint. The whole thing kind of comes together, if that makes sense.

Joseph Carson:

It starts giving you clear visibility, because one is, going down and doing bug bounties is one thing, but then making sure organizations who make it easier for you to work with them, and more transparency about not only do we have a bug bounty, but actually we went down the process of making sure that the policies, and all the terms and everything makes... You're not being criminalized at the same time.

Casey Ellis:

Yeah. Yeah.

Joseph Carson:

And I think that just gives you much more, let's say, protection in the end. It gives you the comfort that you're getting all the bases covered.

Casey Ellis:

Yeah, We were setting out the -

Joseph Carson:

Which gets into the safe harbor discussion as well.

Casey Ellis:

Yeah. Exactly, and that was the big piece, the idea of adoption, but then adoption with safe harbor, to make sure that that safe harbor actually works in both directions. So the hacker feels comfortable, but also going back to where we started, the company feels comfortable putting this policy out there, knowing that there is still this concept of a criminal that they can lawyer up on if they feel like they should, because there's times when that's the appropriate thing to do. So it's not just everything's YOLO now, it's how do you find that big ground where everyone feels safe.

Joseph Carson:

What's that... Going back to that malicious intent part that we covered at the beginning, the great conversation that I had with the FF, which was also... I was so happy to hear the legal terms that kind of... They said that's what was being covered. I want to shift gears a little bit into the big movement that's been happening is around the whole secure by design discussion. Of course, that's where... Bug bounties always tend to be at the end. It's publicly raised software, it's out there, everyone's got it, and we see this whole Shift Left movements.

I prefer to be secured by default, is that not just by design, but actually having it turned on, enabled, and be used, and that it's not assuming as you go through and you click through all defaults, and eventually at the end, security's not turned on and enabled. I wanted to get at the point where you had to purposely change the security to turn it off. You have to go through the exception. But going back to that security by design and the whole Shift Left, where's bug bounties fitting into that? Are you starting to see it being more earlier in the design phase of software?

Casey Ellis:

Yeah, so there's probably two parts to that, that I'll speak to you real quick. What we are seeing is crowdsourcing. So, when people hear the term bug bounty, most often they think about a public program where it's, "Hey, Internet, come at me and I'll pay you." That doesn't fit with Shift Left, because Shift Left by nature is pre-production. But you've got the ability, and this is something that we've been doing since day one, to basically carve out trusted subsets of the community, and deploy them into testing and provide security feedback in a pre-production way. It's funny, because it's not necessarily a thing that we're known for, because the whole kind of public bug bounty thing's so noisy, so it drowns out the rest of what we do. But it's a pretty common use case, frankly, for the platform, and we're seeing more of that.

The idea of actually getting code into the hands of security researchers and saying, "Hey, have at it. You've signed a nondisclosure agreement, we've got our intellectual property buttoned down, all those different things. So as a customer, I'm okay with doing that." And what Bugcrowd does is figures out who are the right people to actually offer that to in the first place from a trust standpoint. That's definitely increasing, and I think that works. I mean, we've partnered with Secure Code Warrior for example, for a lot of years now, in terms of, okay, what we learned from that process, what can we then inform from a secure development training standpoint, to try to address some of these issues at the root cause, instead of catching them after they've already become a vulnerability? So, there's that. I think with Secure by Design, like this is the stuff that came out, I know a bunch of the folks that actually worked on that, and transparent-

Joseph Carson:

Much is now leading it, isn't it?

Casey Ellis:

Yeah, Much is in the mix. Jack Cable, and just a cast of heroes that have worked on this stuff for a long time have actually contributed to that document. And one of the key things that it calls out is transparency, which I actually think has a pretty profound impact on design in a way that's not necessarily that intuitive. If you're building something, knowing that there's going to be transparency around the security of that thing in the future, then you think about resilience in a different way, which to me, fundamentally affects some of the design choices that you're going to make, including which defaults you select from a UX, and a usability standpoint.

Joseph Carson:

Yeah. Yeah.

Casey Ellis:

To your point before.

Joseph Carson:

What you turn on, what you don't.

Casey Ellis:

Yeah.

Joseph Carson:

Because sometimes people are afraid, when you're going down that security design process, you're afraid of adoption, and usability issues, and sometimes that's why they're always turned off. Let's not give them additional things that they need to get to where it is usable, but we need to make sure that we incorporate that into design as well. So, making secure... easy, and seamless at the end. Yeah.

Casey Ellis:

Yeah, for sure. I think the biggest piece there is that the way I see, I look at this, because we're talking about vuln disclosure, we're talking about bounty, we're talking about pre-production, crowdsourcing and multi-sourcing. There's all different ways to plug hackers into helping solve this particular problem. The overarching thing, I think, really comes down to builder or breaker feedback.

You've got builders, and you've got even before that, designers that are trying to get the damn thing to work in the first place, that's their main job. So if they're not security experts in the process, they can't really be blamed for overlooking some of the things that need to be factored in, because their problem is to actually make it work in the first place. But then you've the breakers on other side-

Joseph Carson:

And speed.

Casey Ellis:

Who take that thing... Yeah. And do it quickly, right? Then you've got the breakers on the other side, who their first instinct is to take that thing, tip it upside down and see what falls out. Yeah. The more that those two groups can basically cross pollinate their thinking, and the earlier in the process that can happen, I think the better, because ultimately it makes its way down into design, and that's when the real stuff starts to change, I think.

Joseph Carson:

It always reminds me, when I always have this discussion, it reminds me, back in university when I was doing COBOL programming, and at the end of a semester, and I was always excited. I got my COBOL program working. It was fantastic, and every time you typed in, it did all of the processes, and all the work. And when I demoed it to the lecturer, she came, she was like, "Oh, fantastic." And then what she did with her hands on the keyboard, she just went... And the program just crashed. It just came tumbling down. She just basically just put her hand on all the keys, just rubbed it, and my thing just crashed. And that was a big realization, is that, yeah, making sure you do a lot of validation.

Casey Ellis:

Yeah. And even consider the fact that people are going to do weird stuff with your app.

Joseph Carson:

Yeah. Unintended. They don't use it as you expected in many cases.

Casey Ellis:

Well, we do do a bunch of hardware hacking events, and they're oftentimes in person, if it's a prototype, or whatever else. They did one recently around voting equipment at the Election Security Research Forum, and that was no exception to other hardware things. You've got these product designers, and these people that have worked on building this thing, watching hackers come in and do the most bizarre stuff to their baby. It's like how the-

Joseph Carson:

It is, for me.

Casey Ellis:

How the hell would you think about that? Why would you do that? I'm like, "Exactly, that's what an adversary is going to do, and that's frankly probably what some of your users are going to do as well."

Joseph Carson:

That's what I loved about the recent episode we did with Sick.Codes, and we were talking about the John Deere tractor, and playing Doom on it, which is... fascinating. Making things unintended.

Casey Ellis:

I had the pleasure of working with Sick on some of the disclosure aspects of that, and just, he's a great storyteller as well, which I think helps explain what all's going on in the background there.

Joseph Carson:

And one of the things, the whole Secure By Design concept, it got me thinking about do we need to have start having the discussion around security warranties? We have product warranties, which is one thing which is getting more consistency, by getting into security warranties, is how long are you going to secure me for, when I use these products? So it got me once... I think some companies would...

Casey Ellis:

And even what the definition of secure.

Joseph Carson:

Yeah, exactly.

Casey Ellis:

Right? How do you actually define that? Because a warranty either gets paid out or it doesn't, so you need a bright line between secure and insecure, and those lines are pretty hard to draw. So yeah, there's a lot of work to be done on that side. Product liability, and the warranty piece that's come out with some of the guidance. I was part of the ONCD national cybersecurity strategy creation, which is where this kind of came from, and I'm a huge advocate of that idea. I think the implementation of it is going to be really kind of long-winded, and difficult to get right, but as a directional thing. I think it's the right way to pull, if that makes sense.

Joseph Carson:

I agree. I think it's something that, it's the path we need to go on, but it's going to be a lot of uphill.

Casey Ellis:

Maybe a bit of a windy one. Yeah.

Joseph Carson:

And a windy one, and maybe a few back... Rolling back down the hill as well.

Casey Ellis:

Yeah.

Joseph Carson:

It's probably a bit like Irish roads, to be honest. And next gets me into one of the big topics has been around, and the trend is generative AI, and artificial intelligence, or augmented intelligence, however we defined it. I just call it a batch job, to be honest... scheduled task...

Casey Ellis:

...Statements. Yeah. It's gone beyond that now quite a bit, but yeah.

Joseph Carson:

It has advanced. I do agree. It's got a little bit more intelligent, for sure. But what's happening around that when it comes to vulnerability disclosure side of things is you're seeing it being much more used in the vulnerability disclosure area, and maybe what's a bit of the future, do you see, as more, let's say, data scientists, and analysts, and generative AI developers get their hands on it?

Casey Ellis:

For sure. I mean, probably three prongs to that one. One is kind of the use of generative AI by people doing offensive security research, and there's a ton of that stuff going on. On the bad guy side, one of the things that's popped out is the time between a patch being dropped out in response to a CBE, and an exploit showing up on the Internet that actually abuses unpatched systems. That's shortened a ton over the past nine months.

Joseph Carson:

It's way short, yeah.

Casey Ellis:

And that's LLMs that are doing that, because they're very good at that type of thing. So, yeah, there's a lot of that. Probably the second part is the relationship between LLMs, in particular, generative AI and traditional AppSec and infrastructure security, because ultimately what you end up with is this kind of untrusted user right in the middle of everything. So, the architectural side of things is pretty wild, and wooly, frankly. I was just at the OWASP Global Asset Conference, and we'll talk a lot around-

Joseph Carson:

In DC, was it? Or that was... Yeah.

Casey Ellis:

Yeah.

Joseph Carson:

Yeah.

Casey Ellis:

Yeah, what's the top 10 look like here, and how do we think about the relationship between this new, and kind of different thing in terms of how it behaves, and these traditionally static parameters that we put around security when it comes to AppSec and infrastructure sec, and all that kind of stuff? So, that's a work in progress. But probably the big one, which you were kind of hinting at in the question there, is actually security of the LLMs themselves. So, how are we thinking about prompt injection? How are we thinking about hallucinations? How are we thinking about bias? What are the different categories of risk, and issue that we can start to zone in on to enable Secure By Design? Kind of harking back to the last topic in this area, because it's moving very, very quickly, and it's incredibly powerful. So this idea of that being a pretty important question to get some answers to at this point in history, there's a lot of work going into that. You can see that in the Biden EO. There's a lot of stuff in there around bias.

Joseph Carson:

Yeah.

Casey Ellis:

Yeah, that's right. There's a lot of things around safety, and privacy, and the downstream human, and potential societal impacts of AI. At the same time it's like, yeah, let's take advantage of this stuff, because it's competitive, and it's useful, but also it's very powerful. I almost view AI as a great power level shift in technology, if you sort of play that out...

Joseph Carson:

Acceleration. It gives a lot of non-technical people powerful skills that they've never had before.

Casey Ellis:

Yeah, and it's powerful in and of itself. So, I think what we've seen, we started working with OpenAI like 12 months ago. We actually started getting involved in this type of thing from a testing standpoint back in 2018, with social media networks, and the role of-

Joseph Carson:

A lot of those algorithms were in the social media area to begin with. That was what was behind the scenes, literally that's what they were leveraging.

Casey Ellis:

Yeah. Yeah. But the nice thing that ChatGPT did, was it kind of just dumped on the collective consciousness of the Internet like, "This is what we mean when we say AI." And that's good and bad. I think the misinterpretation, and the freaking out about that, some of it's productive, some of it isn't, but everyone's engaging with this is a real thing now, and I think the fact that that wasn't really true prior to that is a really good thing. So, generative, the generative AI red teaming village at DEF CON this year were involved in helping actually create the rubric, and the approach to testing there, working with OpenAI, and other kind of fundamental model vendors, and some of the different things that are popping up around it. Probably the biggest thing is that this is moving so freaking fast.

Joseph Carson:

It's getting to the point where the amount of focus, and effort, and engineering, and development time that's been spent in this right now exceeds everything else. It's just, it's accelerating beyond what we've seen in any area in decades.

Casey Ellis:

Yeah. And my Spidey senses go off whenever that happens, because generally, the size of the, "Oh shit," moment a new technology has from a security standpoint is proportional to how quickly it hits the market. You think about IoT, all of a sudden there's IoT everywhere, and then all of a sudden we realize, "Oops, we forgot to do a bunch of really rudimentary security stuff in that space." And all of a sudden you've got Mirai taking half of America offline. I think speed is the natural enemy of security when it comes to design, and I do know there's a lot of really good work going into trying to do this properly, and they are, to their credit, soliciting a lot of input from hackers, and from people that think like we do.

Joseph Carson:

I think what they've seen is they've learned their lessons from what they've seen in the past, and they want to make sure they're doing it right, as they continue forward.

Casey Ellis:

Yeah. That is a hot tip on... For folk, because it's going to be relevant. It's one of these things where it's important to at least understand what's going on in this space, because it's not going to slow down, and it's going to become relevant whether you like it, or not. And I think most people that are intrigued-

Joseph Carson:

It gets to the point where, when you take autonomous and you put AI together, and that's where I get really like, "Whoa." Things start... You take autonomous where it allows you to go, and do something in an automated way, whether it being logistics and transportation, cars, for example. Then you give it a self-learning algorithm in the background, which trains it. You want to make sure that that training can't be manipulated.

Casey Ellis:

Well, you want to make sure you design it properly, because you think about some of the conversation around machine learning, social media, and disinformation. Ultimately, the ML figured out that tribalism is a pretty good way to get people to click on an ad.

Joseph Carson:

Yeah.

Casey Ellis:

Right? So, it was actually the supervisory model that taught the algorithms to favor that type of thing, and there came a point where it's like, "Oh, that's what it's doing. We need to start to back that out, because that's not having a positive impact on society." At that point in time-

Joseph Carson:

Because a lot of the cases is, is that it's not accurate information.

Casey Ellis:

yeah. It's more that it can be abused really easily, and we've seen that. So, that's a real thing.

Joseph Carson:

So, are you good for time? One of the things I want to cover is around the regulation side of things, because I know for me one of the biggest security moments is the Cyber Resiliency Act. I was involved in the EOA Act and other regulations in the past, and one I've kind of stepped back from it is the Cyber Resiliency Act, because it's a bit scary. What's your thoughts around that side of things? Because it has created a bit of a debate in the industry, in regards to its initial direction.

Casey Ellis:

Yeah, no, definitely. So, I mean, the initial response to that is that there's a lot of stuff in there that I think is really good, and really important, and the fact that there is policy-driven pressure in the direction of good things. I'm always in favor of that. I think overall, my reads of the different versions of the CRA that are out there for comment at the moment, it's all directionally pretty good. There's a couple of pieces in there that have been a bit concerning, the 24-hour notification window on exploitive vulnerabilities. That's... If bad things happen on a Friday night, which is when they do tend to happen, it's not so much that you want to write policy to make sure everyone keeps their weekends. I mean, that's nice to do, but that's not the reason. I think it's logistically hard to do that well, in that shorter timeframe.

Joseph Carson:

I think it goes back to one of the things I've learned from it as well because back to the lessons we talked about when I was involved in GDPR, is that not all vulnerabilities are equal.

Casey Ellis:

Yeah.

Joseph Carson:

And this is the thing, is that it should be based on risk, based on the impact, based on the scope, and it should be really factoring that part in, because not all vulnerabilities are equal. So, why would you try to do something to accelerate, if it's something that's not urgent?

Casey Ellis:

Yeah, I do think, to the credit of the drafts that are out there right now, I do think observed exploitation is a pretty good modifier that gets you from a theoretical vulnerability, to something where you've got a risk that you can associate with it. So, that sort of thing I do believe is actually in there. I just think 24 hours is a bit too short.

Joseph Carson:

It is. Yeah. I remember we talked about, it was the 14 days in GDPR was about the disclosure, and then it went down to without undue delay. That was the term that was used in GDPR, and I think that it should probably go the same direction.

Casey Ellis:

Or possibly somewhere in the middle, because undue delay becomes pretty exploitable.

Joseph Carson:

Yeah, it's the guideline. I think they still go with the 14 days, roughly, depending on the risk itself. And I know the FTC, and other... In the US, they've went the same kind direction of bringing it down to this, like 48 hours, or 72 hours, and stuff like that. I think definitely, getting into realtime vulnerability disclosures, you'd be having to work around the clock, and it becomes a challenge. I think sometimes you need time to evaluate what you're doing.

Casey Ellis:

We'd probably end up with a similar problem to what we have with SBOM right now, too, because the other side of it is that these reports, I believe, are meant to go to ENISA. I think that's the right agency. And the idea there is that that goes out for dissemination to member agencies and all those different things. That's another aspect of it that I'm, frankly, I can understand the intent of that, but I'm not super comfortable with it, because it kind of violates one of the design rules of pre-patch vulnerability information management, which is don't send it where it doesn't need to go, and do your best not to aggregate it into one place. There's definitely some feedback that we've provided around that particular piece, but yeah, those are two points that are like, "Oh, that could probably change and be better." Overall, I do want to bring this back to a positive. I think the fact that it's pushing in the directions that it's pushing, the fact that vuln disclosure is in there in the first place is awesome, for example.

Joseph Carson:

I think it's fantastic. I think that the attention that vulnerability disclosures had in the last couple of years is amazing, especially when you get Biden tweeting about exploits and vulnerabilities in the tweet. When he's using those words in the tweet, I think that just says a lot.

Casey Ellis:

It is pretty wild. It's like, "Wow." Yes, we've come a long way, which is good.

Joseph Carson:

We have indeed.

Casey Ellis:

And again, still a lot of work to do, but we've come a really long way over the past 10-15...

Joseph Carson:

Casey, it's always amazing catching up with you, and your knowledge, and expertise in this area is second to none. It's impressive, and it's-

Casey Ellis:

I appreciate that.

Joseph Carson:

Having you really driving this industry, where it didn't exist in the first place, until you came along and really seen that this was an area of improvement, and definitely an area that needed to be really established as a proper platform, I think, has really changed the world, and made it a safer place. Any closing thoughts? Anything you want to leave the audience before we close out?

Casey Ellis:

Oh, you got me a little emo with that, mate. I do appreciate that. It's very kind, and I will call out, it's been a cast of thousands, honestly, over the years. So definitely, I'll take credit for the bits of it that I've done, especially early on in the piece. But I think the thing that's been phenomenal to watch is to see the body of people that are working on this kind of broader system level problem expand. Going into the policy village at DEF CON and not knowing 80% of the people in the room, it's like that is a good thing, because ultimately, we're at a point where there's younger generations. This would actually be my parting point around that. The younger generations that are coming in to this, they're the ones that are ultimately going to inherit this problem, and they're viewing aspects of it in a different way to us old farts at this point.

Joseph Carson:

Yeah. I think with the gray here, I'm excited.

Casey Ellis:

Technologically old. We've all been working on these things for a long time.

Joseph Carson:

We have.

Casey Ellis:

So the idea of like, of what we know, I think the mandate, and the thing that's important for us at this point in time is to work out how to share the parts of the wisdom that we've built up over the years with folks that are coming in, so they can-

Joseph Carson:

They're really going to make the difference to the future. The ones that's basically going to take... Yeah, it's the ones that's basically taking the backs off, and they're continuing it, yeah.

Casey Ellis:

We know where at least some of the tripping hazard are, so we can help with that part. But then also for them, jumping in, and actually putting your hand to the plow and figuring out how you do it differently. Take what we've done over the past 10 years, and make it better, because it's not done, it'll never be over.

Joseph Carson:

Yeah. This is something that's going to continue, and we just need to make sure we have enough fuel, and energy to keep going, and ultimately to get it where it becomes something that everyone can have, as something that they can benefit from.

Casey Ellis:

Absolutely. Absolutely. So, thank you for the time, Joseph. I appreciate it.

Joseph Carson:

It's always impressive. Thank you. So, for the audience, Casey, amazing. Really the foundation and getting the bug bounty, and ultimately making sure that we fix as much of the problems out there as possible, so really helping the world become a safer place. So again, this is the 401 Access Denied Podcast, bringing knowledge, information, trends, and to news to you, ultimately to make sure that you're as educated and knowledgeable as possible, and... Or give you areas of interest, and maybe something of a path that you want to go down, and learn more about. So, absolutely. So, Casey, we'll make sure that the audience gets ways to connect with you, and find out more about disclose.io. We'll put all of those in the show notes.

Casey Ellis:

Absolutely. And folks that want help connecting with hackers to make this stuff safer, bugcrowd.com. For folks that are looking to understand, implement policy, and just start there, disclose.io, and yeah, we'll put other links in the notes, or have that conversation after.

Joseph Carson:

Will do. Fantastic. So, everyone, tune in, 401 Access Denied Podcast. Take care, stay safe, and all the best. Thank you. Goodbye.

Casey Ellis:

Cheers.