Joseph Carson:
Hello everyone, welcome back to another episode of 401 Access Denied. It's your podcast every two weeks. My name is Joseph Carson, and I'm the chief security scientist and advisory CISO at Thycotic, based in Tallinn, Estonia, and I'm really pleased to be here. We've got an exciting discussion today all about critical infrastructure. I'm joined again with my cohost, Mike. Mike, do you want to give us an update and what we're expecting today?
Mike Gruen:
Yeah, Mike Gruen, VP of engineering and CISO here at Cybrary. Today we're going to be talking about critical infrastructure with Ben Miller from Dragos. Ben, do you want to jump in and tell us a little bit about yourself, company Dragos and what you guys do?
Ben Miller:
Yeah, sure, absolutely. Hey, everyone. I work at Dragos. Dragos is focused on critical infrastructure, ICS specifically, industrial control systems. I've been going at it for about five years now. We have a variety of different things that we're known for, I think. Certainly Dragos intelligence produces a lot of quality reports; some of them are public and have gained, I think, a bit of a reputation for Dragos.
Ben Miller:
We also have our technology, the platform, where all those indicators and detections get fed back in. I don't do any of those things, though. I work on the services side. So my team delivers professional services: penetration testing and assessments. And on the blue team side, our fly-away teams are doing incident response, threat hunting, and incident response plan reviews. All of that is centered around industrial control systems. And it has been a crazy journey.
Ben Miller:
I joined from the electricity sector. I worked at a regulator, NERC, known for NERC CIP. I actually wasn't on the regulatory side; I was doing some of the voluntary information sharing under the E-ISAC, but that's certainly my lineage there. And before that I was at an asset owner, Constellation Energy. So it's been very interesting gaining perspective from other verticals beyond electricity, and just seeing where everything stands. It's been a fun ... so far.
Joseph Carson:
That's great. I think one of the things for me is we tend to see a lot more in the news recently about critical infrastructure being targeted. It wasn't so much the case 10 years ago, and probably barely existed 15 years ago, when we heard very little about critical infrastructure being targeted by cyber attacks. What has changed in the last 10 to 15 years that's really brought it to the forefront? What's causing these companies to become victims much more often today?
Ben Miller:
It's a great question. I want to say everything and nothing at the same time. I mean, before we started, you mentioned the S word. That was certainly an event that grabbed everyone's attention, and, I want to say, caused a sequence of events where folks realized they didn't have certain capabilities, and they started developing those capabilities. That really started manifesting about four or five years later, when we started seeing more advanced attacks on critical infrastructure, culminating in Ukraine 2015, Ukraine 2016, and the Trisis/Triton attack that occurred in 2017. And that continued on.
Ben Miller:
I think that still didn't change the perspective of organizations. It was very much a "well, Russia is not attacking me" sort of construct. But then we started seeing ransomware and, let's say, unsophisticated attacks across various areas, which really showed how exposed these environments are and how nobody wants to be the one caught in that position. And that's led to a heightened interest, to the degree that we have executive orders that talk about operational technology. They actually use the words "operational technology," so the degree of focus has really heightened over even just the last 12 months. And it's been interesting watching that attention slowly rise.
Mike Gruen:
That's good. Then I think that leads to the question I had, which was: is it really targeting? I mean, we know that there's been an increase in targeting, but I also feel like some of it's just happenstance. A lot of it isn't really targeted; it's just these criminals finding different things that are open and vulnerable, maybe not even realizing what it is that they're attacking, or what the implications of what they're doing are really going to be. Is that also causing that heightened awareness?
Ben Miller:
I think it's both, for sure. So years ago we were talking about Shodan, and John Matherly has done excellent work with Shodan in presenting that problem set to the public. I think everyone was like, "Oh, yeah, that's okay." They didn't accept it as okay, but they didn't necessarily do anything about it, or they didn't have the right tools to do anything about it, because often, when something showed up on Shodan, it was an accident, it was a misconfiguration. It wasn't like it was their policy to put it online.
Ben Miller:
So it became a hard problem to solve. With the attacks that we're seeing that look very opportunistic, ransomware that moves in from the internet, or just any sort of unauthorized access that moves in from the internet because RDP was exposed or a device itself was on the internet, that's certainly drawing more and more attention to it. That barrier to entry, I think, has been reduced quite a bit, and there's a lot more knowledge around these systems on the industrial side than there used to be. But you also have the nation states that are actively building their capabilities, as I said. They're doing a lot of reconnaissance, they're subtle, and they're building their toolkit along the way at the same time.
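For listeners who want to check their own exposure the way Ben and Joseph describe, here is a minimal sketch, not something from the episode, using the official Shodan Python library. The API key, netblock, and port list are placeholders you would replace with your own.

```python
# A minimal sketch of how a defender might check their own address space for
# exposed ICS/remote-access services via the Shodan Python library.
# The API key, network range, and service list are placeholders.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"          # placeholder
OWN_NETBLOCK = "net:203.0.113.0/24"      # documentation range; replace with yours

# Services the conversation mentions as commonly exposed by accident:
# Modbus (502), RDP (3389), VNC (5900).
QUERIES = ["port:502", "port:3389", "port:5900"]

api = shodan.Shodan(API_KEY)

for q in QUERIES:
    try:
        results = api.search(f"{OWN_NETBLOCK} {q}")
    except shodan.APIError as err:
        print(f"Query failed for {q}: {err}")
        continue
    for match in results["matches"]:
        # Each match is one internet-exposed service Shodan has indexed.
        print(f'{match["ip_str"]}:{match["port"]}  org={match.get("org", "?")}')
```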
Joseph Carson:
Absolutely. It reminds me of something I've noticed as well: it started off with governments and nation states doing intelligence gathering, reconnaissance, developing their offensive capabilities and preparations. And then, of course, we've got organized crime, who decided they wanted a part of it and got into, basically, the criminal side of things, where of course you start seeing ransomware.
Joseph Carson:
The accidents started happening when they moved into these affiliate programs in the past 24 months. Rather than the criminals doing it all themselves, they offer it as a service, and then you've got the smaller gangs who are basically just doing opportunistic attacks. And unfortunately critical infrastructure is becoming a secondary victim of ransomware, just because the attackers want to get the money, the profit, out of it.
Joseph Carson:
The thing it reminds me of, though, is a change I've seen. Many years ago I worked in the maritime sector quite extensively, and it reminds me of a story. As you mentioned Shodan: we did a scan one time and all of a sudden an IP range came up that we should not have seen. We found that one of the ship systems, an engine control system, was reachable from the public internet, and we were trying to figure out why that was happening.
Joseph Carson:
And ultimately, what we ended up finding, after investigating and checking why the system was showing up in the public domain, was that there was a captain on the ship who decided they didn't want to be on the bridge 24/7. They went to their cabin, and they wanted to be able to speak with their family. Back then they were, of course, using Skype, so you could make phone calls back home. And what they had done was set it up so they could also see the navigational system from their cabin.
Joseph Carson:
So they got a big long cable, connected it to the laptop, and created a bridge. And of course, this was around 2010, 2011. There was a crew welfare law that said that crew on these long voyages had to have some type of connectivity, so they could browse the internet, send emails, talk to their family. And this captain, unaware or intentionally, had created a bridge between, basically, the bridge of the ship and the public internet, using their laptop.
Joseph Carson:
And I think this is what we're getting at when we talk about Shodan, and when we talk about OT and IT. There is a convergence of the two coming together, where previously OT was more, let's say, segregated. Stuxnet, of course, brought into visibility that these segregated systems aren't really as protected as assumed. Do you think this is causing much more of a problem for organizations, as OT and IT converge and the systems become more publicly connected than they were previously?
Ben Miller:
I think, in general, they're just becoming more and more interconnected, and I don't know if a lot of folks appreciate what that leads to over time. You can have a great deployment, whether it's a ship, a SCADA system, or DCS software for a generation plant. After that deployment, more things get added. And it's not like they get reduced over time; it only gets more complex, with more connections.
Ben Miller:
And one of the things we've been talking about more and more at Dragos is just the atrophy of security controls over time. You can have the best policy, but we're doing assessments, and there have been multiple instances where we're doing firewall reviews and the firewall has that any-any permit rule, and it's commented "temporary" or "test," right? There's nothing more permanent than a test. So that continual atrophy of the security program definitely happens. And I think there's a lack of oversight in many ways.
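As an illustration of the kind of review Ben describes, here is a minimal sketch, assuming the firewall policy has been exported to a simple CSV; the column names and file name are hypothetical, not any particular vendor's format.

```python
# A minimal sketch, not a vendor tool: flag permit rules that are any-to-any or
# whose comment suggests they were meant to be temporary. Assumes a CSV export
# with (hypothetical) columns: name,action,src,dst,service,comment.
import csv

SUSPECT_WORDS = ("temp", "temporary", "test", "tbd")

def audit_rules(path: str):
    findings = []
    with open(path, newline="") as fh:
        for rule in csv.DictReader(fh):
            if rule["action"].lower() != "permit":
                continue
            any_any = rule["src"].lower() == "any" and rule["dst"].lower() == "any"
            stale_note = any(w in rule["comment"].lower() for w in SUSPECT_WORDS)
            if any_any or stale_note:
                findings.append(rule)
    return findings

if __name__ == "__main__":
    for rule in audit_rules("ot_firewall_export.csv"):   # placeholder file name
        print(f'{rule["name"]}: {rule["src"]} -> {rule["dst"]} ({rule["comment"]})')
```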
Ben Miller:
The traditional enterprise security apparatus generally does not have visibility into the OT side. And that's likely because IT screwed up before and OT doesn't trust them anymore. So when you're talking about safety and uptime, where hundreds of thousands or millions of dollars per hour rely on uptime, you're not going to accept an apology for something that happened 10 years ago.
Ben Miller:
You're going to start limiting their access as much as possible, and call that the demarc, a true DMZ, but not against the adversary: against the other teams within that company. And it becomes a really challenging environment. I think the people problem and the organizational problem can be much more of an amplifier than the technology that we have today in a lot of these environments.
Mike Gruen:
I mean, I think that's human nature in general. You look at any industrial setting where there are safety controls, but the work needs to get done and corners get cut; there's always that give and take between the people who want to make sure things are secure or safe, and the people who say, "Well, time is money, and if we follow these exact procedures, there'll be downtime, there'll be whatever." So I guess it's not that surprising that it would show up between those two groups, especially when we're talking about industrial controls, where resiliency is important, uptime is important, all these things. I have an IT background; we make mistakes, and sometimes things go down. That comes with the role.
Ben Miller:
Yeah. I think one of the things is there's a big difference between IT and OT. IT has a life cycle of three to four years, typically, when you're rotating equipment; you've seen maybe a maximum of seven years, where people try to squeeze as much out of it as possible. But you tend to be forced to update when new technology, new hardware comes out. In OT, however, I've seen systems 20, 30 years old that they just don't want to change, because it works. And to change it means changing not just one component; it means changing the entire production line or something.
Mike Gruen:
And potentially breaking. I mean, like you look at hospitals, you look at any number of systems, it's the same problem, right? Nobody wants to update because they're scared.
Ben Miller:
Well, it's even worse than that. Support contracts, right? There are support contracts; you buy a turbine that is generating electricity. You don't separately... It's not à la carte where you get to choose which computer is going to run it. It is a combined, integrated system. And so you can't necessarily just swap out that computer without breaking your warranty, breaking the support contract that you have there. So it is very much a... I think supply chain is a big topic of late.
Ben Miller:
When you have instrumentation vendors and OEMs and partners that are needed for that support, it becomes a much more complicated picture. You have multiple stakeholders that you need to work with on maintaining the latest version of Windows or a controller, and that's a challenge, especially when you're looking at, to your point, a lifespan of 20 years for the equipment. My laptop is four years old, and it's barely working right now. It's a challenge.
Joseph Carson:
Because one of the things... You go ahead, Mike.
Mike Gruen:
I was just going to ask: is security becoming part of the support contract? Is that becoming a thing now? Are these vendors being asked to say, "No, we're not just going to support it, we're also going to patch it and do security updates and stuff like that"? Is that becoming part of the norm? Or are we still a ways off from that?
Ben Miller:
Yeah. So there's certainly a base level of "we're going to provide you security and patches." The OEM vendors absolutely have incentivized security services within their portfolios, whether that is expediency, rolling out those patches faster, having environments set up at the vendor where they do their testing and staging before they roll out, doing that integration testing, through to a variety of security services very similar to what Dragos does. The challenge with many of those OEMs is that you walk into a refinery and it's not just one vendor, it's a lot of vendors. So it's a challenge of death by a thousand paper cuts across all these vendors, and how you put your arms around it.
Mike Gruen:
And I'm sure you have a staging refinery, right? Like a ... refinery just standing there, waiting for you to do whatever.
Ben Miller:
There's all kinds of ... improvement. If you're willing to pay for it, absolutely, there is a very similar analog, or you have your digital twins and a lot of other areas that have been going down that path from-
Mike Gruen:
I was actually being facetious. I thought for sure you'd be like, "No, nobody's going to build a whole one."
Joseph Carson:
But they do. Years ago I worked in the power station you're familiar with, the one I've mentioned a few times. What ends up happening is it's not just a test environment, it becomes an auxiliary. Which means that if they need to do some maintenance on the main site, this becomes an auxiliary for providing, let's say, excess energy. So when the main site is offline... they'll have multiple engines in the power station, basically two engine centers, and command and control for everything.
Joseph Carson:
But what happens is, it doesn't stay just a test; it becomes an auxiliary, it becomes a backup, it becomes a maintenance site, rather than just somewhere where you test things. So for example, even during high-usage times, say it comes to Christmas and people put their Christmas tree lights on, they need that excess energy to cover it. So rather than just being a test, it becomes an auxiliary type of site for those high-usage times or when they need to do maintenance on the main site, and this gives you...
Mike Gruen:
Shouldn't be that surprising, because yes, I mean, that's the same thing we used to do back in the old days, when it was much more expensive to run servers and stuff. We'd have that same stuff. So that's awesome.
Joseph Carson:
So Ben, one of the things that keeps coming back to what you were mentioning earlier, something I became familiar with on the maritime side around 2012, 2013, is, to your point, a lot of the OEM contracts. What ended up happening was that you've got many vendors coming in: you have the engine vendor, you have the comms, you have the satellite vendors, and they all come together into this whole integrated system.
Joseph Carson:
But what I started seeing was a shift to a model where you buy the hardware, but you don't own the data, meaning that you have to send the data back to the manufacturer. And we've seen this, of course, with autonomous vehicles, we've seen it with home televisions, and so forth. You're starting to see the actual manufacturers saying, "You own the hardware, but we maintain ownership of the data, therefore you need to send that data back to us."
Joseph Carson:
Meaning that these systems have to have an outbound connection to the internet to get that data back to the manufacturer. And that gets into the support agreements, into diagnostics data, into continuous improvement. Is this also fueling some of the security challenges that a lot of these critical infrastructure organizations are starting to have?
Ben Miller:
Yeah, it's a great point; there are so many different angles I could approach it from. If you were to look at a standard model of network architecture within an industrial environment, it's usually stacked, with the enterprise or the internet at the top, and then it gets more and more trusted as you get down to the actual I/O at the very bottom layer. And that looks very clean-cut and you're like, "Okay, that's good." And then you have these maintenance and diagnostics networks, which are third party, coming in directly from the side, directly into the controller level, and then you'll have the comms on top of that if it's a larger system, right?
Ben Miller:
So you mentioned ships: if there's a port involved, it has some sort of connectivity to the ships. That's another line coming in over radio. So you have all these complex systems, and yet, when the enterprise architect looks at it, they often don't see those auxiliary systems, and they certainly don't own them. And so the aspect of diagnostic and maintenance networks is absolutely more and more pervasive among the vendors, and it has value too.
Ben Miller:
So it's not so much "Why are you doing this, just cut it off." They're saving millions of dollars in maintenance costs by getting ahead of problems through the telemetry that's coming from these systems. So there's real value in doing exactly that. But then you look at Kaseya, you look at some of the other supply chain attacks, SolarWinds. There are two OEMs that we know of that were using SolarWinds to track their customers' statuses, completely black boxed, so the customers didn't know these vendors were using it as part of their diagnostic and maintenance network; it was just part of the service delivery, right?
Ben Miller:
So there are all these challenges when you start peeling back the layers of third parties, the networks and software used there, and the visibility to understand what's going on in these environments. We've seen PowerShell employed by third-party vendors doing scanning activities within customer environments, outside of maintenance windows, all sorts of things that you don't want to hear about in one sentence, happening within these environments. And the asset owners are largely oblivious to them, because it didn't have a direct impact, it didn't actually do anything harmful, and they don't have logging, they don't have visibility within the network in these environments. So they simply are oblivious to it. Well, that's optimistic. Very simple things can have a big impact on their security posture. And it's not even necessarily a high capital cost to implement those sorts of controls.
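One simple, low-cost control of the kind Ben hints at: if PowerShell script block logging (Windows event ID 4104) is enabled and exported, even a small script can flag vendor activity outside an agreed maintenance window. This is a minimal sketch; the CSV export format, file name, and window times are illustrative assumptions, not a specific product's.

```python
# A minimal sketch: flag PowerShell script-block log events (Windows event ID 4104)
# that occur outside an agreed maintenance window. Assumes the events were exported
# to CSV with columns TimeCreated (ISO 8601), Id, Message -- illustrative only.
import csv
from datetime import datetime, time

MAINT_START, MAINT_END = time(1, 0), time(5, 0)   # e.g. 01:00-05:00 local, placeholder

def outside_window(ts: datetime) -> bool:
    return not (MAINT_START <= ts.time() <= MAINT_END)

def suspicious_powershell(path: str):
    with open(path, newline="") as fh:
        for event in csv.DictReader(fh):
            if event["Id"] != "4104":            # script block logging events only
                continue
            ts = datetime.fromisoformat(event["TimeCreated"])
            if outside_window(ts):
                yield ts, event["Message"][:120]

for ts, snippet in suspicious_powershell("ot_powershell_events.csv"):  # placeholder
    print(ts.isoformat(), snippet)
```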
Joseph Carson:
One of the things I've noticed across a lot of critical infrastructure is that safety is the top priority. When a helicopter is landing at a refinery or on an oil rig, everything else is off; communications are purely dedicated to the helicopter landing. When you put lives at stake, that's the priority. The same when I went [crosstalk 00:23:04] into the power station.
Joseph Carson:
I was worried about IT security noticing me or identifying me, but no, it was all about safety: what not to touch, where to go, making sure the safety person was with you physically the whole time. Should security be at the same level as safety? Should it, because it has cascading effects? If security is impacted, a lot of these safety systems are dependent on the security side of things. Should it be at the same level, or at least prioritized up there? Because right now, what I find is that it's not.
Ben Miller:
Yeah. All right. It should be at that same level. I think the challenge in some ways is the culture from a traditional cybersecurity perspective. They feel they're top dog: "I'm cybersecurity, I got this, guys. Give me space to work." They will always be trumped by safety. So they need to work on identifying how they can feed into the safety program. That's the way you can have a large impact there, because at the local facility, if you're a large refinery or another facility, they absolutely have a culture of safety.
Ben Miller:
If you're walking down the stairs and you don't have your hand on the handrail, someone is going to call you out on that and write you up, and it's going to be days of paperwork. Nobody wants that, but that is the safety aspect they've ingrained in their culture. They have emergency operations centers within the facility to handle weather events, environmental challenges, explosions, whatever is present in that facility. And then you turn to the cybersecurity team and you're like, "Oh, I'm going to check the logs."
Ben Miller:
Why not use the EOC design that's already been built? You don't need a new incident response plan; you need to tie your incident response plan to those emergency procedures. We've done exercises where, well, my analyst was infuriated, because they were just going around in circles, but in the EOC he just looks at them: "Guys, you're the EOC. All those binders on the wall tell you exactly what you need to do, just go do it."
Ben Miller:
And then the light bulb went on and the cybersecurity guys started talking through the EOC process. The next exercise they did, a year later, was entirely different, because they were melding those two different worlds together and accepting that, "Hey, there's already something that exists. We don't need to reinvent the, let's say, 800-61 to feel good about our incident response plan."
Joseph Carson:
Mm-hmm (affirmative). I completely agree. I remember many of those calls were [inaudible 00:25:58], and even on conference calls, the first two to five minutes was basically a safety notice: you shouldn't be walking while on the phone, you shouldn't be driving in your car; if you are, follow those safety rules. That's pure safety. And I remember that quite often; every single time you'd have those first two, maybe five, minutes, depending on the company, you'd have a safety notice about, to your point, handrails, opening doors, walking while on the phone. You weren't allowed to walk while on your telephone; you had to be sitting, basically, in a static place before you were allowed to attend a conference call.
Ben Miller:
I think it's underappreciated how painful it is, before a meeting that you're hosting, to come up with a safety tip that hasn't been done yet. You hear all kinds of things, like, don't stand on a chair that has wheels on it, that's not safe. Really scraping the bottom of the barrel, because you have nine meetings that day and you have to come up with five of them for the meetings you're hosting. But that is the level of culture that electric has, and oil and gas, and critical manufacturing; it's all very similar, and I think the degree of focus they put in there is underappreciated.
Mike Gruen:
So I think what you're saying is you've just coined SafeSecOps, right? Like safety, security, ops, they just need to all come together. It's the whole new...
Joseph Carson:
But definitely, this is where we need to be moving. It's all about relating ourselves to the impact: what is the impact, is it a safety impact, is it a financial impact? Because I think for too long, on the security side, we've been focusing purely on our own world and our own style of approach. We definitely need to make that convergence: we need to listen to the business, listen to the critical infrastructure teams, and understand how they measure things. And we need to tie everything we do in security into that part of the business, whether it's the navigation systems or, let's say, emergency rooms in a hospital being impacted if the computer systems are offline.
Joseph Carson:
Another point that was on my mind earlier: what I've seen in the last probably 5, 10 years is much more commercial-based systems being used, with the critical infrastructure sitting on top of them. Prior to that, probably 15, 20 years ago, it was much more proprietary; they were building their own pieces, and you basically had to be a specialist to understand how it all worked.
Joseph Carson:
I've started seeing this convergence of commercial-based operating systems with the critical infrastructure components being built on top of them. Does that mean you don't necessarily need to have all the knowledge and be a specialist in critical infrastructure, because a lot of those new commercial systems are available, whether it's a Linux-based system that's been slightly modified? Is that also impacting the ability of attackers, nation states or cyber criminals, to be more successful, because they're attacking commercial-based systems as well?
Ben Miller:
So I feel like we've been using commercial-based systems as underpinnings for many of these technologies for so long that it's a hard question to answer, right? Going back to a lot of the older systems, the RTOSes, real-time operating systems, were using QNX, which is owned by BlackBerry, and a lot of that underlying stack, back in the early 2000s, was very esoteric and mystical in that regard. I don't think it is as much anymore. There's security that we've gained from actually moving to Linux-based systems, or custom *nix solutions, or even in some respects Windows.
Ben Miller:
I have more trust in that than in some person at an OEM who built their own operating system. And so that actually gives me a little bit more confidence in the resiliency. There's a lot to be said about software lifecycle and third-party solutions. A vendor could easily buy the network stack that they're using, and they're relying on the security of that network stack. Historically, those have not been highly vetted, high-security sets of packages, especially on the industrial side, the stuff that's tailored towards industrial. Security was never considered as part of the requirements. It was always assumed that this device can talk to that device, and nobody else has access to it.
Ben Miller:
So why do I care about authentication? I'm reinterpreting serial communications, throwing them into IP encapsulation, and off to the races we go. That's changing slowly, but it's still a very much present artifact that we have in the industry in the software itself. You had mentioned consequences earlier as well, and there's a difference in the conversation when you talk to the engineer at the facility versus a cybersecurity professional. The engineer at the facility is going to describe how it was designed and how it operates.
Ben Miller:
The cybersecurity professional is going to talk about how they can manipulate that to do something unexpected. And the response back from the engineer is, "Well, we didn't design for that," right? That's correct. And there's a lot of discussion happening there. I think it's underappreciated how smart these engineers are. If you give them a challenge, like, "Hey, I'm going to try and manipulate this," they can actually change the logic in their controllers to at least flag that sort of setting, a high setpoint or whatever it is, and bubble it up to the system operators and the engineers.
Ben Miller:
And we're at the very beginning of that journey. I feel like it's been over 10 years at this point, and yet, we're still at the very beginning of being able to influence a lot of those discussions.
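To make the earlier point about serial protocols wrapped in IP concrete: a protocol like Modbus/TCP carries no authentication, integrity, or encryption, so anyone with network reachability can read and write values. Below is a minimal sketch, not from the episode, using the third-party pymodbus library against a lab simulator you own; the IP address and register numbers are placeholders.

```python
# A minimal sketch illustrating that Modbus/TCP requires no credentials:
# anyone who can reach the device can read and write registers.
# For use only against a lab/test simulator you own; values are placeholders.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.10", port=502)   # documentation-range IP, lab only
if client.connect():
    # No authentication was required to get this far.
    rr = client.read_holding_registers(0, count=4)
    if not rr.isError():
        print("holding registers 0-3:", rr.registers)

    # A write is just as unauthenticated; on a real process this could change a
    # setpoint, which is exactly why visibility into issued commands matters.
    client.write_register(10, 1234)
    client.close()
```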
Joseph Carson:
So what are some of the impacts that you've seen? The ones that I've looked at have been from many years of DEF CON and Black Hat, listening to a lot of the talks related to ICS. I've also seen a lot of the research papers, and one that comes to mind is around, I think, a wind turbine. When we talk about kinetic impacts, a lot of times in critical infrastructure you'll have moving parts, and an attack can cause some physical damage.
Joseph Carson:
And I remember one where the wind turbine had what was an emergency... It was a safety feature for a hard stop, but if you did it multiple times, hard stop, turn it on again, hard stop, you could actually get to a point where you completely damage the turbine itself beyond repair, where you'd have to completely rebuild or remove it. So what have you seen when cyber attacks happen to critical infrastructure? What are some of the impacts that you have seen and can share with us?
Ben Miller:
All right, yeah, I remember that talk. I think it was individuals representing Georgia Tech who gave it. And prior to that, I actually talked to a wind farm operator and was questioning them. This goes back to the fact that sometimes the engineers are more clever than you realize. On this particular wind farm, on this particular turbine, to be very specific, they had manual overrides in there, so it couldn't have been tripped repeatedly. There were manual systems where you'd need to go there physically and move something to reset it.
Ben Miller:
So there's always that aspect of: just because the computer says it's doing something, it doesn't mean it's actually doing it, which is the great part of leveraging your facilities, your engineering teams, to great effect. And that means it's a lot more challenging to have an impact. I think the large impacts that we've had have been fairly public, whether it's the refinery trip, which in fact involved the safety systems, by the way: the refinery tripped because the safety system itself, the one that's there to safeguard the process by catching an unsafe event and shutting it down, was directly manipulated and impacted in the Trisis attack.
Ben Miller:
The Ukraine 2015 and 2016 attacks impacted upwards of 220,000 customers for a matter of hours, basically the windshield time for a person driving out in their bucket truck to the substation to manually put the breakers back online. A lot of the other cases that I'm aware of are near misses, or the question of "was it cyber?" The fire systems in a data center triggering twice in two days: "What was that? Was that cyber?" Then doing the walkthrough, realizing there was a phone line connected in there, and doing the analysis of whether it was cyber; it didn't end up being cyber.
Ben Miller:
Those cases are more often the norm than not. Ransomware is definitely a big one. We are getting near misses where it took down the facility, but the one facility I'm thinking of was already in a maintenance cycle. So they weren't actually directly impacted, other than needing to restore within three days to get out of the maintenance cycle and not have the pressure of that. So there are a lot of those near misses that occur pretty regularly.
Ben Miller:
Ransomware is a big pain, but it often doesn't have a direct operational impact, in many cases. If you're an operator managing a SCADA environment, you don't necessarily need real-time visibility. You're getting visibility on the order of minutes, generally, across a large region like a city, whether it's water, power, et cetera. It's not a video game where the system operator is controlling the entire thing the whole way. They're in the loop, but they're there to catch things that go wrong and to continually maintain it.
Ben Miller:
And these are high-redundancy environments. These environments are assumed to be very simple, but they have redundant networks, and then they have a backup control center along with that, which also has redundant networks and multiple failover mechanisms. So it starts to add a lot of complexity, and certainly ransomware can have an impact across all those sites. But it depends on the security architecture, and how you're handling credentials is in many ways what leads to those sorts of impacts.
Joseph Carson:
One thing I was interested in as well: I do a lot of digital forensics and incident response. And in the past, what I tend to find during an investigation, looking in on the network, is that you might be investigating one incident and all of a sudden you find that maybe six months ago there was another attacker on the network who maybe didn't do anything, or maybe they sold the access to the next attacker. I was interested in the watering hole research that you wrote the paper on, released the blog on.
Joseph Carson:
I was interested in that. When you're doing, let's say, incident response or penetration testing, how often do you find that there are multiple attackers on the network, some with motive and some without? How often has that occurred? Because it was a really interesting paper when I read it.
Ben Miller:
That's exactly it. So it doesn't happen too often. That said, it's happened a couple of times so far this year. But it's the minority, for sure. The context around that for your audience: Oldsmar is a water treatment facility in the lower part of Florida. In the February timeframe, the city disclosed that they had an event where somebody gained remote access and tried to put a chemical into the water supply by using the HMI, the human machine interface, which is really just a diagram of the process where you're able to click a button, click another button. I believe it was a very caustic chemical. The operator saw it as it happened, and just said, "No, don't do that."
Ben Miller:
And so they were doing their job; it's one of the components of the process. Brittle, if that's what you're relying on from a security perspective, but it did prevent any sort of real damage in that regard. At the same time, when that public release happened, and the sheriff and the mayor walked through the dynamics and timing of it, our intel team started looking at the data from various sources that we have. We started piecing together that, that morning, a host there went to a website that led to a series of essentially exploit-kit-type behaviors: fingerprinting of the workstation, continual redirects, an interesting sort of dynamic, and then it stopped.
Ben Miller:
And it's like: was there an exploit that was actually delivered, that actually led to this event? Was it much more complicated than we thought it was? Or what's going on there? So the intel team focused on it for a good month, just pulling it apart, and what we came to is that it was fingerprinting workstations for use in other campaigns, and it didn't have any direct relationship to the attack itself. But it had all the hallmark signs of something more sophisticated.
Ben Miller:
The watering hole website itself, the victim website, was a construction company that focuses on water facilities in Florida. And so the asset owner, Oldsmar, went there; I'm sure it was one of their suppliers or potential suppliers, and that's what led to the series of events that cascaded down. It seems like an odd thing, but if we had more visibility into that entire campaign, all the victim watering hole websites, there's probably a diverse group there, and the continual theme would be that they had some sort of WordPress plugin or some exploit that was consistent among them. So what seemingly looked like a targeted watering hole attack was simply a watering hole attack to grab fingerprints of workstation browsers.
Joseph Carson:
So reconnaissance, gathering, basically, for maybe a later campaign or something that they're just-
Ben Miller:
Yeah, it's definitely a possibility. And that's where the [crosstalk 00:42:55] and we were excited for a while, potentially seeing something that was much more nefarious than, say, somebody finding an Oldsmar host or TeamViewer client on Shodan, or just brute forcing those credentials. But at the end of the day, all signs are pointing towards something like that.
Joseph Carson:
So from your perspective, are we in a good place? Or do we have a lot of work ahead of us? I mean, it sounds like, with a lot of [crosstalk 00:43:31]-
Ben Miller:
... those two things aren't necessarily mutually exclusive. We might be in a good place with a lot of work to do, I think.
Joseph Carson:
Good. So, for me, from what we're seeing, it's going to get worse, I think, before it really becomes a priority. I think the executive order that was recently issued is a realization at the top of government that this is a priority. They do have visibility, and they're now putting together task forces, bringing in specialists, gathering experts at the top level to observe, to watch, and to decide on a course of action.
Joseph Carson:
I think that's a good starting point. My fear is that there are a lot of criminal gangs out there getting safe havens from certain nation states, and they will continue to operate. Until some type of collaboration holds those governments accountable, my fear is that it's going to get worse over the next year or two before there's a reaction, before this is really prioritized and, let's say, security is invested in as a safety measure. Any thoughts, or what's your view on that?
Ben Miller:
So I certainly think that policy at the US government level and other government levels, coming to agreement on what is unacceptable, is something that hasn't happened yet, could have happened a long time ago, and can help deter some of this; it certainly won't stop it. Dragos is tracking 15 activity groups that focus on critical infrastructure. That number is not going to go down; it's not going down to zero. That cat is out of the bag, and we're never going to recover that.
Ben Miller:
And so now it's a question of how much damage can be done while we work on more controls. Where Dragos is focused is visibility: understand what's going on in your environments, understand who's authenticating in, how they're gaining access, what commands are being issued to your controllers. If your HMI happens to be trying to beacon out to the internet, you should be aware of that. So it's instrumenting not just along the perimeter of these systems, but inside them, to be able to support your incident response plan.
Ben Miller:
In many of the cases, our customers do have an incident response plan, but they have no data to back it up. When you go to do incident response, you're like, "Okay, I'm here." "Well, we have these hosts." Do you have logs? "Let me look. No, it's only for 24 hours." And that rolls over. With all these challenges, what do you expect a forensics person to really do, other than explain what happened over the last 24 hours, or maybe extend it back 30 days with dead-host forensics on the hard drive and whatnot?
Ben Miller:
So that's a real challenge, when you're asked to do forensics in an environment that is not forensics rich, where there are no logs. This has always been a problem, and it's where Dragos has been making inroads with our technology.
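As a small illustration of the visibility Ben is describing, here is a minimal sketch, assuming you already collect network metadata (for example, Zeek conn.log in its TSV form) from a sensor inside the OT network; it flags connections from OT addresses out to public IPs, the "HMI beaconing to the internet" case. The OT subnet and file name are placeholders.

```python
# A minimal sketch: flag connections whose source is in an OT subnet and whose
# destination is a public address, using a Zeek conn.log in TSV form.
# The OT address space and log path are placeholders.
import ipaddress

OT_NETS = [ipaddress.ip_network("10.20.0.0/16")]   # placeholder OT address space

def is_ot(ip):      return any(ipaddress.ip_address(ip) in n for n in OT_NETS)
def is_public(ip):  return ipaddress.ip_address(ip).is_global

def beacons(conn_log_path: str):
    fields = []
    with open(conn_log_path) as fh:
        for line in fh:
            if line.startswith("#fields"):
                # Zeek header row names the tab-separated columns.
                fields = line.rstrip("\n").split("\t")[1:]
                continue
            if line.startswith("#") or not fields:
                continue
            rec = dict(zip(fields, line.rstrip("\n").split("\t")))
            src, dst = rec["id.orig_h"], rec["id.resp_h"]
            if is_ot(src) and is_public(dst):
                yield rec["ts"], src, dst, rec["id.resp_p"]

for ts, src, dst, port in beacons("conn.log"):      # placeholder path
    print(f"{ts}  {src} -> {dst}:{port}")
```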
Joseph Carson:
Yeah, I completely agree. Traditional IT environments aren't forensics and evidence rich to start with, and a lot of attackers will delete the logs and every piece of evidence before you even get the chance to review it. Even more so on the OT side of things: when you've got 24 hours of logs to deal with, for an attack that's been going on for maybe weeks or months, it's pretty difficult. You end up with that one piece of the puzzle but not the big picture, and sometimes it's very hard to get to root cause analysis or do some type of attribution. So, Mike, any final thoughts or anything you'd like to add?
Mike Gruen:
No, I mean, I think this has been a great conversation, and I appreciate Ben joining us. I think we talked a lot about these convergences across all these different spaces, whether it's across technology, and I think it's just funny how that theme plays out, like, "Well, our security people need to work with our safety people."
Mike Gruen:
In software engineering, it's that our security people need to work more closely with, be teamed up with, the software developers. I think it's just a continuation of that same theme of how we work together and create more of a team approach to this, as opposed to what's historically been an adversarial approach of, we give you the report and you program against it, or you go fix it. So just more collaboration across the teams and more security by design. And we'll just keep beating that drum.
Ben Miller:
I will absolutely say, with a lot of the activity that's occurred over just the last, we're in July now, the last eight months or so, there's been a historical reliance on being air gapped. And you dive into that, and you know that's bullshit. I'm sorry ... you know that's wrong. The board doesn't know that's wrong. They've accepted it as an answer for a number of years; they thought the risk was mitigated because they were told they were air gapped. And then when you dig into it, it's like, "Oh, by air gap, I mean it's on a separate VLAN." That's not an air gap, right?
Mike Gruen:
I mean, even-
Ben Miller:
Even humans-
Mike Gruen:
... I mean, even with a real air gap, we know that it doesn't really exist, right? There's the sneakernet.
Joseph Carson:
Once humans interact with it, there's no air gap. That was the key when we go back to looking at Stuxnet. Once you have people going in and out, interacting, you might be physically disconnected, but you might have sensors that can be influenced. We've seen research where you can actually communicate with light, with sound, through people. You can only assume that an air gap exists when the system is basically buried in the ground and no one can get to it. And when it's that disconnected, it has very limited value; no one is using the data it produces.
Ben Miller:
But the most isolated system we have is the International Space Station, and that's not air gapped either.
Joseph Carson:
Exactly, exactly. One final question: there's a lot of talk around Zero Trust, and we're doing a lot with Zero Trust in IT. The challenge I've got is that I'm not a big fan of the term; I prefer continuous verification. It's all about making sure you verify, verify, verify. Authenticate, verify, audit is my kind of preference. But when you get into the OT side of things, is Zero Trust even possible, or do you put Zero Trust around, let's say, the air gaps? What are your thoughts on that side?
Ben Miller:
I think we have a long way to go. I think the approach historically within these environments is the opposite of that: most trust.
Ben Miller:
Yes, there's zero authentication, very little integrity, and certainly no encryption. So the name of the game is: if you gain access to the environment, and if you know what you want to do, you can do it. And with the lack of visibility and controls, you'll get away with it too, and nobody will know what actually caused the event, with the exception of extreme cases of piecing together a crazy series of events in order to tell that story.
Ben Miller:
So that's not to say it's not possible with the technology, but especially as you move to the equipment that's sensing and actuating the environment, it's completely oblivious to those concepts right now. And when you have that lifecycle of 20 years, you can start to understand how much it takes to turn this ship around, and what we can do with it.
Joseph Carson:
Absolutely. Ben, it's been a pleasure having you on the show: really enlightening, really interesting conversation, and it brings back a lot of memories for me. Any final comments or words for the audience around things they can do today? What practices can they adopt just to get themselves on the right track?
Ben Miller:
Yeah, I would go back to the idea that the folks who understand the facility and, quite frankly, care about it more than the cybersecurity professional are the ones who built it and maintain it. Buy them donuts. Walk over, move out of your cubicle, your safety zone, and really understand how these systems work. You're a hacker, you're inquisitive; to put that to use, you need to start a conversation. It's not in reading a man page, it's in talking to these engineers and understanding what they're doing, which often is mapping physics to computers and manipulating the world in a very unique way.
Ben Miller:
It's really cool stuff. But in order to understand it, you have to talk to them, and you have to have the conversation of, "Well, what if I do this? What if somebody misuses your system?" and gain those relationships. That's going to have a large impact on the organization and how the organization thinks it through, by continually having those discussions. Every Friday, go out to the field and meet the guys, buy some donuts; it'll pay off every time with the donuts.
Joseph Carson:
Wise words, and I definitely think that's something we can all do, even outside of critical infrastructure, in all businesses: understand the user, listen, understand what motivates them and what they get measured on for success. Because if you understand their measurements, their metrics, and how they see success, that allows us to map security into how we can help them. Not doing security for the sake of a checkbox, for the sake of security, but in a way that enables and empowers the organization to meet those metrics and succeed.
Joseph Carson:
So, wise words, and Ben, it has been fantastic having you on the show; keep up the great work at Dragos. These are things that haven't been in the foreground, haven't had the visibility, for many years. I think it's great to make sure that we prioritize and make sure that critical infrastructure is protected, because a lot of it is what keeps the lights on. It keeps us in society doing the things we do, because a lot of it happens in the background, whether it's when you hit that light switch and the light comes on, or when you get in a car and drive to the shop, or the food you get in the store.
Joseph Carson:
All of those things are powered by critical infrastructure, whether it's logistics, travel, communications, money, or even life-saving things like hospitals and medicine. So keep up the great work. It's been a pleasure. For the audience, make sure you tune in every two weeks for 401 Access Denied; we're here to make sure you keep up with the latest news and trends, and to keep you informed and educated. It's been a pleasure. Thank you, all the best, and take care.