Episode 7

International Cyber Warfare: How Real is the Threat? Part 1

EPISODE SUMMARY

Joseph Carson and Mike Gruen are joined today by special guest Josh Lospinoso, a former cyber officer in the US Army, as they discuss definitions and methods of international cyber warfare. Is there accountability for governments who use private companies and contractors to handle their dirty work? How often do attackers get caught, and how long does it take to find out what has been compromised? We’ll discuss all this and more in this two-part episode.


Mike Gruen

Mike is the Cybrary VP of Engineering / CISO. He manages Cybrary’s engineering and data science teams, information technology infrastructure, and overall security posture.


Intro:

Invest in yourself today with our Insider Pro product, which gives you the career path to reach the next step in your cybersecurity journey. Join today on cybrary.IT using the discount code podcast.

Mike Gruen:

You're listening to the 401 Access Denied podcast. I'm Mike Gruen, VP of Engineering and CISO at Cybrary. Please join me and my co-host, Joseph Carson, Chief Security Scientist at Thycotic, as we discuss the latest news and attempt to make cybersecurity accessible, usable, and fun. Be sure to check back every two weeks for new episodes.

Joseph Carson:

Hi, everyone. Welcome to the next episode of 401 Access Denied. We've got a fun, interesting topic for the audience today: cyber war. Potentially we could get into really defining it as cyber warfare, because there are different definitions out there. And today, of course, Mike is on the show with us, and we also have Josh. So Josh, I'll pass over to you. Do you want to give a brief introduction of yourself, what you do, and where you're from?

Josh Lospinoso:

Sure. Yeah. Thanks for having me. So my name's Josh Lospinoso. I spent 10 years in the United States Army as a cyber officer. I spent a lot of that time building offensive cyber tool kits for various intelligence community organizations, including Cyber National Mission Force, which is one of the big commands inside of Cyber Command that conducts cyber warfare. I got out about two years ago, started a cyber security company called Shift5 that builds cybersecurity products for operational technology, including weapon systems. So things like tanks, destroyers, aircraft, as well as civilian infrastructure like trains, commercial aircraft, building automation systems, that sort of thing.

Joseph Carson:

Awesome. The reason this topic is very close to me is that I'm based in Estonia and have been here for 20 years. I'm not Estonian, otherwise people would comment that my English is very good for an Estonian. English is my first language, so let's make sure we have that clear. No need to compliment me on how good my English is, because it's the only one I know. But in Estonia we did have a cyber attack which was classified as a cyber war, and I myself am a survivor of what they claim was a cyber war. But we'll get into a lot of the hot topics about the definition.

And a lot of what you just mentioned, things that have, let's say, kinetic impact from … cyber war or cyber warfare. It gets into what can truly be classed as cyber warfare: when it actually has some type of kinetic impact on the nation, the country, or, let's say, the location being attacked. So the definition itself has varied over time. I see the Estonian case as a pre cyber war that didn't actually turn into cyber war. It was the use of technology against Estonia, but without the real kinetic impact I find it difficult to really say it was a cyber war itself. So Josh, in your mind, being on the offensive side of things and the technology side, what is your interpretation, classification, or definition of cyber war or cyber warfare?

Josh Lospinoso:

That's a really sticky question. I think how people think about these things is generational. Certainly, if you're having kinetic effects, people have an easier time understanding how that's physical violence and that it's close to some kind of warfare. For people in my generation, when we saw, for example, the Sony attacks, I very much considered that tantamount to dropping a bomb on an empty building. There may not have been loss of human life, but there were massive economic casualties from that action. You had serious, serious monetary consequences for a private corporation that's headquartered in the US. So to my mind, the means by which you achieve that outcome are not so important; it's the fact that you achieve that outcome. But because people don't have an intuition for how someone could conduct a cyber attack, I think for whatever reason we had a very muted response to it. So I know that's a non-answer to your question, but yeah.

Joseph Carson:

No, that's absolutely one of the things we have to look at as well. There are different kinetic impacts, whether it's physical damage, taking out a power station, taking out some type of military complex, or taking out, let's say, civilian locations. But the financial impact does have a kinetic element in the outcome; it has some type of economic impact on companies, which they'll feel at the end of the year in lost profits. So I agree. The challenge during those types of attacks is that the time to realizing the impact, the dwell time between it happening and doing the forensics, the attribution, and the understanding, is sometimes far apart. That's why it's sometimes very, very difficult to make the association, because when the attribution finally happens, even six months or a year later, that's where you start asking: how close does the attribution or the root cause analysis need to be before that definition becomes realistic?

Josh Lospinoso:

Yeah.

Mike Gruen:

Where do you think disinformation-type attacks and propaganda fall? Because it's much harder to draw that line to kinetic or economic outcomes. It's just disinformation, so where would you classify that?

Josh Lospinoso:

Yeah. That's another really sticky question. The Computer Fraud and Abuse Act, there's a lot about it that we could talk about, but I think it did at least outline some kinds of norms or standards, whether we agree with those or not. And there's a pretty clear violation of that in the sorts of attack like the Sony attack: I found a way onto your machines, I exploited them, I'm manipulating them, I'm destroying data. It's fairly clear that I violated your property rights at some level. When it comes to disinformation campaigns, it's almost like, if I create a bunch of Twitter bots and I'm doing hashtag suppression or whatever, I'm not really subverting an IT asset or getting remote code execution and doing anything overtly nefarious. It's even a shade before that. So it's really hard because, number one, where do you draw the line? And number two, what remedial action can you take? Because at some level you're now talking about inhibiting free speech. So it's very complicated, and I'm not sure that I have well-formed thoughts on it.

Joseph Carson:

… Yeah. It's the good old age of propaganda. All this fake news, misinformation, disinformation, it all really comes from the core element of propaganda. And there are two elements to it: you defraud the existing information, or you basically poison the existing information. It happens consistently, all the time, and the most discussed example is Cambridge Analytica. Should that have been classed as some type of weapon? Because the way it used data and that information was to basically influence an outcome, not only financially but also in terms of people's democracies.

So it's always a challenge, and I agree with Josh, it's a bit of a sticky point. Where I prefer to land, at least in the political realm, is that anyone who's campaigning has to disclose and be transparent about their sources of information, and not use those types of propaganda tools hidden in the background. They'd have to disclose it up front and at least be honest with the citizens who are voting for them. That's always important, and it helps people make a more, let's say, educated decision when you know where information is coming from. Context is what's important on the internet, and when you lose context, that's when it gets a bit tricky, a bit of not knowing where the original source came from.

Josh Lospinoso:

Yeah.

Joseph Carson:

There are a lot of challenges here when we look at the events over the years: the Estonian impact in 2007, of course Georgia in, I think it was 2008, and then the Ukraine situation as well, which happened numerous times over those years. For me, it really gets into those discussions and those areas around defensive and offensive capabilities. And it all comes down to better cooperation and collaboration between countries. When I looked at this back in 2007, 2008, after Estonia, one of the things...

So just to give you some context. Estonia had been building its digital society for many years, and in 2007 we realized, we called it the doomsday scenario, because all of the citizens' data was held within the Estonian borders. During that attack in 2007 there was a military exercise on the border of Estonia, training exercises, that just happened to be going on at the same time as the cyber attacks. And they realized that all of their data was inside the country of Estonia, which meant that if there ever was a land invasion, the central data sources and data centers could be destroyed and Estonia would be put back to pre-digitalization times. The only way Estonia could really move forward was to decentralize its information systems, and this led to the concept of data embassies, allowing Estonia to decentralize outside its own borders.

So even if there was a land invasion, the systems would be decentralized, meaning that if you take out one node, which would be the Estonian country itself, the country could continue digitally after those events. When I was working on this back in 2007, 2008, it got me realizing that collaboration is key here, countries working together. With defensive capabilities that's quite clear, but when you get into offensive capabilities, and also using private companies for those offensive capabilities, for me that's a bit of a gray area, and an area that oversteps some of the boundaries and ethics that I hold. So Josh, since you worked in a lot of the offensive areas, where do you see collaboration and offensive capabilities working together?

Josh Lospinoso:

Yeah, it's a tough one. Every country has a different answer on where the line is, but I am unaware of any party that plays in this space that doesn't, in some form or fashion, rely on private industry to make offensive operations happen. In the reductio ad absurdum, you're not making the computers and the infrastructure that you're using to conduct operations. So right off the bat, you're already using tools at some level. You're not rewriting operating systems to use for your platforms to conduct attacks.

On the other extreme, you have some countries that, I'm sure you're aware of given your location, work through gray hat types to conduct operations. Essentially: "Hey, if you guys are willing to do these government-directed actions, we'll turn a blind eye to all of the carding stuff that you're doing on the side to make money." And then you have other nations that are somewhere in between. And it's really a moral question, I think, of where we fall on that spectrum.

So in the US, for example, historically, in our way of conducting warfare, we've had a distinction between basically the person pulling the trigger and everyone behind that person. And if you look at the way we do kinetic operations, you have government civilians all over the battle space. We deploy government civilians into war zones to do things like run the bases, sometimes do security operations, defending an encampment. We've got people doing supply missions, and intel analysts are very often seasoned government civilians who retired from the military and came back to continue working on mission. So without getting into specifics, I think a reasonable person could surmise that we port that mental model into how we conduct cyber operations.

Joseph Carson:

Yeah. And that does create a massive gray area. I actually remember, after doing my university in the US, one of the first jobs I applied for was … air traffic controller. It was actually a civilian position for the government, and they actually had civilians operating on, for example, aircraft carriers, basically handling the navigation and flight paths for aircraft.

Josh Lospinoso:

Yep.

Joseph Carson:

And for me, I was always surprised by that, that there would actually be civilians operating in those locations, in those war zone environments.

Josh Lospinoso:

Right.

Joseph Carson:

So it's always a challenge: where does that line sit? And I agree with you. One of the biggest challenges I have, especially when we talk about the APT groups, is that you've got all of those labels and numbers, definitions from various different groups, from Fancy Bear to Cozy Bear and so forth.

And for me it always gets into the challenge that many of those groups are basically, for me, cyber mercenaries. They are not government employees. They're not being paid by the government. What they're doing is carrying out cyber crime; they're criminal organizations carrying out crimes within the country's borders. And those nation states are turning a blind eye, as you mentioned: as long as they're not attacking their own citizens, as long as they're attacking other countries, and as long as they do favors for the government, then the government will not prosecute them. And this gets into a bit of a challenge, that legal gray area: even if they're not government operators, should we start to look at some way of holding those countries accountable when they have cyber mercenaries working within their borders?

Josh Lospinoso:

Right. Yeah. I mean, think about the moral hazard from a country's perspective. It's like, "Hey, I can have these." I wrote this article, "A Fish Out of Water," that describes how difficult it is to find somebody who can run two miles in under 15 minutes and also dissect a Windows kernel dump. There's not a lot of overlap between those two communities, nor do I think there has to be, to be super clear. We are hamstringing ourselves in a lot of ways by making these kinds of artificial requirements, because the Army and the Navy and the Air Force are meant to do certain things, and cyber is this whole completely different aspect of warfare. Now we're like, "Well, the DOD does warfare, so they have to do cyber too."

And we haven't changed our standards to accommodate this population that's capable of conducting cyber warfare. So from our perspective, it would be a huge boon to cyber operations if we could just task private citizens to conduct these activities. And that would be enough of a convincing argument, but there's an additional one, which is: if they get caught, now it's like, "Well, I don't know. I didn't tell them to do that. What are you talking about?" There's plausible deniability baked into the whole system. So you can understand why certain countries would say, "Yeah, this is exactly what we want. There are no qualms here with running cyber operations this way." Given that there is that moral hazard, the incentives are just totally misaligned. We have to make countries incur some cost for doing business that way. And I don't know what the right answer is, but what you're illuminating, I think, is definitely in line with that thought.

Joseph Carson:

Yeah, because in some regards they're breaking the law in their own countries, and the country is not upholding its own law against those criminals. Therefore we have to find a way to hold them accountable, whether it's Europol or Interpol or whatever organization leads that from an international cooperation standpoint. For me, there has to be some way of holding those countries accountable for allowing criminal organizations to attack other nations, whether it's for economic gain, simple ransomware cases, intellectual property theft, and so forth. There has to be some accountability, and there has to be some classification or definition for those types of acts. And that leads to one of my biggest topics: attribution. I think one of our biggest failures in attribution, in everything I've read and every time there's an incident and an attribution assumption, is that we tend to look for one actor.

I even remember doing a lot of work for different areas where I didn't know what my architecture design was going to be used for. I was just creating one jigsaw piece in a larger puzzle, with no visibility into what the bigger picture was. And that happens a lot, even with, let's say, operations that are happening: you've got people creating the small pieces of the puzzle, meant to carry out one task, and then that puzzle gets used either for multiple things or for a single operation or campaign. It makes attribution very difficult because you end up having multiple parties.

And it even gets to the point where, for me, only if they make a mistake do I find that we can get to some type of attribution, unless there's some type of human element or human intelligence, which then makes disclosing the attribution very difficult, because you're revealing that you have people, agents, in those other countries who are helping provide and disclose that information.

So, from what you've seen or experienced, where do you think we are with attribution now, and with eventually getting to true attribution, where we can really call out and reveal the actors behind those types of activities?

Josh Lospinoso:

Yeah. Having been on the offensive side for a while, I can say we make attribution really, really, really difficult. We think about it a lot, and more often than not people get it totally wrong. The first rule is obviously don't get caught. If your job is surveillance, don't get caught. And the sad state of the world is that even for pretty well-defended systems, the time to detecting a compromise is in the months to years, and the better you are, both in the tools that you build and in the techniques that you employ and the discipline that you show, the harder it is to get caught. So that's the first thing: obviously you can't attribute an attack that you don't know about.

Joseph Carson:

Yeah, number one rule is being stealthy.

Josh Lospinoso:

Yeah don't get caught.

Joseph Carson:

And sometimes that means do no harm, and use other tools, use someone else's tools, not your own.

Josh Lospinoso:

That's right. That's exactly right. And so if you're going to transition from surveillance into overt action, or you're going to start doing riskier things, the attacker gets a vote: we know when we're being risky, and we know, "Oh, there's a good chance this gets burned, so what do you think we're going to use?" So first off, just catching people is really difficult. And I think we as a community of defenders have to, of course, harden systems and try to show good discipline in keeping people out. But that's not good enough, because the attacker only has to be right once. So you have to focus on, "Okay, assuming that we got compromised, how quickly can we detect the compromise?"

So I think, first off, that's extremely important, because oftentimes I've found that you can at least rule out groups of potential attackers based on the behavior of how they're operating. It's not necessarily like reverse engineering a rootkit. It's, "Okay, what did they do on the network? If I've got full packet capture, what did they dump?" "Oh, they dumped this database." "Why did they dump this database? Who would possibly be interested in that? Why did they misconfigure this?"

So you can't just use the TTPs, the tactics, techniques, and procedures that the operators were employing to pivot around the network, because we read hacker news too. We know what all these different APTs are using, and we learn from each other and repurpose techniques. That's not good enough. So yeah, I don't have any answers. Catching people is really hard. And then even once you catch them, attribution is really hard.
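To make the kind of behavioral triage Josh describes a little more concrete, here is a minimal, hypothetical Python sketch; it is not a tool discussed in the episode, and the group names and behavior labels are invented placeholders. The idea is simply to score observed intrusion behaviors (what was dumped, how the actor moved) against known group profiles, which helps rule groups out rather than prove who an attacker is.

```python
# Hypothetical sketch: scoring observed intrusion behaviors against
# known group profiles to *rule out* candidates, not to prove identity.
# Group names and behaviors below are illustrative placeholders only.

from typing import Dict, List, Set, Tuple

# Toy "profiles": behaviors previously associated with each (fictional) group.
GROUP_PROFILES: Dict[str, Set[str]] = {
    "group-alpha": {"credential-dumping", "hr-database-exfil", "living-off-the-land"},
    "group-bravo": {"custom-rootkit", "industrial-protocol-scanning", "wiper-deployment"},
    "group-charlie": {"phishing-initial-access", "crypto-wallet-theft", "ransomware-deployment"},
}

def rank_candidates(observed: Set[str],
                    profiles: Dict[str, Set[str]]) -> List[Tuple[str, float]]:
    """Return candidate groups ranked by Jaccard overlap with observed behaviors."""
    scores = []
    for group, profile in profiles.items():
        union = observed | profile
        overlap = len(observed & profile) / len(union) if union else 0.0
        scores.append((group, overlap))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # What the defender actually saw on the network (from packet capture, logs, etc.).
    observed_behaviors = {"credential-dumping", "hr-database-exfil", "phishing-initial-access"}

    for group, score in rank_candidates(observed_behaviors, GROUP_PROFILES):
        print(f"{group}: overlap {score:.2f}")
    # Low-overlap groups can tentatively be ruled out; high overlap is weak
    # evidence at best, since TTPs are routinely copied and repurposed.
```

Even in this toy form, the overlap score is only useful for narrowing the field; as Josh notes, TTPs are copied freely between groups, so a strong match never amounts to attribution on its own.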

Mike Gruen:

Don't you think attribution is actually getting even harder? I assume that as technology advances, your ability to hide your source is actually getting easier, and therefore attribution is getting harder. I think it's only getting worse.

Josh Lospinoso:

That's right.

Joseph Carson:

You're absolutely right, Mike, it is getting more difficult. One of the things I've learned over the years, even from criminal organizations, is that they use a lot of misdirection: using other people's tools, using your own tools against you. So unless they're running massive campaigns and operations where they're building everything themselves, or it's a very unique type of target where they had to do that, if there's more commonality, then it's more difficult to do attribution. And I find that only when the attacker makes mistakes can you call them out, and even then you're only going to get so far. You might have an IP address of origin, I might know the keyboard the person typed on, but if that machine is shared, or it's in a shared location, then it gets very difficult to know who was typing on the keyboard and who was giving them the commands to do so. That's where it really gets difficult, and that's where the element of attribution comes in.

Mike Gruen:

Right, but I also assume that if they're able to compromise the system, they're probably capable of compromising that trace, so were they actually the ones on that system, or was that system itself compromised and it's actually a different actor?

Joseph Carson:

Well, you've also got proxies as well. This is one of my other challenges: when you get into offense, you want to have, as you said, absolute high confidence that there's no other possibility, because in many situations, in order to hide your tracks, you're going to use proxy countries, you're going to use proxy environments. You mentioned earlier the Sony attack was carried out from different locations; it didn't come from North Korea, it was initially launched from different locations around the world. Even in Estonia in 2007, the majority of the attacks came from compromised botnets, from compromised educational systems and universities in the US and higher education systems in the Middle East, attacking Estonia because those were the compromised machines.

So the attacks can be launched from anywhere, and a lot of countries are used as secondary victims. That's where you have to be really careful. I remember an incident I worked on a few years ago where, luckily enough, the email account was set up in a foreign country, and we got the data hosting provider to show us the email account and the logs from that system so we could back-trace it. And here's the problem we had with attribution then: we got it back to an internet cafe. So we have an internet cafe. We knew that was the location the threat, a threat against a government official, came from. We knew the location of the source, but since it was an internet cafe, this is where we ended up handing it over to local law enforcement.

They went into the internet cafe with a warrant to confiscate and collect all the technology and computer evidence. They collected everything except the network devices, the PlayStations, the telephones. Basically, local law enforcement's understanding of what a computer was, and what the warrant covered, was not exactly the same as how we would view it. And that made it difficult. We eventually found out the email was sent from a PlayStation, but because of the logs and the information, we could only assume who the attacker was, who the person making the threat was, and we couldn't make it legally binding.

So it ended up with the person just being under surveillance for longer rather than being prosecuted, because of those failures in the digital forensics response and a local law enforcement team that didn't really understand the technology. So sometimes you get lucky with other countries cooperating, but the execution of the arrest or prosecution and the legal side of things also becomes very difficult and very lengthy.

Josh Lospinoso:

Yeah. So I try really hard not to be a security analyst; these things are really difficult. The one technique that I've seen be probably the best for attribution is, unfortunately, not available to most forensics people, and that's to hack back. If you really want to know who the active participant is on your network, you have to go back and be on the op station of the person on the other end and conduct your own campaign. And if you do that, sometimes you can get confirmation, right?

Joseph Carson:

Yeah.

Josh Lospinoso:

But of course that's like fraught with all kinds of challenges.

Joseph Carson:

I remember there was a concept called cyber mines. The basic idea was that the attacker steps on your cyber mine and it ends up attacking back. Eventually it evolved into what we call today deception technology; that's ultimately where the term came from, the cyber mines projects. It got into a big legal gray area about hacking back because, again, if that was a proxy, you're hacking back the proxy, and if that proxy was in a country with maybe, let's say, political instability, you don't know what that hack back could cause. So you had to make sure your hack back was not hitting the proxy country but actually hitting the original source. There are a lot of challenges in that, but I agree, if you have legal frameworks that support that capability, then it does provide one way to get back to some of the original source.
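As a rough, purely illustrative sketch of the defensive half of that "cyber mine" idea, here is a minimal hypothetical Python decoy listener: it logs any connection to a port that no legitimate user should ever touch. It is not a real product, the port number and log handling are assumptions for the example, and it deliberately does not retaliate or hack back.

```python
# Minimal, hypothetical sketch of a "cyber mine" in the defensive sense:
# a decoy listener that should never receive legitimate traffic, so any
# connection to it is treated as a high-confidence tripwire alert.
# It only logs; it does not retaliate or "hack back".

import logging
import socket
from datetime import datetime, timezone

DECOY_HOST = "0.0.0.0"
DECOY_PORT = 2222  # arbitrary unused port; no real service lives here

logging.basicConfig(level=logging.INFO, format="%(message)s")

def run_decoy() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((DECOY_HOST, DECOY_PORT))
        server.listen()
        logging.info("Decoy listening on %s:%d", DECOY_HOST, DECOY_PORT)
        while True:
            conn, (src_ip, src_port) = server.accept()
            with conn:
                # Any connection here is suspicious by definition.
                logging.info(
                    "[TRIPWIRE] %s connection from %s:%d",
                    datetime.now(timezone.utc).isoformat(), src_ip, src_port,
                )
                # In a real deployment this would feed a SIEM or alerting pipeline.

if __name__ == "__main__":
    run_decoy()
```

The point of the sketch is only the tripwire: a high-confidence detection signal that avoids the legal problems of hacking back discussed above.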

Outro:

And we're going to take a break right here. Make sure to check back in two weeks for part two with Josh Lospinoso, on encryption and backdoors. Learn how your team can get a free trial of Cybrary for business by going to www.cybrary.it/business. This podcast is also brought to you by Thycotic, the leader in privileged access management. To learn more, visit www.thycotic.com.