In this episode, we look at the need to secure the internet of things, physical workspaces, and the products companies make. From planes to children’s toys to oil rigs, more connected devices are vulnerable to attack than ever before.

Ken Munro is an internet-of-things security researcher, penetration tester, and writer with two decades of experience in the security industry. He is also the founder of security services company Pen Test Partners.
Munro helps expose the vulnerabilities in items we use every day, and he discusses some of the most important skills that cybersecurity experts can have, why companies are at risk for physical security breaches, and something he calls “supersystemic flaws.”
Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next. Music is by Merlean, from Epidemic Sound.
Show notes and links
Ken Munro, on Twitter
Ken Munro, Pen Test Partners
“Kids Tracker Watches: CloudPets, exploiting athletes and hijacking reality TV,” Pen Test Partners Security Blog
“Think you’ve had a breach? Top 5 things to do,” Pen Test Partners Security Blog
“Internet of Things Security,” a TEDx presentation by Ken Munro
Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
Security threats are everywhere. That’s why Microsoft Security has over 3,500 cybercrime experts constantly monitoring for threats to help protect your business. Learn more at microsoft.com/cybersecurity.
Our topic today is cybersecurity, but more specifically the importance of securing the internet of things, your physical workplace, and your products. Oh, and hacking planes, boats, and automobiles. Three words for you: remote-controlled oil rig. My guest is Ken Munro, an expert internet-of-things security researcher, penetration tester, and writer with two decades’ worth of experience in the security industry. He is the founder and a partner at Pen Test Partners, and you may have seen some of his exploits with keyless cars, children’s toys, and planes. We’ll also talk about what keeps him up at night: “supersystemic flaws.” Ken, thank you for joining me on Business Lab.
Ken Munro: Oh, thank you.
Laurel: First off, can you tell me how you became interested in security, but specifically in the context of devices and the internet of things?
Ken: I’d love to tell you I had a misspent youth, but it’s far more boring. After university I spent time working in restaurants. I remember I picked up a job managing a restaurant, and one bored afternoon when it was quiet, I started playing around with the point-of-sale system, a very, very old one. After a bit of messing around you could crash it out to DOS, an early version of DOS. I then discovered I could print out my mortgage amortization statements, so I could get my payments, which was a weird thing to print on a restaurant check. One of my bosses came along afterwards and said, “I think perhaps you’re in the wrong career.” Which was a nice way of firing me, I suppose, but that led me into antivirus, then into firewalls and cybersecurity more generally.
Laurel: So, other than curiosity, what are some of the skills that cybersecurity experts must have as well as, of course, their excellent technical abilities?
Ken: Curiosity, that’s a really good word actually, because when I look at what we do as penetration testers, ethical hackers if you like, it’s trying to think in the way that the people who developed the system didn’t. It’s trying to think of all the possible mistakes that have been made, things that have been overlooked, and just poking those, scratching those itches, to see if you can find a mistake.
You mentioned what we call supersystemic flaws—that’s a really common one we’re finding in APIs [application programming interfaces]. So the API you find behind a mobile app for maybe a smart device. We’re so often finding that while developers authenticate their users correctly, they often forget to authorize them correctly. And by that I mean you log in, you create your account, you sign in to prove you’re you. But the developer somewhere along the line forgot to check that every request was coming from you, which means you can potentially jump into others’ accounts, execute actions on their behalf, and break down the segregation.
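The authentication-versus-authorization gap Munro describes can be sketched in a few lines. This is a hypothetical illustration, not code from any real product: the account data, function names, and error handling are all invented for the example. The vulnerable handler checks only that a user is logged in, while the fixed one also checks that the record being requested belongs to that user.

```python
# Hypothetical sketch of the flaw: the API knows *who* you are
# (authentication), but the vulnerable handler never checks whether
# the record you asked for is actually *yours* (authorization).

ACCOUNTS = {
    101: {"owner": "alice", "balance": 250},
    102: {"owner": "bob", "balance": 900},
}

def get_account_vulnerable(session_user: str, account_id: int) -> dict:
    """Authenticated-only: any logged-in user can fetch any account."""
    return ACCOUNTS[account_id]  # missing ownership check

def get_account_fixed(session_user: str, account_id: int) -> dict:
    """Authenticated *and* authorized: the request must match the owner."""
    account = ACCOUNTS[account_id]
    if account["owner"] != session_user:
        raise PermissionError("not your account")
    return account

# "alice" can read bob's account through the vulnerable handler...
print(get_account_vulnerable("alice", 102)["balance"])  # 900
# ...but the fixed handler rejects the exact same request.
try:
    get_account_fixed("alice", 102)
except PermissionError as e:
    print(e)  # not your account
```

In a real API the ownership check would compare the authenticated session against the resource on every request, server-side, rather than trusting any ID the client supplies.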
Laurel: If I’m in that situation, and I’m you, a pen tester at a company and I have to then go tell someone, “I think we have a mistake here,” how does that conversation play out? Who do you even say anything to?
Ken: Let’s talk about independent research. We do a lot of work at our own cost, buying products and looking at them, looking to see if we can find vulnerabilities. Partly, it’s for good practice, good exercise, but also it’s helpful because it improves the state of security. So if we find a vulnerability, the first thing we want to do is tell the manufacturer about it. We want to tell them privately, confidentially so they can fix it.
But so often the problem is that the first reach-out you make gets to the wrong people, or it gets to an organization that just isn’t used to dealing with vulnerabilities—it’s the first time for them. So how do they deal with someone completely cold coming up and saying, “Hey, we found a vulnerability in your stuff”? Most people take that as criticism, which isn’t the intention at all. If someone said, “You’ve got a bug in your product,” the first thing you do is put your hands up: “Hey, hang on. No, no. We need to defend our brand.” And that’s usually the first problem: people don’t take constructive criticism well, which makes life very difficult for a vulnerability researcher.
Laurel: But you also have to have that kind of special tact, right? The way you communicate problems to these different companies, with products and millions or billions of dollars invested in specific …
Ken: Yeah. I think of it like a game of chess actually. Because I know if we go in aggressively too hard, they’ll probably rear up and instead of getting the vulnerability fixed, they’ll probably do something that we don’t want to see. They’ll get aggressive, start trying to shout us down or occasionally we’ve had legal threats which were baseless but are irritating and costly to defend.
So it’s all about getting yourself heard by the right people in the organization who understand security, who understand the potential impact on their brand if some other nonethical person has found it. Getting them, cajoling them, working with them, giving them the tools they need so they can fix it quickly. And then the vulnerability’s fixed, and everyone’s better off.
Laurel: When you’re in that situation, you actually have to have a counterpart in the organization, and we’re seeing that not every organization has someone like a CISO, or someone named as in charge of security. How do you carefully tread those waters—what kind of communication skills and techniques do you use to bring across your points?
Ken: How do you even find them? That can be a real problem. Maybe in the first place you start with a contact-us form, and it’s probably going to go to someone who doesn’t understand cybersecurity. It’s going to end up in the wrong place. We had a few issues many, many years ago with Fitbit back in the day, and we started off on the contact-us form, and it just went into the ether. Nothing happened. We tried phoning support, but it was customer support, and it just didn’t really get dealt with.
And it was all going a bit wrong. Then by some complete fluke, a friend of a friend started as their in-house security expert, just as it was becoming a bit of a train wreck. He proactively picked up the phone and made contact, and said, “Hey Ken, I think we know the same guy. I heard you were trying to report a vulnerability.” What could have been a very difficult situation, whereby there’s a vulnerability that’s not being fixed and the vendor isn’t listening, was actually very quickly shortcut, and the product got fixed very quickly. The firmware was updated in just over a week, and it was a real success story for everyone.
Laurel: That’s excellent. I guess that’s what we want to hear more of. Right?
Ken: Yeah. I just wish that happened every time.
Laurel: Well, that’s probably what makes your job so interesting, too. How are you thinking about the current shortage of security professionals? Just kind of across the industry, what are some things that you look for or you wish you could wave a magic wand and fix? How do you fix the security professional pipeline?
Ken: Gosh, that’s a deep question. It’s well known there’s a shortage of skills. I think one of the real challenges, certainly in the pen-testing space, where people are hacking ethically, is that it’s often overlooked how important it is to be able to communicate. I mean, awesome tech skills are essential, but it’s those communication skills that are often forgotten about, because if the results of your findings aren’t communicated in the right way, to the right audience, to the right person, then stuff doesn’t get fixed.
We have an academy program: we take people with raw skills, we coach them, and we turn them into fully fledged pen testers. But what I’m really looking for is that communication ability. If you can’t communicate the amazing things you’ve found, I’d argue you might as well not have bothered finding them.
Laurel: Just goes into that old saying, well, my old saying—tech is easy, people are hard. Those are the kinds of skills that, it seems, most companies are now realizing matter: you can teach tech, right? You can teach coding. You can teach these skills. Some people obviously have an innate ability to do them, but the communication is what’s critical, what actually brings your entire organization up that one level and is a competitive differentiator.
Ken: Yeah, it’s a real surprise actually. I really do strongly encourage people to think about who they’re speaking to. So you’re doing something cool and clever and very smart and technical, but actually thinking about the audience, the person receiving that, think about how they’re going to receive that and the way that’s going to get that problem resolved fastest.
Laurel: Yeah, it’s all about users, right? Or your customers and in some sense they are your customer because you are trying to develop this relationship in one way or another with a complete stranger and letting them know some particularly bad news.
Do you think that because security is becoming more important within the organization, it’s no longer a back-office function that folks are looking at it maybe as a more interesting profession, because it kind of is out in the open, people are talking about it. It’s no longer hidden behind some kind of black curtain.
Ken: Yeah, it’s been great seeing the higher profile that’s been brought to the cyber industry. More university-level courses are starting to appear, which is fantastic, and which is improving the supply of individuals. But I think many organizations, particularly those that have suffered high-profile breaches, have learned, perhaps the hard way, that the only way to really deal with cyber is to bake it into the business. If you put it in as a subsidiary of the IT department, then it’s really only ever going to be an afterthought. Whereas if you bake it into the business and make someone truly responsible, who truly understands how to assess risk and communicate risk, and they’re sat at the right level, sat at the board, then I think you’ll see a mind shift within the organization.
I do a bit of teaching at board level, and it’s been great to see that if you can get the CEO to start thinking about their personal security, about their personal safety, about the safety of their family, they start thinking about passwords, they start thinking about updating systems, they start thinking about teaching people. All of a sudden you see that mindset change percolate right from the top of the business down through the people. So instead of trying to ram security into an organization, it actually drip-feeds right from the top and pervades everything that the business does.
Laurel: Definitely more of a security-first approach. And when it’s part of everything, you don’t think of it as an intrusion into your everyday work.
I’d like to frame our discussion in a way that talks about how a cybersecurity breach can happen anywhere—obviously with employees, the products, and even the physical office building that your company is in. Can you talk about why physical security is critical for businesses to think about?
Ken: Oh my gosh. As you start to improve your cybersecurity, you start improving your defenses, and you start finding that it’s harder and harder to penetrate your organization from the internet. And hopefully, even if someone does penetrate it remotely, you’ve got some tools that alert you, and a team of people to actually flag that. It becomes increasingly difficult to penetrate the perimeter of the organization. And that’s when the higher-risk, physical types of attack start getting interesting: when we start talking about social engineering, when our job is to bluff our way into an organization, to plant a backdoor physically inside the organization or to steal data physically. The stuff of the movies, right?
But often you find those two camps are actually quite closely linked. I like looking for the shadow IT in an organization. I like looking for the building management system: the HVAC, the thing that controls your elevators going up and down, the tech that controls your gate lights, the tech that controls your electronic door locks, your access controls. Why? Because those systems often sit outside your conventional IT department. Maybe the people in facilities put that in. Maybe they subcontracted it out to a third party who’s got remote access, which has been badly done. Sound far-fetched? Well, there have been plenty of big breaches that have happened through exactly that route.
So if we’re social engineering and I find your building management controllers on the public internet (not difficult to do), I walk up to the office, pop the door locks electronically, remotely, and the door’s wide open. In you go.
Laurel: Are you finding that your business is an even mix of those data-breach vulnerabilities and the physical ones, or are people focused on data first and the physical later?
Ken: I like to think of it as a mix of both. I think some of the most farsighted and forward-thinking organizations realize that it’s a combination. You have to address both. Yes, you can red team the organization, you can attack it remotely, and you can make sure that it’s well set up to detect and respond to cyber breaches, but at the same time you’ve got to be covering the physical side, too.
My experience has been that when you work with the physical side of the business, you start helping the organization build bridges between its IT department and its facilities team. And once they’re talking, once the physical security team are talking to the cybersecurity team, you get some really interesting stuff shaking out of that. You get teams starting to work together cooperatively, not seeing one as in a box and the other somewhere completely different.
Laurel: So what are some of the basics that companies can look at when they think about vulnerabilities and just maybe doing things themselves before they bring someone in like you to help them out?
Ken: Oh, straightforward. Number one, what do we dine out on? We dine out on passwords that aren’t complex enough, or are reused, default, or blank. You fix those, and you’re going to make our life a great deal more difficult. Next big one: patching. I know it’s so boring, but missing patches expose vulnerabilities; patches are there to fix vulnerabilities. You apply your patches, you make my life difficult. And the last thing I like to look at is the people. You can either blame people for clicking links in emails and responding to phishes, or you can teach them and train them. They can either be security’s worst enemy, or they can be another pair of eyes looking, spotting, reporting, feeling educated and empowered. So my advice, before you get anywhere near bringing third parties in: get the passwords sorted out, sort out your patches, and teach people about cyber.
Laurel: Basic security hygiene. How do companies work with partners to ensure that the components of their product are secure? So that third-party supply chain—it could be any product, but I’m thinking really basically about a car that’s built in Detroit and has a GPS unit from a different vendor, and an entertainment center from a third. And that’s literally a very basic description of the third-party vendors in a car, but I think that’s what we think of when we think of hackability. So when we look at the entire ecosystem of third-party vendors, where would you start if you are a car vendor or any company?
Ken: So it’s great putting your own house in order and sorting out your own security. But so often the attacker will move to the most vulnerable part, and that’s probably going to be your supply chain. Because it’s your organization, it’s your data, it’s your customers, you take that information and its cybersecurity very seriously. But as soon as you move into the supply chain, it’s no longer their data, and they may not take it as seriously. So the first thing to do is start asking questions.
Just by starting to ask questions, you’ll very quickly start to appreciate just how mature your suppliers are in terms of cybersecurity. I often see organizations we’re asked to audit say, “Well, we couldn’t possibly share that information with you for reasons of cyber,” and you realize that’s just a smokescreen. Your supplier should be very happy to share their security processes with you, because sharing a cyber process doesn’t mean exposing vulnerabilities. Showing that they’re a mature organization that takes cybersecurity, and the security of the data they’re processing for you, seriously should be an open process. Ask questions, audit your suppliers, talk to them about security. And more than anything, if there’s one piece of advice you take away from this little interview: put cybersecurity into your contractual procurement terms. Which means, A, they’re thinking seriously about cyber before they sign your contract, and B, if it does go wrong, you’ve got a right of recourse. Really, really important. Embed cyber in your procurement.
Laurel: Would you say companies actually do vet their third-party vendors often though? Is this a new way of thinking that companies need to have really just, like you said, integrate into every part of the way that they work with outside vendors?
Ken: Sometimes. So I’ve been doing some outreach and teaching in the legal profession about exactly this concept. I’ve done it for a few years now, and this year just gone, I said, look, has anyone actually embedded cyber into their contracts? And a hand went up in the audience, and I felt a little glow. They said, “Yes, we took your advice. We’ve embedded the OWASP [Open Web Application Security Project] top 10 into our third-party development contracts.” So they contractually sign up to delivering code that is not vulnerable to the OWASP top 10, and I thought that was a huge win. One little step. So please, everybody, do that. Embed simple cybersecurity terms into contracts, and it just makes life so much easier if there’s a problem.
Laurel: Cybersecurity isn’t only about stopping the threats you see. It’s about stopping the ones you can’t see. That’s why Microsoft Security employs over 3,500 cybercrime experts and uses AI to help anticipate, identify, and eliminate threats, so you can focus on growing your business, and Microsoft Security can focus on protecting it. Learn more at microsoft.com/cybersecurity.
Laurel: So stepping back and looking at it from a distance, how can security regulations and standards actually help companies? Often they probably feel like they’re in this alone, but there are agencies or departments like NIST [National Institute of Standards and Technology] that can help.
Ken: Yes, there are. Regulation is surprisingly lacking in the cyberspace. It’s something I’ve been lobbying for and campaigning around for a number of years now, particularly in the space of consumer IoT [internet of things]. It’s something of a Wild West. We keep finding very serious vulnerabilities day after day in consumer IoT products. And my feeling is, whilst I’m not a huge advocate of regulation, because I very much prefer free-market dynamics, in the consumer IoT space there is a very strong case for it. I was very pleased to see that in California, Senate Bill 327 was enacted and came into force on the first of January this year. And what made me super happy was that the introducer of the bill cited some of our work on a vulnerable kids’ Bluetooth doll called My Friend Cayla as the inspiration and the catalyst behind that regulation. Even whilst the regulation is pretty basic, I think it’s a really good step in the right direction. And it’s sad, but particularly in consumer IoT, I think we need regulation.
We’ve also had some successes in the UK. A consultation has just finished, and the government’s committed to introducing some basic regulation to protect consumers from smart-product manufacturers who don’t play safe with our data and our privacy.
Laurel: Can you talk a little bit about My Friend Cayla? That was an interesting experiment.
Ken: So My Friend Cayla is an interactive talking kids’ doll that I first stumbled upon about five years ago. She used a mobile app to listen to words the child was saying to the doll, process those into text, and then answer a number of questions. So the child could have an interactive chat with the dolly, which was great fun. When I saw this doll, I realized that she communicated over Bluetooth with the mobile phone. What I wanted to know was, how secure was that Bluetooth connection? And the problem was, it wasn’t really.
So when you connect your cell phone to your vehicle so you can make phone calls, you put in a PIN, right? Six-digit PIN. And that creates a relatively secure connection. The problem with My Friend Cayla is when she connects to your phone, there’s no PIN at all, which means that anyone nearby can join and connect. So again, the hacker in me is thinking, I wonder if I can make this innocent kid’s dolly swear? So we realized we could tamper with the mobile app, and the child could say anything they wanted to the dolly, and she would swear right back. So I’ve recorded her for you just here.
Recording: Hey, calm down or I will kick the [beep] out of you.
Laurel: That’s amazing.
Ken: But worse than that, because there was no encryption on the Bluetooth connection, anyone nearby could connect to the microphone in the doll and listen to your children, or the speaker in the doll and speak to your child. So we’re talking people in the next-door house, people on the street outside. So people could creep on your kids through this doll or worse, they could spy on your conversations in your house because you put a smart dolly and gave it to one of your children. And I don’t know about you, but there’s conversations I wouldn’t want my neighbors to listen to. Right? So again, our privacy was utterly invaded by a nice idea of a kid’s interactive doll.
Now, the good news is, well, we reported it to the vendor, who dismissed it out of hand and accused us of staging a prank. But fortunately the German telecommunications regulator realized this violated some consumer privacy and spying laws left over from just after the Second [World] War, and overnight they banned the doll. It was illegal to own that doll in Germany. So there is good news to come out of it.
Laurel: And a good example of how regulation can help in those specific instances. To stay on the topic of consumer devices and to join it with another very popular trend, which is bring your own device to work, or BYOD, what happens when the Wi-Fi coffee machine at work is connected and could be an unsuspecting launching point for a hacker to actually launch an attack in the workplace? What are some other vulnerable devices that businesses may not think to consider?
Ken: Yeah, so BYOD is often thought of as smartphones and tablets being used to do work. To me it’s smart stuff entering the workspace. My very first piece of research in IoT was on a Wi-Fi-enabled teakettle, and we discovered that once you connected it to your network, I could stand on the street outside, connect to the kettle, and steal your Wi-Fi password from it. So yeah, I could hack your house. And if you took it to the office to make a cup of tea or a cup of coffee, I could also steal your business Wi-Fi password. So BYOD is a real concern to me right now. We started looking at smart refrigerators a little while back, because, again, it’s difficult to buy goods, consumable goods, white goods, to put in your offices that aren’t smart anymore.
We looked at a Samsung smart fridge with a great big screen on the door. It was really funny. It had a camera on the inside and a big screen on the outside, and the idea was that you could see what’s on the inside without opening the door. And I’m thinking, why don’t I just open the door? Surely. But anyway, to be smart it needed to connect to the internet, and in order to tell you about products expiring, going past their due date, it needed to be able to email you. And we discovered you could drive past someone’s refrigerator and steal their Gmail password from the fridge. Which is silly, right?
But we’re also seeing Bluetooth devices. We’ve seen smart coffee machines that were completely vulnerable, so you can do some mad things like stop the coffee machine working. But then we started to see devices that can be used as a piggyback into your organization. Back to the shadow tech: it’s being made smart so that facilities people can administrate, I don’t know, the gate lines or HVAC from their mobile phone, using an app. And then we discover there are vulnerabilities in the app and in the API in the smart product as well, and you can use that as a backdoor into the organization. So BYOD, in the context of smart devices, is making businesses more vulnerable.
Laurel: I’m also thinking about some very shiny office spaces with smart televisions as conference call defaults and even smart whiteboards, right?
Ken: Yeah. Do you remember a story a little while ago about Samsung smart TVs that were shown to be listening to you? Do you remember that one?
Laurel: Oh, yeah.
Ken: Yeah, well, that was our work. Some bright spark had noticed in the terms and conditions … Who reads the terms and conditions of your TV, right? Nobody does. But some bright spark had read it and saw in there a sentence that said something to the effect of, “Don’t say anything sensitive around your TV.” Now, that got a little bit of coverage. We picked up on that, because I had a Samsung smart TV with voice control at home. So I went home that night, got my TV, took it into my office the next day, much to the confusion of my wife, and we hooked it up to some sniffing products. We got some technologies on there and started listening to the internet traffic that the TV was sending, wondering if there was something more to this. And we discovered that not only was the TV listening to you, it was also sending what you were saying in plain text, unencrypted, over the public internet to third parties overseas. And that was really unpleasant.
Laurel: Unbelievable. Your work just infiltrates every corner, which I think is the whole point, right? As we become smarter and want to do things faster from anywhere at any time, we actually do have to think about security first and how we’re layering it into pretty much every action.
Ken: I think it’s for reasons of efficiency, economy, and just working smart. Technology is getting into everything that we do, and the problem is it’s being rushed in without sufficient thought about security. So we’re making stuff smart and making stuff vulnerable.
Laurel: So can I connect that to the supersystemic flaws? When we do that, and then we have mass distribution of anything, whether it’s an app or a television set, what happens? How do you then backtrack a vulnerability when it’s everywhere?
Ken: We spoke briefly about API security, so every time you make something smart, you’re probably going to hook it up to a mobile app. That’s probably the purpose of making it smart, so you make it easy to administrate remotely so you can see your stuff at home, in your office, your CCTV from anywhere in the world. And to do that you’re going to need an API.
And the big problem we keep finding is that all these smart products have APIs, and they’re not sufficiently secure. It’s one thing to go up to one smart device in one person’s home or office and hack it. What really bothers me is when we discover that remotely we can hack all of the devices. All of a sudden you’re thinking about a smart vehicle, and you discover a vulnerability in the API that allows you to stop every vehicle in the fleet that uses the app. You’re bringing entire fleets of vehicles to a halt.
And that is what we mean by supersystemic. It applies across entire ecosystems of devices. Often coming back to one single platform provider that’s maybe provided services to lots and lots of different brands and vendors. One vulnerability affects everything.
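Munro’s fleet example can be sketched to show why a single authorization bug scales so badly. Everything here is hypothetical: the fleet store, the handler name, and the sequential vehicle IDs are invented for illustration, but the pattern (an authenticated API call that never checks ownership, combined with guessable IDs) is the supersystemic shape he describes.

```python
# Hypothetical illustration of a "supersystemic" flaw: one missing
# ownership check in a shared backend, plus enumerable IDs, lets a
# single authenticated account act on every device in the fleet.

FLEET = {vid: {"owner": f"user{vid}", "engine": "running"}
         for vid in range(1000, 1005)}

def api_stop_vehicle(session_user: str, vehicle_id: int) -> None:
    """Vulnerable handler: authenticated, but never checks ownership."""
    # missing: if FLEET[vehicle_id]["owner"] != session_user: reject
    FLEET[vehicle_id]["engine"] = "stopped"

# An attacker with one valid login simply walks the ID space.
for vid in sorted(FLEET):
    api_stop_vehicle("attacker", vid)

print(sum(v["engine"] == "stopped" for v in FLEET.values()))  # 5
```

Because the flaw lives in the shared platform rather than in any one device, fixing it server-side fixes every brand and vendor built on that platform at once, which is also why a single vulnerability report can affect entire ecosystems.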
Laurel: How many times a day does someone mention, “Are you sure we’re not living in an episode of Black Mirror?”
Ken: Well, oddly enough, one of my first outings on TV was with the producer of Black Mirror. I introduced him to My Friend Cayla many, many years ago. He’s a very, very bright guy. Yeah.
Laurel: And it’s haunted him ever since. Yeah.
Ken: I think so. That’s the problem we have: we’re just rushing. We’re going too fast. Don’t get me wrong, there are some vendors out there who take security very seriously and do it really well, and I commend them. The problem is they’re very much the exception to the rule. I wish that every vendor was as committed, and that’s why I’m saddened to think regulation is the only way, because unless we force manufacturers to start playing like the good guys, I don’t think there’s going to be any financial or commercial incentive for them to do so. There’s no product labeling. There’s no way of working out which products are more secure than others, which is why, sadly, I think regulation is the answer.
Laurel: Interesting. So you’d almost be like fair-trade labeling for security.
Ken: Yeah, there’s been quite a bit of work on that in various countries. I know in the US there’s been some work, and certainly in Europe as well, but the challenge is, how do you provide a single label that tells you how secure or not a product is? That’s a really difficult thing to do, because security is so multifaceted. Think about the average smart device: it’s going to have some chips on there. It’s going to have some firmware in there. It’s going to have some radio-frequency stuff like Wi-Fi or Bluetooth, maybe ZigBee or Z-Wave. It’s going to have an API, a mobile app, a cloud infrastructure, a telematics platform. It’s so complex that it makes labeling really, really difficult.
Laurel: What happens in highly regulated industries? I read a piece about satellites. How do you hack a satellite or secure a satellite?
Ken: It has been done, and in fact I’m very involved in the Defcon aviation and aerospace village. We had our first outing in the summer last year, and since then we’ve had some interest from satellite manufacturers and operators. We’re hoping to build a platform at Defcon for people—researchers, interested parties—to help assure the security of satellites, which I think is great. It’s involving the whole community to help the whole industry improve their cybersecurity.
Now there’s one example of a regulated industry, I think I’d like to hold up as a good example. So in the US, smart health-care devices, certainly a subset of those are regulated and approved by the Food and Drug Administration. And that was one of the very first areas where regulation was brought in that’s made a real difference. And I congratulate that market for making big steps towards cybersecurity. Why? Well, because we’re talking about life-and-death devices. We’re talking about devices that keep us alive. So again, a great case for bringing in standards, improving everyone’s game, and showing that actually we can make secure devices that keep us safe and keep us alive.
Laurel: I feel like we heard a story a couple of years ago about pacemakers being hacked. So this seems like a good reason for regulation.
Ken: Yeah, it was really great that the FDA stepped up there, actually. Was it a lack of attention to detail? To give many of the manufacturers the benefit of the doubt, I think it was probably that they just didn’t think deeply enough about the cybersecurity. You go and source some components; you go and source some development, and all too often assumptions are made about whether devices are cyber secure. And I think what the regulation has done is encourage people—and force people—to actually go and ask those difficult questions: to make sure that the developers you’re using do get cyber, and make sure the chipsets you’re using do have reasonable security features like a trusted execution environment, secure storage, a good source of entropy—all those useful things that if you don’t ask the questions or specify in your procurement, you’re not going to get.
Laurel: So tell me a story about one of your favorite pen-testing exploits.
Ken: Oh, my gosh. Where do I start?
Laurel: Well, I kind of gave a hint at the beginning of the episode with the oil rig.
Ken: Ah, so more recently we’ve been starting to look at maritime. So we’ve been looking at shipping. It’s an area that’s had really little attention from the cyber industry over the last few years. And the reason for it is primarily that vessels were offline. You had shore Wi-Fi and 4G when you were close to shore, but as soon as that vessel set sail, you were effectively offline.
Yes, you could get a sat phone, but they’re expensive, and you could use fleet broadband satcoms, but the data usage is frighteningly expensive. So to all intents and purposes, a vessel, once it set sail, was offline. And that all changed two, three, four years ago with the advent of VSAT.
All of a sudden satcoms became affordable, and it meant the vessels could start being monitored for efficiency, fuel utilization, routing—all these important things that save everybody money. And it also meant that you could hire the best crew, because they wanted data plans, they wanted social media access, and you could give it to them. So you’ve got the best people working on the best ships, delivering the best efficiency. Fantastic.
But along the way, no one thought about cyber. So ships were being hooked up to the internet with zero thought to the safety and security of those vessels. So what we’ve been doing over the last couple of years is working on vessels with forward-thinking operators, saying, “OK, let’s check this out.”
And a great example was an exploration drilling rig we looked at. It was an organization that would do early-stage well exploration. It was a self-propelled rig with satellite VSAT connectivity, and we were asked to see if we could, in theory, interrupt its ability to drill. And what we found was a lack of effective segregation between the various networks. We could access drilling control networks, we could access corporate networks, we could access crew networks, we could see the internet—we could see everything.
And whilst the organization had made some pretty reasonable steps to secure stuff, what happens is that the crew in operation often break down that segregation. Why? Because someone wants to stream Spotify whilst working the drill head, so they install a Wi-Fi router to get Spotify on their headphones out there on the deck. And then we discovered random things like remote access being plumbed in, accessible from the public internet. What it essentially meant is that over the public internet, without any authentication, we could take control of engines and other critical systems on ships and rigs around the world.
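The segregation failure Munro describes is the kind of thing a pen tester verifies with a simple reachability check: from a supposedly isolated network, can you open connections to control-system services at all? Below is a minimal sketch of such a check in Python. The host addresses, names, and network layout are entirely hypothetical; the ports (502 for Modbus/TCP, 102 for Siemens S7comm) are the standard ports for those industrial protocols.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical control-system endpoints. On a properly segregated
# vessel network, none of these should be reachable from the crew LAN.
CONTROL_SYSTEMS = {
    "drilling-plc": ("10.0.10.5", 502),   # Modbus/TCP (example address)
    "engine-ctrl":  ("10.0.10.6", 102),   # S7comm (example address)
}

def check_segregation() -> dict:
    """Map each control-system name to whether its service port is reachable."""
    return {
        name: port_open(host, port)
        for name, (host, port) in CONTROL_SYSTEMS.items()
    }
```

A real engagement would of course go much further (protocol-level probing, authentication testing), but even this trivial scan, run from the crew Wi-Fi, is enough to demonstrate whether the segregation between networks actually holds.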
Literally turning engines off, turning engines on, affecting steering gear, potentially crashing ships by interfering with navigation systems if we so wanted. Now, there’s been very little criminal activity in this space, but it is starting to emerge. We’re starting to see legal cases pop up where, if you analyze the court papers, you’ll see evidence of attacks.
There was an attack against a superyacht called Lady May a couple of years ago that was investigated by the FBI, and it looked like certain control systems on the vessel were tampered with by persons unknown, possibly foreign powers. That’s made the press already, but there are an increasing number of anecdotal stories, and stories starting to make the industry press, about vessels either experiencing things like ransomware on their navigational systems or direct and deliberate hacks.
Ken: Yeah. The sad thing is most of this reads like the script of Hackers, the movie. So yeah, Hackers, 20-plus years ago, foretold the future.
Laurel: Doesn’t science fiction often? Look, Ken, thank you so much for joining me for this amazing conversation on Business Lab. I really appreciate it.
Ken: Thank you. Lovely speaking to you.
Laurel: That was Ken Munro, founder and partner at Pen Test Partners, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. Also, thank you to our partner, Microsoft Security.
That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at dozens of live events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
Security threats are everywhere. That’s why Microsoft Security has over 3,500 cybercrime experts constantly monitoring for threats to help protect your business. More at microsoft.com/cybersecurity.