By Mark Shortt

The cybersecurity firm Keyfactor started in 2001 as a consulting effort that would help enterprises get their digital certificates under control by properly structuring and managing their public key infrastructures (PKI). Several years later, Keyfactor’s founders adapted their business model as they began to develop software.

“That’s when we built our platform, which is PKI-as-a-Service, or PKI on premise, that we would design and build for a client,” said Ellen Boehm, vice president of IoT strategy and operations at Keyfactor, in an interview with D2P. “We also built a software tool to interface and be able to view all the certificates that are attached to devices within an organization.

“So, we started by working within the enterprise IT organization, and enterprise PKI. But very recently, as more connected devices are needing to embed security into their identity, we have leveraged that same kind of platform and PKI to the IoT devices.”

Since the onset of the COVID-19 pandemic, manufacturers across the nation and world have rapidly re-tooled to meet urgent demands for ventilators, blood analyzers, and various types of diagnostic and support systems. Many of these systems use sensors and network interfaces to collect and communicate data, placing them squarely in the category of medical IoT (internet of things) devices.

Like any connected devices, medical IoT devices pose inherent security risks. Hardening the security of these devices is critical because of the all-too-real risks to patients’ lives.

“Devices that relay regulated signals, like pacemakers or insulin pumps, can be intercepted,” Boehm said in an emailed statement. “Attackers can change the data or alter the device’s firmware and software. For example, if an attacker accesses an insulin pump, they can alter the data that a doctor receives, potentially changing dosage requirements and impacting patient safety. Manufacturers need to plan and properly secure devices that are fast becoming more complex machines with complex functions.”

Boehm said the pandemic is changing the way we work today and is likely to bring longer-term changes as well. Manufacturers must pivot their capabilities, including how they design, develop, and test—remotely.

“These new processes may mean more automation on factory floors and the introduction of smart control systems that reduce human interaction. If you’re going to be a viable device business, your workforce must determine how they can maintain productive operations when they can’t be physically present in manufacturing facilities. This is the reality today, and companies will have to accept it as a long-term scenario.”

Ellen Boehm spoke with Design-2-Part recently about some of the challenges that medical device manufacturers face when developing IoT devices, as well as tools they can use to build security into them. Following is a transcript of our conversation, edited for length and clarity.

 

D2P: What can design engineers do to ensure they design cybersecurity into connected medical devices at the beginning of a design cycle?

Ellen Boehm: A design team typically understands what functionality the device needs to have, whether it’s regulating some level of medicine, or heart [rate or rhythm], or stimulating the brain. All of that functionality needs to be designed, and it occurs mostly within the circuitry and the physical-mechanical functions that happen.

But the other piece is, these devices are now also becoming part of a connected system, so they likely need to interact with a mobile app or talk back to a gateway or a cloud application somewhere. In order for that to happen securely, you need to create secure channels between the end device and all these other endpoints that the device might talk to. And the best way to do that is by using asymmetric certificates.

So, before you [a designer] go and pick whatever microprocessor you’re using, whatever communication chip you’re using, whatever secure element for storage you’re using, think about what endpoints are going to be in that system that you need to connect to, and how you want to do authentication and encryption. Because if you don’t do that upfront, you might run out of space or processing power and not have enough room to put in the right level of crypto (cryptography) that you need or want for your device.
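
To make that concrete, the sketch below shows, in rough Python (using the open-source `cryptography` package, not any particular vendor SDK), how a design team might generate a per-device key pair and a certificate signing request, the raw material of an asymmetric device identity. The device serial and organization name are hypothetical placeholders.

```python
# Hedged sketch: generate a per-device key pair and a certificate signing
# request (CSR). Assumes the `cryptography` package; names are illustrative.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

DEVICE_SERIAL = "pump-0001"  # hypothetical unique device identifier

# The private key stays on the device, ideally inside a secure element or TPM.
private_key = ec.generate_private_key(ec.SECP256R1())

# The CSR carries the public key and the device identity to the PKI for signing.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, DEVICE_SERIAL),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Medical Device Company A"),
    ]))
    .sign(private_key, hashes.SHA256())
)

with open("device.csr.pem", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```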

 

D2P: What are some of the challenges the pandemic presents that could, potentially, make the process of developing medical IoT devices more difficult?

EB: When it comes to designing the devices, a lot of our customers are currently working remotely. They are not able to get into their labs or their factories to do tests and test runs. So, we’re enabling them to do virtual applications.

For example, the way our technology works is that we provide an SDK (software development kit) that can get embedded into the actual device. We can simulate a physical device so that, as designers are trying to figure out how to build in security, they can use our platform to issue fake certificates, embed the cryptography into their device, and test that whole process out. That can all be done even while you’re sitting at home, or wherever you are when you’re not in the office.

So, as opposed to being in their lab with access to their physical devices, designers have had to adjust to testing virtually against simulated devices.
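
As a rough illustration of that kind of virtual test, the sketch below stands up a throwaway test root CA and issues a short-lived certificate to a simulated device. It uses the open-source Python `cryptography` package rather than any vendor’s SDK, and all names and lifetimes are illustrative.

```python
# Minimal sketch of issuing a "fake" test certificate to a simulated device.
# For local experimentation only; never use a throwaway CA like this in production.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.utcnow()

# Throwaway test root CA.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Test Root CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name).issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=30))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Simulated device: generate a key and receive a short-lived test certificate.
device_key = ec.generate_private_key(ec.SECP256R1())
device_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "sim-device-01")]))
    .issuer_name(ca_name)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=7))
    .sign(ca_key, hashes.SHA256())
)
print("Issued test cert for:", device_cert.subject.rfc4514_string())
```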

 

D2P: To what extent have medical device engineers already been attuned to the issue of cybersecurity?

EB: A lot of this has been driven by the FDA, which has [created] a lot of awareness around cybersecurity. In the past seven to 10 years, they’ve put out papers with guidance around how manufacturers should handle cybersecurity.

That guidance talks about several things. For example, using unique identities for every device, and having a way to update those credentials in the event that something gets compromised. Or having a way to update your cryptography if the algorithm you have chosen becomes weakened over time.

That’s just the nature of how computing power is advancing. When you’re developing something, you pick the best available library to encrypt your device. But we know that over time, somebody will be working on trying to hack that, or quantum computing is going to make that happen much more quickly: in a matter of hours, versus days, weeks, or years.

So that’s why we need to have a plan for being able to update these things over time. The FDA won’t approve your device unless you have a plan to be able to update the certificates on it. If you’re making a piece of equipment that’s going to last for 10 to 15 years, you can’t just have one certificate that’s valid for that long. You need to be able to swap out certificates every couple of years because things are going to change. And that’s just the best practice.

So, I think medical device makers have been given guidance that they’re working towards. But, as with anything, manufacturers are still early on in figuring out the best way, the most optimal way to embed certificates into [devices].
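
As a hedged sketch of the rotation practice described above, the snippet below checks how much validity remains on a device certificate and flags it for re-issue once it is inside a renewal window. The 90-day window and file names are illustrative assumptions, and the actual re-enrollment step is stubbed out.

```python
# Sketch of a certificate-rotation check: renew well before expiry.
import datetime
from cryptography import x509

RENEWAL_WINDOW = datetime.timedelta(days=90)  # illustrative policy, not a standard

def needs_rotation(cert_pem: bytes) -> bool:
    """Return True if the device certificate is inside its renewal window."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    return remaining < RENEWAL_WINDOW

with open("device.cert.pem", "rb") as f:
    if needs_rotation(f.read()):
        # In a real system this would generate a fresh key pair and submit a new
        # CSR to the PKI (for example over EST or SCEP); stubbed out here.
        print("Certificate inside renewal window; request re-issue now.")
```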

 

D2P: Could you give us a sense of the range of medical devices that your platform is suited to, and those that it may not be suited to, if any?

EB: Because of the way we have designed our platform and our agent, which is essentially an SDK that you can build into the firmware of your device, it can have a very small footprint, physically, on the device. We’ve worked with customized pacemakers, and we’ve worked with insulin pumps, which are more personalized, small-sized things. And we’ve worked all the way up to engine control units in cars, laser cutting equipment, and things like that.

We can scale from small things to big things. But if you can do the small thing, then the big thing is not an issue. We can run on embedded Linux; we can run Java, Windows. So, depending on what operating system you have in those bigger devices, we have ways of connecting to them.

 

D2P: Keyfactor’s website describes Keyfactor Control as “an end-to-end secure identity platform for connected devices” that is said to make it easy to build in “high-assurance secure identity at each step of the IoT device lifecycle.” How does Keyfactor Control allow product development teams to use different tools and applications to accomplish this?

EB: Keyfactor Control is most closely tied to creating the certificate that gets paired with the device at the embedded level. When you think about the security stack throughout the IoT device, there’s the crypto libraries, and the PKI that creates the certificates that go on the device itself. It’s also likely there’s a trusted platform module where the private key is stored in the physical device. That’s one layer.

Another tool that people use is—one level up—the IoT development platform. [Some of the well-known] IoT development platforms are great tools for initial onboarding and device management. They give the manufacturer an overall view of all the things that they’re making and producing, and where they go.

But when you hook into a PKI provider, such as Keyfactor, you have your certificates being created from a PKI that has been designed specifically for your company. What we do is create an offline root that is dedicated only to your company.

Let’s say you are “Medical Device Company A.” We build out a design and policies for your certificates based on that offline root, which we protect and keep under very secure controls. That is a differentiator for us versus some other solution providers that talk about roots of trust being built into the devices. But the question I like to ask is, ‘Do you know, really, where that root is being hosted? Is it somehow being shared with another customer? And is it something that you really own?’

What we build for the customer is something that the customer owns. If they choose to leave Keyfactor and have somebody else go and manage their PKI, they can take that root with them because they own it. So then, all of the identities and hierarchy for all of their certificates and issuing certificate authorities are built around that—they stay purpose built for that.

 

D2P: What are the most important things that engineers need to do to maintain cybersecurity throughout the product’s lifecycle? What should they be paying attention to?

EB: The first thing is firmware updates. When you push an OTA (over-the-air) update with new code, functionality, or bug fixes, we strongly recommend that you sign that firmware image with a code signing certificate.

Many companies already do this. They have code signing certs [certificates] that they buy and use to sign. But the danger is that if you don’t protect that code signing certificate in the right way, and allow your engineers and developers to access it only in a very controlled manner, the signing cert can be compromised. It could be stolen and then used to sign firmware in your name, so that the firmware looks valid.

So, that’s another part of the Keyfactor platform: We manage code signing certificates in an HSM (hardware security module), and we provide an interface or API (application programming interface) to integrate into your development environment. So, when you’re pushing out or signing code, you essentially check out the certificate for a certain period of time. You can make sure that only certain developers can access it; you can make sure that it’s only accessed a set number of times; and then there’s a whole audit trail.

So, that’s one thing that we want to make sure designers and developers think about: when they’re pushing over-the-air (OTA) updates, making sure that the code is signed and that they’re keeping track of that code signing certificate. There might only be a handful of code signing certificates in a company, so they’re prime targets for hackers.
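
As a minimal illustration of the build-side step, the sketch below signs a firmware image with a code signing key and writes a detached signature to ship alongside the OTA package. It uses the open-source Python `cryptography` package; in practice the private key would stay in an HSM and be checked out through a controlled, audited interface rather than read from a file, and all file names here are illustrative.

```python
# Sketch: sign a firmware image before pushing an OTA update.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# In production this key lives in an HSM; a local file is used only for illustration.
with open("codesign_key.pem", "rb") as f:
    signing_key = serialization.load_pem_private_key(f.read(), password=None)

with open("firmware.bin", "rb") as f:
    firmware = f.read()

# Detached ECDSA signature shipped alongside the image in the OTA package.
signature = signing_key.sign(firmware, ec.ECDSA(hashes.SHA256()))
with open("firmware.sig", "wb") as f:
    f.write(signature)
```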

But then the other thing is what I was mentioning about the updatability of certificates and crypto libraries over time. When you build a piece of equipment, maybe it goes out into a utility substation somewhere, and it’s going to stay there for 20 years. It’s a rugged piece of equipment, and you’re going to be giving it firmware updates, but in that same vein, you should also be prepared to rotate out the credentials and certificates, just for additional protection. Then, in the event that something did get hacked, you’re able to swap everything out quickly, so that you don’t have any downtime and your device doesn’t go offline. So, that’s the second use case.

And thirdly, I would mention secure booting. It’s about not booting up the device, or a certain application segment, until the signature has been verified and traced back to the root of trust. So, in combination with secure code signing, that’s one of the things to validate.
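
The sketch below models that boot-time decision in simplified form: verify the application image’s signature against a public key pinned at manufacture, and refuse to boot if verification fails. Real secure boot lives in boot ROM or a bootloader written in C, not Python; this is only a model of the logic, and the file names are illustrative.

```python
# Illustrative model of the secure-boot check (not real boot ROM code).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def secure_boot(image_path: str, sig_path: str, pinned_pubkey_pem: bytes) -> bool:
    """Return True only if the application image verifies against the pinned key."""
    pubkey = serialization.load_pem_public_key(pinned_pubkey_pem)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        pubkey.verify(signature, image, ec.ECDSA(hashes.SHA256()))
        return True   # signature chains back to the root of trust: proceed to boot
    except InvalidSignature:
        return False  # halt, or fall back to a known-good recovery image
```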

 

D2P: In some ways, software seems to define a medical IoT device even more than the hardware. With all the influence that software has, is there anything about your software platform that a device engineer should be aware of when designing the hardware?

EB: My first thought would be around where you store the private key.

The way that asymmetric certificates work is that you have a private key, and then you have a public key that is mathematically related to the private key and generated from it. The public key is the thing that can be exchanged out into the world and passed to other parties. A recipient can use it to encrypt a message and send it back, and only the original device that holds the private key can decrypt it and say, ‘Okay, yes, I trust this party.’ That’s the value of having two different keys, versus symmetric encryption, where two devices share the same key. It’s easier to intercept one of those shared keys and then create a connection.
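
A toy sketch of that idea, using RSA from the Python `cryptography` package purely for illustration: the public key can be handed out freely, but only the holder of the private key can decrypt a challenge encrypted to it, which is what lets the other side decide to trust the device.

```python
# Toy demonstration: only the private-key holder can recover the challenge.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The private key lives on the device; the public key can be shared freely.
device_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_public_key = device_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# A peer encrypts a challenge to the device's public key...
challenge = b"prove you hold the private key"
ciphertext = device_public_key.encrypt(challenge, oaep)

# ...and only the device can recover it, proving possession of the private key.
recovered = device_private_key.decrypt(ciphertext, oaep)
assert recovered == challenge
```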

In the asymmetric case, when you have that private key, that is like the keys to your kingdom. If you’re a hardware designer, you have to make sure you have a place to store it. So, do you have a secure element? There are a variety of trusted platform modules that you can look up online. With those types of hardware elements, it’s important to understand whether you have space on the board to put them and enough power to run them, and to architect that into your circuit design.

If you do that up front, then you can work around those constraints, and you know you’ve put in enough of a hardware platform to handle the security and any additional software libraries that you might need to run your security. You have to pick a TLS (transport layer security) library for asymmetric encryption. A popular one is OpenSSL. There are also companies that we work with directly; wolfSSL, for example, is a TLS library that is very small, lightweight, feature-rich, and optimized for embedded platforms.

So, it’s important to make sure that you pick the right crypto library provider and, then, the hardware chip to be able to store those private keys.
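
As a hedged sketch of where those pieces meet, the snippet below uses Python’s standard `ssl` module to open a mutual-TLS connection in which the device presents its certificate and private key as its identity. The host name, port, and file names are illustrative assumptions; an embedded device would do the same thing with a C library such as OpenSSL or wolfSSL.

```python
# Sketch: a device-side mutual-TLS connection to a backend.
import socket
import ssl

# Trust store for the backend, plus the device's own certificate and key.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="backend_root_ca.pem")
context.load_cert_chain(certfile="device.cert.pem", keyfile="device.key.pem")

HOST = "iot.example-backend.com"  # illustrative endpoint
with socket.create_connection((HOST, 8883)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated", tls.version(), "with cipher", tls.cipher()[0])
        tls.sendall(b"hello from the device\n")
```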
