Connected devices are transforming our home lives, and are now starting to have a real impact on our healthcare. Thanks to advances in health tech, medical professionals now have reliable remote patient monitoring, automated drug dose delivery, live updated patient records, patient and equipment tracking, and a plethora of other functionalities that not only optimise the care pathway but also improve the level of care provided by medical professionals.
But while health tech manufacturers have been forging ahead developing a multitude of Internet of Things (IoT) devices to support medical professionals, one key area has been neglected during development: the security of those devices.
Time to disconnect?
The WannaCry ransomware attack brought UK hospitals’ administration processes to their knees, but the future security issues our healthcare services face due to connected devices could have a significantly more direct and sinister patient impact.
This insecurity in health tech isn’t a new problem. Back in 2013, former US Vice President Dick Cheney revealed that he and his cardiologist had decided to disable the wireless access feature in his connected pacemaker, for fear that it could be remotely accessed and triggered by an adversarial individual. More recently, manufacturers have issued recalls for both insulin pumps and pacemakers due to security flaws that could potentially have life-threatening implications.
While simply disconnecting these products might provide a quick fix, it’s not going to solve the bigger underlying problem. Currently, many IoT products are receiving software patches to enhance security, for example enabling encryption for communications, and organisations are developing dedicated IoT antivirus solutions. But despite this being a step in the right direction for the broader connected device market, delivering security for health tech comes with slightly more complications.
Take implantable devices such as pacemakers, like the one Dick Cheney had. Even though delivering a software patch to the device over its wireless connection is possible, the consequences of that patch failing, or disrupting how the device operates, are far more dangerous than they would be in a connected TV. And relying solely on patching software insecurities leaves people vulnerable to a ‘zero day’ or targeted malicious attack, in the same way that those who don’t regularly update their phone are vulnerable to malware. In fact, any device that can be triggered to accept a downloaded firmware update is open to abuse, unless the update process itself is fully secured.
People need to be able to trust their healthcare providers. By the same token, healthcare providers need to be able to trust the medical devices they use. The only real way to ensure trust all around is by health tech manufacturers taking a secure by design approach to their product development.
Secure by design means making security the number one priority during the design phase, and keeping it there throughout the whole lifecycle of the device. Without security, none of the other safeguards designed to prevent harm can be assured. It has become so crucial for IoT products that the UK government introduced a Code of Practice to help drive manufacturers, retailers and other industry stakeholders to boost the security of connected devices. It is comprehensive yet easy to follow, providing 13 clear guidelines for manufacturers to adhere to.
The Code of Practice is part of the five-year £1.9 billion security initiative – the Secure by Design review – that was created in collaboration with consumer device manufacturers, retailers and the National Cyber Security Centre to address the glaring vulnerabilities found in smart devices. While it was developed with a focus on consumer products, the advice it provides is equally valid for those developing connected devices for healthcare.
One of the biggest things which can help those trying to take a secure by design approach is implementing a hardware root of trust.
In general, there are good reasons why hardware-based security is considered more secure than a purely software-based approach. Delivered at chip level through a secure core, the hardware root of trust separates general processing from secure processing – providing a separate processor element dedicated to security tasks. A hardware root of trust, which uniquely and immutably identifies a device and can be used as a cryptographic seed, permits a manufacturer to manage the access rights for the devices it produces, and to assign those access rights to legitimate parties. Using cryptographic functions, a secure line of communication can then be established between the device and those who need access to it, preventing remote access by unapproved individuals. By eliminating the risk of unauthorised communication and access, and by ensuring devices only accept digitally signed updates that are specific to the device, trust can be assured both in the functions of the device and in the data it reports.
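To make the idea concrete, here is a minimal sketch of how a device-unique key derived from a root-of-trust seed could bind a firmware update to one specific device. All names and values are hypothetical, and real deployments typically use asymmetric signatures (the manufacturer signs with a private key the device never holds); this symmetric HMAC version simply illustrates the binding, with the seed standing in for the secret held inside the secure core.

```python
import hashlib
import hmac

# Hypothetical values for illustration only: on a real device the seed
# lives inside the secure core and never leaves the chip.
ROOT_OF_TRUST_SEED = bytes.fromhex("1337c0dedeadbeef" * 4)
DEVICE_ID = b"implant-serial-00042"


def device_update_key(seed: bytes, device_id: bytes) -> bytes:
    """Derive a per-device firmware-update key from the immutable seed
    (a simplified stand-in for a proper KDF)."""
    return hmac.new(seed, b"fw-update|" + device_id, hashlib.sha256).digest()


def sign_firmware(seed: bytes, device_id: bytes, firmware: bytes) -> bytes:
    """Produce an update tag bound to this exact device and image."""
    key = device_update_key(seed, device_id)
    return hmac.new(key, firmware, hashlib.sha256).digest()


def verify_firmware(seed: bytes, device_id: bytes,
                    firmware: bytes, tag: bytes) -> bool:
    """Accept the update only if the tag matches this device and image.
    compare_digest avoids timing side channels."""
    key = device_update_key(seed, device_id)
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A tampered image, or a tag generated for a different device, fails verification, which is exactly the property the article describes: updates are only accepted when they are digitally signed for that specific device.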
Connected healthcare devices: are they as safe as they could be?
As our world gets smarter around us, the potential benefits connected technology can provide are huge. But if we really want to see the connected age deliver on its healthcare promise, then we need to make sure that security is front and centre of the conversation. If we don’t, the trust needed for the healthcare sector to benefit from connectivity, big data and automation will be lacking, and patients won’t see the benefits they should.
Scott Best, Technical Director of Anti-counterfeiting Technology, Rambus
Image Credit: Lightpoet / Shutterstock