This story was produced by Kaiser Health News, an editorially independent program of the Kaiser Family Foundation.
Jay Radcliffe breaks into medical devices for a living, testing for vulnerabilities as a security researcher.
He’s also diabetic and gives himself insulin injections instead of relying on an automated insulin pump, which he says could be hacked.
“I’d rather stab myself six times a day with a needle and syringe,” Radcliffe recently told security experts meeting near Washington, D.C. “At this point, those devices are not up to standard.”
Concern about the vulnerability of medical devices like insulin pumps, defibrillators, fetal monitors and scanners is growing as health care facilities increasingly rely on devices that connect with each other, with hospital medical record systems and, directly or not, with the Internet.
Radcliffe made headlines in 2011 by showing a hackers’ convention how he could exploit a vulnerability in his insulin pump that might enable an attacker to manipulate the amount of insulin pumped to produce a potentially fatal reaction. Now he talks about going without a pump to raise awareness about the potential for security lapses and the need for better engineering.
While there have been no confirmed reports of cyber criminals gaining access to a medical device and harming patients, the Department of Homeland Security is investigating potential vulnerabilities in about two dozen devices, according to a Reuters report. Hollywood has already spun worst-case scenarios, including a 2012 episode of the series Homeland portraying a plot to kill the vice president by manipulating his pacemaker.
“The good news is, we haven’t seen actual active threats or deliberate attempts against medical devices yet,” said Kevin Fu, a University of Michigan researcher who has made his career testing the vulnerability of medical systems.
The bad news is that hospital medical devices may be vulnerable to hackers simply because they can be the weak link that gives a criminal access to a hospital’s data system — especially if the devices haven’t been updated with the latest security patches, said Ken Hoyme, a scientist at Adventium Labs, a cybersecurity firm in Minneapolis.
In the real world, he said, a hacker is more likely interested in stealing records he can sell than in harming a patient.
“There are not that many bad…guys whose goal in life is to go and randomly mess with patients in hospitals,” Hoyme said. “They want money, not to shut off the ventilator of a particular patient.”
Hospitals are targets because they collect so much data, from patients’ Social Security numbers and financial information, to diagnosis codes and health insurance policy numbers.
Radcliffe estimates that medical identity information is worth 10 times more than credit card data: about $5 to $10 per record on the black market, compared with 50 cents per account for stolen credit card numbers.
Crooks can use it to apply for credit, file fake claims with insurers or buy drugs and medical equipment that can be resold.
And unlike the victims of credit card theft, those with stolen medical identities might not realize the information has been taken for months or even years, giving the thieves more time to use it.
New FDA Guidelines
Yet there are few cybersecurity standards for medical devices.
In October, the FDA issued guidance outlining what security features developers should bake into their products when seeking approval for a new device.
The guidelines, which aren’t binding, say manufacturers should detail the cybersecurity threats they considered and create better ways to detect when a device might have been hacked.
They should also build in protections, such as limiting access to authorized users and restricting software updates to authenticated code.
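In practice, restricting updates to authenticated code usually means a device refuses any firmware image that is not cryptographically signed by its manufacturer. The Python sketch below illustrates one way such a check could work; the Ed25519 key, the function name and the use of the third-party cryptography library are assumptions for illustration, not anything the FDA guidance or a particular vendor specifies.

```python
# A minimal sketch of update authentication, assuming an Ed25519 signing key
# and the third-party "cryptography" library; names here are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_authentic_update(firmware_image: bytes, signature: bytes,
                        manufacturer_pubkey: bytes) -> bool:
    """Accept a firmware image only if it carries a valid manufacturer signature."""
    public_key = Ed25519PublicKey.from_public_bytes(manufacturer_pubkey)
    try:
        # Raises InvalidSignature if the image was tampered with or forged.
        public_key.verify(signature, firmware_image)
        return True
    except InvalidSignature:
        return False
```

A device following this pattern would run the check before installing an update and discard anything that fails it.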
Some security experts say the guidelines are a good start but should be binding. Others fear that giving them the force of regulation could do more harm, because the requirements would quickly become outdated.
Nonetheless, the FDA’s guidance has, in effect, changed the conversation among device makers from, “‘Do I believe this is a real threat?’ to ‘What do I have to do to satisfy the FDA?’” said Hoyme.
By the end of the year, the agency is expected to issue similar recommendations for devices already on the market.
Common Vulnerabilities
One reason many existing devices might be vulnerable is that they run on defunct operating systems like Windows XP, which Microsoft stopped supporting in April, meaning there won’t be any new security patches. Other, newer devices may ship with built-in passwords that are difficult to change. Gaining access to such devices can be fairly easy, which could make them more vulnerable to attack, researchers say. In addition, a password is sometimes intentionally disabled so that medical staff can reach the device quickly in an emergency.
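One reason built-in passwords are such a weak point is that factory defaults are widely known and easy to enumerate. The sketch below is a hypothetical audit helper, assuming a list of known default username/password pairs and a try_login function supplied by the reader; it is illustrative only and does not reflect any hospital’s actual tooling.

```python
# A hypothetical default-credential audit over networked devices. The device
# list and the try_login() callable are assumptions supplied by the caller.
KNOWN_DEFAULTS = [("admin", "admin"), ("admin", "1234"), ("service", "service")]

def find_default_credentials(devices, try_login):
    """Return (device, username) pairs that still accept a factory-default login."""
    flagged = []
    for device in devices:
        for username, password in KNOWN_DEFAULTS:
            if try_login(device, username, password):
                flagged.append((device, username))
                break  # one working default is enough to flag the device
    return flagged
```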
Hackers can also get into some inadequately protected hospital systems when staff members click on links in emails, not knowing they contain malicious code. Once transmitted to a hospital’s intranet, that malware could find its way into unprotected device software and cause malfunctions, said Hoyme and Fu.
“If cyber criminals decide they can hack into a device to get health records, they won’t think about whether they’re messing with device performance: They’re going after the money,” Hoyme said.
Security experts warn that some of the same design flaws that make medical devices vulnerable would also make breaches hard to track.
“If your iPhone is compromised, it’s a lot more straightforward for someone to determine if it’s been tampered with. We’re not there yet” with medical devices, said Billy Rios, a former Google software engineer turned security consultant.
He describes how he was able to buy a secondhand EKG machine, used to measure the heart’s electrical activity, for just $25 online. Some infusion pumps and patient monitoring systems go for less than $100. That makes devices more readily available to those who want to figure out vulnerabilities to exploit.
“The effort required is so much lower,” he says. “That’s not a good position to be in.”
What Hospitals Are Doing
Hospitals are loath to talk about device security publicly, but many are working to strengthen their systems.
In a two-year test of information security, experts working for Essentia, a large Midwestern health system, found that many devices were hackable. For instance, they found that settings on drug infusion pumps could be altered remotely to give patients incorrect doses, that defibrillators could be manipulated to deliver random shocks and that medical records could be changed.
Stephen Curran, acting director of the Division of Resilience and Infrastructure Coordination with the Department of Health and Human Services, could not say how many facilities have a chief security officer or someone in charge of cybersecurity. But even small facilities have some relatively simple options for boosting the security of devices on their networks, he said, including “routine backups and patching of the systems and the use of anti-virus firewalls.”
Still, while “we definitely see a trend in hospitals to improve their security,” says Mike Ahmadi, global director of critical systems security at cybersecurity firm Codenomicon, vendors have to do more to engineer security.
“The bigger issue is that vendors are not held accountable for writing insecure code,” says researcher Rios. “There’s no incentive…so they don’t invest.”
Pressure On Vendors
A few hospitals, including the Mayo Clinic, have started to write security requirements into their procurement contracts.
At the University of Texas MD Anderson Cancer Center in Houston, any new software application has to be approved by the hospital’s security team, headed by Lessley Stoltenberg, chief information security officer.
He says device makers also will have to meet a slew of security requirements: Can the device be encrypted? Is there a unique identification for users? If the vendor is hosting the device, what does their system look like in terms of firewalls and other protections? Will the manufacturer provide up-to-date security patches?
Some companies, like Ahmadi’s Codenomicon, specialize in selling software that detects bugs that could lead to security holes.
While Codenomicon has a number of device makers as customers, those are a fraction of the more than 6,500 medical device manufacturers in the U.S., some of which may not be doing even the most basic testing. Most vendors are small — 80 percent have fewer than 50 employees — and many are startups without the capital to invest in a security expert.
So, could hackers target infusion pumps or ventilators?
“Is it possible?” Stoltenberg mused. “Yes. Is it likely? No. No device in the world is absolutely 100 percent secure.”