“We had a lot of warnings about 9/11 happening before it did. We have the potential for something of that magnitude on the horizon in health care.”
Attorneys with the law firm King & Spalding, whose largest office is in Atlanta and which has medical device company clients in Georgia, issued an alert last month to warn of the threat: “It is only a matter of time before malicious hackers target networked medical devices.”
The health care industry is also expressing concern, though it is not publicly sounding alarms.
There are no recorded patient injuries or deaths tied to cybersecurity problems with medical devices, and the industry says that many such devices have built-in administrative controls to prevent catastrophic events.
Their assurance: With many devices, the last line of patient care is a human being — a doctor, a nurse or another hospital employee.
Some medical device makers and the hospitals that use their tools downplay the threat, though they are tight-lipped, in part, because they don’t want to tip off criminals who might take advantage of security flaws. They also might already have mitigation strategies in place.
“If a risk is identified, we will issue patches (security fixes) for any known issues,” said Micki Sievwright, a spokesperson for St. Jude Medical. This year the company acquired Atlanta-based CardioMEMS, which makes the first FDA-approved implantable heart failure monitoring device.
She added that St. Jude is not aware of any reported security breach “or malicious manipulation of a St. Jude Medical device while implanted in a patient.”
Marie Yarroll, a spokesperson for Medtronic, one of the largest U.S. device makers, said, “We do not disclose specifics related to our security protocols. This is in the interest of public safety. We are not aware of any incidents related to our devices.”
But the news service Reuters reported last month that both companies are among targets of cybersecurity investigations by the U.S. Department of Homeland Security. The companies aren’t accused of any wrongdoing, but the department was investigating potential flaws that could leave their devices vulnerable to an attack.
Critics suggest that industry and government response to the potential threat is late and insufficient. The issue is not whether to use the devices but whether they can be made more secure and how much attention is being paid to that — and how much should be.
“The entire (medical) industry is probably 10 years behind what we see in financial and retail sectors,” said Scott Erven, a medical devices IT security consultant and associate director with the consulting firm Protiviti. “We haven’t historically designed devices for cybersecurity.”
Out of about 1,500 companies that make medical devices that must account for cybersecurity vulnerabilities, Ahmadi said, “there are maybe a total of a dozen medical device companies involved in cybersecurity working groups.
“There is an enormous number of manufacturers that seem to be doing little or nothing at all,” he said.
The U.S. Food and Drug Administration, which has primary responsibility for evaluating the safety and effectiveness of medical devices, says it got the first indications that they are vulnerable to cybersecurity threats in 2003.
Yet it took a decade for the government to act and for the mainstream medical community to take notice.
One reason: medical device innovation has outpaced the government’s ability to take action.
What’s more, if any devices were hacked, manufacturers and health care providers may not even know. That’s because medical devices aren’t traditionally designed with information security in mind, so most don’t contain logs that could capture a cybersecurity event.
But incidents in recent years signaled that such threats are hardly a fanciful notion. Consider:
— In 2007, then-Vice President Dick Cheney, citing security concerns, ordered some of the wireless features on his defibrillator to be disabled. He later told reporters it wasn't necessary for the average person to do the same.
— In 2008, a group from the University of Washington and the University of Massachusetts-Amherst reported they had demonstrated in a lab how implantable cardioverter defibrillators — which, like pacemakers, rely on tiny embedded computers — could be interfered with to shock someone needlessly, or to withhold a shock when the person’s heart needed the boost.
— In 2009, the FBI arrested Jesse McGraw, a 26-year-old hacker who used a hospital’s computer network to bombard and crash the websites of rival hacking groups. McGraw, who worked at the Texas hospital as a security guard, knew his tampering could affect the hospital’s environmental controls, damaging temperature-sensitive drugs and supplies. He was eventually sentenced to prison.
— In 2011 at an industry conference, researcher Jay Radcliffe, a diabetes patient who at the time relied on an implantable insulin pump, showed how to crack into devices like his and dump the pump’s entire insulin payload at once — a dose that could send a patient into shock and ultimately kill them.
— In 2012, the U.S. Government Accountability Office reported that a number of intentional security threats could exploit vulnerabilities in implantable medical devices and called on the FDA to consider the risks in its premarket approval process.
There is still no law or regulation requiring cybersecurity safeguards and validation in medical devices.
But in June 2013, the FDA issued an initial cybersecurity safety communication to device manufacturers and health care facilities with recommendations on evaluating device and network security.
Last month, the FDA issued final guidance on medical device cybersecurity, and regulators held a workshop to push the issue forward.
Estimates are that there are more than 200 medical device companies in Georgia, a number of which stand to be affected.
Some major metro Atlanta hospitals, asked to discuss their level of concern with cybersecurity or the potential threat to medical devices they use, either did not respond or downplayed their role.
However, the American Hospital Association acknowledged the potential threat.
“There even exists the threat of cyber terrorism against a hospital, which might include attempts to disable medical devices and other systems needed for the provision of health care,” an AHA report noted.
Intruders, it said, “may seek to harm patients by remotely disabling or modifying medical devices.”
The devices don’t even have to be directly connected to the Internet to be invaded. They can be connected to a network of computers that can be maliciously tampered with.
Separately, individual devices could be hacked through something as small as a USB port, or tampered with through Bluetooth, radio, Wi-Fi or another electronic connection. Or those machines could be compromised through simple user names and passwords that are hard-coded into a device and widely known within the ranks of device manufacturers.
It would be similar to purchasing a laptop that comes with a user name and password that can’t be changed and would always allow access to employees of the computer manufacturer.
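For readers who work in software, the hard-coded-credential flaw described above can be sketched in a few lines of code. This is a hypothetical illustration — the names, values and logic are invented for this example, not taken from any real device:

```python
# Hypothetical sketch of firmware with a hard-coded "service" credential.
# The vendor backdoor is checked before the hospital's own user database,
# so it works on every unit shipped and cannot be changed or disabled.
SERVICE_USER = "maintenance"   # baked in at build time, same on all devices
SERVICE_PASS = "1234"          # widely known inside the manufacturer

def login(username: str, password: str, user_db: dict) -> bool:
    # Backdoor check: always grants access with the built-in credential,
    # regardless of what the hospital has configured.
    if username == SERVICE_USER and password == SERVICE_PASS:
        return True
    # Normal path: check the hospital-managed account list.
    return user_db.get(username) == password
```

Anyone who learns the built-in pair — a former employee, a leaked service manual — gets in on every device of that model, which is why security researchers single out this design as especially dangerous.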
Even as government and industry wrestle with making medical devices more secure in an interconnected world, patients continue to depend on them.
Patrick Drews, an IT administrator who lives in Cumming, counts on the vital medical equipment that aids his 14-year-old daughter, Makayla, and considers the digital risks extremely remote.
A tethered insulin pump feeds Makayla insulin throughout the day, preventing her from going into diabetic shock and regulating her disease better than a needle.
She also has a continuous glucose monitor hooked up to a cell phone that sends him alerts about her health, so he can get her help if she needs it.
“Forty miles from her, I know exactly what’s happening,” he said.