Musk's brain chips to AI, how tech is challenging healthcare regulators

Christina Jewett

There are now artificial intelligence programs that scan MRIs for signs of cancer, Apple AirPods that work as hearing aids, and devices that decode electrical blips of the brain to restore speech to those who have lost it. Medical device technology can have a stunning impact on patients’ lives. As advancements become more tangible to millions of Americans, regulation of the devices has commanded increasing attention at the Food and Drug Administration (FDA).

Dr Michelle Tarver, a 15-year veteran of the agency, is taking the reins of the FDA’s device division from Dr Jeffrey Shuren, who forged deep ties with the device industry, sped up the pace of approvals, and made the agency more approachable to companies. Some of those device makers were represented by Shuren’s wife and her law firm, posing ethical conflicts for him that continue to draw scrutiny. Lawmakers and consumer advocates have become increasingly concerned about the device industry’s influence over the sprawling division, which has a budget of about $790 million. Device safety and standards for agency approvals of products as intimate as heart valves or neural implants will be at the forefront of the division’s mission in the coming years. Among the issues Tarver will encounter are:

Brains, Computers and Elon Musk
Few devices will require such intense oversight as one of the most breathtaking technologies in development: brain-computer interfaces that dip into the surface layers of the brain to decode its electrical noise — and return function to people who have lost it. Researchers from a number of teams have demonstrated the capability to restore the voice and speech of a California man with ALS; to enable a paralysed man to walk; and to help a man paralysed below the neck to play Mario Kart by simply thinking about steering left or right.

The medical device division is playing a crucial role in this field by authorizing and overseeing trials that evaluate the devices’ safety and effectiveness and, at some point in the future, deciding whether they can be sold. Perhaps no company developing a device is more high-profile than billionaire Elon Musk’s Neuralink. It is developing the brain-computer device that enabled an Arizona man to play video games with his mind. Neuralink is also beginning work on a device that Musk hopes could restore vision.

Musk, a vocal supporter of former President Donald Trump, has criticised the FDA during campaign events, railing incorrectly about the agency’s failure to approve a drug that cured a friend’s mother’s brain cancer. It turns out the drug Musk named had been approved in 2021, as STAT news first reported. “Overregulation kills people,” Musk told an audience in Pittsburgh in October, going on to say that “simply expediting drug approvals at the FDA, I think, will save millions of lives.”

Neuralink has already received the green light from the agency to implant its device, which is inserted through a hole the width of a quarter bored into the skull, in a second patient. Depending on the outcome of the presidential election, Musk could gain considerable sway across several federal agencies overseeing his businesses, including Tesla, SpaceX and presumably Neuralink.

The Ballooning Field of AI
Harvard University researchers recently reviewed dozens of cardiology device recalls and found that the FDA had deemed many of the devices to be of moderate risk, although they turned out to be deadly. An accompanying editorial by Dr Ezekiel Emanuel, a former federal health official and vice provost at the University of Pennsylvania, called on the FDA to place safety over speed. The FDA said it disagreed with the study’s assertion that devices similar to those already on the market need to be thoroughly tested in people.

Doctors and researchers vetting agency-cleared AI programs have also found the agency’s review records lacking. As they consider using such tools in patient care, many of the answers they seek about how the programs work are nowhere to be found in agency approval records. The vast majority of those programs have been authorised under the agency’s 510(k) programme, in which products are typically cleared within 90 days. They include software programs meant to spot cancers and other problems on MRIs, CT scans and other images.

Researchers from Stanford University published a study in October noting that a vast majority — 96% of nearly 700 — of AI programs authorised by the FDA had no information about race or ethnicity, “exacerbating the risk of algorithmic bias and health disparity.” The agency said the summaries criticised in the study were merely brief descriptions that did not reflect the extent of staff reviews, which can run to thousands of pages.

Researchers from Mass General Brigham and elsewhere published a report concluding that information from the FDA about the performance of certain programs was too sparse to justify their use in medical practice. Still, the promise of AI in healthcare has generated sky-high interest, and the FDA has discussed its use in drug development and employing it internally to catch “cheating” in product applications, Dr Robert Califf, the agency’s commissioner, said at a conference in October.

Shuren has often said the regulatory framework for medical devices was developed for technology dating to his grandmother’s time, nearly 50 years ago. At the same conference, held in Las Vegas, Califf acknowledged the agency’s limitations in regulating the vast reach of AI programs. Evaluating the scope of AI extends far beyond the agency, he said. “It’s so bad,” he said. “If you said, ‘Well, the FDA has got to keep an eye on 100% of it,’ we would need an FDA two to three times bigger than it currently is.” NYT NEWS SERVICE