UK’s NHS Thinking Through Role of AI in Medical Decision-Making

In many areas of industry and research, people are excited about artificial intelligence (AI). Nowhere is that excitement greater than in medicine, where AI promises to make clinical care better, faster and cheaper.

Last year, UK Prime Minister Theresa May said that “new technologies are making care safer, faster and more accurate, and enabling much earlier diagnosis”.

Backing developments in medical AI is seen in the UK as an important step in improving the care provided by the National Health Service (NHS) – the UK Government recently launched a “Grand Challenge” focused on the power of AI to accelerate medical research and deliver better diagnosis, prevention and treatment of disease.

With companies and research institutions announcing that AI systems can outperform doctors in diagnosing heart disease, detecting skin cancer and carrying out surgery, it’s easy to see why the idea of “Dr Robot” stepping in to revolutionize healthcare and rescue cash-strapped public health services is so attractive.

But as AI-enabled medical tools are developed and take over aspects of healthcare, the question of medical liability will need to be addressed.

Whose fault is it?

When the outcome of medical treatment is not what was hoped for, as is inevitably the case for some patients, the issue of medical negligence sometimes comes into play. When things go wrong, patients or their loved ones may want to know whether it was anyone’s fault.

Medical negligence in the UK is a well-established area of law which seeks to balance the patient’s expectations of proper care with the need to protect doctors who have acted competently but were not able to achieve a positive outcome. Generally, doctors will not be held to have been negligent if they followed accepted medical practice.

The UK’s General Medical Council guidelines say that doctors can delegate patient care to a colleague if they are satisfied that the colleague has sufficient knowledge, skills and experience to provide the treatment, but responsibility for the overall care of the patient remains with the delegating doctor.

Similarly, a surgeon using a robotic surgery tool would remain responsible for the patient, even if the robot was fully autonomous. But what if, unbeknownst to the surgeon, there was a bug in the robot’s underlying code? Would it be fair to blame the surgeon or should the fault lie with the robot’s manufacturer?

Currently, many of the AI medical solutions being developed are not intended to be autonomous. They are built as tools for medical professionals, to be used in their work to improve efficiency and patient outcomes. In this respect, they are like any other medical tool already in use – no different, for example, to an MRI scanner.

If a tool malfunctions, the clinician could still be at risk of a medical negligence claim, though the hospital or the practitioner could then mount a separate action against the product manufacturer. This remains the same for AI products, but as they acquire more autonomy, new questions about liability will arise.

The use of AI decision-making solutions could lead to a “black box” problem: although the input data and the output decision are known, the exact steps the software took to get from one to the other cannot always be retraced. If the decision is wrong, it may be impossible to reconstruct why the software reached it.

Suppose an AI device matches or outperforms a medical professional on a certain medical problem, so that, on average (and with statistical significance), patients receive better outcomes when seen by “Dr Robot” than by “Dr Human”.

Though Dr Robot could perform better overall – for example, diagnosing correctly 98 per cent of the time compared with Dr Human’s success rate of 95 per cent – there may be a small number of edge cases where Dr Robot reaches the wrong conclusion but the right answer would be obvious to a doctor. Legally, medical negligence arises if a doctor fails to do what a reasonable doctor would have done.

If the autonomous AI system reaches a decision that no reasonable doctor would have reached, the individual patient would have grounds to bring a medical negligence claim even if, across all patients, the AI system is achieving better outcomes overall.

The question for the misdiagnosed patient will be: who should I sue?

Innovative solutions

One answer to this could be new insurance models for AI tools. Last year, the US granted its first regulatory approval for an autonomous AI medical device to the IDx-DR retinal scanner. This tool operates independently, without the oversight of a medical professional, to assess whether a patient needs to be referred to a doctor.

The tool’s developer, IDx, holds medical negligence insurance to protect the company against liability, and specialist AI insurance packages have sprung up to cater for this developing need.

Rather than every medical tool having its own insurance, the idea of a governmental solution has also been proposed: a regulatory organization could assess the risk of each AI product and allow its developer to commercialize it in return for paying an appropriate “risk fee” to the regulator.

The fee would then fund a pool for pay-outs in case of AI malfunction. This solution, sometimes called a Turing Registry (or an EU Agency for Robotics and Artificial Intelligence, as proposed by the European Parliament), would get round the “black box” problem: the registry would pay out on claims involving a registry-certified AI product without having to determine the underlying cause of the problem.

Read the source article in The Telegraph.



