At InsightRX we are trying to improve the practice of medicine with computational solutions, focusing on precision dosing. As a team of scientists and engineers we are naturally excited about the huge leaps forward in artificial intelligence (AI) and machine learning (ML) seen in the last decade, and our goal is to apply such advances wherever they add value to the tools we are building.

When we founded InsightRX almost three years ago and created our first prototypes, we were unsure how much AI we should put in our software. What would be feasible from a computational or scientific viewpoint? What level of decision automation would the clinical community accept? What data would be useful, and what data would be available? What would regulatory bodies permit? None of these questions had obvious answers at the time, but some have become clearer since.

In the case of a precision dosing tool, one could imagine a “no AI” solution where the software would only calculate individual model parameters from available patient characteristics and measured lab values, but would not provide a clinical answer to the end user, such as a specific change in treatment. This solution leaves it to the clinician to decide on the further course of action. It might be easy to build, validate, and integrate into a clinician’s workflow, but the question is whether it would provide enough real value.
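To make this concrete, here is a minimal sketch of what this “no AI” end of the spectrum could look like. The covariate model and its coefficients below are hypothetical placeholders, not a validated clinical model; only the Cockcroft-Gault formula is a real, published equation.

```python
# Illustrative sketch of the "no AI" extreme: estimate an individual model
# parameter from patient characteristics, then leave all decisions to the
# clinician. The covariate model and coefficients are hypothetical
# placeholders, not a validated clinical model.

def creatinine_clearance(age, weight_kg, scr_mg_dl, sex):
    """Cockcroft-Gault creatinine clearance in mL/min."""
    crcl = ((140 - age) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if sex == "female" else crcl

def individual_clearance(age, weight_kg, scr_mg_dl, sex,
                         cl_pop=4.5, crcl_ref=100.0):
    """Individual drug clearance (L/h), scaled linearly by renal function.

    cl_pop (population clearance) and crcl_ref are hypothetical values;
    a real tool would take them from a published population PK model and
    refine the estimate with measured drug levels (Bayesian updating).
    """
    crcl = creatinine_clearance(age, weight_kg, scr_mg_dl, sex)
    return cl_pop * (crcl / crcl_ref)

# The tool reports the parameter and stops there; the clinician decides.
print(f"Estimated clearance: {individual_clearance(65, 80, 1.2, 'male'):.1f} L/h")
```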

On the other end of the spectrum, one could imagine a solution where AI is fully integrated and pivotal in decision making: it would take any incoming patient data, parse it, and determine the optimal treatment course for the clinician. An example output could be: “Change to 1500 mg given twice daily starting today at 13:00 to reach the desired exposure tomorrow at 9:00 with 90% probability”. I tend to call this second extreme the “red button” approach: like in a TV quiz show, the software product would just have a big red button that the doctor or pharmacist pushes, after which the answer comes rolling out of the system. The advice would then be implemented, possibly without any extra verification by a pharmacist or doctor. An even more extreme implementation would hook the AI algorithm up directly to the pharmacy ordering system, without any human input at all.
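Again purely as a sketch, the “red button” extreme might look something like the code below. The exposure target and rounding rule are hypothetical, and the steady-state shortcut AUC24 = daily dose / clearance is an illustrative simplification; a real system would simulate a full PK model and propagate parameter uncertainty to back a claim like “90% probability”.

```python
# Illustrative sketch of the "red button" extreme: patient data in, a
# complete dosing instruction out, no human judgment in between. The
# exposure target and rounding rule are hypothetical.

def red_button_recommendation(clearance_l_h, target_auc24=400.0,
                              doses_per_day=2):
    """Pick a dose to hit a target 24-hour exposure (AUC24, mg*h/L).

    At steady state AUC24 = daily dose / clearance, so the required
    daily dose is simply target_auc24 * clearance.
    """
    daily_dose_mg = target_auc24 * clearance_l_h
    per_dose_mg = round(daily_dose_mg / doses_per_day / 250) * 250
    return (f"Change to {per_dose_mg} mg given {doses_per_day}x daily, "
            "starting with the next scheduled dose.")

print(red_button_recommendation(clearance_l_h=3.1))
# -> "Change to 500 mg given 2x daily, starting with the next scheduled dose."
```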

It’s not a stretch to say that both extremes described above have their merits, but they also have important drawbacks and raise safety concerns. While perhaps technologically feasible, automating too many parts of the drug prescribing process can have serious consequences, as has been documented for example in this particular case at one of our partner hospitals. Too little AI, on the other hand, will likely not provide enough value to the clinician. The interesting space to explore therefore lies between those extremes, where precision dosing algorithms and AI are available to the clinician, but human involvement is still allowed (and required).

Finding the optimal balance in this human–computer collaboration is an interesting topic, and crucial for successful integration of clinical decision support tools. I believe we can learn in particular from previous experience in computer chess…

Kasparov’s Law

The game of chess is often regarded as the guinea pig of AI: researchers have used the game since the 1950s as their main platform for basic research. Before 1996, computers were basically no match for any chess grandmaster worth their salt, but that changed rapidly, culminating in the (stolen?) victory of IBM’s Deep Blue over Garry Kasparov in 1997. Since then, computers and chess engines have steadily gained ground. Today, there is no human chess grandmaster in the world who could defeat a well-equipped chess computer.

Interestingly, in 2005 an internet chess tournament with considerable prize money was organized in which humans were not pitted against computers, as was common at the time, but where any chess player in the world could team up with a computer (or a full-blown computer cluster, if they wished) and use the computer’s algorithms in any decision. The expectation was that the tournament would be won either by a strong (human) grandmaster using a state-of-the-art chess engine on the side, or by one of the teams pairing “weak” human players with massive computer clusters and strong AI chess software, much as IBM’s Deep Blue had beaten Kasparov.

However, that is not how the competition panned out. The winners turned out to be a pair of amateur (“weak”) American players who used only a few ordinary computers to determine the optimal strategy in their games, with no access to massive computer clusters or secret, innovative chess software. Their winning approach focused primarily on how they interacted with and “coached” their computers, rather than on superior chess skills of their own or on leaving the decisions fully up to the computer’s AI.

There are many more such examples in chess, and within the broader field of AI. This particular example led Garry Kasparov, who besides being a former world chess champion is also a visiting fellow at Oxford and an influential thinker on AI, to hail this as a victory of human–machine collaboration:

A clever process beat superior knowledge and superior technology. It didn’t render knowledge and technology obsolete, of course, but it illustrated the power of efficiency and coordination to dramatically improve results.

It inspired him to postulate the following “equations”, later referred to by others as Kasparov’s law:

weak human + machine + better process beats strong machine

and

weak human + machine + better process beats strong human + machine + inferior process.

AI in digital health

I feel this “law” also applies remarkably well to the digital health field, and in particular to precision dosing: in many cases it is much more important to apply a software solution that is tightly integrated with the clinical workflow than to focus solely on human expertise or on improving the strength of AI algorithms. To be clear, by “weak humans” I do not mean to downplay the knowledge and expertise of doctors and pharmacists, but rather that it is practically infeasible to have all the world’s current knowledge and expertise in a particular field (pharmacology, in this case) available at the point of care, at all times, for all clinical personnel, in all hospitals.

In my opinion, the implication of Kasparov’s law for precision dosing is that any clinician (“weak human”) coupled with a proper precision dosing algorithm (“machine”), combined with a superior way to interact with that tool (“better process”), creates the optimal result. I feel strongly that developing an optimal precision dosing tool should not merely focus on implementing the latest machine learning or pharmacometric techniques, or the latest QSP models, but should rather optimize the full human process of drug prescription and therapeutic drug monitoring, and let the machine assist wherever it is beneficial. This approach is often referred to as intelligence amplification (IA) rather than AI, and to me it seems the best path forward for precision dosing and many other digital health tools for the foreseeable future. In later blog posts I will explain in more detail how we are implementing this vision at InsightRX.
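To close with one final sketch (again with hypothetical names and values), the difference between IA and the “red button” is small in code but large in practice: the algorithm proposes a dose and shows its reasoning, but nothing becomes an order until a clinician has reviewed, and possibly adjusted, the proposal.

```python
# Illustrative sketch of the middle ground: the algorithm proposes and
# explains, the clinician disposes. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class DoseProposal:
    dose_mg: float
    interval_h: int
    rationale: str          # model-based evidence shown to the clinician
    accepted: bool = False
    clinician_note: str = ""

def review(proposal: DoseProposal, accept: bool, note: str = "") -> DoseProposal:
    """The required human step: a proposal only becomes an order after review."""
    proposal.accepted = accept
    proposal.clinician_note = note
    return proposal

proposal = DoseProposal(
    dose_mg=750, interval_h=12,
    rationale="Estimated CL 3.1 L/h; predicted AUC24 484 mg*h/L (target 400-600).",
)
reviewed = review(proposal, accept=True, note="Agreed; recheck level in 48 h.")
if reviewed.accepted:
    print(f"Order: {reviewed.dose_mg:.0f} mg every {reviewed.interval_h} h")
```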
