At InsightRX we are trying to improve the practice of medicine with computational solutions. As a team of scientists and engineers, we are naturally excited about the huge leaps forward in artificial intelligence (AI) and machine learning (ML) over the last decade, and we want to apply such solutions wherever possible in the precision medicine tools we build.

When we founded InsightRX two years ago and created our first prototypes, we were unsure how much AI we should put in our software. What would be feasible from a computational or scientific viewpoint? What level of automation would the clinical community accept? What data would be useful, and what data would be available? What would regulatory bodies allow? None of these questions had obvious answers at the time, though some have become clearer since.

For complex problems, I find it usually helps to look at the extremes first. In the case of a precision dosing tool, at one extreme one could imagine a solution with no AI (or only “dumb AI”). The software would perhaps only calculate specific individual parameters from available patient characteristics and measured lab values, but would not suggest specific changes in treatment, leaving it to the clinician to decide on a further course of action. Such a solution might be easy to build, validate, and integrate into a clinician’s workflow, but the question is whether it would provide enough real value.
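As a sketch of what such a minimal layer might compute: the Cockcroft–Gault equation below is a real, widely used estimator of creatinine clearance, but the function name and interface here are purely illustrative, not part of any actual product code:

```python
def cockcroft_gault_crcl(age_yr, weight_kg, scr_mg_dl, female=False):
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault
    equation from age, body weight, and serum creatinine."""
    crcl = ((140 - age_yr) * weight_kg) / (72.0 * scr_mg_dl)
    return 0.85 * crcl if female else crcl

# e.g. a 60-year-old, 72 kg male with serum creatinine 1.0 mg/dL:
# cockcroft_gault_crcl(60, 72, 1.0)  ->  80.0 mL/min
```

A tool at this extreme would simply surface such derived numbers to the clinician and stop there; the interpretation and the dosing decision remain entirely human.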

The other extreme would be one where AI is fully integrated and pivotal in decision making: it would take any incoming patient data, parse it appropriately, and determine the optimal treatment course for the clinician. An example output might be: “Change to 1500 mg given twice daily starting today at 12:00 to reach desired exposure with 90% probability”. I tend to call this second extreme the red button approach: like in a TV quiz show, the software would just have a big red button that the doctor or pharmacist pushes, after which the answer comes rolling out of the computer. The advice would then be implemented, possibly without any extra checking by a pharmacist or doctor. An even more extreme implementation would hook up the AI algorithm directly to the pharmacy ordering system, without human input at all.
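A probability statement like the one in that example output would typically come from simulating a population pharmacokinetic model. The toy sketch below (a steady-state one-compartment model with log-normal between-patient variability in clearance; all parameter values and the target exposure window are made up for illustration) shows the general idea of estimating the probability that a candidate daily dose attains a target exposure:

```python
import math
import random

def prob_target_attainment(daily_dose_mg, cl_typical=3.0, cl_cv=0.3,
                           target=(400.0, 600.0), n_sim=5000, seed=42):
    """Monte Carlo estimate of P(AUC24 within target) for a candidate
    daily dose, assuming steady state in a one-compartment model
    (AUC24 = dose / CL) and log-normal variability in clearance (CL).
    All parameter values here are hypothetical."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cl_cv ** 2))  # log-scale sd from CV
    mu = math.log(cl_typical) - sigma ** 2 / 2.0   # keeps mean CL at cl_typical
    hits = 0
    for _ in range(n_sim):
        cl = math.exp(rng.gauss(mu, sigma))        # simulated patient CL (L/h)
        auc24 = daily_dose_mg / cl                 # exposure (mg*h/L)
        if target[0] <= auc24 <= target[1]:
            hits += 1
    return hits / n_sim
```

A “red button” tool would then simply scan a grid of candidate doses and report the one with the highest (or first sufficiently high) attainment probability, with no clinician in the loop.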

Both extremes described above have their merits, but both also have important flaws or raise significant safety concerns. While perhaps technologically feasible, automating too many parts of the drug prescribing and administration process can have serious consequences, as has been documented, for example, in this particular case at one of our partner hospitals. Too little AI, on the other hand, will not provide enough value to the clinician and will keep the tool from being adopted widely. So the most interesting space to explore lies between those extremes, where precision dosing algorithms and AI are available to the clinician but still allow (or require) human involvement.

This balancing of human and computer collaboration is an interesting topic, and crucial for optimal integration of clinical decision support tools. It is, of course, not the first time this topic has come up in AI research; in fact, it has been studied for decades. I believe we can learn in particular from previous experiences in computer chess…

Kasparov’s Law

The game of chess is often regarded as the guinea pig of AI, since researchers have used it since the 1950s as a main platform for basic research. Before 1996, computers were basically no match for any chess grandmaster worth their salt, but that changed rapidly in the following years, culminating in the (stolen?) victory of IBM’s Deep Blue over Garry Kasparov in 1997. Since then, computers and chess engines have steadily gained ground, and for some years now there has not been a human grandmaster in the world who could beat a well-equipped chess computer.

Interestingly, in 2005 an internet chess tournament with considerable prize money was organized in which humans were not pitted against computers, as was common at the time, but in which any chess player in the world could team up with a computer (or a full-blown computer cluster, if they wished) and use its algorithms any way they wanted. Everyone’s expectation was that the tournament would be won by one of the participating strong (human) grandmasters using a state-of-the-art chess engine on the side. Or, alternatively, that one of the teams with “weak” human players but massive computer clusters and strong chess software would win, much as IBM’s Deep Blue had beaten Kasparov a few years earlier.

However, that is not how the competition panned out. The winners turned out to be a pair of amateur (“weak”) American players, who used only a few ordinary computers on the side to determine optimal strategy in their games, with no access to massive computer clusters or secret, innovative chess software. Their approach focused much more on the interaction with, and “coaching” of, their computers than on superior chess skills of their own or on leaving the decisions fully up to the computer’s AI.

There are many more such examples in chess, and within the broader field of AI. This particular example led Garry Kasparov, a former world chess champion and an influential thinker on AI, to hail this as a victory of human–machine collaboration:

A clever process beat superior knowledge and superior technology. It didn’t render knowledge and technology obsolete, of course, but it illustrated the power of efficiency and coordination to dramatically improve results.

It inspired him to postulate the following “equations”, later referred to by others as Kasparov’s law:

weak human + machine + better process beats strong machine

and

weak human + machine + better process beats strong human + machine + inferior process.

AI in digital health

I feel this “law” also applies remarkably well to the digital health field, and in particular to precision dosing: in many cases it is much more important to apply a software solution that is tightly integrated with the clinical workflow than to focus solely on human strength or on improving the strength of AI algorithms. To be clear, with “weak humans” I do not intend to downplay the knowledge and expertise of our human doctors and pharmacists, but rather to point out that it is practically not feasible to have all the world’s knowledge and expertise in a particular field (pharmacology, in this case) available at the point of care, at all times, for all clinical personnel, in all hospitals.

In my opinion, the implication of Kasparov’s law for precision dosing is that any clinician with a moderately good understanding of pharmacology (“weak human”), coupled with a proper precision dosing algorithm (“machine”) and a superior way to interact with that tool (“better process”), yields the optimal result. The key, then, is to investigate what this “better process” is for precision dosing. From working with various major clinics (such as UCSF Benioff Children’s Hospital and Stanford Children’s Health) over the last few years, and from many internal and external discussions, two important factors stand out:

  • The tool should be integrated with the electronic medical record (EMR). We are putting a lot of our engineering effort into developing integration solutions for the hospitals we work with. When we first activated an integration between our platform and the EMR of a partner hospital last year, it was thrilling to see pharmacists’ time spent on dose individualization drop to a fraction of what it was before. (A scientific report on this is upcoming, and will be posted here as well.)

  • The UI / UX should be easily adoptable by non-experts. We recognized this very early on as a major challenge to address. We therefore teamed up with a group of UX experts who had previously designed dashboards for, e.g., airplane pilots and logistics professionals. After performing several UX studies on our platform and implementing the suggestions that came out of them, we have made a lot of progress, but we will keep re-evaluating and updating our workflow to best match user needs.

The two challenges highlighted above are certainly not the only ones we face in increasing adoption of precision dosing and digital health in the clinic. Being data scientists at heart, we are of course also looking into various ways in which novel AI and pharmacometric approaches can improve the predictive performance of our algorithms.

However, I feel strongly that developing an optimal precision dosing tool should not merely focus on implementing the latest deep learning or pharmacometric techniques, or on using the latest QSP models, but should rather optimize the very human process of drug prescription and therapeutic drug monitoring, allowing the machine to assist wherever it is beneficial. This approach is often referred to as intelligence amplification (IA) rather than AI, and it seems the best path forward for precision dosing and many other digital health tools for the foreseeable future.
