Latest Posts

The robot surgeons

In some German clinics it is already a familiar sight: the patient lies on the operating table, and above him are four sterile robot arms. A few feet away from the table, the surgeon sits in a kind of cockpit, his head buried in an opening of the device, where he sees images of the patient’s body magnified up to tenfold. Using control handles, the surgeon guides the robot’s arms, performing mainly prostate, bladder and kidney surgery. The robots always bear the “da Vinci” logo. With this system, approved in 2000, the US company Intuitive Surgical has built a monopoly and turned it into a huge business.

Now, however, a whole series of robots is waiting in the wings, aiming to challenge da Vinci with more intelligence. One reason is that relevant patents have expired. Still, the coming robot boom warrants caution: surgeons say the robots make some operations easier, but not all, and the high cost of a robot could lead clinics to use the technology for balance-sheet reasons rather than purely for the benefit of the patient.

About 877,000 patients worldwide were operated on with the help of da Vinci last year. The manufacturer sold 684 of these systems in the same year – depending on the configuration, for up to two million euros each, plus a maintenance contract of up to just under 150,000 euros per year. On top of that come 500 to 3,000 euros in consumable costs per operation.

The benefits of da Vinci are still debated after 18 years. In the US, clinics advertise aggressively: fewer complications, less pain, less blood loss, shorter hospital stays. But so far there are no independent studies providing sufficient evidence. Benjamin Chung, a professor of urology at Stanford University, found convincing arguments neither for nor against surgery with da Vinci in one of the few long-term studies.

Chung said at a press conference that there are no statistically significant differences in surgical outcomes or length of hospital stay. The study covered a period of 13 years. What is certain is that robot-assisted surgery costs more – about 2,000 euros more per patient – and that operations with robots are more likely to take longer.

Researchers at the University of Illinois came to a different conclusion about da Vinci: over 14 years there were 144 deaths, 1,391 injuries and 8,061 device malfunctions. Hot parts of the instruments fell into the patient, instruments carried out unintended actions, and there were system crashes and problems with the image display. However, these reports concern older versions of the device, and the study does not reveal how many complications occurred in the same period without robots.

Today, surgeons say the system does in fact make their work easier – among them Professor Sören Torge Mees, Managing Director of the Department of Visceral, Thoracic and Vascular Surgery at the Dresden University of Technology. “We use the robot primarily for oncological surgery, when we need to be particularly delicate, for example deep down in the pelvis in a rectal resection – the removal of the rectum.” Because of the confined space and bleeding, visibility can be severely limited. Da Vinci helps with its magnified view, but also with its four arms: one arm guides the camera, three arms operate, and an assisting surgeon at the operating table can intervene as well.

In esophageal surgery, the seven so-called degrees of freedom of the robotic instruments are useful for suturing: the robots have more joints than a human hand and, according to Mees, are clearly superior to classical minimally invasive instruments. For surgeons, the system is also ergonomically advantageous, because they can sit in a chair and do not have to contort themselves over the patient, as is necessary in some minimally invasive operations.

“On the other hand, you work with such concentration in a small area that a multi-hour operation can still be exhausting,” says Mees. The device also has disadvantages: the operating time can actually increase, since setting up the robot takes time, and it takes a long time for surgeons to become proficient. In gastric surgery, an experienced surgeon needs about 20 procedures to achieve the same quality with the system as without it; in pelvic operations, more like 50. The device is also not yet technically mature. Mees says: “When I work across larger areas between the upper and lower abdomen, I have to be careful that the robot’s arms do not collide.” That could be prevented with sensors.

Another disadvantage is the lack of tactile feedback. In conventional minimally invasive surgery, an experienced surgeon can palpate a tumor because he senses the resistance of the nodule. In robot-assisted surgery, surgeons need workarounds – such as marking the tumor beforehand with ink or clips so they know exactly where it is.

“There are therefore operations where the da Vinci Surgical System has clear advantages in my opinion, such as prostate surgery, deep down in the pelvis, or in the chest,” says Mees. “For other operations, the system is unlikely to prove superior to traditional minimally invasive surgery.” Indeed, initial studies indicate that the system is no better for gallbladder or uterine removal, for example. In addition, it has not yet been proven to what extent the benefits for surgeons also translate into benefits for patients.

The manufacturers of new robots therefore want to do some things differently from da Vinci and take market share from it. There is, for example, the TransEnterix “Senhance Surgical Robotic System”, which consists of three robotic arms and is suitable for the same operations. It has force feedback: the surgeon can roughly feel the resistance of the tissue that the robotic arm touches. Using eye tracking – the measurement of eye movements – the surgeon can control the endoscope with his eyes, leaving his hands free to control the other instruments.

The system was recently approved by the FDA in the US. Other robots that are coming soon or have already arrived come from companies such as CMR Surgical (with its Versius system), Medtronic, Auris Health, Smith & Nephew, Stryker, Mazor Robotics and Zimmer Biomet.

However, even these systems do not bring major innovations into the operating room. The haptic feedback, for example, would have to become much more precise, for instance to automatically distinguish healthy tissue from tumor tissue. Alexander Schlaefer, Professor of Medical Systems at the Technical University of Hamburg, is working on this problem. One challenge is to do it without additional sensors on the instruments: integrating them would be too expensive and cleaning them too difficult.

His team relies on a combination of an endoscope and optical coherence tomography. “It allows us to see how the tissue in the body reacts when it comes into contact with an instrument – how it deforms on the surface and below,” says Schlaefer. The difficulty is to deduce from this information the exact force acting on the tissue, and the software must be as fast as possible to minimize latency during surgery. Pinpointing the force would help provide the surgeon with more accurate haptic feedback than before, but also make it easier to delineate tumors – helping surgeons remove the tumor completely while damaging as little healthy tissue as possible. It will probably be years before robots can do that reliably.
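To picture the estimation problem, here is a deliberately oversimplified sketch in Python: if the imaging yields the depth by which an instrument indents the tissue, an assumed elastic model turns that displacement into an estimated contact force. The linear stiffness value and the function below are illustrative assumptions only, not anything from Schlaefer’s system; real tissue behaves nonlinearly and differs from patient to patient, which is precisely what makes the problem hard.

    # Toy illustration of force estimation from observed deformation,
    # assuming a simple linear elastic model (force = stiffness * indentation).
    # The stiffness value is a made-up placeholder, not a measured property.

    def estimate_contact_force(indentation_mm: float, stiffness_n_per_mm: float = 0.5) -> float:
        """Return an estimated contact force in newtons for a measured indentation."""
        return stiffness_n_per_mm * indentation_mm

    # Example: a 2 mm indentation under the assumed stiffness maps to about 1 N,
    # which could then be rendered back to the surgeon as haptic feedback.
    print(estimate_contact_force(2.0))  # 1.0

In reality there is no such known, simple material model, which is why the hard part described above is inferring the force from the observed deformation quickly and reliably enough.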
 
As long as such significant advances are missing and independent studies are lacking, it is not easy for patients and clinics to decide for or against surgery with one of the new robots. There is a risk that clinics will purchase an expensive system and use it for operations it may not be ideal for – just to recoup the cost. The company CMR Surgical even plans not to sell its robot to clinics at all, but to offer a service contract instead: the clinics would have to commit to a minimum number of procedures so that it pays off for everyone, and the company keeps the system up to date.

“Ultimately, it is extremely difficult to evaluate such technologies,” says Jörg Raczkowsky, Head of the Medical Group at the Institute of Anthropomatics and Robotics of the Karlsruhe Institute of Technology. From a neutral point of view, a system is worthwhile only if it prevents reoperation or complications or, for example, exposes the patient to less radiation during the operation. “But only statistics can prove that in the long run.”

(This is a translation of my article that was published in Süddeutsche Zeitung.)

The Current State of Medical AI

Watson is said to have saved a life, after all: in 2016, it was presented with the case of a Japanese patient whose condition puzzled the doctors. Only the AI system from the IT company IBM was able to diagnose a rare form of leukemia; the woman was cured. That is exactly what IBM had promised: the doctor only has to enter the symptoms into the computer, and Watson suggests a diagnosis and a tailor-made therapy. The AI, it was claimed, would take the current state of knowledge into account better than any human being, automatically evaluating medical databases, studies and patient records.

But it remained an anecdote. So far, IBM has hardly exposed its system to independent studies. The online medical magazine STAT even reported that doctors in Denmark were convinced by Watson’s diagnoses in only 33 percent of cases. In other countries, Watson’s proposals reportedly matched doctors’ diagnoses in more than 90 percent of cases. But then, the doctors wondered, why would they need a system that at best confirms their own verdict?

The revolution in medicine through artificial intelligence has been called off for the time being, and some experts are quite happy about that, because now attention can turn to the real benefits of the new approaches. Thomas Friese, an AI expert at Siemens Healthineers, for example, prefers to speak of assistance systems and intelligent tools for physicians. Medical technology experts such as Philips’ Michael Perkuhn are confident that 50 percent of clinics will implement artificial intelligence over the next five years.

To understand this, you need to know what AI is currently capable of. Watson, for example, uses a neural network that simulates the function of neurons in the human brain. Such networks are capable of learning, but they need training data. For example, in order to detect a tumor in magnetic resonance images, such an AI has to evaluate thousands of sample images of healthy and sick people. It then looks for patterns that characterize the tumor. What exactly it finds, computer scientists do not know; the systems are supposed to learn this on their own, because the rules that define a tumor are so complex that they cannot be programmed by hand. The only thing that matters is that the distinction is made reliably.
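As a purely illustrative sketch of this kind of supervised learning, the following Python snippet (using PyTorch) trains a tiny convolutional network to separate “tumor” from “healthy” image patches. The architecture, the patch size and the random placeholder data are assumptions made for the example; a real diagnostic model would be trained and validated on thousands of annotated scans.

    # Minimal sketch of learning a tumor/healthy distinction from labeled
    # image patches. All sizes and the random data are placeholders.
    import torch
    import torch.nn as nn

    class TinyTumorClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # one grayscale channel
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16, 2)      # two classes: healthy / tumor

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Placeholder "training set": random 64x64 patches with random labels.
    # In practice these would be thousands of annotated scans.
    images = torch.randn(256, 1, 64, 64)
    labels = torch.randint(0, 2, (256,))

    model = TinyTumorClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)   # the network adjusts itself to reduce its errors
        loss.backward()
        optimizer.step()

The point of the sketch is only the structure of the process: labeled examples go in, an internal pattern representation comes out – without anyone writing down explicit rules for what a tumor looks like.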

The benefits are obvious. Siemens Healthineers, for example, has developed a system that determines the best possible position for the patient in computed tomography. The system looks at the patient with a 3-D camera and records their shape, position, size and contours. The algorithm evaluates the data and positions the patient optimally in the scanner. Until now, the medical staff had to do this, which requires experience and time. “Our system is more likely to produce good results on the first attempt,” says Thomas Friese. “That saves time.” At the same time, the radiation dose can be reduced, because follow-up examinations are needed less often.

CT also produces cross-sectional images of the body. To reach a conclusion, the doctor must scroll through these slices on the computer. An “AI Advanced Visualization System” allows physicians to scroll not only along the longitudinal axis but from different angles. This makes it easier for them to assess the degree to which vessels are occluded.

Companies such as Philips, GE Healthcare and Canon also offer AI-supported systems. “The general goal is to introduce as much assistive automation as possible, so that radiologists can better compensate for rising patient numbers, a shortage of skilled workers and time pressure,” says Friese. “The AI already relieves us of some routine activities,” confirms Michael Forsting of University Hospital Essen. “We use AI, for example, to count the inflammatory lesions of multiple sclerosis in the brain or to measure tumor sizes.” The AI does in seconds what would take doctors at least an hour.

With objective measurement by an AI, the development of a tumor can be followed more closely, which enables better prognoses. “With cervical cancer, we can predict with high probability whether it has already spread – at a point when we could not yet see it without an AI,” says Forsting. Doctors could react sooner and spare patients examinations. The AI also works well in predicting strokes, and in the case of a liver tumor, the system can calculate whether the organ will regenerate after treatment.

In addition, AI is likely to drive radiological research, says Ben Glocker of the Biomedical Image Analysis Group at Imperial College London. In craniocerebral injury, for example, researchers suspect that certain, not yet well understood changes in the brain indicate long-term damage. “Because an AI can take into account not just a few but thousands of parameters when comparing different patients’ CT scans, we hope to find the exact patterns for long-term damage,” says Glocker.

However, AI shows weaknesses wherever medical understanding is still incomplete. “In neurodegenerative diseases, for example, there is a great deal of data, but research has not advanced far – neither in therapy nor in early detection,” says Glocker. “The less you know about a condition like Alzheimer’s, the worse you can train current AI algorithms, which need reliable basic assumptions about the relationship between input and output data, that is, between measurements and illness.”

AI will probably also improve imaging itself. In CT or MRI, software uses raw data – physical measurements – to create images that a human can understand. “Perhaps it would make sense to go to this source and apply the AI to the raw data,” says Glocker. “Who knows what insights we can gain with an AI there – patterns of diseases that were previously completely unknown.”

For similar reasons, Gernot Marx of Aachen University Hospital sees great opportunities in intensive care medicine as well: “In an intensive care unit, we receive more than 1,000 data points per patient per hour, which makes it difficult for physicians to keep track of the decisive signals that indicate a deterioration in the patient’s condition.” Marx was involved in developing a system that brings together the data from different devices, making the use of AI possible. Here, too, the hope is that the AI will find unknown connections that indicate changes in a patient’s condition – and allow an earlier response. Marx expects to be using it as early as next year.

AI technology, however, has its pitfalls. Even training the systems is difficult: although there is often a lot of data, it is not structured in a way the AI can use – it would have to be annotated and tagged with descriptive keywords. “Because an AI is trained on the outcome, we always have to be sure that a patient whose data is included actually has a certain illness,” says Marx. “So we do not need vast amounts of data, but data that is validated in that sense, and that is why it is taking so long for AI to gain a foothold in medicine.” Unreliable data was probably also behind Watson’s weaknesses.

Structured data is also very valuable to pharmaceutical companies. The Heidelberg-based company Molecular Health therefore wants to make the world’s biomedical knowledge available in structured form. Company founder Friedrich von Bohlen says: “Any AI technique is only as good as the quality of the underlying data, and in biomedicine we have an incomplete data set, because today’s knowledge is not complete and many rules in the system are still unclear.”
 
Nevertheless, his company can already predict which clinical trials with which patients are likely to be promising: “This is an important analysis, because a failed study means for pharmaceutical companies that sometimes up to ten years of research and billions of euros have been wasted – and many studies still fail.” It is no wonder, then, that the molecular data of a single patient is worth several tens of thousands of euros to pharmaceutical companies. The data helps in the search for patients who respond particularly well to a drug.

This creates a fundamental conflict. On the one hand, privacy advocates fear that insufficiently anonymized data could be tapped and sold when data is exchanged between clinics, research institutes and companies. On the other hand, patients could benefit from the AI – and thus from the release of their medical data. Data protection is only for the healthy, some doctors say.

“Of course, everyone is afraid that health insurers, for example, will reject high-risk patients or classify them less favorably,” says Friedrich von Bohlen. “But you can and must handle this in a positive way.” Von Bohlen believes that in the future more people will voluntarily have molecular profiles created once they see the benefits. In addition, they could be rewarded for their willingness with reduced insurance contributions. In turn, insurers could restructure their care if they knew which patients were at increased risk for certain diseases – this would be more economical because the course of an illness could be mitigated.

Of course, such visions are full of coulds and woulds. Michael Forsting expects that companies like Google, which lead in AI research and are currently investing heavily in medical technology, will build their own hospitals in order to obtain valid data – another development that gives privacy advocates a stomach ache.

Ethicists, too, are already grappling with AI in medicine. A learning technology that offers no insight into its learning process carries certain risks: what if it learns the wrong thing? Stefan Heinemann, a theologian and business ethicist at the FOM-Hochschule and the Essen University Medical Center, views the technological change both positively and critically. “Everything that helps patients is to be welcomed,” he says. “The question is rather how much autonomy we concede to the AI.” It would be problematic if, at some point, people thought doctors had to be replaced altogether. It is by no means too early for this debate. “We have to ask ourselves who bears responsibility, and that cannot be delegated away.”

Doctors should not rely solely on the AI’s judgment, Heinemann demands. They would have to explain what they use the AI for, what it can do and where its limits lie. “But it is also perfectly clear that if the combination of AI and physician expertise leads to better results – and it does in many cases – then you need very good arguments to reject this technology.”

(This is a translation of my article that was published in Süddeutsche Zeitung.)