Artificial intelligence in medicine: Expect the unexpected

Everyone is talking about artificial intelligence in medicine - and everyone understands something different by it. At least, that is the impression given by those who have studied the subject more closely. As a result, artificial intelligence often comes across as somewhat nebulous and uncertain. And uncertainty is something Meedio wants to avoid at all costs - whether in the handling of data or in questions of product development. Runi Hammer, CEO and founder of Meedio, answered questions about artificial intelligence and its role within the company.


No IT company can avoid the topic of artificial intelligence (AI). As Meedio CEO, what do you think about the risks and potentials of AI and how does AI find its way into Meedio products?

Runi: When we talk about the use of AI, one thing is important to emphasise: artificial intelligence is no longer a topic for the future; it has long since arrived in all of our everyday lives - in everyday medical practice, too. So when we talk about what role AI will play, the question is not whether it will come one day. Rather, we are talking about how we want to deal with AI in the future, what space we give it and in what direction we steer it. One thing is already certain: the use of AI is growing exponentially. We are currently at the beginning of a very steep curve, and in just a few years AI will be the most important driver of innovation. Companies that have not yet integrated AI into their products are working hard to do so in order to remain competitive.

In which areas of medicine is AI already firmly anchored today?

Runi: In all of them. Anyone who visited DMEA, Germany's largest health IT trade fair in Berlin, this year will have noticed that almost all exhibitors are advertising the integration of AI in their products. The use of AI in radiology for diagnostic support or voice-controlled reporting is just the tip of the iceberg. And if we ask ourselves what future AI-based solutions in medicine will look like, we have to conclude: we simply don't know yet. And that is the second important message regarding AI: its potential is huge and beyond our current imagination. It would be a mistake to see AI-based innovation exclusively in the further development of software for detecting pulmonary nodules or in complete voice control of software. It will be crucial that we direct our energy to the areas that offer the greatest potential for reducing workload - medical documentation, for example.

ChatGPT caused a big stir in the medical community. How do you rate the solution?

Runi: ChatGPT is a good example of the next level of AI. The first level was selecting information, based on data about preferences, in the way most likely to be relevant to the user - this is what we know from social media. ChatGPT goes a step further because the solution enters into a direct dialogue with the user, creating the risk of individualised manipulation. Artificial intelligence now commands our language. And we all know what wonderful worlds - and what threats - language can create. By mastering language, AI can potentially guide human behaviour. It can give life-saving instructions to a doctor during surgery. Or it can help radicalise people. Both are possible.

That sounds a bit dystopian.

Runi: Artificial intelligence doesn't have any characteristics of its own. We give them to it. That's why it's so important that we make smart decisions about how we use AI. In medicine, we've done that very well so far: AI is helping doctors do their jobs, supporting better individual decisions and making patient care more personal and therefore better. Now it's a question of continuing on this smart path. As I said, we need to focus on the essential issues. Documentation, for example, is still barely automated today; doctors still have to document a great deal themselves and gather information themselves. We can change that with AI. Soon, doctors will spend less time gathering and aggregating information and more time treating patients. And these developments are happening incredibly fast.

Let's move on to Meedio and its core product, video communication. Does Meedio already use AI for product development?

Runi: Of course we also use AI in our development - but so far only for training our customer service. We want to find out whether AI can make our customer service better and faster so that users benefit even more from our solution. In the future, our customers should no longer have to read through FAQs but should quickly receive a solution to their problem - regardless of the language they speak.

And of course we use AI internally to further develop our software. Writing code today is no longer conceivable without AI - we would be far too slow and no longer competitive.

But Meedio does not use customer data to feed AI?

Runi: No, absolutely not. To train the service tool, we use only the data from our user manual. And no customer data is used at any other point either.

What about the part of the product that users see and experience - the video communication and the exhibition ring?

Runi: Here it's about protecting the data and information that are exchanged as well as possible - and guaranteeing that there is no AI in our product. Consider the already disturbing cases of deepfake images and videos, which make it almost impossible for laypeople to decide whether what they see or hear is real. Deepfakes are an issue that concerns us too, from a security point of view. We have set ourselves the task of making sure that our users never have reason to doubt the authenticity of a conversation or a counterpart. We have to guarantee that a conversation is genuine - that a patient really is talking to his or her doctor. We are already working on this task and have some good approaches. The most important is that we control the entire platform on which our solution runs.

What role do legal regulations play in steering AI in the right direction?

Runi: A big one. There have to be binding rules for everyone, and there need to be legal definitions governing the use of AI. In the EU, we have created a solid framework with the GDPR. But it does not answer the pressing questions: Who is responsible for AI-generated results? Who is liable if something goes wrong? All of this is far from clear. Given the developments ahead, policymakers really must step up a gear in developing concepts for protecting citizens and ensuring the responsible use of AI.