What Lies Behind ChatGPT? Artificial Intelligence with Potential Use Even in Clinical Medicine
ChatGPT, a program that allows conversation with a sophisticated 'artificial intelligence', or more precisely with an algorithm based on machine learning, is currently a huge hit. Yet it represents only the tip of the iceberg: it is a 'demo version' of a much more powerful technology called GPT-4, which shows practically boundless potential for use in the near future, including applications in clinical medicine. With great power and capability, however, comes equally great responsibility. It is important to keep in mind that, owing to the rapid development of this technology, many key questions regarding its use, including ethical ones, have not yet been resolved.
Training Using Machine Learning
In March 2023, a new version of the multimodal large language model (LLM) called GPT-4, developed by OpenAI, was released. Among the company's founders was Elon Musk, who later left, citing a potential conflict of interest with his company Tesla.
The better-known version, accessible to the general public, is the 'limited' ChatGPT, which does not include all of GPT-4's capabilities. GPT stands for generative pre-trained transformer. Although the term may seem abstract at first glance, the explanation is quite simple: the program generates responses to input data (in the form of questions or images) by transforming them into answers drawn from knowledge acquired in advance ('pre-trained') through machine learning. It is not connected to the internet, and its knowledge is limited to data available up to the year 2021.
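For readers who would like a concrete picture of what 'generative pre-trained' means in practice, the following minimal sketch uses GPT-2, a small and openly available predecessor of the GPT family, via the Hugging Face transformers library. The model name and prompt are purely illustrative and are not part of the GPT-4 system described in this article.

```python
# Minimal illustration of a generative pre-trained transformer.
# GPT-2 is a small, openly available predecessor used here purely for demonstration.
from transformers import pipeline

# Load a pre-trained model; its "knowledge" is fixed at training time,
# with no internet access during generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "Common causes of chest pain include"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

The model simply continues the prompt word by word, drawing only on patterns learned during training, which is why its answers reflect the state of its training data rather than current information.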
'Perfect and Knowledgeable' Companion
As already mentioned, part of the GPT system is currently available to the public, namely ChatGPT (based on GPT-3.5) and ChatGPT Plus (based on GPT-4, for a monthly fee). Users can ask the chat program a wide variety of questions, to which it often responds in a remarkably sophisticated way. Creativity is the only limit: alongside purely factual exchanges, the program can be asked to compose original poems or science-fiction stories.
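For illustration, the same models that power ChatGPT can also be queried programmatically. The sketch below assumes access to the official OpenAI Python client and a valid API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are examples only, not a prescribed workflow.

```python
# Example of asking a GPT model a question through OpenAI's API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # or "gpt-3.5-turbo" for the ChatGPT-level model
    messages=[
        {"role": "user", "content": "Write a short poem about the human heart."},
    ],
)

print(response.choices[0].message.content)
```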
The GPT-4 technology that is not open to the general public can do much more. In addition to text, it handles image inputs very well, and it possesses substantial knowledge in many scientific fields, including medicine. Its performance has been evaluated, for example, on tests originally designed for human experts, such as the MKSAP (Medical Knowledge Self-Assessment Program). On this test GPT-4 achieved 75% success, whereas the previous version achieved 'only' 53%. GPT-4 also performed very well (and mostly better than GPT-3.5) on a number of other tests in technical fields, the humanities, and the social sciences (for instance, in law).
Potential Use in Medicine - We're Just at the Beginning
GPT-4 is already implemented in several programs. An undoubtedly interesting and successful effort is the Be My Eyes application, available for both the Android and iOS platforms, which aims to help people who are blind or severely visually impaired. Users pose questions to a virtual assistant or communicate via uploaded photographs, which the assistant can analyze and describe. Besides reading food labels, it can help with orientation on the street, with using public transport, or even with exercising in a gym.
GPT technology could, however, be implemented much more extensively in medicine in the near future. Its use is being considered, for instance, in clinical documentation, where it could analyze the records of a particular patient and thus ease the attending physician's work of searching the medical history, or even transcribe spoken input into a structured clinical report. Thanks to the vast amount of facts and data it has accumulated, this technological platform could also be a valuable assistant in differential diagnosis and the subsequent therapeutic process. For patients themselves, it could serve as a 'first-line expert', identifying the likely causes of their complaints through a series of questions and recommending, for example, a visit to a particular specialist.
Many Unresolved Questions
GPT-4 is accompanied by many unresolved yet extremely important questions, ranging from limitations in functionality and relevance to ethical considerations and controversies.
One limitation is that while it has extensive knowledge thanks to machine learning, it presents this information to the user without any references. This is a significant problem in the context of evidence-based medicine. Moreover, it draws information from what was available up to 2021, as it is not connected to the internet for various reasons.
Another issue is the security of the data users enter and the possibility that the system could search for or present potentially dangerous information.
Given these and other challenges, the program is constantly being 'trained' by its authors to ensure that the benefits of its use outweigh any risks.
(holi)
Sources:
1. Mesko B., Dhunnoo P. Beyond ChatGPT: what does GPT-4 add to healthcare? The Medical Futurist, 2023 Mar 28. Available at: https://medicalfuturist.com/what-does-gpt-4-mean-for-healthcare
2. Introducing GPT-4, OpenAI’s most advanced system. OpenAI, 2023. Available at: https://openai.com