For this month’s Digital Review I explored the hype surrounding the Chat Generative Pre-trained Transformer (ChatGPT) from OpenAI. The free version (ChatGPT 3.5) is still available at https://chat.openai.com/chat and was used for this article.
At the time of writing this review, March 2023, ChatGPT 4.0 is already out and some experts have described it as the first system developed that meets the criteria of an artificial general intelligence (AGI). AGI refers to the ability of an artificial intelligence system to perform the wide range of tasks usually associated with human intelligence, typically including reasoning, problem-solving, and decision-making across many fields. ChatGPT 4.0 is available on a $20 per month subscription. I will provide a more extensive review of ChatGPT 4.0’s abilities and uses later in the year.
The ChatGPT system (OpenAI) has also been joined by competitors: Google Bard, powered by the Language Model for Dialogue Applications (LaMDA), and Microsoft Bing AI, powered by GPT-4. Google Bard is currently open only to selected testers. Bing AI can be accessed through Bing in the Microsoft Edge browser.
Below is an ‘interview’ with ChatGPT 3.5, which gives some background on the artificial intelligence system and what it considers to be its own limitations.
Hopefully this sort of natural language artificial intelligence will free us to perform more complex tasks in medicine, thereby driving further efficiencies in patient care. Easy potential adaptations of the system include generating email replies, drafting presentations, creating patient leaflets and summarising patient problems in lay language.
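For readers who want to experiment beyond the chat website, the same models can also be reached programmatically through OpenAI’s API. The sketch below shows how one might ask the model to rewrite a clinic note in lay language. This is a minimal illustration only, assuming the openai Python package as it stood at the time of writing; the API key placeholder and the clinic note are hypothetical, and real patient data should never be sent to an external service.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder; keep keys out of source code

# Hypothetical clinic note; do not send real patient data to an external service
clinic_note = (
    "MRI prostate shows a PI-RADS 4 lesion in the left peripheral zone. "
    "A transperineal biopsy is recommended."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT 3.5 interface
    messages=[
        {"role": "system",
         "content": "You rewrite clinical text in plain, patient-friendly English."},
        {"role": "user",
         "content": f"Summarise the following for the patient in lay language:\n{clinic_note}"},
    ],
    temperature=0.2,  # lower temperature gives more conservative, repeatable wording
)

print(response["choices"][0]["message"]["content"])
```

Any output generated this way would, of course, still need clinical review before it reached a patient, for the very reasons ChatGPT itself lists among its limitations below.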
Impressively, ChatGPT has even been tested on the United States Medical Licensing Examination (USMLE), achieving passing scores equivalent to those of a third-year US medical student [1]. We can assume that these AI models will only get better. An interesting podcast on the use of ChatGPT 3.5 is available on GU Cast, with Dr Anobel Odisho, Dr Todd Morden, Dr Renu Eapen and Professor Declan Murphy, who hosted the show (www.gucast.org/episodes/chatgpt-this-is-nuts-ai-goes-mainstream). We are only at the start of this exciting journey!
What is ChatGPT?
ChatGPT is a natural language processing model that uses deep learning algorithms to generate text-based responses to user inputs or prompts. It is part of the larger family of generative pre-trained transformer (GPT) models developed by OpenAI, a leading artificial intelligence research organisation. ChatGPT is specifically designed to generate human-like responses to text-based inputs, making it well-suited for applications in conversational interfaces such as chatbots or virtual assistants. It has been trained on large datasets of text data and uses advanced machine learning techniques to analyse and understand the patterns and structures of language. This allows it to generate responses that are fluent, coherent, and often indistinguishable from those generated by a human. ChatGPT has been used in a variety of applications, including customer service, education, and healthcare.
How can we use ChatGPT in urological surgery?
ChatGPT can be used in several ways to support urological surgery:
- Preoperative planning: ChatGPT can help surgeons to develop a more comprehensive understanding of a patient’s medical history, including their surgical history, medications, and allergies. This information can be used to develop a more personalised treatment plan and reduce the risk of complications during surgery.
- Intraoperative support: ChatGPT can provide real-time guidance and support to surgeons during procedures. This can include alerts for potential complications or errors, as well as suggestions for alternative approaches based on the specific needs of the patient.
- Postoperative care: ChatGPT can assist with postoperative care by providing patients with personalised instructions and information about their recovery process. This can include information about medications, wound care, and follow-up appointments.
- Education and training: ChatGPT can be used to support the education and training of urological surgeons, providing interactive learning tools and simulations that allow them to practise surgical techniques in a safe and controlled environment.
- Patient communication: ChatGPT can be used to improve communication with patients, providing them with a more personalised and responsive experience. This can include answering common questions, providing information about surgical procedures and recovery, and addressing concerns or fears that patients may have.
Overall, ChatGPT has the potential to improve the quality of care provided to patients undergoing urological surgery. By providing personalised guidance and support to surgeons and patients alike, ChatGPT can help to reduce the risk of complications, improve outcomes, and enhance the overall patient experience.
What are the limitations of the use of ChatGPT in medicine?
While ChatGPT is a powerful tool that has the potential to transform many aspects of medicine, there are also several limitations that must be considered.
- Limited data quality: ChatGPT relies on large datasets of text data to train its language model. However, the quality of these datasets can vary widely, and there may be errors or biases in the data that can affect the accuracy and reliability of the model.
- Limited contextual understanding: While ChatGPT is capable of generating responses that are fluent and coherent, it may not always fully understand the context in which a question or request is being made. This can lead to inaccurate or inappropriate responses, particularly when dealing with complex medical issues.
- Limited clinical experience: ChatGPT is a machine learning model and does not have the same level of clinical experience or expertise as a trained medical professional. This can limit its ability to provide accurate diagnoses, treatment recommendations, and other forms of clinical decision support.
- Privacy and security concerns: ChatGPT relies on access to large amounts of medical data, including patient records and other sensitive information. This can raise concerns around privacy and security, particularly if the data is not adequately protected.
- Lack of transparency: ChatGPT is a complex machine learning model that can be difficult to interpret or understand. This can make it challenging to identify the factors that are driving its responses, or to diagnose errors or biases in the model.
Overall, while ChatGPT has enormous potential to improve many aspects of medicine, it is important to carefully consider its limitations and ensure that its use is appropriate for each specific application. It should always be used in conjunction with other forms of clinical decision support and medical expertise, and its accuracy and reliability should be carefully monitored and validated.
Reference
1. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2023;2(2):e0000198.