Google shares 4 updates on generative AI in healthcare

Last year at Google Health’s Check Up event, we introduced Med-PaLM 2, our large language model (LLM) fine-tuned for healthcare. Since then, the model has been made available to a set of global customers and partners who are building solutions for applications such as streamlining nurse handoffs and supporting clinician documentation. Late last year, we introduced MedLM, a family of foundation models for healthcare built on Med-PaLM 2, and made it more broadly available through Google Cloud’s Vertex AI platform.

Since then, our work on generative AI for healthcare has progressed—from the new ways we train our healthcare AI models to our latest research on applying AI to the healthcare industry.

New modalities in healthcare models

Medicine is a multimodal discipline; it’s made up of different types of information stored across different formats, such as radiology images, lab results, genomics data, environmental context and more. To get a fuller understanding of a person’s health, we need to build technology that understands all of this information.

We’re bringing new capabilities to our models in hopes of making generative AI more helpful to healthcare organizations and people’s health. Most recently, we introduced MedLM for Chest X-ray, which has the potential to help transform radiology workflows by helping classify chest X-rays for a variety of use cases. We’re starting with chest X-rays because they are critical for identifying lung and heart conditions. MedLM for Chest X-ray is now available to trusted testers in an experimental preview on Google Cloud.
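
For a sense of how a hosted classifier like this is typically called, here’s a minimal sketch using the Vertex AI Python SDK. The project, endpoint ID, and request schema below are hypothetical placeholders; the actual MedLM for Chest X-ray preview may expose a different interface.

```python
# Minimal sketch of calling a deployed image classification model on
# Vertex AI. The project, endpoint ID, and instance schema are all
# hypothetical; MedLM for Chest X-ray is in experimental preview and
# its real API may differ.
import base64

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical project

# Hypothetical endpoint where a chest X-ray classifier is deployed.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

with open("chest_xray.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Assumed instance schema: one base64-encoded image per instance.
response = endpoint.predict(
    instances=[{"image": {"bytesBase64Encoded": image_b64}}]
)

for prediction in response.predictions:
    print(prediction)  # e.g. per-finding labels with confidence scores
```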

Research to fine-tune our models for the medical field

Approximately 30% of the world’s data volume is generated by the healthcare industry, and it’s growing at around 36% per year. This includes vast amounts of text, images, audio and video. On top of that, important information about patient histories is often buried deep in medical records, making it difficult to surface relevant information quickly.

For these reasons, we’re exploring how a version of the Gemini model, fine-tuned for the medical domain, could unlock new capabilities for advanced reasoning, understanding a high volume of context, and processing multiple modalities. Our latest research achieved state-of-the-art performance on the benchmark of US Medical Licensing Exam (USMLE)-style questions, at 91.1%, and on a video dataset called MedVidQA.
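
To make that evaluation concrete, here’s a minimal sketch of how accuracy on USMLE-style multiple-choice benchmarks is typically scored. The MCQItem structure and the model_answer() placeholder are our own illustrative assumptions, not the research code behind the reported 91.1%.

```python
# Illustrative sketch of scoring a model on USMLE-style multiple-choice
# questions. model_answer() is a hypothetical stand-in for any LLM
# client; it is not the research setup behind the 91.1% figure.
from dataclasses import dataclass


@dataclass
class MCQItem:
    question: str
    options: dict[str, str]  # e.g. {"A": "...", "B": "...", ...}
    answer: str              # gold label, e.g. "C"


def model_answer(item: MCQItem) -> str:
    """Placeholder: send the question and options to an LLM and parse
    the single option letter it returns."""
    raise NotImplementedError


def accuracy(items: list[MCQItem]) -> float:
    correct = sum(1 for item in items if model_answer(item) == item.answer)
    return correct / len(items)

# A score of 0.911 on a USMLE-style set would correspond to the 91.1%
# reported above.
```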

And because Gemini models are multimodal, we were able to apply this fine-tuned model to other clinical benchmarks, including answering questions about chest X-rays and genomics information. We’re also seeing promising results from fine-tuned models on complex tasks such as report generation for 2D images like X-rays, as well as 3D images like brain CT scans, representing a step-change in our medical AI capabilities. While this work is still in the research phase, there’s potential for generative AI in radiology to deliver assistive capabilities to healthcare organizations.
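
As a rough illustration of what a multimodal request looks like, the sketch below pairs a chest X-ray image with a text question using the Gemini API on Vertex AI. The model used here is a general-purpose Gemini vision model standing in for the medically fine-tuned research model, which isn’t publicly available; the project ID and file name are hypothetical.

```python
# Hedged sketch of a multimodal request: pairing an image with a text
# question via the Gemini API on Vertex AI. This uses a general-purpose
# vision model as a stand-in; the medically fine-tuned research model
# described in the post is not publicly available.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.0-pro-vision")

with open("chest_xray.png", "rb") as f:
    image = Part.from_data(f.read(), mime_type="image/png")

response = model.generate_content([
    image,
    "Describe any abnormal findings visible in this chest X-ray.",
])
print(response.text)
```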

Personal Health LLM for personalized coaching and recommendations

Fitbit and Google Research are working together to build a personal health large language model that can power personalized health and wellness features in the Fitbit mobile app, helping people get even more insights and recommendations from the data from their Fitbit and Pixel devices. This model is being fine-tuned to deliver personalized coaching capabilities, like actionable messages and guidance, that can be individualized based on personal health and fitness goals. For example, the model could analyze variations in your sleep patterns and sleep quality, and then suggest recommendations on how you might change the intensity of your workout based on those insights.
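
As a loose illustration of that kind of data flow, the sketch below turns a week of made-up sleep durations into a coaching prompt. The metric names, thresholds, and prompt wording are assumptions for illustration only, not Fitbit’s actual schema or the model’s real interface.

```python
# Illustrative sketch of the data flow described above: summarizing
# recent sleep metrics into a prompt for a coaching model. The sample
# values and thresholds are made up, not Fitbit's actual schema.
from statistics import mean

sleep_minutes = [412, 388, 305, 440, 371, 290, 402]  # hypothetical last 7 nights

avg = mean(sleep_minutes)
short_nights = sum(1 for m in sleep_minutes if m < 360)  # nights under 6 hours

prompt = (
    f"My average sleep over the last week was {avg:.0f} minutes, "
    f"with {short_nights} nights under six hours. "
    "How should I adjust today's workout intensity?"
)
# The prompt would then be sent to the fine-tuned personal health model.
print(prompt)
```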
