Medical imaging is an expanding market: according to Zion Market Research, it is expected to reach $48.6 billion by 2025. Medical images are the largest data source in healthcare, accounting for at least 90% of all medical data, according to GE Healthcare. Unfortunately, this massive flow of images clashes with outdated manual review processes, increasing the chances of medical error and misdiagnosis.
Developers of medical image processing software recognize this problem and are providing the industry with automated, computer vision-based methods for analyzing medical images. In healthcare, computer vision can complement routine diagnostics and optimize the workflows of radiologists and pathologists.
Computer vision applications in healthcare
With the exponential growth in hardware performance, computer vision is gradually becoming a popular decision-making support tool in healthcare.
Non-invasive cancer diagnostics
Google, IBM, clinical researchers at universities, and more than 100 startups are investing time and effort into leveraging computer vision to diagnose cancer from digital imaging alone, without invasive biopsies. These virtual biopsies promise to surpass invasive procedures in cost-effectiveness, patient comfort, and time to result, while matching their accuracy.
The MIT Computer Science and Artificial Intelligence Laboratory developed a cancer prediction tool based on machine vision and deep learning. The tool can predict cancer development up to five years in advance. It was trained on 90,000 mammograms from 60,000 patients, supplied by Massachusetts General Hospital.
MIT claims the tool works equally well for white patients and ethnic minorities, unlike similar projects whose training data is biased toward white women. This is especially important because black women are 42% more likely to die from breast cancer than white women, partly because existing cancer detection techniques do not serve them well.
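MIT's fairness claim comes down to measuring model accuracy separately for each demographic subgroup and checking that the figures are comparable. A minimal sketch of such a per-group evaluation, with hypothetical group names and records (not data from the MIT study):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute prediction accuracy separately for each demographic group.

    records: iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping group -> accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records: (group, ground truth, model prediction)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
print(accuracy_by_group(records))  # both groups score 0.75 here
```

A large gap between the groups' accuracies would indicate the kind of bias the other projects are criticized for.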
Mindshare Medical developed the RevealAI-Lung computer-assisted diagnostic software to be used together with CT scans for faster and easier lung cancer detection.
In addition to helping with diagnostics, RevealAI-Lung can offer recommendations for individual follow-ups, and it integrates well with Picture Archiving and Communication Systems (PACS). Mindshare Medical has demonstrated that its product reduces false positives and the time needed to settle on a diagnosis, which in turn can reduce a patient’s exposure to radiation and unnecessary biopsies.
Enhancing precision medicine
Precision medicine offers treatment tailored to a patient’s detailed profile and the available health, environmental, and socioeconomic data. Accordingly, precision medicine software requires substantial technical support to process and analyze enormous datasets.
Computer vision is part of the precision medicine tech stack, along with big data analytics and AI, allowing doctors to extract quantifiable data points from each image in any modality.
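At its simplest, extracting quantifiable data points means reducing an image's pixel values to a handful of numeric features. A toy sketch of the idea, where the threshold and the two metrics are purely illustrative (not any vendor's actual biomarkers):

```python
def image_biomarkers(image, threshold=128):
    """Reduce a grayscale image (2D list of 0-255 values) to simple
    quantitative features: mean intensity and the fraction of pixels
    above an illustrative brightness threshold."""
    pixels = [p for row in image for p in row]
    mean_intensity = sum(pixels) / len(pixels)
    bright_fraction = sum(p > threshold for p in pixels) / len(pixels)
    return {"mean_intensity": mean_intensity, "bright_fraction": bright_fraction}

# A tiny 3x3 "scan" with a bright region in the upper right
scan = [
    [10, 20, 200],
    [15, 220, 210],
    [12, 18, 25],
]
print(image_biomarkers(scan))
```

Real imaging biomarkers are far more sophisticated (texture, shape, and perfusion features across modalities), but they follow the same principle: image in, numbers out.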
Health Nucleus is a clinical research center that offers whole genome sequencing combined with MRI scanning to create a better picture of an individual’s health and disease risks. The company evangelizes its proprietary approach to the prevention and early detection of neurodegenerative, cardiovascular, and metabolic disorders by simultaneously looking at a patient on macro- and micro-levels, generating about 150GB of data per individual.
Quibim is a platform, available both on-premises and in the cloud, that provides hospitals and diagnostic imaging centers with an array of imaging biomarkers that help track how an individual’s genotype interacts with the environment. The company offers insights into the human phenotype and quantifies a patient’s response to treatment, genetic expression, and environmental factors. These findings can be used during clinical trials or for tracking chemotherapy results.
The platform extends to multiple patient health domains, including neuro- and musculoskeletal systems, liver, lungs, and a vast oncology cluster.
Decision support in emergency care
MaxQ-AI creates a set of diagnostic tools with 3D imaging, patient-specific data, deep vision, and cognitive analytics at their core, partnering with GE Healthcare, IBM, and Samsung. The partners focus on using real-time data in the emergency room to assess patients suspected of acute head trauma or stroke and to detect intracranial hemorrhage, so that patients can receive well-timed treatment and avoid the long-term consequences of a chronic condition.
Computer vision in dermatology

Dermatologists rely on visual inspection when examining patients and arriving at a diagnosis. This opens the door for machine vision and AI-based applications to assist dermatologists in the early detection of skin conditions.
Detecting skin abnormalities using a mobile app
ECD-Network has developed the SkinIO app that uses computer vision and deep learning to detect skin abnormalities using a mobile device.
Patients begin by downloading the app and creating an account. Next, they take a photo of the skin region they want checked and upload it to SkinIO. The app can either instruct the patient to submit more photos or immediately schedule an appointment with a dermatologist. If there is nothing to worry about, SkinIO schedules reminders for the patient to upload follow-up photos after a predefined period to check for any skin changes.
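The triage flow just described can be sketched as a simple decision function. The thresholds, score names, and outcome labels below are hypothetical, not SkinIO's actual logic:

```python
def triage(photo_quality, risk_score, quality_min=0.6, risk_threshold=0.5):
    """Decide the next step after a skin photo upload.

    photo_quality: 0-1 score for whether the image is usable (hypothetical).
    risk_score: 0-1 model output for a suspected abnormality (hypothetical).
    """
    if photo_quality < quality_min:
        return "request_more_photos"
    if risk_score >= risk_threshold:
        return "schedule_dermatologist_appointment"
    return "schedule_follow_up_reminder"

print(triage(0.3, 0.9))  # unusable photo -> request_more_photos
print(triage(0.9, 0.8))  # suspicious finding -> schedule appointment
print(triage(0.9, 0.1))  # nothing alarming -> follow-up reminder
```

The key design point is that the app never issues a diagnosis itself: every suspicious case is routed to a dermatologist.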
SkinIO can detect various skin conditions, including cancerous cells and benign tumors such as lipoma.
Computer vision in radiology

Computer vision coupled with AI can help radiologists spot fractures, dislocations, and soft tissue injuries. These are typically hard to detect with the human eye and standard imaging, yet they cause long-term suffering for patients if they remain undetected.
Vertebral fracture detection using neural networks
Computer vision with deep neural networks can detect osteoporotic vertebral fractures. The problem with osteoporosis is that it develops over a long time and is often diagnosed only after the first fracture has occurred.
The current standard for detecting spinal fractures is a CT or X-ray scan reviewed manually by a health professional.
Researchers at Dartmouth College, Hanover, developed a neural network-based model that uses computer vision to detect osteoporotic vertebral fractures. The system was trained on over a thousand CT scans of the chest, abdomen, and pelvis. In testing, the model achieved 89.2% accuracy, surpassing the 88.4% accuracy of professional radiologists.
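The reported comparison is a plain accuracy metric: the fraction of test scans on which the model's fracture/no-fracture call matches the ground-truth label. A minimal sketch of how such a figure is computed, on illustrative labels rather than the Dartmouth data:

```python
def accuracy(truths, predictions):
    """Fraction of cases where the prediction matches the ground-truth
    label (e.g., vertebral fracture present / absent on a CT scan)."""
    assert len(truths) == len(predictions)
    hits = sum(t == p for t, p in zip(truths, predictions))
    return hits / len(truths)

# Illustrative labels: 1 = fracture present, 0 = absent
truths      = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
model_preds = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]  # one false positive
print(accuracy(truths, model_preds))  # 0.9
```

On a real evaluation, accuracy alone can mislead when fractures are rare, which is why studies typically also report sensitivity and specificity.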
Computer vision augments healthcare
The diagnostics field is being disrupted by technologies such as computer vision. There is ongoing debate about whether this will cost jobs or simply improve precision and help radiologists accomplish their work faster.
However, incorporating computer vision into medical image analysis has undoubted benefits:
- It improves the quality of diagnosis: while diagnosticians rely on their experience and human judgment cannot be avoided in some cases, algorithms can deliver consistent accuracy and pick up on details that may escape the human eye.
- It saves time and lives: computer vision can detect life-threatening conditions in earlier stages.
- It reduces costs: a misdiagnosis can waste thousands of dollars on the wrong treatment for both the patient and the provider. Computer vision suggests a highly likely diagnosis from the start, to be verified by a human expert.