- Artificial intelligence can help with the early detection of ailments.
- Federal healthcare agencies are increasingly using artificial intelligence in their operations.
On December 31, 2019, a Toronto-based company called BlueDot alerted its Canadian government and commercial clients that an unusual form of coronavirus was spreading rapidly in the Chinese port city of Wuhan.
That was nearly a full week before the Centers for Disease Control and Prevention reported the COVID-19 outbreak, and nine days before the World Health Organization did. BlueDot is both the creator and a user of a global health-monitoring platform driven by artificial intelligence (AI), and it was this technology that enabled BlueDot to spot the new virus so early.
As AI's role in the healthcare sector grows, private healthcare organizations, which have employed AI for a wide variety of use cases for years, are expanding their use of it, and federal civilian and defense healthcare agencies are now embracing it as well.
The Department of Veterans Affairs is using AI to scan medical records for veterans at high risk of suicide, to help its doctors interpret cancer lab results, and to suggest medications. The department's National Artificial Intelligence Institute is also carrying out AI research and development projects. Similarly, the Centers for Disease Control and Prevention, the National Institutes of Health, the Food and Drug Administration (FDA), the Defense Health Agency, and other healthcare-related agencies are exploring and fielding AI tools in support of their missions.
AI algorithms apply the same logic uniformly, but healthcare is a sector where no medical treatment can be one-size-fits-all: a treatment that works for one patient may not work for another. To use AI effectively for population health, precision medicine, and better outcomes, health agencies therefore need to aggregate data from multiple sources, including time-series data from wearables such as Fitbits, image data from magnetic resonance imaging (MRI) scans and X-rays, textual data from social media, and columnar data from lab results. Triangulating these multiple modes of data produces better results: improved accuracy, more generalizable algorithms, and deeper insights.
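The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not any agency's actual pipeline; all field names, feature choices, and values are hypothetical.

```python
# Sketch: collapse several data modalities into one feature record per
# patient. Every field name and value here is illustrative only.
from statistics import mean

def summarize_heart_rate(samples):
    """Reduce a wearable time series to simple summary features."""
    return {"hr_mean": mean(samples), "hr_max": max(samples)}

def combine_modalities(patient_id, wearable, imaging, labs, notes):
    """Join features derived from each modality into a single record."""
    record = {"patient_id": patient_id}
    record.update(summarize_heart_rate(wearable))   # time-series data
    record.update(imaging)                          # image-derived features
    record.update(labs)                             # columnar lab results
    record["note_length"] = len(notes.split())      # crude text feature
    return record

record = combine_modalities(
    patient_id="P-001",
    wearable=[62, 75, 88, 70],
    imaging={"lesion_volume_mm3": 410.0},
    labs={"glucose_mg_dl": 98},
    notes="patient reports mild fatigue",
)
```

In practice each modality would pass through its own preprocessing model, but the principle is the same: heterogeneous inputs are reduced to a common, joinable representation before analysis.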
Machine Learning makes algorithms smarter
An emerging area of AI called multi-modal machine learning (MMML) combines multiple data types to perform real-world tasks. Data modalities include acoustic signals, natural language, infrared images, Internet-of-Things (IoT) streaming data, MRI images, text, video, and more, and each type has numerous sub-modalities. MMML has improved outcomes in a wide array of use cases, including detecting lung lesions, diagnosing skin cancers, predicting mild cognitive impairment, predicting wellness, predicting pain, and more.
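One common MMML pattern is "late fusion": each modality is converted to features by its own extractor, the features are concatenated, and a single model scores the combined vector. The toy extractors and weights below are invented for illustration; a real system would use trained neural networks in their place.

```python
# Sketch of late-fusion multi-modal learning: per-modality feature
# extractors feed one linear scorer. All weights are hypothetical.

def image_features(pixels):
    """Toy stand-in for an image model (e.g., over an MRI slice)."""
    return [sum(pixels) / len(pixels), max(pixels)]

def text_features(note):
    """Toy stand-in for a clinical-text model."""
    words = note.lower().split()
    return [float(len(words)), 1.0 if "pain" in words else 0.0]

def fused_score(pixels, note, weights, bias):
    """Concatenate modality features, then apply one linear model."""
    feats = image_features(pixels) + text_features(note)
    return sum(w * f for w, f in zip(weights, feats)) + bias

# Usage with made-up inputs and weights:
score = fused_score(
    pixels=[0.2, 0.8, 0.5],
    note="patient reports pain",
    weights=[1.0, 0.5, 0.1, 2.0],
    bias=-1.0,
)
```

The design choice is that each extractor can be developed and validated on its own modality, which is part of why fusing modalities tends to yield the more generalizable algorithms noted above.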