The Centers for Disease Control and Prevention (CDC) reports that heart disease is responsible for 25% of all deaths in the United States and accounts for $219 billion in healthcare costs and lost productivity. The statistics look grim, but advances in machine learning put hope on the horizon.
How supercomputing came to be linked with heart disease
Supercomputer models are capable of predicting how blood flows through our veins and arteries. We can examine high-resolution simulations of individual patients’ hearts all the way down to the cellular level.
We began this research by studying one particular disease: coarctation of the aorta. This congenital heart defect narrows the aorta, the body's largest artery, and is associated with serious risks of hypertension, stroke, and heart failure. Machine learning, 3D printing, and supercomputing allow us to evaluate how those risks change for people with this disease as their lifestyles and activities change.
We begin by conducting millions of hours of simulations of stressors, such as high elevation or pregnancy, on the narrowed aorta. The machine learning algorithm then spends millions of compute hours training on the data from these simulations to build predictive models of flow under conditions that were never simulated. Taking a design-of-experiments approach common in the pharmaceutical industry, we identified the minimum number of simulations required to enable transfer learning for a new patient.
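As a rough illustration of this workflow, the sketch below pairs a space-filling design of experiments (here, Latin hypercube sampling; the article does not specify which design was used) with a surrogate model (here, a Gaussian process, standing in for whatever predictive model is actually trained). The simulator stub, parameter names, and bounds are all hypothetical.

```python
# Illustrative sketch only: sample a small set of stressor conditions,
# run one expensive simulation at each, then fit a surrogate model that
# predicts flow at conditions that were never simulated.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_flow_simulation(params):
    """Placeholder for one expensive 3D flow simulation (not HARVEY's API).
    params = [heart_rate_bpm, viscosity_scale, stenosis_severity]."""
    hr, visc, sten = params
    # Toy response standing in for a simulated pressure gradient (mmHg).
    return 0.05 * hr + 8.0 * sten**2 + 2.0 * visc

# Latin hypercube sampling spreads a small budget of runs evenly over the
# stressor space -- one common design-of-experiments choice.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=32)                    # only 32 designed runs
lower, upper = [50, 0.8, 0.0], [180, 1.2, 0.9]         # hypothetical bounds
X = qmc.scale(unit_samples, lower, upper)
y = np.array([run_flow_simulation(p) for p in X])

# Fit the surrogate on the designed simulations.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(X, y)

# Predict flow at a condition that was never simulated, with uncertainty.
new_condition = np.array([[150, 1.0, 0.6]])            # e.g., heavy exercise
mean, std = gp.predict(new_condition, return_std=True)
print(f"predicted pressure gradient: {mean[0]:.1f} ± {std[0]:.1f} mmHg")
```

In practice each call to the simulator would be a full 3D run costing thousands of compute hours, which is exactly why minimizing the number of designed runs matters.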
Prior to AI and supercomputing, we based treatments on population averages and invasive measurements acquired in a clinical setting. Today, we capture images of each patient's aorta and model how various day-to-day activities stress that individual's aortic walls.
To obtain these individualized results, we needed efficient code and powerful computers. We developed a software package called HARVEY to calculate accurate flow patterns for an individual patient, and we optimized it to run efficiently on a supercomputer with over 1.5 million processors, one of the world's most powerful supercomputers at the time. On this system, we used 70 million compute hours to simulate a wide range of potential conditions and create the training data for the machine learning model.
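A solver scales to that many processors because the fluid domain is split among them, with each processor exchanging only a thin boundary layer of data with its neighbors every time step. The toy below (using mpi4py; the variable names and update rule are placeholders, not HARVEY's actual implementation, which is a far more sophisticated solver on patient-specific vascular geometry) illustrates that halo-exchange pattern in one dimension.

```python
# Toy halo-exchange sketch (not HARVEY code): each MPI rank owns a slab of
# the grid plus one "ghost" cell on each side, and trades boundary values
# with its neighbors before every local update.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx_local = 1000                       # interior cells owned by this rank
u = np.zeros(nx_local + 2)            # +2 ghost cells
u[1:-1] = float(rank)                 # dummy initial data

# Neighbor ranks; PROC_NULL makes the sends/receives at the ends no-ops.
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # Halo exchange: send my edge cells, receive my neighbors' edges.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Local stencil update (a stand-in for the real flow kernel).
    u[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]

print(f"rank {rank}: mean field value {u[1:-1].mean():.3f}")
```

Run with, e.g., `mpiexec -n 4 python halo_demo.py`. Because each rank talks only to its immediate neighbors, the communication cost per step stays roughly constant as the machine grows, which is what makes million-processor runs feasible.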
Our reasoning is simple: we can't run 70 million hours of simulations on a supercomputer for every new patient who comes to the hospital. Machine learning, however, allows us to run far fewer simulations and let HARVEY's predictive model take it from there. We have validated our simulations of coarctation of the aorta against controlled fluid experiments in 3D-printed models, as well as against invasive measurements in patients, and we are working to create predictive models for other cardiovascular diseases in the same way.
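The comparison at the heart of such validation can be simple even when the experiments are not: predicted quantities are checked against what the bench rig or catheter actually measured. The sketch below uses made-up placeholder numbers, not published results, purely to show that kind of check.

```python
# Hypothetical validation check: compare simulated pressure gradients
# against bench measurements from a 3D-printed model. All values below
# are invented placeholders for illustration only.
import numpy as np

simulated = np.array([12.1, 18.4, 25.0, 31.7])   # mmHg, from the flow model
measured = np.array([11.8, 19.0, 24.2, 33.1])    # mmHg, from the bench rig

rel_err = np.abs(simulated - measured) / measured
bias = np.mean(simulated - measured)             # Bland-Altman style bias
print(f"mean relative error: {100 * rel_err.mean():.1f}%")
print(f"mean bias: {bias:+.2f} mmHg")
```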
Replacing supercomputers with machine learning and predictive models
Simulations from supercomputers can accurately predict results before patients are on the operating table. When physicians perform a bypass graft or insert a stent, they want to know what they’re up against. We have the capability to look inside someone’s heart and create high-resolution, 3D blood flow models, but that capability is typically limited to simulating one or two heartbeats. This information is valuable, but we could do so much more.
Continuously tracking a patient's blood flow and vascular dynamics over months or years is still out of reach: we simply don't have enough supercomputers. Machine learning, however, lets us sidestep that level of computational power. By combining high-fidelity, physics-based models such as HARVEY with machine learning, we can assess an individual's blood flow across a range of daily activities. We are actively developing methods to build long-term blood flow maps and, ultimately, to drive them from wearable devices. Our long-term goal is physics-based AI models that assess blood flow dynamics every day, throughout a patient's daily activities. This information would give doctors unprecedented insight into a patient's flow patterns and allow for more informed remote monitoring.
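To make the wearable idea concrete: one plausible pipeline (purely speculative; none of the function names or numbers below come from the article) would stream heart-rate samples from a device and run each through a trained surrogate, like the `gp` model in the earlier sketch, to build a day-long map of a flow metric.

```python
# Speculative sketch: turn a wearable's heart-rate stream into a long-term
# map of a flow metric via a trained surrogate. Data and the stand-in
# inference function are illustrative placeholders only.
import numpy as np

def predict_pressure_gradient(heart_rate_bpm):
    """Placeholder for surrogate inference, e.g., gp.predict(...)."""
    return 0.05 * heart_rate_bpm + 4.0          # toy linear stand-in

# Simulated 24 hours of wearable heart-rate samples, one per minute,
# with a synthetic morning workout around 7:00 a.m.
minutes = np.arange(24 * 60)
heart_rate = 65 + 30 * np.exp(-((minutes - 7 * 60) / 45.0) ** 2)

flow_map = predict_pressure_gradient(heart_rate)
print(f"daily peak gradient: {flow_map.max():.1f} mmHg "
      f"at minute {flow_map.argmax()}")
```

Because surrogate inference is cheap, a map like this could in principle be refreshed continuously, something no supercomputer allocation could sustain per patient.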
We are in the process of shifting our source of information from supercomputers that analyze single heartbeats to machine learning that creates long-term maps of blood flow dynamics. It’s an exciting time to be involved in medical technology. Machine learning has the potential to revolutionize modern healthcare.
About Dr. Amanda Randles:
Amanda E. Randles, Ph.D., is the Alfred Winborne Mordecai and Victoria Stover Mordecai Assistant Professor of Biomedical Sciences at Duke University. She has made significant contributions to the fields of high-performance computing and vascular modeling, and is the recipient of the NSF CAREER Award, the ACM Grace Murray Hopper Award, the IEEE-CS Technical Consortium on High Performance Computing (TCHPC) Award, the NIH Director's Early Independence Award, and the LLNL Lawrence Fellowship. She was also named to the World Economic Forum Young Scientist List and the MIT Technology Review list of the World's Top 35 Innovators Under 35. She holds 120 U.S. patents and has published 71 peer-reviewed articles.