In the second episode of our Innovative Technology Insights podcast Jeff Elton, CEO of ConcertAI, joins Natasha Allen for an insightful discussion on the growing intersection of health care and artificial intelligence – What are the areas of medicine that stand to benefit the most from AI? How will this adoption assist in addressing health disparities? Where are the potential ethical pitfalls? And what can companies do to prepare their enterprises for this change?
The episode transcript below has been edited for clarity.
Welcome everyone. My name is Natasha Allen and I am a partner in Foley’s Silicon Valley office and the Co-Chair of the AI area of focus within our Innovative Technology sector. On today’s podcast, we’ll be discussing AI in health care.
Joining me to provide his expertise on the topic is Jeff Elton. Jeff is CEO of ConcertAI, which provides research and patient-centric solutions for life sciences innovators and the world’s leading providers. Prior to joining ConcertAI, Jeff held management positions at Accenture and the Novartis Institutes for BioMedical Research, Inc.
Thank you very much Natasha.
Thank you for joining me today. Let’s just hop right into it. As in many industries, AI technology will have a great impact on health care. The health care industry, however, is facing some challenges with regard to the implementation of AI. Do you have a perspective on whether AI adoption in health care is lagging?
It’s hard to say whether it’s lagging but it’s certainly advancing. First, I’ll address some of the questions that come up around AI and health care. AI represents a model that allows you to either predict features of a patient to make an interpretation of their needs, or augment an interpretation that a medical provider may do. And it has the ability to handle vast amounts of data and information.
Usually, AI has been trained, and this is where some concerns come in. That training usually comes from retrospective data – images, digital pathology, radiology, electronic medical record data, and so on. The underlying question is whether these training data sets really represent the population being looked at. Transparency is key. Can you have confidence in the training methodology? Did you pick the right kind of group to conduct the primary training? There is also a process of reserving some data for ex-post validation, and you want randomization between the training and validation sets as you begin doing that.
And then comes stability. Populations are not always the same. Treatment paradigms and outcomes improve over time, so any AI model, once built, may not continue to exhibit the same performance.
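The practice Jeff describes – reserving part of a retrospective data set for ex-post validation, with randomization between the two groups – can be sketched in a few lines. This is an illustrative example only, not ConcertAI’s actual pipeline; the `split_train_holdout` function and the patient identifiers are hypothetical.

```python
import random

def split_train_holdout(records, holdout_frac=0.2, seed=42):
    """Randomly partition records into a training set and a
    held-out set reserved for ex-post validation."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)  # randomization between the two groups
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical patient-record identifiers
patients = [f"patient_{i}" for i in range(100)]
train, holdout = split_train_holdout(patients)
print(len(train), len(holdout))  # 80 20
```

Because the held-out records never influence training, performance measured on them is a fairer estimate of how the model behaves on the population it will actually see – and, per the stability point above, that estimate should be rechecked periodically as treatment paradigms shift.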
The field is maturing in how it advances and incorporates those lessons. But within the clinical areas we’re in, there’s a very high expectation that AI is going to be integral to just about all parts of the health care system. The result is vast amounts of data and complexity. AI really can augment the decision-makers – getting a foundation in place that allows for trust, transparency, and considerations of equity in health care.
You touched on it briefly – what are some of the ethical issues you think that are associated with it, and the regulatory landscapes that are impacting those issues?
Ethical issues have to do with whether you’re seeing all subpopulations behave as you would expect. A lot of the biomedical research that we do tends to occur in academic medical centers and [they] also tend to be in urban areas.
People seeking treatment at academic medical centers tend to be of more means. They tend to be healthier, even though they are seeking treatment. This is even true in areas like cancer care. Patients are generally healthier and less decompensated. They don’t necessarily look like the 80% of patients who receive care in the community or a retail health system. I say that because academic medical centers have tended to be the epicenter of a lot of biomedical innovation. Part of the ethical consideration is aligning the AI model on relevant population data – those truly suffering with the conditions.
How is AI able to assist in addressing health disparities?
Bias is, I think, unfairly attributed to AI. That does not have to be the case. In fact, there are a lot of efforts and people working to use AI to assure equity in health care. An example is prostate cancer, which may disproportionally affect Black American males. During trial design, I may need to assure that the lab values of the trial, and even where that trial is going to be run, allow substantial participation of sub-cohorts. For Black American men in that particular case, the goal is for results to be statistically valid for that sub-population or sub-group. AI models and tools in trial design are now bringing that capability to bear.
In the same way, I may need to identify the clinical sites that have the capability of gaining adequate participation of patients with diverse characteristics to address potential health care disparities and inequities. Again, AI models are contributing a lot to begin doing that – all the way over to the workflows of health care providers. We’re now doing patient identification solutions, working with those providers to identify patients and ensure that everyone who potentially meets the eligibility criteria is included in the study – particularly those who may be disproportionally affected. In these particular cases we’re leveraging AI models and data to overcome historical inequities and disparities.
That’s amazing. What areas of medicine do you think are best poised to benefit from AI?
I’ll divide the answer into a couple different elements. Where are we already seeing benefits occurring? Then how do we see this beginning to evolve and change? And where might this go over the mid-term and then to the longer term?
On the provider side, mostly in radiological imaging, there have been a lot of advances in AI models for image interpretation. This is true in two-dimensional imaging (x-rays) and advanced, three-dimensional imaging (MRI). AI can perform lots of different roles. Not only do I have an image, but I can do image pre-processing. I can use it to clean up features of the image that then allow AI models to do a more effective job.
Why is this important? One reason is that radiologists are in increasingly short supply. That’s a practical consideration. The number of new radiologists going into practice, versus those retiring, versus the demand on the profession is in flux.
Second, AI models don’t get tired. They can aid a primary diagnosis to ensure that features are presented to the radiologists for their acceptance. Their results don’t always have to be accepted at face value either.
AI models can also be used for ex-post secondary reviews, making sure critical features aren’t missed or raising red flags to improve overall care quality. Over the longer term, we’ll see a shift to digital pathology because many of the same model-based approaches will work there. With these digital slides coming from anatomical pathology on different platforms, you’ll start seeing technical convergences around that.
The next layer of the convergence is connecting electronic medical records and other data types to imaging data. Take electronic medical records, for example. Say I have a non-small cell lung cancer patient. I’m going to do an image interpretation and also bring in portions of the medical record with molecular data. The integration of those two is further going to aid the accuracy.
Now what I’m going through is not just the reading of the actual image; I’m actually beginning to go towards selection of the treatments and the clinical trials that the patient may be eligible for. I’m stopping short of calling this clinical decision support.
That’s actually a very important feature because of the way that AI will continue to transform health care. It’s like a pointillist painting in the sense that you’re going to get lots of very small-aperture solutions done with exquisite precision. Together they start to provide ever more functionality – supporting the decision-making of those particular clinical teams, and making sure they don’t miss features.
Eventually there will be interconnection across data types, then interconnection between different disciplines. There will then be a convergence in the decision architecture. In fact, clinical data sources – electronic medical record data sources – will actually have a little less utility. Smarter, intelligent layers will be able to integrate across multiple modalities of data. These models are what’s going to start driving greater quality levels, productivity levels, and much higher precision in predicting and fostering much better outcomes for patients.
That’s very good. Now let’s talk a little about ConcertAI. How is ConcertAI using AI in its company?
AI can be thought of in three different levels. We use AI in our own operations. As a company we work with hundreds of health care providers. Some of our clinical solutions are deployed with 1,400 providers in Europe, the United States, and Japan. We work with hundreds here in the United States on different aspects of particular cancers. Although we’re not exclusive to cancer, we have a very deep emphasis on it.
We’re using AI and natural language processing models to read the unstructured portions of records. These may be PDF documents, notes from nurses and physicians, and more. With AI, we can parse them – turning all non-accessible, non-readable data into a machine-readable form so that analysis can be done.
We also use AI models to interrogate the data for quality control. We are looking for values that can’t be found in nature. For example, a human temperature shouldn’t be recorded as 131 degrees Fahrenheit – that would not be possible. You can actually start to use this for quality control. In our operations, AI and natural language processing informs how we manage and process data.
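The idea of flagging “values that can’t be found in nature” amounts to range checks on clinical fields. A minimal sketch follows; the `VALID_RANGES` table, field names, and thresholds are illustrative assumptions, not ConcertAI’s actual quality-control rules or clinical guidance.

```python
# Plausible physiological ranges (illustrative values, not clinical guidance)
VALID_RANGES = {
    "temperature_f": (90.0, 110.0),   # human body temperature, Fahrenheit
    "heart_rate_bpm": (20, 300),
}

def flag_impossible_values(record):
    """Return the field names whose values fall outside plausible ranges."""
    flags = []
    for field, (lo, hi) in VALID_RANGES.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            flags.append(field)
    return flags

# A recorded temperature of 131 °F is flagged as a likely data-entry error
print(flag_impossible_values({"temperature_f": 131.0, "heart_rate_bpm": 72}))
# ['temperature_f']
```

In a production pipeline such rule-based checks would typically sit alongside learned models, catching transcription errors before the data feeds any downstream analysis.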
Then above that level we start to develop AI models and solutions that are very specific to our software-as-a-service (SaaS) layer. We have a set of analytic solutions. One is metastatic status, which gauges where patients are in the cancer journey. In electronic medical records, it’s usually captured only at the first point of diagnosis. We have AI models that can read the patient record and assess that status at any point in time. Non-small cell lung cancer does not have its own ICD code. With our model, we can read the record and assess whether it’s a non-small cell or a small cell lung cancer. That’s a fairly complex, predictive process.
So that’s our kind of analytic environment. We have clinical trial design tools assuring that clinical studies can be designed robustly, with minimum burden and bias. As discussed, we’re using AI solutions for everything from identifying patients for treatment eligibility to our TeraRecon solution, which is a radiological image interpretation solution.
We are also in the process of launching an AI machine learning layer that will deploy software and medical device solutions too. It’s going to be an open architecture that allows provider entities and other third parties to integrate their models and architecture.
We think of ourselves as trying to bring together vast amounts of data, of different types, to different decision makers – orchestrating that in a way that aids their efficiency and effectiveness on things in ways that weren’t possible before. Deeply partnered with biomedical innovators and health care providers, we’re kind of nestled in between two ecosystems. We have 150 actively deployed AI models in production.
Wow that’s amazing. One final question. Say there is a business that’s looking to implement AI in their operations. Are there any tips or suggestions you have for them?
Data management – the preparation and accessibility of a company’s data – is one of the big impediments. Frankly, if you were looking at our data science team, around 70% of their work is preparing the data before they can actually start to develop a model.
A lot of organizations thinking about AI and starting to bring intelligence into their enterprise are going to be concerned with data collection and organization. Historically, large enterprise applications locked down data into functionally aligned pieces of infrastructure. Whereas AI and machine learning are focused on leveraging intelligence that orchestrates functions in ways that are not cut and dry. The first step is overcoming legacy data architecture at that foundational level.
The next part of the process is governance. Think about the kinds of models being developed, where they are being deployed, what their usage is, what the workflows will be, and how to decide when results are valid or not. You always want a smooth and discrete decision-making process – from model development and deployment to release and oversight.
The last part is anticipating and allowing for dynamic workflows – bridging intelligence and orchestration – looking for new efficiencies and more intelligent ways of working. You may find that legacy operations, which were much more sequential, may not look the same. Our personal view is that we see tremendous value in AI. We see it in our own organization, and we see it in what we are capable of bringing to our customer base.
That’s amazing. Thank you so much, I appreciate you taking the time to walk through this with us – discussing how AI is used and can be used in health care – plus some of the benefits and some of the things that need to be improved. We really appreciate your time. Thank you everyone for joining us, until next time.
Foley & Lardner’s Innovative Technology Insights podcast focuses on the wide-ranging innovations shaping today’s business, regulatory, and scientific landscape. With guest speakers who work in a diverse set of fields, from artificial intelligence to genomics, our discussions examine not only the legal implications of these changes but also the impact they will have on our daily lives.