Episode Transcript:
Minimizing the risk of bleeding is a critical factor in quality care management for patients with end-stage kidney disease. In part three of our exploration into Translating Science into Medicine, we talk with Hanjie Zhang, a biostatistician at RRI, the Renal Research Institute, and an expert in artificial intelligence and machine learning. Scientists at RRI have created a mobile application that uses vascular access images to identify and classify aneurysms that may pose a risk to patients. By quickly analyzing smartphone images taken by a caregiver, the app classifies the presence and severity of an aneurysm in the patient's vascular access. This app is an example of how artificial intelligence can help recognize patterns in data, automatically flag potential problems, and improve personalized patient care.
Dr. Maddux
With me today is Dr. Hanjie Zhang. She was part of our process when we began to understand that aneurysms were a really important patient safety risk for hemodialysis patients. This came up at a Medical Advisory Board meeting that we held a year ago. Hanjie, would you talk a little bit about aneurysms and what the risks are for patients?
Dr. Zhang
Sure. Thank you, thank you for the introduction. So, you know, among our hemodialysis patients, probably around 80% are on a fistula or graft. And when you use a fistula or graft you may develop certain complications, such as aneurysms. One reason is repeated access in the same location, three times a week. This can weaken the fistula wall, and that results in aneurysm formation. Also, you know, some patients have outflow stenosis. This results in an inflow-outflow mismatch that increases the pressure in the access, which also leads to aneurysms. The reported incidence of aneurysms really varies, from 5% to 60%. The most feared consequence of an aneurysm is rupture, which can lead to death. There is a study that looked at patients in New York City. It found 100 deaths between 2003 and 2011 from hemorrhage from a fistula or graft. That is about one death per month in New York City alone, so this is really not a rare event. And it happens really rapidly: most of the patients couldn't even make it to the hospital; they either died at home or on the way to the hospital. So this can be a really life-threatening condition when an aneurysm gets really bad.
Dr. Maddux
Yeah, thank you, Hanjie. Bleeding risk and its impact on patient safety is something we've recognized as important in general. Tell us a little bit about the work that you and your colleagues at RRI are doing to try to address aneurysms.
Dr. Zhang
Okay. Currently, you know, the clinical practice is regular inspection, which requires the patient to have it done by a healthcare professional who is capable of making such a diagnosis or recommendation. But such things require the patient to travel to a site, make an appointment, and so on, so it's not that easy to do routinely, especially for a patient who dialyzes at home. So what our project is trying to do is automate this process and minimize the traveling to the clinic. What we're trying to do is just use a smartphone: with artificial intelligence technology, we can automatically classify the aneurysm image by just taking a picture, no matter where, it can be in the clinic or it can be at home, and then you're going to know about the aneurysm right away without going to any specific facility.
Dr. Maddux
So, these image processing techniques require a lot of advanced computing power, plus some methods for trying to understand how the aneurysm is evolving over time through its images. Can you describe a little bit about the process and the theory behind this convolutional neural network that you're using to do these analyses?
Dr. Zhang
Sure. So, the process has multiple steps. First we need to, you know, collect images from our patients with a fistula or graft. It can be any device, iPhone, iPad, or Android, and we want to collect images from a diverse population, so patients of different ages, with different skin tones, and all sorts of things. Then we share these images with our vascular access experts, Doug Morris' team, and they do the adjudication and diagnosis and share the aneurysm adjudications back with us. This is a very important step, because for supervised learning we really need the ground truth, and this is the ground truth provided by our vascular access experts. Then, when we receive all of this, we separate the data into training and validation sets. This is the standard process for machine learning, because, you know, we want to check the model's performance on an independent data set that has not been used for training. And then, because we use different devices for collecting the images, we receive images with different resolutions, different sizes, and all kinds of those things, so we need to standardize all of them. Then we feed these standardized images into our convolutional neural network. The convolutional neural network is the algorithm we're using to build our project. It is one kind of deep learning algorithm, widely used in computer vision for image classification and face recognition. It's the same technology you are using right now to unlock your phone. A full convolutional neural network has three parts. The first is the input layer, which is just the image itself. The second is the hidden layers, and the hidden layers involve a lot of things: multiple convolution layers, pooling layers, and dense layers. And the last layer is the output layer; in our case, it is just the aneurysm classification result.
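For illustration, here is a minimal sketch of the kind of network Dr. Zhang outlines: an input layer, hidden convolution, pooling, and dense layers, and a classification output. It is written in Keras; the image size, layer widths, and two-class output are illustrative assumptions, not details of RRI's actual model.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_aneurysm_classifier(input_shape=(224, 224, 3), num_classes=2):
    """A small CNN: input layer -> conv/pool hidden layers -> dense -> classification output."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),                   # input layer: the standardized image itself
        layers.Conv2D(32, 3, activation="relu"),          # convolution layers learn local image features
        layers.MaxPooling2D(),                            # pooling layers downsample the feature maps
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),             # dense layer combines the extracted features
        layers.Dense(num_classes, activation="softmax"),  # output layer: aneurysm class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The training and validation split she describes would then be supplied as, for example, `model.fit(train_images, train_labels, validation_data=(val_images, val_labels))`, with the validation images held out of training.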
Dr. Maddux
So when we think about these aneurysms and what this application might look like, can you describe what the application workflow would be once you have enough images to actually validate this particular model?
Dr. Zhang
So currently we are actually collecting more images in the RRI clinics. When we get, I would say, roughly over 1,000 images, we are going to share them with the vascular access experts, like I just mentioned, Dr. Morris' team, and then we're going to go through the process I just talked about. And also, you know, we already built the app. This app has two phases. The first phase is the training flow. In the training flow, the clinicians take images in the clinic and upload them, but they are not going to receive feedback right away. All these images go to AWS, the Amazon Web Services platform, and they are saved there. We follow the process, we train our algorithm, we optimize it, we find the best algorithm, and we host it on the AWS platform. And then the next phase will be the real-time flow. In the real-time flow, the clinicians take the image and upload it, and this time they're going to see the feedback. So this is the thing we are working on right now.
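As a rough sketch, the two phases could look like the following on AWS; the bucket name, key layout, and endpoint name are hypothetical, since the episode does not spell out the actual configuration.

```python
# Hypothetical sketch of the app's two phases on AWS. The bucket, key layout,
# and endpoint name below are illustrative placeholders, not RRI's actual setup.
import boto3

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

def training_flow_upload(image_path: str, patient_id: str) -> None:
    """Phase 1: the clinician uploads an image; it is stored on AWS for later training."""
    filename = image_path.rsplit("/", 1)[-1]
    s3.upload_file(image_path, "rri-aneurysm-images",     # hypothetical bucket
                   f"incoming/{patient_id}/{filename}")

def real_time_flow_classify(image_path: str) -> str:
    """Phase 2: the clinician uploads an image and receives the classification right away."""
    with open(image_path, "rb") as f:
        response = runtime.invoke_endpoint(
            EndpointName="aneurysm-classifier",           # hypothetical hosted model endpoint
            ContentType="application/x-image",
            Body=f.read(),
        )
    return response["Body"].read().decode("utf-8")        # e.g. a class label or severity score
```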
Dr. Maddux
And as you think about this, you're requiring fairly advanced, high-performance computing in a cloud environment. Can you describe the technical nature of what's required there and how you've begun to interact between these images and the cloud processing that's occurring?
Dr. Zhang
Yes. So, yes, computation power is really important for this project, because the training requires a really large amount of it. I actually tried this initially on my own machine, and it really couldn't work, and that's why currently we use AWS SageMaker. It is a new platform, but it could be another cloud platform as well. So what we did is set everything up on the cloud, with a lot of help from Julian's IT team. They helped us set up everything on the cloud platform, and we have our algorithm running over there, because on the cloud we can use GPUs, which really, really improve our computation. Training used to take hours and even days; right now we can train in several minutes. And also the results: if you do image classification on one image, it can come back within seconds. So this cloud computation really gives us the capability to scale up our model deployment, from some of the patients to hundreds of patients, and it can be real time. Really, within a second you can have your result back.
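A hedged sketch of what launching such a GPU training job could look like with the SageMaker Python SDK; the training script, IAM role, instance types, and S3 path are placeholders rather than RRI's actual values.

```python
# Hypothetical sketch of GPU training and real-time hosting on SageMaker.
# Script name, role ARN, instance types, and S3 prefix are illustrative assumptions.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train_cnn.py",              # hypothetical script containing the CNN training loop
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",           # a GPU instance: training drops from hours to minutes
    framework_version="2.4",
    py_version="py37",
)
estimator.fit({"training": "s3://rri-aneurysm-images/train/"})  # hypothetical S3 prefix

# Host the trained model as a real-time endpoint that answers within seconds.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```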
Dr. Maddux
So, you've begun capturing images at this point. Tell us a little bit about the stage of the process: how many images have you captured and analyzed so far, and what do you think the timeline is for getting the app ready to be piloted?
Dr. Zhang
So currently we have collected, I would say, about 700 images so far. I already sent these images to our vascular access experts, so they are doing the adjudication right now, and I'm waiting for that; then I can do the training. But we did some pilot testing before, using a much smaller patient population. At that time we collected videos, so we extracted frames from these videos, which helped us build a reasonably sized image data set, and I already did training using that one. Our pilot study works pretty well; the accuracy on the validation data set is about 80%, so it is very promising at the current time point. We are really looking forward to receiving the adjudications from Dr. Morris' team so I can work on this new data set, and hopefully we can refine our algorithm even further.
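The frame extraction she mentions can be done with standard tooling; here is a minimal sketch with OpenCV, where the paths and sampling rate are assumptions for illustration.

```python
# Hypothetical sketch of sampling frames from patient videos to build an image data set.
import os
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every n-th frame of a video as a JPEG; return the number of frames saved."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:                            # end of video
            break
        if index % every_n == 0:              # keep one frame per `every_n`
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.jpg"), frame)
            saved += 1
        index += 1
    capture.release()
    return saved
```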
Dr. Maddux
Great. When we think about smartphones and these sorts of daily applications that we use, there are various versions of smartphones, and I'm curious: is the image quality from older phones equivalent to that of the new smartphones, or would people have to get something they don't have today to actually use this?
Dr. Zhang
Yes, it's a very good question. Currently any phone or device is fine, because, you know, even on an older version of a phone the resolution is still pretty okay, and for this training process we don't really require high resolution; we actually standardize to a much lower resolution. One reason is that if you reduce the resolution, the computation is going to be much faster, and you also need much fewer resources, so it's easier to standardize the whole process, because in the future maybe we will do this not only in the U.S. but also in other countries. We really want to make this more robust, so it has fewer requirements on the hardware side. So just a regular resolution is okay for us.
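A small sketch of the standardization step described here, assuming a 224x224 target resolution (the actual target size is not stated in the episode):

```python
# Hypothetical sketch: images arrive from many devices at different sizes and are
# downsampled to one common, lower resolution before training. 224x224 is an assumption.
import numpy as np
from PIL import Image

def standardize_image(path: str, size=(224, 224)) -> np.ndarray:
    """Load an image from any device, downsample it, and scale pixels to [0, 1]."""
    image = Image.open(path).convert("RGB")      # normalize the channel layout
    image = image.resize(size, Image.BILINEAR)   # lower resolution: faster training, fewer resources
    return np.asarray(image, dtype=np.float32) / 255.0
```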
Dr. Maddux
So, as we think about moving forward with this image analysis that you have been spearheading, what are some of the other applications that you would see for the same kind of image analysis that you're currently using on aneurysms?
Dr. Zhang
Yes. So, there are actually a lot of ideas going on for what we could do with image analysis. For example, you may be aware, in Len's group they are currently trying to check for foot ulcers, because we do monthly visits with diabetic foot checks, so we could check the images and try to see if we could identify an ulcer much earlier than with current practice. So that's one possibility. The other possibility: you may know that at RRI we do a lot of research related to metabolomics. For metabolomics, you know, they may use metabolite expression levels, and you may see a lot of heat maps plotted for the patients' metabolomic profiles. So we're thinking, you know, we could try to build an algorithm to automatically process the metabolomic profile of each patient and try to see if we can cluster certain groups of patients or associate certain profiles with certain outcomes. And besides this, we also think about this a lot: maybe we could also utilize some additional sensors built into the iPhone, or any other device; maybe the sensors can make even more sense than, you know, the regular images. For example, they may be able to get the temperature from the skin or such things, and then we analyze those kinds of images. It's different, but it's still image processing. So we could do a lot more things.
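As one possible shape for the metabolomics idea, here is a hedged sketch that clusters patient profiles; the data is synthetic and the cluster count is invented purely for illustration.

```python
# Hypothetical sketch: treat each patient's metabolite profile (one row of a heat map)
# as a feature vector and cluster patients into groups. All values below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.normal(size=(50, 200))       # 50 patients x 200 metabolite levels (synthetic)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(profiles)       # one cluster assignment per patient
print(labels[:10])                          # group membership for the first 10 patients
```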
Dr. Maddux
I think these advanced analytic techniques and the artificial intelligence that you're using are quite fascinating. And again, in our organization, this is one of those things that we do not strictly to advance the understanding of how to use AI, but to create real tools for our clinicians, physicians, and nurses to use. So I thank you very much for this description of the work you're doing, and I'd ask if you have any other final comments or thoughts on how artificial intelligence might help our clinical care for patients at ÁñÁ«ÊÓƵ»ÆƬ Medical Care.
Dr. Zhang
Sure. Thank you. First I want to say thank you, and also thanks to all the people working on this project, because there are actually a lot of people involved in it, so I really want to thank them. Secondly, I want to say that, you know, artificial intelligence is really useful, and it really helps us see beyond what we can see with our naked eyes. There's a lot of potential in what we could do, especially with the current cloud platforms, which really enable us to do advanced analytics at a really large scale and at high speed, actually in real time. So there are a lot of possibilities here. Thank you.
Dr. Maddux
Great, thank you so much, Dr. Hanjie Zhang, for joining me today on our dialogue series, and Hanjie, good luck with the rest of the work on aneurysm identification.
Dr. Zhang
Thank you so much for having me here today. Thank you.