Visualizing the medical data explosion - Anders Ynnerman
[Music] I will start by posing a little bit of a challenge: the challenge of dealing with data, the data that we have to deal with in medical situations. It's really a huge challenge for us. And this is our beast of burden.
This is a computed tomography machine, a CT machine. It's a fantastic device. It uses X-rays, X-ray beams that rotate very fast around the human body. It takes about 30 seconds to go through the whole machine, and it generates enormous amounts of information as it comes out of the machine. So this is a fantastic machine that we can use for improving healthcare. But, as I said, it's also a challenge for us.
The challenge is really found in this picture here: it's the medical data explosion that we're having right now; we're facing this problem. But let me step back in time, just a few years, and see what happened back then. These machines started coming out in the 1970s. They would scan human bodies and generate about 100 images of the human body. I've taken the liberty, just for clarity, to translate that into data sizes: that would correspond to about 50 megabytes of data, which is small when you think about the data we can handle today just on normal mobile devices.
If you translate that to phone books, it's about one meter of phone books in a pile. Looking at what we're doing today with these machines, in just a few seconds we can get 24,000 images out of a body, which would correspond to about 20 gigabytes of data, or 800 phone books, and the pile would then be 200 meters of phone books.
What's about to happen, and we're seeing the beginning of this technology trend right now, is that we're starting to look at time-resolved situations as well, so we're getting the dynamics out of the body too. Just assume that we collect data during five seconds: that would correspond to about one terabyte of data. That's 800,000 phone books, 16 kilometers of phone books. That's one patient, one data set, and this is what we have to deal with.
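(To make the scale concrete, here is a minimal back-of-the-envelope sketch in Python, using the talk's own anchor of roughly 50 megabytes per meter of phone books; the later figures in the talk are rounded, so a strict scaling will not match them exactly.)

```python
# Back-of-the-envelope check of the phone-book comparison, using the
# talk's anchor of roughly 50 MB per meter of phone books. The sizes
# below are the rounded totals quoted in the talk.
MB_PER_METER = 50.0

datasets = {
    "1970s CT, 100 images":       50,         # megabytes
    "modern CT, 24,000 images":   20_000,     # ~20 gigabytes
    "time-resolved, 5 s of data": 1_000_000,  # ~1 terabyte
}

for name, size_mb in datasets.items():
    print(f"{name}: {size_mb / MB_PER_METER:,.0f} m of phone books")
```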
So this is really the enormous challenge that we have already today. This is 25,000 images. Imagine the days when we had radiologists doing this. They would put up 25,000 images and go like this: "25,000... okay, okay... there, there's the problem." They can't do that anymore; that's impossible. So we have to do something that's a little bit more intelligent than that.
So what we do is put all these slices together. Imagine that you slice your body in all these directions, and then you try to put the slices back together again into a pile of data, into a block of data. So this is really what we're doing: this gigabyte or terabyte of data, we're putting it into this block. But of course, the block of data just contains the amount of X-ray that's been absorbed at each point in the human body.
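(A minimal sketch of that "block of data" idea in Python, assuming each slice has been saved as a 512 × 512 array; the file names are hypothetical placeholders, not anything from the talk.)

```python
import numpy as np

# Stack 2D CT slices into one 3D volume. Each voxel then holds the
# X-ray attenuation measured at that point in the body.
n_slices = 100
slices = [np.load(f"slice_{i:04d}.npy") for i in range(n_slices)]  # hypothetical files

volume = np.stack(slices, axis=0)   # shape: (100, 512, 512)
print(volume.shape, volume.dtype)
```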
So what we need to do is figure out a way of looking at the things we do want to look at, and making the things we don't want to look at transparent: transforming the data set into something that looks like this. And this is a huge challenge for us to do.
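(The "functions" mentioned throughout this talk are what the volume rendering literature calls transfer functions. A minimal sketch, with illustrative thresholds only: map each voxel's CT value, in Hounsfield units, to a color and an opacity, so that the tissue we don't care about becomes transparent.)

```python
import numpy as np

# Map each voxel's Hounsfield value to color and opacity (RGBA).
def transfer_function(hu):
    rgba = np.zeros(hu.shape + (4,), dtype=np.float32)
    soft = (hu > -100) & (hu <= 300)      # roughly soft tissue
    bone = hu > 300                       # roughly bone and denser
    rgba[soft] = (1.0, 0.6, 0.5, 0.05)    # skin tone, nearly transparent
    rgba[bone] = (1.0, 1.0, 0.9, 1.0)     # opaque off-white
    return rgba                           # air (hu <= -100) stays invisible
```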
Using computers, even though they're getting faster and better all the time, it's a challenge to deal with gigabytes of data, terabytes of data, and to extract the relevant information: I want to look at the heart, I want to look at the blood vessels, I want to look at the liver, maybe even find the tumor in some cases. Okay, so this is where this little dude comes into play. This is my daughter, as of 9:00 a.m. this morning, playing a computer game. She's only two years old, and she's having a blast.
So she's really the driving force behind the development of graphics processing units. As long as kids are playing computer games, graphics is getting better and better and better. So please go back home and tell your kids to play more games, because that's what I need. What's inside this machine is what enables me to do the things I'm doing with the medical data.
So really what I'm doing is using these fantastic little devices. And you know, going back maybe 10 years in time, when I got the funding to buy my first graphics computer, it was a huge machine. It was cabinets of processors and storage and everything. I paid about $1 million for that machine. That machine is today about as fast as my iPhone.
Every month there are new graphics cards coming out, and here are a few of the latest ones from the vendors: NVIDIA, ATI; Intel is out there as well. For a few hundred bucks you can get these things, put them into your computer, and do fantastic things with these graphics cards.
So this is really what's enabling us to deal with the explosion of data in medicine, together with some really nifty work on algorithms, compressing data, extracting the relevant information, that people are doing research on. So I'm going to show you a few examples of what we can do.
This is a data set that was captured using a CT scanner. You can see that this is a full data set; it's a woman. You can see the hair; you can see the individual structures of the woman. You can see that there is scattering of X-rays on the teeth, the metal in the teeth; that's where those artifacts are coming from.
But fully interactively, on standard graphics cards on a normal computer, I can just put in a clip plane. And of course, all the data is inside, so I can start rotating, I can look at it from different angles, and I can see that this woman had a problem. She had a bleeding in the brain, and that's been fixed with a little stent, a metal clamp that's tightening up the vessel. And just by changing the functions, I can decide what's going to be transparent and what's going to be visible. I can look at the skull structure, and I can see that, okay, this is where they opened up the skull on this woman, and that's where they went in.
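(A minimal sketch of the clip-plane test, with an arbitrary example plane: a ray caster evaluates a test like this per sample and skips the clipped ones, which is cheap enough to stay fully interactive on a GPU.)

```python
import numpy as np

# Discard every sample on one side of a user-positioned plane so the
# interior of the volume becomes visible. Plane parameters are examples.
plane_point = np.array([256.0, 256.0, 50.0])
plane_normal = np.array([0.0, 0.0, 1.0])

def clipped(sample_pos):
    """True when a sample lies on the hidden side of the clip plane."""
    return float((sample_pos - plane_point) @ plane_normal) < 0.0
```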
So these are fantastic images. They're really high resolution, and they really show what we can do with standard graphics cards today. Now, we have really made use of this, and we have tried to squeeze a lot of data into the system. One of the applications that we've been working on, and this has gotten a little bit of traction worldwide, is the application of virtual autopsies.
So again, we're looking at very, very large data sets, and you saw those full-body scans that we can do. We just push the body through the whole CT scanner, and in just a few seconds we can get a full-body data set. So this is from a virtual autopsy, and you can see how I'm gradually peeling off layers. First you saw the body bag that the body came in; then I'm peeling off the skin, you can see the muscles, and eventually you can see the bone structure of this woman.
Now, at this point, I would also like to emphasize that it is with the greatest respect for the people that I'm now going to show you, the people that have died under violent circumstances, that I'm showing these pictures to you.
In the forensic case, there have been approximately 400 cases so far, just in the part of Sweden that I come from, that have undergone virtual autopsies in the past four years. So the typical workflow situation would be this: when there's a case coming in during the evening, the police decide, okay, this is a case where we need to do an autopsy.
So in the morning, between 6:00 and 7:00, the body is transported inside the body bag to our center and scanned through one of the CT scanners. Then the radiologist, together with the pathologist and sometimes the forensic scientist, looks at the data that comes out, and they have a joint session. And then they decide what to do in the real physical autopsy after that.
Now, looking at a few cases: here's one of the first cases that we had. You can really see the details of the data set; it's very high resolution, and it's our algorithms that allow us to zoom in on all the details. And again, it's fully interactive, so you can rotate and look at things in real time on these systems. Without saying too much about this case, this is a traffic accident; a drunk driver hit a woman.
And it's very, very easy to see the damage to the bone structure. The cause of death is the broken neck, and this woman also ended up under the car, so she's quite badly beaten up by this injury. Here's another case, a knifing, and this is also showing us what we can do: it's very easy to look at metal artifacts that we can show inside the body.
You can also see some of the artifacts from the teeth; that's actually the filling in the teeth, which shows up because I've set the functions to show me metal and make everything else transparent. Here's another violent case. This really didn't kill the person; the person was killed by stabs to the heart. They just deposited the knife by putting it through one of the eyeballs.
Here's another case. It's very, very interesting for us to be able to look at things like knife stabbings. Here you can see that the knife went through the heart. It's very easy to see how air has been leaking from one part to another, which is difficult to see in a normal, standard physical autopsy. So it really, really helps the criminal investigation to establish the cause of death, and in some cases it also directs the investigation in the right direction, to find out who the killer really was.
Here's another case that I think is interesting. Here you can see a bullet that has lodged just next to the spine on this person. What we've done is turn the bullet into a light source, so the bullet is actually shining, and that makes it really easy to find these fragments during a physical autopsy. If you have to dig through the body to find these fragments, that's actually quite hard to do.
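(A sketch of how "turning the bullet into a light source" can work in a transfer function: give metal-density voxels an emissive color so they glow through the surrounding tissue. The threshold is an assumption; metal saturates CT values far above bone.)

```python
import numpy as np

METAL_HU = 3000.0   # assumed cutoff; metal reads far above bone density

def emission(hu):
    """Emissive color per voxel: metal fragments glow, everything else doesn't."""
    glow = np.zeros(hu.shape + (3,), dtype=np.float32)
    glow[hu > METAL_HU] = (1.0, 0.9, 0.6)   # warm glow for metal voxels
    return glow

# During ray casting this emission is added at each sample, so fragments
# stay visible even behind semi-transparent tissue.
```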
One of the things that I'm really, really happy to be able to show you here today is our virtual autopsy table. It's a touch device that we have developed based on these algorithms, using standard GPUs. It actually looks like this. Just to give you a feeling for what it looks like: it really just works like a huge iPhone.
So we've implemented all the gestures that you can do on the table, and you can think of it as an enormous touch interface. So if you were thinking of buying an iPad, forget about it; this is what you want instead. Steve, I hope you're listening to this. All right.
Okay, so it's a very nice little device. So if you have the opportunity, please try it out; it's really a hands-on experience. It has gained some traction, and we're trying to roll it out, trying to use it for educational purposes, but also, perhaps in the future, in a more clinical situation.
There's a YouTube video that you can download and look at, if you want to convey information about virtual autopsies to other people. Okay, now that we're talking about touch, let me move on to really touching data. And this is a bit of science fiction now; we're moving into the future. This is not really what medical doctors are using right now, but I hope they will in the future.
So what you're seeing on the left is a touch device. It's a little mechanical pen that has very, very fast step motors inside, so I can generate force feedback: when I virtually touch data, it generates forces in the pen, and I get feedback. In this particular situation, it's a scan of a living person. I have this pen, I look at the data, and I move the pen towards the head, and all of a sudden I feel resistance. So I can feel the skin. If I push a little bit harder, I'll go through the skin, and I can feel the bone structure inside. If I push even harder, I'll go through the bone structure, especially close to the ear, where the bone is very soft. And then I can feel the brain inside, and it's a bit slushy, like this. So this is really nice. And to take that even further, this is a heart. And this is also thanks to these fantastic new scanners that, in just 0.3 seconds, can scan the whole heart, and can do that with time resolution.
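(A minimal sketch of penalty-based force feedback, which is one standard way to drive a haptic pen like this; the stiffness values are rough illustrations, not the group's actual code. Haptic control loops run at roughly 1 kHz, far faster than rendering, so the per-update computation has to stay this simple.)

```python
import numpy as np

STIFFNESS = {"skin": 400.0, "bone": 4000.0, "brain": 50.0}   # N/m, illustrative

def feedback_force(tip_pos, surface_pos, surface_normal, tissue):
    """Spring force pushing the pen tip back out of the tissue surface."""
    penetration = float((surface_pos - tip_pos) @ surface_normal)
    if penetration <= 0.0:
        return np.zeros(3)                  # pen is outside: no resistance
    return STIFFNESS[tissue] * penetration * surface_normal

# Releasing the force once penetration passes a per-tissue threshold is
# one way to get the "pop through the skin, then the soft bone" feel.
```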
So, just looking at this heart, I can play back a video here. And this is Karljohan, one of my graduate students who's been working on this project. He's sitting there in front of the haptic device, the force feedback system, and he's moving his pen towards the heart, and the heart is now beating in front of him.
So he can see how the heart is beating. He takes the pen, moves it towards the heart, and then he feels the heartbeats from the real living patient, and he can examine how the heart is moving. He can go inside, push on the inside of the heart, and really feel how the valves are moving. And this, I think, is really the future for heart surgeons. I mean, it's probably the wet dream for a heart surgeon: to be able to go inside of the patient's heart before you actually do surgery, and to do that with high-quality resolution data.
So this is really neat. Okay, now we're going even further into science fiction, and we heard a little bit about functional MRI. Now, this is a really interesting project. MRI uses magnetic fields and radio frequencies to scan the brain, or any part of the body. So what we're really getting out of this is information about the structure of the brain. But we can also measure the difference in magnetic properties between blood that's oxygenated and blood that's depleted of oxygen.
That means that it's possible to map out the activity of the brain. So this is something that we've been working on. And you just saw Matz, the research engineer there, going into the MRI system. He was wearing goggles, so he could actually see things in the goggles, and I could present things to him while he was in the scanner. And this is a little bit freaky, because what Matz is seeing is actually this.
He's seeing his own brain. So Matz is doing something here. Probably he's going like this with his right hand, because the left side is activated in the motor cortex. And he can see that at the same time. These visualizations are brand new; this is something that we've been researching for a little while.
This is another sequence of Matz's brain, and here we asked Matz to count backwards from 100. So he's going "100, 97, 94...," counting backwards, and you can see how the little math processor is working up here in his brain and is lighting up the whole brain. This is fantastic. We can do this in real time. We can investigate things; we can tell him to do things. You can also see that his visual cortex is activated at the back of the head, because that's where he's seeing; he's seeing his own brain. And he's also listening to our instructions when we tell him to do things.
The signal is really deep inside of the brain as well, and it's shining through, because all of the data is inside this volume. And in just a second here you will see: okay, "Matz, now move your left foot." So he's going like this; for 20 seconds he's going like that, and all of a sudden it lights up up here. So we get motor cortex activation up there.
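(A sketch of how activation like this can be pulled out of the fMRI stream: correlate each voxel's BOLD time series with the on/off task design, for example 20 seconds of foot movement against rest. The array shapes are placeholders, and real pipelines also model the hemodynamic lag.)

```python
import numpy as np

TR = 2.0                                  # seconds per acquired fMRI volume
block = int(20 / TR)                      # 20 s task, 20 s rest
design = np.tile([1.0] * block + [0.0] * block, 5)   # 5 task/rest cycles

bold = np.random.rand(64, 64, 30, design.size)       # placeholder 4D data

# Correlate every voxel's time series with the task regressor.
ts = bold.reshape(-1, design.size)
ts -= ts.mean(axis=1, keepdims=True)
d = design - design.mean()
corr = (ts @ d) / (np.linalg.norm(ts, axis=1) * np.linalg.norm(d) + 1e-9)
activation = corr.reshape(64, 64, 30)     # high values = task-driven voxels
```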
So this is really, really nice, and I think this is a great tool. Connecting to the previous talk here, this is something that we could use as a tool to really understand how the neurons work, how the brain works, and we can do this with very, very high visual quality and very fast resolution.
Now, we're also having a bit of fun at the center. So this is a CAT scan, a computerized tomography scan. This is a lion from the local zoo outside of Norrköping, in Kolmården: Elsa. So she came to the center, and they sedated her and put her straight into the scanner. And of course, I get the whole data set from the lion, and I can make very nice images like this. I can peel off the layers of the lion, I can look inside it. And we've been experimenting with this, and I think this is a great application for the future of this technology, because there's very little known about animal anatomy.
What's known to veterinarians is kind of basic information. We can scan all sorts of things, all sorts of animals. The only problem is fitting them into the machine. So here's a bear. It was kind of hard to get it in. And the bear is a cuddly, friendly animal. And here it is; here's the nose of the bear. And you might want to cuddle this one, until you change the functions and look at this.
So, beware of the bear! All right. So with that, I'd like to thank all the people who have helped me generate these images. It's a huge effort that goes into doing this: gathering the data, developing the algorithms, writing all the software. So, some very talented people. My motto is always: I only hire people that are smarter than I am, and most of these are smarter than I am. So, thank you very much.