Q. Is EyesOnALZ your first collaboration on research into human disease?
A. Not exactly. It was standing room only when I arrived to hear James Watson speak at the 2003 NIH symposium celebrating the 50th anniversary of the discovery of DNA. A stranger flagged me down and offered the seat next to his. This Samaritan turned out to be Javed Khan, head of Oncogenomics at the National Cancer Institute. After hearing about his work using artificial neural networks to predict pediatric cancer outcomes (which helps calibrate treatment selection), I spent the next year helping him refine and bring that capability online so that other researchers could upload their own DNA microarray data to the prognostic system.
Q. How did you get the idea to “crowd source” the job of identifying stalled capillaries? Is it a task that computers can’t do (ie, it can’t be automated)?
A. Once I learned about Chris Schaffer's recent discoveries regarding the role of brain blood flow in Alzheimer's disease, and his lab's preliminary results involving symptom reversal in animals, I got very excited. But when he explained that analyzing the data from each experiment takes about a year, I realized that a treatment based on this work could take decades. So my first question was "have you tried using a computer to do the analysis?"
See 2017-20 ADR grant to Chris Schaffer, PhD, “Improving Brain Blood Flow in AD to Improve Cognitive Function.”
It turns out they had tried, not once but many times, always using the latest methods as they emerged from the computer vision community. Unfortunately, the best machine-based approach was only 85% accurate, when they needed more than 99%. So I asked him to show me the actual analytic task that was taking so long in the laboratory. Once he did, I realized that because of the complexity of the images being analyzed, this was something that could not be fully automated in the foreseeable future. But some tasks that are difficult for machines are much easier for humans, and I suddenly realized that this was very similar to a task that was crowdsourced successfully in UC Berkeley's Stardust@home project. The next step was to get in touch with the Stardust@home project leader, Andrew Westphal, to see what we could learn. Andrew, who lost a close relative to Alzheimer's disease, agreed that there was a high likelihood of success in adapting their methods to the Alzheimer's analysis. Moreover, he offered to personally help.
Q. How difficult is it to get a crowd-sourced research project off the ground, and how does this project rank in complexity compared with, say, astrophysics and other crowd-based research efforts?
A. For EyesOnALZ to be successful, we have to meet the stringent accuracy requirements of biomedical research, motivate participation through compelling game play, understand the range of relevant human abilities and how they might fluctuate from one moment to the next, and develop and validate methods that combine many individual answers into one expert-like crowd answer. I am not an astrophysicist, so it is difficult to compare, but building reliable information processing systems out of humans is definitely "its own kind of crazy."
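The interview does not describe the actual aggregation method EyesOnALZ uses, but the simplest version of combining many individual answers into one crowd answer is a thresholded majority vote. The sketch below is purely illustrative; the function name, labels, and threshold are assumptions, not part of the project's real pipeline:

```python
from collections import Counter

def crowd_answer(votes, threshold=0.5):
    """Combine individual judgments (e.g., "stalled" vs. "flowing")
    into one crowd answer by majority vote.

    votes: list of labels submitted by individual participants.
    threshold: fraction of votes the leading label must reach
               before the crowd answer is accepted.
    Returns the majority label, or None if no label clears the threshold
    (i.e., the crowd is too divided to give a confident answer).
    """
    if not votes:
        return None
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label if n / len(votes) >= threshold else None
```

For example, `crowd_answer(["stalled"] * 7 + ["flowing"] * 3)` returns `"stalled"`, while a closely split vote under a stricter threshold returns `None`, flagging the image for more annotations. Real validated crowd-consensus methods typically go further, weighting each participant's vote by their estimated skill.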