Last month I wrote a post about what the Clock Drawing Test is, how it is scored, what the score means, and new research on a home-based version done on a touchscreen. It's a great post, and I'm not just saying that because I wrote it 🙂 It has already had 150 views, making it my second most popular post (after my post about Coloring as a Purposeful Activity).
Well, today another way that technology is improving the Clock Drawing Test came across my radar: researchers are developing a method that uses a digital pen to record the drawing. The earlier article I posted described using a tablet or touchscreen to analyze how the test is drawn in real time (it measures how long it takes between writing the numbers and placing the hands, records where they are drawn, and can even replay the drawing process so that doctors can look for further abnormalities). This new approach, using the digital pen, does essentially the same things. The pen has a small camera built into it and captures how long the subject takes between strokes and to complete the drawing, the pen's movements, and the drawing process as a whole. The pen, the Anoto Live Pen, is made by Anoto, a Swedish company.
The pen is a bit bulky, but still usable
The first (tablet and touchscreen) is from Georgia Tech and the second (camera pen) is from MIT. Both are great schools for engineering and computer science and both are working towards the same goal (a more thorough analysis of the clock drawing process and more objective scoring).
Numerous papers in the clinical literature describe a variety of manual scoring systems for the clock test (Manos and Wu, 1994; Royall et al., 1998; Shulman et al., 1993; Rouleau et al., 1992; Mendez et al., 1992; Borson et al., 2000; Libon et al., 1993; Sunderland et al., 1989), none of which used a machine learning approach to optimize for accuracy. There have also been a few attempts to create novel versions of the clock drawing test.
The closest work to ours (Kim et al, 2011a,b) builds a tablet-based clock drawing test that allows the collection of data along with some statistics about user behavior. However, that work focuses primarily on the user-interface aspects of the application, trying to ensure that it is usable by both subjects and clinicians, but not on automatically detecting cognitive conditions.
No work that we know of – and certainly none used in practice – has used state-of-the-art machine learning methods to create these systems or has reported levels of accuracy comparable to those obtained in this work. In addition, no work that we know of has aimed to understand the tradeoff between accuracy of prediction and interpretability for the clock drawing test.
Ok, so the major difference with the MIT camera pen is that it uses machine learning methods (my understanding is that the Georgia Tech researchers are also looking at prediction accuracy and how to interpret the test, but the MIT team is digging deeper into both).
One thing that is cool is that they are using IF-THEN rules to improve accuracy. IF-THEN rules are simple classification rules that map inputs to outputs. An easy way to understand them is with an everyday example, like the weather: IF it is raining, THEN I will take my umbrella. When these are applied in computer programs, combining multiple rules can give a lot of insight. For example, in the MIT research: IF a person takes longer than 2.3 seconds to start drawing the first hand on the clock after their last stroke AND at least one hand is missing, THEN there is an indication of a memory impairment disorder (with high confidence; there are other rules for vascular-related impairments and for Parkinson's disease).
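To make the idea concrete, here is a minimal sketch of that memory-impairment rule in Python. The feature names (`pre_hand_latency`, `hands_drawn`) are my own illustrative inventions, not the ones used in the MIT paper; only the 2.3-second threshold and the "hand missing" condition come from the rule described above.

```python
# A sketch of an IF-THEN classification rule, loosely modeled on the example
# in the text. Feature names are hypothetical; thresholds are from the post.

def memory_impairment_rule(features):
    """Return True if the rule fires, suggesting possible memory impairment.

    `features` is a dict with (hypothetical) keys:
      - "pre_hand_latency": seconds between the last stroke and starting
        to draw the first clock hand
      - "hands_drawn": number of clock hands the subject drew (0-2)
    """
    # IF long hesitation before the first hand AND a hand is missing,
    # THEN flag possible memory impairment.
    if features["pre_hand_latency"] > 2.3 and features["hands_drawn"] < 2:
        return True
    return False

# A drawing with a 3.1-second hesitation and only one hand drawn:
print(memory_impairment_rule({"pre_hand_latency": 3.1, "hands_drawn": 1}))  # True
```

A real system would combine many such rules, each with an associated confidence, rather than relying on one.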
And here is their article from MIT News:
By Adam Conner-Simons | CSAIL on August 13, 2015
For all of the advances in medical technology, many of the world’s most widely-used diagnostic tools essentially involve just two things: pen and paper.
Tests such as the Montreal Cognitive Assessment (MoCA) and the Clock Drawing Test (CDT) are used to detect cognitive change arising from a wide range of causes, from strokes and concussions to dementias such as Alzheimer’s disease.
What’s disconcerting, though, is that, with dementia and other disorders growing in prevalence, most current diagnostic methods detect cognitive impairment only after it starts affecting people’s lives. In Alzheimer’s, for example, changes in the brain may occur 10 or more years before the cognitive change becomes noticeable, and no easily administered test can detect these changes at the very earliest stage.
At least, not yet.
This month researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) were part of a team that published a paper demonstrating a predictive model that, coupled with existing hardware, opens up the possibility of detecting disorders such as dementia earlier than ever before.
For several decades, doctors have screened for conditions including Parkinson’s and Alzheimer’s with the CDT, which asks subjects to draw an analog clock-face showing a specified time, and to copy a pre-drawn clock.
But the test has limitations, because its benchmarks rely on doctors’ subjective judgments, such as determining whether a clock circle has “only minor distortion.”
CSAIL researchers were particularly struck by the fact that CDT analysis was typically based on the person’s final drawing rather than on the process as a whole.
Enter the Anoto Live Pen, a digitizing ballpoint pen that measures its position on the paper upwards of 80 times a second, using a camera built into the pen. The pen provides data that are far more precise than can be measured on an ordinary drawing, and captures timing information that allows the system to analyze each and every one of a subject’s movements and hesitations.
Research at Lahey Hospital and Medical Center and CSAIL produced novel software for analyzing this version of the test, producing what the team calls the digital Clock Drawing Test (dCDT).
Predictive power of drawings
Working with a collection of 2,600 tests administered over the past nine years, the team developed computational models that show early promise in being able to better detect whether someone has a cognitive impairment, and even determine precisely what they may have.
They tested their models against standard methods used by physicians and found that the machine learning models were significantly more accurate.
“We’ve improved the analysis so that it is automated and objective,” says CSAIL principal investigator Cynthia Rudin, a professor at the Sloan School of Management and co-author of the paper. “With the right equipment, you can get results wherever you want, quickly, and with higher accuracy.”
Some of the machine learning techniques they used were designed to produce “transparent” classifiers, which provide insights into what factors are important for screening and diagnosis.
“These examples help calibrate the predictive power of each part of the drawing,” says first author William Souillard-Mandar, a graduate student at CSAIL. “They allow us to extract thousands of features from the drawing process that give hints about the subject’s cognitive state, and our algorithms help determine which ones can make the most accurate prediction.”
Souillard-Mandar and Rudin co-wrote the paper with MIT Professor Randall Davis and researchers Dana Penney of Lahey Hospital, Rhoda Au of Boston University, David Libon of Drexel University, Catherine Price of the University of Florida, Melissa Lamar of the University of Illinois Chicago, and Rod Swenson of the University of North Dakota Medical School.
Different disorders reveal themselves in different ways on the CDT, which asks people to draw a clock showing 10 minutes after 11, and then asks them to copy a pre-drawn clock showing that time.
For example, while healthy adults spend more time on the dCDT thinking (with the pen off the paper) than “inking,” memory-impaired subjects spend even more time than that thinking rather than inking. Parkinson’s subjects, meanwhile, took longer to draw clocks that tended to be smaller, suggesting that they are working harder, but producing less — an insight not detectable with previous analysis systems.
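The thinking-versus-inking measure described above is simple to compute once you have timestamped pen strokes. Here is a rough sketch, assuming a hypothetical stroke format of `(start_time, end_time)` pairs for each pen-down interval; the actual dCDT data format is not described in the article.

```python
# Sketch: split total test time into "inking" (pen on paper) and "thinking"
# (pen lifted) time, given hypothetical timestamped pen-down intervals.

def think_ink_times(strokes, total_duration):
    """Return (think_time, ink_time) in seconds.

    `strokes`: list of (start_time, end_time) pairs, pen on paper.
    `total_duration`: total time taken to complete the drawing.
    """
    ink = sum(end - start for start, end in strokes)  # pen-down time
    think = total_duration - ink                      # pen-up time
    return think, ink

# Three pen-down intervals within an 8-second drawing:
strokes = [(0.0, 1.2), (2.5, 3.0), (4.8, 6.1)]
think, ink = think_ink_times(strokes, total_duration=8.0)
print(round(think, 2), round(ink, 2))  # 5.0 3.0
```

A high think-to-ink ratio would, per the article, be one signal among many that feeds the classifier.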
Beyond the significant potential to improve people’s health are the work’s implications for automating a tedious scoring process.
“Neurologists see dozens of patients every day, and so the amount of time they spend sifting through databases and hand-coding their observations adds up very quickly,” says Phil Cohen, a vice president at VoiceBox Technologies who has done extensive research involving digital-pen technologies. “The work is still in a relatively early state, but this has the potential to not just better detect disease, but save clinicians a lot of time.”
Now that the team has proven the dCDT’s effectiveness, they are working to develop an interface that would allow neurologists and non-specialists alike to more easily use the technology in hospitals.
“We’re eager to see how well our model will work with other screening tools we’re developing,” Davis says. “As researchers, we’re just beginning to investigate all of the ways that your subtle behaviors reveal things about your brain.”