The 2018 PhysioNet Challenge published the source code of the algorithms its entrants devised to automatically detect sleep disturbances.
The number of times someone is aroused during the night by airway collapse from obstructive sleep apnea is not difficult to measure, but what about arousals that are not caused by a cessation of breathing?
Apneas are arguably the best understood cause of sleep disturbances, but they are far from the only reason someone may be aroused from sleep. Arousals during the night can also be spontaneous, and these spontaneous events are difficult to measure, potentially skewing data collected from sleep tests and limiting research. Automating the accurate detection of sleep arousals may allow larger-scale studies to be performed to better determine which future health risks are correlated with arousal frequency.
“Having an algorithm that can automatically identify these arousals would be of particular interest because it would help phenotype people in terms of sleep apnea as well as other neurological disorders,” says Gari Clifford, DPhil, interim chair and associate professor in the Department of Biomedical Informatics, Emory University School of Medicine.
This is why the 2018 PhysioNet Challenge, the event's 20th annual edition, invited scientists to try to solve the problem of automatically detecting disturbances in a patient's sleep. Every year, PhysioNet—a research resource for complex physiological signals managed by the MIT Laboratory for Computational Physiology—brings researchers together to look at important clinical questions. The 2018 challenge, titled "You Snooze, You Win," gave participants a collection of physiological signals recorded during sleep from subjects in labs at the Massachusetts General Hospital, including electroencephalogram (EEG), electrocardiogram (ECG), and electromyogram (EMG) recordings. The challenge entrants then developed algorithms to automatically identify arousal events.
Judges scored the scientists based on how well their algorithm’s results agreed with those of expert human annotators. Research developed by a team of scientists from the Montreal-based wearable technology company OMSignal won first prize.
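Agreement with human annotators can be quantified in several ways (the Challenge itself used a curve-based scoring metric over the full range of detection thresholds). As a minimal, purely illustrative sketch, sample-wise precision and recall between a binary algorithm output and expert labels could be computed like this; the function and the toy data below are not the Challenge's actual scoring code:

```python
def precision_recall(predicted, annotated):
    """Compare a binary arousal prediction against expert labels, sample by sample.

    Precision: of the samples the algorithm flagged, how many the expert also flagged.
    Recall: of the samples the expert flagged, how many the algorithm caught.
    """
    tp = sum(1 for p, a in zip(predicted, annotated) if p and a)
    fp = sum(1 for p, a in zip(predicted, annotated) if p and not a)
    fn = sum(1 for p, a in zip(predicted, annotated) if not p and a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: 1 marks samples inside an arousal event.
annotator = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
algorithm = [0, 1, 1, 1, 0, 0, 0, 1, 1, 0]
p, r = precision_recall(algorithm, annotator)  # both 0.8 on this toy data
```

A single precision/recall pair depends on the algorithm's detection threshold; scoring across all thresholds, as the Challenge did, rewards algorithms that rank arousal samples well rather than those tuned to one cutoff.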
The competition took place over a nine-month period, concluding in September at the Computing in Cardiology Conference, where participants gathered in the Netherlands to defend their work. The top three teams were awarded cash prizes of an undisclosed amount, says Clifford, an organizer of the challenge.
The winning team constructed and trained a dense recurrent convolutional neural network to detect arousals. The network is trained to detect sleep/wake intervals as well as arousals from apneas and hypopneas, explains researchers Bahar Pourbabaee, PhD, and Matthew Howe-Patterson.
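The team's actual network was far more elaborate, and their real code is published alongside the Challenge; but the two ingredients named above, dense connectivity (each layer sees the concatenated outputs of all earlier layers) and recurrence over time, can be sketched in a few dozen lines of NumPy. Everything here—the layer sizes, the random weights, the sigmoid readout—is illustrative and untrained, not the winning model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """'Same'-padded 1-D convolution with ReLU: x is (channels, time),
    kernels is (out_channels, in_channels, width)."""
    out_ch, in_ch, width = kernels.shape
    pad = width // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((out_ch, x.shape[1]))
    for o in range(out_ch):
        for t in range(x.shape[1]):
            out[o, t] = np.sum(kernels[o] * xp[:, t:t + width])
    return np.maximum(out, 0.0)

def dense_block(x, layers):
    """Dense connectivity: each layer's input is the concatenation of the
    original input and every earlier layer's output."""
    features = [x]
    for kernels in layers:
        inp = np.concatenate(features, axis=0)
        features.append(conv1d(inp, kernels))
    return np.concatenate(features, axis=0)

def simple_rnn(x, w_in, w_rec):
    """A plain tanh recurrence over the time axis, carrying temporal context
    forward from one sample to the next."""
    h = np.zeros(w_rec.shape[0])
    states = []
    for t in range(x.shape[1]):
        h = np.tanh(w_in @ x[:, t] + w_rec @ h)
        states.append(h)
    return np.stack(states, axis=1)

# Toy input: 3 "signal channels" (think EEG, ECG, EMG) over 50 time steps.
signals = rng.standard_normal((3, 50))
layer1 = rng.standard_normal((4, 3, 5)) * 0.1   # 4 filters over the 3 raw channels
layer2 = rng.standard_normal((4, 7, 5)) * 0.1   # sees 3 + 4 concatenated channels
feats = dense_block(signals, [layer1, layer2])  # shape (3 + 4 + 4, 50)
hidden = simple_rnn(feats,
                    rng.standard_normal((8, 11)) * 0.1,
                    rng.standard_normal((8, 8)) * 0.1)
scores = 1.0 / (1.0 + np.exp(-hidden.sum(axis=0)))  # per-sample arousal score in (0, 1)
```

The convolutional layers pick out local waveform features in each channel, the dense connections let later layers reuse earlier features directly, and the recurrent pass lets a decision at one instant depend on what came before it, which matters because an arousal is an event extended in time rather than a property of a single sample.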
“One of the main advantages of our work, which makes it stand out from the rest, is the use of state-of-the-art deep learning structures [that] were recently proposed for highly competitive object detection benchmark tasks,” says Pourbabaee, who now works for Sportlogiq, an artificial intelligence-powered sports analytics company based in Montreal.
“It was an amazing achievement and definitely an honor for our team to win such a tough competition with a significant margin from the other participants,” she says.
Other top prize winners included scientists from Harbin Institute of Technology, The University of Manchester, and Nox Research, the research branch of the Iceland-based sleep diagnostics company Nox Medical.
The majority of the teams that entered the competition uploaded open-source code to the judges as they worked on their projects. This enabled the administrators of the competition to run the source code and verify the answers that the scientists claimed to be able to generate, says Clifford, who previously helped found the Sleep & Circadian Neuroscience Institute at the University of Oxford.
Then the code was made public and published online. Instead of having to reconstruct what the researchers did from a description in an article, other data scientists can build directly on the published code, says Clifford.
“You can’t possibly describe a complex algorithm in a journal, there isn’t enough space. Things get left out. If you have source code, the source code is what it is. It predicts exactly what you claim it predicts,” says Clifford. “It’s not just a one-off competition, we want people to continue to use this benchmark data in the field.”
Lisa Spear is associate editor of Sleep Review.