A comparison of machine learning algorithms for the surveillance of autism spectrum disorder
Authors:
Scott H. Lee; Matthew J. Maenner; Charles M. Heilig
Affiliations:
Centers for Disease Control and Prevention, Atlanta, GA, United States of America
Published in:
PLoS ONE 14(9)
Category:
Research Article
doi:
https://doi.org/10.1371/journal.pone.0222907
Abstract
Objective
The Centers for Disease Control and Prevention (CDC) coordinates a labor-intensive process to measure the prevalence of autism spectrum disorder (ASD) among children in the United States. Random forest methods have shown promise in speeding up this process, but they lag behind human classification accuracy by about 5%. We explore whether more recently available document classification algorithms can close this gap.
Materials and methods
Using data gathered from a single surveillance site, we applied 8 supervised learning algorithms to predict whether children meet the case definition for ASD based solely on the words in their evaluations. We compared the algorithms’ performance across 10 random train-test splits of the data, using classification accuracy, F1 score, and number of positive calls to evaluate their potential use for surveillance.
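As a rough illustration of this kind of comparison, the sketch below scores two example classifiers on repeated random train-test splits of bag-of-words features, tracking accuracy, F1 score, and the number of positive calls. It assumes a scikit-learn workflow; the file name, column names, and the two illustrative models are hypothetical and do not reproduce the authors' exact pipeline or full set of eight algorithms.

```python
# Minimal sketch (not the authors' pipeline): score text classifiers on
# repeated random train-test splits using accuracy, F1, and positive calls.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, f1_score

records = pd.read_csv("evaluations.csv")               # hypothetical file
texts, labels = records["text"], records["asd_case"]   # hypothetical columns; labels are 0/1

models = {
    "random_forest": lambda: RandomForestClassifier(n_estimators=500),
    "linear_svm": lambda: LinearSVC(C=1.0),
}

results = []
for seed in range(10):                                 # 10 random train-test cycles
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.3, random_state=seed, stratify=labels)
    vec = CountVectorizer(ngram_range=(1, 2), min_df=2)
    Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)
    for name, make_model in models.items():
        clf = make_model().fit(Xtr, y_train)
        pred = clf.predict(Xte)
        results.append({"model": name, "split": seed,
                        "accuracy": accuracy_score(y_test, pred),
                        "f1": f1_score(y_test, pred),
                        "positive_calls": int(pred.sum())})  # proxy for predicted prevalence

print(pd.DataFrame(results).groupby("model")[["accuracy", "f1"]].mean())
```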
Results
Across the 10 train-test cycles, the random forest and support vector machine with Naive Bayes features (NB-SVM) each achieved slightly more than 87% mean accuracy. The NB-SVM produced significantly more false negatives than false positives (P = 0.027), but the random forest did not, making its prevalence estimates very close to the true prevalence in the data. The best-performing neural network performed similarly to the random forest on both measures.
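The NB-SVM referenced above follows Wang and Manning's formulation: binarized term counts are rescaled by Naive Bayes log-count ratios and the rescaled features are passed to a linear SVM. A minimal sketch of that transform is shown below, assuming a scipy sparse document-term matrix X and a 0/1 NumPy label array y; the function names are illustrative, not the authors' implementation.

```python
# Sketch of the NB-SVM feature transform (Wang & Manning, 2012):
# rescale binarized term counts by Naive Bayes log-count ratios, then
# fit a linear SVM on the rescaled features. Illustrative only.
import numpy as np
from sklearn.svm import LinearSVC

def binarize(X):
    # X: scipy sparse document-term count matrix -> 0/1 indicators
    return (X > 0).astype(np.float64)

def nb_log_count_ratio(Xb, y, alpha=1.0):
    # y: NumPy array of 0/1 case labels; alpha: smoothing constant
    p = alpha + np.asarray(Xb[y == 1].sum(axis=0)).ravel()   # positive-class term counts
    q = alpha + np.asarray(Xb[y == 0].sum(axis=0)).ravel()   # negative-class term counts
    return np.log((p / p.sum()) / (q / q.sum()))             # log-count ratio r

def nbsvm_fit(X_train, y_train, C=1.0):
    Xb = binarize(X_train)
    r = nb_log_count_ratio(Xb, y_train)
    clf = LinearSVC(C=C).fit(Xb.multiply(r), y_train)        # elementwise rescaling by r
    return clf, r

def nbsvm_predict(clf, r, X_test):
    return clf.predict(binarize(X_test).multiply(r))
```

Wang and Manning's full NBSVM additionally interpolates the SVM weights with the plain Naive Bayes solution; the sketch omits that step for brevity.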
Discussion
The random forest performed as well as more recently available models like the NB-SVM and the neural network, and it also produced good prevalence estimates. The NB-SVM may not be a good candidate for a fully automated surveillance workflow because of its higher rate of false negatives. More sophisticated algorithms, like hierarchical convolutional neural networks, may not be feasible to train given the characteristics of the data. Current algorithms might perform better if the data were abstracted and processed differently and if the models incorporated information about the children beyond the text of their evaluations.
Conclusion
Deep learning models performed similarly to traditional machine learning methods at predicting the clinician-assigned case status for CDC’s autism surveillance system. While deep learning methods had limited benefit in this task, they may have applications in other surveillance systems.
Keywords:
Algorithms – Autism – Autism spectrum disorder – Disease surveillance – Children – Machine learning algorithms – Neural networks – Support vector machines