By Marcello Cherchi, MD PhD

Artificial intelligence (AI)

Credit is usually given to John McCarthy (1927 – 2011) for having coined the term “artificial intelligence” (AI) in a 1955 proposal, written with Marvin Minsky, Nathaniel Rochester and Claude Shannon, for a research workshop held at Dartmouth College the following summer (McCarthy et al. 1955).

Figure: John McCarthy (1927 – 2011) is usually credited with having coined the phrase “artificial intelligence.” From https://www.nytimes.com/2011/10/26/science/26mccarthy.html

The definition of artificial intelligence (AI) is debated, but generally taken to refer to the ability of computers to perform tasks that normally require human cognition. Artificial intelligence can be classified in several ways.

A common classification of AI is by its level of capability which, in ascending order, is:

  • Artificial narrow intelligence (ANI), also called “weak AI” or machine learning, specializes in one area and solves one kind of problem.
  • Artificial general intelligence (AGI), also called “strong AI” or machine intelligence, refers to a computer program whose problem-solving abilities are equivalent to those of a human (Goertzel 2014; Yamakawa 2021). This probably does not yet exist, though some commentators submit that the advent of platforms for large language models such as ChatGPT (by OpenAI), LLaMA (by Meta), Bard (by Google), Claude (by Anthropic) and others shows that it is “already here” (Aguera y Arcas and Norvig 2023).
  • Artificial superintelligence (ASI), also called machine consciousness, refers to a computer that is self-aware (sentient) and whose capabilities surpass those of a human (Katritsis 2021). This does not yet exist.

Another classification of AI is by its level of functionality which, in ascending order, is:

  • Reactive machines. This is the most basic type of AI: given a particular input, it will always respond with the same specific output. This AI does not change after it has been trained; in other words, it does not “learn” from experience and cannot adapt or iteratively improve. A real-world example of a reactive machine is IBM’s chess-playing system Deep Blue.
  • Limited memory machines. This type of AI retains information to which it is exposed. Real-world examples include the systems of self-driving cars that take input from the environment and generate a reaction to that set of circumstances.
  • Theory of mind. This kind of AI can understand human emotions and beliefs and respond to situations like a person would — basically, it could pass the Turing test (Turing 1950). This kind of AI does not yet exist.
  • Self-aware machines. This kind of AI has human-like feelings, beliefs, awareness, and cognitive abilities superior to humans. This kind of AI does not yet exist.

Machine learning (ML)

Machine learning (ML) is a subtype of artificial intelligence (AI). According to the classifications mentioned earlier, the ML usually applied to medical diagnostic problems is an artificial narrow intelligence (“weak AI”) implemented as either a reactive machine or a limited memory machine.

Origins of ML

Arthur Lee Samuel (1901 – 1990) (McCarthy and Feigenbaum 1990; Weiss 1992) is credited with having defined machine learning (ML) as “the field of study that gives computers the ability to learn without being explicitly programmed,” though this quotation is not found in either of his initial articles on this topic (Samuel 1959, 1967).

Figure: Arthur Lee Samuel (1901 – 1990) was a pioneer in the field of machine learning. From McCarthy and Feigenbaum (1990).

Basic function of machine learning

In traditional computer programming, the computer takes known data as input, processes them with known (programmed) rules, and outputs (previously unknown) answers.

In machine learning, the computer takes known data and known answers as input, and outputs (previously unknown) rules that capture the relationship between the data and the answers.

The general schemata of “traditional programming” and “machine learning” are compared in the Figures below.

Figure: Schematic of traditional programming
Figure: Schematic of programming for machine learning
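The two schemata can be sketched in a few lines of code. In this toy illustration (the temperature-conversion rule and the data points are invented for the example), traditional programming applies a known rule to data, while machine learning recovers the rule from data and known answers:

```python
# Traditional programming: known data + known (programmed) rule -> answers
def rule_based_fahrenheit(celsius):
    return celsius * 9 / 5 + 32  # the rule is known and hand-written

# Machine learning: known data + known answers -> previously unknown rule
# Here we recover the same rule by fitting a line to (input, answer) pairs.
data = [0.0, 10.0, 20.0, 30.0, 40.0]
answers = [rule_based_fahrenheit(c) for c in data]  # the "known answers"

# Closed-form least-squares fit of: answer = w * data + b
n = len(data)
mean_x = sum(data) / n
mean_y = sum(answers) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(data, answers)) / \
    sum((x - mean_x) ** 2 for x in data)
b = mean_y - w * mean_x
# The learned "rule": w ≈ 1.8, b ≈ 32 — the relationship the programmer
# never wrote down explicitly for the learner.
```

On this exactly linear data the fit recovers the original rule; real ML problems differ only in that the relationship is more complex and only approximately recoverable.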

Thus, at a basic level, ML maps an input to an output. In engineering terms this is a system identification problem of determining a transfer function from input to output in which:

  • The input is one or more “features.” Features are “observable quantities that are input[ted] to a machine learning algorithm” (Bastanlar and Ozuysal 2014). Examples of such data in medicine include specific symptoms (e.g., “vomiting is present”), signs (e.g., “ataxia is present”) or specific results of a workup (e.g., “hemoglobin is 13.5”).
  • The output is a “target.” Examples of such targets in medicine include specific interpretations of findings (e.g., “nystagmus is up beat”) or specific diagnoses (e.g., “benign paroxysmal positional vertigo”).
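This feature-to-target mapping can be sketched minimally as follows (the feature values, weights, and decision threshold below are invented for illustration; a real model would learn its weights from training data):

```python
# A "feature vector" for one patient (hypothetical values, for illustration)
features = {"vomiting": 1, "ataxia": 0, "hemoglobin": 13.5}

# A stand-in for a trained model: a linear scorer with made-up weights.
# The mapping from features to target is what ML actually learns.
weights = {"vomiting": 0.8, "ataxia": 1.5, "hemoglobin": -0.05}
score = sum(weights[k] * v for k, v in features.items())

# The "target": here a binary interpretation of the score
target = "disorder present" if score > 0 else "disorder absent"
```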

Evaluating machine learning

The performance evaluation of an ML algorithm assesses the quality of this mapping from input to output, and can be measured in a variety of ways, including accuracy, precision, recall (sensitivity), specificity, F‑measure and the receiver operating characteristic (ROC) curve (Ahsan et al. 2022).
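Several of these metrics can be computed directly from a confusion matrix. A minimal sketch, with invented counts:

```python
# Confusion-matrix counts for a binary classifier (invented for illustration)
tp, fp, fn, tn = 40, 10, 5, 45

sensitivity = tp / (tp + fn)  # recall: true positive rate
specificity = tn / (tn + fp)  # true negative rate
precision = tp / (tp + fp)    # positive predictive value
# F-measure: harmonic mean of precision and recall
f_measure = 2 * precision * sensitivity / (precision + sensitivity)
```

An ROC curve generalizes this by plotting sensitivity against (1 − specificity) as the classifier's decision threshold is swept.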

Types of machine learning

There are many kinds of ML. The Figure below, from Ahsan and colleagues (Ahsan et al. 2022), offers a partial taxonomy of ML.

Figure: Partial taxonomy of machine learning.  From Ahsan et al. (2022).
Figure: Partial taxonomy of machine learning. From Ahsan et al. (2022).

The following outline of machine learning algorithms is adapted from https://www.geeksforgeeks.org/machine-learning/ (accessed 10/8/2023).

  1. Supervised learning. The model is trained on a labeled dataset (which has both input and output parameters).
    1. Types of supervised learning.
      1. Regression. Algorithm learns to predict a continuous output value.
        1. Linear regression. Predict a continuous output value based on input features.
        2. Logistic regression. Predict a binary output value based on input features.
        3. Polynomial regression
        4. Stepwise regression
        5. Decision tree regression
        6. Random forest regression
        7. Support vector regression
        8. Ridge regression
        9. Lasso regression
        10. ElasticNet regression
        11. Bayesian linear regression
      2. Classification. This is used when inputs are divided into two or more classes, and the model must assign new (unseen) inputs to one or more of these classes. Algorithm learns to predict a predefined categorical output variable or class label.
      3. Methods that can be used for regression or classification.
        1. Decision trees. Use a tree-like structure to model decisions and their possible consequences. Decision tree regression predicts the value associated with a leaf node; decision tree classification assigns the class label associated with a leaf node.
        2. Random forests. Made up of multiple decision trees. Random forest regression and random forest classification both improve accuracy and reduce overfitting.
        3. Support vector machine. Create a hyperplane to segregate n‑dimensional space into classes and identify the correct category of new data points. Support vector regression predicts continuous values. Support vector classification finds the best fit hyperplane that maximizes the margin between data points of different classes.
        4. K‑nearest neighbors (KNN). Finds k training examples closest to a given input and then predicts the class or value based on the majority class or average value of these neighbors. K‑nearest neighbors regression predicts continuous values by averaging the outputs of the k closest neighbors. K‑nearest neighbors classification classifies data points based on the majority class of their k closest neighbors.
        5. Gradient boosting. Combines weak learners (such as decision trees) and iteratively builds a strong model. Gradient boosting regression builds an ensemble of weak learners to improve prediction accuracy through iterative training. Gradient boosting classification creates a group of classifiers to continually enhance the accuracy of predictions through iterations.
    2. Applications of supervised learning.
      1. Image classification. Identify objects, faces, etc.
      2. Natural language processing. Extract information from text (e.g., sentiment, entities, relationships).
      3. Speech recognition. Convert spoken language into text.
      4. Recommendation systems. Make personalized recommendations to users.
      5. Predictive analytics. Predict outcomes, such as sales, stock prices, etc.
      6. Medical diagnosis. Detect diseases.
      7. Fraud detection. Identify fraudulent transactions.
      8. Autonomous vehicles. Recognize and respond to objects in the environment.
      9. Email spam detection. Classify emails as spam or not spam.
      10. Quality control in manufacturing. Inspect products for defects.
      11. Credit scoring. Estimate the risk of a borrower defaulting on a loan.
      12. Gaming. Recognize characters, analyze player behavior.
      13. Customer support. Automate customer support tasks.
      14. Weather forecasting. Make predictions about temperature, wind speed, precipitation, etc.
      15. Sports analytics. Analyze player performance, make game predictions, optimize strategies.
  2. Unsupervised learning. The model is trained on an unlabeled dataset. It analyzes and clusters unlabeled datasets using machine learning algorithms that find hidden patterns in data without any human intervention. The training model has only input parameter values; the algorithm discovers the groups or patterns on its own. Inputs can consist of unstructured or unlabeled data.
    1. Types of unsupervised learning.
      1. Clustering. This is applied to group data based on different patterns, such as similarities or differences, that the machine model finds. Unlike classification (in supervised machine learning), in clustering the groups are not known beforehand.
      2. Association. This is a rule-based technique that finds relationships between parameters of a large data set.
    2. Applications of unsupervised learning.
      1. Clustering. Group similar data points into clusters.
      2. Anomaly detection. Identify outliers or anomalies in data.
      3. Dimensionality reduction. Reduce the dimensionality of data while preserving its essential information.
      4. Recommendation systems. Suggest products, movies, or content to users based on their historical behavior or preferences.
      5. Topic modeling. Discover latent topics within a collection of documents.
      6. Density estimation. Estimate the probability density function of data.
      7. Image and video compression. Reduce the amount of storage required for multimedia content.
      8. Data preprocessing. Help with data preprocessing tasks such as data cleaning, imputation of missing values, and data scaling.
      9. Market basket analysis. Discover associations between products.
      10. Genomic data analysis. Identify patterns or group genes with similar expression profiles.
      11. Image segmentation. Segment images into meaningful regions.
      12. Community detection in social networks. Identify communities or groups of individuals with similar interest or connections.
      13. Customer behavior analysis. Uncover patterns and insights for better marketing and product recommendations.
      14. Content recommendation. Classify and tag content to make it easier to recommend similar items to users.
      15. Exploratory data analysis. Explore data and gain insights before defining specific tasks.
  3. Semi-supervised learning. Used when data is partly labeled and partly unlabeled. Often used in image data sets. Applications of semi-supervised learning include:
    1. Image classification and object recognition. Improve the accuracy of models by combining a small set of labeled images with a larger set of unlabeled images.
    2. Natural language processing. Enhance the performance of language models and classifiers by combining a small set of labeled text data with a vast amount of unlabeled text.
    3. Speech recognition. Improve the accuracy of speech recognition by leveraging a limited amount of transcribed speech data and a more extensive set of unlabeled data.
    4. Recommendation systems. Improve the accuracy of personalized recommendations by supplementing a sparse set of user-item interactions (labeled data) with a wealth of unlabeled user behavior data.
    5. Healthcare and medical imaging. Enhance medical image analysis by utilizing a small set of labeled medical images alongside a larger set of unlabeled images.
  4. Reinforcement learning (RL). This trains a model through trial and error. The model continually increases its performance using reward feedback to learn the behavior or pattern. Applications of reinforcement learning include:
    1. Game playing. RL can teach agents to play games, even complex ones.
    2. Robotics. RL can teach robots to perform tasks autonomously.
    3. Autonomous vehicles. RL can help self-driving cars navigate and make decisions.
    4. Recommendation systems. RL can enhance recommendation algorithms by learning user preferences.
    5. Healthcare. RL can be used to optimize treatment plans and drug discovery.
    6. Natural language processing. RL can be used in dialogue systems and chatbots.
    7. Finance and trading. RL can be used for algorithmic trading.
    8. Supply chain and inventory management. RL can be used to optimize supply chain operations.
    9. Energy management. RL can be used to optimize energy consumption.
    10. Game AI. RL can be used to create more intelligent and adaptive non-player characters in video games.
    11. Adaptive personal assistant. RL can be used to improve personal assistants.
    12. Virtual reality (VR) and augmented reality (AR). RL can be used to create immersive and interactive experiences.
    13. Industrial control. RL can be used to optimize industrial processes.
    14. Education. RL can be used to create adaptive learning systems.
    15. Agriculture. RL can be used to optimize agricultural operations.
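As a concrete instance of one algorithm from the outline above, here is a minimal k-nearest neighbors classifier. The 2-D points and class labels are toy data invented for the sketch:

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """k-nearest neighbors classification: majority vote among the k
    training examples closest (Euclidean distance) to the query point."""
    ranked = sorted(
        (math.dist(point, query), label)
        for point, label in zip(train_points, train_labels)
    )
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters with two class labels
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["A", "A", "A", "B", "B", "B"]

knn_predict(points, labels, (0.5, 0.5))  # → "A"
knn_predict(points, labels, (5.5, 5.5))  # → "B"
```

For regression, the same neighbor search would return the average of the neighbors' values instead of a majority vote.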

Application of machine learning in medicine in general

Dr. Scott Gottlieb, the 23rd commissioner of the United States Food and Drug Administration (2017 – 2019), said in a statement:

“Artificial intelligence and machine learning have the potential to fundamentally transform the delivery of health care. As technology and science advance, we can expect to see earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalized medicine” (from https://www.fda.gov/news-events/press-announcements/statement-fda-commissioner-scott-gottlieb-md-steps-toward-new-tailored-review-framework-artificial, accessed 11/24/23).

The following is a partial list of machine learning algorithms commonly used for medical purposes (Jiang et al. 2017):

  • Support vector machine (SVM). “SVM is mainly used for classifying the subjects into two groups” (Jiang et al. 2017).
  • Neural network (NN). “One can think about neural network as an extension of linear regression to capture complex non-linear relationship between input variables and outcome” (Jiang et al. 2017).
  • Deep learning (DL). “Deep learning is a modern extension of the classical neural network technique. One can view deep learning as a neural network with many layers” (Jiang et al. 2017). A deep learning technique commonly applied in the medical field is the convolutional neural network (CNN). CNNs “are a subset of artificial neural networks that are extensively used in image processing” (Ahsan et al. 2022).
  • Logistic regression (LR). “Logistic regression (LR) is an ML approach that is used to solve classification issues. The LR model has a probabilistic framework, with projected values ranging from 0 to 1” (Ahsan et al. 2022).
  • Discriminant analysis (DA).
  • Random forest (RF).
  • Linear regression (LR).
  • Naïve Bayes (NB). “The naïve Bayes (NB) classifier is a Bayesian-based probabilistic classifier. Based on a given record or data point, it forecasts membership probability for each class” (Ahsan et al. 2022).
  • Nearest neighbor (NN). “K-nearest neighbor (KNN) classification is a nonparametric classification technique… suitable for classification as well as regression analysis. The outcome of KNN classification is class membership. Voting mechanisms are used to classify the item. Euclidean distance techniques are utilized to determine the distance between two data samples. The projected value in regression analysis is the average of the values of the KNN” (Ahsan et al. 2022).
  • Decision tree (DT). “The decision tree algorithm follows divide-and-conquer rules. DT models [in which] the attribute may take on [discrete] values [are] known as classification trees; leaves indicate distinct classes, whereas branches reflect the combination of characteristics that result in those class labels. On the other hand, DT can take continuous variables[, and these are] called regression trees” (Ahsan et al. 2022).
  • Adaptive boosting, also called AdaBoost, “is a classifier that combines multiple weak classifiers into a single classifier. AdaBoost works by giving greater weight to samples that are harder to classify and less weight to those that are already well categorized. It may be used for categorization as well as regression analysis” (Ahsan et al. 2022).
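As an illustration of the logistic regression entry above, a minimal sketch of how such a model maps a weighted sum of features to a probability between 0 and 1 (the weights, bias, and feature values here are invented; in practice the weights are learned from data, e.g., by gradient descent):

```python
import math

def sigmoid(z):
    """Logistic function: squashes any real number into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def predict_probability(features, weights, bias):
    """Logistic regression prediction: probability of the positive class."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

# Invented weights and feature values for illustration
p = predict_probability([1.0, 0.0, 2.0], weights=[0.5, -1.2, 0.3], bias=-0.4)
# p is a probability in (0, 1); a common rule classifies as positive if p > 0.5
```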

From the examples reviewed above we can see that several different machine learning algorithms may be relevant to different problems in a particular topic or field. For example, in healthcare:

  • Supervised learning can be used for medical diagnosis. Relevance in otoneurology: Use data from intake questionnaire (features) to predict diagnosis (target).
  • Unsupervised learning can be used for genomic analysis.
  • Semi-supervised learning can be used for medical image analysis. Relevance in otoneurology: Analysis of infrared eye movement videos.
  • Reinforcement learning can be used to optimize treatment plans and drug discovery.

Machine learning in otoneurology: What has already been attempted?

Many machine learning techniques have found application specifically in otoneurology, including:

  • Bayesian probabilistic modeling (Miettinen and Juhola 2010)
  • Decision tree (Viikki et al. 1999; Viikki et al. 2002)
  • Deep learning (Kong et al. 2023; Li and Yang 2023a, b; Lim et al. 2019; Rastall and Green 2022; Wu et al. 2023; Yiu et al. 2019)
  • Linear discriminant analysis (Allum et al. 1991; Mouelhi et al. 2021)
  • Neural networks (Friedrich et al. 2023; Juhola et al. 2001; Kentala et al. 1997; Krafczyk et al. 2006; Lin et al. 2023; Newman et al. 2021a, b; Siermala et al. 2008)
  • Random forest (Du et al. 2022; Filippopulos et al. 2022)

Machine learning has also been applied to interpretation of specific otoneurological test results, such as:

  • Pupil segmentation and gaze estimation (Yiu et al. 2019)
  • Various forms of nystagmus (Ben Slama et al. 2020; Friedrich et al. 2023; Kong et al. 2023; Lee et al. 2023; Li and Yang 2023b; Lu et al. 2022; Newman et al. 2021a), including torsional nystagmus (Li and Yang 2023a)
  • Posturography (Ahmadi et al. 2019; Brandt et al. 2012; Krafczyk et al. 2006)

Machine learning has also been applied to the problem of identifying specific diagnoses, including:

  • Benign paroxysmal positional vertigo (Formeister et al. 2022; Lim et al. 2019; Wu et al. 2023)
  • Acute vestibular syndrome in stroke (Korda et al. 2022)
  • Motion sickness (Li et al. 2022)
  • “Central” vertigo (Kim et al. 2021), and distinguishing it from “peripheral” vertigo (Ahmadi et al. 2020)

Final comments

The authors of an opinion paper published by the National Academy of Medicine stated that “AI will not replace [healthcare] providers, but providers who leverage AI will replace those who do not” (Lomis et al. 2021).  The advent of machine learning in medicine is inevitable, so this is no longer a question of whether the profession will integrate it into practice and research, but how.  In doing so, medicine will have to confront a variety of questions, including ethical issues (Sood et al. 2022), the impact ML may have on the human interaction (Coiera 2019), and many others.

In general, ML may aid clinicians by improving diagnostic accuracy and medical decision making. More specifically, ML may also be useful in clinical fields that are esoteric or deal with rare diseases, or in a field such as otoneurology in which there is a significant mismatch between supply (the small number of clinicians who practice it) and demand (the large number of patients who need care).

References

Aguera y Arcas B, Norvig P (2023) Artificial general intelligence is already here. Noema.

Ahmadi SA, Vivar G, Frei J, Nowoshilow S, Bardins S, Brandt T, Krafczyk S (2019) Towards computerized diagnosis of neurological stance disorders: data mining and machine learning of posturography and sway. J Neurol 266: 108-117. doi: 10.1007/s00415-019-09458-y

Ahmadi SA, Vivar G, Navab N, Mohwald K, Maier A, Hadzhikolev H, Brandt T, Grill E, Dieterich M, Jahn K, Zwergal A (2020) Modern machine-learning can support diagnostic differentiation of central and peripheral acute vestibular disorders. J Neurol 267: 143-152. doi: 10.1007/s00415-020-09931-z

Ahsan MM, Luna SA, Siddique Z (2022) Machine-Learning-Based Disease Diagnosis: A Comprehensive Review. Healthcare (Basel) 10. doi: 10.3390/healthcare10030541

Allum JH, Ura M, Honegger F, Pfaltz CR (1991) Classification of peripheral and central (pontine infarction) vestibular deficits. Selection of a neuro-otological test battery using discriminant analysis. Acta Otolaryngol 111: 16-26. doi: 10.3109/00016489109137350

Bastanlar Y, Ozuysal M (2014) Introduction to machine learning. Methods Mol Biol 1107: 105-28. doi: 10.1007/978-1-62703-748-8_7

Ben Slama A, Sahli H, Mouelhi A, Marrakchi J, Boukriba S, Trabelsi H, Sayadi M (2020) Hybrid clustering system using Nystagmus parameters discrimination for vestibular disorder diagnosis. J Xray Sci Technol 28: 923-938. doi: 10.3233/XST-200661

Brandt T, Strupp M, Novozhilov S, Krafczyk S (2012) Artificial neural network posturography detects the transition of vestibular neuritis to phobic postural vertigo. J Neurol 259: 182-4. doi: 10.1007/s00415-011-6124-8

Coiera E (2019) The Price of Artificial Intelligence. Yearb Med Inform 28: 14-15. doi: 10.1055/s-0039-1677892

Du Y, Ren L, Liu X, Wu Z (2022) Machine learning method intervention: Determine proper screening tests for vestibular disorders. Auris Nasus Larynx 49: 564-570. doi: 10.1016/j.anl.2021.10.003

Filippopulos FM, Strobl R, Belanovic B, Dunker K, Grill E, Brandt T, Zwergal A, Huppert D (2022) Validation of a comprehensive diagnostic algorithm for patients with acute vertigo and dizziness. Eur J Neurol 29: 3092-3101. doi: 10.1111/ene.15448

Formeister EJ, Baum RT, Sharon JD (2022) Supervised machine learning models for classifying common causes of dizziness. Am J Otolaryngol 43: 103402. doi: 10.1016/j.amjoto.2022.103402

Friedrich MU, Schneider E, Buerklein M, Taeger J, Hartig J, Volkmann J, Peach R, Zeller D (2023) Smartphone video nystagmography using convolutional neural networks: ConVNG. J Neurol 270: 2518-2530. doi: 10.1007/s00415-022-11493-1

Goertzel B (2014) Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence 5: 1-48. doi: 10.2478/jagi-2014-0001

Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y (2017) Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol 2: 230-243. doi: 10.1136/svn-2017-000101

Juhola M, Viikki K, Laurikkala J, Pyykko I, Kentala E (2001) On classification capability of neural networks: a case study with otoneurological data. Stud Health Technol Inform 84: 474-8.

Katritsis DG (2021) Artificial Intelligence, Superintelligence and Intelligence. Arrhythm Electrophysiol Rev 10: 223-224. doi: 10.15420/aer.2021.61

Kentala E, Pyykko I, Auramo Y, Juhola M (1997) Neural networks in neurotologic expert systems. Acta Otolaryngol Suppl 529: 127-9. doi: 10.3109/00016489709124102

Kim BJ, Jang SK, Kim YH, Lee EJ, Chang JY, Kwon SU, Kim JS, Kang DW (2021) Diagnosis of Acute Central Dizziness With Simple Clinical Information Using Machine Learning. Front Neurol 12: 691057. doi: 10.3389/fneur.2021.691057

Kong S, Huang Z, Deng W, Zhan Y, Lv J, Cui Y (2023) Nystagmus patterns classification framework based on deep learning and optical flow. Comput Biol Med 153: 106473. doi: 10.1016/j.compbiomed.2022.106473

Korda A, Wimmer W, Wyss T, Michailidou E, Zamaro E, Wagner F, Caversaccio MD, Mantokoudis G (2022) Artificial intelligence for early stroke diagnosis in acute vestibular syndrome. Front Neurol 13: 919777. doi: 10.3389/fneur.2022.919777

Krafczyk S, Tietze S, Swoboda W, Valkovic P, Brandt T (2006) Artificial neural network: a new diagnostic posturographic tool for disorders of stance. Clin Neurophysiol 117: 1692-8. doi: 10.1016/j.clinph.2006.04.022

Lee Y, Lee S, Han J, Seo YJ, Yang S (2023) A nystagmus extraction system using artificial intelligence for video-nystagmography. Sci Rep 13: 11975. doi: 10.1038/s41598-023-39104-7

Li CC, Zhang ZR, Liu YH, Zhang T, Zhang XT, Wang H, Wang XC (2022) Multi-Dimensional and Objective Assessment of Motion Sickness Susceptibility Based on Machine Learning. Front Neurol 13: 824670. doi: 10.3389/fneur.2022.824670

Li H, Yang Z (2023a) Torsional nystagmus recognition based on deep learning for vertigo diagnosis. Front Neurosci 17: 1160904. doi: 10.3389/fnins.2023.1160904

Li H, Yang Z (2023b) Vertical Nystagmus Recognition Based on Deep Learning. Sensors (Basel) 23. doi: 10.3390/s23031592

Lim EC, Park JH, Jeon HJ, Kim HJ, Lee HJ, Song CG, Hong SK (2019) Developing a Diagnostic Decision Support System for Benign Paroxysmal Positional Vertigo Using a Deep-Learning Model. J Clin Med 8. doi: 10.3390/jcm8050633

Lin SC, Lin MY, Kang BH, Lin YS, Liu YH, Yin CY, Lin PS, Lin CW (2023) Artificial Neural Network-Assisted Classification of Hearing Prognosis of Sudden Sensorineural Hearing Loss With Vertigo. IEEE J Transl Eng Health Med 11: 170-181. doi: 10.1109/JTEHM.2023.3242339

Lomis K, Jeffries P, Palatta A, Sage M, Sheikh J, Sheperis C, Whelan A (2021) Artificial Intelligence for Health Professions Educators. NAM Perspect 2021. doi: 10.31478/202109a

Lu W, Li Z, Li Y, Li J, Chen Z, Feng Y, Wang H, Luo Q, Wang Y, Pan J, Gu L, Yu D, Zhang Y, Shi H, Yin S (2022) A Deep Learning Model for Three-Dimensional Nystagmus Detection and Its Preliminary Application. Front Neurosci 16: 930028. doi: 10.3389/fnins.2022.930028

McCarthy J, Feigenbaum EA (1990) In Memoriam: Arthur Samuel: Pioneer in Machine Learning. AI Magazine 11: 10. doi: 10.1609/aimag.v11i3.840

McCarthy J, Minsky ML, Rochester N, Shannon CE (1955) A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955.

Miettinen K, Juhola M (2010) Classification of otoneurological cases according to Bayesian probabilistic models. J Med Syst 34: 119-30. doi: 10.1007/s10916-008-9223-z

Mouelhi A, Ben Slama A, Marrakchi J, Trabelsi H, Sayadi M, Labidi S (2021) Sparse classification of discriminant nystagmus features using combined video-oculography tests and pupil tracking for common vestibular disorder recognition. Comput Methods Biomech Biomed Engin 24: 400-418. doi: 10.1080/10255842.2020.1830972

Newman JL, Phillips JS, Cox SJ (2021a) 1D Convolutional Neural Networks for Detecting Nystagmus. IEEE J Biomed Health Inform 25: 1814-1823. doi: 10.1109/JBHI.2020.3025381

Newman JL, Phillips JS, Cox SJ (2021b) Detecting positional vertigo using an ensemble of 2D convolutional neural networks. Biomed Signal Process Control 68: 102708. doi: 10.1016/j.bspc.2021.102708

Rastall DP, Green K (2022) Deep learning in acute vertigo diagnosis. J Neurol Sci 443: 120454. doi: 10.1016/j.jns.2022.120454

Samuel AL (1959) Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development 3: 210-229. doi: 10.1147/rd.33.0210

Samuel AL (1967) Some Studies in Machine Learning Using the Game of Checkers. II—Recent Progress. IBM Journal of Research and Development 11: 601-617. doi: 10.1147/rd.116.0601

Siermala M, Juhola M, Kentala E (2008) Neural network classification of otoneurological data and its visualization. Comput Biol Med 38: 858-66. doi: 10.1016/j.compbiomed.2008.05.002

Sood A, Sangari A, Chen JY, Stoff BK (2022) The ethics of using biased artificial intelligence programs in the clinic. J Am Acad Dermatol 87: 935-936. doi: 10.1016/j.jaad.2021.11.031

Turing AM (1950) I.—Computing machinery and intelligence. Mind LIX: 433-460. doi: 10.1093/mind/LIX.236.433

Viikki K, Kentala E, Juhola M, Pyykko I (1999) Decision tree induction in the diagnosis of otoneurological diseases. Med Inform Internet Med 24: 277-89. doi: 10.1080/146392399298302

Viikki K, Kentala E, Juhola M, Pyykko I, Honkavaara P (2002) Generating decision trees from otoneurological data with a variable grouping method. J Med Syst 26: 415-25. doi: 10.1023/a:1016463032661

Weiss EA (1992) Biographies: Eloge: Arthur Lee Samuel (1901-90). IEEE Annals of the History of Computing 14: 55-69. doi: 10.1109/85.150082

Wu P, Liu X, Dai Q, Yu J, Zhao J, Yu F, Liu Y, Gao Y, Li H, Li W (2023) Diagnosing the benign paroxysmal positional vertigo via 1D and deep-learning composite model. J Neurol 270: 3800-3809. doi: 10.1007/s00415-023-11662-w

Yamakawa H (2021) The whole brain architecture approach: Accelerating the development of artificial general intelligence by referring to the brain. Neural Netw 144: 478-495. doi: 10.1016/j.neunet.2021.09.004

Yiu YH, Aboulatta M, Raiser T, Ophey L, Flanagin VL, Zu Eulenburg P, Ahmadi SA (2019) DeepVOG: Open-source pupil segmentation and gaze estimation in neuroscience using deep learning. J Neurosci Methods 324: 108307. doi: 10.1016/j.jneumeth.2019.05.016

Page first published on October 6, 2023. Page last updated on November 28, 2025
