Computational Psychiatry. IBM’s new AI is being deployed to diagnose psychosis

Image: Davi Ozolin - Flickr
Alex Stockwell

Computational Psychiatry is now A Thing, but any widespread adoption needs to be handled with kid gloves

A team of IBM researchers and university academics has developed an artificial intelligence (AI) tool to help diagnose certain mental disorders. The signs are promising, but how much faith should we place in this emerging field of Computational Psychiatry?

Mental health professionals often use an assessment of a patient’s language as an indicator when diagnosing mental disorders such as psychosis or schizophrenia. This new AI can independently ‘listen’ to a patient’s speech and predict, with relative precision, the onset of certain conditions.

This could prove invaluable for occasions where a patient lacks access to trained professionals or the necessary facilities for diagnosis, but can we really trust algorithms when it comes to matters of the mind?

AI ‘checks’ speech patterns for signs of psychosis

The team, led by Guillermo Cecchi, lead researcher and manager of the Computational Psychiatry and Neuroimaging groups at IBM Research, built their new AI on the foundations of a 2015 study that used Natural Language Processing (NLP) to model specific variations in the speech patterns of people deemed likely to develop psychosis.

That earlier model proved successful, achieving high predictive accuracy. For the new psychosis-predicting AI, the team applied what was learned in 2015, posing a different question and using a much larger patient group.

The results of the study showed that the system could predict the eventual onset of psychosis with an accuracy of 83 percent, which, for Cecchi, is a significant step towards machines featuring more prominently in the diagnosis of mental disorders.
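Neither the study summary nor the quotes below spell out the implementation, but to make “modelling variations in speech patterns” concrete, here is a minimal, hypothetical sketch of what such a pipeline could look like in Python. The coherence and sentence-length features, the toy transcripts and the logistic-regression classifier are all illustrative assumptions, not IBM’s actual method.

```python
# A minimal, hypothetical sketch of speech-based risk prediction.
# The features, toy transcripts and classifier are illustrative
# assumptions for this article, not IBM's actual method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegression

def coherence_features(transcript: str) -> np.ndarray:
    """Reduce a transcript to two crude numbers: mean similarity between
    adjacent sentences (a proxy for semantic coherence) and mean sentence
    length in words (a proxy for syntactic complexity)."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    mean_len = np.mean([len(s.split()) for s in sentences])
    if len(sentences) < 2:
        return np.array([1.0, mean_len])
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(len(sentences) - 1)]
    return np.array([np.mean(sims), mean_len])

# Entirely fabricated training data: 1 = speaker later developed psychosis.
transcripts = [
    "I walked to the shop this morning. The shop was busy this morning. "
    "I bought some bread at the shop.",
    "The sky is loud today. Numbers keep walking past me. My shoes forgot the rain.",
]
labels = [0, 1]

X = np.vstack([coherence_features(t) for t in transcripts])
clf = LogisticRegression().fit(X, labels)

new_sample = "I saw my friend yesterday. My friend and I talked about work."
risk = clf.predict_proba([coherence_features(new_sample)])[0, 1]
print(f"Illustrative risk score: {risk:.2f}")
```

The real system presumably relies on far richer linguistic features and far more data; the point of the sketch is only that speech can be reduced to numbers a classifier can learn from, which is what makes automated triage conceivable at all.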

The rise of computational psychiatry

Any and all improvements to diagnosis result in better treatment for sufferers of psychosis. And for Guillermo Cecchi, the success of the study should pave the way for widespread use in psychiatric assessment. In conversation with Futurism, Cecchi said:

This system can be used, for instance, in the clinic. Patients considered at-risk could be quickly and reliably triaged so that the (always-limited) resources can be devoted to those deemed very likely to suffer a first episode of psychosis.

People who don’t have access to clinics or specialists could also potentially provide audio samples to the AI, meaning that treatment could begin without the cost and difficulty of seeing a specialist in person, as traditional care requires.

Cecchi also went on to highlight the potential of computational psychiatry beyond the diagnosis of psychosis alone, stating that the technology could be developed to aid in the diagnosis and treatment of other conditions such as depression, Parkinson’s disease, chronic pain and Alzheimer’s.

But returning to the diagnosis of psychiatric conditions such as psychosis and schizophrenia, Cecchi stated in a 2017 study that he believes AI and machine learning could eventually eliminate human subjectivity when it comes to accurately assessing patients and reaching a diagnosis.

And he’s absolutely not alone: the growing trend of AI diagnosis in psychiatry reflects a drive to quantify what was previously thought unquantifiable.

But what if that human subjectivity is entirely necessary? What if relying on AI algorithms is a slippery slope to removing the human element altogether, with the risk of more frequent misdiagnosis?

All watched over by machines of loving grace

In no way should medical progress, especially progress that has been proven successful, be hampered by sentimentality and a fear of moving too fast. But to play devil’s advocate, what if the preoccupation with showing how AI can do things better than we can opens the door to mistakes?

Algorithms merrily mine our data every day, personalizing our music playlists and suggesting purchases, but what happens when they’re entrusted with far more important decisions?

The criminal risk assessment tool Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) has been shown to make mistakes, with its risk scores sometimes hewing eerily close to our own fleshy racial biases when used in sentencing.

So what, then, about psychiatric evaluation? What if our personal idiosyncrasies of speech happen to align with those deemed mentally ill? A lifetime of unnecessary pharmaceutical treatment could be avoided simply by having a conversation with an empathetic, skilled human instead.

But who knows? As the algorithms are refined, and more data are made accessible, perhaps the process will become ever more accurate.

And as people continue to fall behind in this difficult healthcare climate, perhaps it’s best that the machines take the reins every now and then.
