
When you're blue, so are your Instagram photos, finds study

Aug 09, 2017, at 01:34 am
New York, Aug 8 (IBNS): When you're feeling blue, your photos turn bluer, too. And more gray and dark as well, with fewer faces shown. In other words, just like people can signal their sadness by body language and behavior—think deep sighs and slumped shoulders—depression reveals itself in social media images.

That's the conclusion of new research showing that computers, applying machine learning, can successfully detect depressed people from clues in their Instagram photos.

The computer's detection rate of 70 percent is more reliable than the 42 percent success rate of general-practice doctors diagnosing depression in person.
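To make those percentages concrete, here is a toy sketch of how a detection rate is measured: a threshold rule applied to simulated photo-brightness scores, scored against known labels. This is purely illustrative; the study used a statistical model trained on many photo features, and the distributions, threshold, and `classify` helper below are invented for the example.

```python
import random

random.seed(0)

# Hypothetical data: the study reports that depressed users' photos were
# darker on average, so we simulate two overlapping brightness distributions.
# Label 1 = depressed, 0 = healthy.
depressed = [(random.gauss(0.35, 0.1), 1) for _ in range(200)]
healthy = [(random.gauss(0.55, 0.1), 0) for _ in range(200)]
data = depressed + healthy

def classify(brightness, threshold=0.45):
    """Toy rule: predict 'depressed' (1) when mean brightness is below threshold."""
    return 1 if brightness < threshold else 0

# Detection rate = fraction of cases the rule labels correctly.
correct = sum(classify(b) == label for b, label in data)
accuracy = correct / len(data)
print(f"accuracy: {accuracy:.2f}")
```

Even this one-feature rule beats chance on the simulated data, which is the intuition behind the finding: weak pixel-level signals, combined across many photos, add up to a usable screen.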

"This points toward a new method for early screening of depression and other emerging mental illnesses," says Chris Danforth, a professor at the University of Vermont who co-led the new study with Andrew Reece of Harvard University. "This algorithm can sometimes detect depression before a clinical diagnosis is made."

The team's results were published Aug. 8 in the leading data-science journal EPJ Data Science.

The scientists asked volunteers, recruited from Amazon's Mechanical Turk, to share their Instagram feed as well as their mental health history. From 166 people, they collected 43,950 photos. The study was designed so that about half of the participants reported having been clinically depressed in the last three years.

Then they analyzed these photos, using insights from well-established psychology research about people's preferences for brightness, color, and shading. "Pixel analysis of the photos in our dataset revealed that depressed individuals in our sample tended to post photos that were, on average, bluer, darker, and grayer than those posted by healthy individuals," Danforth and Reece write in a blog post to accompany their new study.
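The kind of pixel analysis described above can be sketched with a few lines of standard Python: convert each pixel from RGB to HSV and average the hue, saturation, and brightness channels. This is a minimal illustration, not the study's actual pipeline; the `color_features` helper and the synthetic pixel lists are assumptions made for the example.

```python
import colorsys

def color_features(pixels):
    """Mean hue, saturation, and brightness (HSV, each in 0-1)
    over an image given as an iterable of (r, g, b) tuples in 0-255."""
    hs, ss, vs = [], [], []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hs.append(h)
        ss.append(s)
        vs.append(v)
    n = len(hs)
    return sum(hs) / n, sum(ss) / n, sum(vs) / n

# Two synthetic "photos": a saturated blue one and a dark gray one.
blue_photo = [(20, 40, 200)] * 100
gray_photo = [(60, 60, 60)] * 100
print(color_features(blue_photo))  # hue near the blue band, high saturation
print(color_features(gray_photo))  # zero saturation, low brightness
```

Averaged over a user's whole feed, features like these are exactly the "bluer, darker, grayer" signals the quote describes.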


They also found that healthy individuals chose Instagram filters, like Valencia, that gave their photos a warmer, brighter tone. Among depressed people, the most popular filter was Inkwell, which converts photos to black-and-white.

"In other words, people suffering from depression were more likely to favor a filter that literally drained all the color out the images they wanted to share," the scientists write.

Faces in photos also turned out to provide signals about depression. The researchers found that depressed people were more likely than healthy people to post photos containing faces—but those photos featured fewer faces, on average, than the photos in healthy people's Instagram feeds. "Fewer faces may be an oblique indicator that depressed users interact in smaller settings," Danforth and Reece note, which corresponds to other research linking depression to reduced social interaction—or it could be that depressed people take many self-portraits.

"This 'sad-selfie' hypothesis remains untested," they write.

As part of the new study, Danforth and Reece had volunteers attempt to distinguish between Instagram posts made by depressed people and those made by healthy people. They could, but not as effectively as the statistical computer model—and the human ratings had little or no correlation with the features of the photos detected by the computer. "Obviously you know your friends better than a computer," says Danforth, who is also co-director of UVM's Computational Story Lab, "but you might not, as a person casually flipping through Instagram, be as good at detecting depression as you think."

Consider that more than half of a general practitioner's depression diagnoses are false—a very expensive health care problem—while the computational algorithm did far better. The new study also shows that the computer model was able to detect signs of depression before a person's date of diagnosis. "This could help you get to a doctor sooner," Danforth says. "Or, imagine that you can go to a doctor and push a button to let an algorithm read your social media history as part of the exam."

As the world of machine learning and artificial intelligence expands into many areas of life, there are deep ethical questions and privacy concerns. "We have a lot of thinking to do about the morality of machines," Danforth says. "So much is encoded in our digital footprint. Clever artificial intelligence will be able to find signals, especially for something like mental illness." He thinks that this type of application may hold great promise for helping people early in the onset of mental illness, avoiding false diagnoses, and offering a new, lower-cost screening option for mental health services, especially for those who might not otherwise have access to a trained expert, like a psychiatrist.

"This study is not yet a diagnostic test, not by a long shot," says Danforth, "but it is a proof of concept of a new way to help people."

Photo courtesy: EPJ Data Science
