It's hard to tell what's more alarming: that Facebook intentionally manipulated the moods of nearly 700,000 users for research purposes, or that the social network doesn't understand why people are so concerned about it.
In a recently published paper, Facebook revealed that it manipulated the posts some 689,000 users saw on their news feeds in order to determine whether it could trigger a process of "emotional contagion" through the dissemination of positive and negative information. The study, which was carried out alongside researchers from Cornell and the University of California, San Francisco, concluded that "[e]motions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks."
Unsurprisingly, the news prompted concerns over the ethics of the study. In response, a Facebook spokeswoman said the aim of the manipulation was "to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow." She added that "there is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely."
UPDATE: One of the study's three authors, Facebook data scientist Adam Kramer, responded to the controversy with the following Facebook post:
"OK so. A lot of people have asked me about my and Jamie and Jeff's recent study published in PNAS, and I wanted to give a brief public explanation. The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook. We didn't clearly state our motivations in the paper.
Regarding methodology, our research sought to investigate the above claim by very minimally de-prioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody's posts were "hidden," they just didn't show up on some loads of Feed. Those posts were always visible on friends' timelines, and could have shown up on subsequent News Feed loads. And we found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.
And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it -- the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.
The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.
While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper."
[via Forbes]