Facepalm: Facebook experiments on its users, Part II
So Facebook did an experiment on 1 in 2500 users who did not know they were part of an experiment. What did the researchers find? For context (and contrary to what many commentators assume), previous research had found that hearing a continuous stream of happy-happy-me chat tends to cause readers to express resentment, not joy. The controversial Facebook research weakly disconfirmed this, suggesting that people exposed to positive words made slightly more positive status updates and that people exposed to negative words made slightly more negative ones.
But as bioethicist Michelle N. Meyer observes, the effect was not only tiny but questionable:
Note two things. First, while statistically significant, these effect sizes are, as the authors acknowledge, quite small. The largest effect size was a mere two hundredths of a standard deviation (d = .02). The smallest was one thousandth of a standard deviation (d = .001). …
Second, although the researchers conclude that their experiments constitute evidence of “social contagion”—that is, that “emotional states can be transferred to others”—this overstates what they could possibly know from this study. The fact that someone exposed to positive words very slightly increased the amount of positive words that she then used in her Facebook posts does not necessarily mean that this change in her News Feed content caused any change in her mood.

No, not necessarily, because we have no independent measure of her mood. Maybe users do resent the Group hug!! Group hug!! view of life shoved onto their page but know better than to show it, because online can be forever. So they pin on their smiley face and grin along.
Does the sheer size of the study mean anything, as Big Data enthusiasts claim? University of Western Australia’s David Glance says no:
Big Data brings with it the naive assumption that more data is better when it comes to statistical analysis. The problem is, however, that it actually introduces all sorts of anomalies, especially when dealing with extremely small differences that appear in one single measure at scale.

For one thing, different cultures, even in the same country, may have different social (not personal) reactions to happy talk, confounding the data.
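To see why a result can be “statistically significant” and still practically negligible at this scale, here is a minimal illustrative simulation. It is not drawn from the study itself: the group sizes below are assumptions chosen only to be in the ballpark of the Facebook experiment, and the d = 0.02 effect size is the largest one Meyer quotes above.

```python
# Illustrative sketch only: two simulated groups whose means differ by
# d = 0.02 (two hundredths of a standard deviation). The group sizes are
# assumptions, not figures from the Facebook study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 300_000        # assumed: hundreds of thousands of users per condition
effect_size_d = 0.02         # the largest effect size reported (per Meyer, above)

control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
treated = rng.normal(loc=effect_size_d, scale=1.0, size=n_per_group)

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference: {treated.mean() - control.mean():.4f} standard deviations")
print(f"p-value: {p_value:.2e}")   # far below 0.05 at this sample size
```

With samples that large, the test flags even a two-hundredths-of-a-standard-deviation gap as highly significant; that is the sense in which “significant” and “meaningful” come apart once the data set is big enough.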
Reactions have varied. Stephen Green, the Vodkapundit, feels betrayed:
Facebook has been described as an internet-within-the-internet, and the secret to making that work is it’s an internet curated for you by people you trust. … But for this to work, Facebook must remain neutral. What you see must be what your trusted friends have curated and presented to you. There can’t be any monkeying around with the Facebook timeline, any more than AT&T or Verizon can decide which phone calls you may receive, or when you may receive them.

He sees Facebook as now “essentially corrupt.”
Some analysts sound more nonchalant: danah boyd (sic) thinks it’s all just fear of Big Data, and offers suggestions as to how companies should handle their “new power.”
Their new what? Meyer takes a middle view: The experiment wasn’t technically unethical, because only the career academics are bound by federally funded research protocols, and they did not have direct access to the data. And maybe it was a good thing that people learned how much Facebook manipulates what users see anyway.
Nilay Patel notes at Vox that the entire Facebook proposition is really about marketing brands to vast audiences, which is a form of manipulation.
True, but if I see an ad for toothpaste, I have no reason to think they are really trying to sell me a Chevy Volt.
Market research and science research are just not the same thing. Advertisers are generally private companies whose marketing/ad agencies guard closely whatever they find out from their research. The research is not likely to be published later in a science journal, perhaps ending up as public policy “based on science.” Indeed, private research is valuable to a company principally because competitors don’t have access to it.
Of course, a scientist can get a grant to research the same questions and publish the results in a journal. But she must follow human subject protocols in return. It’s a tradeoff, and Facebook is trying to efface the rules.
Patel seems closer to the mark when he also says,
It's true that Facebook is getting a little too good at apologizing for creepy behavior, as Mike Isaac just wrote in The New York Times. The company doesn't seem to know where its users will draw the line. But a big reason for that disconnect is everyone's totally confused about how our data actually turns into money. If Facebook wants to stop apologizing, it had better start being a lot clearer about what it's actually selling.

Yes, and its recent admission of a “communications error” hardly gets to the bottom of that.

See also: Facepalm: Facebook experiments on its users, Part I

Note: Facepalm. This is what actual informed consent looks like:
Denyse O’Leary is a Canadian journalist, author, and blogger.