In my opinion, Facebook’s experiment on emotional contagion was unethical. That is a bad thing.
The results they found are very interesting, and seem to go against recent findings from studies carried out in the U.S. and Germany. That is a good thing.
For me, the big issue here is not about Facebook manipulating users, although this is also troubling.
It seems that, since Facebook paid for, carried out and analysed the research itself, it did not require ethical approval. I don’t think that is acceptable, but that is a different argument.
The argument here is about informed consent. From everything I have read today, Facebook did not ask for informed consent from the participants who took part in this study.
In terms of healthcare, informed consent is “a process for getting permission before conducting a healthcare intervention on a person”.
In terms of UX, or HCI, when carrying out research where humans come into contact with technology, the people involved must be informed in some way for the research to be deemed ethical.
Requesting informed consent usually involves the researcher: a) notifying the participant that some research is being carried out, and b) asking the participant to confirm that the purpose of the data gathering and how the data will be used have been explained to them, and that they are happy to continue.
It is also good practice to make it clear to the participant that they are free to withdraw at any moment and that, if this happens, the data generated will be deleted and not used in the study.
Facebook’s defence on the matter of informed consent has been to state that users provided informed consent by agreeing to Facebook’s data usage policy. Personally, I have never read this document, and I would expect a large number of people have never read it either.
Taken directly from the Facebook Data Usage policy, this is what you have agreed Facebook can do with your posts, messages, photos, etc.:
How we use the information we receive
We use the information we receive about you […]. […] we may use the information we receive about you:
– as part of our efforts to keep Facebook products, services and integrations safe and secure;
– to protect Facebook’s or others’ rights or property;
– to provide you with location features and services, […];
– to measure or understand the effectiveness of ads you and others see, […];
– to make suggestions to you and other users on Facebook, such as […]; and
– for internal operations, including troubleshooting, data analysis, testing, research and service improvement.
[The emphasis on the last line is mine]
Using the defence of “oh, the users have given their consent by agreeing to the terms and conditions of the site” is not acceptable. There is plenty of research showing that users do not read user agreements and Ts and Cs. Seeing as usage agreements can change regularly, this is an even thinner defence.
Playing devil’s advocate for a moment, I do also think the negative reaction has been partly fuelled by Facebook’s unhealthy…opinion of privacy.
I read a very interesting article on The Faculty Lounge, written from the point of view of an academic involved in research and ethics.
The author essentially states that an ethics review board would have approved Facebook’s research. She does challenge details around the lack of requirement for ethics reviews for private companies as opposed to academic institutions, the extent of the actual research carried out by the two academics who co-authored the paper with the Facebook employee, and informed consent.
One of the academic institutions involved, Cornell, said there was no need for ethical approval because the data being analysed came from a pre-existing dataset. This seems illogical, since the manipulation of the users’ newsfeeds must have been carried out after the experiment started, unless manipulation was already being carried out on a regular basis.
I mentioned this is an argument about informed consent, but maybe the bigger argument is about ethics approval in general for private companies. Quoting the article:
Many have expressed outrage that any IRB could approve this study, and there has been speculation about the possible grounds the IRB might have given. The Atlantic suggests that the “experiment is almost certainly legal. In the company’s current terms of service, Facebook users relinquish the use of their data for ‘data analysis, testing, [and] research.’” But once a study is under an IRB’s jurisdiction, the IRB is obligated to apply the standards of informed consent set out in the federal regulations, which go well, well beyond a one-time click-through consent to unspecified “research.” Facebook’s own terms of service are simply not relevant. Not directly, anyway.
[Again, my emphasis on the last 2 lines]
Here Prof. Meyer seems to say that Facebook’s mechanism of “one-time click-through consent” does not constitute informed consent. However, since Facebook is a private company, they are not bound by the same rules as an academic institution.
Effects of the research
Since Facebook does not know the exact psychological status of those who took part, it cannot know what effect the overly negative or positive content has caused. Amy Bucher, a psychologist, has written an excellent article on the effects this could have had.
It is a pity the results were arrived at by these means. Research into subjects such as privacy, security and emotion, where negative outcomes are possible, is necessary, particularly when bad actors won’t take ethics into account. My interest in usable security research brings me up against these issues regularly.
On the issue of manipulating users’ news feeds, a few thoughts:
– my opinion is that the majority of Facebook users were, until now, unaware that the contents of their news feeds were being controlled;
– as a result, users could not have given informed consent in the first place, as they did not know manipulation was being carried out;
– does the public know that newspapers have certain opinions and policies on news, and certain ways in which they will report it?
– is Facebook’s manipulation of users’ news feeds similar to a newspaper’s editorial policy?
– regular A/B testing of Facebook’s news feed algorithm can’t be used as a defence either, unless the ultimate goal of that testing is to affect the psychological status of users;
– the public doesn’t know how the manipulation is happening.