Social Media: impacts, a look under the hood, and problems

I love and hate social media at the same time. This seeming contradiction comes from the difference between what social media could be and what it usually ends up being.

There are many points we could raise here: privacy, monopoly, usability… but this post would get incredibly long if I were to cover them all. So today I’d like to focus on this MIT Technology Review article, which discusses problems with Facebook’s AI related to misinformation and polarization.

The article is the result of an investigation that took over half a year, and it’s quite long, so I’ll point out its most important findings.

The most important factor driving FB’s growth is “engagement”. The platform wants to have as many users as possible, and for those users to spend as much time as possible on Facebook. To do that, it uses machine learning algorithms to determine what kind of content people engage with, uses that to pick posts from friends and groups, and also to suggest other content. It then collects data from your interactions with those posts in a continuous effort to keep you on the site and market to you.
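
To make that loop concrete, here is a minimal sketch in Python of what engagement-driven ranking boils down to. The names (`rank_feed`, `engagement_model`) are made up for illustration; this is not Facebook’s actual system, just the general shape of it:

```python
# Toy sketch of engagement-driven feed ranking: score every candidate post by
# how likely this user is to interact with it, then show the top-scoring ones.

def rank_feed(candidate_posts, user, engagement_model, feed_size=20):
    """Order candidate posts by predicted engagement for this user."""
    scored = [
        # The hypothetical model is trained on past clicks, likes, comments,
        # shares and time spent, so "good" here means "keeps you on the site".
        (engagement_model.predict(user, post), post)
        for post in candidate_posts
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in scored[:feed_size]]
```

Every interaction with the resulting feed becomes new training data, which is what closes the loop described above.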

If that wasn’t bad enough by itself, it quickly turned out that controversy, misinformation and polarizing content lead to the most time spent “engaging” with the platform. It’s not that FB specifically wants to spread lies; it just happens that showing this kind of content works best for achieving its goals. And even when they try to address these problems, if a solution that reduces misinformation also reduces “engagement” too much, it’s discarded.
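
That trade-off can be pictured as a simple launch gate. The threshold and numbers below are hypothetical, purely to show the shape of the decision, not anything Facebook has published:

```python
# Made-up illustration of the launch gate described above: a new ranking model
# only ships if the measured drop in engagement stays within a tolerated budget.

ENGAGEMENT_LOSS_BUDGET = 0.01  # hypothetical: at most a 1% engagement drop

def should_ship(baseline_engagement, candidate_engagement):
    loss = (baseline_engagement - candidate_engagement) / baseline_engagement
    return loss <= ENGAGEMENT_LOSS_BUDGET

# A fix that removes a lot of misinformation but costs 3% engagement is discarded:
should_ship(baseline_engagement=1.00, candidate_engagement=0.97)  # -> False
```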

Extremism is another huge problem: 64% of all extremist group joins are due to Facebook’s own recommendations.

When FB’s research team found that potentially depressed users could easily spiral into consuming increasingly negative material, risking further harm to their mental health, they proposed tweaking the content-ranking models to show such users less of it. Leadership avoided answering the questions raised, and the proposal didn’t go anywhere.

And when FB’s researchers did put work into addressing misinformation, their models had to fit within FB’s requirement of “fairness”. Which sounds great, until you learn that what they mean by it isn’t judging all content by the same objective standard, but rather putting the same number of warnings on both sides of the political spectrum (I don’t like this one-dimensional divide, but that’s what is used here). So if one side, for whatever reason, posts more misinformation, its misleading content gets marked as such less often. For example, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns.
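
Here is a deliberately simplified, hypothetical illustration of the difference between judging everything by one standard and equalizing outcomes per side; the function and numbers are invented for this example:

```python
# "Fairness" as equal outcomes rather than equal standards: warnings are capped
# so both political sides end up with the same number of flags, regardless of
# how much misinformation each side actually posted.

def apply_equal_outcome_fairness(flags_left, flags_right):
    """Return the number of warnings actually shown for each side."""
    cap = min(flags_left, flags_right)
    return cap, cap

# If one side posts 100 misleading items and the other 20, both end up with
# only 20 warnings shown; the remaining 80 go unmarked.
apply_equal_outcome_fairness(100, 20)  # -> (20, 20)
```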
