When it comes to social media moderation, reach matters

John Funge is the Chief Product Officer at DataTribe, a cybersecurity startup foundry. He’s founded, built and sold three technology companies.

Social media in its current form is broken.

In 20 years, we’ll look back at the social media of 2020 like we look at smoking on airplanes or road trips with the kids rolling around in the back of a station wagon without seatbelts. Social media platforms have grown into borderless, society-scale misinformation machines. Any claim that they do not have editorial influence on the flow of information is nonsense. Just as a news editor selects headlines of the day, social media platforms channel content with engagement-maximizing algorithms and viral dynamics. They are by no means passive observers of our political discourse.

At the same time, I sympathize with the position that these companies are in — caught between the interests of shareholders and the public. I’ve started technology companies and helped build large-scale internet platforms. So I understand that social media CEOs have a duty to maximize the value of the business for their shareholders. I also know that social media companies can do better. They are not helpless to improve themselves. Contrary to Mark Zuckerberg’s recent heel dragging in dealing with President Trump’s reckless posts, the executives and boards at these companies have full dominion over their products and policies, and they have an obligation to their shareholders and society to make material changes to them.

The way to fix social media starts with realizing it is two different things: personal media and mass media.

Personal media is most of social media: selfies from a hike or a shot of that Oreo sundae, the stuff you share with friends and family. Mass media is content that reaches large audiences, such as a tweet that reaches a Super Bowl-sized audience in real time. To be clear, it's not just about focusing on people with a lot of followers. High-reach content can also be posts that go viral and get viewed by a large audience.

Twitter’s decision to annotate a couple of Trump’s tweets is a baby step in this direction. By applying greater scrutiny to a mega-visibility user, the company is treating those posts differently than low-reach tweets. This extra attention, however, should not be tied to any particular individual; it should be applied to all tweets that reach a large audience.

Reach is an objective measure of the impact of a social media post. It makes sense. Tweets that go to more people carry more weight and therefore should be the focus of any effort at cleaning up disinformation. The audience size of a message is as important as, if not more important than, its content. So, reach is a useful first-cut filter removed from the hornet’s nest of interpreting the underlying content or beliefs of the sender.

From a technology perspective, it is very doable. When a social media post exceeds a reach threshold, the platform should automatically subject the content to additional processes to reduce disinformation and promote community standards. One idea, an extension of what Twitter recently did, would be to prominently connect a set of links to relevant articles from a pool of trusted sources — to add context, not censor. The pool of trusted content would need to be vetted and diverse in its point of view, but that’s possible, and users could even be involved in crowd-sourcing those decisions. For the highest-reach content, there could be additional human curation and even journalistic-style fact checking. If these platforms can serve relevant ads in milliseconds, they can serve relevant content from trusted sources.
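The threshold logic described above can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual pipeline: the names (`Post`, `moderate_by_reach`), the thresholds, and the keyword-based link matching are all invented for the example.

```python
from dataclasses import dataclass, field

# Assumed cutoffs, purely illustrative; real platforms would tune these.
REACH_THRESHOLD = 100_000          # crossing this makes a post "mass media"
HUMAN_REVIEW_THRESHOLD = 1_000_000 # crossing this escalates to human curation

@dataclass
class Post:
    post_id: str
    text: str
    reach: int                                 # impressions so far
    context_links: list = field(default_factory=list)
    needs_human_review: bool = False

def moderate_by_reach(post, trusted_sources):
    """Apply extra scrutiny only once a post crosses the reach threshold."""
    if post.reach < REACH_THRESHOLD:
        return post  # low-reach personal media: Section 230-style hands off
    # High-reach: attach context links from a vetted, diverse source pool
    # (naive keyword match here; a real system would use topic models).
    post.context_links = [
        s["url"] for s in trusted_sources if s["topic"] in post.text.lower()
    ]
    # Highest-reach: escalate for human curation and fact checking.
    if post.reach >= HUMAN_REVIEW_THRESHOLD:
        post.needs_human_review = True
    return post

# Example: a viral claim gets context links and a human-review flag,
# while a low-reach post passes through untouched.
sources = [{"topic": "vaccine", "url": "https://example.org/vaccine-context"}]
viral = moderate_by_reach(Post("1", "New vaccine claim", reach=2_000_000), sources)
quiet = moderate_by_reach(Post("2", "My Oreo sundae", reach=42), sources)
```

The point of the sketch is that the expensive steps (context retrieval, human review) run only on the small fraction of posts that cross the reach line, which is what makes this tractable at platform scale.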

From a regulatory perspective, reach is also the right framework for reforming Section 230 of the Communications Decency Act. That’s the pre-social media law that gives internet platforms a broad immunity from liability for the content they traffic. Conceptually, Section 230 continues to make sense for low-reach content. Facebook should not be held liable for every comment your uncle Bob makes. It’s when posts reach a vast number of people that Twitter and Facebook start to look more like The Wall Street Journal or The New York Times than an internet service provider. In these cases, it’s reasonable that they should be subject to similar legal liability as mass media outlets for broadly distributing damaging falsehoods.

Improving social media intelligently starts with breaking the problem down based on the reach of the content. Social media is two very different things thrown together in an internet blender: personal media and mass media. Let’s start treating it that way.
