Can We Control Junk Information?
Bad information is everywhere. Social platforms dominate the media landscape. I'm not sure that we can control the junk without rediscovering how to talk to one another.
The Teardown
Thursday :: October 3rd, 2024 :: Approx. 14 min read
👋 Hi, this is Chris with another issue of The Teardown. In every issue, I cover how we interact with technology that enables our day-to-day lives.
If you’d like to get emails like this in your inbox every week, subscribe below.
And, if you enjoy today’s thinking, let me know by tapping the Like (❤️) icon or forwarding the email to someone else.
Quick thanks: I wanted to welcome lots of new folks to The Teardown this week. I’m pumped that you’re here, reading, and contributing to something that I really enjoy doing. Hope you love today’s post. -C
What Do You Mean, Exactly?
It’s overrun with white picket fences. It is a developed monstrosity miles away from any meaningful store, service, or transit station. It is the American dream.
To some, it represents every misguided privilege and trapping of modern life: a large plot of living space defined by massive sprawl, inconvenient distances, cookie-cutter houses, and those pesky cars. To others, it is a place of open space, more space, non-urban calm, and positive inevitability - something you ultimately love once you plant roots as an adult, possibly married, possibly with kids.
What am I talking about? You might already know. The suburb.
But, what does it mean to live in a suburb? And, what is a suburb?
I exchanged a few messages with another Substack writer in response to his post about stranded kids. I’m not linking his post here because I don’t care to publicize his name or reputation. He is irrelevant; his words are not:
The majority of kids in the U.S. are stuck at home with nowhere to go unless parents drive them somewhere.
Here in Spain I see so many kids out in the wild enjoying themselves. When friends visit, they always comment on how different it is.
A friendly reminder that if you don’t like where you live, you can always move somewhere else. It’s not that crazy of an idea, human beings have been migrating to other places since the dawn of time.
I thought the first sentence was vague. I thought the concluding paragraph was reductive. But I was curious. What, exactly, did the writer want to tell me?
So, I asked:
Just curious: what defines stuck here? Can they walk down the street to friends? Can they walk to a park? Are they miles from anything?
It is hard to unpack what you mean by “stuck with nowhere to go.”
I thought those questions were reasonable. Perhaps pedantic, too. And, through one lens, kind of annoying. But, in an era when it’s hard to know fact from not-quite-fact, it’s important to be precise with words. The writer responded to my queries:
What I mean by “stuck” is that in the suburbs everything is spread so far apart which means that people need to drive to get to anywhere meaningful and kids can’t drive.
Does that make sense or should I keep unpacking?
Oh, well, that cleared the fog in my understanding. The precise definition of “stuck” was “spread so far apart,” and the precise definition of “nowhere to go” in the original post was “anywhere meaningful.”
I was flummoxed by the word choice. This writer did not seem to care about the precision of his statements, or their factual accuracy. And, to be clear, I wasn’t sure if he was right or wrong at first glance.
I wanted him to support his statements with data, but instead, he qualified them.
So, I was left not really knowing what he meant, or understanding the facts, and otherwise witnessed something common on the internet: conjecture.
It’s not easy to police opinions. They are, by definition, not facts.
People must engage each other, sometimes uncomfortably, to unpack the meaning of an opinion. The moment when you find an indisputable fact hiding in an opinion is rare.
Something that rare comes only with effort. To get the thing, you need to do the work. Between people, the work is their conversation. It is their willingness to engage, prod, object, disagree, and squirm through something to achieve a resolution. Colloquially, you break bread.
And conversation requires practice. You practice articulating, strengthening, and restructuring topics you care about. You evolve your thinking and awareness to keep up with society, technology, and everyone around you.
So, today, I’ll explore how we’re communicating with each other and thinking about guardrails on that communication.
The short story: We’re probably too reliant on technology, but technology is just one puzzle piece. Technology won’t solve communication problems. People need to revisit the art of conversation.
The Value Of Conversation
Real Time With Bill Maher is an HBO (yes, Max, I know) show hosted by politically-tilted comedian Bill Maher. I’m a fan not because of him, but because he often speaks with guests holding interesting and sometimes controversial views. You usually hear something evocative.
The part of the format I like most is the panel discussion. Maher introduces topics to the panel and converses with them. This past week’s show included guests Ian Bremmer and Yuval Noah Harari, both well-known, highly educated speakers and authors.
Maher opened with the topic of AI regulation, recently discussed at the U.N. Prompted by Maher asking if the U.N. or any other organization should regulate AI, Harari said (abbreviated by me):
We need to understand the problem before we rush to solutions.
We tend to solve problems, and then figure out later that we solved the wrong problem.
The point is easy enough to understand. Many AI regulation ideas might simply be bandaids over problems that aren’t the real problems.
Maher then asked:
So what’s the problem that we’re missing and what’s the problem that’s really there?
Harari’s response (abbreviated):
AI is not a tool, it’s an agent (-35:26)
Let’s pause for a moment so I can fill in a gap. Harari described how most technology tools over time were deterministic. You gave them instructions and they produced results, reliably: identical inputs yielded identical outputs.
AI is different in that it is probabilistic. The same prompt tossed twice at a Large Language Model (LLM) like ChatGPT will not necessarily generate the same output. A model such as OpenAI’s o1 goes further: it is both probabilistic and capable of reasoning. In effect, it thinks about its response.
This distinction is one reason why Harari explicitly used the word agent, an autonomous actor capable of thinking and making decisions.
It is easier, conceptually, to regulate something deterministic. You make sure the tool can’t produce the output (or can’t accept the input, or both). To instead regulate something that thinks and decides and acts on its own seems, well, complicated.
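To make the distinction concrete, here’s a toy sketch of my own - not anything from the show, with invented function names purely for illustration. A deterministic tool maps identical inputs to identical outputs; a probabilistic model samples its output, so the same prompt can land differently.

```python
import random

# Deterministic tool: identical inputs always yield identical outputs.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

# Toy stand-in for a probabilistic model: the continuation is sampled,
# so the same prompt can yield a different output on every run.
def toy_language_model(prompt: str) -> str:
    continuations = ["a tool.", "an agent.", "a system.", "a black box."]
    return f"{prompt} {random.choice(continuations)}"

print(celsius_to_fahrenheit(100))   # 212.0, every single time
print(toy_language_model("AI is"))  # varies from run to run
print(toy_language_model("AI is"))  # same prompt, possibly a different answer
```

Regulating the first means constraining a fixed input-output mapping. Regulating the second means constraining something that can, in effect, surprise you.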
Bremmer added a geopolitical observation:
We can’t wait for decades for the Americans and Chinese to talk about AI arms control even though we don’t trust each other (-34:27). Need to make sure the world is talking to each other and understanding the nature of these agents.
I highlighted something important in that observation that I’ll come back to in a moment.
Maher then posed a free speech question, asking Harari whether he had said in the past that he supported free speech regulation for AI bots. Harari’s response was nuanced:
No free speech, or no free pass for the algorithms that are now managing the social media platforms that are now the most important media platforms in the world
And:
We now have the most sophisticated technology in history and people are losing the ability to hold a conversation.
And, there, connected back to Bremmer’s statement, is the key word: conversation. In Harari’s view, we’re veering off the well-understood path of conversation with one another.
So, what are we replacing it with? Technology use. Lots of it.
But I think it’s reductive to say something like phones are killing us or social media rots our brains. Real life doesn’t often fit into short phrases. Modern algorithms aren’t simple formulas.
One-Way Ticket To Amplification
The New York Times both propagates and explains part of a vexing amorphous problem space: one-way information. From people.
You read a newspaper and learn about a topic, something that happened, or get a glimpse into the future. But what you read is editorialized by the time it hits your eyeballs. You consume a version. The writer and editor aren’t sitting next to you for support and context.
One way to work around that gap is to engage the internet. You ask people - anywhere - about something, or promote something, or refute a point.
In some moments, you might get what you want and move on. In others, you fall into a perverse trap of throwing napalm at people from the broader internet who feel, for whatever reason, compelled to respond to random inquiries. They’re the internet’s informal, inflammatory editors.
The back-and-forth is probably unproductive, serving mostly to polarize rather than inform, educate, and resolve.
Underlying these asynchronous barbs is the exchange of opinions, or of information. And Harari brought up that topic during Maher’s show:
Now the most important editors in the world are algorithms, the algorithms that decide what you will see on the news feed…
And they are given a very specific and narrow goal by their human masters - to increase human engagement, to increase user engagement, and they discovered by experimenting on billions of human guinea pigs that the easiest way to grab user attention is to press the hate button, or the greed button, or the fear button, in our minds, and this is what they’ve been doing for the last couple of years or decades. And this is what makes the conversation all over the world almost impossible.
They are flooding the world with junk information. And nobody is liable for that because they hide behind free speech. But what we want really from the corporations is not to be liable for what the human users are saying, here free speech would definitely be protected, they should be liable for what the algorithms do.
I’ll fast forward to what Harari said next after some other back-and-forth:
And the other thing to realize is the vast majority of information is not truth. A key misconception, especially in places like Silicon Valley, is to equate information with truth. Most information is junk. I mean, the truth is a very rare and costly and precious sub-kind of information because, you know, to write a truthful story, you need to invest a lot of time and effort and money and research and fact-checking, whereas fiction is very cheap.
Verbal communication follows the same economics as the written word. Speaking in truth, in concrete data, demands that same investment of time, effort, money, research, and fact-checking.
Conversely, it’s very easy to speak fiction, but it might get you into trouble in face-to-face conversation. Unless, remarkably, you are you-know-who. I won’t go there, don’t worry.
You mitigate that trouble by hiding, partially, behind the veil of the internet. You can say things on Facebook, Instagram, or Twitter and largely not get in trouble. Some platforms seem almost interested in amplifying junk information.
The New York Times also contributes to this problem. On September 24th, it published an article titled Anti-Aging Enthusiasts Are Taking A Pill To Extend Their Lives. Will It Work?
Through one lens, that is the phrasing of an article about emerging longevity science that might be useful or helpful to readers, and at least interesting. The 548 comments suggest that readers thought so too.
Through another lens, it is a story about unproven science that mentions (and boosts) internet health personalities Dr. Peter Attia and Bryan Johnson. The article describes some of the facts, some of the fictions, and otherwise drops information in your hands to let you be the decider. Attia and Johnson do the same, via their podcasts. And the Times throws in a bit of a disclaimer:
There isn’t data on how many people use rapamycin for anti-aging purposes, since the drug is taken off label or purchased from overseas providers. Like Mr. Berger, some of the other users interviewed for this article said they believed rapamycin has provided mild benefits, such as helping them lose weight, alleviating their aches and pains or even causing them to regrow dark hair years after going gray.
But while users are optimistic and the evidence that rapamycin can increase longevity in animals is promising, the research in humans is thin and long-term side effects are uncertain. In the few studies in which rapamycin has been compared to a placebo, tangible benefits are hard to come by.
So, for humans, there’s not much that’s robust and concrete in the article. Was the article worth the space it consumed on the site? And was it worth the 548 comments from readers all over the world? Keep in mind that those comments represent 548 choices to broadcast opinions into digital outer space.
In a sense, the algorithm (editors, user data, etc.) of the New York Times publishes an article that might be useful and confusing and misleading all at once.
When that article hits any sort of feed-oriented social platform, you can forget about any real sense of truth. It’s not that truth is unattainable so much as beaten and buried by the algorithm in favor of users posting about rapamycin. Posting anything - the specifics basically don’t matter.
The more people engage with the topic, the more it is worthy of continued amplification within a given platform.
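Reduced to its crudest form, that loop might look something like this sketch - hypothetical, mine alone, and nothing like a real platform’s proprietary ranker in scale or sophistication:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Reward whatever generates more activity. Note what is absent:
    # nothing here asks whether the post is true, useful, or junk.
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most-engaged posts surface first, which earns them more
    # engagement, which keeps them surfaced: the amplification loop.
    return sorted(posts, key=engagement_score, reverse=True)
```

Truth never enters the calculation. Velocity does.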
Three major platforms - Instagram, Threads, and Twitter - show posts that scroll off the page in perpetuity. There’s very little real conversation.
Who’s to blame? Well, you can blame the algorithms, of course. You can blame people for fueling those algorithms. And you can blame a traditional media outlet for publishing something that seems like it would bait lots of people into broadcasting unsubstantiated opinions - junk information.
Can We Agree On The Rules?
Here, again, I’m reminded of the No Vehicles In The Park game that fascinates me months after encountering it.
I wrote about the game in April:
The game was No Vehicles In The Park, in which you determine whether a given scenario violates the rule. You needed to resolve two questions to play the game:
What are vehicles?
What are parks?
Sure, these questions seem simple - almost stupid. The game provided some guidance around the approach.
In essence, what the game asked you to do was ignore any preconceived understanding of other rules, exceptions, and violations from past experience.
David Turner, the creator, wrote about the game’s intentions and his analysis of the results. Two of his observations stood out to me in reference to what I’m covering today:
Vehicles are simple compared to political philosophies. They're also mostly context-free; understanding a political tweet might require reading an entire thread, together with a history book. I know of at least one case where someone interpreted the sending of certain flowers as a death threat (and I'm not entirely sure, in the context of that relationship, that they were wrong to do so; nor am I sure that they were right).
I hope that this game has made you reconsider your views on content moderation. Maybe you will decide to live with the nebulosity, but have more sympathy for the refs. Maybe you will decide that you would prefer to live with the consequences of less moderation. Maybe you will think really hard about decentralization (which is not a panacea). Maybe you will give up on social media altogether.
These bits of text talk about moderating content. All major social platforms have usage policies that you agree to when you create accounts and post within their walls. You can’t do X, you can’t say Y.
What if, as Harari said, we moderated and regulated the algorithms instead?
At first glance, this doesn’t seem so complicated. You might create a rule that removes a tweet or post if it is shared by users other than the originator more than 10,000 times in 30 minutes. Easy. Every tech company tracks that sort of thing.
But that rule is unbending in the face of an exception, right? What if the President of the United States shares a tweet that says the State of the Union address happens at 10 pm ET / 7 pm PT?
That tweet is simple and factual and likely to be amplified by lots of users. Under the regime of the oversimplified rule I just mentioned, the President’s tweet would be wiped away.
Is that fair? It doesn’t seem so. Should it be an exception? Probably. And, ok, what else needs to be an exception? How do we determine exceptions? Who decides?
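Here’s that oversimplified rule as code - a hypothetical sketch where the thresholds, the exempt-accounts list, and the function itself are mine, purely for illustration:

```python
VIRALITY_LIMIT = 10_000   # shares by users other than the originator
WINDOW_MINUTES = 30       # the window in which those shares are counted

# Who belongs on this list? Who maintains it? Who decides?
EXEMPT_ACCOUNTS = {"POTUS"}

def should_remove(author: str, shares_in_window: int) -> bool:
    # The naive rule: wipe anything shared too widely, too fast.
    if author in EXEMPT_ACCOUNTS:
        return False      # exception #1 of... how many?
    return shares_in_window > VIRALITY_LIMIT
```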
It doesn’t take long to see that it’s at least as difficult to control the algorithm’s mechanics as it is to control the content.
By the way, just because an algorithm doesn’t let you do something doesn’t mean the root problem is resolved. Plenty of folks - prominent and otherwise - want uncensored AI tools, and unadulterated free speech.
There isn’t a consensus on how to comprehensively get at the root of ring-fencing bad information, and frankly there may never be.
The Uncertain Road Ahead
How could such a consensus occur?
As a start, there needs to be an explicit resolution to a collective action roadblock. People need to consider interests beyond their own, even if there are some personal inconveniences.
That goes for people like you and me when we talk. That goes for companies like Apple, Google, and OpenAI when they release commercial tools disguised as democratizing platforms. And it goes for quasi- or formal governments like China and the United States (and others) that should sit at the table before we barrel down the road without brakes, completely unwilling to disconnect from our own views.
And to get there, achieve consensus, and resolve collective action, we need to recall and practice how to talk to one another. Side-by-side, with bread in hand.