The Teardown
Tuesday :: December 3rd, 2024 :: Approx. 4 min read
👋 Hi, this is Chris with another issue of The Teardown. In every issue, I cover how we interact with technology that powers our day-to-day lives.
If you’d like to get emails like this in your inbox every week, hit the subscribe button below.
Australia’s Country-Level Social Media Ban
Do you read the Terms of Service?
What about a Privacy Policy?
Do you click Accept or Reject All Cookies without wondering what they do?
Do you select your legitimate birthday when presented with the “Verify Your Age” question?
Lots of people wave their hands over these friction points. Me too. I don’t care to read the legalese. I mostly accept whatever cookies want to do to me. And I’ll tap a random birthday that gets me through an age prompt (e.g. 1/1/1950) rather than waste time picking the month, day, and year of my glorious birthday.
I’m lying to the company in that last user path, right? I’m not 74. I’m 42. Does that matter? Probably not for me.
Instead, let’s assume I’m 15. I live in Australia. I’m a high-schooler who may or may not have a cell phone. Current trends suggest that I do and that my phone’s home screen displays a plethora of apps.
What apps should I be allowed to use? In the eyes of Australia’s government, not many social media apps that exist today (emphasis mine):
Australia has imposed a sweeping ban on social media for children under 16, one of the world’s most comprehensive measures aimed at safeguarding young people from potential hazards online. But many details were still unclear, such as how it will be enforced and what platforms will be covered.
After sailing through Parliament’s lower house on Wednesday, the bill passed the Senate on Thursday with bipartisan support. Prime Minister Anthony Albanese has said that it puts Australia at the vanguard of efforts to protect the mental health and well-being of children from detrimental effects of social media, such as online hate or bullying.
The law, he has said, puts the onus on social media platforms to take “reasonable steps” to prevent anyone under 16 from having an account. Corporations could be fined up to 49.5 million Australian dollars (about $32 million) for “systemic” failures to implement age requirements.
Neither underage users nor their parents will face punishment for violations. And whether children find ways to get past the restrictions is beside the point, Mr. Albanese said.
There’s much to unpack. I’ll keep it short, though.
Can You Enforce Bans? Who Will Enforce Bans?
I support bans in concept. Just about everyone agrees that extensive social media use does more harm than good. So, let’s reduce or erase that harm.
Australia believes that platforms should enforce an age-based legal restriction. You can’t use an app (e.g. Instagram) if you’re not at least 16.
It is something. I think of this move as better than nothing. But there are many glaring holes in this regulatory action. To highlight a few:
Users can and will lie about their age. There’s plenty of algorithmic sniffing that catches these sorts of behaviors, but not all. How comprehensive is that sniffing? See below…
Big platforms use plentiful resources to enforce policies and rules. Small platforms may not have those resources. Do these regulations force small platforms (well-intentioned or not) out of business?
Big platforms need big fines. Scott Galloway thinks $50B. Small fines are faint pencil marks on their balance sheets. Meta can pay a $5M fine without blinking. It can pay much more than that without blinking.
What, exactly, is a social media platform? Some are obvious. Instagram. TikTok. What about YouTube? Discord? Substack? I remain convinced that half-height walls with doors for exceptions start sturdy and ultimately weaken over time.
Here’s another question: what about device makers?
Apple makes iPhones. Teens use those phones to access their social media app portfolios. Can Apple do something?
Here’s my idea (using Apple as an example):
1. A parent buys an iPhone and creates an Apple account for their kid.
2. That account requires the kid’s birthdate. Let’s assume that parents don’t care (as much) to lie about their kid’s age when asked.
3. Apple’s parent-controlled device restrictions include a whitelist check. In short, the kid can’t download or use an app unless it’s on the parent’s approved whitelist.
4. Furthermore, the device requires that the app do the following (see the sketch after this list):
   - Expose a clear minimum age flag to the device.
   - Let the device check that age flag as a second-level check.
5. All of the above works like a phone tied to a corporate network. The kid uses the phone as they please within a defined box, remote-controlled by the parent.
6. Important: the phone can’t be wiped of its Apple ID and age validator without parental consent.
You want (6) because, otherwise, kids will wipe devices and move on.
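To make the gate concrete, here’s a minimal sketch of the two device-side checks in Swift. Everything in it is hypothetical: the types, the minimum-age flag, and the evaluateLaunch function are illustrations of the idea, not real Apple APIs.

```swift
import Foundation

// Hypothetical sketch of the device-side gate described above.
// None of these types or functions are real Apple APIs.

struct ChildAccount {
    let birthDate: Date            // entered by the parent when the account is created (step 2)
    let approvedApps: Set<String>  // the parent-maintained whitelist of bundle IDs (step 3)
}

struct AppManifest {
    let bundleID: String
    let minimumAge: Int            // the "clear minimum age flag" the app exposes (step 4)
}

enum LaunchDecision {
    case allowed
    case blockedNotWhitelisted
    case blockedUnderMinimumAge
}

/// Runs on the phone before an app is installed or opened.
func evaluateLaunch(of app: AppManifest,
                    for account: ChildAccount,
                    on date: Date = Date()) -> LaunchDecision {
    // First-level check: the app must be on the parent's whitelist.
    guard account.approvedApps.contains(app.bundleID) else {
        return .blockedNotWhitelisted
    }

    // Second-level check: the account's age must meet the app's declared minimum.
    let age = Calendar.current.dateComponents([.year],
                                              from: account.birthDate,
                                              to: date).year ?? 0
    return age >= app.minimumAge ? .allowed : .blockedUnderMinimumAge
}
```

The point of the design is that both checks run on the device, keyed off an account the parent controls, so the platforms’ own age-sniffing becomes a backstop rather than the only line of defense.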
Of course, my idea isn’t foolproof. I’m sure some of you will counter-argue with other holes. Please do.
But I’m perplexed by how the constant attention on social media platforms glosses over the folks making the devices. Apple is unique and powerful in its position. Google, too, though its advertising cash cow is an obvious conflict of interest.
So, if we’re talking about responsibilities, maybe we should talk about the makers too.