In late 2022, it sometimes seems that social media rules the world of Internet business. Headlines bristle with baby-blue bird icons as Elon “Megabucks” Musk tries to figure out how to run Twitter.
It's tempting for old-school techies to file all social media under “teenage angst” — a plethora of platforms oft consumed by youth transfixed by their mobile phone screens. But do phone zombies matter when it comes to real-world tech concerns?
Yes, because social media is at the forefront of governmental actions to limit perceived social damage. And recently, we've seen a renewed effort by governments in diverse regions.
Chief digital officers (CDOs) must take note. Disparate governmental actions indicate a serious level of disapproval regarding information and opinions transmitted by said platforms. And subsequent actions can serve as legal precedents.
The situation is fluid and ongoing. Let's take a look at recent claims, counterclaims, and proposals.
Singapore
The Southeast Asian city-state has a reputation for draconian measures. Still, in practice, the Singapore government often exemplifies reasonable governmental restraint in censorship.
A new law, known as the Online Safety (Miscellaneous Amendments) Bill, follows the Protection from Online Falsehoods and Manipulation Act (POFMA) of 2019. It “empowers the IMDA to deal with harmful online content accessible to Singapore users, regardless of where the content is hosted or initiated,” said a report on Channel News Asia.
“Social media sites will be required to block access to harmful content within hours, after a law to strengthen online safety was passed by Parliament on Wednesday (Nov 9),” said CNA. “If an online platform refuses to take down harmful content, the Infocomm Media Development Authority (IMDA) can issue a direction to Internet access service providers to block access by users in Singapore.”
The new regulation thus follows a global trend in such legislation: emphasis on local culture and customs.
The E.U.
Individual European countries and the E.U. have levied fines on social media platforms for years. As CDOTrends reported in May of this year: “Margrethe Vestager, executive vice president for the European Commission (the executive arm of the E.U.), speaking at the International Competition Network conference in Berlin, said, ‘The DMA (Digital Markets Act) will enter into force next spring and we are getting ready for enforcement as soon as the first notifications come in.’”
The DMA is pending, as is another E.U. directive, the Digital Services Act. “The DSA will be directly applicable across the E.U. and will apply 15 months after entry into force or from 1 January 2024, whichever comes later,” said the European Commission in a statement.
The DSA is “an E.U. regulation to modernize the e-Commerce Directive [of 2000] regarding illegal content, transparent advertising, and disinformation.” The pending regulation is also specifically Eurocentric, according to the European Commission website: “The responsibilities of users, platforms, and public authorities are rebalanced according to European values, placing citizens at the center.”
Among other things, the regulation promises to provide:
“For society at large:
• Greater democratic control and oversight over systemic platforms
• Mitigation of systemic risks, such as manipulation or disinformation”
Also on the agenda is a new definition: “Very large online platforms [which] pose particular risks in the dissemination of illegal content and societal harms.” The statement adds a metric: “Specific rules are foreseen for platforms reaching more than 10% of 450 million consumers in Europe.”
That threshold works out to 45 million users, so it's not difficult to foresee which platforms fall into this particular category.
Definitions and metrics
Unpacking the E.U. regulation reveals concrete metrics that may or may not feature in other jurisdictions. Among others:
“• measures to counter illegal goods, services, or content online
• new obligations on traceability of business users in online marketplaces
• effective safeguards for users, including the possibility to challenge platforms’ content moderation decisions
• ban on a certain type of targeted adverts on online platforms (when they target children or when they use special categories of personal data, such as ethnicity, political views, and sexual orientation)
• transparency measures for online platforms on a variety of issues, including the algorithms used for recommendations
• obligations for very large platforms and online search engines to prevent the misuse of their systems by taking risk-based action and by independent audits of their risk management systems
• access for researchers to key data of the largest platforms and search engines to understand how online risks evolve”
How and when this list of measures might be enforced is speculative, but even suggesting banning targeted advertising should garner some attention from “very large platforms.”
The U.S.
Non-U.S. companies can expect anti-tech rhetoric Stateside, especially during election seasons. One recent example comes from two U.S. politicians representing states in the north and south of the nation.
In an op-ed published by the Washington Post, Senator Marco Rubio (Florida) and Representative Mike Gallagher (Wisconsin) targeted a popular phone application. “The app can track cellphone users’ locations and collect internet-browsing data,” wrote the politicians, “even when users are visiting unrelated websites.”
Unsurprisingly, the app in question isn't based in the U.S. It's the popular short-form video hosting service owned by Chinese company ByteDance.
Federal opprobrium
Rubio and Gallagher have an ally in Brendan Carr, a commissioner at the Federal Communications Commission. In June, Carr tweeted: “I’ve called on @Apple & @Google to remove TikTok from their app stores for its pattern of surreptitious data practices.”
“The U.S. government should ban TikTok rather than come to a national security agreement with the social media app that might allow it to continue operating in the United States, according to Carr,” said CNN.
“The Committee on Foreign Investment in the United States, a multi-agency government body charged with reviewing business deals involving foreign ownership, has spent months negotiating with TikTok on a proposal to resolve concerns that Chinese government authorities could seek to gain access to the data TikTok holds on U.S. citizens,” said CNN.
Carr added more salt to his comments in a mid-November television interview, as reported by Yahoo: “At the end of the day, TikTok is China’s digital fentanyl,” Carr said. “Again, it’s not the videos, but it’s pulling everything from search and browsing history, potentially keystroke patterns, biometrics, including face prints and voice prints.”
It's worth noting that the use (and potential misuse) of the biometrics cited by FCC Commissioner Carr is not exclusive to TikTok.
Key takeaway
Media outlets and governments have been discussing proscriptions against social media for years. It's tempting for CDOs to pay little heed.
But in late 2022, as the post-pandemic world slowly reopens for all businesses, CDOs need to pay attention to potential legislation. Watch this space.
Stefan Hammond is a contributing editor to CDOTrends. Best practices, the IoT, payment gateways, robotics, and the ongoing battle against cyberpirates pique his interest. You can reach him at [email protected].
Image credit: iStockphoto/wildpixel