
News & Insights

April 2024

Japanese Government Planning to Regulate Apple’s and Google’s App Stores

Report from Nikkei Asia

In Brief – Japan’s ruling Liberal Democratic Party has approved a draft bill to regulate the largest mobile app store businesses rather than rely on traditional antitrust enforcement processes. The stated goal is to deter anticompetitive conduct and allow regulators to respond to abuses much more quickly. Under the proposed legislation, the Japan Fair Trade Commission (JFTC) would have the authority to fine mobile app store giants, meaning Apple and Google, up to 20% of their domestic app store sales if they do not comply with a set of requirements, including opening their mobile ecosystems to third-party app stores and payment systems. The mobile ecosystem giants would be permitted to oversee the security, data management, and protections for young users provided by the alternative app stores. The proposed fines far exceed the 6% authorized for similar “exclusion-type private monopolization” conduct under Japan’s Anti-Monopoly Law.

Context – Japan, home to a robust digital economy with a unique mix of domestic and foreign-based platform giants, has been pursuing a middle ground on digital regulation, less aggressive than the EU’s approach. This may be because the country’s five digital giants are Amazon, Apple, Google, Rakuten, and Yahoo Japan – the latter two based in Japan. For example, legislation enacted in 2020 to protect small businesses from unilateral changes in rules and fees on the marketplace platforms of all five has been broadly enforced by the Ministry of Economy, Trade & Industry. Moving to regulate just the mobile ecosystem giants, Apple and Google, changes the trajectory in Japan by targeting only US-based digital giants. The bill is being compared to the EU’s Digital Markets Act (DMA), which likewise regulates no home-market firms, but they are not that alike. In short, the DMA is much broader in scope, covering far more than app store rules. However, the expected DMA enforcement battle between Apple and the European regulators will cover much of the same ground expected here.

Zuckerberg Likely Won’t Face Liability in Social Media Addiction Lawsuits

Report from Bloomberg

In Brief – Federal District Judge Yvonne Gonzalez Rogers has dismissed the effort by two dozen plaintiff groups to impose personal liability on Meta CEO Mark Zuckerberg in suits accusing social media companies of knowingly addicting and harming children with their products and services. Her decision does not affect claims against Meta or any of the other social media companies targeted in the hundreds of such lawsuits filed against the platforms. Some of the suits alleged that Zuckerberg was repeatedly warned that Instagram and Facebook weren’t safe for children but ignored the findings and chose not to share them publicly. Judge Rogers ruled that Meta’s top executive couldn’t be held liable just because he’s the public face of Meta, finding that to hold otherwise would create “a duty to disclose for any individual recognizable to the public,” and that she would “not countenance such a novel approach here.” However, she did grant the plaintiffs the opportunity to amend their complaints, which they said they would take up, vowing to “not relent until Mr. Zuckerberg is held accountable for his actions.”

Context – Judge Rogers gained prominence in tech circles during the massive antitrust trial pitting Epic Games against Apple in 2021, and she is currently overseeing Epic’s effort to have her police Apple’s plan to allow alternative payment options and set developer fees. She is overseeing the growing collection of lawsuits, including by school districts across America (and now Canada), that try to circumvent Sec. 230 liability protection for social media platforms by arguing that alleged harms to young people are caused by the conscious design of the platforms to encourage longer, more intensive, and therefore allegedly more dangerous use. In November, she issued a ruling that rejected the companies’ argument that the suits should be dismissed en masse based on Section 230 and the First Amendment. Although she did dismiss some of the plaintiffs’ claims, the “social media addiction” cases are proceeding in her court, in a California state court, and through lawsuits filed by dozens of State AGs suing Meta alone.

Bill Banning TikTok Passed By Congress – Expect Courts to Decide Fate Next Year

Report from the New York Times

In Brief – The House-passed bill forcing China-based ByteDance to sell its US TikTok business within a year or face what is effectively a ban on its operations has cleared the US Senate and will be enacted into law. When the standalone measure overwhelmingly passed the House in March, its fate in the Senate was uncertain. Those hurdles evaporated once it was included in the bill funding aid to Ukraine and Israel, a top priority of President Biden that was fast-tracked to pass the Senate without amendments. Looking forward, if selling TikTok does not work out within the initial nine-month window, or within the additional 90-day extension, and the odds are against it given the many potential hangups, then court challenges will take center stage. Here at PEI, we’ve been saying for months that the biggest hurdle for a TikTok ban is the First Amendment. Back in 2020, three federal judges granted injunctions blocking President Trump’s efforts to ban TikTok and WeChat. While their rulings (here, here, here) covered more than free speech jurisprudence, those sections are helpful background reading, as is this sharp review. Quite a time for the General Counsel of US TikTok to reportedly be leaving the role.

Context – On top of its huge setback in the US, TikTok is facing an escalation of its regulatory problems in Europe, with the European Commission opening an investigation of the TikTok Lite app in France and Spain. TikTok Lite is a limited-function version of the app targeted at jurisdictions with larger numbers of users who have limited broadband or smartphones with less memory. The Commission, which regulates TikTok under the Digital Services Act because it has more than 45 million monthly active users in the EU, is exercising its authority over the new version of the app and is particularly focused on a rewards function that attempts to encourage greater use through financial-type rewards. The Commission argues that the new version was required to undergo a “safety test” and demanded immediate submission of TikTok’s threat self-assessment, which the company says it sent forthwith.

EU Commission Expected to Approve Apple Tap-to-Pay Offer By Summer

Report from Reuters

In Brief – Reports from within the European Commission point to the regulators accepting Apple’s offer to give third-party mobile wallet and payment services providers access to “near field communication” (NFC) technology on iOS devices to settle an antitrust complaint initiated in 2022. The highlight of the commitments is to give payment services access to most NFC components, free of charge and on non-discriminatory terms, in what Apple calls “Host Card Emulation (HCE) mode,” which the iPhone giant says provides them with the ability to securely store payment credentials and complete transactions using NFC. The company has pledged to provide third parties with added functionality, including the ability to be set as a default payment app and access to authentication features such as Face ID. The Commission began gathering stakeholder feedback on Apple’s settlement offer back in January, and the company is reported to have made some tweaks to its proposal to address concerns raised by the Commission.

Context – Among the Digital Markets Act’s 18 mandates on the largest “gatekeeper” platforms are non-discrimination and interoperability requirements that will force Apple to open its “walled garden”. Apple has never been shy about how they control the popular ecosystem, telling users in clear terms that they design the system to promote privacy, security, and controlled user experiences. The headline DMA issue for Apple has been allowing alternative app stores and “sideloading”. Apple claims their DMA plan meets that demand, but the plan includes new fees and safety standards, and Apple will vet apps taking advantage of the new openness. Apple’s critics, including many of the largest app developers, are denouncing the plan. It is now under review by a seemingly unsympathetic Commission, which has a year to rule on its acceptability. The prospect that the iPhone giant and the Commission can agree on a way to settle the admittedly narrower NFC payments access problem suggests that the DMA process is not certain to end in a non-compliance finding and court challenge.

X Reverses Course in Brazil and Will Comply with Orders to Restrict Accounts

Report from Bloomberg

In Brief – Social media platform X has reversed course and said that it would comply with court orders issued by Brazilian Supreme Court Justice Alexandre de Moraes to restrict some X accounts as part of an ongoing judicial investigation of online disinformation and hate speech. While X had been complying with orders from Justice de Moraes to block several accounts as part of a broader clampdown on social media abuses, Elon Musk announced in a series of inflammatory online posts in early April that the platform was challenging the legality of the judge’s online bans and would restore the accounts. De Moraes responded by opening an investigation of Musk over possible obstruction of justice and included him in his inquiry into online disinformation campaigns. De Moraes has also been heading investigations into charges that former President Jair Bolsonaro had planned a military takeover and banned the former president from serving in public office until 2030. As a result, the judge is an increasingly high-profile target of criticism in conservative circles, especially in Brazil.

Context – Elon Musk, the most followed person on X, has a well-known communications style. While he vocally defends “free speech”, including on X, his actual policy has been to comply with “local laws” regarding legal speech. Following local laws means different things in markets like India, Turkey, Australia, or the EU. And they are all quite different from the US standard. In Brazil, X may proceed with entirely valid legal challenges of de Moraes’s orders, but complying with them in the meantime seems consistent with Musk’s content moderation stance. The broader political conflict in Brazil, including charges that progressives are trying to ban conservative voices online as misinformation and hate, is creating fodder for those promoting similar narratives in other jurisdictions, including the US. Accordingly, the US House Judiciary Committee, where Republicans have been investigating “weaponization” of the US Government, has asked Musk’s X to report on the events in Brazil.

EU Data Protection Board Rejects Use of “Pay or Consent” for Targeted Ads

Report from TechCrunch

In Brief – The European Data Protection Board (EDPB), made up of the national data protection authorities of the EU member states, says that large digital platforms should give users an option to use their services for free without targeted advertising. Whether so-called “consent or pay” models comply with the GDPR became a flashpoint last year when Meta, having come out on the wrong side of a series of European court and regulatory rulings, offered EU users an ad-free paid subscription option for Facebook and Instagram as the means to show user consent for targeted ads. The company says that the European Court of Justice had indicated that offering a paid, ad-free subscription to a digital service, such as those from YouTube or Spotify, provided an acceptable form of user consent for digital advertising as part of a free version. Many privacy advocates argued that individuals should not have to pay to avoid behavioral advertising and criticized the cost of the Facebook and Instagram offerings as well. The EDPB opinion calls on large digital platforms to give people the option of free versions that include advertising without personal tracking, such as through content-based advertising models.

Context – One conundrum around targeted online advertising is that while users tell pollsters that they don’t want to be tracked, very few people will pay to avoid ads. While Meta’s subscription plan will add to its years of litigation around the GDPR, the digital regulatory playing field is rapidly growing. For example, the European Commission now has direct authority over Meta for the Digital Services Act and the Digital Markets Act, and the price of Meta subscriptions is on the table. Not only might EU regulators soon be ruling on prices, but the EDPB shows how it could be done through the backdoor by requiring free versions with advertising that is known to bring lower returns. Finally, while Meta continues to be in the spotlight, the overall issue will eventually impact every level of the digital advertising ecosystem, including other platforms earning advertising revenue, AdTech service providers, and small online ad buyers who most benefit from precise user targeting.

Microsoft Investment in UAE AI Firm has US Foreign Policy Implications

Report from the New York Times

In Brief – Microsoft has made a $1.5 billion investment in G42, a major AI company based in the United Arab Emirates (UAE). The deal was reportedly negotiated under the eyes of the Biden Administration, with conditions intended to bring the company, and the UAE more broadly, onside in the high-tech and foreign policy competition between the US and China. US Secretary of Commerce Gina Raimondo said, “When it comes to emerging technology, you cannot be both in China’s camp and our camp.” Under the deal, Microsoft will allow G42 to sell Microsoft services that use powerful AI chips. In return, G42 is removing Chinese-made technology from its operations, Microsoft’s President Brad Smith will have a seat on the G42 board, and G42 will use Microsoft’s cloud services under security arrangements agreed to by the US government. Smith said the deal reflects the US government’s interest that “the most important technology is guarded by a trusted US company.” G42, backed by UAE state wealth, has major biotech, AI, and surveillance operations, and has been criticized for connections to China.

Context – A mantra of tech regulation advocates is that governments should not make the “same mistakes” with AI that they made with social media, meaning regulate more and sooner. On competition, they say AI is being dominated by the same digital giants. A recent UK CMA report lists Microsoft, Google, Amazon, Apple, Meta, and Nvidia. The EU’s Margrethe Vestager says the choice in AI services should not be “American or American” and Europe is trying to foster local “champions”, with France’s Mistral the current darling. But being giant and compliant is less problematic. FTC boss Lina Khan is studying competition in AI while also calling for regulation of the industry. Microsoft’s global goodwill campaign around regulator concerns was key to winning approval of its Activision acquisition, although it did need to beat the FTC in court. Comfort cooperating with government may put Microsoft at the front of the US AI national champion line. Don’t expect this AI investment to be scrutinized.

Memes and Opinion Fill News Void After Facebook Bans News Media in Canada

Report from Reuters

In Brief – Two unpublished studies in Canada shared with the Reuters news service indicate that after Facebook began blocking news media stories, users engaged more with opinion content, memes, and other “non-verified” sources on current news topics. Meta began blocking the posting and sharing of content from Canadian media companies last summer after the government enacted legislation requiring Meta and Google to pay a government-set fee to media companies when their content appeared on the tech giants’ platforms. Although Google also threatened to block Canadian media stories from its search results, the company instead reached agreement with the Canadian Government to pay $100 million Canadian annually into a fund for media companies. Meta has maintained its news ban despite ongoing criticism from Canadian government leaders and media companies.

Context – The divergence between Meta’s and Google’s responses to government mandates to pay traditional media companies is now the central drama on the issue. When Australia enacted a media payments bill in 2021, Meta and Google both threatened to block news content. Meta did so for a few days, creating a global news media backlash. The Aussie legislation was modified, both companies signed agreements to pay millions to media companies, and the conflict settled down. When Canada enacted a similar bill two years later, Meta went straight to a ban, arguing that its users did not much value news media content and that the media companies undervalued their free distribution on Meta platforms. Meta’s bans remain in place, and traffic to its platforms did not decline. Media sites, however, have lost traffic, and the Canadian researchers bemoan the lower quality of news discourse on Facebook. The next showdowns will come in Australia and California. Meta is nearly certain to block news. Given all the trouble and angst “news” content creates, we wonder if they hope they get the excuse. Google is testing the same and facing expected criticism from political leaders and media companies. We wait with bated breath for the next moves.

TikTok Ban Might Be Attached to House Ukraine and Israel Aid Package

Report from the New York Times

In Brief – Speaker of the House Mike Johnson (R-LA) is reportedly trying to include legislation to force a sale of the US TikTok business by its China-based parent company ByteDance in a legislative package providing military aid to Ukraine and Israel. While the Senate passed a combined Ukraine and Israel aid bill two months ago, the package has been stalled in the House due primarily to opposition to the Ukraine assistance by a sizeable number of conservative Republicans, as well as opposition to military assistance to Israel by some progressive Democrats. Along with Ukraine and Israel aid, the still fluid legislative package may also include humanitarian assistance to Palestinians in Gaza, support for Taiwan, measures to address the US southern border, and a version of the TikTok bill passed in March. While the initial TikTok bill gave ByteDance six months to complete a sale, the new version will allow for up to 12 months. Key Senate Committee Chair Maria Cantwell (D-WA), who recently called for extending the TikTok timeline to 12 months, said she supported the modified measure. While the House is expected to vote on the final components separately, the procedural framework will likely combine them automatically and present them to the Senate as a single measure. President Biden urged quick passage of the aid package.

Context – The initial House TikTok vote showed some opposition from both Republicans and Democrats. And there were clearly some concerns in the Senate as well. Running out the clock in that less-transparent legislative body is a tried-and-true strategy. However, if a TikTok ban ends up as part of a Ukraine-Israel package that actually passes the House, it will likely circumvent any Senate hurdles. But Congress was never the main firewall for TikTok or its users. The First Amendment and federal courts are. When the Trump Administration tried to ban TikTok and WeChat in 2020, three federal judges imposed temporary halts, primarily on First Amendment grounds. Most notably, a judge ruled that Americans had the right to use WeChat, and that platform is clearly a part of China’s digital influence and censorship regime. TikTok says they are not.

Meta Oversight Board to Consider Policies on AI-Created “Deepfake” Nudity

Report from Reuters

In Brief – The quasi-independent Meta Oversight Board, made up of noted outside experts from around the world, has agreed to review how the company handles explicit, AI-generated images. The Board announced that it is looking into two specific instances, one involving Instagram in India, and the second involving Facebook in the US, where the company was confronted with AI-generated images of female public figures shown in varying degrees of nudity. In both cases, users asked that the images be taken down, and eventually both were. However, the Board says that the image of the American woman was taken down by Facebook much more quickly than the image made to look like a well-known Indian woman was taken down by Instagram in that market. Meta says that it uses a mix of artificial intelligence and human reviewers to detect sexually suggestive content. The Board is taking public comments until April 30, including on the harms of deepfake porn, its proliferation in different regions, and the weaknesses of Meta’s approach to dealing with it.

Context – Hundreds of millions of people post content on digital platforms every day… endless video, audio, images, and text. The platforms all have rules, some more strict, some less strict, trying to create their version of a successful platform. While debates about rules and content moderation decisions often focus on a few politically salient topics, the number of nuances, grey areas, and mind-boggling scale issues never ends. Super smart tech policy expert Mike Masnick describes the content moderation job as “impossible to do well”. The Meta Oversight Board is the most innovative attempt to improve the process without resorting to government orders. Ironically, while the panel is delving into the increasingly high-profile issue of “deepfake” nudes, Meta has historically been more aggressive restricting nudity and sexually explicit content than most big social media platforms. In fact, if you scan past Oversight Board decisions, you’ll see that they’ve taken on a number of cases involving policies on nudity, generally encouraging Meta to be more flexible, more nuanced, and to better consider context.

Google Exploring Hiding News Stories in California to Address News Link Tax

Report from AP

In Brief – Google has announced that it is testing, for a small share of users in California, a modified version of Google search that does not deliver links to the kind of news stories that would require Google to pay news media companies if the state enacted the California Journalism Preservation Act (CJPA). The CJPA forces large digital platforms such as Facebook and Google to pay news media companies a “journalism usage fee”, which Google calls a “link tax”, when news content they publish appears on the platforms next to advertising. The measure was passed by the State Assembly last year. The State Senate has until the end of 2024 to pass the measure before the legislation expires and the process would need to restart in 2025. Meta has said it would block covered news links in California if the law were enacted.

Context – A confrontation over legislation that forces Google, Meta, and other giant digital platforms to pay media companies when users post, share, or access news content has been building in the US for years. The divergence between Meta’s and Google’s policies, which has manifested in Canada, is the central drama. Backing up, Australia enacted a forced media payments bill in 2021. Meta and Google threatened to block news content, and Meta did for a few days. The Australian legislation was modified, both companies signed agreements to pay millions to Australian media companies, and the conflict settled down. Canada enacted similar legislation last year. Again, both companies had threatened to cut off news links. Meta did. Its blocks remain in place. Traffic to its top platforms did not decline, but media sites have suffered traffic losses. Google instead agreed to pay $100 million Canadian annually into a news fund. Back in Australia, the platforms’ media deals need to be renewed. Meta is expected to stop payments, implement a Canada-style policy, and set off a row. If California, or the US Congress, were to enact a news company payment regime, Meta’s course seems set. Google needs to consider whether it wants to be the sole payer, or whether a Meta-like policy works for it. Is TikTok willing to pay?

UK CMA Outlines Latest Competition Concerns with AI Development

Report from TechCrunch

In Brief – The UK Competition and Markets Authority (CMA) believes that investments by the largest digital companies in AI startups are raising the prospect that the emerging industry could become dominated by the same digital giants that exercise immense market power in the internet economy. A new report by the CMA on the development of AI foundation models, which follows an initial report released last fall, says that an “interconnected web” of partnerships and investments within AI involves a handful of the largest tech firms that may be able “to shape these markets in their own interests.” Microsoft, Meta, Google, Apple, Amazon, and AI-chip giant Nvidia are highlighted, including Microsoft’s massive backing of OpenAI and French leader Mistral, and both Google’s and Amazon’s investments in Anthropic, a top OpenAI challenger. Chipmaker Nvidia has backed dozens of startups, including LLM builder Cohere.

Context – One mantra of tech regulation advocates is that governments should not make the “same mistakes” with AI that they made with social media, apparently meaning regulating more and sooner. On competition policy, they often say AI is being dominated by the same digital giants. However, startup OpenAI is the leader in LLM chatbots, having outperformed the homegrown models of the internet giants, while Clearview AI pulled off a similar trick beating the giants on facial recognition. Regardless, the EU Competition Authority, UK CMA, and US FTC are all on the AI beat, in particular studying Microsoft’s relationship with OpenAI. The CMA’s reticence to trust the market on AI is another example of the UK’s schizophrenic effort to be a pro-innovation alternative to the EU while pursuing its own flavor of aggressive regulatory interventions. The proposed UK Digital Markets, Competition and Consumers Bill, a 2024 legislative priority, parallels the EU Digital Markets Act. The UK’s Online Safety Bill (OSB), passed last year, is the UK take on the EU Digital Services Act. And how long can the UK Government press forward with dire AI safety warnings while claiming it won’t regulate AI “in the short term”?

Progressive Groups Call for More Policing of Election Year Misinformation

Report from Washington Post

In Brief – A coalition of progressive advocacy groups is calling on ten major social media companies, including Meta, Google, Reddit, TikTok, and X, to quickly institute more aggressive content moderation practices to address what they claim will be a swell of dangerous political propaganda as more than 60 countries hold elections this year. The organizations, which include groups like Color of Change and GLAAD that have long decried digital giants for failing to combat online hate and misinformation, accuse the platforms of weakening their policies meant to fight political misinformation, and want them to reinstate election-integrity policies and enforcement teams, including restricting content around what they call the “Big Lie” that the 2020 US election was fraudulent. The groups also claim that AI is a major misinformation threat and want the platforms to prohibit deepfakes, label AI-generated content, and be transparent about the data powering the AI tools the platforms use to enforce their content rules.

Context – This letter is a straightforward example of why the US Congress has not, and will not, make meaningful changes to Sec. 230 anytime soon. (File it under “Things that could happen if the Senate filibuster is eliminated”.) One party’s activists want platforms to aggressively police what they claim is hateful and deceptive content. The other side wants the platforms to allow more unfettered speech, arguing that content moderation has been a tool for progressive censorship. Savvy observers are instead waiting for rulings from the US Supreme Court on two issues at the nexus of government policy, social media content moderation, and the First Amendment. And while government direction of social media content moderation in the US is constrained by Sec. 230 and the First Amendment, the EU has enacted its Digital Services Act to regulate how digital platforms deal with objectionable content. They claim it is not a censorship regime at all, although the European Commission is investigating X for not complying with DSA rules regarding user reactions to last October’s Hamas attack on Israel.

Key Senate Committee Chair Supports Extending Time for TikTok Divestiture

Report from Reuters

In Brief – The chair of the Senate Commerce Committee has said that the Senate might amend the House-passed bill that aims to force China-based ByteDance to sell TikTok by doubling the length of time given to find a buyer. The Protecting Americans from Foreign Adversary Controlled Applications Act passed the House by a vote of 352 to 65, but there was bipartisan opposition from progressive Democrats and libertarian-leaning conservative Republicans. The legislation’s outlook in the Senate is uncertain, with hurdles including a relatively tight schedule for the remainder of the 118th Congress, ample opportunity for Senators to delay proceedings in committees and create procedural complications, and likely wariness of the politics due to the large number of TikTok users. Chairwoman Maria Cantwell (D-WA) said that she believed extending the deadline for ByteDance to sell TikTok from six months, which was in the House bill, to a year, would help “guarantee success”, and congressional staffers noted that the change would push the deadline beyond the 2024 election.

Context – Running out the clock in the Senate is a tried-and-true legislative strategy. Not only does changing the House-passed bill indicate that there will be Senate committee action, which itself takes time with hearings and markups, but changes might mean more open floor processes, and then the need to resolve differences between the bills. And every day gets closer to the election on an issue that is divisive on both sides of the aisle and brings great uncertainty. Beyond the politics and the legislative strategies, the biggest issue remains the First Amendment. Don’t forget that when the Trump Administration tried to ban TikTok in 2020, they also targeted WeChat. TikTok’s algorithms might be influenced by the Chinese Government, but WeChat is definitely a part of China’s digital censorship and influence regime. The WeChat ban was nevertheless halted by a federal judge. Back during the height of the Cold War, the US Communist Party had a daily paper. It was protected by the First Amendment. It just wasn’t that popular. Serious constitutional questions were left unanswered back in 2021.

Washington State Judge Blocks Use of AI-Enhanced Video in Criminal Trial

Report from NBC News

In Brief – A Washington state judge has barred the use of video enhanced by artificial intelligence as evidence in a criminal case involving the shooting of three people outside a bar in 2021. The defendant, who argues that the shootings were in self-defense, was attempting to submit as evidence cell phone videos from the scene that were processed using a tool from Topaz Labs, an AI service that the company says is most often used by film studios to improve video clarity. King County Superior Court Judge Leroy McCullogh prohibited the videos as novel technology that uses “opaque methods” to represent “what the AI model ‘thinks’ should be shown” and noted “the non-peer-reviewable-process used by the AI model.” A forensic video analyst retained by the prosecutors said the video “contains the illusion of clarity and increased image resolution that does not accurately represent the events of the original scene”, while the defense lawyers countered that such claims were “exaggerated and overblown.” Topaz Labs, whose service was used by an independent video production expert retained by the defense, says that it “strongly” recommends against using its AI technology for forensic or legal applications.

Context – So much attention is given to AI chatbots and large language models, including the prospect of “hallucinations” infiltrating court filings, that other AI policy areas like surveillance and facial recognition can fall through the cracks. It was a top issue as the EU wrapped up its AI Act. Before OpenAI burst onto the scene, Clearview AI was the best-known, or at least most notorious, AI start-up. It used billions of images scraped from the internet to build a “search engine for faces” that rivaled tech giants like IBM, Microsoft, and Google. Under global regulatory pressure, the company agreed to only offer its face-based search service to government authorities in select countries, particularly the United States, where it is widely used by security agencies and police. The Washington State judge’s ruling also highlights the potential implications of “black box” AI systems versus so-called “open source” AI systems, especially in court and governmental contexts.

Progressive Social Issue Content Creators Ask Meta to Change “Political” Rules

Report from the Washington Post

In Brief – Hundreds of social media influencers and activists are calling on Instagram to modify policies the company adopted in February to de-emphasize political issues and commentary on its platforms. The mostly progressive advocates argue that Instagram and Threads policies that consider social issues like LGBTQ rights, women’s rights, racial inequality, and disability to be “political” are drastically reducing the online reach of many Black, female, disabled, and LGBTQ content creators who post about issues important to them, often without transparency into the exact standards. When Meta announced the changes to the recommendation policies of Instagram, Threads, and Facebook’s short-video service Reels, it said users would still see posts about political and social issues from accounts they follow, but the algorithms would not show users posts focused on laws, elections, or social issues from accounts the users don’t follow. The content creators want Instagram to change from default settings that limit political content to instead offer users the ability to opt out of seeing political content.

Context – After Twitter, now X, was acquired by Elon Musk and made policy changes decried by progressives, many hoped Meta’s new text-based Threads service would be a welcoming alternative. But Instagram’s Adam Mosseri said Threads would not encourage politics and hard news on the platform because they were not worth the scrutiny and negativity. For more than a year we’ve been highlighting the public evolution of Meta’s thinking about the value of hard news and politics to its business. The standoff in Canada over legislation to force Meta and Google to pay news companies has been a prime example. Meta argued that most of its users don’t value news and that media companies undervalue the ability to reach people over the Meta platforms. So, Meta blocked news in Canada and says it intends to do so wherever it is forced to pay media companies a government-set rate. Maybe Meta’s leadership really does see “hard news” carrying so much baggage that it’s a net negative when you add a tax. Will TikTok be next?

Two of Japan’s Corporate Giants Speak Out on Generative AI Policy

Report from Wall Street Journal

In Brief – Two of Japan’s leading corporations, Nippon Telegraph and Telephone (NTT) and media company Yomiuri Shimbun, have released a joint statement expressing their views on the opportunities and risks of Generative AI (GAI) systems. The two influential companies have been working on AI governance with Keio University. While they recognize the potential benefits of GAI, they take a generally skeptical view of the technology, especially due to its propensity for hallucinations and the spreading of misinformation. They claim that “in the worst-case scenario, democracy and social order could collapse, resulting in wars”. The two companies are also clear that Japan will not legislate quickly to regulate the technology, and they therefore call for “the introduction of rules and joint regulations mainly by the media and various industries, the establishment and dissemination of effective technologies, and making efforts to revise the law”, especially related to copyright.

Context – The big question in AI-related public policy is whether governments are moving toward legislation and direct regulation, or, in the words of the companies’ joint statement, governance through “soft laws”. The EU’s AI Act is the standard for concrete regulation. The two companies list it along with the GDPR, Digital Services Act, and Digital Markets Act but say that it will take a “long time and involve many obstacles” to develop similar laws in Japan. Soft laws, things like voluntary company “best practices” and “safe AI” recommendations like digital watermarks, are in the ascendancy, including in the US, UK, and Japan, and in international forums like the G-7, where Japan is leading the effort. Comprehensive AI legislation in the US will also take a “long time and involve many obstacles”, and President Biden’s massive executive order on artificial intelligence includes many soft law efforts. But it also proposes some unprecedented information sharing and “safety” testing mandates for companies developing the largest AI foundation models. Legal challenges will follow if the federal government attempts to enforce those policies on an unwilling AI developer.

Apple’s EU Music Streaming Antitrust Offer Continues DMA Shadow-Boxing

Report from Reuters

In Brief – The European Commission competition authority is reviewing Apple’s recently released proposal to address the early March antitrust decision that it abused its dominant position in the market for music streaming services to iPhone and iPad users by prohibiting developers from informing their users of alternative ways to make purchases. The Commission fined Apple over 1.8 billion euros and ordered the digital giant to end their anti-steering practices. The antitrust case originated in 2019 with a complaint from Spotify. Under Apple’s music streaming proposal, Spotify and other developers can include in their app a link to their websites giving users alternative ways to purchase digital goods or services outside Apple’s App Store. However, when digital purchases are made through links that originated on the iPhone app, Apple will still impose a 27% fee.

Context – The Commission’s decision on Apple and music streaming apps came out on the eve of the March 7 deadline for the six digital “gatekeepers”, including Apple, to submit their compliance plans for the EU’s Digital Markets Act (DMA). Apple’s DMA plan claims to address many of the same App Store policies and fees raised in the music streaming case, and it also proposes fee levels similar to its music streaming offer. It elicited howls from giant app developers who want far lower Apple fees. To take a step back, European leaders have been very clear that digital platform regulation, including the DMA, is needed because digital services are so important, equating them to regulated industries like banking, telecommunications, and utilities. We’ve been saying for years that price regulation was inevitable on that path, and it’s already a core issue for Apple, whether for music streaming specifically or for apps overall. The European Commission enforcement team now has a year to decide on Apple’s DMA plan and any amendments the company might offer before rendering a decision with potential penalties. Then there will be legal challenges. Meta and Google join Apple in facing deep-dive DMA compliance reviews.

Bill Banning Data Broker Sales to Adversaries Passes in TikTok Bill’s Shadow

Report from Politico

In Brief – Coming on the heels of the US House of Representatives passing a bill to force China-based ByteDance to sell TikTok or face it being banned from US app stores, that body also passed legislation prohibiting data brokers from selling Americans’ personally identifiable information to buyers located in countries that are foreign adversaries, including China, Russia, and Iran. The TikTok bill and the data broker bill were reported out of the House Energy & Commerce Committee together, and both passed the full House with overwhelming margins, but the lower-profile data broker bill passed unanimously, while 65 members, including progressive Democrats and conservative Republicans, opposed the TikTok bill. President Biden has indicated he would sign the TikTok bill if sent to his desk, but its prospects in the Senate are not clear.

Context – Security officials have long raised two big concerns with TikTok being owned by a China-based parent. One is that the app collects a huge amount of personal data about its users, and Chinese national security law requires any business based in the country to grant the government access to its data. Critics argue that ByteDance could be compelled to grant Chinese intelligence agencies access to TikTok data. While TikTok argues that it is not based in China and that US user data is held outside China, many Western governments, including the US Federal Government and its military branches, prohibit TikTok from being installed on official devices. In the House debate on the TikTok ban, some critics argued that the user data issue is not just a TikTok problem and really covers all social media and the data broker industry, and that the Chinese Government could simply buy US user data in any case. The data broker bill is clearly intended to close that work-around and address that argument against the higher-profile bill. The second and more serious concern with TikTok regards its rise as an important platform for viral news and policy content in the US, and that its algorithms could be used by the Chinese regime to influence the US body politic.

Wisconsin Supreme Court Rules That Amazon’s Gig Drivers are Employees

Report from the Associated Press

In Brief – The Wisconsin Supreme Court has chosen to let stand a State Court of Appeals ruling that delivery drivers participating in the Amazon Flex program were company employees, not independent contractors. The Amazon Flex program is an Uber-style “Gig work” smartphone app that allows individuals to deliver packages with their own cars. After a Wisconsin driver who participated in Flex was expelled from the program in 2017 and filed for unemployment compensation, the State Department of Workforce Development (DWD) conducted an audit of 1,000 Flex drivers and determined that all but two should have been classified as Amazon employees based on the state’s nine-point classification test, a decision backed by an administrative judge. While a Waukesha County Circuit Court overturned the DWD’s decision, the State Court of Appeals reversed that ruling, holding that Amazon was the employer. The State Supreme Court’s decision not to overturn the Appeals Court ruling is expected to result in Amazon owing more than $200,000 in unemployment insurance premiums for 2018 alone.

Context – Amazon is one of the largest Gig work businesses, with two major Gig delivery platforms that have been subject to complaints over driver monitoring, safety, and health shortcomings. Along with various state-level worker classification challenges in the US, the company has faced court setbacks in Australia and Spain, as well as a class action in the UK by the law firm that forced Uber to reclassify drivers as company workers. In Japan, Amazon delivery drivers have moved to form a union. The EU Platform Labor Directive, likely to be enacted this spring, was initially intended to set uniform Gig worker employee classification standards in the EU, but the final compromise leaves that question in the hands of the member states. Finally, the Biden Administration is pressing regulatory action on worker classification at the Department of Labor and the FTC that it hopes will energize progressive labor advocates but that is not likely to change actual Gig work platforms before the November election.

Microsoft to Separate Teams and Office Globally in EU Antitrust Pitch

Report from TechCrunch

In Brief – Microsoft is globally rolling out versions of its Windows and Office 365 software service that exclude its Teams business collaboration product. The digital giant made that change last fall in the European Economic Area in a bid to address concerns of the European Commission competition authority, which had received complaints from rivals such as Slack. In its new announcement, the company said that expanding its stand-alone Teams offerings globally “addresses feedback from the European Commission by providing multinational companies more flexibility when they want to standardize their purchasing across geographies.” Alongside the new ability to purchase Office 365 without Teams, Microsoft also allows customers to purchase Teams without buying the rest of the Office suite. The gap between the $5.25 monthly price of stand-alone Teams and the mere $2.25 monthly savings from buying Office 365 without Teams is likely to again raise the ire of competitors, as it did when Microsoft began offering the option in Europe last October.

Context – Microsoft’s regulatory good behavior campaign and Activision game licensing deals helped win over the European Commission on its massive videogame acquisition. Now the company needs to manage a collection of other competition policy issues in Europe with a Commission that might be interested in proving it can be tough on the giant. Along with the Teams investigation, there are complaints from European cloud services providers that Microsoft uses software licensing to pull customers into its cloud infrastructure, and NextCloud has complained that Microsoft unfairly bundles its OneDrive cloud system. The Windows operating system and LinkedIn have been designated gatekeeper core platforms under the Digital Markets Act (DMA), but the Commission determined that Bing, Edge, and the MS Advertising Platform currently are not. With the EC competition regulators investigating Apple and Meta pricing features in the context of the DMA, it will be interesting to see how the Teams and Office pricing differential is dealt with here.

The Biden FCC Will (Finally) Vote on Net Neutrality Rules April 25

Report from Reuters

In Brief – The US Federal Communications Commission (FCC) will vote April 25th to reinstate Net Neutrality (NN) rules that parallel the ones enacted by the FCC in 2015 during the Obama Administration and rescinded in 2017 by the FCC during the Trump Administration. The three Democratic commissioners strongly support the NN effort while the two Republican commissioners strongly object. The one surprise in the latest iteration of the NN debate has been the time it has taken for the Biden FCC to pass the rules.

Context – It is remarkable and unexpected that the Biden FCC won’t end up voting to restore NN rules aligned with the policies of the Obama Administration until barely six months before the next presidential election. There was never any doubt about the Biden Administration’s policy. The President supported Net Neutrality as a candidate, it remains a progressive fixture of internet policy, and his federal agencies have been motivated by progressive activism throughout his term. However, the network providers were able to delay the confirmation of a third Democratic FCC commissioner until last September, keeping the agency from engaging on any partisan issues for more than two-and-a-half years! Breaking that logjam was a result of the Democrats winning a one-vote majority in the Senate in 2022. Once three Democratic commissioners were in place, Net Neutrality rules were a certainty. After the vote, the next certainty is legal challenges. When the Obama Administration FCC enacted Net Neutrality rules in 2015, the industry challenges were rejected in federal court, and when the Trump Administration FCC effectively ended the Obama-era rules, the legal challenges of the progressive NN advocates were also rejected. Although a pair of former Obama Administration legal advisors publicly advised the new Democratic majority not to pursue the policy again, arguing the Supreme Court would likely overturn it based on the newly resurgent major questions doctrine, the full Net Neutrality process, including FCC rulemaking, legal battles, and related political mobilization, is internet policy comfort food.

Bipartisan House-Senate Privacy Bill Might Mean Election Year Breakthrough

Report from Washington Post

In Brief – A bipartisan pair of congressional committee chairs have announced agreement on draft federal data privacy legislation that may break years of deadlock. Rep. Cathy McMorris Rodgers (R-WA), who leads the House Energy & Commerce Committee, and Senator Maria Cantwell (D-WA), who leads the Senate Commerce Committee, believe their bill can be enacted this year. Key provisions include requiring companies to minimize their data collection practices; giving users the right to access, correct, and delete their data, as well as opt out of targeted advertising; and regulating algorithms that make decisions related to matters like housing, employment, and health care, including the right to opt out of their use and empowering the Federal Trade Commission to challenge discriminatory impacts. The draft aims to resolve the two issues that have stymied privacy bills for years, with the Democratic chair getting a robust “private right of action” allowing consumer class action lawsuits and the Republican chair getting a robust preemption provision covering most state laws.

Context – For years, tech policy commentators have argued that federal privacy legislation was inevitable based on more states enacting their own laws and the international trend exemplified by the EU’s GDPR. We’ve consistently recommended betting the under and pointed out how bills, federal and state, bogged down when class action lawsuits were in the mix. When a similar bill had the backing of three of the four key committee leaders last Congress, only Sen. Cantwell held out, likely because the class action provision was not good enough. That bill failed amidst complicated crosscurrents. This deal seems a pretty big shift, especially on class actions, and one must wonder how business groups will react. There are still big operational hurdles, including a short legislative calendar, the inevitable rush to add things, the very narrow House and Senate margins, Republican and business antipathy for the current FTC, and GOP reluctance to give President Biden a high-profile election year legislative win. Probably still bet the under, but get better odds now.

Elon Musk and X in Content Moderation Showdown with Top Brazilian Judge

Report from the Wall Street Journal

In Brief – Social media platform X under the leadership of Elon Musk is engaged in a public fight with Brazilian Supreme Court Justice Alexandre de Moraes who has been leading a multi-year investigation of online disinformation and hate speech. X had previously complied with orders from Justice de Moraes to block several accounts as part of a broader clampdown on social media abuses, but Musk has announced that X will now restore the accounts and challenge the legality of the bans, harshly criticizing the judge in the process. In response, de Moraes has opened an investigation into Musk over possible obstruction of justice and included him in his inquiry into online disinformation campaigns. De Moraes is also heading an ongoing investigation into former President Jair Bolsonaro including accusations that he planned a military takeover and led a trial last year that resulted in Bolsonaro’s ban from political office until 2030. Some conservative politicians and commentators have rallied to Musk in his standoff with the judge.

Context – Elon Musk’s often robust defenses of “free speech” are not overly consistent on a case-by-case basis. But you can count on him upsetting many progressives and delighting many conservatives. More relevant to actual content moderation policy has been his repeated statement that his platform will follow “local laws” regarding legal speech. In the US that is built on the First Amendment, but following local laws means entirely different things in many markets around the world, whether in India and Turkey, where governments criticized as authoritarian ask social media platforms to take down critical posts, or in the EU and Australia, where democratic governments mandate content moderation practices. This move in Brazil is potentially an entirely valid challenge to the scope of Justice de Moraes’ authority, especially given that Brazil’s so-called social media “fake news” content moderation law, backed by the Government of President Lula da Silva, has not been enacted after two years of debate. Musk’s X has challenged similar account blocking in India, and that was after that country’s social media law was enacted.
