Archive – 2023

September 2023

OpenAI Announces Dublin for Its First Office in the EU

Report from TechCrunch

In Brief – OpenAI has announced Dublin as the site of its first office in the EU, and job openings linked to it indicate that the company is focused on the legal and regulatory challenges in that huge market. According to its careers page, among the positions OpenAI is looking to fill in Dublin are an associate general counsel for the Europe, Middle East, and Africa (EMEA) region, a media relations lead, a policy and partnerships lead for global affairs, a privacy program manager, and a software engineer focused on privacy. The Dublin office will be the company’s third, after San Francisco and London.

Context – Last May, Sam Altman, OpenAI’s CEO, expressed concern that the EU’s legal and regulatory framework on AI might become so onerous that the company would not operate in the bloc. At an industry event in London, he said that the EU’s AI Act could overshoot, pointing to changes made by the European Parliament to regulate “foundation models” and how copyrighted material is used in the “training data” of generative AI systems. Altman drew a sharp rebuke from EU Commissioner Thierry Breton, who insisted “that our rules… cannot be bargained”, and quickly shifted to very conciliatory comments, including talk of an EU headquarters, with hints pointing to France, Germany, and Spain, among others. It’s worth noting that French President Emmanuel Macron has lately also said that the AI Act might go too far. In the end, Ireland won out, with OpenAI joining a host of US digital giants based there. Along with the talent pool, language, and tax benefits, some see regulatory benefits to an Ireland-based EU headquarters. The country’s data regulator has been widely criticized by privacy advocates for going easy on Big Tech from its perch as their regulatory lead under the GDPR’s “one stop shop” regime. Those frustrations led to the European Commission being put in the driver’s seat for the largest digital platforms under the DSA and DMA. It is reported that enforcement of some aspects of the AI Act might likewise be placed with a European Commission AI agency rather than with Member State regulators.

Netflix and SK Broadband Settle Network Usage Fees Fight with Business Partnership

Report from the Korean Economic Daily

In Brief – Netflix and SK Broadband, one of South Korea’s largest internet service providers (ISPs), are dropping their long-running legal battle over whether Netflix should pay “network usage fees” and are instead creating a strategic partnership in which the companies will work together to improve services to their joint customers. The streaming giant, which is very popular in South Korea thanks in part to its highly successful programming produced in the country, including the massive hit Squid Game, will reportedly pay SK Broadband an amount equivalent to the usage fees, but in the context of a range of joint programs to improve Netflix services to the ISP’s customers. The new Netflix-related offerings, which will be released starting in 2024, may include bundled offerings on SK Broadband’s fixed-line broadband network and its related mobile business, SK Telecom, as well as AI-enabled services developed to expand Netflix customer relationships.

Context – Telecom companies have long accused large internet platforms of “free riding” on their networks. South Korea is the major market with the most telecom-friendly network payments regime. There, the telecom companies have been uniquely successful in imposing data usage fees on internet-based businesses to supplement the data service payments of consumers, although overseas-based online companies have pushed back against the unique regime. Legislation in the Korean National Assembly to clarify the obligations of large non-South Korean online companies remains under consideration amid divided public opinion. The front lines have now shifted to Europe, where telecom providers are pressing a sender-pays “fair share” plan to impose some type of new fee on digital platforms to help fund 5G and broadband. Thierry Breton, Europe’s Internal Market Commissioner, sympathizes with charging video streamers such as Amazon, Apple, Google, Meta, Netflix, and TikTok. But critics question whether there is a network investment shortfall, see new fees being passed on to consumers, and fear undermining net neutrality.

UK Parliament Committee Chair Presses Platforms to Demonetize Russell Brand

Report from the BBC

In Brief – Dame Caroline Dinenage, who chairs the UK House of Commons’ Culture, Media and Sport Committee, has reached out to Instagram, TikTok, X (formerly Twitter), and Rumble asking if they will follow the example of YouTube and stop Russell Brand from making money on their platforms. The comedian, actor, and former TV presenter faces allegations from several women that he raped and sexually assaulted them at various times between 2006 and 2013. Brand, who has strongly denied the allegations, has robust followings on several social media platforms, including 6.6 million subscribers on YouTube, where his videos in recent years have frequently involved controversial topics often described as right-wing conspiracies, including about COVID and vaccines. A YouTube spokesperson confirmed that the platform suspended monetization on Brand’s channel for violating its creator responsibility policy, and Dinenage asked the other companies if they intend to join YouTube in suspending his ability to earn money on their platforms. Rumble, a Toronto-based platform that touts its free speech ethos and is popular with conservatives who argue they are discriminated against on larger platforms, publicly rejected Dinenage’s request and criticized YouTube’s policy decision. Following Elon Musk’s post on X that appeared to sympathize with Brand’s claim that he was being maligned, Dinenage also asked X officials whether Musk has influenced X policy on Brand.

Context – Coincidentally, the same week that the chair of a key parliamentary committee was asking social media executives if they would follow YouTube’s lead in demonetizing a controversial personality, the UK Parliament passed the Online Safety Bill. The legislation, years in the making, establishes a regulatory regime overseen by Ofcom, the UK communications industry regulator, to govern how digital platforms police objectionable content, including child sexual abuse, controlling or coercive behavior, extreme sexual violence, people smuggling, promoting suicide and self-harm, animal cruelty, selling illegal drugs or weapons, and terrorism.

Tech Critics Blumenthal & Hawley Looking for EU-Style US AI Regulation

Report from the New York Times

In Brief – Two top congressional critics of Big Tech have teamed up to release a Bipartisan Framework for a US AI Act. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), who lead the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law, announced their regulatory proposal in conjunction with a subcommittee hearing on AI legislative proposals. They propose creating a new federal regulatory agency overseeing AI and a federal licensing regime for companies working on “sophisticated general-purpose A.I. models” such as OpenAI’s ChatGPT, as well as “models used in high-risk situations” such as facial recognition. The licensing process would require companies to meet regulatory standards for risk management, pre-development testing, and incident reporting practices, and the framework calls for companies working on AI to promote transparency, protect consumers and kids, and defend national security.

Context – Blumenthal and Hawley, like many American advocates for aggressive digital regulation, often look enviously to the EU for tech policy inspiration. Their US AI Act is an homage to the EU’s planned AI Act. As initially proposed by the European Commission, the AI Act took a risk-based regulatory approach aimed at AI-enabled applications rather than the underlying AI technology. However, the EU Parliament expanded the regulation to cover “foundation models” following the boom created by ChatGPT. Now, concerns about AI over-regulation are emerging even in Europe, including from French President Macron. To be clear, the US Senate is not close to enacting major AI regulation. Instead, the Biden Administration’s approach of high-level AI principles and corporate commitments to “responsible” AI practices is holding sway. The largest companies are also promulgating their own AI “safety” policies, such as Google requiring advertisers to prominently disclose when political ads contain content that’s been “digitally altered or generated”, including by AI, and Amazon creating Kindle platform guidelines requiring authors to notify it (but not consumers) of books containing AI-generated content, although not AI-assisted material.

The Federal Trade Commission Finally Files Major Amazon Antitrust Lawsuit

Report from the New York Times

In Brief – After years of anticipation, the Federal Trade Commission has filed a major antitrust complaint against Amazon for anticompetitive practices in its massive ecommerce marketplace business, which the government claims holds a 70% market share. A robust antitrust challenge has long been expected given that the agency is chaired by progressive antitrust activist Lina Khan, who rose to prominence as a law school student arguing that Amazon was an antitrust abuser unconstrained by decades of lax federal enforcement. The FTC complaint, joined by 17 State Attorneys General (although just two Republicans), focuses on Amazon policies that the agency claims shield the very high fees Amazon charges third-party sellers, both for its logistics services and for its marketplace, from lower-priced competition, harming competition in the market for marketplace services, driving up costs for sellers, and raising consumer prices on Amazon and elsewhere on the internet.

Context – When Khan was in law school, Amazon was seen as a low-price retailer trading profits for market share. That’s outdated thinking. Prices on Amazon are not particularly low, and its first-party retail is a shrinking, low-margin business. Astute analysts now focus on how Amazon operates its marketplace, populated mostly by small third-party sellers, and aggressively pushes them to use its massive FBA logistics service and pay for on-platform advertising. All combined, the fees Amazon charges sellers are often 50% or more of a product’s price! This is no longer novel ground. Amazon pushing third-party sellers to pay for FBA logistics was a focus of recent antitrust challenges in the EU and UK. Amazon settled the EU case with concessions related to logistics and is proposing something similar in the UK. It’s unclear whether Amazon was unwilling to do the same in the US or the FTC wanted more. Finally, Amazon already faces “price fixing” lawsuits in federal and state courts alleging that it presses sellers to raise prices elsewhere on the internet to shield its high fees from low-price competition, a practice that is also a big feature of this complaint. And to be clear, the FTC suit does not ask to break up Amazon.

FCC Finally Has a Democratic Majority Able to Take on Partisan Policies

Report from the Washington Post

In Brief – The Senate has confirmed Anna Gomez, a veteran telecom lawyer with significant government experience, to serve as the third Democratic commissioner on the Federal Communications Commission (FCC). The 55-43 vote means that, for the first time during the Biden Administration, there is a Democratic majority on the five-member FCC. Gomez has served in multiple federal agencies, including the FCC and the National Telecommunications and Information Administration, and was a policy adviser at the State Department when nominated. She has also served as a Senate staff member, a corporate official at Sprint Nextel, and an attorney at Latham & Watkins. She was nominated after progressive consumer advocate Gigi Sohn withdrew following a year and a half of strident opposition from Senate Republicans and segments of the telecommunications industry, which became untenable when a handful of Senate Democrats turned against her.

Context – The whole Sohn affair illustrated that telecom companies still carry significant influence on both sides of the aisle, but as long as there is a Democrat in the White House and a Democratic majority in the Senate, they can’t hold back the nominee tide forever. Given Biden’s past support for net neutrality and the level of progressive activism in his federal agencies, it’s hard to believe that any Democratic nominee to the FCC would line up much differently than Sohn. And reports already point to the FCC Democrats planning to start up a net neutrality rulemaking. Interestingly, two senior legal experts from the Obama Administration are urging the agency to move elsewhere, predicting the Supreme Court would now overturn a rulemaking based on its revitalized major questions doctrine, but progressive net neutrality backers are already pointing out that the two were financially backed by telecom industry trade groups. In other nomination news, the Federal Trade Commission, riven by extreme ideological division and without any Republican commissioners since Christine Wilson resigned in March, might have two Republicans in place soon so that they can get outvoted 3-to-2 and file dissenting views.

Irish Regulator Fines TikTok €345 Million for Mistakes Handling Young Users’ Data

Report from the BBC

In Brief – The Irish Data Protection Commission (DPC) has announced that it has fined TikTok 345 million euros for violating the General Data Protection Regulation (GDPR) through its treatment of users aged 13 to 17. Among the problems highlighted by the regulator, which serves as the lead GDPR enforcement authority for many large digital platforms, were a lack of transparency in user communications, setting the privacy mode to public by default, a “Family Pairing” mode that led to a range of risks for those under 16, and the use of “dark patterns” to nudge users toward the more privacy-intrusive options during registration. TikTok responded by noting that the period covered by the investigation was 2020 and that it has since changed the implicated policies, including setting the default to private for users under 16.

Context – Ireland, the European home to many global digital giants, has been widely criticized for going easy on Big Tech. Recent GDPR decisions and fines against Instagram, and now TikTok, highlight how the DPC has been forced to accept changes, and ramp up fines, pushed by other Member State regulators. For example, the TikTok dark patterns finding was pushed by the German and Italian regulators. Also, frustration with the GDPR “One Stop Shop” model that elevates the role of the Irish (and Luxembourg) regulators led to the European Commission being put in the regulatory driver’s seat for very large platforms in both the Digital Services Act and the Digital Markets Act. TikTok’s rapid emergence as a social media giant has led to a flood of lawsuits and regulatory complaints, mainly focused on young users, including in the Netherlands, Italy, France, the UK, South Korea, and the United States. This is the company’s largest fine to date. On the other hand, despite nearly four years of review by the US Government’s Committee on Foreign Investment in the US that threatens to force a sale of the company’s US business, and reports that the Biden Administration was backing that move, recent reports indicate growing opposition to that penalty inside both major US parties, which could protect the core US business from being shut down.

Microsoft-Activision Deal on Path to Approval by UK Competition Regulator

Report from the Wall Street Journal

In Brief – The UK Competition and Markets Authority (CMA) has announced that it is inclined to accept the offer from Microsoft and Activision to address the regulator’s concern that the proposed $75 billion acquisition of the videogame developer could undermine competition in the cloud gaming market. Following the CMA’s rejection of Microsoft’s acquisition proposal in April, which it reaffirmed in August, the companies submitted a “substantially different” deal to the regulator, in particular proposing to sell French videogame publisher Ubisoft Entertainment the cloud-streaming rights for Activision games. Under the plan, Ubisoft would acquire the exclusive rights to market cloud streaming of existing and new Activision games outside the European Union for 15 years, plus non-exclusive streaming rights inside the EU. The CMA has set itself a deadline of October 18 to rule on the deal, the same date the companies have set for closing the acquisition. The CMA’s provisional finding that the new offer is acceptable is now open to a public comment period to gather market feedback.

Context – It’s truly all over but the shouting. Since the “new” deal involving the Ubisoft arrangement was announced, it has been very likely that the CMA would accept the offer and the deal would be done. The biggest takeaway is the disjointed state of sequential competition policy roulette. The CMA rejected Microsoft-Activision but then found itself isolated by the European Commission’s approval, which was based on Microsoft commitments that the CMA and US FTC found insufficient. Then a US judge rejected the FTC’s effort to halt the deal. While some see the affair as a testament to the post-Brexit influence of the UK regulator, others see the opposite: a UK Government feeling pressure to modify regulatory policies and engaging in a scripted walking back. Meanwhile, the European Commission is technically weighing the new Ubisoft-focused plan and has also recently taken some harder-line tech antitrust actions, highlighted by its challenge to Microsoft’s bundling of its Teams service with Office 365.

Amazon Walks Back (Crazy) New Fee on Sellers Not Paying for Amazon Logistics

Report from Bloomberg

In Brief – Just a month after announcing that third-party sellers would be charged a new 2% fee if they fulfill orders themselves as part of Amazon’s Seller Fulfilled Prime (SFP) program, the ecommerce and logistics giant has announced that it is abandoning the fee increase due to negative seller reaction. On the Amazon Marketplace, most sales involve products that qualify for “Prime” designation, including sales by third-party sellers. Most Prime products are stored, picked, packed, and shipped by Amazon’s Fulfillment by Amazon (FBA) logistics system. In 2015, the company created a limited SFP program to allow third-party sellers to use alternative logistics services and still qualify for Prime. However, in 2019 entrance into SFP was suspended, pushing all new Prime-eligible sellers into FBA. Amazon announced this summer that SFP was selectively reopening, but it reportedly remains difficult to join.

Context – Amazon announced the fee increase on sellers who do not use FBA just as the Federal Trade Commission was wrapping up work on what was expected to be a major antitrust complaint. The move was seen by industry experts as a finger in the FTC’s eye. Jason Boyce, an online sales advisor, said the new SFP fee “shows Amazon is not scared at all.” Now? It’s out-of-date thinking to see Amazon as a low-price online retailer trading profits for market share. Smart analysts now focus on how Amazon’s third-party marketplace pushes sellers to use its massive FBA logistics service and advertise on Amazon. When all those fees are added up, sellers on Amazon now often pay 50% or more of a product’s price in fees. That has become Amazon’s top antitrust problem. In the EU, Amazon offered to change the practices that push sellers into FBA to settle a major antitrust complaint, and it is offering similar changes to address antitrust concerns in the UK. Whether it will do the same in the US remains to be seen. Amazon is also widely accused of using “dark patterns” to nudge consumers not to cancel Prime. It has changed practices on that issue in Europe, but not yet in the US, where it still faces an FTC complaint.

X Sues to Block California’s Social Media Content Moderation Law

Report from Reuters

In Brief – X Corp, formerly known as Twitter, has sued the State of California in federal court to block the enforcement of AB 587, a state law enacted last year imposing transparency rules on large social media companies detailing how they address a range of objectionable content. The law requires covered platforms to publish detailed content moderation policies and report twice a year on the numbers of objectionable posts and how they were addressed. Categories of content listed in the legislation include hate speech, racism, extremism, radicalization, disinformation, misinformation, harassment, and foreign political interference. In its legal complaint, X argues that the law’s “true intent” is to pressure platforms to act more aggressively against content that the government finds objectionable, violating the First Amendment and the Dormant Commerce Clause and running afoul of federal preemption under Sec. 230 of the CDA, and it is asking for a preliminary and permanent injunction.

Context – California enacted its social media transparency law after social media regulation was passed in Florida, Texas, and New York. Each of those states is effectively controlled by one political party. Republican-led Florida and Texas acted first, enacting laws in 2021 to combat alleged Big Tech “viewpoint” discrimination, a top conservative concern. Democrat-led New York and California followed with laws pushing platforms to better police hate speech and harassment. Trade groups representing the largest digital platforms challenged the laws in Florida and Texas. Florida’s law was blocked by the US Eleventh Circuit, while Texas’s law was upheld by the US Fifth Circuit, although it remains on hold following intervention by the US Supreme Court. Backers of aggressive content moderation have criticized X (formerly Twitter) since Elon Musk, a critic of progressive content moderation, took over the platform, so X’s willingness to take on the California law seems true to form. Many expect the Supreme Court to take up the issue next year, with the Biden Administration’s Solicitor General encouraging it to step in to preserve platforms’ First Amendment rights.

UK Parliament Finally Passes the Online Safety Bill

Report from the BBC

In Brief – The Online Safety Bill (OSB), which began its long and winding journey in 2017 with a promise to make the UK “the safest place in the world to be online”, has cleared its final parliamentary hurdle and will become law. Over the years, the plans expanded from the starting point of combatting child sexual abuse material and terrorism advocacy to cover more and more types of bad content, culminating in a mandate for platforms to combat “legal but harmful” material. Under fire for threatening free speech, the OSB was trimmed back last fall and refocused somewhat on harms to younger users. The legislation’s final list of content that platforms will need processes to control, overseen by the regulator Ofcom, includes child sexual abuse, controlling or coercive behavior, extreme sexual violence, people smuggling, promoting or facilitating suicide and self-harm, animal cruelty, selling illegal drugs or weapons, and terrorism. The prospect that Ofcom would require messaging apps to have the capability to scan user communications for prohibited content, effectively outlawing end-to-end encryption, had led leading messaging app providers, including WhatsApp and Signal, to threaten to leave the market. The government defused the standoff by saying that Ofcom would not consider such a requirement until messages could be scanned while still protecting privacy, something it admitted is not yet technically possible.

Context – The travails of the OSB highlight the UK’s schizophrenic digital policies, pairing regulatory intervention with aspirations to be a pro-innovation alternative to the EU. Along with the high-profile encryption battle, the OSB threatens jail time for corporate executives whose companies fail to appropriately police CSAM and creates an age-gating requirement that has drawn the ire of platforms like Wikipedia. The UK Digital Markets, Competition and Consumers Bill largely parallels the EU Digital Markets Act, and the UK competition regulator has proven particularly aggressive on tech acquisitions. Finally, the OSB comes online after the EU Digital Services Act, adding potentially conflicting compliance measures to meet similar goals.

Microsoft’s Offer to Unbundle Teams in EU Not Likely to Resolve Antitrust Case

Report from Bloomberg

In Brief – A formal antitrust complaint from the European Commission’s competition authority accusing Microsoft of anticompetitive practices in bundling its Teams videoconferencing and messaging service into the Microsoft Office 365 software package remains likely despite Microsoft’s efforts to address the concerns. The Commission, which has received complaints from competitors stretching back to Slack’s in 2020, opened a formal investigation in July. The digital giant is said to have offered regulators a range of concessions in talks throughout the year to resolve the investigation. Despite the lack of agreement, Microsoft announced at the end of August that it would make unilateral changes for some users of Office 365 in the EU and Switzerland on October 1, including no longer requiring business customers of Office 365 to have Microsoft Teams automatically installed on their devices and reducing the price of Office 365 when Teams is not included. However, critics contend that the Microsoft offer covers too narrow a range of EU-based customers and that the proposed price differential for the Office-Teams bundle is far too small.

Context – Microsoft’s regulatory good-behavior campaign and a string of Activision videogame licensing deals appear to have been key to winning over the European Commission on its massive acquisition and delivering what increasingly looks like a win. But the digital giant now needs to manage a collection of other competition policy issues in Europe, and the Commission might be interested in proving its toughness. Along with the Teams investigation, there are complaints from some European cloud services providers that Microsoft uses software licensing to pull EU customers to its cloud infrastructure, and a NextCloud complaint that Microsoft unfairly bundles its OneDrive cloud system with Windows. Microsoft’s Windows PC operating system and LinkedIn have been designated gatekeeper core platforms under the Digital Markets Act regulatory scheme, and the Commission is studying whether to add Microsoft’s Bing, Edge, and the Microsoft Advertising Platform as well.

Judge Blocks California Law Imposing Child Protection Standards on Internet Services

Report from Reuters

In Brief – The California Age-Appropriate Design Code Act (CAADCA), bipartisan legislation unanimously enacted by the state legislature, requires digital platforms likely to be accessed by users under age 18 to evaluate their products before release to determine whether they could harm children and teens. It has now been temporarily blocked by a federal judge who believes it is likely to be found to violate the First Amendment. The CAADCA standards, many of which focus on data collection for commercial use, are modeled on UK regulations implemented there in 2021. In her ruling, US District Court Judge Beth Labson Freeman evaluated ten CAADCA provisions to determine whether each directly advances California’s interest in protecting minors and whether each requirement is more extensive than necessary to meet that interest. She found that the State would likely fail to satisfy its legal burden for each one. The law’s age estimation provision, an aspect of a growing number of state and federal bills aiming to protect younger internet users, came in for harsh scrutiny, with Judge Freeman noting that the bill’s scheme, rather than protecting the privacy of young users, was “likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to divulge additional personal information.”

Context – Legislation “protecting” teenagers from supposed online dangers, including social media, digital advertising, and, of course, pornography, is an increasingly global phenomenon. It is seemingly good politics everywhere. Republican-led US states are enacting laws requiring parental approval for social media and age verification for porn sites, and those laws are running into First Amendment problems too. Progressive jurisdictions such as California, France, and the UK are likewise pushing to create a separate, more regulated version of the internet for teenagers. If there is a “splinternet” coming to Western countries, this is likely what it looks like. And internet-wide age verification worries most privacy advocates.

Musk’s X Criticized for Improved Compliance on German Law Enforcement Requests

Report from the Washington Post

In Brief – Although the public perception is that Musk’s X, formerly known as Twitter, is less willing than the prior regime to address objectionable user content, that is not proving to be the case in Germany. Rather, public prosecutors in Germany report that the platform is complying with requests to turn over information about users in online hate-crime investigations to a greater degree than in the past. Germany has strict hate-speech laws forbidding certain defamation of politicians and the promotion of white supremacy, and the company under Musk has turned over reams of user data to prosecutors to help identify those who break the law, leading to hundreds of new cases being pursued.

Context – Criticism of the content moderation policies of Elon Musk’s X, and previously Twitter, has nearly always included an ideological component. His defenses of “free speech,” criticism of “woke” thinking, restoration of some banned Twitter accounts, and massive personnel changes have upset many progressives. But he never said that the platform was going to reject government or law enforcement demands. Instead, he has consistently said that the platform will follow “local laws” in determining speech restrictions, not impose its own views. In the US, with the First Amendment, the bar for speech being legal is very low. But outside the US, in places like the EU and India, local laws empower governments to press digital platforms to police online speech. Germany, with its NetzDG law, is one of those places. Some anti-hate-speech activists in Germany suspected that the company would comply even less under Musk, filing suit and winning a judicial hearing in late 2022 challenging the company’s NetzDG compliance. The feedback from German public prosecutors seems to point in the other direction, instead raising concerns among civil libertarians who supported Twitter’s past hesitancy to comply with government data requests. The EU-wide Digital Services Act mandates are likely to raise similar tensions.

Canada Releases Forced Media Payments Regulations as Meta Blocks News

Report from Reuters

In Brief – The Canadian Government’s Department of Canadian Heritage has released draft regulations to implement the Online News Act, legislation requiring Meta and Google to pay Canadian media companies when their news content appears on the digital giants’ platforms. Both companies have consistently and strongly objected to paying media companies for content they don’t actively place on their own platforms. With Meta, that’s news posted by users, especially the media companies themselves. With Google, it’s links surfaced in basic search results. Among the ancillary concerns has been that the actual cost of compliance was indeterminate because it was not clear how much content would be posted or appear in searches. The draft regulations propose to address that question by clarifying that the companies would need to pay news outlets a minimum of 4% of their annual revenue in Canada, which government estimates put at C$172 million per year for Google and C$62 million for Meta’s Facebook. Meta’s response was that the “proposed regulations will not impact our business decision to end news availability in Canada,” while a Google spokesperson said the company was reviewing the proposed regulations “to assess whether they resolve the serious structural issues” with the law.

Context – Canada has become ground zero in the campaign by media companies to have governments force the largest digital platforms, especially Google and Meta (but sometimes others), to pay them when “news” appears on their platforms. Meta is being increasingly straightforward about blocking news from its top platforms rather than being forced to pay a government-set rate it claims overvalues the news content and undervalues the immense free distribution the media companies gain. Meta is already blocking Canadian news in preparation for the law going into effect in December. Media companies and politicians are complaining loudly in Canada and in other jurisdictions exploring similar plans, such as California. It’s noteworthy that Google is also threatening to change search in Canada to exclude news.

Governor Newsom Proposes California Government Agencies Develop AI Policies

Report from Bloomberg

In Brief – California Governor Gavin Newsom has issued an executive order directing the state’s agencies to develop “deliberate and responsible” processes to evaluate and deploy Artificial Intelligence within state government. Rather than propose new regulations governing AI technologies in the state, or even how the state government uses them, Newsom’s path hews closely to the Biden Administration’s strategy of proposing high-level principles for “Safe, Secure, and Trustworthy AI”. The executive order tells California government agencies to create risk assessment reports on how AI could affect their work, the state’s economy, and energy usage, and to set rules requiring that AI tools the state procures are developed and used ethically. The order also urges legislators to devise new policies for the technology. In mid-August, the California legislature passed a resolution expressing support for the principles outlined in President Biden’s AI Bill of Rights and committing to prioritize them in its own legislation and policies.

Context – The public release of OpenAI’s ChatGPT kicked into high gear talk of existential AI threats and calls for government regulation. The EU is farthest along. They are five years into developing their comprehensive AI Act plan. It was initially based on a “risk-based model” that proposed regulating AI applications, not underlying technologies, but the EU Parliament changed gears and now proposes regulating underlying “foundational models”. While many advocates of tighter tech regulation applaud the EU’s willingness to step out front, the more hands-off approach to AI regulation of the Biden Administration seems to have more backers. They are rallying US-based companies to agree to implement “responsible” AI practices such as helping combat AI-generated fakes with industry-standard digital watermarks, and allowing outside experts to test AI system security. If the US Congress legislates, look for targeted measures impacting how federal agencies use AI. And mid-summer talks at the G7 showed that most governments were focused on promoting AI principles rather than regulation.