SEC denies Apple and Disney bids to skip shareholder votes on AI

Media giants fail to block union moves to table a vote on their use of AI.

The SEC has rejected requests from Apple and Disney to omit shareholder proposals about their deployment of artificial intelligence from the ballots at their upcoming annual meetings.

The proposals had been filed by the American Federation of Labor and Congress of Industrial Organizations, the union behemoth known as the AFL-CIO, which sent similar shareholder requests to Comcast, Netflix and Warner Bros. Discovery.

Apple and Disney had argued the proposals could be left off their ballots because they related to “ordinary business operations,” such as a company’s choice of technologies.

The SEC disagreed. “In our view, the Proposal transcends ordinary business matters and does not seek to micromanage the Company,” the agency wrote in separate letters.

Ethical guidelines

At Apple, the AFL-CIO asked the company to report on its use of AI “in its business operations and disclose any ethical guidelines that the company has adopted regarding the company’s use of AI technology”. In a similar request, it also asked Disney to report on its board’s role in overseeing AI usage.

In its supporting statement at Apple, the union body wrote that “AI systems should not be trained on copyrighted works, or the voices, likenesses and performances of professional performers, without transparency, consent and compensation to creators and rights holders”.

Brandon Rees, deputy director of the AFL-CIO’s office of investment, told Reuters that the SEC’s decisions could pave the way for agreements with Apple and Disney that would bring them into line with the AI disclosure approach of other companies such as Microsoft (see below).

Apple and Disney, in contrast, “haven’t even begun to grapple with these ethical issues” around AI, Rees said.

Microsoft’s agreement

Microsoft’s December agreement with the AFL-CIO over the use of AI calls for the two to collaborate on educating workers through new training modules, incorporating labor input into technology development, and shaping policies that support employees.

The alliance involves a “neutrality framework,” ensuring Microsoft’s impartiality concerning workers’ future organizational efforts.

The Microsoft partnership addresses growing concerns about AI-driven job displacement and the ethics of the technology, and aims to protect workers’ rights and incorporate their perspectives on AI deployment in the workforce.

Significance of the ruling

In a broader context, the agency’s ruling signals several things. First, it shows the securities watchdog’s intent to ensure that registered entities adopt responsible AI practices as they develop such tools.

Second, shareholder proposals on the topic of AI are likely to become a recurring feature of proxy seasons, generating growing support and attention.

Microsoft’s shareholder vote on the topic attracted support from more than one-fifth of investors (21%), an impressive showing for a first-time proposal.

The proposal was put forward by US activist Arjuna Capital, and it asked the tech giant to report on how it is managing financial risks and those to “public welfare” that might arise from its “role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence”.

Speaking to Responsible Investor last month, Arjuna’s co-founder and managing partner Natasha Lamb said Microsoft is one of several technology stocks in its portfolio that the firm will be engaging with on the issue.

With 2024 an election year in the US, she added, her firm is “acutely aware of the risks that disinformation and misinformation pose to our democratic process”.

And this goes to the heart of the third point about the discourse on AI in 2024: the rapidly developing technology has raised new questions about where responsibility lies. Social media firms have been able to rely on existing law, notably Section 230 of the Communications Decency Act, to argue that because harmful content is created by users, they are not legally responsible for it.

Legal liability

But when it comes to generative AI, the content is created by the technology itself as it parses and connects data. That increases the potential for corporate liability if the business – and, by extension, its shareholders – can be held liable for the damage caused by these tools and their information output.

Finally, the discourse concerning the ethical application of AI in business operations will be a prevailing topic this year and beyond. Businesses, labor groups, consumers, regulators, and government entities will continue to examine and debate what the ethical governance of AI tools and strategies entails.

As for government entities, in 2023 lawmakers in the US Congress put forward proposals to create a federal agency providing comprehensive regulation of digital platforms.

And the White House launched a coordinated federal government approach to promoting the secure and trustworthy development and use of artificial intelligence, securing non-binding commitments from tech giants in so doing. In each instance, the government noted these efforts were still in their early phases.