While some players are performing a cautious tango, taking two steps forward and one step back, others appear to be training for the Olympic long jump, leaping boldly ahead with billion-dollar investments and sweeping institutional reforms.
As US states fine-tune their regulatory approaches and federal agencies begin grappling with AI oversight, Canada is sharpening its national governance tools, and American University is reimagining business education from the ground up.
Virginia governor vetoes bill targeting ‘high-risk AI’
In a setback for state-level efforts to regulate artificial intelligence, Virginia’s governor, Glenn Youngkin, has vetoed the High-Risk Artificial Intelligence Developer and Deployer Act. The proposed bill, passed by a hair in February, sought to rein in AI systems used in sensitive fields such as education, hiring, finance, healthcare, and legal services.
Developers of so-called “high-risk” algorithms would have been obliged to spell out their systems’ limitations, intended uses, and potential biases, as well as publish summaries of performance evaluations.
Firms deploying such systems would have been held to a reasonable duty of care, obliging them to put in place risk-management policies and safeguards against discriminatory outcomes. Violators would have faced civil fines ranging from $1,000 to $10,000.
In announcing his veto, Governor Youngkin, a Republican, argued that the bill would impose unnecessary burdens on the Commonwealth’s growing AI sector. He pointed to Virginia’s recent success in attracting technology firms and startups, crediting a business-friendly environment and a lighter regulatory touch.
Youngkin also warned that the legislation’s one-size-fits-all approach would risk stifling innovation, particularly among small and medium-sized businesses lacking the resources for extensive compliance.
He contended that existing laws governing discrimination, data use, and consumer protection were sufficient for now and that the bill risked “turning back the clock” on Virginia’s economic momentum.
Colorado weighs in on AI regulation
Youngkin’s veto in Virginia comes as other states continue to grapple with how to regulate artificial intelligence without stifling innovation.
Colorado, for instance, became the first state to enact comprehensive AI legislation, in 2024. But even as the ink dried, calls for revisions began to mount.
Colorado Governor Jared Polis, Attorney General Phil Weiser, and leading state lawmakers have since acknowledged that Senate Bill 205, the state’s landmark AI law, requires “additional clarity” and refinement.
A task force convened to examine the law recently issued a report that grouped proposed revisions into four categories, ranging from areas of broad consensus to those marked by entrenched disagreements.
The process included engagement with industry, civil society, and legal experts, reflecting a desire to fine-tune the legislation before it fully takes effect.
Among the issues where consensus appears achievable are more precise definitions of “consequential decisions,” clarification around exemptions for certain AI systems, and adjustments to the timing and scope of impact assessments.
Other areas may require more complex trade-offs, such as balancing industry concerns over algorithmic discrimination provisions with public interest demands for accountability. These interconnected revisions may need to be addressed holistically rather than in isolation.
Still, not all issues are ripe for compromise. Disagreements remain on key questions, including whether to keep or revise the “duty of care” requirement for developers and deployers, the scope of exemptions for small businesses, and whether to include a cure period before enforcement kicks in.
Stakeholders also clash over consumer appeal rights, trade secret protections, and the Attorney General’s rulemaking authority, highlighting the difficult balance between innovation and regulation.
Modified AI bill in Texas
Texas’s revised artificial intelligence bill, House Bill 1709, places the state in close alignment with a broader trend among US jurisdictions experimenting with AI oversight.
Currently pending in the state’s House committee, Texas’s proposal, like the recently vetoed bill in Virginia and the now-revised law in Colorado, zeroes in on “high-risk” AI systems: those that substantially influence consequential decisions in such fields as education, employment, healthcare, housing, and voting.
It imposes a duty on developers and deployers to adopt risk management policies, conduct regular impact assessments, and report incidents of algorithmic discrimination.
Consumer rights are also acknowledged: individuals must be informed if their data is used by an AI system and offered explanations about the system’s role in decisions that affect them.
Where Texas distinguishes itself is in its framing.
While Virginia’s bill was criticized for being too burdensome and Colorado’s is now being reworked after strong stakeholder pushback, Texas has tailored its proposal to appear innovation-friendly from the start.
The bill introduces a regulatory sandbox for AI experimentation and establishes an Artificial Intelligence Council to provide guidance rather than top-down control.
This lighter-touch approach aims to reassure both startups and enterprise developers that oversight will evolve in step with technology, reflecting not only growing consensus among states, but a broader US regulatory trend favoring flexibility over rigidity.
For instance, SEC Acting Chair Mark Uyeda recently emphasized the need to avoid overly prescriptive rules in the AI space, advocating instead for engagement with innovators and market participants to ensure oversight remains practical and future-proof.
AI Institute debuts at American University
As many universities continue to treat generative artificial intelligence as a classroom intruder, banning tools like ChatGPT or limiting their use, American University is charting a different course. Its Kogod School of Business has launched the Institute for Applied Artificial Intelligence (IAAI), aiming to embed AI into curricula, research, and training across disciplines.
The move reflects a growing understanding that people will not just encounter AI in the workplace; they will be expected to master it.
American University’s approach is striking not only for its boldness in higher education, but for how well it aligns with evolving views on responsible innovation in government and finance.
The SEC held its first roundtable on artificial intelligence in the financial sector on March 27, where regulators and industry executives debated the balance between innovation and oversight.
Gregg Berman of Citadel Securities dismissed alarmism over AI missteps, arguing that humans have long made errors in consumer service and that AI should be viewed through a similar lens.
But Hillary Allen, a law professor at American University, countered with a cautionary example: Klarna, the fintech firm that once touted AI chatbots as the future of customer interaction, is now publicly shifting back toward human agents, an implicit recognition that technology alone cannot replace human judgment.
Canada’s update on AI oversight
Canada is doubling down on its efforts to steer the development of artificial intelligence in a safe and accountable direction. On March 6, the federal government announced a package of initiatives designed to align domestic innovation with evolving international norms while bolstering public trust.
These include a refreshed Advisory Council on Artificial Intelligence, an updated guide for managers implementing the 2023 Voluntary Code of Conduct, and the creation of a new Safe and Secure AI Advisory Group.
The latter, chaired by renowned AI researcher Yoshua Bengio, will offer technical advice on risk mitigation and feed into the newly established Canadian AI Safety Institute.
The announcement also brought new buy-in from industry. Six additional organizations, including CIBC and Intel, joined the code of conduct, bringing the total number of signatories to 46.
Backed by over $2.4 billion in AI investments from Budget 2024, Canada is positioning itself as a leader in setting the guardrails for next-generation AI, not just through regulation, but by cultivating cross-sector alignment and norms that evolve with the technology.
“As AI technology continues to evolve, our government is committed to making sure Canadians can benefit from it safely and that companies are developing it responsibly,” said François-Philippe Champagne, Minister of Innovation, Science and Industry. “The measures announced today are a positive step forward in securing an AI ecosystem that works for – and in the interests of – all Canadians.”