Let’s take a peek at what’s been going on in the artificial intelligence (AI) arena, since, you know, nothing ever happens there.
White House tackles datacenter energy consumption
The Biden-Harris administration has announced a task force dedicated to tackling the energy consumption associated with the creation and use of AI technology.
Top executives from Microsoft, OpenAI, Google, Meta, Exelon, Nvidia, and others attended a White House meeting last week. Under discussion were strategies to meet the clean energy, permitting, and workforce requirements for developing large-scale AI datacenters, along with the power infrastructure needed for advanced AI operations in the US.
As a result, the White House said it would take a number of initiatives, including:
- Launching a new Task Force on AI Datacenter Infrastructure to coordinate policy across the government. Led by the National Economic Council, National Security Council, and the White House Deputy Chief of Staff’s office, the White House says the interagency group “will provide streamlined coordination on policies to advance datacenter development operations in line with economic, national security, and environmental goals.”
- Scaling up technical assistance to federal, state, and local authorities handling datacenter permitting.
- Creating an AI datacenter engagement team at the Department of Energy (DOE) to support AI datacenter development through loans, grants, tax credits, and technical assistance so datacenter operators can secure “clean, reliable energy solutions.”
- Sharing resources with datacenter developers on repurposing closed coal sites through another DOE program. When a power plant site is shut down, for example, its existing land, facilities, and grid connections can be retrofitted for other uses, such as datacenters.
The industry participants at the meeting committed to enhancing cooperation with policymakers to explore further dialogue and collaboration.
Bigger bots need bigger (better) governance
OpenAI has launched o1 – the model it had been calling “Strawberry” – and Salesforce has launched Agentforce. Both are intended to increase the autonomy of today’s generative AI capabilities and produce better-reasoned results.
Early testing results are positive: OpenAI said the new model performs better on tasks that require deeper analysis and follows instructions more closely. Responses take longer to generate, but for higher-quality results, that may be a small price to pay.
Wiley has been testing Agentforce, and it says the technology is improving its case-resolution rate far more than its old chatbot did.
But more targeted, better-reasoned output is not the end of the discussion. All companies need to consider the governance issues that come with deploying AI solutions that interact with humans and handle human data.
Many companies are already making pledges to ensure safety when harnessing the decision-making power of AI tools, which could be providing advice that affects someone’s health, financial security, or other high-stakes matters.
“You don’t want to just give AI unlimited agency,” Salesforce chief ethical and humane use officer Paula Goldman told Axios. “You want it to be built on a set of guardrails and thresholds and tested processes. That’s where you’re going to get good results, and otherwise, you’re inviting a lot of risk for your company.”
“AI governance is about much more than doing the right thing for the sake of it. It is about being able to fully benefit from a transformative technology by ensuring that its implementation is founded on practices that are directly aimed at delivering the best possible outcomes,” said Eduardo Ustaran, a partner at Hogan Lovells.
And Wharton professor Ethan Mollick, referring to o1 specifically, reminded us that knowing how useful it really is compared with other chatbots will take extensive analysis by experts in fields that demand deep expertise.