
Every Tool In The Toolbox: AI Regulations That Aren’t Being Talked About

Deloitte

Whether you view artificial intelligence (AI) as a miracle of human innovation or a harbinger of the robot takeover, one thing is certain: AI is moving and it’s moving fast. And at the upcoming World Economic Forum Annual Meeting—where generative AI’s role in the economy and society will take center stage—the drive to sort out AI regulation is certain to be a major source of debate.

It’s not that governments aren’t taking action on AI. The European Union (EU) reached a provisional agreement on its Artificial Intelligence Act just last month, and the US government released its executive order on AI a few months prior. But opinions differ widely on the impact of the EU agreement, and many consider the US order to be only a first step. And while AI has been in the news for years, the swift rise of GenAI tools like ChatGPT has sparked a more intense public debate and call for regulation.

According to a recent Deloitte review of more than 1,600 regulations and policies from 69 countries and the EU, governments have a range of ways to shape AI’s impact on citizens and businesses right now, without imposing any new regulations. All it takes is a more expansive view of their role in the market and of the regulations already on the books.

Taking action on AI with buying power and infrastructure

Some of the most effective ways for governments to shape an emerging technology’s impact on citizens and businesses do not involve regulation at all. In fact, over the years, governments have repeatedly exerted influence over emerging technologies through two key levers: buying power and infrastructure development.

In most countries, the government is one of the largest users of technology, and that extensive buying power can create large market incentives. When the government requires that certain standards be met before it will purchase a particular product or service, it often makes sense for the entire industry to simply adopt those standards across the board; that is how impactful government can be as a buyer. For example, even though the US federal government’s roughly US$8 billion in cloud spending was only a fraction of the market, the government was still one of the largest single purchasers, and it was able to move the whole industry to adopt the data protection standards it required. And with AI, the recent US executive order may indicate plans to leverage purchasing power to gain voluntary adherence to certain standards.

Governments can also help shape emerging technologies by providing the infrastructure needed for development. Highway systems are physical infrastructure built by governments to support commerce, while government education and workforce development programs help build the skills businesses need to leverage a new technology. With GenAI, governments can do more to develop the technical infrastructure that supports the technology, such as compute-sharing platforms or representative training data sets, while at the same time steering it toward desirable outcomes.

The Government of Canada launched an initiative that is already doing this. Innovative Solutions Canada is designed to stimulate technology research, development, and commercialization of Canadian innovations, helping startups and small-to-medium-sized businesses overcome testing and development hurdles. Not only do the innovations often help improve government operations, but the initiative also supports inclusion goals by increasing the participation of underrepresented business owners (women, First Nations, youth, LGBT+, and people with disabilities) in government procurement.

Looking to overlooked regulations

Another way governments can shape the impact of a fast-moving technology like AI is by applying regulations that focus on “AI-adjacent” issues, such as data privacy, cybersecurity, and intellectual property. Take data protection agencies, which have regulatory powers to help protect citizens’ data privacy. While these agencies may not have AI or machine learning named specifically in their charters, the central role of data in training and using AI models makes them an important AI-adjacent tool.

The same can hold true for consumer protection, employment, anti-discrimination, and competition laws. In the United States, for example, the executive order on AI explicitly reiterates that AI and AI-enabled products must comply with existing consumer protections covering bias, discrimination, privacy, and more. Even small modifications to AI-adjacent legislation can go a long way toward shaping the responsible development of AI.

A clear path forward

When it comes to emerging tech, the real enemy, ultimately, is regulatory uncertainty. While policymakers continue to debate the why and how of regulating AI, innovators are left with a lack of clarity that hinders the application of this game-changing technology. How to balance AI regulation that protects society with continued innovation will be a key question participants at Davos will be asked to address.

One response is for governments to lean into their roles as buyer and infrastructure provider and to consider AI-adjacent regulations. These actions can be critical to reducing the uncertainty that comes with regulating fast-moving technologies. Now is the time to use every tool in the toolbox.

To learn more about government, regulation, and AI, check out The AI regulations that aren’t being talked about, visit the Global Government and Public Services site on Deloitte.com, or see here for more on Deloitte’s presence at Davos.