Advertisement
Advertisement
Artificial intelligence
Get more with myNEWS
A personalised news feed of stories that matter to you
Learn more
ChatGPT, from Microsoft-backed OpenAI, has ignited a generative AI arms race, and the technology is now having cybersecurity implications. Photo: AP

Microsoft’s head of Responsible AI flags cybersecurity dangers and benefits of the new tech at HSBC summit

  • Generative AI can be used to find new types of attacks, but companies are also using the tech to assess these threats, said Microsoft’s Sarah Bird
  • The tech community is calling for more regulatory clarity, as generative AI can be applied to different industries with disparate regulations, she added
The use of generative artificial intelligence (AI), the powerful tool behind OpenAI’s ChatGPT, could push the capabilities of cyberattacks to new heights while also offering new defence mechanisms, but most organisations are still learning to harness the tool, according to one of Microsoft’s leading AI experts.
“AI is an incredibly powerful technology, and so it’s unfortunately a very exciting tool, for example, in cybersecurity for threat actors,” Sarah Bird, Microsoft’s chief product officer of Responsible AI, said on Wednesday during a panel discussion at the Global Investment Summit organised by HSBC in Hong Kong.

Amid a frenzy of AI development worldwide, international technology companies are trying to speed up research and development as they push to develop their own large models in what has become a highly competitive field. But Bird warned it is also crucial to think about “how to build with the technology responsibly and safely”.

“Like any new technology … [AI] has some limitations,” Bird said.

Microsoft touts booming enterprise AI demand in Hong Kong amid cloud push

AI can generate harmful content and code, according to Bird, possibly making systems more susceptible to new types of attacks, such as prompt injection attacks and jailbreaking, which allow attackers to bypass software restrictions.

Bird noted, though, that AI can be both the cause of and solution to tackling these new cybersecurity challenges. Microsoft is currently using AI to help security analysts assess the number of threat signals in an attack to help the company respond more effectively, according to Bird.

“So we’re gonna see a new level of attack and defence because of this technology,” she said.

Another challenge in adopting generative AI tools is varying regulations across different industries and countries, Mark McDonald, the head of data science and analytics for the global research arm of HSBC, Hong Kong’s biggest commercial bank, mentioned in the same panel.

“We have seen multiple regulations focus on the area,” MacDonald said. It has become very difficult for global organisations with businesses across multiple regions to comply with these disparate rules, he added.

The tech community is calling for more clarity and consistency in the regulation of newly emerging technologies. Bird said regulators should think about the whole ecosystem when formulating new regulations, as generative AI can be applied in many sectors – including highly regulated industries such as financial services and healthcare – each with their own requirements.

01:58

China denies accusations of state-sponsored hacking from US, UK and New Zealand

China denies accusations of state-sponsored hacking from US, UK and New Zealand

“One of the challenges is the regulations are moving quickly,” Bird said. “They’re all taking different approaches.”

Educating regulators in fields in which they may not have first-hand knowledge is important, according to the Microsoft executive.

“Frankly, a lot of them just don’t have experience with the technology or the complex practices required for that,” she said. “So I have an enormous urgency to go and educate around this space if people don’t understand what actually works and what doesn’t work.”

Post