After a failed Linux backdoor attempt grabs headlines, open-source leaders warn of more attacks

[Photo: Getty Images. Caption: Humans are among the vulnerabilities in open-source software.]

The beauty of open-source software lies in the dispersed communities that develop and maintain the code, often thanklessly. But while there’s strength in this approach, it can also present risks.

This was recently made clear with the discovery of a backdoor that had been inserted into XZ Utils, a data-compression toolkit that's baked into many Linux distributions. Discovered by a Microsoft engineer named Andres Freund, the backdoor could have enabled a major cyberattack with global consequences, as corporate servers commonly run on Linux.

A couple of weeks after Freund's discovery, we are none the wiser about the real identity of the culprit, known to the community only as "Jia Tan." The operation was probably state-sponsored, but either way, "Jia Tan" spent years getting involved with, and eventually largely taking over, the XZ Utils project.

Yesterday, open-source leaders warned that the XZ Utils incident probably wasn’t a one-off. In a blog post, senior staffers at the Open Source Security Foundation and the OpenJS Foundation, which steers the development of many JavaScript technologies that underpin the web, called on everyone maintaining open-source projects to “be alert for social engineering takeover attempts, to recognize the early threat patterns emerging, and to take steps to protect their open source projects.”

According to the post, somebody recently tried to persuade the OpenJS Foundation to draft them as a maintainer of a popular JavaScript project (it’s not clear which one) in order to “address any critical vulnerabilities.” The modus operandi was apparently similar to that employed by Jia Tan, and the foundation spotted a “similar suspicious pattern” in two other JavaScript projects that it doesn’t host, so it alerted the relevant project leaders and U.S. authorities.

"Open-source projects always welcome contributions from anyone, anywhere, yet granting someone administrative access to the source code as a maintainer requires a higher level of earned trust, and it is not given away as a 'quick fix' to any problem," wrote OpenJS Foundation executive director Robin Bender Ginn and Open Source Security Foundation general manager Omkhar Arasaratnam.

“These social engineering attacks are exploiting the sense of duty that maintainers have with their project and community in order to manipulate them,” they added. “Pay attention to how interactions make you feel. Interactions that create self-doubt, feelings of inadequacy, of not doing enough for the project, etc. might be part of a social engineering attack.”

Endor Labs chief security advisor Chris Hughes told Computer Weekly he wasn't surprised that there have been more attempts to infiltrate open-source projects in this fashion.

“We can likely suspect that many of these [attacks] are already underway and may have already been successful but haven’t been exposed or identified yet,” he said. “Most open-source projects are incredibly underfunded and run by a single or small group of maintainers, so utilizing social engineering attacks on them isn’t surprising, and given how vulnerable the ecosystem is and the pressures maintainers are under, they will likely welcome the help in many cases.”

A reminder, if one were needed, of how much technical vulnerability we humans present. More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

Microsoft’s $1.5 billion G42 investment. Microsoft has invested $1.5 billion in Abu Dhabi’s G42, the largest AI firm in the UAE. As Bloomberg reports, this follows G42’s agreement to end its presence in China and its use of Chinese technology so it could retain access to U.S. technology, most notably Nvidia’s market-dominating AI chips. Under the new deal, Microsoft president Brad Smith becomes a G42 board member, and G42 will use the Azure cloud.

U.K. AI regulations. Remember when the U.K. said it wouldn't rush to legislate over AI, all the way back in, er, February? Now the Financial Times reports that new legislation is indeed being crafted to ensure that tech firms developing large language models give the government access to the algorithms and demonstrate safety compliance. "Officials are exploring moving on regulation for the most powerful AI models," one unnamed source told the newspaper. In related news, the British government has announced new legislation that will make creating a sexually explicit deepfake image a criminal offense.

X in Brazil. Less than two weeks after X owner Elon Musk declared he would reinstate accounts that Brazil’s Supreme Court ordered blocked, the company has decided that actually it will comply with the court’s orders, along with those of the Brazilian Superior Electoral Court. As Reuters reports, Supreme Court Justice Alexandre de Moraes had opened an obstruction-of-justice probe into Musk over the showdown. In other X news, Musk is reportedly planning to charge new users a fee to start posting, as a measure against X’s bot scourge.

ON OUR FEED

“[Trump Media & Technology Group] may be subject to greater risks than typical social media platforms because of the focus of its offerings and the involvement of President Donald J. Trump.”

—The Truth Social parent uses an SEC filing to explain various ways in which Trump’s involvement threatens the company. Wired lists the stated risks, which range from Trump’s potential criminal conviction and his companies’ history of filing for bankruptcy protection to the possibility that he might decide to focus on posting elsewhere instead. The filing’s suggestion that Trump could sell his stake caused TMTG’s share price to drop by more than 18% yesterday.

IN CASE YOU MISSED IT

Fortune partners with Accenture on AI tool to help analyze and visualize the Fortune 500: ‘You can’t ask a spreadsheet a question,’ by Marco Quiroz-Gutierrez

Tesla’s top engineering exec, who led development of critical technologies, has resigned after 18 years, adding to concerns about who will succeed Elon Musk as CEO, by Bloomberg

Asking Big Tech to police AI is like turning to ‘oil companies to solve climate change,’ AI researcher says, by Eleanor Pringle

Expert argues AI won’t lead to mass layoffs for workers anytime soon: ‘Look at when we were promised fully autonomous cars,’ by Christiaan Hetzner

AI could gobble up a quarter of all electricity in the U.S. by 2030 if it doesn’t break its energy addiction, says Arm Holdings exec, by Christiaan Hetzner

BEFORE YOU GO

Ad transparency fails. How are Big Tech platforms doing on the ad-transparency front, especially now that EU law requires them to provide searchable databases of the ads they carry? Very badly, according to researchers at Mozilla and anti-disinformation outfit CheckFirst. "We find a huge variation among the platforms, but one thing is true across all of them: None is a fully functional ad repository, and none will provide researchers and civil society groups with the tools and data they need to effectively monitor the impact of [very large online platforms and search engines] on Europe's upcoming elections," they wrote in a report cited by TechCrunch.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.