Twitter put ads next to tweets sharing images of child sexual abuse

Twitter Climate Disinformation (Copyright 2021 The Associated Press. All rights reserved)

Twitter put promoted advertisements next to tweets soliciting child pornography, leading many brands to pull their marketing campaigns from the social media site.

Disney, NBC and Coca-Cola were among more than 30 major advertisers whose promotions appeared next to accounts peddling links to the exploitative material, which was uncovered by cybersecurity group Ghost Data.

These tweets included keywords related to “rape” and “teens,” with one user saying they were “trading teen/child” content. In another example, a user tweeted that they were searching for content of “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet for Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not return multiple requests for comment.


“We’re horrified,” David Maddocks, brand president at Cole Haan, a shoe and accessory brand whose promotions appeared next to those tweets, told Reuters, which first reported on the news.

“Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads.”

Twitter said that the company “has zero tolerance for child sexual exploitation” and is investing more in resources dedicated to child safety, including hiring for new positions to write policy and implement solutions. The company added that it is working closely with its advertising clients and partners to investigate and take steps to prevent the situation from happening again.

Like all social media platforms, Twitter bans depictions of child sexual exploitation, which are illegal in most countries. But it permits adult content generally and is home to a thriving exchange of pornographic imagery, which comprises about 13 per cent of all content on Twitter, according to an internal company document seen by Reuters. Twitter declined to comment.

Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70 per cent of those accounts during the study period, according to the group, which shared its findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s findings in full, but reviewed dozens of accounts that remained online and were soliciting materials for “13+” and “young looking nudes.”

The traffickers often use code words such as “cp” for child pornography and are “intentionally as vague as possible,” to avoid detection, according to the internal documents. The more that Twitter cracks down on certain keywords, the more that users are nudged to use obfuscated text, which “tend to be harder for (Twitter) to automate against,” the documents said.

Nearly all the traders of child sexual abuse material among the accounts identified by Ghost Data marketed the materials on Twitter, then instructed buyers to reach them on messaging services such as Discord and Telegram to complete payment and receive the files, which were stored on cloud storage services like New Zealand-based Mega and Dropbox. Discord, Telegram, Mega and Dropbox all say they take action against users attempting to share such content.

After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 additional accounts from the network, but more than 100 others still remained on the site the following day, according to Ghost Data and a Reuters review.

On Monday, Reuters shared the full list of more than 500 accounts furnished by Ghost Data. Twitter reviewed the accounts and permanently suspended them for violating its rules, Twitter spokesperson Carswell said on Tuesday.

“Twitter needs to fix this problem ASAP, and until they do, we are going to cease any further paid activity on Twitter,” said a spokesperson for Forbes.

“There is no place for this type of content online,” a spokesperson for carmaker Mazda USA said in a statement to Reuters, adding that in response, the company is now prohibiting its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content “reprehensible” and said they are “doubling-down on our efforts to ensure that the digital platforms on which we advertise, and the media buyers we use, strengthen their efforts to prevent such errors from recurring.”

A spokesperson for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said it did not condone the material being associated with its brand and said “any breach of these standards is unacceptable and taken very seriously.”

NBCUniversal said it has asked Twitter to remove the ads associated with the inappropriate content.

Twitter’s transparency reports on its website show it suspended more than 1 million accounts last year for child sexual exploitation. It made about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded non-profit that facilitates information sharing with law enforcement, according to that organization’s annual report.

Additional reporting by Reuters