This week, WIRED reported that the Yahoo Boys, a group of prolific scammers, are operating openly on major platforms like Facebook, WhatsApp, TikTok, and Telegram. Evading content moderation systems, the group organizes and carries out criminal schemes ranging from scams to sextortion.
On Wednesday, researchers published a paper detailing a new AI-based method for detecting the “shape” of suspected money laundering activity on a blockchain. The researchers—from the cryptocurrency tracing firm Elliptic, MIT, and IBM—collected patterns of bitcoin transactions that flowed from known scammers to exchanges where dirty crypto could be turned into cash, then used that data to train an AI model to flag similar patterns.
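The general idea—though not the paper’s exact method—can be illustrated with a toy sketch: describe each transaction subgraph that ends at an exchange deposit with a few hand-crafted features, label examples using known-scammer cases, and fit a standard classifier. The feature names and data below are entirely hypothetical.

```python
# Illustrative sketch only: a toy classifier over hand-crafted features of
# transaction subgraphs. The actual Elliptic/MIT/IBM work operates on full
# bitcoin transaction graphs; these features, labels, and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1_000

# Synthetic features describing the "shape" of a subgraph ending at an exchange:
# [hops_to_exchange, fan_in, fan_out, total_btc]
X = np.column_stack([
    rng.integers(1, 10, n),    # hops between source wallet and exchange
    rng.integers(1, 50, n),    # number of inputs feeding the path (fan-in)
    rng.integers(1, 50, n),    # number of outputs splitting funds (fan-out)
    rng.lognormal(0, 1.5, n),  # total value moved, in BTC
])
# Hypothetical labels: 1 = subgraph resembles known-scammer cash-out patterns
y = rng.integers(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```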
Governments and industry experts are sounding the alarm about the potential for major airline disasters due to increasing attacks against GPS systems in the Baltic region since the start of the war in Ukraine. The attacks can jam or spoof GPS signals, and can result in serious navigation issues. Officials in Estonia, Latvia, and Lithuania blame Russia for the GPS issues in the Baltics. Meanwhile, WIRED went inside Ukraine’s scrappy and burgeoning drone industry, where about 200 companies are racing to build deadlier and more efficient autonomous weapons.
An Australian firm that provided facial recognition kiosks for bars and clubs appears to have exposed the records of more than 1 million patrons. The incident underscores the risks of handing over your biometric data to businesses. In the US, the Biden administration is asking tech companies to voluntarily pledge to make “good-faith” efforts to implement critical cybersecurity improvements. We also reported this week that the administration is revamping its strategy for protecting the nation’s critical infrastructure from hackers, terrorists, and natural disasters.
But that’s not all. Each week, we round up the security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
A government procurement document uncovered by The Intercept indicates that two major Israeli weapons manufacturers are required to use Google and Amazon for any cloud-based services. The report undercuts Google’s repeated assertions that the technology it provides to Israel does not serve military purposes—including the ongoing bombardment of Gaza, which has killed more than 34,000 Palestinians. The document includes a list of Israeli companies and government agencies “mandated for purchase” of any Amazon and Google cloud services. Among them are Israel Aerospace Industries and Rafael Advanced Defense Systems, the latter the maker of the feared “Spike” missile, reportedly used in the April drone strike that killed seven World Central Kitchen aid workers.
In 2021, Amazon and Google signed a contract with the Israeli government under a joint venture known as Project Nimbus. Under the agreement, the tech giants provide the Israeli government, including the Israel Defense Forces, with cloud services. In April, Google employees staged sit-ins against Project Nimbus at offices in Silicon Valley, New York City, and Seattle. The company fired roughly 30 workers in response.
A report by Notus revealed that a mass surveillance tool called TraffiCatch is being used at the border to track people’s locations in real time. The tool intercepts wireless signals emitted by devices such as smartwatches, earbuds, and vehicles, then associates those signals with vehicles identified by license plate readers in the same area. A representative of the sheriff’s office in Webb County, Texas, confirmed that it uses the technology to detect devices in places they are not authorized to be, in order to locate intruders.
While some states require law enforcement agencies to obtain warrants before deploying devices that mimic cell towers to trick phones into handing over data, it remains legally unsettled whether technologies like TraffiCatch, which passively collect wireless signals from the environment, require the same. The case underscores how signals intelligence technology, once the exclusive domain of militaries, is becoming increasingly accessible to local governments and the public.
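As a rough illustration of the correlation step such tools rely on—not TraffiCatch’s actual implementation, and with entirely hypothetical field names, thresholds, and data—pairing passively collected signal sightings with license plate reads by shared location and a small time window might look like this:

```python
# Illustrative sketch only: correlating device-signal sightings with plate reads.
# All identifiers, sites, and timestamps below are made up for demonstration.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SignalSighting:
    device_id: str      # e.g., a Bluetooth or Wi-Fi hardware identifier
    location: str       # sensor site where the signal was intercepted
    seen_at: datetime

@dataclass
class PlateRead:
    plate: str
    location: str
    seen_at: datetime

def correlate(signals, plates, window=timedelta(seconds=30)):
    """Pair each plate read with device signals seen at the same site within `window`."""
    matches = []
    for p in plates:
        for s in signals:
            if s.location == p.location and abs(s.seen_at - p.seen_at) <= window:
                matches.append((p.plate, s.device_id))
    return matches

signals = [SignalSighting("AA:BB:CC:01", "checkpoint-3", datetime(2024, 5, 1, 12, 0, 5))]
plates = [PlateRead("TX-1234", "checkpoint-3", datetime(2024, 5, 1, 12, 0, 0))]
print(correlate(signals, plates))  # [('TX-1234', 'AA:BB:CC:01')]
```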
The Washington Post reported that an officer of India’s intelligence service, the Research and Analysis Wing, was allegedly involved in a foiled plot to assassinate one of Indian Prime Minister Narendra Modi’s most prominent critics in the United States. India’s foreign ministry dismissed the allegations in the Post’s report, while the White House said it was taking the matter very seriously. The alleged plot to kill the Sikh separatist Gurpatwant Singh Pannun, a dual US-Canadian citizen, was first disclosed by US authorities last November.
Previously, Canadian authorities said they had obtained “credible” intelligence linking the Indian government to the killing of another separatist leader, Hardeep Singh Nijjar, who was gunned down outside a Sikh temple in a Vancouver suburb last summer.
US lawmakers have introduced a bill that would establish a new wing of the National Security Agency dedicated to investigating threats against AI systems—or “counter-AI.” The bipartisan bill, introduced by senators Mark Warner, a Democrat, and Thom Tillis, a Republican, would also require agencies including the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) to track breaches of AI systems, whether successful or not. (NIST currently maintains the National Vulnerability Database, a repository of vulnerability data, while CISA oversees the Common Vulnerabilities and Exposures Program, which similarly identifies and catalogs publicly disclosed vulnerabilities.)
The Senate bill, known as the Secure Artificial Intelligence Act, would expand the government’s threat monitoring to cover “adversarial machine learning”—a term essentially synonymous with “counter-AI”—which refers to techniques for subverting AI systems and “poisoning” their data that differ sharply from traditional forms of cyberattack.
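For a sense of what “poisoning” means in practice—using a toy, synthetic example rather than anything from the bill or a documented attack—a simple label-flipping attack on training data can be sketched like this:

```python
# Illustrative sketch only: flip a fraction of training labels ("poisoning")
# and compare a model trained on clean labels with one trained on poisoned ones.
# All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Poison the training set by flipping 30% of its labels
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1_000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```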