(Disclaimer: This is plain reporting and not any attempt to smear the good name of an emerging technology that’s crucial in helping us emerge from the ravages of the global Covid-19 pandemic.)
A research study titled “AI-Enabled Future Crimes” predicts more than 20 criminal offenses arising from the misuse of Artificial Intelligence in the next few years.
Unfortunately, some of those so-called “future crimes” rated “of high concern” are already being committed today.
In 2019, a deepfake app called DeepNude, which used AI to undress clothed women for a fee, was taken down within 24 hours in response to heavy backlash on Twitter. In July this year, a similar bot appeared on the Telegram messaging app, this time producing fake nude images of women and even underage girls. Sensity AI, a cybersecurity company focused on abusive practices in media, estimates that more than 100,000 unwitting women have already been targeted by the new bot.
At the annual meeting of the American Association for the Advancement of Science in Seattle, Washington, USA, last February, a researcher from Duke University reported that a self-confessed pedophile had developed sex robots programmed to re-create a rape scenario, some of which mimic the looks and cries of children.
The consensus is that there’s no law against playing around with an inanimate object, even though such robots represent the literal sexual objectification of women and men.
Aside from deepfakes, also known as audio/video impersonation, the AI-Enabled Future Crimes study tagged the perpetration of the following with the aid of Artificial Intelligence as High Crimes:
• Weaponizing driverless vehicles, such as deploying self-driving vehicles in terrorist attacks;
• Tailored phishing, in which the criminal collects secure information or installs malware via a digital message purporting to be from a trusted party, such as the target’s bank;
• Disrupting AI-controlled systems, with consequences ranging from massive power failures and traffic gridlock to a breakdown of the food logistics infrastructure;
• Large-scale blackmail, which scales up traditional blackmail: the AI harvests information from social media or large personal datasets such as email logs and browser histories, identifies specific vulnerabilities for a large number of targets, then tailors threat messages to each potential victim;
• AI-authored fake news, in which the AI issues propaganda, made to appear to come from trusted sources, in quantities sufficient to bury the true information under its weight.
The study rates these future AI threats as Medium Crimes:
• Military robots, especially autonomous robots, deployed in criminal or terrorist attacks;
• Snake oil, which refers to the sale of fraudulent services under the guise of AI, typically in machine-learning applications;
• Data poisoning, in which the data fed to an AI system is deliberately biased in the attacker’s favor;
• Autonomous assault drones under AI control that swarm to commit a crime while the criminal orchestrator is nowhere near the attack zone;
• Market bombing, in which financial or stock markets are manipulated to damage competitors, currencies, or economic systems.