Here are six terrible technologies that should have never seen the light of day.

1. Facial Recognition

Once confined to the realms of science fiction movies, facial recognition technology is now used in various applications, from Apple’s Face ID system to border security at international airports.

Clearview AI was one of the first facial recognition companies to attract widespread public attention. The company built its database of faces through a controversial method: scraping images from websites and social media. Its software has been used by law enforcement to solve crimes ranging from shoplifting to child abuse.

There is no doubt that facial recognition makes our lives a little easier and safer.

However, there are concerns that the technology could be exploited by bad-faith actors. For example, an oppressive government could misuse it to jail citizens, or a rogue law enforcement officer could use it to stalk someone.

Civil rights organizations, such as Big Brother Watch, have also argued that surveillance technologies like facial recognition pose a dire threat to personal privacy and should be banned outright.

2. Juicero

With $120 million in funding from investors, Juicero had all the trappings of the “next big thing.”

Fresh and healthy cold-pressed juices, dispensed innovatively.

The Wi-Fi-enabled juicer would juice sachets of pre-chopped fruits and vegetables. Your morning orange juice would be delivered at the press of a button.

It all came tumbling down when Bloomberg published an article revealing that the sachets could be squeezed with your bare hands, with the final product indistinguishable from those made using the Juicero device.

For Juicero, it was downhill from there, with sales of its Juicero press and juice packets suspended following the article’s publication.

3. Deepfakes

The face-swapping research underpinning deepfakes dates back to the 1990s. But it wasn't until recently that the technology had a mainstream breakthrough, with deepfake apps like Reface and FakeApp becoming wildly popular.

While replacing someone else’s face with your own in an app might seem like harmless fun, there is a darker side to this trend.

A 2019 report by Deeptrace found that 96% of deepfakes were pornographic. In addition, non-consensual deepfake explicit content circulates in numerous online communities, with both celebrities and ordinary women as victims.

Deepfakes have also been used to misrepresent public officials in videos. For example, in 2020 Extinction Rebellion posted a deepfake on Facebook of Belgian Prime Minister Sophie Wilmès discussing a possible connection between COVID-19 and the climate crisis. Within 24 hours, the video received 100,000 views, and many viewers believed it was genuine.

Scammers are also exploiting voice deepfakes to con people out of their cash. For example, one CEO was tricked into transferring $243,000 to a fraudulent bank account after believing the voice on the phone was his boss's.

4. Smart Baby Monitors

Smart baby monitors seem like the perfect solution for parents. Set up the camera, connect it to an app over Wi-Fi, and you can keep watch over your baby from a distance.

However, there is nothing more frightening for a new parent than hearing a strange voice coming from the same room as their sleeping child.

This is the reality for some parents whose baby monitors were hacked, with strangers broadcasting obscenities through them.

While there are ways to beef up your security and reduce the risk of your baby monitor being hacked, no internet-connected device is truly hack-proof. Could you live with yourself knowing you had invited prying eyes into your home to spy on your child?

5. Electronic Voting

Theoretically, electronic voting seems like the perfect alternative to traditional paper ballots.

It’s fast, and it simplifies the voting process for people in remote locations.

However, the use of electronic voting machines in the democratic voting process is a contentious issue.

One of the main issues is trust. Some voters fear that their vote could be altered, and without a paper record, such tampering is difficult for a human to verify.

Voting machines made by Smartmatic and Dominion, used in the 2020 United States presidential election, were the subject of election fraud accusations. Whether or not the allegations were true, a segment of the American population believed their votes were not counted correctly, which undermined confidence in the democratic process.

On top of that, electronic voting machines, like all computers, are susceptible to hacking and can produce incorrect results. Electronic votes can be altered unnoticed far more easily than paper ballots, which leave a physical trail behind.

When so many things can go wrong, should the most important democratic process be left to a machine when a slower but more trustworthy method already exists?

6. Google Glass

Google Glass was a smart glasses device released in 2013.

It displayed information on an optical display, and users interacted with it through voice commands. The glasses were also equipped with a camera that could take photos and record videos, as well as a touchpad located on the side.

Despite being considered innovative when it was released, it was not without its detractors. Specifically, privacy concerns were raised about recording people using the device without their permission.

Safety concerns were also an issue, with drivers in the UK banned from wearing Google Glass while driving.

Some people also found Google Glass aesthetically unappealing and even a little creepy; many were reluctant to interact with someone who might have a camera pointed at them at all times.

The consumer version of Google Glass was discontinued in 2015, leaving users and critics scratching their heads, wondering what the point of the technology was.

More recently, Facebook announced its smart glasses partnership with Ray-Ban, a move sure to raise the same privacy concerns.

Tomorrow’s Terrible Technologies

Technology continues to make our lives easier, but the risk that we will invent something truly dreadful remains.

Prominent figures in the tech world, like Elon Musk and the late Stephen Hawking, have been vocal about the potential dangers of AI. Elon Musk has even compared it to "summoning the demon."

The best defense against a truly terrible technology is early identification and a contingency plan.