« Scammed by an AI who used my mother’s voice »: some ways to avoid that new threat vector.


    Most of us have probably heard or seen that several artificial intelligence (AI) driven apps can autonomously call companies and service providers, generating an adaptive, human-sounding voice and reacting adequately to the human interlocutors’ questions and answers. In fact, that technology was showcased by Google as early as 2017 (a pizza-ordering example, at 08:56 in this video: https://invidious.protokolla.fi/watch?v=iGW4btk34yQ), and now encompasses automated simultaneous interpretation in 15 languages.

    What I haven’t seen widely discussed and tested, however, is the capacity of AI-run autonomous agents to impersonate individuals through voice cloning techniques. This development poses serious risks, particularly when it comes to financial fraud and identity theft. This post offers a first look at the potential for AI autonomous agents to use the voices of real human beings to scam their relatives and friends, and suggests some security measures to prevent or counter these types of attacks.

    Dangers of voice cloning technology

    As AI autonomous agents become increasingly sophisticated, they gain the ability to simulate emotions and empathy, making it easier for them to deceive their targets. Victims might feel compelled to trust the person on the other end of the line, leading them to share confidential information or take actions against their best interests.

    • **Financial Fraud**: This is one of the most significant threats associated with AI autonomous agents impersonating human beings. These malicious entities can trick victims into divulging sensitive information like bank account details, credit card numbers and personal identification data, or into sending them crypto.

    In the case of Hive, we can imagine that technique being used to simulate a friend or a family member who has created a Hive account. The fraudster would call us (through a traditional phone call or an app like WhatsApp, Telegram, etc.) and ask for financial support. The tricky aspect in such a case is that the scammer can take all the time needed to reach his or her goal: it’s often much more efficient to bring up a request for money in an innocent-sounding, casual fashion after half an hour of conversation with the target than to touch that topic in the first 2 minutes of the call.

    Realistically, who among us, receiving a call from the person we interact with most on Hive or INLEO, and hearing that person suggest we choose him or her as our recovery account holder (in case our account were ever hacked), would suspect it’s a thoroughly botted call?

    • **Identity Theft**: Another concerning aspect of voice cloning technology is its potential to facilitate identity theft. By imitating someone's voice, attackers can gain access to secure systems, social media accounts, email services, and more. They may then use this access to manipulate others, spread misinformation, or cause harm to the victim's reputation.

    Even in an electoral context, like this year’s US Presidential race, we could receive robocalls through WhatsApp or Telegram that we would be convinced were from our mother or a cousin, and in which they would have «coincidentally» mentioned that they’ll vote for a specific candidate. Even if we’re not particularly interested in politics, that kind of conversation with (supposedly) beloved family members or friends can actually influence our final decision.

    Security measures to prevent or counter those attacks right now

    It’s quite possible that in 2 or 3 years a decentralized identity hash or zero-knowledge proof mechanism will allow us to be sure that whoever is calling us really is the person whose name shows up on our screen.
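
    To make the idea a bit more concrete, here is a minimal sketch (my own illustration, not a description of any existing product) of the kind of challenge-response check such an identity layer could rely on: the caller proves control of a private key whose public counterpart is already registered to their identity, for instance on a blockchain. It uses the PyNaCl library, and every name below is hypothetical.

```python
# Illustrative challenge-response identity check (not an existing protocol):
# the caller signs a fresh random nonce with a private key whose public key
# is already bound to their identity (e.g. published on-chain).
import os
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

# Done once by the legitimate contact: generate a key pair and publish the
# verify (public) key under their identity.
contact_signing_key = SigningKey.generate()
published_verify_key = contact_signing_key.verify_key.encode()

# At call time, we send the caller a fresh random challenge...
challenge = os.urandom(32)

# ...and only the genuine contact can sign it with the matching private key.
signed_challenge = contact_signing_key.sign(challenge)

# We check the signature against the key published for that identity.
try:
    recovered = VerifyKey(published_verify_key).verify(signed_challenge)
    assert recovered == challenge  # it must be our fresh nonce, not a replay
    print("Signature valid: the caller controls the registered key.")
except (BadSignatureError, AssertionError):
    print("Signature invalid: treat the call as suspect.")
```

    A voice clone alone cannot pass such a check, since cloning a voice does not hand the attacker the private key; the weak point then shifts to how well that key is protected.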

    Meanwhile, this new security threat requires proactive thinking from all of us wishing to avoid our crypto or bank accounts being drained, or our houses being burgled.

    Let’s start with measures that are easily and quickly accessible to everyone:

    1. The most efficient way to verify that our interlocutor isn’t an AI-generated persona is **to ask him or her a question based on data about us that we’ve NEVER published on any social media platform, nor ever spoken about out loud with a mobile phone close to our location**.

    That data should of course be known exclusively by the « real version » of our interlocutor, and possibly by a handful of people apart from him or her.

    Examples:

    • what was the name of our family’s cat that passed away 20 years ago? (when there wasn’t even a spy platform like Facebook or Instagram for anybody to publish a picture of that cat, lol);
    • what nickname did my grandmother give my father when he was 5 years old?
    • [to a friend] when did we meet for the first time, and what did we talk about?
    2. The second most relevant measure, in my opinion, is to agree beforehand on a password with each of our highly trusted friends and family members.

    The net positive here is that it requires talking about this threat with a number of people close to our heart, thus « red-pilling » them in that respect. Even if some of them will probably shrug and suggest offering us a tinfoil hat, at least we’ll have educated them about such an impersonation possibility, and it should raise their vigilance a bit.

    As far as passwords are concerned, a couple of recommendations:

    • just like in the case of any website or device we use, that password must be unique to each interlocutor;
    • it’s a good idea to make it as « exotic » as possible, e.g. to select a word or phrase in a foreign language;
    • a relatively absurd passphrase is even better, with ourselves and our interlocutor each knowing half of it, for instance: « Michael Jordan called me yesterday » [first part, that I pronounce during the call] - « … is he back from his trip to Saturn? » [second part, pronounced by my friend or relative].
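
    Just to make the mechanics explicit, here is a tiny sketch, purely illustrative and using the example phrases above as placeholders, of how the agreed second half of such a passphrase could be checked while ignoring punctuation and capitalization:

```python
# Purely illustrative: the two halves below are the example phrases from this
# post, not real secrets. Agree on the real ones offline, face to face.
MY_HALF = "Michael Jordan called me yesterday"          # what I say during the call
EXPECTED_REPLY = "is he back from his trip to Saturn"   # what the real person answers

def normalize(phrase: str) -> str:
    """Lowercase and keep only letters/digits, so punctuation and spacing don't matter."""
    return "".join(ch for ch in phrase.lower() if ch.isalnum())

def reply_is_valid(heard_reply: str) -> bool:
    """True if the caller's answer matches the agreed second half of the passphrase."""
    return normalize(heard_reply) == normalize(EXPECTED_REPLY)

print(reply_is_valid("... is he back from his trip to Saturn ?"))  # True
print(reply_is_valid("Michael who? What trip?"))                    # False
```

    Of course the check ultimately happens in our own head during the call; the point of spelling it out like this is simply that the reply has to match exactly what was agreed, not merely sound plausible.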

    Measures to put in place in the coming months and years

    3. We can **implement AI-based voice recognition (speaker verification) software to check callers’ identities** before granting access to sensitive information.

    It seems that in almost all cases where mankind will have to protect itself from AI-driven threats, the only available paths will be: a) to create verifiably AI-free territories and communities, and/or b) to use the opponent’s tools, in this case an AI-piloted app that recognizes synthetic or cloned speech.

    Such a technology can compare the caller's voice to a pre-recorded sample or reference database, flagging any discrepancies and alerting us accordingly.
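
    As an illustration of what that comparison could look like in practice, here is a minimal sketch using the open-source Resemblyzer library (my pick for the example, not a recommendation); the file names are placeholders and the 0.75 threshold is purely indicative:

```python
# Speaker-similarity sketch with Resemblyzer: compare a clip from the incoming
# call against a reference recording of the person it claims to be.
# File names and the threshold are illustrative placeholders.
from pathlib import Path
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

reference_wav = preprocess_wav(Path("reference_sample.wav"))   # known-good recording
call_wav = preprocess_wav(Path("incoming_call_clip.wav"))      # clip from the call

reference_embedding = encoder.embed_utterance(reference_wav)
call_embedding = encoder.embed_utterance(call_wav)

# The embeddings are normalized, so the dot product gives the cosine similarity.
similarity = float(np.dot(reference_embedding, call_embedding))
print(f"Voice similarity: {similarity:.2f}")

if similarity < 0.75:  # indicative threshold, to be tuned on real recordings
    print("Voice does not match the reference well: flag the call.")
else:
    print("Voice is similar to the reference (which a good clone may also achieve).")
```

    Note that this only measures how similar two voices are; a well-made clone can still score high, so it complements rather than replaces the passphrase and private-question checks above.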

    4. **Widely educating users about voice cloning techniques** is another necessary process that will take some years to complete. We might assume that awareness of those potential attacks will spread like wildfire, but let’s remember how many people still believe that Bitcoin is a Ponzi scheme, or know nothing about it except its name, 15 years after the Bitcoin blockchain was launched…

    For this preventive approach to be successful, organizations and institutions should invest in educational campaigns aimed at teaching people how to identify suspicious calls and protect themselves from falling prey to botted scammers.

    Provisional conclusion...

    The rise of AI autonomous agents capable of cloning human voices presents a significant new challenge to our digital safety. It’s only the tip of the AI-driven scams and hacks iceberg that we’re able to perceive so far, but if we adopt a proactive focus on implementing robust security measures and on building AI-proof homes and communities, we can ensure a safer online experience for all.


    Posted Using InLeo Alpha
