AI-Generated Voice Copies
In recent years, and even in recent months, artificial intelligence (AI) has advanced by leaps and bounds, bringing new capabilities and convenience to our everyday lives. One such advancement is the ability to create convincing AI-generated copies of human voices. A good example is Samsung's recently announced "Custom Voice Creator" feature for its Bixby mobile assistant. The feature lets a user record themselves talking; Bixby then analyzes the recording and creates an AI-generated copy of their tone and voice. While this technology may seem exciting and innovative, it also raises concerns about the potential dangers of an attacker obtaining a copy of your voice for malicious purposes.
The Risks of AI-Generated Voice Copies
The ability to create AI-generated voice copies introduces several risks that could compromise the security and privacy of individuals' personal data. Hackers could use these voice clones for nefarious purposes such as impersonation, voice phishing, or social engineering attacks. For example, a hacker could use a voice clone to impersonate a user and gain unauthorized access to their accounts, change banking information, or extract sensitive information. This could result in financial loss, reputational damage, and legal consequences for the victims.
Most small businesses look at emails requesting an update to direct deposit information with some degree of skepticism. However, what if an employee called and asked for it to be updated? Most people would not think twice. With new voice technologies, scammers can change their voices in real time to match the person they are impersonating. A closely related example: a vendor calls and says they have a new payment method on file. Instead of fraud in the low thousands of dollars, losses could reach into the tens of thousands.
How Can You Protect Yourself?
So, how can you protect yourself from the dangers of voice cloning?
Multi-Factor Authentication (MFA): Don't act surprised. You had to have known MFA was going to come up somehow. A second channel should always be used to confirm sensitive requests. For the examples above, an HR employee could require that the caller read back a code texted to the cell phone number on file or, if possible, confirm the request quickly in person (see the first sketch after this list).
Source Verification: Pay attention to where calls are coming from. Incoming caller IDs can unfortunately be spoofed, so maintain pre-collected contact information that lets you hang up and call the person back at a known-good number.
User Education: Educate employees about the risks of voice cloning and how to identify potential voice cloning attacks. This includes avoiding sharing sensitive information over the phone without proper verification, being cautious of unexpected phone calls or messages from known contacts, and being aware of the limitations and potential risks of voice-controlled systems.
Seasoning Periods: A required waiting period between the submission of a request and its implementation should be in place to allow for thorough fraud detection. This designated window, commonly referred to as a "seasoning period," should incorporate notification mechanisms across multiple channels, such as SMS and email, to promptly inform the relevant parties of any pending changes (a second sketch below illustrates the idea).
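To make the MFA read-back idea concrete, here is a minimal Python sketch. The function names, phone number, and five-minute expiry are illustrative assumptions, not a prescribed implementation; a real system would send the code through your SMS provider's API rather than the placeholder shown here.

```python
import secrets
import time
import hmac

CODE_TTL_SECONDS = 300  # assumption: codes expire after five minutes

def issue_verification_code() -> tuple[str, float]:
    """Generate a short one-time code and record when it was issued."""
    code = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit, cryptographically random
    return code, time.time()

def send_sms(phone_number_on_file: str, code: str) -> None:
    """Hypothetical placeholder: in practice, call your SMS provider's API."""
    print(f"Texting code {code} to the number on file: {phone_number_on_file}")

def verify_read_back(expected_code: str, issued_at: float, spoken_code: str) -> bool:
    """Accept the request only if the caller reads back a fresh, matching code."""
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False  # stale code: start the process over
    # constant-time comparison avoids leaking how many digits matched
    return hmac.compare_digest(expected_code, spoken_code.strip())

# Example flow: HR receives a call asking to change direct deposit details.
code, issued_at = issue_verification_code()
send_sms("+1-555-0100", code)                   # goes to the number already on file
print(verify_read_back(code, issued_at, code))  # True only for the real employee
```

The key design point is that the code travels over a channel the attacker does not control: a cloned voice on the phone line cannot read back a code that was texted to the real employee's device.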
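The seasoning period can be sketched the same way. The three-day hold, class names, and notification stub below are assumptions for illustration; the point is simply that a change is queued, announced over multiple channels, and applied only after the hold window passes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

SEASONING_PERIOD = timedelta(days=3)  # assumption: tune the hold to your risk tolerance

@dataclass
class PendingChange:
    description: str  # e.g. "Update vendor payment method"
    submitted_at: datetime
    notified: bool = False

    def notify(self) -> None:
        """Hypothetical placeholder: fan out to SMS, email, etc., so the real
        account owner has time to dispute a fraudulent request."""
        effective = self.submitted_at + SEASONING_PERIOD
        print(f"NOTICE: '{self.description}' takes effect after {effective}")
        self.notified = True

    def ready(self, now: datetime) -> bool:
        """A change may be implemented only once the seasoning period has elapsed."""
        return now - self.submitted_at >= SEASONING_PERIOD

# Example: a payment-method change is requested today.
change = PendingChange("Update vendor payment method", submitted_at=datetime.now())
change.notify()
print(change.ready(datetime.now()))                      # False: still in the hold window
print(change.ready(datetime.now() + timedelta(days=4)))  # True: safe to implement
```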
Protect Your Voice
The advancement of AI-generated voice cloning brings both opportunities and risks. While it offers new capabilities and convenience, it also raises concerns about potential malicious use by hackers. To protect against these risks, Managed Service Providers (MSPs) can play a crucial role in safeguarding personal data and mitigating vulnerabilities. By implementing measures such as multi-factor authentication, source verification, user education, and seasoning periods, MSPs can help companies protect their privacy and security in the era of AI-generated voices.