this post was submitted on 12 Apr 2024
66 points (100.0% liked)


The cat is out of the bag, and despite many years of warning before this and similar technology became widely available, nobody was really prepared for it - and everyone is acting solely in their own best interests (or what they believe those interests to be). I think the biggest failure is that, despite the warning signs appearing long before, not a single country enacted legislation that could meaningfully protect people, their identities and their work, while still leaving enough room for research and for beneficial uses of generative AI (or at least for finding beneficial use cases).

In a way, this is the flip side of the coin of providing such easy access to cutting edge tech like machine learning to everyone. I don't want technology itself to become the target of censorship, but where it's being used in a way that harms people, like the examples used in the article and many more, there should be mechanisms, legal and otherwise, for victims to effectively fight back.

[–] [email protected] 9 points 2 months ago (4 children)

And we thought identity theft was shitty before. I hope we'll have better tools to identify AI voices in the future. In some cases right now, I have a hard time telling an actual person from a faked voice.

[–] [email protected] 9 points 2 months ago (3 children)

This problem cannot be solved by tools alone, because the same detection tools can be used to make AI-generated content more realistic (adversarial training): any detector a generator can query becomes a training signal for evading it.
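To make the dynamic concrete, here is a toy sketch in Python. Everything in it is made up for illustration: the "acoustic feature" is just the mean amplitude of a clip, and the detector is a simple threshold test. The point it demonstrates is structural, not acoustic: once a generator can query a detector, it can adjust its output until the detector stops firing.

```python
import random

random.seed(0)

def feature(sample):
    # toy "acoustic feature": mean amplitude of the clip (illustrative only)
    return sum(sample) / len(sample)

# real clips centered at 0.0, fake clips centered at 0.8
real = [[random.gauss(0.0, 1.0) for _ in range(50)] for _ in range(200)]
fake = [[random.gauss(0.8, 1.0) for _ in range(50)] for _ in range(200)]

real_mean = sum(feature(s) for s in real) / len(real)

def detector(sample, threshold=0.4):
    # flags a clip as fake if its feature deviates too far from real data
    return abs(feature(sample) - real_mean) > threshold

def detection_rate(samples):
    return sum(detector(s) for s in samples) / len(samples)

before = detection_rate(fake)

# "adversarial" step: the generator uses the detector's own verdict as a
# training signal, shifting its output until it no longer trips the check
adapted = []
for s in fake:
    shift = real_mean - feature(s)
    adapted.append([x + shift for x in s])

after = detection_rate(adapted)
print(f"detected before adaptation: {before:.0%}, after: {after:.0%}")
```

Real detectors and generators are neural networks rather than threshold tests, but the loop is the same: publish a detector, and it can be folded into the generator's training objective.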

[–] [email protected] 2 points 2 months ago

I'd honestly go one step further and say that the problem cannot be fully solved period.

There are limited uses for voice cloning: commercial (voice acting), malicious (impersonation), accessibility (TTS readers), and entertainment (porn, non-commercial voice acting, etc.).

Of all of these, only commercial use can really be regulated away, since corporations tend to be risk-averse. Accessibility use is mostly a non-issue, because it usually doesn't matter whose voice is used as long as it's clear and understandable.

Then there's entertainment, which is both the most visible use and arguably the least likely to disappear. Long story short, convincing voice cloning is easy: there are cutting-edge projects for it on GitHub, written by a single person, trained on a single PC, and capable of running locally on average hardware. People are going to keep using it, just as they used Photoshop to swap faces and manual audio editing software to mimic voices in the past. We're probably better off accepting that this usage is here to stay.

And lastly, malicious usage: in courts, in scam calls, in defamation campaigns, and so on. There's a strong incentive for malicious actors to develop and improve these technologies. We should absolutely try to find ways to limit such use, but it will be an eternal cat-and-mouse game. Our best bet is to trust voice recordings less as a society and, for legal purposes, to develop some kind of cryptographic signature that confirms whether a recording was taken on a certified device. Such signatures are bound to be attacked, especially in high-profile cases, but they should at least limit the damage.
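A minimal sketch of the signing idea, using Python's standard library. This uses a symmetric HMAC purely as a stand-in: a real provenance scheme (C2PA-style content credentials, for instance) would use an asymmetric key pair held in the device's secure hardware, so that verifiers never need the secret. The device key and clip bytes below are invented for the example.

```python
import hashlib
import hmac

# hypothetical per-device secret; a real scheme would use an asymmetric
# key pair in secure hardware so verifiers never hold the secret
DEVICE_KEY = b"example-device-key"

def sign_recording(audio_bytes: bytes) -> str:
    """Produce a tag binding the recording to the (simulated) device key."""
    return hmac.new(DEVICE_KEY, audio_bytes, hashlib.sha256).hexdigest()

def verify_recording(audio_bytes: bytes, tag: str) -> bool:
    """Check the tag; any edit to the audio invalidates it."""
    expected = hmac.new(DEVICE_KEY, audio_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

clip = b"\x00\x01\x02\x03"  # stand-in for raw audio samples
tag = sign_recording(clip)

untampered = verify_recording(clip, tag)            # True: original clip
tampered = verify_recording(clip + b"\xff", tag)    # False: edited after capture
print(untampered, tampered)
```

The scheme proves a recording hasn't been altered since capture; it cannot prove the captured audio itself was genuine (e.g. a cloned voice played back at a certified microphone), which is why reduced societal trust in recordings still matters.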
