21/10/2022
Brilliant new science in 2022 from the University of Florida against the human-like voice-thieving systems! 👏🏻 👏🏻 👏🏻
The digital sound-alikes / audio "deepfakes" / audio "deep fakes" / voice-thieving synthesis systems are known to have been used for crimes since March 2019. (link)
----
Thank you to the University of Florida and their Florida Institute for Cybersecurity Research (FICS) and very warm thank you to the respective scientists Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler and Professor Patrick Traynor for this ground-breaking science against the voice-thieving systems. May ☮️ be with you.
They published their work in a scientific paper titled
"Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction"
and presented it to peers at the USENIX Security Symposium in August 2022. (link)
The University of Florida researchers' work was funded by the Office of Naval Research.
----
This looks like an awesome start on future automated armor for us humans against the voice-thieving systems!
These methods are based on an extremely innovative application of existing scientific knowledge and on realizing to ask the right questions. The system the University of Florida researchers made to spot the fake synthesized voices, and how this system was designed, implemented, tested and found highly effective against the voice-thieving machines of today, gives us hope in humanity's struggle to stay human despite the menaces of the synthetic human-like fakes.
The Office of Naval Research was a natural source of funding for this innovation, as it is clear that protecting the US Navy's and US Marines' audio communications networks against any voice-forging adversaries is of high importance.
People who were aware of the voice-thieving-machines problem had been waiting for something like this, and it is more than at least I expected. 🥳 🥳 🥳
Cheers American taxpayers and DoD for funding this breakthrough science!
Please get this technology against the fake voices into the hands of humans, to protect humans and humanity! 🫵🏻
~ Juho Kunsola
----
WHAT ABOUT THE MOVING IMAGES?
Please keep funding the struggle to defend against the appearance-thieving synthesis criminals too, e.g. the existing research programs:
👍🏻 Media Forensics (MediFor, since 2016) (link)
👍🏻 Semantic Forensics (SemaFor, since 2019) (link)
And please consider purveying, as a public good, a good AI to police 24/365 the visual forgeries the evil AIs spit out. This could be achieved through e.g.
👍🏻 Reviving Mr. Lionel Hagege's existing system FacePinPoint-dot-com (link), which is currently closed due to lack of financial support 🤜🏻🤛🏻
👍🏻 I independently discovered the same principle that Mr. Hagege had already implemented, on my own on Friday 2019-07-12, and called it APW_AI. The description of APW_AI is basically identical to FacePinPoint's "How it works", but I thought of it specifically against the synthetic digital look-alikes. I had been searching for solutions to these problems arising from the synthetic human-like fakes since 2003.
------
Credits for the republished article:
Post-grad student Logan Blue and Professor Patrick Traynor wrote this article for the laymen, titled "Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices", and licensed it permanently under Creative Commons Attribution-NoDerivatives (CC BY-ND) on Tuesday 2022-09-20 at TheConversation-dot-com. (link)
Juho Kunsola in Finland 🇫🇮, for the Stop Synthetic Filth! wiki, gratefully republished the article from the original on the Stop Synthetic Filth! wordpress, under the clauses of CC BY-ND, and titled it "Amazing method and results from University of Florida scientists in 2022 against the menaces of digital sound-alikes / audio deepfakes".
To detect audio deepfakes, the researchers at the University of Florida developed a technique that measures the acoustic and fluid-dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.
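The general idea of checking a voice against vocal-tract physics can be illustrated with a small sketch. To be clear, this is not the Florida researchers' actual pipeline: the function names, the LPC order, the plain Kelly-Lochbaum lossless-tube model and the demo signal below are all my own illustrative assumptions. Linear predictive coding (LPC) of a speech frame yields reflection coefficients, and those map to cross-sectional area ratios of a tube model of the vocal tract; a detector in this spirit would flag audio whose implied tract shapes no human anatomy could have produced.

```python
import numpy as np

def lpc_reflection(frame, order=10):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns prediction coefficients a (with a[0] == 1) and the
    reflection coefficients k (|k| < 1 for a stable model)."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        k[i - 1] = ki
        a[1:i] = a[1:i] + ki * a[i - 1:0:-1]
        a[i] = ki
        err *= (1.0 - ki * ki)
    return a, k

def tube_area_ratios(k):
    """Kelly-Lochbaum lossless-tube view: each reflection coefficient
    implies the area ratio A[i+1]/A[i] between adjacent tube sections
    of the modelled vocal tract."""
    return (1.0 - k) / (1.0 + k)

# Demo on a crude vowel-like frame (two formant-ish sinusoids + noise),
# sampled at a hypothetical 8 kHz.
rng = np.random.default_rng(0)
t = np.arange(400) / 8000.0
frame = (np.sin(2 * np.pi * 700 * t)
         + 0.5 * np.sin(2 * np.pi * 1200 * t)
         + 0.05 * rng.standard_normal(400)) * np.hamming(400)
a, k = lpc_reflection(frame, order=10)
ratios = tube_area_ratios(k)
# A detector in this spirit would compare `ratios` against ranges
# measured from real human vocal tracts; extreme ratios suggest a
# signal no human speaker could have produced.
print("reflection coefficients:", np.round(k, 3))
print("area ratios:", np.round(ratios, 3))
```

The real system described in the paper goes much further, estimating tract geometry across many sounds and comparing it against measured human anatomy, rather than against a single hand-set plausibility band as hinted at here.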