Facebook and Michigan State University have revealed a new method for identifying deep fake images and tracing them back to their source, or at the very least to the generative model that was used to create them. According to reports surrounding the reveal, the new system uses a complex reverse engineering technique to identify patterns left behind by the AI model that generated a deep fake image.
The system works by running images through a Fingerprint Estimation Network (FEN) to parse out patterns, or fingerprints, in those images. Those fingerprints are effectively built from a set of measurable properties that generative models leave behind in their output: “fingerprint magnitude, repetitive nature, frequency range, and symmetrical frequency response.”
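As a rough sketch of that idea, and not Facebook’s or MSU’s actual code, a fingerprint estimator can be trained with unsupervised losses that nudge its output toward those four properties. The network layout, loss formulations, and every name below are assumptions made for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FEN(nn.Module):
    """Estimates a per-image 'fingerprint': a residual-like signal the
    generative model is presumed to have left behind. (Hypothetical.)"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),  # fingerprint has the image's shape
        )

    def forward(self, image):
        return self.net(image)

def fingerprint_constraints(fp):
    """Illustrative losses for the four quoted properties; the research's
    exact formulations are likely different."""
    magnitude = fp.abs().mean()                    # keep fingerprint magnitude small
    spectrum = torch.fft.fft2(fp).abs()
    low_freq = spectrum[..., :8, :8].mean()        # push energy out of low frequencies
    symmetry = (spectrum - spectrum.flip(-1)).abs().mean()  # symmetrical frequency response
    shifted = torch.roll(fp, shifts=8, dims=-1)    # reward a repetitive texture
    repetition = 1 - F.cosine_similarity(fp.flatten(1), shifted.flatten(1)).mean()
    return magnitude + low_freq + symmetry + repetition
```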
And, after feeding those constraints back through the FEN, the method can detect which images are deep fakes. Detected images are then fed through a second system that separates them by their “hyperparameters,” the settings that define a generative model, which guides the system to learn various generative models on its own.
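In the same hypothetical style, that second stage could be a small parsing network that maps an estimated fingerprint to a continuous embedding of architecture hyperparameters plus a loss-function-type prediction. Every dimension and name here is made up for the sketch:

```python
import torch.nn as nn

class ParsingNetwork(nn.Module):
    """Maps a fingerprint to (a) an embedding of architecture hyperparameters
    and (b) a loss-function-type prediction. (Hypothetical sizes and names.)"""
    def __init__(self, embed_dim=15, n_loss_types=10):  # dimensions are assumptions
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.arch_head = nn.Linear(64, embed_dim)     # regression: hyperparameter embedding
        self.loss_head = nn.Linear(64, n_loss_types)  # classification: loss-function type

    def forward(self, fingerprint):
        h = self.features(fingerprint)
        return self.arch_head(h), self.loss_head(h)

# Usage with the earlier sketch: fingerprint = FEN()(images),
# then arch_embedding, loss_logits = ParsingNetwork()(fingerprint).
```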
This is still in its infancy, but it does move one step closer toward identifying and tracing deep fake images
One of the big setbacks to the current iteration of the system highlights just how new this technology is, and how far it remains from primetime: it can’t detect fake images created by a generative model that it hasn’t been trained on. And there are countless such models in use.
What’s more, this is by no means a finalized method for identifying deep fake images from Facebook and MSU. Not only is there no way to be sure that every generative model is accounted for; there also aren’t any other research studies on this topic, or at least no data sets to build up a baseline for comparison. In short, there’s no way of knowing, for sure, just how good the new AI model is.
The team behind the project reports “a much stronger and generalized correlation between generated images and the embedding space of meaningful architecture hyperparameters and loss function types” than with a random vector of the same length and distribution. But that comparison is based on the team’s own, self-created baseline.
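To make that comparison concrete, here is a toy, self-contained version of such a baseline check. All of the numbers are synthetic; nothing below comes from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
true_embed = rng.normal(size=(100, 15))                         # "true" hyperparameter embeddings
predicted = true_embed + rng.normal(scale=0.5, size=(100, 15))  # stand-in for model output
random_vecs = rng.normal(size=(100, 15))                        # random vectors, same length/distribution

def mean_corr(a, b):
    """Average per-sample Pearson correlation between two sets of vectors."""
    return float(np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)]))

print(mean_corr(predicted, true_embed))    # well above zero
print(mean_corr(random_vecs, true_embed))  # near zero: the random baseline
```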
So, without further research, the only takeaway is that the model detects AI-made deep fake images and their source better than a straightforward guess.
What could this be used for?
The goal of the project, as presented by the team, is to create a way to trace deep fake images back to their source after identifying them. That could make it easier to enforce misinformation policies and rules, particularly on social media sites, where misinformation still spreads rampantly.