- Samsung AI researchers have found a way to apply so-called “deepfake” technology to classic portraits, bringing them to life.
- Making deepfake videos normally requires lots of data, ideally in the form of video clips.
- The researchers developed a way to generate convincing video from just a handful of still images.
AI researchers at Samsung have developed a way to animate classic portraits, bringing them to life.
In a new paper, researchers from Samsung’s AI center in Moscow, along with the Skolkovo Institute of Science and Technology, detail how they built a system for modelling human faces using as few images as possible.
The findings build on existing so-called “deepfake” techniques, which are used to graft one person’s face onto a video of someone else.
Deepfake software usually relies on capturing lots of data about a person’s face, usually using clips of video. The new technique focuses on using just a few photographs, or even just one still image.
The researchers achieved this by feeding the software a huge dataset of celebrity talking-head videos from YouTube, training it to pinpoint significant “landmarks” on human faces.
This works together with a Generative Adversarial Network (GAN), which pits two algorithms against each other – one generating fake images, and the other trying to catch it out by spotting which images are fake.
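The adversarial game described above can be sketched in miniature. The following toy example is an illustrative assumption, not the researchers' actual model: a one-parameter "generator" g(z) = w·z + b tries to mimic "real" samples drawn from a normal distribution centred at 4, while a logistic "discriminator" tries to tell real samples from generated ones. All function names and hyperparameters here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=5000, lr=0.01, batch=32, seed=0):
    """Toy 1-D GAN: generator g(z) = w*z + b vs. discriminator
    d(x) = sigmoid(dw*x + db). Purely illustrative."""
    rng = np.random.default_rng(seed)
    w, b = rng.normal(), rng.normal()      # generator parameters
    dw, db = rng.normal(), rng.normal()    # discriminator parameters
    for _ in range(steps):
        z = rng.normal(size=batch)                # noise fed to the generator
        real = rng.normal(4.0, 1.0, size=batch)   # "real" data: N(4, 1)
        fake = w * z + b                          # generated samples

        # Discriminator step: ascend log d(real) + log(1 - d(fake)),
        # i.e. learn to label real samples 1 and fakes 0.
        p_real = sigmoid(dw * real + db)
        p_fake = sigmoid(dw * fake + db)
        dw += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
        db += lr * (np.mean(1 - p_real) - np.mean(p_fake))

        # Generator step: ascend log d(fake) -- adjust w, b so the
        # discriminator mistakes generated samples for real ones.
        p_fake = sigmoid(dw * fake + db)
        w += lr * np.mean((1 - p_fake) * dw * z)
        b += lr * np.mean((1 - p_fake) * dw)
    return w, b

w, b = train_toy_gan()
# After training, generated samples should centre roughly on the real mean.
print(f"generator offset b = {b:.2f}")
```

The same tug-of-war, scaled up to deep convolutional networks producing face images instead of single numbers, is what lets a GAN-based system produce frames realistic enough to fool its own critic.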
Here it is when applied to photos of Marilyn Monroe and Salvador Dalí:
The technology is adept enough at identifying human faces that it can even be applied to old paintings, such as the Mona Lisa:
The researchers note that using just one picture makes the software less effective: you can see that the animated versions each take on a distinct “personality” derived from the person whose movements they are modelled on.