Create a DeepFake Motion model for image animation Using Python

Hey guys! In this article, we are going to see how to create a deepfake video using Python. Deepfaking is an interesting topic to explore; in my view, it is still in the early stages of its development.

df4.gif

We are going to use the "First Order Motion Model for Image Animation" proposed by Aliaksandr Siarohin et al.

As a basic requirement, I expect you to know Python and modules like NumPy, Matplotlib, and scikit-image.

Before getting into coding, we have to launch an environment; I use Google Colab for its simple setup. Enough reading,

let's code...

Your first step is to clone the repository below into your workspace. Copy the code and run it in your cell.

!git clone https://github.com/Adithya-jh/first-order-model.git

Run the following commands in the cell to install PyYAML v5.3.1, move into the cloned repository, and mount your Google Drive.

!pip install pyYAML==5.3.1
cd first-order-model
from google.colab import drive
drive.mount('/content/gdrive')

A tab will open asking for permission to access files from your Google Drive. Click "Allow" and continue the process.

Our next step is to import the required packages and modules and set them up.

import imageio
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from skimage.transform import resize
from IPython.display import HTML
import warnings

warnings.filterwarnings("ignore")

Now we have to load the source image and the driving video to get the deepfake done. Store an image of one person, and a video of another person that the image will be mapped to, either in the runtime's file system or in your Google Drive.

source_image = imageio.imread("/content/gdrive/MyDrive/01.png")
reader = imageio.get_reader('/content/gdrive/MyDrive/Colab Notebooks/00.mp4')

# Resize the image and the video frames to 256x256

source_image = resize(source_image, (256, 256))[..., :3]

fps = reader.get_meta_data()['fps']
driving_video = []
try:
    for im in reader:
        driving_video.append(im)
except RuntimeError:
    # imageio can raise RuntimeError on a truncated or corrupt final frame;
    # keep whatever frames were read successfully
    pass
reader.close()

driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]
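One detail worth noting: `resize` returns floating-point arrays with values in [0, 1], and the `[..., :3]` slice drops a possible alpha channel (PNGs are often RGBA), so every frame ends up strictly RGB. A NumPy-only sketch, with a synthetic array standing in for a real image:

```python
import numpy as np

# synthetic RGBA "image" (height x width x 4 channels), values in [0, 1]
rgba = np.random.rand(300, 400, 4)

# the [..., :3] slice keeps the R, G, B channels and drops alpha
rgb = rgba[..., :3]
print(rgb.shape)  # (300, 400, 3)
```

Without this slice, an RGBA source image would have a different channel count than the video frames and the later concatenation and model calls would fail.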

def display(source, driving, generated=None):
    # widen the figure when a third (generated) column is shown
    fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6))

    ims = []
    for i in range(len(driving)):
        # build one side-by-side frame: source | driving | (generated)
        cols = [source]
        cols.append(driving[i])
        if generated is not None:
            cols.append(generated[i])
        im = plt.imshow(np.concatenate(cols, axis=1), animated=True)
        plt.axis('off')
        ims.append([im])

    ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000)
    plt.close()
    return ani


HTML(display(source_image, driving_video).to_html5_video())

You will get your given image and video displayed side by side, like this...

df1.gif

After you get the output, load the pretrained checkpoint. The vox-adv-cpk.pth.tar file can be downloaded via the links in the repository's README; place it in your Drive, then run the following code in the cell.

from demo import load_checkpoints
generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml', 
                            checkpoint_path='/content/gdrive/MyDrive/Colab Notebooks/vox-adv-cpk.pth.tar')

Time to make some animation by mapping...

from demo import make_animation
from skimage import img_as_ubyte

predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True)

# save the resulting video
imageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions], fps=fps)
# the video can be downloaded from the /content folder

HTML(display(source_image, driving_video, predictions).to_html5_video())
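Video writers expect 8-bit frames, which is why each float frame (values in [0, 1]) goes through `img_as_ubyte` before saving. A NumPy-only sketch of roughly the same conversion, using a tiny made-up frame:

```python
import numpy as np

frame = np.array([[0.0, 0.5, 1.0]])             # float "frame", values in [0, 1]
ubyte = (frame * 255).round().astype(np.uint8)  # scale to [0, 255] and round
print(ubyte.dtype, ubyte.tolist())  # uint8 [[0, 128, 255]]
```

`img_as_ubyte` additionally validates the input range, so it is the safer choice in the real pipeline.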

You will get the deepfaked output as

df2.gif

To get better results, we pass two extra arguments to make_animation: relative=False and adapt_movement_scale=True.

predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=False, adapt_movement_scale=True)
HTML(display(source_image, driving_video, predictions).to_html5_video())

df3.gif

That's it, you have created your own deepfake! Cheers 🥂.

This is just a simple application of the model; the creative implementations are limitless. So try it out yourself and come up with new ideas and implementations.

Thank you for reading 😊.

HAPPY LEARNING

-JHA