Live portraits: Bringing back the dead digitally

Image credit: iStock

Computer software is re-animating still portraits to bring them to life, including photos of the deceased as a memento for the living.
    • Author: Quantumrun Foresight
    • December 27, 2022

    Insight summary

    Artificial intelligence and machine learning (AI/ML) have pushed the boundaries of what is possible, including bringing back the dead. Deepfake technology, a form of deep learning (DL), is now being used to animate historical paintings and portraits of the deceased, blurring the lines between entertaining and disturbing content. The long-term implications of these "live" portraits could include funeral homes integrating them into their services and museums using the technology to "reanimate" historical figures.

    Live portraits context

    One of the earliest high-profile live portraits was of Leonardo da Vinci’s Mona Lisa, which went viral on the internet. The animated picture was created using deepfake technology developed by Samsung’s AI center in Moscow and the Skolkovo Institute of Science and Technology. The researchers used varying amounts of reference material, from multiple frames down to a single image.

    With this method, they were able to recreate famous expressions from paintings like Johannes Vermeer’s Girl with a Pearl Earring. These live portraits are made possible by a DL architecture called a convolutional neural network (CNN), which is trained to analyze reference images. The program then takes the facial movements from a series of frames and applies them to a static image, like the Mona Lisa. The more angles the reference images cover, the more realistic the portrait becomes. 
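The core idea described above, lifting facial motion from a sequence of driving frames and replaying it on a static image, can be sketched at the keypoint level. The snippet below is a deliberately simplified illustration, not the researchers' actual method: it assumes facial keypoints have already been detected, and it transfers raw keypoint displacements, whereas the real systems use learned generators to synthesize photorealistic frames.

```python
import numpy as np

def motion_transfer(source_kp, driving_first_kp, driving_kp):
    """Apply the motion between two driving frames' keypoints to the source.

    Simplification for illustration: real models learn this mapping; here we
    just add each driving keypoint's displacement to the source keypoint.
    """
    return source_kp + (driving_kp - driving_first_kp)

def animate(source_kp, driving_sequence):
    """Produce one keypoint set per driving frame, relative to the first frame."""
    first = driving_sequence[0]
    return [motion_transfer(source_kp, first, kp) for kp in driving_sequence]
```

In a full pipeline, each keypoint set would then condition an image generator that warps and repaints the static portrait; here the output is just the moved keypoints.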

    While the animated Mona Lisa is entertaining, the increasing popularity of deepfakes has prompted fears that computer-generated representations may be utilized to defame individuals, stoke racial and political divisions, and damage trust in online media. According to the think tank Brookings Institution, manipulated content erodes belief in all videos, both real and fake. Because people can no longer determine what is genuine, truth itself becomes unstable. The good news is that while deepfakes are made using AI, the same technology can also be used to identify manipulated videos by searching for anomalies that go unnoticed by the human eye. However, experts warn that deepfake technology will soon be cheap and accessible enough for anyone to take advantage of for numerous applications.
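The anomaly-hunting approach mentioned above can be illustrated with a toy temporal-consistency check: genuine video tends to change smoothly from frame to frame, so a transition whose magnitude of change is a statistical outlier is worth flagging. This is a minimal sketch of the general idea, not any production detector's algorithm; real tools analyze far subtler cues.

```python
import numpy as np

def temporal_anomaly_scores(frames):
    """Z-score each frame-to-frame change against the clip's typical change."""
    diffs = np.array([np.abs(b - a).mean() for a, b in zip(frames, frames[1:])])
    return (diffs - diffs.mean()) / (diffs.std() + 1e-9)

def flag_transitions(frames, threshold=1.5):
    """Return indices i where the change from frame i to i+1 is anomalously large."""
    scores = temporal_anomaly_scores(frames)
    return [i for i, s in enumerate(scores) if s > threshold]
```

A spliced or poorly blended deepfake frame produces two outlier transitions (into and out of the frame), which this check flags, while a smoothly varying clip passes clean.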

    Disruptive impact

    In 2021, the genealogy website MyHeritage introduced Deep Nostalgia, which animates photographs of deceased relatives so that they appear to nod, smile, or dance. In 2022, the company added a feature called LiveStory, which animates a photo's face so that the subject appears to recount their life story. The feature can be accessed on the MyHeritage website or app once an account has been opened.

    Users upload pictures and text, then choose the language in which the AI animation reads the text aloud; there are 31 languages, over a dozen dialects, and 152 synthetic voices to choose from. The software was developed with Israel-based “creative reality” startup D-ID, which applies AI and DL to create these video recreations. While people were initially “creeped out” by LiveStory, D-ID CEO Gil Perry is not too concerned about potential backlash.

    D-ID is becoming a major player in the live portrait space. In 2021, the firm announced its partnership with the virtual and augmented reality (VR/AR) platform The Glimpse Group. The collaboration focuses on AR/VR/AI products that will make the Metaverse more interactive. The Metaverse is a quickly developing space encompassing blockchain, AI, AR, VR, social media, and online gaming. It promises to enrich the online experience by adding new dimensions and making it simpler to access.

    Even though some of the biggest tech companies are already trying to colonize the Metaverse, there is still a lot of room for development and new technology. The partners are already working on a proof of concept (PoC) to test how their technologies can merge seamlessly. The PoC lets users instantly animate still images on their phones using D-ID’s technology and solutions from Glimpse subsidiary PostReality.

    Implications of live portraits

    Wider implications of live portraits may include: 

    • Live portraits being played during funerals to commemorate the deceased, becoming a new product-as-a-service market.
    • Museums using live portraits to make famous paintings more realistic and interactive, leading to exclusive experiences and premium prices.
    • Increased use of portrait memes for entertainment and propaganda, resulting in more disinformation and manufactured scandals.
    • Advertisers creating live portraits for hyper-realistic advertisements in city centers, merging marketing with entertainment.
    • More startups in the creative reality space partnering with different brands and organizations.
    • Live portraits in education allowing historical figures to "speak" in classrooms, fostering a deeper engagement in learning.
    • Retail industries incorporating live portraits for personalized virtual try-ons, significantly enhancing the customer shopping experience and reducing return rates.
    • Public spaces adopting live portrait displays for real-time environmental awareness campaigns, directly influencing public behavior towards sustainability.

    Questions to consider

    • If you have seen a live portrait, how did it make you feel?
    • How else can live portraits change the way people interact with historical images?
