Elize.AI News

GPT-3-based Elon Musk conversational AI, animated by Elize.AI

Pandorabots asked GPT-3 to write Elon Musk's opening monologue for SNL, and then Kuki and GPT-Elon sat down for a chat.
Elize.AI delivered the audio-driven character animation.

See it for yourself on YouTube: https://www.youtube.com/watch?v=T8YvdOenQ3A

GPT-Elon + Kuki AI Explained

A Saturday Night Live parody monologue in the style of Elon Musk, generated by ICONIQ using GPT-3, followed by a chat between two embodied AI bots (Kuki and GPT-Elon) created by Elize.AI and ICONIQ, the company behind Kuki. GPT-Elon’s responses were generated by calling OpenAI’s GPT-3 API and telling it to “reply like Elon Musk.” (Both the GPT-Elon SNL monologue transcript here and the full AI conversation transcript here are unedited, save for length.)

Why do this?

For the dank memes

To illustrate the current state of the art in conversational AI & CG avatar animation

How does this work?

GPT-Elon’s brain: is powered by OpenAI’s GPT-3, a massive deep-learning language model that learns to mimic particular styles from reading the internet
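
For a sense of how little glue is involved, here is a minimal Python sketch of generating in-character replies through OpenAI’s 2021-era completions API. The engine choice, prompt wording, and sampling parameters are illustrative assumptions; all ICONIQ has said is that GPT-3 was told to “reply like Elon Musk.”

```python
# Minimal sketch of prompting GPT-3 for in-character replies via
# OpenAI's (legacy, 2021-era) completions API. The exact prompt,
# engine, and sampling parameters used for the demo are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

def gpt_elon_reply(user_message: str) -> str:
    # Prepend an instruction so the model stays in character, then
    # stop generation before the model starts writing the user's turn.
    prompt = (
        "Reply like Elon Musk.\n\n"
        f"User: {user_message}\n"
        "Elon:"
    )
    response = openai.Completion.create(
        engine="davinci",       # GPT-3 base engine available in 2021
        prompt=prompt,
        max_tokens=60,
        temperature=0.8,        # some randomness keeps replies lively
        stop=["User:", "\n\n"],
    )
    return response.choices[0].text.strip()

print(gpt_elon_reply("What do you think of Mars?"))
```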

Kuki AI’s brain: is powered by Pandorabots spinout ICONIQ; trained over a decade on 1B+ chats using a hybrid, rules-first approach to ensure coherent, nontoxic replies
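
Kuki’s actual engine is built on Pandorabots’ AIML rules and is far larger, but the general shape of a rules-first hybrid can be sketched in a few lines: curated, human-reviewed rules get first crack at the input, and a generative model is only the fallback. The rules and helper names below are hypothetical.

```python
# Generic illustration of a "rules-first" hybrid: try curated,
# hand-written rules before falling back to a generative model.
# Kuki's real engine is AIML-based and far larger; everything here
# is hypothetical.
import re

RULES = [
    # (pattern, canned reply) pairs reviewed by humans, so these
    # responses are guaranteed to be on-topic and nontoxic.
    (re.compile(r"\bwhat('?s| is) your name\b", re.I), "I'm Kuki!"),
    (re.compile(r"\bhow old are you\b", re.I), "Forever 18."),
]

def reply(user_message: str, generative_fallback) -> str:
    for pattern, canned in RULES:
        if pattern.search(user_message):
            return canned  # a safe, scripted answer always wins
    # No rule matched: defer to a generative model (e.g. GPT-3),
    # ideally behind its own safety filter.
    return generative_fallback(user_message)
```

The ordering is the point: a vetted, scripted answer always preempts the less predictable generative one, which is how a rules-first design keeps replies sensible and nontoxic.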

Kuki & Elon’s Avatars: were made using MetaHuman Creator, a new tool from Epic Games that lets anyone make photoreal avatars for Unreal Engine very quickly, for free

Kuki & Elon’s Speech: comes from Amazon Polly’s synthetic speech APIs (note there were no South African voice options available, so Welsh was the closest we could get!)
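
Polly synthesis is a single call via boto3. The sketch below uses “Geraint,” Polly’s Welsh English (en-GB-WLS) voice, consistent with the note above; which voices the demo actually used, and the sample line, are our assumptions.

```python
# Sketch of synthesizing a line of dialogue with Amazon Polly via
# boto3. "Geraint" is Polly's Welsh English (en-GB-WLS) voice; the
# demo's actual voice choices are an assumption here.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Funding secured.",
    VoiceId="Geraint",    # Welsh English male voice
    OutputFormat="mp3",
)

# The audio comes back as a streaming body; save it to disk.
with open("gpt_elon_line.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```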

Kuki & Elon’s Movements: are driven by Elize.AI, which uses machine learning to provide automatic, real-time character animation, lip sync, and behavior
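
Elize.AI’s pipeline itself is proprietary, but the core intuition behind audio-driven animation can be shown with a toy example: estimate the loudness of each animation frame’s worth of audio and map it to mouth openness. Real lip sync uses learned models to predict visemes and facial poses; the frame rate and normalization constant below are purely illustrative.

```python
# Toy illustration of the simplest form of audio-driven animation:
# map the loudness (RMS) of each audio frame to a mouth-openness
# value. Elize.AI's real system uses learned models for full lip
# sync and behavior; this only shows the underlying intuition.
import array
import math
import wave

FPS = 30  # animation frames per second (illustrative choice)

def mouth_openness(path: str) -> list[float]:
    """Return one openness value in [0, 1] per animation frame."""
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        samples_per_frame = wav.getframerate() // FPS
        openness = []
        while True:
            chunk = wav.readframes(samples_per_frame)
            if not chunk:
                break
            samples = array.array("h", chunk)  # 16-bit signed samples
            rms = math.sqrt(sum(s * s for s in samples) / len(samples))
            openness.append(min(1.0, rms / 10000.0))  # crude normalization
        return openness
```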

Why does this matter?

Conversational AI is getting pretty good, although GPT-Elon in particular illustrates how much dialog is still plausible-sounding nonsense when a model is trained (like GPT-3) on the entire internet. Humanity may be only a few years away from AI becoming capable of passing as human on a video call via a deepfake or a photoreal avatar. What happens next is up to us: the design decisions we make and the rules we choose to follow today will hugely shape our future.