Your First Character

A quickstart guide for your first LipSync Pro character.

1. Prerequisites

Before starting setup, you should ensure your character is supported by LipSync Pro. Custom Blend Systems make it technically possible to support any kind of character, but the following Blend Systems are provided out of the box:

Blend System                 Enables Support For
Blend Shape Blend System     Adobe/Mixamo Fuse characters; Reallusion Character Creator 3 characters; custom-made characters with facial blend shapes
Bones Only Blend System      MakeHuman characters; custom-made characters with a bone-based facial rig
Sprite Blend System          2D sprite-based characters with multiple sprites for mouth, eyes, etc.
Texture Offset Blend System  2D mesh-based characters with multiple textures; 3D characters with separate 2D mouths
UMA2 Blend System            Unity Multipurpose Avatar (UMA) 2 characters

If using a fully custom-made character, or one downloaded from the Asset Store or another digital marketplace, you must ensure it contains either enough blend shapes or a face rig capable of creating the required poses.

2. Add Components

Once you have your character in Unity, create an instance of it in a scene. You will probably want to create a prefab for future use. First, add the LipSync component to your character. You can find it in the Add Component menu under Rogo Digital/LipSync Pro.

You will also need an AudioSource for your dialogue to play through. This can be added to the same root GameObject if you wish or, for more accuracy, to a new GameObject parented to the character's head bone and positioned in the character's mouth.
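If you prefer to do this setup from code (in a spawning system or editor utility, for example), the same two components can be added with AddComponent. The sketch below is a minimal illustration, not the asset's own setup code; it assumes the LipSync component lives in the RogoDigital.Lipsync namespace, so check your installed version if the compiler disagrees:

```csharp
using UnityEngine;
using RogoDigital.Lipsync; // namespace assumed; verify against your installed version

public class CharacterSetup : MonoBehaviour
{
    // Assign the character's head bone in the inspector.
    public Transform headBone;

    void Awake()
    {
        // Add the LipSync component to the character's root GameObject,
        // mirroring Add Component > Rogo Digital > LipSync Pro in the editor.
        LipSync lipSync = gameObject.AddComponent<LipSync>();

        // Create a dedicated AudioSource parented to the head bone so
        // dialogue audio is positioned at the character's mouth.
        GameObject mouthAudio = new GameObject("MouthAudioSource");
        mouthAudio.transform.SetParent(headBone, false);
        AudioSource source = mouthAudio.AddComponent<AudioSource>();
    }
}
```

Doing this in the editor and saving the result as a prefab is usually simpler; the script route mainly matters if characters are instantiated at runtime.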

3. Set Up a Blend System

After adding the LipSync component, you will see this inspector:

A Blend System is required in order for LipSync to interact with your character. Use the table in step 1 above to determine which Blend System is appropriate for your character. The dropdown will show all the Blend Systems currently in the project, though there may be additional ones available for download from the Extensions Window. Choose one from the dropdown to use it.

Depending on the Blend System you've selected, you may see some options followed by a Continue button.

This shows any settings that must be completed before you can begin using the LipSync component. In the case of the Blend Shape Blend System, at least one Skinned Mesh Renderer needs to be added, from which the Blend System will get its blend shapes. Once this setup has been done, click the Continue button.

Many Blend Systems do not require initial options to be set here, but may add a Manager Component that will need attention. Check the documentation for the specific Blend System you're using for more information.

If you're using the Bones Only Blend System, be sure to enable the Use Bones checkbox at the top of the LipSync inspector; without it, the component will not be able to pose your character.

4. Create Poses

There may be a Preset available that suits your model. This will let you skip the pose creation stage. Check the Presets page for more information.

The core of LipSync's animation is the Phoneme Pose. With the basics of the LipSync component set up, you should now see a list of Phonemes. Clicking one of these will expand the Pose Editor out, allowing you to customise each pose one by one.

The standard phoneme list with the AI phoneme open

The exact buttons you see will depend on the Blend System you selected and whether Use Bones is enabled, but the basic process remains the same: click the Add button to add a new row to the pose, select the blend shape (or equivalent) you want to use, and then choose its value using the slider. You can use this system to precisely fine-tune your poses for each phoneme. To help you out, the current pose is displayed on the character as you edit, and a guide graphic is displayed in the lower-right corner of the Scene View. More detailed information on setting up poses can be found on the Poses page.

A phoneme pose with one blend shape and a single bone transform added.

The same system is used for defining Emotion Poses. These can be found on the Emotions tab, and work in the same way.

5. Final Options

Finally, before the character can be used for playback, the audio source we created earlier needs to be assigned. Use the Audio Source field at the top of the component to pick the correct AudioSource component and it will automatically be used when a LipSyncData clip is played.
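If you are wiring characters up from script rather than through the inspector, the same assignment can be made directly. This sketch assumes the component exposes a public audioSource field, which is an assumption about the API rather than something stated in this guide; confirm the field name against your version:

```csharp
using UnityEngine;
using RogoDigital.Lipsync; // namespace assumed

public class AssignAudioSource : MonoBehaviour
{
    // The AudioSource created in step 2, e.g. the one parented to the head bone.
    public AudioSource mouthSource;

    void Awake()
    {
        LipSync lipSync = GetComponent<LipSync>();
        // Field name assumed; equivalent to setting the
        // Audio Source field at the top of the component.
        lipSync.audioSource = mouthSource;
    }
}
```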

At the bottom of the component you'll find several other options that can be adjusted to get the results you want. We can leave these as they are for now; see the LipSync Editor page for details on what each one does.

That's it!

You can use the Play On Awake setting and one of the included example LipSyncData clips if you want to test out your new character.
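As an alternative to Play On Awake, playback can also be triggered from script. This sketch assumes a Play(LipSyncData) method on the component, which matches LipSync Pro's documented playback API as far as I know; the testClip field here is purely illustrative:

```csharp
using UnityEngine;
using RogoDigital.Lipsync; // namespace assumed

public class PlayTestClip : MonoBehaviour
{
    // Assign one of the included example LipSyncData clips in the inspector.
    public LipSyncData testClip;

    void Start()
    {
        // Plays the clip's audio through the assigned AudioSource
        // and animates the phoneme poses in sync with it.
        GetComponent<LipSync>().Play(testClip);
    }
}
```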