Before starting setup, you should ensure your character is supported by LipSync Pro. Technically, any kind of character can be supported through a custom Blend System, but the following are supported out of the box:
Blend Shape Blend System: Adobe/Mixamo Fuse characters; Reallusion Character Creator 3 characters; custom-made characters with facial blend shapes.
Bones Only Blend System: custom-made characters with a bone-based facial rig.
Sprite Blend System: 2D sprite-based characters with multiple sprites for mouth, eyes, etc.
Texture Offset Blend System: 2D mesh-based characters with multiple textures; 3D characters with separate 2D mouths.
UMA2 Blend System: Unity Multipurpose Avatar (UMA) 2 characters.
Once you have your character in Unity, create an instance of it in a scene (you will probably want to make it a prefab for future use). Then add the LipSync component to your character; you can find it in the Add Component menu under Rogo Digital/LipSync Pro.
You will also need an AudioSource for your dialogue to play through. This can be added to the same root GameObject if you wish or, for more accuracy, to a new GameObject parented to the character's head bone and positioned in the character's mouth.
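The two steps above can also be sketched in code. This is only an illustrative sketch: the RogoDigital.Lipsync namespace and LipSync type come from LipSync Pro, the head-bone path is a made-up example for a typical humanoid rig, and the public audioSource field is assumed from the component's inspector; adapt the names to your own character.

```
using UnityEngine;
using RogoDigital.Lipsync; // LipSync Pro's runtime namespace

public class CharacterSetupExample : MonoBehaviour
{
    void Awake()
    {
        // Step 2: add the LipSync component to the character's root.
        LipSync lipSync = gameObject.AddComponent<LipSync>();

        // Create a dedicated AudioSource parented to the head bone and
        // positioned at the mouth, for more accurate 3D audio.
        // "Hips/Spine/Chest/Neck/Head" is a hypothetical rig path.
        Transform head = transform.Find("Hips/Spine/Chest/Neck/Head");
        GameObject mouthAudio = new GameObject("Mouth Audio");
        mouthAudio.transform.SetParent(head != null ? head : transform, false);

        // Assumed field: LipSync exposes the AudioSource it plays through.
        lipSync.audioSource = mouthAudio.AddComponent<AudioSource>();
    }
}
```

In practice you will usually do both steps in the editor as described above; a script like this is only useful when characters are spawned at runtime.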
After adding the LipSync component, its inspector will appear.
LipSync requires a Blend System in order to interact with your character. Use the table in step 1 above to determine which Blend System is appropriate for your character. The dropdown lists all the Blend Systems currently in the project; additional ones may be available for download from the Extensions Window. Choose one from the dropdown to use it.
Depending on the Blend System you've selected, you may see some options followed by a Continue button.
This shows any settings that must be configured before you can begin using the LipSync component. In the case of the Blend Shape Blend System, at least one Skinned Mesh Renderer must be added; the Blend System reads its blend shapes from these renderers. Once this setup is done, click the Continue button.
The core of LipSync's animation is the Phoneme Pose. With the basics of the LipSync component set up, you should now see a list of Phonemes. Clicking one of these will expand the Pose Editor out, allowing you to customise each pose one by one.
The exact buttons you see will depend on the Blend System you selected and whether Use Bones is enabled, but the basic process remains the same: click the Add button to add a new row to the pose, select the blend shape (or equivalent) you want to use, then set its value with the slider. You can use this system to precisely fine-tune your poses for each phoneme. To help you out, the current pose is displayed on the character as you edit, and a guide graphic is displayed in the lower-right corner of the Scene View. More detailed information on setting up poses can be found on the Poses page.
The same system is used for defining Emotion Poses. These can be found on the Emotions tab, and work in the same way.
Finally, before the character can be used for playback, the audio source we created earlier needs to be assigned. Use the Audio Source field at the top of the component to pick the correct AudioSource component and it will automatically be used when a LipSyncData clip is played.
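Once the AudioSource is assigned, playback can be triggered from a script. A minimal sketch, assuming LipSync Pro's Play method accepts a LipSyncData asset (check the asset's scripting reference for the exact signature):

```
using UnityEngine;
using RogoDigital.Lipsync; // LipSync Pro's runtime namespace

public class DialoguePlayer : MonoBehaviour
{
    public LipSync lipSync;    // the LipSync component configured above
    public LipSyncData line;   // a processed LipSyncData clip asset

    void Start()
    {
        // Assumed API: plays the clip's audio through the assigned
        // AudioSource while animating the matching phoneme poses.
        lipSync.Play(line);
    }
}
```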
At the bottom of the component you'll find several other options that can be adjusted to get the results you want. These can be left at their defaults for now; see the LipSync Editor page for details on what each one does.