Single-Camera Motion Capture
In carefully controlled shoots, you can do a type of facial motion capture using only a single camera.
You need to have an accurate and complete head model—but one that is purposefully devoid of expression.
You must be able to find enough trackers that are rigidly connected (not animated independently) to get a rigid-body moving object solve for the motion of the overall pre-existing head model.
Add a second, disabled moving object, and place additional trackers on the independently-moving facial features whose expressions you want to capture.
Run the Animate Trackers by Mesh Projection script with the head model selected and the disabled object (and its trackers) active in the viewport; this creates motion-capture 3D tracks for those trackers on the surface of the mesh.
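As a rough illustration of what that projection step computes, the following Python sketch casts a ray from an assumed camera position through a tracker and intersects it with the head mesh using the third-party trimesh library. The camera position, ray direction, and file name are placeholders; this is not the SynthEyes script itself.

    # Conceptual sketch only: find where a tracker "lands" on the head mesh by
    # casting a camera ray and intersecting it with the mesh surface.
    import numpy as np
    import trimesh

    mesh = trimesh.load_mesh("head_model.obj")      # expressionless head model (placeholder name)

    cam_origin = np.array([0.0, 0.0, 5.0])          # assumed camera position in world space
    ray_dir = np.array([0.02, -0.01, -1.0])         # assumed direction through the tracker's 2D position
    ray_dir = ray_dir / np.linalg.norm(ray_dir)

    # Intersect the ray with the mesh; the hit nearest the camera is the
    # tracker's 3D position on the head surface for this frame.
    locations, _, _ = mesh.ray.intersects_location(
        ray_origins=[cam_origin], ray_directions=[ray_dir]
    )
    if len(locations):
        nearest = locations[np.argmin(np.linalg.norm(locations - cam_origin, axis=1))]
        print("Tracker lands on mesh at", nearest)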
Method #1: Create a new triangulated mesh from the motion-capture trackers, for example using Assemble mode or Convert to Mesh and Triangulate (a conceptual sketch follows Method #2).
Method #2: Link the trackers to specific vertices that you want animated on the reference mesh. Any vertex that is not linked will not move!
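To illustrate what Method #1 produces, the sketch below triangulates a handful of made-up facial tracker positions using SciPy's Delaunay triangulation. Inside SynthEyes this is handled by Assemble mode or Convert to Mesh and Triangulate, so the code is conceptual only.

    # Conceptual sketch of Method #1: build a triangle mesh over scattered
    # facial trackers. Tracker positions are illustrative, not real data.
    import numpy as np
    from scipy.spatial import Delaunay

    # Rest-pose 3D positions of the facial motion-capture trackers.
    trackers = np.array([
        [ 0.0, 1.2, 0.90],   # brow
        [-0.4, 1.0, 0.80],   # left eye corner
        [ 0.4, 1.0, 0.80],   # right eye corner
        [ 0.0, 0.6, 1.00],   # nose tip
        [-0.3, 0.3, 0.85],   # left mouth corner
        [ 0.3, 0.3, 0.85],   # right mouth corner
        [ 0.0, 0.1, 0.90],   # chin
    ])

    # Triangulate in the face's 2D plane (x, y); the same triangles can then
    # index the 3D tracker positions to form the facial mesh.
    tri = Delaunay(trackers[:, :2])
    print("Triangles (tracker indices):\n", tri.simplices)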
Export the mesh as an OBJ file.
Export the mesh animation using the MDD Mesh Animation script.
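If it helps to know what that export contains: an MDD file is a simple point cache with a big-endian header (frame count and vertex count), one time value per frame, and then every vertex position for every frame. The sketch below writes made-up data in that layout; the real file of course comes from the MDD Mesh Animation script.

    # Minimal sketch of the MDD point-cache layout. The animation data here is
    # random and purely illustrative.
    import struct
    import numpy as np

    fps = 24.0
    frames = 3
    verts = 4
    # positions[frame, vertex] = (x, y, z), stored as big-endian float32.
    positions = np.random.rand(frames, verts, 3).astype(">f4")

    with open("face_capture.mdd", "wb") as f:
        f.write(struct.pack(">ii", frames, verts))                # header: frame and vertex counts
        f.write(struct.pack(">%df" % frames,
                            *[i / fps for i in range(frames)]))   # one time (seconds) per frame
        f.write(positions.tobytes())                              # per-frame vertex positions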
In your target animation package, import the OBJ file, and apply the MDD file as an animated deformation.
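The exact steps depend on the package. Assuming Blender as the target, for example, the import and deformation could look like the following; the file paths are placeholders.

    # Example target package: Blender. Import the OBJ exported from SynthEyes
    # and apply the MDD as an animated deformation via a Mesh Cache modifier.
    # Run inside Blender's Python environment.
    import bpy

    # Blender 3.2+ OBJ importer; older versions use bpy.ops.import_scene.obj.
    bpy.ops.wm.obj_import(filepath="/path/to/face_mesh.obj")
    obj = bpy.context.selected_objects[0]

    mod = obj.modifiers.new(name="FaceCapture", type='MESH_CACHE')
    mod.cache_format = 'MDD'
    mod.filepath = "/path/to/face_capture.mdd"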
Note that it is very important to re-export the mesh if you use Method #2: you must always apply the MDD deformation to exactly the mesh that was exported from SynthEyes, since otherwise the vertex numbering will not match. SynthEyes renumbers and adjusts vertices as needed when it reads OBJs, to match its internal vertex processing pipeline, especially if there is normal or texture coordinate data.
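As a quick sanity check before applying the deformation, you can compare the vertex counts of the two files; a matching count is necessary but not sufficient, since the vertex ordering must match as well. The file names below are the placeholders used in the earlier sketches.

    # Check that the MDD's vertex count matches the OBJ exported from SynthEyes.
    import struct

    def obj_vertex_count(path):
        # One "v " line per vertex in an OBJ file.
        with open(path) as f:
            return sum(1 for line in f if line.startswith("v "))

    def mdd_vertex_count(path):
        # MDD header: big-endian int32 frame count, int32 vertex count.
        with open(path, "rb") as f:
            _, verts = struct.unpack(">ii", f.read(8))
        return verts

    assert obj_vertex_count("face_mesh.obj") == mdd_vertex_count("face_capture.mdd"), \
        "MDD does not match this OBJ; re-export the mesh from SynthEyes"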
You may also find it helpful to export the 3D path of a tracker by itself, using the Tracker 3D on Mesh exporter. You can use that data to drive bones or other rigging if desired.
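The exact output format depends on the export target, so the sketch below simply assumes the tracker's per-frame positions have already been read into (frame, x, y, z) tuples, and keyframes a Blender empty with them; the sample data is made up.

    # Illustrative only: drive a Blender object (for example an empty that a
    # bone constraint targets) with a tracker's per-frame 3D path.
    import bpy

    tracker_path = [  # (frame, x, y, z) sample data
        (1, 0.00, 0.60, 1.00),
        (2, 0.01, 0.59, 1.01),
        (3, 0.02, 0.57, 1.02),
    ]

    target = bpy.data.objects.new("JawTarget", None)   # a new empty object
    bpy.context.collection.objects.link(target)

    for frame, x, y, z in tracker_path:
        target.location = (x, y, z)
        target.keyframe_insert(data_path="location", frame=frame)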