### Overview
The library provides some generic face models that were trained on the MUCT database and some additional self-annotated images. Check out clmtools for building your own models.
For tracking in video, it is recommended to use a browser with WebGL support, though the library should work on any modern browser.
For more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.
### Examples
- Tracking in image
- Tracking in video
- Face substitution
- Face masking
- Realtime face deformation
- Emotion detection
### Usage

Download the minified library clmtrackr.js and include it in your webpage:
```html
<!-- clmtrackr library -->
<script src="js/clmtrackr.js"></script>
```
The following code initializes clmtrackr with the default model (see the reference for alternative models) and starts the tracker running on a video element.
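A minimal sketch of that initialization is shown below. It assumes a `<video id="video">` element on the page (the element id is illustrative); depending on the library version, `init()` may also accept a model object as an argument.

```javascript
// Sketch: start clmtrackr on a video element using the default face model.
var videoInput = document.getElementById('video');

var ctracker = new clm.tracker();
ctracker.init();            // initialize with the default model
ctracker.start(videoInput); // begin tracking faces in the video stream
```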
You can now get the positions of the tracked facial features as an array via `getCurrentPosition()`:
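For example (a sketch, assuming `ctracker` was started as above; `getCurrentPosition()` returns `false` while no face is being tracked):

```javascript
var positions = ctracker.getCurrentPosition();
if (positions) {
  // positions is an array of [x, y] coordinates, one per tracked point
  console.log(positions);
}
```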
You can also use the built-in function `draw()` to draw the tracked facial model on a canvas:
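A sketch of a typical draw loop, assuming a `<canvas id="overlay">` element (the id is illustrative) positioned over the video:

```javascript
var overlay = document.getElementById('overlay');
var overlayContext = overlay.getContext('2d');

function drawLoop() {
  requestAnimationFrame(drawLoop);
  // clear the previous frame, then draw the currently tracked model
  overlayContext.clearRect(0, 0, overlay.width, overlay.height);
  ctracker.draw(overlay);
}
drawLoop();
```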
See the complete example here.
### Development

First, install node.js with npm.
In the root directory of clmtrackr, run `npm install`, then run `npm run build`. This will create the built version of the library, clmtrackr.js.
To test the examples locally, you need to run a local server. One easy way to do this is to install `http-server`, a small node.js utility: `npm install -g http-server`. Then run `http-server` in the root of clmtrackr and go to `http://localhost:8080/examples` in your browser.