FlexPoser
Webcam controlled facial animation in Garry's Mod

Frequently Asked Questions

This page provides detailed information and troubleshooting tips.

For server admins:

Does FlexPoser work with Garry's Mod 13?

Yes.

How do I enable FlexPoser on my Garry's Mod server?

Please follow the instructions for server admins on the Download page.

Can players join my FlexPoser enabled server if they haven't downloaded the FlexPoser binary module?

Yes, they can. Players will receive a message in the chatbox upon joining, informing them that FlexPoser is enabled on the server and that they can type !flexposer to open the FlexPoser GUI. The GUI will provide further instructions for downloading the binary module, should they want to join in on the fun.
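
For the curious, the join announcement and chat command could be wired up roughly like the sketch below in Garry's Mod Lua. The hooks used are standard Garry's Mod ones, but OpenFlexPoserGUI is a placeholder, not FlexPoser's actual function.

    -- Server: announce FlexPoser to players when they join.
    if SERVER then
        hook.Add("PlayerInitialSpawn", "FlexPoserAnnounce", function(ply)
            timer.Simple(5, function()  -- short delay so the client is ready to show chat
                if IsValid(ply) then
                    ply:ChatPrint("FlexPoser is enabled on this server. Type !flexposer to open the FlexPoser GUI.")
                end
            end)
        end)
    end

    -- Client: intercept the !flexposer chat command and open the GUI.
    if CLIENT then
        hook.Add("OnPlayerChat", "FlexPoserCommand", function(ply, text)
            if ply == LocalPlayer() and string.lower(string.Trim(text)) == "!flexposer" then
                OpenFlexPoserGUI()  -- placeholder for whatever actually opens the GUI
                return true         -- keep the command out of the chatbox
            end
        end)
    end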

Can players see other players' expressions if they haven't downloaded the FlexPoser binary module?

Yes, they can. Although a player who wants to track and broadcast their own facial expressions must have the binary module installed, any player can see the resulting expressions.

How will FlexPoser impact my server's data traffic?

For every player currently using FlexPoser, the server will receive and broadcast 0.35 KiB/s excluding packet headers, so 8 players using FlexPoser at the same time means the server will receive and broadcast 2.8 KiB/s. This is insignificant compared to the data traffic from regular gameplay.
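
These figures are consistent with the facial parameters described under "For geeks" below: 18 floating-point numbers sent 5 times per second. A rough back-of-the-envelope check, assuming 4 bytes per float and ignoring packet headers:

    -- Rough per-player bandwidth estimate (payload only, no packet headers).
    local floatsPerUpdate  = 18   -- 3 head rotation values + 15 flex weights
    local bytesPerFloat    = 4
    local updatesPerSecond = 5

    local bytesPerSecond = floatsPerUpdate * bytesPerFloat * updatesPerSecond  -- 360 B/s
    local kibPerSecond   = bytesPerSecond / 1024                               -- ~0.35 KiB/s

    print(string.format("Per player: ~%.2f KiB/s, 8 players: ~%.2f KiB/s",
        kibPerSecond, kibPerSecond * 8))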

For players:

How do I start broadcasting my facial expression on a Garry's Mod server with FlexPoser enabled?

You need to plug in a webcam and download the FlexPoser binary module from the Download page, which allows the game client to communicate with your webcam. Then, type !flexposer in-game to bring up the FlexPoser GUI and follow the instructions.

Does using the binary module put me at risk of being VAC-banned?

No. The DLL is set up properly: it is only loaded when required from Lua and only interacts with the game via Lua, so it will not get you VAC-banned.

Do I have to download the binary module to join a server with FlexPoser enabled?

No, you only have to install the binary module if you want to track and broadcast your own facial expressions. Without the module installed, you can still join FlexPoser enabled servers.

Do I have to download the binary module to see the facial expressions of other players?

No, without the module you can still see the facial expressions of other players who do have the module and are currently broadcasting.

Will you make the binary module available for other platforms than Windows?

Not at the moment; there are currently no plans for that.

Troubleshooting:

Why isn't the binary module being detected in-game?

Please make sure you have followed the instructions for players on the Download page correctly. In particular, check that you have placed the binary module in the Garrysmod/garrysmod/lua/bin folder, since there are many other bin folders. The binary module is only targeted at Windows platforms. After a failed attempt at loading the module, check the in-game console for error information.
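
If you are comfortable with the developer console, you can also try requiring the module by hand to see the underlying load error. The sketch below assumes the module is named flexposer; substitute the actual name from the Download page.

    -- Run client-side, e.g. via lua_run_cl in singleplayer.
    -- A client binary module named "flexposer" is expected at
    -- garrysmod/lua/bin/gmcl_flexposer_win32.dll for require() to find it.
    local ok, err = pcall(require, "flexposer")
    if ok then
        print("FlexPoser binary module loaded successfully.")
    else
        print("FlexPoser binary module failed to load: " .. tostring(err))
    end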

I have downloaded the binary module, but my game crashes as soon as I enable FlexPoser!

This is most likely due to missing dependencies for the binary module. Please download and install the Visual C++ Redistributable for Visual Studio 2012. Also make sure you have extracted the file face2.tracker, which is included in the binary module's ZIP package. If you're still experiencing crashes, please post to this topic on the FacePunch forums. I will try to respond as soon as possible.

Why isn't my webcam being detected in-game?

Please check that your webcam is plugged in and enabled in the Windows Device Manager. Make sure that your webcam drivers are up to date: visit your webcam manufacturer's website for the latest drivers. Be aware that when connecting your webcam to FlexPoser in-game, the webcam must not be in use by any other application.

Test whether other applications can access your webcam. If other applications can use your webcam but FlexPoser cannot, your webcam may not be supported by OpenCV (a library included in the binary module that FlexPoser relies on). Unfortunately it is impossible to maintain an up-to-date list of all OpenCV-supported webcams, but an internet search for "Is [my webcam model] supported by OpenCV" should provide useful information. Most modern webcams are supported by OpenCV. If you suspect yours is not, please try a different webcam.

Finally, you might have multiple video recording devices available. FlexPoser uses the first available one. To make sure that your intended webcam is used, please disable all other video recording devices via your Windows Device Manager.

I have multiple webcams plugged in. Which one will FlexPoser use?

FlexPoser uses the first available video recording device. To make sure that your intended webcam is used, please disable all other video recording devices via your Windows Control Panel > Device Manager.

Why isn't my face being detected in-game?

Please try the following to make face detection easier:

For geeks:

How is the face tracking done?

Estimating the rotation and deformation of the face is done client-side using Jason Saragih's open source library FaceTracker. FaceTracker estimates and updates a 3D deformable mesh which follows the movements of the person in the webcam image. This functionality is included in the binary module. Visit Jason Saragih's website for more information about the face tracking.

How is the face tracking data applied to a player model?

FlexPoser looks at distances between specific vertices of the deformable mesh constructed by FaceTracker. These distances are converted to "flex weights": values between 0 and 1 that the Source engine uses to represent how strongly a certain facial action unit, such as "left eyebrow raised", is activated.

To make sure that everyone has the same expressive range, an in-game calibration process determines, for each user, the distances that correspond to flex weights 0 and 1 for every facial action unit.
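
As an illustration only (not FlexPoser's actual code), the conversion from a measured mesh distance to a flex weight could look roughly like this, where d0 and d1 are the calibrated distances and "jaw_drop" is just an example flex name:

    -- Sketch: map a vertex distance from the tracked mesh to a Source engine flex weight.
    -- d0 and d1 are the calibrated distances corresponding to flex weights 0 and 1.
    local function DistanceToFlexWeight(distance, d0, d1)
        return math.Clamp((distance - d0) / (d1 - d0), 0, 1)
    end

    -- Apply it to a player model. Flex names depend on the player model;
    -- "jaw_drop" is only an example.
    local function ApplyFlex(ply, flexName, distance, d0, d1)
        local id = ply:GetFlexIDByName(flexName)
        if id then
            ply:SetFlexWeight(id, DistanceToFlexWeight(distance, d0, d1))
        end
    end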

What are the facial parameters used in FlexPoser?

In total, 18 floating-point numbers are used to represent the facial configuration: 3 to represent the head rotation (pitch, yaw and roll) and 15 flex weights:

How are action units synchronised across clients?

Clients apply the facial parameters directly to the player model of the local player, which might be visible due to a third-person perspective or a camera view. The facial parameters are also sent to the server at a rate of 5 times per second. The server, upon receiving facial data, broadcasts the data to all players except the sender, who doesn't need it. Upon receiving the facial data, the other clients apply the weights, with interpolation, to their client-side player model of the sending player.
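
A simplified sketch of that flow in Garry's Mod Lua follows. The net message name, the GetTrackedParams placeholder and the parameter-to-flex mapping are made up for the example; FlexPoser's real implementation differs in detail.

    local NUM_PARAMS = 18  -- 3 head rotation values + 15 flex weights

    if SERVER then
        util.AddNetworkString("FlexPoserUpdate")

        -- Relay incoming facial data to everyone except the sender.
        net.Receive("FlexPoserUpdate", function(len, sender)
            local params = {}
            for i = 1, NUM_PARAMS do params[i] = net.ReadFloat() end

            net.Start("FlexPoserUpdate")
            net.WriteEntity(sender)
            for i = 1, NUM_PARAMS do net.WriteFloat(params[i]) end
            net.SendOmit(sender)
        end)
    end

    if CLIENT then
        -- Send the locally tracked parameters to the server 5 times per second.
        timer.Create("FlexPoserSend", 0.2, 0, function()
            local params = GetTrackedParams()  -- placeholder for reading the binary module's output
            net.Start("FlexPoserUpdate")
            for i = 1, NUM_PARAMS do net.WriteFloat(params[i]) end
            net.SendToServer()
        end)

        -- Receive other players' parameters and apply them with interpolation.
        local targets = {}  -- [player] = most recently received parameter table
        net.Receive("FlexPoserUpdate", function()
            local ply = net.ReadEntity()
            local params = {}
            for i = 1, NUM_PARAMS do params[i] = net.ReadFloat() end
            targets[ply] = params
        end)

        hook.Add("Think", "FlexPoserApply", function()
            for ply, params in pairs(targets) do
                if IsValid(ply) then
                    -- Parameters 1-3 would drive the head rotation; 4-18 are flex weights.
                    -- The mapping from parameter index to flex ID is model-specific and
                    -- purely illustrative here.
                    for i = 4, NUM_PARAMS do
                        local flexID  = i - 3
                        local current = ply:GetFlexWeight(flexID)
                        ply:SetFlexWeight(flexID, Lerp(FrameTime() * 10, current, params[i]))
                    end
                end
            end
        end)
    end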