Instructions to use CrucibleAI/ControlNetMediaPipeFace with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use CrucibleAI/ControlNetMediaPipeFace with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load the MediaPipe-face ControlNet and attach it to a Stable Diffusion pipeline
controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
```
- Notebooks
- Google Colab
- Kaggle
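Once the pipeline above is loaded, it is conditioned on an annotation image of detected face landmarks. A minimal sketch of building such a control image with Pillow, using hypothetical hand-picked landmark coordinates (in practice they would come from running MediaPipe face detection on a source frame, and the exact drawing spec should be taken from the model card):

```python
from PIL import Image, ImageDraw

# Hypothetical 2D landmark coordinates in pixel space (eyes, nose, mouth
# corners); these are illustrative, not real MediaPipe output.
LANDMARKS = [(200, 180), (312, 180), (256, 260), (200, 330), (312, 330)]
MOUTH_OUTLINE = [(200, 330), (256, 360), (312, 330), (256, 345)]

def make_control_image(size=(512, 512)):
    """Render landmarks onto a black canvas, roughly the shape of the
    annotation images this ControlNet conditions on (an assumption;
    check the model card for the exact MediaPipe drawing style)."""
    img = Image.new("RGB", size, (0, 0, 0))
    draw = ImageDraw.Draw(img)
    for x, y in LANDMARKS:
        # Small filled dot per landmark
        draw.ellipse((x - 3, y - 3, x + 3, y + 3), fill=(0, 255, 0))
    # Rough mouth contour
    draw.polygon(MOUTH_OUTLINE, outline=(255, 255, 255))
    return img

control = make_control_image()
# The control image is then passed as the `image` argument, e.g.:
# out = pipe("a portrait photo", image=control).images[0]
```

The generation call itself is commented out since it downloads and runs the full Stable Diffusion weights.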
Control lips, teeth and tongue for lip sync task.
#20 opened by Temir
Is it possible to force the SD model to output precise lip, teeth, and tongue positions when stylizing video? CrucibleAI trained this ControlNet on face position. Maybe there is a way to train the mouth as well?
Not with this control setup. :( We don't have a good pipeline for detecting teeth and tongues, as we lean on MediaPipe Face. In the future, if another network comes along with this capability, we can try to train with that network.
Could you use MediaPipe's FaceMesh as control to provide much more detailed landmarks, perhaps?
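One way to prototype that suggestion: FaceMesh produces 468 normalized (x, y, z) landmarks per face, far denser than a face-detection annotation. A minimal sketch of the first step, mapping those landmarks into pixel space and pulling out a lip subset, using simulated landmarks in place of a real `mediapipe.solutions.face_mesh` run, and with illustrative (not official) lip indices:

```python
import random

# FaceMesh returns 468 normalized (x, y, z) landmarks per face; simulate
# that here with random points. In practice they would come from running
# mediapipe.solutions.face_mesh.FaceMesh on each video frame.
random.seed(0)
mesh = [(random.random(), random.random(), 0.0) for _ in range(468)]

def to_pixel_coords(landmarks, width, height):
    """Map normalized FaceMesh coordinates into pixel space, the first
    step toward rasterizing a denser control image than the sparse face
    annotation this model was trained on."""
    return [(int(x * width), int(y * height)) for x, y, _ in landmarks]

pts = to_pixel_coords(mesh, 512, 512)

# FaceMesh defines fixed lip-contour indices (MediaPipe exposes them as
# FACEMESH_LIPS); the particular numbers below are illustrative only.
LIP_INDICES = [61, 291, 13, 14, 78, 308]
lip_pts = [pts[i] for i in LIP_INDICES]
```

Rasterizing the lip subset into its own control channel is one possible route to mouth-level conditioning, though it would still require retraining a ControlNet on that annotation format.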