



Building a Snapchat Lens Effect in Python

Snapchat, Instagram, and now Apple have all gotten in on real time face effects. In today's post, we'll build out a method to track and distort our face in real time, just like these apps do.

For those who'd like a video walkthrough, this entire post is also available as a walkthrough on YouTube. You can find the video walkthrough at the end of this page.

We'll use two of the biggest, most exciting image processing libraries available for Python 3: Dlib and OpenCV.

Installing Dlib is easy enough, thanks to wheels being available for most platforms. A simple pip install dlib should be enough to get you up and running.

For OpenCV, however, installation is a bit more complicated. If you're running on macOS, you can try this post to get OpenCV set up. Otherwise, you'll need to figure out installation on your own platform; something like this might work for Ubuntu. For Windows users, you may want to try your luck with this unofficial wheel. Once you've gotten OpenCV installed, you should be set for the rest of this lesson.

We'll use OpenCV to get a raw video stream from the webcam. We'll then resize this raw stream, using the imutils resize function, so we get a decent frame rate for face detection.

Once we've got a decent frame rate, we'll convert our webcam frame to black and white, then pass it to Dlib for face detection. Dlib's get_frontal_face_detector returns a set of bounding rectangles for each detected face in an image. With this, we can then use a model (in this case, shape_predictor_68_face_landmarks, available on GitHub) and get back a set of 68 points describing our face's orientation.

From the points that match the eyes, we can create a polygon matching their shape in a new channel. With this, we can do a bitwise_and and copy just our eyes from the frame. We then create an object to track the n positions our eyes have been.
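As a rough sketch, the whole pipeline might look something like the following. This is not the post's exact code: the model filename assumes you've downloaded the shape_predictor_68_face_landmarks release, and the `EyePositionHistory` class, the resize width of 500, and `n=10` are illustrative choices of mine. The eye landmark indices (36 to 41 for one eye, 42 to 47 for the other) do come from the standard 68-point model.

```python
# Sketch of the pipeline: webcam stream -> resize -> grayscale -> Dlib face
# detection -> 68 landmarks -> eye polygon mask -> bitwise_and -> history.
from collections import deque

# In the 68-point landmark model, one eye is points 36-41, the other 42-47.
LEFT_EYE = list(range(36, 42))
RIGHT_EYE = list(range(42, 48))


class EyePositionHistory:
    """Remember the last n (x, y) centers our eyes have been at."""

    def __init__(self, n=10):
        self.positions = deque(maxlen=n)

    def add(self, center):
        self.positions.append(center)


def main():
    # Heavy dependencies imported here so the helpers above stay importable.
    # Install with: pip install dlib opencv-python imutils
    import cv2
    import dlib
    import imutils
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    # Assumed local path to the downloaded 68-point landmark model.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    history = EyePositionHistory(n=10)

    camera = cv2.VideoCapture(0)
    while True:
        grabbed, frame = camera.read()
        if not grabbed:
            break
        # Shrink the raw stream so detection runs at a decent frame rate.
        frame = imutils.resize(frame, width=500)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        for rect in detector(gray, 0):
            shape = predictor(gray, rect)
            eye_points = np.array(
                [[shape.part(i).x, shape.part(i).y] for i in LEFT_EYE + RIGHT_EYE],
                dtype=np.int32,
            )
            # Draw the two eye polygons into a fresh single-channel mask...
            mask = np.zeros(gray.shape, dtype=np.uint8)
            cv2.fillPoly(mask, [eye_points[:6], eye_points[6:]], 255)
            # ...then bitwise_and copies just our eyes out of the frame.
            frame = cv2.bitwise_and(frame, frame, mask=mask)
            # Track where the eyes have been over the last n frames.
            history.add(tuple(eye_points.mean(axis=0).astype(int)))

        cv2.imshow("Lens", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    camera.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

The deque with `maxlen=n` is what gives us the "last n positions" behavior for free: appending an eleventh position silently drops the oldest one.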
