Face recognition relies on the ability to detect traits in the face that uniquely set individuals apart from one another. Skin tone, texture, feature point location, and head shape all play a role in any particular algorithm's ability to differentiate faces. When unique features are hidden by occlusions, face algorithms have fewer data points from which to make their decisions. Occlusions range from physical objects (hats, glasses, masks, cellphones held up to the face, etc.) and image distortions (reflections off materials such as windows and windshields, or glare on the camera's lens) to environmental conditions (strong directional lighting that casts dark shadows, or a lack of lighting at night or in poorly lit indoor locations) and evidence damage (a scanned paper photo or ID document that has been wrinkled or otherwise damaged).
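
For illustration only, these sources of occlusion could be organized as a simple taxonomy that a capture-quality pipeline might use to tag incoming images. The type names below are hypothetical and are drawn directly from the categories above; they are not part of any SAFR API.

```python
from enum import Enum, auto

class OcclusionSource(Enum):
    """Hypothetical tags for the occlusion categories described above."""
    PHYSICAL_OBJECT = auto()    # hats, glasses, masks, cellphones held to the face
    IMAGE_DISTORTION = auto()   # reflections off windows/windshields, lens glare
    ENVIRONMENTAL = auto()      # harsh cast shadows, poor or nighttime lighting
    EVIDENCE_DAMAGE = auto()    # wrinkled or damaged scanned photos / ID documents
```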

All of these situations prevent some uniquely identifying traits in the face from being captured and used in the biometric identification process.

Modern face recognition platforms recognize when the face is occluded and mitigate the negative impacts in various ways. For example, SAFR detects when occlusions are present; if an occlusion's characteristics are consistent with a COVID-style face mask, SAFR ignores that region of the face and relies on the periocular region (the area around the eyes) for matching.
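
The general idea can be sketched as follows. This is an illustrative outline only, not SAFR's actual API: the `embed` function, the eye-line input, the mask-detection flag, and the crop size are all assumptions.

```python
import numpy as np

def periocular_crop(face_img, eye_y, half_height=40):
    """Crop a horizontal band around the eyes. The band height is an
    illustrative choice, not a SAFR parameter."""
    top = max(eye_y - half_height, 0)
    return face_img[top:eye_y + half_height, :]

def match(face_img, eye_y, mask_detected, embed, gallery_vec):
    """Occlusion-aware matching sketch: if a mask-like occlusion is detected,
    embed only the periocular band; otherwise embed the full aligned face.
    `embed` stands in for an assumed model that maps an image array to a
    unit-length feature vector."""
    crop = periocular_crop(face_img, eye_y) if mask_detected else face_img
    probe_vec = embed(crop)
    return float(np.dot(probe_vec, gallery_vec))  # cosine similarity of unit vectors
```

In a real system the eye line would come from detected facial landmarks and the mask decision from a dedicated occlusion classifier, but the control flow is the same: fall back to the visible, information-rich region rather than rejecting the face outright.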

When both the lower and upper portions of the face are obscured, less information is available for matching, and an algorithm's ability to differentiate individuals decreases accordingly.
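
One common way systems handle this in practice is to gate matching on how much of the face is actually visible, routing heavily occluded captures to review instead of matching them automatically. The threshold below is purely illustrative, not a SAFR value.

```python
def should_attempt_match(visible_fraction, threshold=0.6):
    """Illustrative quality gate: skip or flag matching when too little of the
    face is visible for a reliable decision. The 0.6 threshold is an assumption."""
    return visible_fraction >= threshold

# A capture with only 45% of the face visible would be flagged for review
# rather than matched automatically.
print(should_attempt_match(0.45))  # False
```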

Proper camera selection and placement, environmental control measures, and a face recognition algorithm developed to handle the challenges of matching faces in the wild all contribute to improving face recognition accuracy.
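
These considerations often end up encoded as a site-survey checklist or deployment profile. The sketch below shows what such a record might look like; the field names and example values are general rules of thumb assumed for illustration, not SAFR requirements.

```python
from dataclasses import dataclass

@dataclass
class CameraSiteProfile:
    """Illustrative site-survey record; fields and values are assumptions."""
    mounting_height_m: float       # lower mounts avoid steep downward angles
    vertical_angle_deg: float      # keep faces close to frontal
    min_pixels_between_eyes: int   # enough resolution for feature extraction
    backlight_compensation: bool   # mitigate strong directional light and silhouettes
    ir_illumination: bool          # helps in poorly lit or nighttime scenes

entrance_cam = CameraSiteProfile(
    mounting_height_m=2.3,
    vertical_angle_deg=12.0,
    min_pixels_between_eyes=60,
    backlight_compensation=True,
    ir_illumination=True,
)
```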

RealNetworks and the SAFR team are ready to work with customers to address these wide-ranging challenges and ensure optimal performance of the SAFR face recognition platform for our valued customers.