SAFR trains its algorithms on large datasets that are demographically representative of a global population. This training ensures our algorithms are robust to variations in ethnicity, race, gender, age, and facial hair. 

The SAFR algorithm's resilience to these variations has been documented by the National Institute of Standards and Technology (NIST) as part of its bias and race analysis.

The SAFR algorithm exhibits highly uniform performance across genders and skin tones. In a NIST evaluation using a sample of the worldwide population, SAFR ranked second out of 103 algorithms among the least variable algorithms for gender and skin tone, with an accuracy variance of less than 0.25% at a false match rate (FMR) of 1:1,000.

The figure below (Figure 13 from NIST Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, December 2019) shows SAFR's (labeled realnetworks003, in the red box) variability of bias across races.

SAFR has the lowest variability across racial groups of all algorithms tested, with the exception of six algorithms developed in China (separated at the bottom of the NIST report). See "NIST Results Once Again Demonstrate SAFR's Consistency and Fairness Among Racial Groups" for more information.

You don't need to specially configure SAFR to take advantage of low-bias recognition; this behavior is inherent to the SAFR facial recognition algorithm.