SAFR Embedded SDK runs on a wide range of devices with widely varying performance. To validate whether SAFR is fast enough for your device, you can use the panoptes command-line application as described in this article.


SAFR Embedded SDK offers a benchmark mode as described here:

Panoptes Benchmark Output


The panoptes “recognize” action with the -b option reports only the recognition time for each image:

./bin/panoptes recognize sample_images -b
Fake_Device_1.jpg       b0b7d13b-080c-4366-ad8e-2f712d71d6f7    187.40% 46ms
Live_FaceContext.jpg    3a34c0f2-f56a-4e96-b419-41232965f1b1    187.40% 46ms
Fake_Paper_1.jpg        2e5b0064-8900-47df-8dc0-a97a91248975    187.40% 45ms
…
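The per-image timings can also be extracted from this output programmatically. A minimal sketch in Python (not the eSDK API), assuming the columns above are whitespace-separated and the last field is the recognition time in milliseconds; the sample lines are copied from the output above:

```python
# Parse panoptes "recognize -b" output and average the recognition times.
sample_output = """\
Fake_Device_1.jpg\tb0b7d13b-080c-4366-ad8e-2f712d71d6f7\t187.40%\t46ms
Live_FaceContext.jpg\t3a34c0f2-f56a-4e96-b419-41232965f1b1\t187.40%\t46ms
Fake_Paper_1.jpg\t2e5b0064-8900-47df-8dc0-a97a91248975\t187.40%\t45ms
"""

times_ms = []
for line in sample_output.splitlines():
    fields = line.split()
    times_ms.append(int(fields[-1].rstrip("ms")))  # "46ms" -> 46

average_ms = sum(times_ms) / len(times_ms)
print(f"average recognition time: {average_ms:.1f}ms")  # 45.7ms
```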

It is similar for the panoptes “analyze” action, where only the face detection execution time is shown (dtime). If liveness is enabled, dtime equals the face detection and liveness detection times combined.
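Because dtime is a combined figure when liveness is enabled, the liveness cost can be estimated by running “analyze” twice, once with liveness disabled and once with it enabled, and subtracting the two dtime values. A minimal arithmetic sketch with hypothetical timings (the numbers below are illustrative, not measured):

```python
# Hypothetical dtime values (ms) from two "analyze" runs; not real measurements.
dtime_detection_only = 27  # liveness disabled: face detection alone
dtime_with_liveness = 46   # liveness enabled: detection + liveness combined

liveness_ms = dtime_with_liveness - dtime_detection_only
print(f"estimated liveness detection time: {liveness_ms}ms")  # 19ms
```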


For the panoptes “benchmark” action, the execution time of every model is measured. For example, the following output is produced for each image in the sample_images directory:

./bin/panoptes benchmark sample_images 
Decoded jpeg in 9ms
Running face detection and recognition 1 times
Detected 1 faces in 27ms
Estimated emotions on 1 faces 21ms
Estimated occlusion on 1 faces 20ms
Estimated mask on 1 faces 18ms
Estimated age on 1 faces 18ms
Detected gender on 1 faces 19ms
Ran recognition on 1 faces in 19ms
Total time for everything: 154ms
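The per-stage lines above can be totaled programmatically, for example to see how much of the reported total is attributable to each model. A minimal parsing sketch in Python (the sample text is the output shown above; the “Total” line is kept separate from the stage sum, which is why the stages add up to slightly less than the total):

```python
import re

# Output copied from the "benchmark" run above.
sample_output = """\
Decoded jpeg in 9ms
Running face detection and recognition 1 times
Detected 1 faces in 27ms
Estimated emotions on 1 faces 21ms
Estimated occlusion on 1 faces 20ms
Estimated mask on 1 faces 18ms
Estimated age on 1 faces 18ms
Detected gender on 1 faces 19ms
Ran recognition on 1 faces in 19ms
Total time for everything: 154ms
"""

stage_ms = 0
total_ms = 0
for line in sample_output.splitlines():
    match = re.search(r"(\d+)ms$", line)
    if not match:
        continue  # e.g. the "Running ... 1 times" line carries no timing
    if line.startswith("Total"):
        total_ms = int(match.group(1))
    else:
        stage_ms += int(match.group(1))

print(f"sum of stages: {stage_ms}ms, reported total: {total_ms}ms")
# The small difference is per-call overhead not attributed to any one model.
```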


Thus, the panoptes “benchmark” action can be used to determine the expected execution time of the various eSDK features. It measures only the time of a single eSDK API call, excluding initialization overhead and any other overhead that unoptimized application code might introduce. For the most accurate results, benchmark multiple images or repeat the benchmark multiple times on the same image (use the -Brd parameter to repeat).
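When the benchmark is repeated, the run-to-run spread matters as much as the mean. A minimal sketch for summarizing a collected list of per-run totals; the timing values below are illustrative, not measured:

```python
import statistics

# Hypothetical total times (ms) from repeated benchmark runs on one image.
run_totals_ms = [154, 149, 151, 150, 153, 152]

mean_ms = statistics.mean(run_totals_ms)
stdev_ms = statistics.stdev(run_totals_ms)
print(f"mean: {mean_ms:.1f}ms, stdev: {stdev_ms:.1f}ms")
```

A large standard deviation relative to the mean suggests the device is being throttled or is under competing load, in which case more repetitions are needed before trusting the numbers.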