Last week, Niccolo Mejia shared an interesting view of the use of facial recognition in financial applications.
He listed three categories: verifying identity for loan approval, authentication for digital wallets and blockchain-enabled applications, and matching similar faces for marketing campaigns. The visual that captured our attention was a screenshot from Kairos, a Miami-based facial recognition company.
They claim to use diverse data sets to prevent misidentification of people of color, as well as people of different ages and genders.
At first, we got a bit worried about features such as age, gender, and racial recognition, as these can lead to all sorts of bias and privacy issues. But we also noticed that the company labels itself on its homepage as an “Ethical Vendor”, and what that means is well described in this blogpost by Dr. Stephen Moore, Kairos’ Chief Science Officer.
Misidentification of people based on ethnicity, gender, and age plagues the facial recognition industry, and it’s a continuing mission of ours to fix this problem… We fundamentally believe that to have truly useful human-to-AI interactions—even AI-to-AI interactions—we need AIs to be able to explain themselves in intuitive, understandable ways. For the general public to have trust in AIs, they need to be auditable.
Accountable, auditable, and ethical AI is definitely the way to go and should be part of any digital ethics conversation.
Guest post by The Futures Agency content curator Petervan
Example of “Facial Relevance Map” by Untangle