Facial recognition, privacy, bias & cognitive offload
The benefits of AI are clear. It can improve efficiency and accuracy, and support problem-solving and decision-making. This can reduce cognitive load for individuals and organisations, freeing up thinking for more critical and strategic tasks. But reliance on AI without understanding its risks and limitations, and our own cognitive biases, creates problems. Used incorrectly, facial recognition AI can unlawfully intrude on our privacy. AI can infringe on our human rights and even weaken our ability to think critically. Automation bias can incline us to take AI output at face value, increasing the potential for erroneous decisions. AI systems and their training data, the individuals using AI, and the context in which AI is deployed create a complex system of variables that determine an AI system’s efficacy, and the potential for harm when things go wrong. Being aware of limitations and biases, in AI systems and in ourselves, is a critical step toward implementing the right organisational strategies to avoid negative consequences while realising the benefits.
Facial Recognition Technology
Facial recognition is an error-prone AI technology known to cause concern among civil liberties advocates. These concerns include the right to privacy and the rights to equality and non-discrimination.[1] Not all Facial Recognition Technology (FRT) is identical. Identification FRT does not raise the same concerns as classification facial recognition, such as gender identification or emotion recognition. Nonetheless, deploying any form of FRT without consideration of its ethics, flaws, and underlying data is dangerous.
On 29 October 2024, the Australian Privacy Commissioner, Carly Kind, after a two-year investigation, determined that leading Australian retailer Bunnings Warehouse breached the privacy of likely hundreds of thousands of customers by collecting their personal and sensitive information through its FRT system. In a video released in January 2025, the retailer said the FRT trial was implemented with the sole intent of keeping its team and customers safe and preventing unlawful activity by repeat offenders, and that it intends to seek a review of the Commissioner’s determination before the Administrative Review Tribunal.[2]
The Privacy Act 1988 recognises facial images and other biometric information as sensitive, which attracts a higher level of privacy protection.[3] Among other things, consent is generally required for its collection. Commissioner Kind found that the retailer collected individuals’ sensitive information without consent, failed to take reasonable steps to notify individuals that their personal information was being collected, and did not include required information in its privacy policy, highlighting a failure of governance.[4]
This case is not isolated. Commissioner Kind and the Office of the Australian Information Commissioner (OAIC) have conducted several high-profile investigations into other corporations over their use of FRT.[5] The OAIC has since released guidance to help Australian companies navigate the ethical and privacy risks associated with the technology.[6]
Loss of privacy is not the only danger here. An even larger concern is reliability. Errors can lead to someone being prevented from boarding a flight, being wrongfully accused of a crime, or being barred from a store. What makes this more concerning is that these systems have been shown to have a significantly higher error rate for people of colour.[7]
Algorithmic bias
As of September 2024, all known false arrests within the United States stemming from errors in FRT have involved people of colour.[8] The case of Robert Williams typifies these concerns. In January 2020, Williams was arrested, handcuffed in front of his children, and spent the night in jail after being wrongly identified by a facial recognition program for allegedly stealing five watches from a Shinola store in Detroit. While Williams was later cleared of the alleged crime, the time spent in jail has affected his relationships with his family, friends, co-workers, and neighbours.[9]
The datasets that underpin AI systems are typically sourced from the internet, which means they inherit the stereotypes, inequalities, and power asymmetries that exist in society. These biases are then exacerbated by the algorithmic systems that use them.[10] A known issue with machine learning systems is that they are designed precisely to infer hidden correlations in data, and attempts to correct unwanted correlations can be ineffective.[11]
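A minimal sketch can illustrate how this plays out. Using entirely synthetic data and an off-the-shelf classifier, it shows that even when a sensitive attribute is deliberately excluded from training, a model can recover it through a correlated proxy feature and reproduce the bias baked into historical labels. The features, numbers, and group structure here are illustrative assumptions, not a description of any real system.

```python
# Synthetic demonstration: bias leaks in through a proxy feature even when the
# sensitive attribute is withheld from the model. Illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # sensitive attribute (deliberately NOT a model feature)
proxy = group + rng.normal(0, 0.3, n)      # a benign-looking feature strongly correlated with group
signal = rng.normal(0, 1, n)               # a genuinely predictive feature

# Historical labels are skewed against group 1, independent of the real signal.
label = ((signal + 1.5 * group + rng.normal(0, 1, n)) > 1.0).astype(int)

X = np.column_stack([proxy, signal])       # the sensitive attribute itself is excluded
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    mask = (group == g) & (label == 0)     # people who should not be flagged
    print(f"group {g}: false positive rate = {pred[mask].mean():.2%}")
```

Running this shows a markedly higher false positive rate for the group that the skewed labels disadvantaged, despite that attribute never being given to the model.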
Matches identified by FRT can be filtered through a layer of human oversight. This practice is commonly known as ‘human in the loop’ and can potentially mitigate machine bias. In the case of Bunnings, the retailer claims any matches identified by its system were sent to a small, specialised team for review. If the team confirmed a positive match, local authorities would be called to the store. Human in the loop is a necessary safeguard, but the existence of automation bias raises questions about its efficacy.
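As a hypothetical sketch of what such a gate might look like (not a description of any retailer’s actual system), the following shows the basic control flow: low-confidence candidates are discarded, and no real-world action is taken unless a human reviewer confirms the match. The threshold and field names are assumptions for illustration.

```python
# A hypothetical 'human in the loop' gate for FRT matches: the system never acts
# on an algorithmic match alone. Threshold and identifiers are illustrative.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90          # candidates below this score are dropped outright

@dataclass
class CandidateMatch:
    person_id: str               # entry on the store's watchlist (illustrative)
    similarity: float            # score reported by the matching model

def handle_match(candidate: CandidateMatch, reviewer_confirms) -> str:
    """Return the action taken for a single candidate match."""
    if candidate.similarity < REVIEW_THRESHOLD:
        return "discard"         # not confident enough to show a reviewer
    if not reviewer_confirms(candidate):
        return "reject"          # the human overrides the algorithm
    return "escalate"            # only a human-confirmed match triggers real-world action

# Example: a sceptical reviewer who rejects everything keeps escalations at zero.
print(handle_match(CandidateMatch("watchlist-042", 0.97), reviewer_confirms=lambda c: False))
```

The design only works, of course, if the reviewer genuinely scrutinises each candidate rather than deferring to the score, which is precisely where automation bias enters.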
Automation bias
In a 1992 study, Mosier et al. simulated aircraft engine fire situations to test automation bias in pilot decision-making. When airline pilots received an incorrect engine failure warning from an automated system, 75 percent of them followed the automated advice to shut down the wrong engine. Only 25 percent of pilots using a paper checklist made the same error.[12] Parasuraman and Manzey, in their 2010 paper on complacency and bias in human use of automation, make the following overarching observations:
- Automation bias can be found in different settings including aviation, medicine, command-and-control operations in military contexts, and process control;
- Automation bias occurs in naïve and expert participants;
- Automation bias seems to depend on the level of automation and overall reliability of the aid;
- Automation bias cannot be prevented by training or explicit instructions to verify the recommendations;
- Automation bias seems to depend on how accountable users of an aid perceive themselves to be for overall performance; and
- Automation bias can affect individuals as well as teams.[13]
Automation bias is pervasive across different industries and contexts. There may be limited utility in preventative measures such as training or explicit instructions to verify recommendations, but part of the solution may lie in effective governance through a culture of accountability for outcomes among individuals and teams. Interrogating automated decision-making such as FRT, particularly the underlying processes, criteria, and data used by the AI system, to test the reliability of its recommendations is therefore warranted, as sketched below.
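One way an organisation might operationalise that interrogation is to audit logged decisions: how often reviewers simply agree with the system, and how often confirmed matches later prove to be wrong. The sketch below is a hypothetical illustration of such an audit; the record fields and metrics are assumptions, not an established standard.

```python
# Hypothetical audit of logged FRT decisions; field names and metrics are assumptions.
from typing import Iterable

def audit(records: Iterable[dict]) -> dict:
    """Summarise reviewer agreement and downstream accuracy for flagged matches."""
    flagged = [r for r in records if r["system_flagged"]]
    confirmed = [r for r in flagged if r["reviewer_agreed"]]
    return {
        # A rate near 1.0 may signal reviewers are rubber-stamping the system.
        "reviewer_agreement_rate": len(confirmed) / max(len(flagged), 1),
        # How often human-confirmed matches later proved to be wrong.
        "false_match_rate": sum(not r["was_correct"] for r in confirmed) / max(len(confirmed), 1),
    }

log = [
    {"system_flagged": True, "reviewer_agreed": True,  "was_correct": True},
    {"system_flagged": True, "reviewer_agreed": True,  "was_correct": False},
    {"system_flagged": True, "reviewer_agreed": False, "was_correct": False},
]
print(audit(log))   # e.g. {'reviewer_agreement_rate': 0.67, 'false_match_rate': 0.5}
```

Metrics like these give the accountable individuals and teams something concrete to answer for, rather than leaving reliability as an untested assumption.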
Cognitive offload
As AI becomes further ingrained within organisations, the relationship between its use and cognitive offload should also be considered. Cognitive offload, where we delegate thinking to the AI, is another pervasive phenomenon and can reduce critical thinking ability. Paired with the potential for automation bias in high-risk settings, for example FRT used to support decision-making in enforcement contexts, it compounds the risk of harmful outcomes.
A recent study found that reliance on AI in problem-solving and decision-making contexts can erode independent analytical skills, leading to a decline in cognitive flexibility and creativity.[14] It also notes that factors such as educational background and cognitive engagement help mitigate the decline in critical thinking associated with cognitive offload through AI use. This suggests organisations can implement AI governance practices that balance the benefits of AI use with independent analysis to lessen the impact of cognitive offload.[15] Training or explicit instruction to verify recommendations through independent analysis alone may not work.[16] Consequently, it is important to simultaneously embed a culture of accountability across teams and individuals. Another strategy may lie in promoting curiosity and understanding of AI’s limitations within an organisational context. A contributing factor to higher education’s role in mitigating reduced critical thinking when using AI was increased scepticism of AI’s abilities.[17] In other words, participants with higher levels of education were less inclined to take AI output at face value.
AI encompasses a range of different technologies, each presenting its own benefits and risks. The challenges posed are not limited to FRT, and they underscore the need for robust governance, ethical consideration, and awareness of bias in AI systems and the data they are trained on, as well as within ourselves. Automation bias and cognitive offload are pervasive phenomena that complicate the relationship between humans and AI. Strategies to address them include fostering a culture of accountability, informed governance frameworks, and promoting curiosity and healthy scepticism of AI systems within organisational contexts. Higher education and training tailored to understanding AI’s capabilities can also help to alleviate the erosion of critical thinking that comes with relying on AI systems. If an organisation deploys AI to supplement decision-making, a combination of accountability and awareness of limitations is necessary. By prioritising accountability, learning, and a critical understanding of AI’s limitations, organisations can navigate the complexities of AI integration while safeguarding individual rights and human judgement, and ensuring AI serves as a tool for progress.
[1] https://humanrights.gov.au/our-work/rights-and-freedoms/rights-and-freedoms-right-right.
[2] https://www.bunnings.com.au/about-us/facial-recognition-technology.
[3] https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-guidelines/chapter-6-app-6-use-or-disclosure-of-personal-information.
[4] https://www.oaic.gov.au/news/media-centre/bunnings-breached-australians-privacy-with-facial-recognition-tool.
[5] Kmart, 7-Eleven, and Clearview AI.
[6] https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/organisations/facial-recognition-technology-a-guide-to-assessing-the-privacy-risks.
[7] Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans, pp. 148-149.
[8] Arvind Narayanan & Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, 2024, p. 15.
[9] https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/.
[10] Nature 610, 451-452 (2022) doi: https://doi.org/10.1038/d41586-022-03050-7
[11] Brian Christian, The Alignment Problem: Machine Learning and Human Values, 2020, Introduction, 58:30.
[12] Mosier, K. L., Palmer, E. A., & Degani, A. (1992). Electronic Checklists: Implications for Decision Making. Proceedings of the Human Factors Society Annual Meeting, 36(1), 7-11. https://doi.org/10.1177/154193129203600104
[13] Parasuraman R, Manzey DH. “Complacency and Bias in Human Use of Automation: An Attentional Integration.” Hum Factors 52, no. 3 (June 2010): 381-410.
[14] Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. https://doi.org/10.3390/soc15010006
[15] Ibid.
[16] Parasuraman R, Manzey DH (2010).
[17] Ibid.
