{"title":"Human vs. algorithmic auditors: the impact of entity type and ambiguity on human dishonesty","doi":"10.3389/frbhe.2025.1645749","date_updated":"2026-03-27T16:07:20Z","publisher":"Frontiers Media SA","date_created":"2026-02-06T13:50:44Z","author":[{"last_name":"Protte","full_name":"Protte, Marius","id":"44549","first_name":"Marius"},{"first_name":"Behnud","orcid":"0000-0002-6271-5912","last_name":"Mir Djawadi","full_name":"Mir Djawadi, Behnud","id":"26032"}],"volume":4,"year":"2025","citation":{"bibtex":"@article{Protte_Mir Djawadi_2025, title={Human vs. algorithmic auditors: the impact of entity type and ambiguity on human dishonesty}, volume={4}, DOI={10.3389/frbhe.2025.1645749}, number={1645749}, journal={Frontiers in Behavioral Economics}, publisher={Frontiers Media SA}, author={Protte, Marius and Mir Djawadi, Behnud}, year={2025} }","mla":"Protte, Marius, and Behnud Mir Djawadi. “Human vs. Algorithmic Auditors: The Impact of Entity Type and Ambiguity on Human Dishonesty.” Frontiers in Behavioral Economics, vol. 4, 1645749, Frontiers Media SA, 2025, doi:10.3389/frbhe.2025.1645749.","short":"M. Protte, B. Mir Djawadi, Frontiers in Behavioral Economics 4 (2025).","apa":"Protte, M., & Mir Djawadi, B. (2025). Human vs. algorithmic auditors: the impact of entity type and ambiguity on human dishonesty. Frontiers in Behavioral Economics, 4, Article 1645749. https://doi.org/10.3389/frbhe.2025.1645749","ama":"Protte M, Mir Djawadi B. Human vs. algorithmic auditors: the impact of entity type and ambiguity on human dishonesty. Frontiers in Behavioral Economics. 2025;4. doi:10.3389/frbhe.2025.1645749","ieee":"M. Protte and B. Mir Djawadi, “Human vs. algorithmic auditors: the impact of entity type and ambiguity on human dishonesty,” Frontiers in Behavioral Economics, vol. 4, Art. no. 1645749, 2025, doi: 10.3389/frbhe.2025.1645749.","chicago":"Protte, Marius, and Behnud Mir Djawadi. “Human vs. Algorithmic Auditors: The Impact of Entity Type and Ambiguity on Human Dishonesty.” Frontiers in Behavioral Economics 4 (2025). https://doi.org/10.3389/frbhe.2025.1645749."},"intvolume":" 4","publication_status":"published","publication_identifier":{"issn":["2813-5296"]},"article_number":"1645749","language":[{"iso":"eng"}],"_id":"63909","user_id":"26032","department":[{"_id":"179"}],"abstract":[{"lang":"eng","text":"Introduction: Human-machine interactions are becoming increasingly pervasive in daily life and professional contexts, motivating research on how human behavior changes when individuals interact with machines rather than with other humans. While most of the existing literature has focused on human-machine interactions with algorithmic systems in advisory roles, research on human behavior in monitoring or verification processes conducted by automated systems remains largely absent. This is surprising given the growing implementation of algorithmic systems in institutions, particularly in tax enforcement and financial regulation, to help monitor and identify misreports, and on online labor platforms, which widely implement algorithmic control to ensure that workers deliver high service quality. Our study examines how human dishonesty changes when the verification of statements that may be untrue is performed by machines rather than by humans, and how ambiguity in the verification process influences dishonest behavior. Method: We design an incentivized laboratory experiment using a modified die-roll paradigm in which participants privately observe a random draw and report the result, with higher reported numbers yielding greater monetary rewards. A probabilistic verification process introduces the risk that a lie is identified and punished, with treatments varying the verification entity (human vs. machine) and the degree of ambiguity in the verification process (transparent vs. ambiguous). Results: Under transparent verification rules, cheating magnitude does not differ significantly between human and machine auditors. Under ambiguous conditions, however, cheating magnitude is significantly higher when machines verify participants' reports, reducing the prevalence of partial cheating and polarizing behavior toward either complete honesty or maximal overreporting. The same applies when comparing reports made to a machine entity under ambiguous vs. transparent verification rules. Discussion: These findings emphasize the behavioral implications of algorithmic opacity in verification contexts. While machines can serve as effective auditors under transparent conditions, their black-box nature combined with ambiguous verification processes may unintentionally incentivize more severe dishonesty. These insights have practical implications for designing automated oversight systems in tax audits, compliance, and workplace monitoring."}],"status":"public","type":"journal_article","publication":"Frontiers in Behavioral Economics"}