Event

AI, Explainability and Epistemic Dependence

Wednesday, 7 January 2026, 18:15 to 19:45
Universität Hamburg, DE
Lecture given by Jocelyn Maclure at the University of Hamburg

The idea that people subjected to opaque AI-based decisions have a "right to explanation", under specific circumstances, is generating a stimulating and productive debate in philosophy. Some early normative defenses of the right to explanation or public justification (Vredenburgh 2021; Maclure 2021) are being challenged from a variety of perspectives (Ross 2022; Taylor 2024; Fritz 2024; Karlan & Kugelberg 2025). Others are qualifying or refining the case for a right to explanation (Da Silva 2023; Grote & Paulo 2025; Dishaw 2025). Having addressed, in a previous paper (2021), the argument that deep artificial neural networks are not significantly more fallible and opaque than human minds, I now want to turn my attention to two newly emerging counterarguments to the right to explanation thesis. The first is normative: the standards of public reason do not typically apply to AI decisions, and the interests at play do not justify the cost of granting a right to explanation. The second is epistemic: social epistemologists have long urged us to recognize human thinkers' basic epistemic dependence upon the testimony of others and upon a variety of complex social processes. The defenders of the right to explanation arguably overlook the possibility that it may be justified to defer epistemically to black-box algorithms. I will argue that, although serious, these counterarguments are unsuccessful.