Project overview
WSI Pilot Project
This project addresses the topic of administrative inhumanity and its relationship to the potential uses of AI in public administration in order to begin to draw out implications for the governance and design of AI in such contexts.
The general issue of administrative inhumanity refers to a mode of citizen-state relationship in which the agency of the citizen is engaged by the system (we might think of this as the state instrumentalising the agency of the citizen for its ends) but in a way that manifests as experiences of powerlessness, frustration and humiliation, combining a lack of responsiveness to the individual case with a lack of accountability in (and for) the decision-making process. The project has two primary foci:
1. Categorical reasoning, discretion and the scope for justice: administrative systems (whether or not they involve AI) operate through structures of categorization, and decision-trees are forms of categorical reasoning. Decision-making therefore needs to be responsive to individual cases whose justice-salient features are not adequately captured by the categories in play. This calls both for a process of refinement and revision of the categories and for a capacity for discretionary judgment where doing justice to the individual case requires it. The central issue here is the logical point that the morally relevant features of an individual case may not be made visible by the categories in terms of which the administrative system operates; any administrative system must therefore be alert to this possibility and have a discretionary mechanism for addressing it.
2. Accountability and agency in AI contexts: algorithms are generally opaque to those subject to them (and often to those executing decisions arrived at through them), and this creates problems of accountability. The main issue here concerns the ways in which the design (and redesign) of algorithms might be opened up to users (administrators) and to the subjects of public administration, in order to build some form of accountability into the system. The underlying concern is that (2) threatens to make the problem identified in (1) more intractable unless the governance and design of AI for use in public administration can address the needs identified in both (1) and (2).
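The logical structure of point 1 can be sketched in code. The sketch below is purely illustrative (the eligibility rule, thresholds, and the "notes" mechanism are hypothetical, not drawn from any real administrative system): a decision-tree decides by category tests alone, and a wrapper routes cases with potentially uncaptured, justice-salient features to human discretionary judgment instead.

```python
from dataclasses import dataclass

@dataclass
class Case:
    # Features the system's categories capture
    income: int
    dependants: int
    # Free-text notes: a stand-in for justice-salient details
    # that the categories may fail to make visible
    notes: str = ""

def categorical_decision(case: Case) -> str:
    """A decision-tree: each branch is a category test."""
    if case.income < 15000:
        return "eligible"
    if case.dependants >= 3:
        return "eligible"
    return "ineligible"

def decide(case: Case, flag_for_review) -> str:
    """Wrap the categorical rule with a discretionary mechanism:
    a case whose record suggests morally relevant features the
    categories miss is referred for human judgment rather than
    decided by category alone."""
    if flag_for_review(case):
        return "refer-to-caseworker"
    return categorical_decision(case)

# A crude trigger: any non-empty note prompts review (a placeholder
# for whatever mechanism surfaces uncaptured features).
needs_review = lambda c: bool(c.notes.strip())

print(decide(Case(income=20000, dependants=0), needs_review))
# → ineligible (decided by category alone)
print(decide(Case(income=20000, dependants=0,
                  notes="recent bereavement; sole carer"), needs_review))
# → refer-to-caseworker (discretion engaged)
```

The design point the sketch is meant to make concrete: the categorical rule cannot "see" the bereavement, so responsiveness to the individual case depends entirely on the system having an escape hatch to discretionary judgment and a feedback path for revising the categories themselves.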