Public Voice

The UK ESRC Digital Good Network has published a call for abstracts for a proposed special issue in the journal Big Data & Society, aiming to advance scholarship on the state of the art and future prospects of including public voices in AI.
They note that ‘public voice’ is not easy to define or operationalise: “There is no one ‘public’. Benefits, harms and risks are distributed unevenly. The hopes, concerns and experiences of different groups with AI vary. What has been identified as a ‘participation gap’ is worsened by insufficient and ineffective processes of consultation, implementation and ongoing management. Compounding these issues are structural inequities and overlapping systems of power and oppression (for instance racism, sexism, ableism, colonialism, transphobia, classism) which afford some groups more resources and access to shape AI technologies than others.”
They put forward a number of themes for the special issue (see the call website for the full text). These include:
Advancing methodological challenges in public voice
We are especially interested in submissions that address gaps in advocacy, understanding and methods concerning public voice in AI research, design and policy. We welcome investigations that appraise the efficacy of surveys and of alternative deliberative strategies such as co-design, public participation in innovation, deliberative democracy and other forms of public empowerment in innovation.
Confronting dominant innovation narratives of AI
This theme gains salience as the narratives used by private sector firms, public research agencies, politicians and government policymakers to promote AI continue to be dominated by framings of inevitability and speed. Similarly, the benefits of AI are framed narrowly around economic measures such as efficiency, productivity and profit. These powerful narratives crowd out space to deliberate on how AI might contribute positively to broader capacities for flourishing, such as care, conviviality and wellbeing.
Organising AI innovation with society
Initiatives such as Responsible AI have gained currency as a set of commitments and organising principles with which to advance ethical, safe, trustworthy, fair or equitable AI innovation, policy and practice: in short, to bring into AI the concerns for society and the planet that are externalised by the pro-market framings mentioned above. For some advocates of Responsible AI, doing AI research, development and policy with society is central.