“Siri, talk dirty to me” — The Ethics of Conversational AI

Aditya Singh
Jan 26, 2021

[This piece was originally published on the Trilateral Research blog in August 2020]

This blog focuses on conversational agents such as Siri and Alexa and the ethical implications of how they respond to gendered abuse.

In designing such agents, technologists must make decisions regarding their persona and personality. This includes decisions about their gender, age, race, culture or class. In particular, they must decide how the agents respond to certain forms of user conduct, such as abuse, sexual overtures, or even indications of psychological distress or mental illness. Research demonstrates that a substantial amount of profanity is directed at chatbots. Users are more likely to harass a chatbot than another human, particularly if the chatbot is female-presenting. Responses to questions such as ‘What are you wearing?’ (Answer: ‘Why do people keep asking me this?’) and ‘Talk dirty to me’ (Answer: ‘The carpet needs vacuuming’) indicate that such prompts were frequent enough to require the developers to create ‘coping mechanisms’. Such decisions are not ethically neutral and require reflection on how they may perpetuate harmful stereotypes or behaviours.

Girl trouble

Siri, Alexa, Cortana and the default Google Assistant all present as female. These agents are gendered through female-presenting names, feminine voices and mannerisms. They are coded as witty and even flirtatious, as shown by their responses to obscure questions. Even though Siri is disembodied, its response to sexualised insults, ‘I’d blush if I could’, hints at an imaginary body, one that is ‘performatively female’. In addition, popular discourse goes beyond assessing these agents as technological objects to focus on their role as good companions and providers of gendered labour. This perception is visible in how they are marketed: the focus is on their personality, humour and gentle guidance. They are portrayed as personal assistants, parental companions and objects of desire. They are ‘always ready’, providing essential support and care labour.

This gendering is particularly evident in how agents’ responses to gendered abuse appear to be coded. Responses to sexualised comments and gendered abuse can range from a non-response (‘I don’t understand the question’) to a neutral response, a response in kind, or escalation to a human agent.

Crucially, both in-kind and neutral responses can serve as endorsements of the problematic attitudes underlying the users’ statements, especially since those statements were evidently anticipated. The responses from these agents arguably mirror the typical adaptive response women adopt to an inappropriate comment: acknowledging it, defusing the situation and swiftly returning to business. Thus, designing for ‘neutral’ stances, with strategies of avoidance and deflection, can imply endorsement, trivialisation or devaluation of the issue or of the experience of it. As a consequence, these stereotypes may entrench a larger culture of discrimination and violence faced by women, ranging from the normalisation of sexually aggressive behaviour to the failure to associate women with positions of leadership, authority and expertise.
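To make the design choice concrete, here is a minimal, purely illustrative sketch of a boundary-setting response policy, as opposed to deflection. The keyword list, reply wording and repetition threshold are invented for this example; a real system would rely on a trained abuse classifier and carefully reviewed response content.

```python
from enum import Enum, auto

# Illustrative only: a toy policy for how a voice assistant might respond
# to an abusive or sexualised prompt. All phrases and thresholds here are
# hypothetical placeholders, not any vendor's actual implementation.

class Strategy(Enum):
    NON_RESPONSE = auto()   # "I don't understand the question"
    DEFLECTION = auto()     # joke or change of topic
    BOUNDARY = auto()       # names the behaviour and declines to engage
    ESCALATION = auto()     # ends the exchange or hands off for review

# Toy stand-in for a real abuse classifier.
ABUSIVE_MARKERS = {"talk dirty", "what are you wearing"}

def is_abusive(utterance: str) -> bool:
    text = utterance.lower()
    return any(marker in text for marker in ABUSIVE_MARKERS)

def respond(utterance: str, repeat_count: int = 0) -> tuple[Strategy, str]:
    """Prefer a boundary-setting reply over deflection; escalate on repetition."""
    if not is_abusive(utterance):
        return Strategy.NON_RESPONSE, "Sorry, I didn't catch that."
    if repeat_count >= 2:
        return Strategy.ESCALATION, "I'm ending this conversation now."
    return Strategy.BOUNDARY, "I won't respond to that. Please don't speak to me that way."

if __name__ == "__main__":
    print(respond("Talk dirty to me"))
    print(respond("Talk dirty to me", repeat_count=2))
```

Even in this toy form, the design choice is visible: the default path names the behaviour rather than joking it away, and repetition triggers escalation rather than endless deflection.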

Designing Ethical Conversational AI

Inspiration may be drawn from the (sex-positive) feminist response to pornography, which is likewise understood to perpetuate harmful gender stereotypes. The response to bad pornography was ‘feminist’ pornography. Briefly, this entails three approaches, operating at the level of content, procedure and context.

The content approach focuses on the actual representations and depictions of women in pornography. The procedural approach focuses on how pornography is produced and the level of involvement of women in its production. The contextual approach argues that even the most objectifying pornography can be feminist if presented thoughtfully, highlighting women’s perspectives and acknowledging issues of inequality.

With conversational AI, these approaches could be mutually reinforcing. The content of responses to sexualised comments can be changed so that agents respond to gendered abuse in a way that recognises and minimises the underlying sexism and objectification. This approach would be strengthened by greater inclusion and diversity in the teams that design these technologies. Finally, the responses themselves can provide context to users by highlighting the harms such behaviour perpetuates and the importance of gender equality.

However, none of these requirements appears well suited to articulation as bright-line, enforceable rules. They would be better incorporated and emphasised within broader frameworks of ethical governance, as explained below.

Impact assessments, diversity and stakeholder engagement

As recommended by the European Commission’s High-Level Expert Group on AI, such ethics-based frameworks could include creating ethics committees, ensuring diversity in design teams, engaging stakeholders more broadly and conducting various impact assessments. For conversational AI, such assessments could specifically consider issues of gender parity, posing questions such as: ‘Will you assign a gender? Why? In what ways might this reinforce or challenge stereotypes? In what ways might this prompt users to behave unethically or in prejudiced ways?’

Along the lines of the General Data Protection Regulation and obligatory Data Protection Impact Assessments, such assessments could be made mandatory when the technology could have a significant impact on the rights of stakeholders and users.

Transparency

In addition, a regulatory solution could be for companies to be transparent about the responses they programme for gendered abuse. Transparency can allow for broader ethical review from stakeholders and society. This could also be the first step towards greater uniformity and standardisation, and possibly developing a code of conduct for how conversational agents should address gendered abuse. At first blush, there appears to be no compelling reason why different agents should handle abuse differently.

Conclusion

Ethical reflection on the design of conversational agents is particularly relevant given their proliferation and the arms race towards ever greater believability and sophistication. Technologists have articulated the goal of ambient and ubiquitous intelligence, much of which may be mediated through voice-based conversational AI.

‘Hard’ law measures appear ill-suited to addressing the harms presented by these agents. Instead, Ethical Impact Assessments (with a focus on gender impact), stakeholder engagement, diversity and transparency must become part of the development cycle for ethical conversational agents.


Aditya Singh

PhD Candidate at the University of Edinburgh — Data, Agriculture and Philosophy