If artificial intelligence (AI) systems are shaping who gets hired, who is monitored, who receives care, and who is believed, then the question of who builds those systems becomes a feminist question in itself. We cannot meaningfully talk about feminist AI while women remain underrepresented in computer science and marginalised within technical careers.
My stance here is that AI isn’t neutral; it reflects the values and hierarchies of the societies that create it. That reflection begins long before a model is trained. It starts in classrooms, universities, and early-career pipelines, where decisions are made about who belongs in computer science and who doesn’t. When women are missing at these stages, their absence doesn’t stay local; it scales.
Encouraging more women to study computer science matters because technical skills are no longer neutral tools. They are a form of social power. If women are excluded from the design of AI systems, they are more likely to be treated as data points rather than decision-makers, as edge cases rather than defaults. Feminist critiques of AI often focus on outcomes such as biased algorithms, unfair surveillance, and harmful automation, but these outcomes are inseparable from participation.
Feminism teaches us that representation without structural change is fragile. Inviting women into computer science while leaving its cultures, incentives, and hierarchies intact risks reproducing the same exclusions under the banner of diversity.
If AI is a feminist issue, then the problem is not that women lack interest or aptitude for computer science. The problem is that the field has historically been organised around norms that privilege certain identities and working styles, while treating others as deviations. This shows up in who gets credit, whose technical judgement is trusted, and who is expected to adapt.
From a feminist perspective, encouraging women into computer science means rethinking how the discipline is framed. Computer science is often taught as abstract, competitive, and detached from social consequence. Yet AI’s impact on healthcare, welfare systems, policing, and labour markets shows that computing is profoundly political. Making those connections explicit doesn’t dilute the discipline; it clarifies why it matters, and why diverse participation is essential rather than optional.
Encouraging more women into computer science, then, is not about fixing women. It is about aligning technical education and careers with feminist values: fairness, accountability, care, and shared power. That means changing teaching practices, promotion criteria, leadership norms, and whose expertise is recognised. It means treating inclusion as core infrastructure, not a side initiative.
Retention is where this connection becomes most urgent. Many women leave technical careers not because they can’t do the work, but because they are worn down by subtle exclusion, constant self-justification, and a lack of pathways to influence. Power is being rewritten in code, and if women are not writing that code, they are being written out of the future.
Without women building these systems, feminist critique remains reactive, pointing out harm after it has already been scaled. With women inside the machinery, shaping it, breaking it, and rebuilding it, critique becomes transformation.
AI will redistribute power. The only question is whether that redistribution entrenches old hierarchies or dismantles them.
Dr Nicky Danino is Head of School of Computing and Creative Industries at Leeds Trinity University.