Lujain Ibrahim
I’m a PhD candidate in social data science at the University of Oxford. My research focuses on sociotechnical approaches to evaluating and governing AI systems, with particular attention to their societal impacts and relational harms. I am also building a design studio focused on public AI literacy.
My background is in computer engineering and international relations. Previously, I was a student researcher at Google DeepMind working on sociotechnical model evaluations, a Schwarzman Scholar, and a fellow at the Centre for the Governance of AI, the Digital Asia Hub, and the Montreal AI Ethics Institute.
I’m currently thinking about and working on the topics below (please reach out if you are too and want to chat!):
research
My research approach is grounded in information theory and human-centered computing, and it also draws on cognitive science and design studies. I work directly with both models and people to evaluate and improve their interactions, developing and testing frameworks and conducting experiments and user studies. My publications can be found here.
news
* [Feb 2025] New preprint on evaluating anthropomorphic language in LLMs
* [Feb 2025] Received a grant from the Responsible Technology Youth Power Fund for research on young people & AI
* [Jan 2025] Started a visiting academic position at the NYU Center for Data Science
* [Jan 2025] New policy report on promising topics for dialogue between the US and China on AI ethics, safety, and governance
* [Dec 2024] Short paper on human-LLM interaction modes accepted for oral presentation at the Evaluating Evaluations (EvalEval) workshop at NeurIPS 2024
* [Sep 2024] Presented at Imperial College London's symposium on Human and Artificial Intelligence in Organizations
* [Jul 2024] New preprint on open problems in technical AI governance
* [Jun 2024] Presented "The Algorithm" at Sheffield DocFest
* [May 2024] Started an internship with Google DeepMind's Ethics Research Team, working on model safety evaluation
* [May 2024] New preprint on human interaction evaluations for LLM safety, risks, and harms
* [Apr 2024] New preprint on harms from AI user interface designs, presented at the CHI 2024 Workshop on Human-centered Evaluation and Auditing of Language Models
* [Feb 2024] Launched "The Algorithm", a web-based game that explains recommendation systems
* [Jan 2024] Joined the Centre for the Governance of AI as a winter research fellow to work on contextually relevant model safety evaluations
* [Dec 2023] New preprint on the persuasiveness of role-playing large language models
* [Nov 2023] Awarded Dieter Schwarz Foundation-OII grant for research on AI, Government and Policy
* [Nov 2023] Presented a work-in-progress game at the International Documentary Film Festival Amsterdam (IDFA) DocLab
See more
contact