Centering the Humanity in Technology
February 27, 2025
Civic Science Fellow Christine Custis Brings Accountability, Agency, and Inclusion to AI Policy and Governance
“For the longest time, we joked that our phones were listening to us,” says Christine Custis, Civic Science Fellow at the Institute for Advanced Study. “But they don’t have to listen. We give them everything: our location, our preferences, our choices. And all these systems are doing is guiding us toward the light, like an amphipod under the control of a parasite. But we have to ask ourselves: at what cost?”

Inspired by an image by former Civic Science Fellow Anand Varma (CSF 2020-21)
A computer scientist by training and an AI innovator with more than two decades of experience, Custis has seen firsthand how technology quietly commandeers human agency.
“We’re not the users anymore; we’re the product,” she says. “Our data is the product.”
Now, as a Civic Science Fellow in Alondra Nelson’s Science, Technology, and Social Values Lab, Custis is on a mission to push for something different: technology that respects autonomy, embraces inclusivity, and prioritizes humanity. At the intersection of ethics, policy, and society, she’s asking complex questions about how artificial intelligence impacts human lives: questions that can generate actionable answers.
The Path to Civic Science
Custis’s career began in technical spaces. She honed her expertise at IBM and The MITRE Corporation before becoming director of programs and research at the Partnership on AI. Over two decades, she built a reputation as a dynamic problem-solver, tackling issues including AI safety, labor policy, and transparency. Yet, despite her professional success, something felt incomplete.
“My assignments were always about impacting people, but they could sometimes be very vacant of those people’s insights and inputs,” she says. “I didn’t want to just deliver a technical asset. I wanted to see its effects on the people it was actually for.”
This desire to merge technical innovation with a focus on humanity led her to the Civic Science Fellowship program. Already familiar with Alondra Nelson’s work bridging science, technology, and society, Custis saw a post on LinkedIn announcing Nelson’s return to the Institute for Advanced Study from a stint in the White House Office of Science and Technology Policy, along with an opening for a fellowship in her lab. Intrigued, Custis applied and was quickly selected.
“It felt serendipitous,” Custis says of the transition. “This work lets me position myself as a scientist whose content and context better inform civil life, because it’s more about people.”
Interdisciplinary Work at the Lab
Custis plays a central role in the lab by supporting one of its flagship projects: the AI Policy and Governance Working Group. This initiative convenes leaders from industry, academia, civil society, and government to explore the ethical and societal implications of artificial intelligence.
“This group is a space to have deep thought about the ethical issues around AI: the responsible design, development, and use of it,” she says. While the 18-month-old group initially focused on responding to government proposals, such as a request for input on dual-use models for AI, Custis sees a shift ahead. “We’re looking at how to be more forward-thinking, how we can set strategy instead of just reacting.”
Each formal meeting of the Working Group combines private sessions with public-facing workshops, illustrating the group’s commitment to inclusion. Custis attended a session at a meeting in Hawaii in March that brought together educators, artists, policymakers, and members of the local community for discussions about the risks and threats posed by generative AI systems.
“You’d hear from educators sharing how they’re using AI or what they fear about it, as well as artists asking, ‘What about me? How does this impact my work?’” she says. “It became this open dialogue where people who aren’t necessarily technical experts had the chance to engage.”
Custis says public engagement like this shouldn’t be optional but standard practice. “These conversations should happen way more often. People who don’t use but are affected by these systems (what I think of as impacted non-users) deserve to be part of the dialogue.”
Even behind closed doors, Custis sees the diversity of perspectives within the working group as central to its success. By blending rigorous policy work with lived experiences, the group ensures that its strategies and recommendations reflect real-world concerns. Helping to plan these meetings and synthesize the results is one of Custis’s primary contributions.
“It’s one thing to look at a policy proposal,” she says. “It’s another to see how, say, an educator responds or how different communities interpret the impact.”
Beyond the working group, Nelson’s lab is what Custis calls a “thinker’s space,” filled with fellows and affiliates from a wide range of disciplines.
“I get to hear the research of folks writing books, creating documentaries, and exploring big questions,” she says. “And sometimes the most interesting discussions just happen when youâre walking around or having coffee together.”
Human Agency and Inclusion
Custis’s work is grounded in accountability and inclusion. At its heart is her concern for human agency: the ability of people to maintain autonomy in a world increasingly shaped by technology, including artificial intelligence. “The coercive architecture of so many applications strips us of our autonomy,” she says. “We’re handing over our data willingly. And then these systems guide us. Toward what? And who benefits?”
For Custis, preserving agency alone isn’t enough. She argues that scientists and technologists must prioritize bi-directional collaboration with the people their work affects.
“It’s not done until everyone participates,” she says. “The product, the policy, the science: it’s not done until the people impacted are part of the process and their lived experiences are part of the science. Including those voices isn’t just ethical; it makes the science better.”
Custis’s time at the Partnership on AI revealed how deeply entrenched resistance to inclusivity can be. During external discussions on AI in healthcare, she encountered extreme opposition when advocating for patient voices in the conversation.
“I remember getting so much pushback for wanting to include patient advocates in the breakout rooms, and it was just baffling to me. Why wouldn’t we invite these voices into the room?” she says. “It struck me how technologists or specialists can easily see the people that their product or service is for as being in the way.”
At the root of Custis’s work is a guiding principle: “What we build and how we build it says a lot about what we value. We need to design for humanity’s needs, not just for innovation’s sake.”
Looking Ahead: A Living Network
As her fellowship progresses, Custis is thinking deeply about the future, not only her own but also the legacy of the Civic Science Fellowship. “This fellowship isn’t just about the 18 months,” she says. “It’s about seeding the future. Maybe nothing comes of a connection now, or even in the next five years. But in the sixth year? You’ll remember who to call.”
Custis sees the fellowship model as critical for tackling the most complex societal challenges, particularly in fields like AI governance.
“We are all part of the answer,” she says. “Scientists, technologists, impacted non-users, advocates: everyone. And if we don’t talk and work together, we lose the nuanced truth.”
Beyond her work with AI governance, Custis is planning a collection of essays centered on the theme of disembodiment: How do we maintain our sense of self, our ability to think and choose for ourselves when, for example, we’re constantly entangled with AI systems that are designed to learn from us and influence us, often without our full understanding or control?
“It’s the way we sort of lose ourselves in artificial intelligence, or we actually allow it to commandeer our lives,” she says. Drawing on her own experiences as a product innovator and blending that with broader societal commentary, she hopes to explore both the practical and personal dimensions of ethical AI.
The writing project ties into the larger questions she explores: How do we build responsibly? How do we ensure marginalized voices aren’t left out? And how do we create systems that reflect the full complexity of human lives?
Custis knows these questions don’t have simple answers, but that’s precisely the point.
“This is a chance to slow down and ask the big questions,” she says, “before we lose sight of the humanity weâre trying to serve.”
Christine Custis’s Civic Science Fellowship at the Institute for Advanced Study is supported by the Rita Allen Foundation and The Kavli Foundation.