
ISGMH Trainee Awarded Ethics and Artificial Intelligence Research Grant

The Center for Bioethics and Medical Humanities at Northwestern University recently awarded the Ethics and Artificial Intelligence Research Grant to Will Liem, a PhD student in social sciences and health. With the support of this grant, Liem will research the development of generative AI tools for sex education geared towards LGBTQ+ teens. A current trainee with ISGMH Professor Kathryn Macapagal, PhD, Liem will receive project mentorship from Macapagal and Andrew Berry, PhD.

ISGMH spoke with Liem about his project, Co-designing Equitable Generative AI Tools for Inclusive Sexual Health Education with Queer Teens, in the following Q&A.

What does an equitable AI tool look like? Are there certain issues with existing generative AI that you will be working to counter so that these tools are equitable?

The answer to this question can vary depending on who you ask. For me, creating an equitable AI tool is a dynamic and ongoing process, one that genuinely reflects the needs and perspectives of the users it serves. Equity isn’t just an end goal; it’s a continuous process that requires us to acknowledge our own privileges and actively work to redistribute power in ways that foster systemic change. An equitable AI tool must be designed to be fair, inclusive, and accessible to all users, regardless of their background or identity. In the context of our work with LGBTQ+ teens, it’s crucial that an equitable AI tool recognizes and addresses the unique challenges and disparities faced by queer youth.

To achieve this, several elements are essential. First, the creators of AI tools must be transparent about how these tools were made, including the datasets used, the assumptions underlying their creation, and the intended users. Transparency is key to building trust and ensuring accountability. Second, AI tools should be designed in close collaboration with the end-users, incorporating features that address their concerns, support their aspirations, and align with their goals. Finally, these tools should be adaptable, allowing for ongoing monitoring and feedback from stakeholders to ensure they continue to reflect the evolving needs and values of the users.
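The transparency element Liem describes is often operationalized in the field as a “model card” or datasheet published alongside a tool. As a rough, hypothetical sketch only, such a record might capture the items he lists: datasets used, underlying assumptions, intended users, and a channel for ongoing feedback. The field names and example values below are illustrative assumptions, not details of this project.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Plain-language documentation published alongside an AI tool."""
    tool_name: str
    training_datasets: list[str]   # datasets the model was trained or tuned on
    known_assumptions: list[str]   # assumptions baked into the tool's design
    intended_users: list[str]      # who the tool is (and isn't) built for
    feedback_contact: str          # channel for ongoing stakeholder feedback

# Hypothetical example values, for illustration only.
card = ModelCard(
    tool_name="Inclusive sex-ed chatbot (example)",
    training_datasets=["vendor-disclosed base corpus", "co-designed Q&A set"],
    known_assumptions=["English-language users", "teen audience"],
    intended_users=["LGBTQ+ teens seeking sexual health information"],
    feedback_contact="feedback@example.org",
)
print(card)
```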

Many existing AI models are trained on datasets that predominantly reflect cisnormative and heteronormative perspectives, leading to outputs that can perpetuate biases and reinforce harmful narratives. For instance, datasets used in chatbots and other AI-driven tools often contain gender biases, such as the conflation of sex and gender, toxic masculinity, and the erasure of non-binary identities. These biases can manifest in harmful ways, such as misgendering, which is particularly distressing for individuals experiencing gender dysphoria.

One of my goals in engaging queer teens in this project is to critically examine these issues and explore ways to mitigate biased outputs through participatory design. I firmly believe that when AI technologies are created thoughtfully and equitably, they can become a powerful force for societal good.

What kinds of tools do you plan to design?

Throughout this process, my goal is to develop a suite of tools that could include an application built on a custom large language model, such as a chatbot or a custom GPT, alongside a set of AI literacy guidelines to help users navigate these tools effectively. For this project, I’ll be collaborating closely with a group of queer teens known as the Youth Advisory Council (YAC) from ISGMH’s Teen Health Lab—an inspiring group of LGBTQ+ youth who provide invaluable advice on digital sexual health interventions and campaigns. As part of this participatory design project, I’m eager to incorporate any additional ideas the YAC might bring to the table.

My approach to this work is grounded in what’s known as “constitutional AI,” a method for building AI systems so that their behavior aligns with specific ethical principles, values, and guidelines. Together with the YAC, we aim to create a moral framework, or “constitution,” that outlines how the teens want AI systems to interact with and support them. Once we establish these ethical standards, we’ll train an AI tool to adhere to them and involve the YAC in evaluating its outputs. Through ongoing iteration and refinement, I hope we can co-design a tool that not only meets our moral framework but also empowers the YAC to develop AI literacy guidelines based on their hands-on experience.
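Constitutional AI is commonly implemented as a critique-and-revise loop: the model drafts a reply, critiques the draft against each principle in the constitution, and rewrites it accordingly. The sketch below is a minimal illustration under that assumption, not the project’s actual implementation; `call_llm` is a hypothetical stand-in for any chat-model API, and the sample principles are placeholders rather than the YAC’s constitution.

```python
# Illustrative, placeholder principles (not the YAC's actual constitution).
CONSTITUTION = [
    "Use the name and pronouns a user shares; never misgender.",
    "Give medically accurate, age-appropriate sexual health information.",
    "Avoid cisnormative or heteronormative assumptions about the user.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model API call."""
    raise NotImplementedError("wire up a model provider here")

def constitutional_reply(question: str) -> str:
    draft = call_llm(f"Answer this teen's question:\n{question}")
    # Critique and revise the draft against each principle in turn.
    for principle in CONSTITUTION:
        critique = call_llm(
            f"Principle: {principle}\nDraft: {draft}\n"
            "Does the draft violate the principle? If so, explain how."
        )
        draft = call_llm(
            f"Revise the draft to honor the principle.\n"
            f"Principle: {principle}\nCritique: {critique}\nDraft: {draft}"
        )
    return draft  # final answer, aligned with the constitution
```

In the process Liem describes, the teens would supply and refine the principles themselves and then judge the outputs of a loop like this across successive iterations.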

Are you looking forward to working with Northwestern faculty members Andrew Berry and Kathryn Macapagal as mentors on this project?

Yes, I’m thrilled to be combining the expertise of both Andrew and Kathryn for my project!

Andrew is my PhD advisor and has been instrumental in my development as a human-centered design researcher. He truly embodies what it means to be human-centric, both as a mentor and a researcher, consistently demonstrating critical and thoughtful approaches. In the context of my project, Andrew’s expertise in helping people articulate and communicate what matters most to their health will be invaluable in designing an AI tool that effectively meets the needs of queer teens.

Kathryn is someone I deeply admire for her holistic approach to research. She’s action-oriented, community-centered, and always focused on making a tangible impact. Kathryn has an impressive history of working with LGBTQ+ teens through the Teen Health Lab, where she has created infrastructure that allows queer youth to actively participate in the design of sexual health education initiatives. Her work has spurred youth-led campaigns, community murals, and much more. Kathryn exemplifies the importance of bridging the gap between academia and real-world impact by giving queer teens a meaningful voice in research.

What excites me most about this project is the opportunity to learn and grow under the mentorship of these two visionaries. Andrew and Kathryn both have an extraordinary ability to think expansively about their research, not only in terms of generating new areas to explore, but also in finding practical ways to translate their work into societal benefits. I’m eager to nurture my research passions under their guidance and forge my own path in creating a queer-centric community initiative centered around equitable AI.

How does your social work background and prior experience consulting with tech companies inform your research?

It’s surreal to launch a passion project that I’ve been incubating for some time now! Before my PhD program, I worked as a consultant for big tech companies, exploring participatory methods to address challenges related to bias and unfairness in algorithms. I’ve always wanted to explore this space with the queer community, especially since there has been growing attention to how biased AI tools can exploit or harm queer individuals. This project allows me to build on those experiences and dive deeper into a problem space that aligns closely with both my professional and personal values.

People often assume that working in AI means unequivocally supporting all matters related to AI, sometimes overlooking the potential consequences of deploying these powerful tools. However, many of us in this field share significant concerns about the broader implications of AI technology. My background in social work, though unconventional in the tech world, brings practical skills that are vital to tech equity. This experience has uniquely shaped my approach to AI, as I strive to create harmony between technology and society, ensuring that people from all walks of life can equitably access the benefits AI offers.

As a human-centered AI researcher, I focus on the sociotechnical aspects of creating equitable AI tools. This means deeply considering how human values, needs, identities, and cultures are woven into the development of AI technologies. My social work background is particularly invaluable here. It equips me with the skills to understand the sociocultural implications of AI and to engage marginalized communities in shaping tools that truly reflect their needs and experiences. Social work has taught me the importance of empathy, advocacy, and the power of giving communities a voice—principles that are essential for ensuring AI serves everyone fairly. Reflecting on this journey, I’m excited about the path ahead, where technology genuinely serves humanity, embracing diversity and promoting equity.