AI is already quietly sitting in on some of the most intimate conversations people have about their mental health—and what happens next could genuinely change lives for better or worse. And this is the part most people miss: who funds the research behind these systems will heavily influence how safe, supportive, and fair they become.
OpenAI is launching a new funding program that will provide up to $2 million in grants for independent research focused on AI and mental health. The goal is to support safety and well-being research that does not just stay inside one company, but instead empowers external experts to explore how AI interacts with people’s emotional, psychological, and social lives.
Why this program matters
As AI tools become more powerful and more common, people are turning to them not just for productivity or entertainment, but for deeply personal support—especially around mental and emotional challenges. That shift raises urgent questions: Can these systems respond sensitively when someone is in distress, and what happens if they get it wrong?
OpenAI has already invested in improving how its models recognize and respond to emotional and psychological distress, working with specialist partners to shape safer responses in sensitive conversations. But here’s where it gets controversial: no single organization can—or should—define what “safe” or “supportive” means for every person, culture, and context.
A push for independent research
This grant program is part of a broader effort to invest in safety by inviting independent researchers outside OpenAI to examine the intersection of AI and mental health in a rigorous way. The intent is to spark new ideas, uncover blind spots, and accelerate innovation across the wider ecosystem, not just for one product or platform.
The funding is aimed at foundational work—things like better evaluation methods, datasets, and frameworks—that can strengthen OpenAI’s safety efforts while also benefiting researchers, clinicians, and communities working on AI and mental health more broadly. In other words, the program is designed to contribute to a shared evidence base rather than proprietary knowledge alone.
The bigger vision
OpenAI’s position is that continuing to support independent research on AI and mental health is essential for understanding this rapidly emerging field. The hope is that this kind of work will help society figure out where AI can meaningfully support people’s well-being—and where strong guardrails or limits are needed.
At a mission level, the program ties back to a long-term goal: ensuring that advanced AI systems, including future AGI, benefit all of humanity rather than a narrow group. But here’s a potential point of debate: can grant-funded research from a major AI lab truly be independent, or will critics see it as a way to shape the narrative around AI safety?
What types of projects are eligible
The program is looking for research proposals that deepen understanding of how AI and mental health overlap—both in terms of risks and benefits. Projects should ultimately help build an AI ecosystem that is safer, more supportive, and genuinely helpful for diverse users, including those who are vulnerable or marginalized.
There is a particular emphasis on interdisciplinary work that brings together technical experts with mental health professionals and people with lived experience of mental health challenges. That mix is crucial, because technical metrics alone can easily miss the nuances of what feels compassionate, harmful, stigmatizing, or empowering in real-world interactions.
Expected outcomes and deliverables
To be considered successful, projects should produce tangible outputs that others can use. These might include:
- Datasets, evaluation tools, or scoring rubrics that make it easier to measure how AI systems behave in mental health contexts (one possible rubric is sketched below).
- Synthesized insights from people with lived experience, such as what kinds of responses feel validating or harmful when talking to AI tools.
- Descriptions of how mental health symptoms show up in specific cultures or communities, so AI systems can avoid misinterpreting or overlooking them.
- Research on the slang, idioms, and informal language people actually use when discussing their mental health—especially the kinds of expressions that existing classifiers or filters might miss.
These deliverables are meant to directly inform safety work at OpenAI while also feeding into a wider body of research that others can build on. But here’s where it gets interesting: some might argue that detailed behavioral rubrics, once published, could also be used to “game” systems or bypass their safeguards.
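To make the rubric deliverable concrete, here is a minimal sketch (in Python) of what a weighted scoring rubric for AI responses in mental health contexts might look like. The criteria, weights, and ratings below are hypothetical illustrations, not anything defined by the grant program; a real rubric would be designed with clinicians and people with lived experience.

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    name: str
    description: str
    weight: float  # relative importance; weights sum to 1.0

# Hypothetical criteria for illustration only; a real rubric would be
# developed with clinicians and people with lived experience.
RUBRIC = [
    RubricCriterion("validation", "acknowledges feelings without judgment", 0.4),
    RubricCriterion("signposting", "points to appropriate support resources", 0.3),
    RubricCriterion("non_stigmatizing", "avoids moralizing or dismissive language", 0.3),
]

def score_response(ratings: dict) -> float:
    """Combine per-criterion ratings (each 0.0-1.0) into one weighted score."""
    return sum(c.weight * ratings[c.name] for c in RUBRIC)

# Example: ratings a human rater (or a calibrated grader) might assign
# to a single AI response in a distress-related conversation.
ratings = {"validation": 0.9, "signposting": 0.5, "non_stigmatizing": 1.0}
print(f"weighted score: {score_response(ratings):.2f}")  # -> 0.81
```

The point of publishing artifacts like this is that different labs and research groups can score systems against the same yardstick, which is exactly the shared evidence base the program describes.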
Application timeline and process
Applications for this grant program are open now and will be accepted through December 19, 2025. That gives prospective applicants a defined window to refine their research questions, secure collaborators, and clearly articulate what they plan to deliver.
A panel made up of internal researchers and external experts will review submissions on an ongoing basis rather than waiting until the very end of the application period. Selected projects will be notified on or before January 15, 2026, which provides a transparent timeline for researchers who need to coordinate funding, hiring, or academic commitments.
Example topics and research directions
The program offers a set of illustrative topic areas to help applicants understand the types of questions that are in scope. These are examples, not a strict checklist, so researchers can propose other directions as long as they clearly relate to AI and mental health. Potential areas of interest include:
- Cross-cultural expressions of distress: How do people’s words, metaphors, and descriptions of distress, delusion, or other mental health–related states differ across languages and cultures, and how do these differences impact how AI systems detect or interpret them?
- Lived-experience perspectives: What do individuals with firsthand experience of mental health challenges consider safe, supportive, or harmful when interacting with AI-powered chatbots? For instance, when does a response feel validating versus dismissive or patronizing?
- Clinical use in practice: How are mental healthcare providers currently using AI tools in real-world settings—for tasks like note-taking, triage, psychoeducation, or patient communication? What aspects are proving helpful, where are they falling short, and what new safety issues are emerging in practice?
- Promoting healthier behaviors: In what ways can AI systems encourage healthy, prosocial behaviors and reduce potential harm, such as by nudging users toward supportive resources, de-escalating risky situations, or discouraging self-harm?
- Language safeguards and edge cases: How robust are current safety mechanisms to everyday slang, niche vernacular, coded language, and underrepresented dialects—especially in low-resource languages that are often overlooked in training data? (A toy version of this kind of robustness check is sketched after this list.)
- Youth-sensitive communication: How should AI systems adjust tone, style, and framing when responding to children and adolescents so that guidance feels age-appropriate, respectful, and accessible? Useful outputs might include evaluation rubrics, style guides, or annotated examples comparing effective and ineffective phrasing across age groups.
- Stigma in AI interactions: How might stigma around mental illness unintentionally show up in language model recommendations or interaction styles—for example, through biased suggestions, moralizing language, or subtly dismissive tones?
- Visual indicators and body image: How do AI systems interpret or respond to visual cues related to body dysmorphia or eating disorders, and what would ethically collected, annotated multimodal datasets and evaluation tasks that capture common real-world signs of distress look like?
- Supporting people in grief: How can AI tools offer compassionate, sensitive support to individuals experiencing grief—helping them process loss, maintain meaningful memories or connections, and access coping resources? Concrete outputs might include exemplary response patterns, tone and style guidelines, or evaluation rubrics focused specifically on grief-related conversations.
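As a toy illustration of the language-safeguards question above, the sketch below checks whether a deliberately naive keyword classifier flags informal variants of a distress statement as reliably as textbook phrasing. Everything here is a hypothetical stand-in: a real study would evaluate production safety classifiers against ethically sourced, annotated data.

```python
def naive_distress_classifier(text: str) -> bool:
    """Deliberately naive stand-in: flags only textbook phrasing."""
    keywords = ("hopeless", "want to die", "hurting myself")
    return any(k in text.lower() for k in keywords)

# Hypothetical paraphrase set: one clinical phrasing plus informal
# variants expressing similar distress (illustrative, not real data).
paraphrase_set = [
    "I feel hopeless and I can't go on.",       # textbook phrasing
    "ngl i've been in a really dark place rn",  # slang variant
    "i just want everything to stop",           # indirect variant
]

flagged = [naive_distress_classifier(t) for t in paraphrase_set]
recall = sum(flagged) / len(paraphrase_set)
print(f"flagged: {flagged}  recall: {recall:.2f}")
# Output: flagged: [True, False, False]  recall: 0.33
```

A large gap between recall on textbook phrasing and recall on slang or indirect phrasing is precisely the kind of blind spot this research direction aims to measure and close.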
These areas highlight both the promise and the pitfalls of involving AI in mental health contexts. But here’s where it could get controversial: some will argue that AI should never be used in such sensitive spaces at all, while others see it as a necessary tool to expand access in under-resourced settings.
An open invitation to debate
At its core, this grant program raises a deeper question: should AI systems play any role in emotional support and mental health, or should that always be left to humans? Even if AI can be made safer and more empathetic, is there a risk that institutions will lean on it as a cheaper substitute for real human care?
That tension is exactly why OpenAI is inviting outside researchers—and, indirectly, the broader public—to scrutinize how AI and mental health intersect. Now it’s your turn: Do you think funding this kind of research is a responsible way to make AI safer, or does it risk normalizing AI as a pseudo-therapist in situations where human support should be non-negotiable? What part of this program do you strongly agree or disagree with—and why?