Hear nonprofit leaders share how AI and skilled volunteering reduce admin work, build confidence, and support ethical, human-centered impact at scale.
Artificial intelligence is rapidly reshaping how nonprofits operate, engage communities, and scale impact. In this panel discussion, nonprofit leaders from youth development and education share how they moved from curiosity and hesitation to hands-on adoption of AI. Through real-world examples from Goodera’s AI Lab and AI for All initiative, the session explores how AI can streamline operations, support students, and empower nonprofit teams when paired with skilled corporate volunteers. The conversation highlights practical use cases, lessons learned, and the importance of responsible AI adoption rooted in data privacy and ethics.
Q: Michael, how was your organization thinking about AI before joining the AI Lab?
Michael Cheever:
We were intrigued but hesitant. As nonprofits, we’re always capacity-constrained, so AI’s promise of efficiency was appealing. At the same time, we work with students, which means FERPA compliance and data privacy are critical. We knew AI was coming fast, especially as students began using it, but we needed to understand how to embrace it responsibly.
Q: Rana, what was your perspective on AI before this initiative?
Rana Dawood:
I had some hesitation, especially around authenticity and voice. My mom was an English teacher, so plagiarism and over-reliance were real concerns for me. But I quickly saw AI as a thought partner and brainstorming tool. What we’re learning now is how powerful it can be for task automation and workflow optimization, as long as we use it responsibly.
Q: Was there a tipping point that made you decide to actively explore AI?
Rana Dawood:
AI is already everywhere in our lives, from streaming services to our phones. My tipping point was realizing how much time I spent brainstorming when AI could help me get started faster. It doesn’t replace judgment, but it gives you something to react to and refine.
Michael Cheever:
For us, it was watching students adopt AI. We’re a relational organization, and if students are using these tools, we need to meet them where they are. This felt like an opportunity to modernize how we support students.
Q: Rana, can you describe your experience in the AI Lab and AI Jam?
Rana Dawood:
We took 30 students to Amazon for an AI Jam. They worked in small teams using AI tools to solve age-appropriate challenges and even build games. Watching how quickly they learned and confidently presented their work was incredible. It became clear that our students are leading the way, and we had to follow.
Q: Michael, what was the AI Lab experience like for your team?
Michael Cheever:
We focused on prompt engineering and workflow automation. One group created sample student data using AI, and another built workflows that analyzed that data and produced advisor-ready reports. What surprised us was how accessible it felt. We didn’t need deep technical expertise, just curiosity and guidance.
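The workflow Michael describes, generating sample student data and then summarizing it into advisor-ready reports, can be sketched in a few lines of Python. This is an illustrative sketch only: the field names, thresholds, and report format are hypothetical, not taken from the AI Lab, and using synthetic records like this is one privacy-safe way to prototype without touching FERPA-covered data.

```python
import random
import statistics

random.seed(7)  # reproducible synthetic data

# Generate synthetic (non-real) student records -- hypothetical fields,
# safe for prototyping a reporting workflow.
students = [
    {
        "id": f"S{i:03d}",
        "attendance_pct": round(random.uniform(60, 100), 1),
        "gpa": round(random.uniform(1.5, 4.0), 2),
    }
    for i in range(1, 21)
]

def advisor_report(records, gpa_alert=2.0, attendance_alert=75.0):
    """Summarize records into a short advisor-ready text report.

    Alert thresholds are illustrative defaults, not Lab-specified values.
    """
    flagged = [
        r for r in records
        if r["gpa"] < gpa_alert or r["attendance_pct"] < attendance_alert
    ]
    lines = [
        f"Students: {len(records)}",
        f"Mean GPA: {statistics.mean(r['gpa'] for r in records):.2f}",
        f"Flagged for follow-up: {len(flagged)}",
    ]
    lines += [
        f"  - {r['id']} (GPA {r['gpa']}, attendance {r['attendance_pct']}%)"
        for r in flagged
    ]
    return "\n".join(lines)

print(advisor_report(students))
```

In practice, a team would swap the synthetic generator for real exports only after data-sharing and compliance review, which is the governance point Michael raises later in the conversation.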
Q: How did skilled corporate volunteers influence the experience?
Rana Dawood:
They made it a safe space to ask questions and experiment. Their knowledge made AI feel tangible and achievable. They walked us through everything step by step, which really boosted our confidence.
Michael Cheever:
They acted like mentors. Instead of prescribing solutions, they helped us clarify the problem, test prompts, refine outputs, and try again. That guidance was invaluable.
Q: Did you have an “aha” moment during the workshop?
Michael Cheever:
Yes, realizing how much detail and specificity matter in prompts. My initial prompts were too short, but once I got more thorough, the results improved dramatically.
Rana Dawood:
For us, it was seeing how much AI could actually automate. My team left excited about how many parts of their workday could be streamlined.
Q: What impact have you seen since the session?
Michael Cheever:
We formed an internal AI working group with staff, board members, and external experts. We’re piloting tools across regions and being very intentional about governance and compliance.
Rana Dawood:
We’re using AI as a collaborator. From event planning timelines to speech feedback to hiring workflows, we’re already applying what we learned in practical ways.
Q: How are you thinking about data privacy and security?
Michael Cheever:
It’s our top concern. We’re reviewing data-sharing agreements and prioritizing security in our AI working group. Responsible adoption is non-negotiable.
Q: What advice would you give nonprofits starting their AI journey?
Michael Cheever:
Be strategic. Focus on a few high-impact workflows instead of chasing every new tool.
Rana Dawood:
Start small and practice. Use AI in everyday tasks first, then bring it into your work. Treat it as a partner, not a replacement, and stay mindful of privacy.
Q: Any final takeaways?
Vipul Agarwal:
Start with curiosity, not complexity. You don’t need to be an expert to begin. Skilled volunteers are catalysts, and responsible AI adoption must be built on strong data ethics. The impact can be immediate when you choose the right projects.