Our Approach to AI
AI has changed the landscape for mission-driven organizations. How people find information, evaluate credibility, and decide where to give their time and money is shifting in real time. Many of the organizations we work with are feeling it, and many have serious, legitimate concerns about what AI means for their work, their communities, and the world.
We share those concerns. We also believe that understanding AI, whether or not you choose to adopt it, is essential for organizations whose missions depend on reaching people online. Here's how we're approaching it.
AI is powerful and imperfect. Our team is accountable.
Capellic uses AI in our work, including research, drafting, code review, and data analysis, because it helps us work more effectively and deliver more value to our clients. AI can be remarkably capable and confidently wrong in the same session. Every deliverable we produce is shaped and reviewed by experienced humans on our team.
We treat AI output as a starting point, never a finished product.
Our clients serve people navigating legal crises, accessing healthcare, pursuing education, and supporting causes they believe in. When the stakes are that high, our team is accountable for the quality, accuracy, and integrity of what we deliver, regardless of which tools helped get us there.
- We never input sensitive or non-public client information into AI prompts. We also carefully evaluate the privacy and data practices of the business tools we use, many of which now include AI features.
- Every member of our team has dedicated, non-billable time to stay current on AI developments and experiment with new approaches.
- When clients ask how AI was used on their project, we have a clear answer.
- At the start of every engagement, we ask whether our clients have policies or preferences about AI use by their partners.
We meet organizations where they are
Every organization's relationship with AI is different. Some are enthusiastic, others are deeply skeptical, and many are simply indifferent. Almost none are fully aligned internally on when and how to use it. Even within a single organization, you'll find people eager to explore what AI can do alongside others who have serious reservations about the environmental costs, the labor practices of major AI companies, or the ways AI might undermine the human work their teams do every day.
We don't push organizations toward AI adoption. We help them understand how AI is already affecting their digital presence and their audiences, and we work with them to respond in ways that align with their values.
When an AI-related initiative does make sense, we develop clear approach summaries and evaluation rubrics before any work begins, so the goals, boundaries, and measures of success are shared from the start. And when the right opportunity is clear, we move quickly and build carefully.
We help organizations lead, not just react
Through our Not a Bot AI Leadership Summit, we give nonprofit leaders the space, time, and shared expertise to move from uncertainty to action. We designed Not a Bot because we wanted to counter the narrative of AI as the ideal "worker" and instead celebrate the creativity, experience, and expertise of the humans working in the nonprofit space.
The insights from our summits confirmed what we suspected: many organizations don't need to rush into adoption. They need internal alignment, a realistic view of the landscape, and the confidence that comes from working through hard questions with peers facing similar challenges.
That philosophy shapes our everyday client work, too:
- Across our client portfolio, we're tracking generative engine optimization (GEO) KPIs so that when we implement strategies like schema markup and llms.txt files, we can measure whether they're making a difference.
- When AI crawlers started degrading site performance for a client, we diagnosed the problem and developed mitigation strategies that other teams across the Drupal community have since adopted.
- We're helping clients understand how AI is reshaping project management so they can plan ahead rather than scramble.
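To make the crawler-mitigation point above concrete: one common first line of defense is a robots.txt file that asks AI crawlers to stay away or slow down. The user agents below are real, published crawler names, but this is a generic sketch of the technique, not the specific mitigation we deployed, and well-behaved bots honor these directives voluntarily; aggressive ones may not.

```text
# Ask AI training crawlers not to crawl the site at all
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Crawl-delay is nonstandard and support varies by bot,
# but some crawlers respect it as a rate limit (in seconds)
User-agent: *
Crawl-delay: 10
```

For crawlers that ignore robots.txt, mitigation typically moves to the server or CDN layer (rate limiting, user-agent filtering, caching), which is where performance problems like the one described above usually get resolved.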
What we believe
User behavior is changing faster than most organizations realize. People increasingly start with AI for answers and turn to your website, if they visit at all, to verify, act, or connect. That means your content strategy, site architecture, and how you measure success all need to evolve. We're helping clients implement structured data and schema markup so AI tools can accurately interpret and cite their content, and we're tracking new indicators like AI citation frequency and brand accuracy alongside traditional analytics.
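Structured data work like this usually takes the form of JSON-LD embedded in a page. The snippet below is a minimal sketch using schema.org's NGO type; the organization name and URLs are hypothetical placeholders, not markup from an actual client site.

```html
<!-- Minimal JSON-LD example. "Example Justice Center" and its
     URLs are invented placeholders for illustration only. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NGO",
  "name": "Example Justice Center",
  "url": "https://www.example.org",
  "description": "A nonprofit providing free legal aid to people in crisis.",
  "sameAs": [
    "https://www.linkedin.com/company/example-justice-center"
  ]
}
</script>
```

Markup like this gives AI tools and search engines an unambiguous, machine-readable statement of who an organization is and what it does, which is what makes accurate interpretation and citation possible.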
Start small, measure what matters, and stay focused on real problems. AI excels when the scope is narrow, the context is detailed, and the outcomes are measurable. We encourage our clients and ourselves to pilot before scaling and to resist the pull of shiny objects that don't solve an actual problem.
The ethical concerns are real, and we take them seriously. The AI industry carries significant ethical, ecological, legal, and philosophical baggage. We don't shy away from these conversations, and we're always evolving our approach in response to new research and current events. Organizations like the NAACP are doing critical work holding AI companies accountable for the impact of large-scale data centers on frontline communities, work we believe the nonprofit sector should support. These aren't abstract issues to us, and they shouldn't be for anyone building technology for social good.
Where is this heading?
No one knows exactly what AI will look like in two years. But we believe that websites will remain the essential home for authentic storytelling, brand identity, structured content, and verified data: the source of truth that AI tools ultimately pull from. The organizations that invest now in making their content clear, well-structured, and mission-aligned will be the ones AI platforms represent accurately.
Our job is to help our clients build that foundation, so that, however the technology evolves, their mission comes through.
If your organization is working through its own AI questions, we'd welcome the conversation.
This page was drafted collaboratively by our team and AI, then reviewed and edited by humans at Capellic. Sources included our internal AI acceptable use policy, insights from our Not a Bot AI Leadership Summit, and a review of public AI statements from peer agencies and nonprofit sector resources.