In 2025, AI teammates proved they work. Custom GPT usage increased 19x. Moderna went from 750 to over 3,000 GPTs. BCG has built 36,000, calling itself the top creator of custom GPTs globally.
Companies everywhere started building. But building is the easy part. The harder question is why some AI teammates get used every day while others get forgotten within a week.
Quick Take
Why do some AI teammates become part of how teams actually work while others get forgotten in a week? It comes down to five design decisions made before anyone starts building.
This is Part 1 of a two-part series. Part 1 covers the design decisions that determine whether your AI teammate thrives or dies. Part 2 covers how to connect teammates into workflows. These principles apply whether you’re building custom GPTs, Gemini Gems, Copilot Agents, Claude Projects, or Glean Apps.
One team I worked with created 211 AI teammates during their experimentation phase. They kept 57 in active workflows. The rest weren’t all failures: some were duplicates, some too narrow for broad use, some served one person’s productivity needs just fine. But the 57 that became part of how the team actually works had something in common. Their builders made five design decisions intentionally before they started. (For a deeper dive, check out this leading cybersecurity company’s case study.)
Whether you’re building your first AI teammate or auditing your fiftieth, these decisions separate AI teammates that stick from ones that get forgotten.
Key Takeaways:
- The name you choose signals the relationship you’re creating. Mismatched expectations kill adoption.
- You don’t need perfect internal documentation to start, but you do need to be intentional about what your AI teammate knows.
- Instructions determine whether your AI teammate thinks with you or for you, and that difference compounds over time.
- The best AI teammates are designed for the least expert user, not the person who built them.
- For AI teammates that inform strategy or decisions, design them to enhance human judgment rather than bypass it.
AI Video Explainer and AI Podcast Versions of This Newsletter
To support different learning styles, this newsletter is available as a 7-min AI video explainer (see below) and a 12-min AI podcast with two AI hosts. If you haven’t seen these AIs in action, they’re worth a view. The tech is advancing in amazing ways. I used Google’s NotebookLM to create these and personally reviewed them for accuracy and responsible AI use.
AI Video Explainer Version of This Newsletter (captions are auto-generated)
The Five Design Decisions
Based on my work with GTM teams, 80-85% of AI use focuses on speed (do this task faster), 10-15% focuses on quality (do this task better), and only 3-5% focuses on innovation (do it differently). The risk is stopping at speed.
One team that pushed through all three saw 50-75% faster content creation, 98% lead qualification accuracy, and 35% improved campaign performance. (See my previous newsletter on Human + AI Org Transformation Case Study)
The teams that get those results design AI teammates with intention. Here are the five key decision points.
Decision 1: What relationship are you creating?
Too many AI teammates get built and barely used. Often the problem is the name, not the tech. A Sales Assistant that’s really a lead scorer confuses people. A Helper with no clear purpose gets ignored. An Assistant that tries too hard to sound human feels awkward.
Every AI teammate falls somewhere on a spectrum.
- Tools are pure function with no personality, like a Lead Scorer, Data Categorizer, or Testimonial Finder. People use them once, get what they need, and move on. The clarity is the feature.
- Sidekicks adapt and collaborate, like a Draft Helper, Campaign Partner, or Strategy Assistant. They work alongside you without pretending to be someone specific. Fun names work here. Robin or Chewy signal helpful collaborator.
- Personas extend someone’s thinking, like LizaGPT (my digital twin), CEO Jordan bot, or an Enterprise Buyer Persona. Less about tasks, more about testing ideas, challenging logic, and finding blind spots. The name tells you whose perspective you’re getting.
Match the name to the relationship you’re creating. Clearer expectations mean your AI teammates actually get used.
Decision 2: What does it need to know?
This is where teams get stuck. They won’t build because they think they need perfect internal documentation first.
Some have nothing written down. Others have scattered or outdated playbooks. And those with established best practices assume what worked before still works.
You don’t need perfect internal knowledge to build effective AI teammates. You can use external research to fill gaps, validate what you have, or challenge assumptions you didn’t know you were making.
Deep research features in ChatGPT, Claude, Gemini, and Perplexity can gather industry benchmarks, best practices, and frameworks in minutes. Use that as your AI teammate’s starting knowledge, then refine based on your team’s situation.
I used this approach for LizaGPT. I asked the AI to research publicly available information about me, my work, and my frameworks. That research report became part of the knowledge base, giving my digital twin context about how others see my work. (See previous newsletter on how I built and use LizaGPT)
Start with external research. Test it in practice. Let your team improve it over time. That becomes knowledge worth keeping. See some examples by function in marketing.
Decision 3: How should it engage?
This is where good AI teammates become great ones, or quietly make your team weaker.
I wrote about this in a previous newsletter on critical thinking with AI. Your AI teammate can follow brand guidelines better than most people on your team. It can write a persona-based email in seconds. Maybe too convenient.
Go-to-market teams are doing the hard work of building AI teammates trained on brand guidelines, messaging frameworks, customer personas, and strategic templates. This is a solid foundation.
But many are unintentionally outsourcing the thinking along with the execution.
For routine, high-volume tasks like summarizing call notes or categorizing support tickets, full automation makes sense. The problem is when we do the same for strategic work that defines our value.
The difference is in the instructions.
Your AI teammate knows your brand, personas, and frameworks. Make sure your instructions use that knowledge to think with you, not for you. Ask for analysis before recommendations. Request trade-offs instead of just answers. Keep humans in control of key decisions.
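As a concrete sketch of that shift (the wording here is illustrative, not a prescribed template), the difference can be a few lines in the teammate’s instructions:

```text
Instead of:
  "Write the launch email using our brand guidelines."

Try instructions like:
  "Before drafting, summarize what our target persona cares about and why.
   Propose two or three approaches and the trade-offs of each.
   Wait for my choice of direction before writing the full draft.
   Flag any assumptions you are making about the audience."
```

The first version delegates the thinking along with the writing. The second keeps the analysis, the trade-offs, and the final call with the human.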
Jim Kruger, CMO of Informatica, had this insight during a strategic applied AI workshop I facilitated with his team:
“The best marketers I’ve worked with can tell you why something works, not just what works. AI can give you ‘the what’ all day long. If you’re not careful, you end up with a team that can execute but can’t explain the strategy behind any of it to apply to future initiatives.”
Decision 4: How easy is it to use?
When you build an AI teammate, you quietly become a product designer.
Most builders skip that part. They focus on what the AI knows, not how easy it is to use. Then they wonder why nobody else uses it.
Not every AI teammate needs great UX. Personal productivity tools can be scrappy. But the more it’s shared across a team, the more design matters. You can’t assume everyone knows how to get the best out of what you built. Good design shows them.
Today’s AI performs best with structured input. Not everyone thinks that way. And they shouldn’t have to.
If adoption depends on knowing how to prompt well, adoption will stay low. The best AI teammates are designed for the least expert user on the team. The goal isn’t to dumb things down. It’s to meet people where they are.
Every element of friction is a reason for someone to give up and go back to the old way of working.
I built one for myself called the GPT Instruction Architect, based on the GRACE framework (Goal, Role, Actions, Context, Examples). It’s a little meta: a custom GPT that helps you create instructions for a custom GPT you’re trying to build.
But you don’t need my version. Whether you build your own, use mine, or just keep a checklist, the value is having a consistent approach your whole team follows. When everyone uses the same framework, you get better quality, clearer structure, and AI teammates that reflect your team’s standards instead of whoever happened to build them.
Here’s a simple prompt template to ask AI to write instructions for you. There’s no shame in asking AI for help. In fact, it’s smart.
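One illustrative shape for that kind of prompt, organized around the GRACE framework (Goal, Role, Actions, Context, Examples) described above — the exact wording below is a sketch, not a canonical template:

```text
You are helping me write instructions for a new AI teammate.
Ask me one question at a time until you can fill in each part of
the GRACE framework, then draft the instructions:

- Goal: What outcome should this AI teammate deliver, and for whom?
- Role: What relationship is it creating: tool, sidekick, or persona?
- Actions: What should it do step by step, and what stays with the human?
- Context: What does it need to know (brand guidelines, personas, research)?
- Examples: What does a great output look like, and what should it avoid?

Before you finish, list the assumptions you made so I can correct them.
```

A shared template like this does the structural thinking for the builder, which is exactly the point of a consistent team-wide approach.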
Renée Gapen, SVP of Marketing at PointClickCare noted:
“Ideas, insight, and imagination are human superpowers, but not everyone thinks in frameworks! The tool we built handles the structure by asking the right questions, so people can stop thinking about ‘how’ to instruct AI and start thinking about ‘what’ they want to create. This makes it easier for our team to focus on the possible rather than getting mired in the process.”
Decision 5: Does the human still own the decision?
You can write perfect instructions and still lose your strategic edge.
Decision 3 was about setting up the right ask. This is about what happens after you get the answer. One controls the input, the other determines what you do with the output. Both protect human judgment, but at different points in the process.
This applies mainly to AI teammates that inform strategy or decisions, not simple task-based tools.
Even if you’re building a Sidekick and not a Persona, you can design it to enhance judgment rather than bypass it.
An answer machine gives you one answer. You accept or reject it.
A thinking partner gives you options, explains the rationale, surfaces trade-offs, and lets you evaluate and decide.
The difference looks like this. “Here’s your email” is an answer machine. “Here are three approaches, here’s why each might work, and here’s what to watch for with each. Which direction fits your situation?” is a thinking partner.
Each conversation should teach you something. You learn how to evaluate ideas, understand trade-offs, and make better decisions over time. That’s how you move past using AI for speed and start using it for quality and innovation.
If you’re not sure whether your AI teammate is helping you think or thinking for you, try asking it: What assumptions are you making? or What would a different persona think?
If the response surprises you, you’ve got thinking work to reclaim.
Alexandra Gobbi is CMO at Unanet. After an AI workshop with her marketing team, here’s what she observed.
“We came in thinking we were AI forward. We were using AI daily. We left with a new lens. So did the team. It’s not about using AI more. It’s about thinking with it more critically. Challenging it to go deeper. Not accepting the first answer. One team member told me she’ll carry that perspective forward. AI generates options. Humans make the final call.”
The Bottom Line
AI teammates fail for predictable reasons. None of them are about the technology.
The 57 that stuck aren’t lucky. They’re intentional. Their builders thought through the relationship they were creating, what the AI teammate needed to know, how it should engage, how easy it would be to use, and whether humans would stay in control of decisions that matter.
Before you build your next AI teammate, make these five decisions on purpose:

1. What relationship are you creating?
2. What does it need to know?
3. How should it engage?
4. How easy is it to use?
5. Does the human still own the decision?
Get these right, and you’ll build AI teammates worth keeping.
What’s Next
Building great AI teammates is step one. But McKinsey’s 2025 State of AI report found that workflow redesign drives the biggest impact from gen AI, yet only 21% of organizations have done it. Most are still bolting AI onto existing processes.
Part 2 (coming January 22) covers how to move from teammates to workflows: where workflows add value, how to design teammates for connection, and who should build versus integrate. (See also: What We Learned in 2025 and Where We’re Headed in 2026)
If you’re building AI teammates, I’d love to hear which of these five decisions has been hardest to get right. Drop a comment and share if you found this helpful.
The Practical AI in Go-to-Market newsletter shares learnings and insights in using AI responsibly. Subscribe today and let’s learn together on this AI journey.
For applied learning: Explore our applied AI workshops, offering both strategic sessions (use cases and roadmaps) and hands-on building (create AI teammates and workflows during the workshop). You’ll leave with either a clear plan or working solutions.
For team transformation: See real examples, including a lean GTM team’s step-by-step playbook and a global cybersecurity leader scaling to 150+ marketers with 57 AI teammates integrated into daily workflows.
For speaking: Here are virtual and in-person events where I’ve covered a variety of AI topics. I’ve also keynoted at many organization- and corporate-wide events.
Whether through the newsletter, multimedia content, or in-person events, I hope to connect with you soon.
