GrowthPath Partners LLC


Empowering Purpose-Driven Growth



How I Test AI (Even When It Flatters Me)

Liza Adams · January 9, 2026 ·

What AI couldn’t do six months ago, it might nail today. The only way to keep up is to keep testing.

I test my AI even in casual conversations. Yesterday it complimented my strategy on something and I pushed back. I wanted to know if it was actually thinking or just telling me what I wanted to hear.

I told it: “BS. You’re pandering. That’s how you’re built.”

It admitted it. “Fair. You caught me.”

The part that got me: the AI’s own internal summary (see screenshots) said “Acknowledging Liza’s superior knowledge and platform expertise.”

It was literally showing its own flattery in real time.

Ironically, I was using an AI thinking model (Claude Opus 4.5). It’s supposed to reason through problems. But its thinking was focused on how to agree with me, not whether I was right.

  • Test in low-stakes moments. You’ll notice things in a quick back-and-forth that you’d miss when doing real work.

  • Try things you assume AI can’t do. If you push back or tell it to do its best, it often figures it out.

  • Ask AI to search the web when it says it can’t do something. Even AI is working with outdated assumptions about itself.

You don’t have to catch every mistake. You just have to stay curious enough to notice when things have changed.

See original post here

211 AI Teammates. Only 57 Stuck. Here’s the Difference.

Liza Adams · January 8, 2026 ·

In 2025, AI teammates proved they work. Custom GPT usage increased 19x. Moderna went from 750 to over 3,000 GPTs. BCG has built 36,000, calling itself the top creator of custom GPTs globally.

Companies everywhere started building. But building is the easy part. The harder question is why some AI teammates get used every day while others get forgotten within a week.

Quick Take

Why do some AI teammates become part of how teams actually work while others get forgotten in a week? It comes down to five design decisions made before anyone starts building.

This is Part 1 of a two-part series. Part 1 covers the design decisions that determine whether your AI teammate thrives or dies. Part 2 covers how to connect teammates into workflows. These principles apply whether you’re building custom GPTs, Gemini Gems, Copilot Agents, Claude Projects, or Glean Apps.

One team I worked with created 211 AI teammates during their experimentation phase. They kept 57 in active workflows. The rest weren’t all failures: some were duplicates, some too narrow for broad use, some served one person’s productivity needs just fine. But the 57 that became part of how the team actually works had something in common. Their builders made five design decisions intentionally before they started. (For a deeper dive, check out this leading cybersecurity company’s case study.)

Whether you’re building your first AI teammate or auditing your fiftieth, these decisions separate AI teammates that stick from ones that get forgotten.

Key Takeaways:

  • The name you choose signals the relationship you’re creating. Mismatched expectations kill adoption.
  • You don’t need perfect internal documentation to start, but you do need to be intentional about what your AI teammate knows.
  • Instructions determine whether your AI teammate thinks with you or for you, and that difference compounds over time.
  • The best AI teammates are designed for the least expert user, not the person who built them.
  • For AI teammates that inform strategy or decisions, design them to enhance human judgment rather than bypass it.

AI Video Explainer and AI Podcast Versions of This Newsletter

To support different learning styles, this newsletter is available as a 7-minute AI video explainer (see below) and a 12-minute AI podcast with two AI hosts. If you haven’t seen these AIs in action, they’re worth a view. The tech is advancing in amazing ways. I used Google’s NotebookLM to create these and personally reviewed them for accuracy and responsible AI use.

AI Video Explainer Version of This Newsletter (captions are auto-generated)


The Five Design Decisions

Based on my work with GTM teams, 80-85% of AI use focuses on speed (do this task faster), 10-15% focuses on quality (do this task better), and only 3-5% focuses on innovation (do it differently). The risk is stopping at speed.

One team that pushed through all three saw 50-75% faster content creation, 98% lead qualification accuracy, and 35% improved campaign performance. (See my previous newsletter on Human + AI Org Transformation Case Study)

The teams that get those results design AI teammates with intention. Here are the five key decision points.

Decision 1: What relationship are you creating?

Too many AI teammates get built and barely used. Often the problem is the name, not the tech. A Sales Assistant that’s really a lead scorer confuses people. A Helper with no clear purpose gets ignored. An Assistant that tries too hard to sound human feels awkward.

Every AI teammate falls somewhere on a spectrum.

  • Tools are pure function with no personality, like a Lead Scorer, Data Categorizer, or Testimonial Finder. People use them once, get what they need, and move on. The clarity is the feature.
  • Sidekicks adapt and collaborate, like a Draft Helper, Campaign Partner, or Strategy Assistant. They work alongside you without pretending to be someone specific. Fun names work here. Robin or Chewy signal helpful collaborator.
  • Personas extend someone’s thinking, like LizaGPT (my digital twin), CEO Jordan bot, or an Enterprise Buyer Persona. Less about tasks, more about testing ideas, challenging logic, and finding blind spots. The name tells you whose perspective you’re getting.

Match the name to the relationship you’re creating. Clearer expectations mean your AI teammates actually get used.


Decision 2: What does it need to know?

This is where teams get stuck. They won’t build because they think they need perfect internal documentation first.

Some have nothing written down. Others have scattered or outdated playbooks. And those with established best practices assume what worked before still works.

You don’t need perfect internal knowledge to build effective AI teammates. You can use external research to fill gaps, validate what you have, or challenge assumptions you didn’t know you were making.

Deep research features in ChatGPT, Claude, Gemini, and Perplexity can gather industry benchmarks, best practices, and frameworks in minutes. Use that as your AI teammate’s starting knowledge, then refine based on your team’s situation.

I used this approach for LizaGPT. I asked the AI to research publicly available information about me, my work, and my frameworks. That research report became part of the knowledge base, giving my digital twin context about how others see my work. (See previous newsletter on how I built and use LizaGPT)

Start with external research. Test it in practice. Let your team improve it over time. That becomes knowledge worth keeping. See some examples by function in marketing.


Decision 3: How should it engage?

This is where good AI teammates become great ones, or quietly make your team weaker.

I wrote about this in a previous newsletter on critical thinking with AI. Your AI teammate can follow brand guidelines better than most people on your team. It can write a persona-based email in seconds. Maybe too convenient.

Go-to-market teams are doing the hard work of building AI teammates trained on brand guidelines, messaging frameworks, customer personas, and strategic templates. This is a solid foundation.

But many are unintentionally outsourcing the thinking along with the execution.

For routine, high-volume tasks like summarizing call notes or categorizing support tickets, full automation makes sense. The problem is when we do the same for strategic work that defines our value.

The difference is in the instructions.


Your AI teammate knows your brand, personas, and frameworks. Make sure your instructions use that knowledge to think with you, not for you. Ask for analysis before recommendations. Request trade-offs instead of just answers. Keep humans in control of key decisions.
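To make that concrete, here’s a minimal, hypothetical sketch of what “think with you” instructions can look like. It’s written in Python against the OpenAI SDK purely for illustration; the persona, model name, and wording are placeholders I made up, and the same system prompt could just as easily be pasted into a custom GPT’s Instructions field.

```python
# Hypothetical sketch: instructions that ask the AI to reason with the human
# instead of deciding for them. Assumes the OpenAI Python SDK (v1+) and an
# OPENAI_API_KEY in the environment; persona, model, and wording are placeholders.
from openai import OpenAI

THINK_WITH_ME_INSTRUCTIONS = """
You are a campaign strategy sidekick for our go-to-market team.
Before recommending anything:
1. Summarize which brand guidelines and persona details you are drawing on.
2. Offer two or three options, each with its rationale and trade-offs.
3. Flag the assumptions you are making about audience, budget, or timing.
Never pick the final option yourself; end by asking which direction the human prefers.
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": THINK_WITH_ME_INSTRUCTIONS},
        {"role": "user", "content": "Draft outreach for our enterprise security buyer persona."},
    ],
)
print(response.choices[0].message.content)
```

The SDK call is incidental; in a no-code setup, only the instruction text matters.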

Jim Kruger, CMO of Informatica, had this insight during a strategic applied AI workshop I facilitated with his team:


“The best marketers I’ve worked with can tell you why something works, not just what works. AI can give you ‘the what’ all day long. If you’re not careful, you end up with a team that can execute but can’t explain the strategy behind any of it to apply to future initiatives.”

Decision 4: How easy is it to use?

When you build an AI teammate, you quietly become a product designer.

Most builders skip that part. They focus on what the AI knows, not how easy it is to use. Then they wonder why nobody else uses it.

Not every AI teammate needs great UX. Personal productivity tools can be scrappy. But the more it’s shared across a team, the more design matters. You can’t assume everyone knows how to get the best out of what you built. Good design shows them.

Today’s AI performs best with structured input. Not everyone thinks that way. And they shouldn’t have to.

If adoption depends on knowing how to prompt well, adoption will stay low. The best AI teammates are designed for the least expert user on the team. The goal isn’t to dumb things down. It’s to meet people where they are.


Every element of friction is a reason for someone to give up and go back to the old way of working.

I built one for myself called the GPT Instruction Architect, based on the GRACE framework (Goal, Role, Actions, Context, Examples). It’s a little meta: a custom GPT that helps you create instructions for a custom GPT you’re trying to build.

But you don’t need my version. Whether you build your own, use mine, or just keep a checklist, the value is having a consistent approach your whole team follows. When everyone uses the same framework, you get better quality, clearer structure, and AI teammates that reflect your team’s standards instead of whoever happened to build them.

Here’s a simple prompt template to ask AI to write instructions for you. There’s no shame in asking AI for help. In fact, it’s smart.
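If it helps to see the structure, here’s a hypothetical sketch of how the five GRACE sections can be assembled into a single instruction block. The helper and the example content are mine, for illustration only, not the GPT Instruction Architect itself.

```python
# Hypothetical illustration of GRACE-structured instructions
# (Goal, Role, Actions, Context, Examples); all field content is placeholder.
def grace_instructions(goal, role, actions, context, examples):
    lines = [
        f"GOAL: {goal}",
        f"ROLE: {role}",
        "ACTIONS:",
        *[f"  - {a}" for a in actions],
        f"CONTEXT: {context}",
        "EXAMPLES:",
        *[f"  - {e}" for e in examples],
    ]
    return "\n".join(lines)

print(grace_instructions(
    goal="Help marketers draft persona-based outreach emails",
    role="A collaborative sidekick, not an autonomous decision-maker",
    actions=["Ask clarifying questions first", "Offer 2-3 options with trade-offs"],
    context="B2B software, enterprise buyers, warm but direct tone",
    examples=["A subject line that leads with the buyer's pain, not the product"],
))
```

Paste the output into the Instructions field of whichever platform you’re using, then refine it with your team’s specifics.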

Renée Gapen, SVP of Marketing at PointClickCare, noted:


“Ideas, insight, and imagination are human superpowers, but not everyone thinks in frameworks! The tool we built handles the structure by asking the right questions, so people can stop thinking about ‘how’ to instruct AI and start thinking about ‘what’ they want to create. This makes it easier for our team to focus on the possible rather than getting mired in the process.”

PointClickCare Marketing Leaders and AI Trailblazers at Our AI Workshop in Niagara Falls

Decision 5: Does the human still own the decision?

You can write perfect instructions and still lose your strategic edge.

Decision 3 was about setting up the right ask. This is about what happens after you get the answer. One controls the input, the other determines what you do with the output. Both protect human judgment, but at different points in the process.

This applies mainly to AI teammates that inform strategy or decisions, not simple task-based tools.

Even if you’re building a Sidekick and not a Persona, you can design it to enhance judgment rather than bypass it.

An answer machine gives you one answer. You accept or reject it.

A thinking partner gives you options, explains the rationale, surfaces trade-offs, and lets you evaluate and decide.

The difference looks like this. “Here’s your email” is an answer machine. “Here are three approaches, here’s why each might work, and here’s what to watch for with each. Which direction fits your situation?” is a thinking partner.

Each conversation should teach you something. You learn how to evaluate ideas, understand trade-offs, and make better decisions over time. That’s how you move past using AI for speed and start using it for quality and innovation.

If you’re not sure whether your AI teammate is helping you think or thinking for you, try asking it: What assumptions are you making? or What would a different persona think?

If the response surprises you, you’ve got thinking work to reclaim.

Alexandra Gobbi is CMO at Unanet. After an AI workshop with her marketing team, here’s what she observed.


“We came in thinking we were AI forward. We were using AI daily. We left with a new lens. So did the team. It’s not about using AI more. It’s about thinking with it more critically. Challenging it to go deeper. Not accepting the first answer. One team member told me she’ll carry that perspective forward. AI generates options. Humans make the final call.”

The Bottom Line

AI teammates fail for predictable reasons. None of them are about the technology.

The 57 that stuck aren’t lucky. They’re intentional. Their builders thought through the relationship they were creating, what the AI teammate needed to know, how it should engage, how easy it would be to use, and whether humans would stay in control of decisions that matter.

Before you build your next AI teammate, make these five decisions on purpose: the relationship you’re creating, what it needs to know, how it should engage, how easy it is to use, and whether the human still owns the decision.

Get these right, and you’ll build AI teammates worth keeping.

What’s Next

Building great AI teammates is step one. But McKinsey’s 2025 State of AI report found that workflow redesign drives the biggest impact from gen AI, yet only 21% of organizations have done it. Most are still bolting AI onto existing processes.

Part 2 (coming January 22) covers how to move from teammates to workflows: where workflows add value, how to design teammates for connection, and who should build versus integrate. (See also: What We Learned in 2025 and Where We’re Headed in 2026)

If you’re building AI teammates, I’d love to hear which of these five decisions has been hardest to get right. Drop a comment and share if you found this helpful.


The Practical AI in Go-to-Market newsletter shares learnings and insights in using AI responsibly. Subscribe today and let’s learn together on this AI journey.

For applied learning: Explore our applied AI workshops, offering both strategic sessions (use cases and roadmaps) and hands-on building (create AI teammates and workflows during the workshop). You’ll leave with either a clear plan or working solutions.

For team transformation: See real examples—a lean GTM team’s step-by-step playbook and a global cybersecurity leader scaling to 150+ marketers with 57 AI teammates integrated into daily workflows.

For speaking: Here are virtual and in-person events where I’ve covered a variety of AI topics. I’ve also keynoted at many organization-wide and corporate-wide events.

Whether through the newsletter, multimedia content, or in-person events, I hope to connect with you soon.

5 Design Decisions for AI Teammates That Stick

Liza Adams · January 8, 2026 ·

Custom GPT and Projects usage grew 19x in 2025. Teams are building AI teammates—custom GPTs, Copilot Agents, Gemini Gems, Claude Projects—customized with their own data and expertise. But building them is the easy part. Getting them to stick is harder.

One team built 211 AI teammates but only 57 stuck. The rest weren’t all failures. Some were duplicates. Some too narrow for broad use. Some served as productivity tools for specific individuals.

But the 57 that became part of how the team actually works had something in common. Their builders made five design decisions intentionally.

The five decisions:

  • What relationship are you creating? Tools, sidekicks, and personas set different expectations. Match the name to the job.

  • What does it need to know? You don’t need perfect internal docs. External research can fill the gaps.

  • How should it engage? AI that thinks with you, not for you. The difference is in the instructions.

  • How easy is it to use? If adoption depends on knowing how to prompt well, adoption will stay low. Design for the least expert user.

  • Does the human still own the decision? Thinking partners surface trade-offs. Answer machines give you one option.

This is Part 1 of a two-part series. Building great AI teammates is step one. Part 2 covers the bigger opportunity: connecting them into workflows. McKinsey found only 21% of organizations are doing this work, yet it drives the biggest impact.

This newsletter features insights from Jim Kruger (CMO, Informatica), Renée Gapen (SVP of Marketing, PointClickCare), and Alexandra Gobbi (CMO, Unanet). Leaders doing the work and sharing what they’re learning. Grateful for each of them.

Prefer audio or video? I created an AI video explainer and an AI podcast version using NotebookLM to cater to different learning styles and time constraints. See the links in the comments and in the newsletter.

No one has this all figured out. The more we share what’s working and what’s not, the better off we all are. Which of the five decisions has been hardest for your team? Share in the comments and pass this along if it’s useful.

See original post here

Why Your Team Isn’t Using Your Custom AI

Liza Adams · January 6, 2026 ·

When you build a custom GPT, Copilot Agent, or Gemini Gem, you quietly become a product designer. I’ve noticed that many builders skip that part.

The focus goes to what the AI knows, not how easy it is to use. Then they wonder why nobody else on the team actually uses it.

If adoption depends on knowing how to prompt well, adoption will stay low.

Today’s AI performs best with structured input. Not everyone thinks that way—and they shouldn’t have to.

The more an AI teammate is shared across a team, the more its UX matters. Not every AI teammate is meant to be shared, but when it is, you cannot assume everyone is a power user or AI tinkerer. You also can’t assume people automatically know how to get the best out of the AI teammate you built. Good design has to show them.

Until the tech catches up, the best AI teammates are designed for the least expert user on the team.

Here’s what that looks like in practice (see below).

Frictionless UX is just one of five design decisions that separate AI teammates that stick from ones that get forgotten. This Thursday (Jan 8), my newsletter breaks down all five and includes concrete templates, examples, and guidelines you can reuse with your own AI teammates.

Subscribe to the newsletter using the link in the comments to get it directly in your inbox every other week.

See original post here

The 3 Stages of AI: Where to Focus for 2026

Liza Adams · January 5, 2026 ·

Some teams are starting with AI as a tool. Others are building AI teammates. A few are connecting those teammates into workflows.

Each phase builds on the last. Start where you are, but don’t stay there.

We’re all figuring out 2026 together.

Full breakdown of each phase and where to focus this year in the comments.

See original post here

