GrowthPath Partners LLC

Empowering Purpose-Driven Growth



Great Strategies Die in the Reactions You Didn’t Simulate. AI Lets You Test Them First.

Liza Adams · September 6, 2025 ·


Liza Adams
50 CMOs to Watch in 2024 | AI & Exec Advisor | Go-to-Market Strategist | Public Speaker | Fractional/Advisor of the Year Finalist

Hello go-to-market (GTM) leaders, strategists, and innovators! 👋 Thank you for dropping by to learn practical AI applications and gain strategic insights to help you grow your business and elevate your team’s strategic value.

Quick Take

We test subject lines, ad copy, and CTAs. But we rarely test how the people who influence our success will actually react to strategic moves.

Product marketers launch positioning without fully anticipating competitive responses. Brand teams present to analysts while crossing their fingers about tough questions. Revenue teams launch campaigns quickly based on assumptions about buyer priorities.

This guesswork costs millions in failed campaigns, competitive surprises, and deals that stall because we misread the buyer.

GTM teams no longer have to choose between moving fast and moving with strategic intelligence. AI simulation lets you pre-test strategies against key stakeholders in hours, not weeks, before spending budget or burning relationships. Real-world validation still happens, but you’ve already caught some of the major risks.

If you’ve built a digital twin of yourself, you already understand the power of AI simulation. (If you haven’t, see the newsletter on digital twins to learn what they are and how to build one.) This takes that concept further. Instead of simulating yourself, you’re simulating the people who matter most to your business. The shift from “what would I do?” to “how would they respond?” changes strategic planning.

Most teams use AI personas to guide messaging like “Would this copy resonate with an IT director?” But that’s different from simulating how a real person — your CFO, your buyer, your fiercest competitor — will respond to strategic moves like market expansion, new tiered platform pricing, and channel partner approaches.

We’re going way beyond testing copy. We’re testing reactions to reduce the risk. That’s the difference between personalization and prediction.

Every strategy has a breaking point in the reactions. If you’re not simulating them, you’re flying blind.

Key takeaways:

  • AI simulators let you test strategies against the people who influence your success before you engage them
  • Three validation approaches: reactive (learn from failures), proactive (test before launch), predictive (anticipate moves)
  • The People Simulator Priority Matrix shows which stakeholders matter most for each GTM function
  • Most teams get the biggest impact by starting with research-based simulation for their top 2-3 critical stakeholders

Not all stakeholders carry equal weight. The People Simulator Priority Matrix helps teams prioritize whose feedback matters most and which functions need to test more deeply. Start with the people who can derail your launch, influence your strategy, or slow your momentum.

We’ll unpack this framework below to guide how your team applies simulation based on role.

Want strategic AI insights and practical AI applications like this delivered every two weeks? Subscribe to get the latest case studies and breakthroughs from leading GTM teams.


Prefer to listen to an AI-generated podcast?

AI Podcast Version of this Newsletter

To support different learning styles, this newsletter is also available as an AI podcast (15 mins) with two AI hosts. I used Google’s NotebookLM to create it and personally reviewed it for accuracy and responsible AI use. (Quick tip: After you click through, the player might take a moment to load after you press play.)

The Cost of Strategic Guesswork

Every quarter, GTM teams make million-dollar bets on assumptions.

Sales teams build pitches based on what they think buyers care about. Customer success teams launch retention campaigns without knowing what actually drives churn. Partnership teams negotiate deals while guessing at the partner’s real motivations.

When these assumptions are wrong, the costs add up fast. Campaigns that miss the mark. Analyst briefings that expose weak positioning. Competitive responses that catch teams off guard.

Teams face two bad choices: slow down with expensive research and surveys, or speed up and hope for the best. AI simulation offers a third path.

From Reactive to Predictive: Three Validation Approaches

Most teams validate reactively using traditional methods. They learn from expensive failures and adjust for next time.

[Image: the three validation approaches: reactive, proactive, and predictive]

How AI Simulation Works

Test your strategies against AI versions of the people who influence your success. Think of it as having permanent advisory access to your key stakeholders.

Before you start building simulations, check your company’s AI policy. Don’t input confidential, proprietary, or personally identifiable information. These tools are powerful, and using them responsibly builds trust and keeps your team protected.

Most teams follow a maturity progression through three implementation levels. Teams often start by testing messaging. But these tiers go beyond that. They help you test how real people will respond to the decisions that shape your strategy.

1. Basic Simulation

Simple role-play using AI’s general knowledge. Let’s say you’re testing a new pricing strategy for your SaaS platform.

Example prompt: “You are a mid-market CFO evaluating our new pricing model. What concerns would you have about our 30% price increase? What would make you stick with us versus switching to a competitor? Please explain your rationale.”

Teams use this when they’re just getting started with AI validation, testing quick hypotheses, or need immediate directional feedback with no time for research or setup.

The benefits: zero setup time, immediate insights, and it works in any chat interface. But you get generic responses based on AI training data, not specific behavioral patterns.

2. Research-Based Simulation

These are AI advisors built with specific stakeholder data like their communication style, decision patterns, past positions, and known priorities. You can implement this through chat with uploaded research or custom GPTs (AI you can train to do a specific task) with knowledge bases.

Same pricing scenario, but now with research behind it.

Example prompt: “You’re a mid-market CFO based on this research data [upload persona profile, interview transcripts, survey data, past objection patterns]. I’m testing a 30% price increase. Instead of just reacting, help me think critically about this: What assumptions might I be making about CFO priorities that could be wrong? What questions should I ask myself to pressure-test this pricing strategy? What alternative scenarios should I consider – both best and worst case? Walk me through your reasoning for each concern you raise.”

Teams are ready for this when they’ve identified 2-3 critical stakeholders. They’re willing to invest research time upfront for ongoing value. They need consistent perspective across multiple team members. And they’re making repeated decisions involving the same stakeholders.

This works best for key accounts, major competitive threats, important industry analysts, and primary buyer personas.
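If your team prefers a scripted workflow over chat uploads, the same idea can be wired up in a few lines of code. Below is a minimal sketch, not a definitive implementation: it assumes the openai Python package, an API key in the OPENAI_API_KEY environment variable, and a hypothetical cfo_research.txt file holding your own non-confidential persona research.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the research that grounds the persona: interview notes, past objections,
# public statements. Per your company's AI policy, keep confidential data out.
research = Path("cfo_research.txt").read_text()  # hypothetical file

system_prompt = (
    "You are a mid-market CFO. Ground every reaction in the research below. "
    "Challenge assumptions, surface risks, and explain your reasoning.\n\n"
    "RESEARCH:\n" + research
)

question = (
    "I'm testing a 30% price increase on our SaaS platform. What assumptions "
    "about CFO priorities might be wrong, and what best- and worst-case "
    "scenarios should I consider?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)

The same research file can also back a custom GPT’s knowledge base; the scripted path simply makes the persona repeatable across the team.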

3. Dynamic Simulation

With dynamic simulations, there’s real-time integration with CRM data, social listening, competitive intelligence, and other data sources. Updates happen automatically as stakeholder behavior changes.

Here’s the same pricing scenario, but with live market intelligence.

Example capability: Your CFO simulator automatically updates based on recent earnings calls, competitive pricing moves, and current economic conditions. When you ask about pricing strategy, it responds with current context: “Based on Q3 earnings calls, mid-market CFOs are focused on cash preservation due to rising interest rates. Your timing might be off. Three competitors dropped prices 15% in the last quarter. Here’s what CFOs are actually saying about budget priorities right now…”

Dynamic simulation makes sense for teams in fast-moving markets with access to real-time data. It requires technical setup, integration, and oversight, so it’s usually supported by specialists. Most GTM teams start with basic or research-based simulation for quick wins, then advance to dynamic once they have the right foundation in place.
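To make that refresh loop concrete, here is a simplified sketch under the same assumptions as before (openai package, API key). The data sources, competitor_prices.json and earnings_call_notes.txt, are hypothetical stand-ins for the CRM, social listening, and competitive-intelligence feeds your specialists would actually integrate and govern.

import json
from datetime import date
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def build_live_context() -> str:
    # Hypothetical stand-ins: in production these would be CRM exports,
    # earnings-call summaries, or pricing feeds refreshed on a schedule.
    competitor_prices = json.loads(Path("competitor_prices.json").read_text())
    earnings_notes = Path("earnings_call_notes.txt").read_text()
    return (
        f"As of {date.today()}:\n"
        f"Competitor pricing moves: {competitor_prices}\n"
        f"Recent earnings-call themes: {earnings_notes}"
    )

def ask_cfo_simulator(question: str) -> str:
    # Rebuild the persona's context before every question so answers reflect
    # current market conditions rather than a static snapshot.
    system_prompt = (
        "You are a mid-market CFO. React using the current market context "
        "below and explain your reasoning.\n\n" + build_live_context()
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_cfo_simulator("How would you react to our 30% price increase this quarter?"))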

Justin Parnell, my business partner who specializes in AI automation, provides his perspective on the implementation reality.

Justin Parnell, Founder of Justin GPT

“Dynamic simulation doesn’t mean every GTM professional is wiring up AI systems.

A select few specialists build and maintain the automated workflows, manage integrations, and handle governance in partnership with legal and IT. They create the infrastructure so the rest of the organization can use it safely and effectively.”

No simulation is perfect. The value is in creating a strong draft of stakeholder reactions you can validate and refine. It’s far easier to pressure-test and adjust a simulation than to start from scratch every time.

The People Simulator Priority Matrix

Using marketing as an example, I built an interactive framework to help you identify which stakeholders matter most for your function.

Priority Matrix

Great strategies often fail because one overlooked stakeholder derailed them. This matrix helps you identify who can make or break your move.

Key stakeholder types:

  • Executives/C-Suite – Internal decision makers and budget holders
  • Customers – Existing relationships and revenue base
  • Buyer Personas – Target prospects you’re trying to reach
  • Partners – Channel and alliance relationships
  • Competitors – Market dynamics and positioning battles
  • Media & Analysts – External validation and market perception

The matrix shows exactly which combinations create the biggest impact for your specific role. Product Marketing teams get the most value from buyer persona, competitor, and analyst advisors. Brand teams need media, analyst, executive, and customer advisors.

Below is an example (Buyer Persona for Product Marketing) of the guidance for building a simulator once you click on one of the icons in the matrix.

Buyer Persona Example

Claire Darling, CMO at Clari, has put this approach into practice at scale:

Claire Darling, Chief Marketing Officer at Clari

“We’ve doubled our marketing team by creating 40 AI teammates in Q2. Our Persona Messaging Auditor has been transformational. Before launching any campaign, we audit messaging against our CRO, RevOps, and finance buyer personas. The auditor surfaces specific concerns each persona would have, like when our RevOps messaging focused on features instead of the workflow integration challenges they actually face.

This process gave us deeper insights about buyer decision patterns that become competitive intelligence. We’ve moved from assuming our messaging works to validating it works before we spend money.

That’s just the beginning. Messaging is where we started, but we can now explore how to simulate stakeholder reactions to strategic decisions — not just what we say, but what we do.”

The Impact

Some of the benefits of AI advisors are as follows:

  • Risk Reduction – Catch problems before launch instead of learning from expensive mistakes. Test positioning with your analyst advisor before briefings. Check retention messaging with your customer advisor before campaigns.
  • Strategic Preparation – Get perspective when you need it most. War-game competitive responses before product launches. Test partnership proposals before formal presentations.
  • Competitive Advantage – Move faster with better intelligence than teams still using guesswork. While competitors learn from post-mortems, you prevent problems by testing first.

Your Next Steps

Start with one key stakeholder whose perspective would most improve your strategies and decision-making:

  1. Pick your first advisor based on your biggest strategic blind spot
  2. Research their behavior through public statements, past interactions, and communication patterns
  3. Build your advisor using the research as foundation (custom GPT works well)
  4. Test on a real decision and compare guidance to actual outcomes
  5. Expand your advisory team based on what you learn

You’ll still validate in the real world, but simulation gives you a massive head start. Instead of choosing between moving fast or getting insight, you get both.

Great strategies fail in the reactions. AI-forward teams won’t guess anymore; they’ll simulate first.


The Practical AI in Go-to-Market newsletter is designed to share practical learnings and insights in using AI responsibly. Subscribe today and let’s learn together on this AI journey!

For those who prefer more interactive learning, explore our applied AI workshops, designed to inspire teams with real-life use cases tailored to specific go-to-market functions.

We also guide teams through their AI transformation journey. Check out this team transformation case study and step-by-step playbook of how we helped transform a lean GTM team into a human-AI powerhouse with human and AI teammates.

Or, if audio-visual content is your style, here are virtual and in-person speaking events where I’ve covered a variety of AI topics. I’ve also keynoted at many organizational and corporate-wide events. Whether through the newsletter, multimedia content, or in-person events, I hope to connect with you soon.

AI Is Only as Sharp as Your Questions: How Critical Thinking Turns Any Work into Better Decisions

Liza Adams · August 21, 2025 ·

Hello go-to-market leaders, strategists, and innovators! 👋 Thank you for dropping by to learn practical AI applications and gain strategic insights to help you grow your business and elevate your team’s strategic value.

Quick Take

Most people use AI to get tasks done faster and take the first response. They’re training themselves to accept answers without question. Every day, teams walk away from high-value insights because they never learned to think systematically about what AI tells them.

Treating AI as a thinking partner turns any interaction into deeper insight. Here’s what happens when you shift your approach:

  • Ask “why” behind every recommendation – You see how AI thinks, can check its work, and train yourself to think more systematically about your own decisions
  • Challenge assumptions in any context – Whether you’re questioning “educational content performs best” for social media or “Europe is our best market” for expansion, the same principles apply
  • Get multiple views and confidence levels – Turn any AI response into a thinking exercise that shows blind spots and options
  • Critical thinking works regardless of scope – The same line of questioning that improves email subject lines also makes sense for strategic planning
  • Work in a judgment-free space – AI doesn’t care about your ego or timeline, making it easier to question assumptions you’d defend in front of colleagues

The difference isn’t the technology or the complexity of your work. It’s how you think about thinking. When you approach AI as a thinking partner rather than a task doer, every interaction becomes an opportunity to make your decision-making better.


Prefer to listen to an AI-generated podcast?

AI Podcast Version of this Newsletter

To support different learning styles, this newsletter is also available as an AI podcast (13 mins) with two AI hosts. I used Google’s NotebookLM to create it and personally reviewed it for accuracy and responsible AI use. (Quick tip: After you click through, the player might take a moment to load after you press play.)


From Execution to Insights

AI can help with any type of work across your entire GTM org and beyond. Whether you’re writing email subject lines or planning market expansion, creating battle cards or setting sales territories, the same tech supports both daily tasks and big decisions.

But here’s what separates good AI use from transformational AI use: applying this approach to all of it, tactical and strategic work alike, as shown in the table below.

[Table image: tactical vs. strategic AI use cases across GTM work]

Most teams automate what they already know how to do. Teams that understand AI’s real potential use it to uncover what they don’t yet know.

The Power of Asking “Why”

The key is asking AI to explain its reasoning.

Most people take AI’s first answer and run with it. But when you ask “Why do you recommend this?” or “What’s your reasoning behind this ranking?”, you learn something new.

You see how AI thinks. When you ask for reasons behind every response, you’re not just getting answers. You’re learning to think more clearly yourself.

You can check its work. This is how you catch AI confidently recommending terrible strategies that look brilliant at first glance. You catch gaps in logic. And you train yourself to think more clearly about your own decisions.

We’ve been rewarded our entire lives for having the right answers: good grades, promotions, recognition. But watch any great meeting: the most valuable person isn’t the one with all the answers. It’s the one asking the insightful questions that shift how everyone thinks about the problem. AI doesn’t change this dynamic; it amplifies it.

Getting better AI outputs is just the beginning. The real value is building better thinking habits that stick with you in every conversation and decision. Whether you’re planning enterprise strategy or choosing social media topics, asking “why” transforms any work into deeper analysis.

Mandy Dhaliwal, CMO at Nutanix, has experienced this firsthand.

Mandy Dhaliwal, CMO of Nutanix

“It’s important not to use AI like a Q&A machine. We guide its thinking, we brainstorm together, but we still make the final call.

How many breakthrough ideas get killed because someone had to wait for the next team meeting to bounce them around? We can test messaging ideas off hours or work through event plans during our morning walk.

That immediate access to a thinking partner completely changes how we make decisions.”

The Psychology Behind Better Questions

When you shift from asking AI to execute tasks to asking it to challenge your thinking, the psychological barriers that normally keep us from questioning our own work disappear.

This works because:

  • AI doesn’t judge – No ego, politics, or timelines. This makes honest evaluation possible.
  • Private testing leads to public confidence – Challenge ideas with AI first, then show up to meetings prepared.
  • Silos vanish – A Harvard study with P&G professionals found teams working with AI “stop caring as much about the normal boundaries of your job.” AI focuses on problems, not politics.
  • Real example: strategic confidence builds fast – During a recent workshop, an ABM marketer used AI to challenge her key account marketing plan. Despite her excitement about the possibilities, she admitted, “It hurt because I’d worked so hard on this. This is my baby and AI was calling it ugly. But the questions were so good.” She realized she could strengthen her strategy by validating key assumptions.

This shift happens faster than you might expect. This week in Atlanta, I conducted function-specific AI workshops with 60+ marketers at Cox Automotive Inc., the company behind Kelley Blue Book and Autotrader that helps dealers and partners buy and sell millions of vehicles each year. Ramon L. Cortes, AVP of Marketing Operations, saw these mindset changes happening in real-time.

Ramon Cortes, AVP of Marketing Operations at Cox Automotive Inc.

“I watched lightbulbs come on and mindset shifts happen right before my eyes. Once people started asking AI deeper questions and working as a group, the discussions became richer, more analytical, and focused on business outcomes.

These kinds of conversations typically don’t happen until later in our process – sometimes not until we’re already presenting to executives. Now it will happen sooner, which means we’ll identify gaps faster, use resources more efficiently, and increase our strategic value.”

The Critical Thinking Framework

This approach turns every AI conversation into critical thinking practice. I’ve been using this with teams for a couple of years, and my own thinking has gotten sharper because of it.

Here’s a three-level approach behind this thinking.

Level 1: Basic Evaluation

  • Offer some alternatives to this approach.
  • Give me the pros and cons of each option.
  • Rank these ideas based on [specific criteria].
  • Rate these from highest to lowest confidence.
  • Sort these options into must-have, preferred, and nice-to-have.
  • Break this down into smaller steps and show the timeline.

Level 2: Different Views

  • How might this be seen by [specific stakeholders]?
  • Put these options in a 2×2 chart using [X and Y criteria].
  • What would [competitor/customer/exec team] think about this approach?
  • Show me the decision tree for checking these choices.
  • What are the key factors and how does weighing them change the outcome?
  • Compare how this problem is solved in [different industry/company size/region].

Level 3: Assumption Challenging

  • What assumptions am I making that might not be true?
  • What would make this idea 10x stronger?
  • Where might this approach fail or backfire?
  • Run 3 what-if scenarios and show how each changes the outcome.
  • Challenge my assumption that [specific belief]. What situations would make our current approach actually work best?
  • Point to relevant data sources and provide reasons for your recommendations.

Yes, this takes more time upfront, but it saves you from spending weeks executing the wrong plan.

The important step is to always ask “What’s your reasoning?” or “Why do you recommend this?”

These techniques work whether you’re allocating marketing budgets or brainstorming content ideas. The scope changes, but the thinking discipline stays the same.
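If you want to practice the discipline systematically, the three levels can also be run as a loop. This is a minimal sketch under the same assumptions as earlier (openai package, API key); the decision text and the level questions are placeholders you would tailor to your own work.

from openai import OpenAI

client = OpenAI()

# Hypothetical decision to pressure-test; replace with your own.
decision = "Raise prices 30% on our mid-market SaaS tier next quarter."

levels = {
    "Level 1: Basic Evaluation": "Offer alternatives, list pros and cons of each, and rank them by confidence.",
    "Level 2: Different Views": "How would the CFO, a key customer, and our main competitor each see this?",
    "Level 3: Assumption Challenging": "What assumptions am I making that might not be true, and where could this backfire?",
}

for level, question in levels.items():
    # Each pass asks for reasoning, not just an answer, so you can check the work.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a critical-thinking partner. Always explain your reasoning."},
            {"role": "user", "content": f"Decision under review: {decision}\n\n{question}"},
        ],
    )
    print(f"\n=== {level} ===")
    print(response.choices[0].message.content)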

Critical Thinking in Action

Here are examples of what you can expect when you ask basic questions vs insightful ones. These assume some basic context was provided, but the real difference is how better questioning turns any conversation into deeper insights. With basic questions, we get basic answers. Insightful ones result in thoughtful responses that help us make better decisions.

The final judgment and answer are always on us, not AI.

Case 1: Social Media Topics (Level 1 Critical Thinking)

Instead of “Please give me social media topic ideas”, which outputs the fairly generic response below:

[Image: generic AI response]

Try: “Please suggest 3 social media topics for our demand gen audience. I’ve been doing mostly educational content but engagement feels flat. What different approaches should I try? Please rank them by likely impact vs effort and tell me why.”

Sample Response:

[Image: sample AI response]

Case 2: Pricing Model Analysis (Level 2 Critical Thinking)

Instead of asking “Please compare these pricing options” which gives you this basic answer:

[Image: basic AI comparison of the pricing options]

Try: “We’re considering three pricing models for our project management platform. Please put them in a 2×2 matrix using customer acquisition vs revenue predictability. Please show how sales, finance, and product would view each differently.”

Sample Response:

[Image: sample AI response]

Case 3: Account-based Marketing Strategy (Level 3 Critical Thinking)

Instead of “Please create an ABM strategy for our cybersecurity platform targeting mid-market companies” that gives you this answer:

[Image: generic ABM strategy response]

Try: “We’re considering ABM for our cybersecurity platform. What if our assumption that ‘bigger accounts mean better ROI’ is wrong? Challenge this approach. What alternative targeting strategies might work better? What could make traditional ABM backfire for us?”

Sample Response:

[Image: sample AI response]

In the AI era, the most dangerous decisions aren’t the ones you get wrong. They’re the ones you make quickly, confidently, and without question because the answer sounded right.

Note: In the examples above and in your own work, the context you share with AI is key to how well it can help. Remember to share your goal, the role you want it to play (e.g., competitive analyst, skeptical buyer), the actions it should or should not take, context (e.g., current situation, relevant files, accurate data), and examples (i.e., what good looks like).

Where Do You Stand?

Want to see how you currently approach AI? Take this quick assessment to discover whether you’re using AI as a Task Executor, Analytical Collaborator, Developing Critical Thinker, or Critical Thinking Partner. You’ll get personalized next steps based on your results.

The quiz takes 3 minutes and shows you specific ways to level up your AI thinking, regardless of whether you’re working on major strategic initiatives or daily tactical tasks.

The Breakthrough

Critics worry AI will make us intellectually lazy. The opposite is happening with teams that take this approach.

When you systematically challenge AI outputs and ask for reasons behind every recommendation, you develop stronger evaluation skills than most traditional education provides. You’re getting hands-on practice in logic, understanding perspectives, and systematic analysis.

These same analytical habits show up in your team meetings, strategic reviews, and decision-making conversations.

I’ve noticed this with my own thinking. The questions I ask now, of AI and in regular work conversations, are richer. I automatically look for alternatives, ask for confidence levels, and pressure-test assumptions in ways I didn’t before.

Your Critical AI Thinking Starter Kit

Want to start using AI as a thinking partner? Try this prompt with your next project:

“I’m working on [PROJECT DESCRIPTION] with the goal of [SPECIFIC OUTCOME]. Here’s some context: [SITUATION/CONSTRAINTS]. You’re a [strategist/devil’s advocate/customer advocate].

Instead of solving this for me, suggest 5-7 strategic questions I should ask you that will help me think more critically, evaluate options thoroughly, consider different perspectives, and understand implications I might be missing. Focus on questions that challenge assumptions and identify blind spots.”

Then use those questions. Ask for rationale. Push back on the reasoning. Build on the ideas. That’s where the real value lives.
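If you want to reuse this starter-kit prompt across projects, a tiny script can template it. This is a minimal sketch under the same assumptions as before (openai package, API key); the project details below are placeholders, not recommendations.

from openai import OpenAI

client = OpenAI()

STARTER_KIT = (
    "I'm working on {project} with the goal of {outcome}. "
    "Here's some context: {context}. You're a {role}.\n\n"
    "Instead of solving this for me, suggest 5-7 strategic questions I should "
    "ask you that will help me think more critically, evaluate options "
    "thoroughly, consider different perspectives, and understand implications "
    "I might be missing. Focus on questions that challenge assumptions and "
    "identify blind spots."
)

prompt = STARTER_KIT.format(
    project="a mid-market ABM campaign",                    # placeholder
    outcome="20 qualified opportunities next quarter",      # placeholder
    context="flat engagement on educational content",       # placeholder
    role="devil's advocate",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)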

Sydney Sloan, CMO of G2, applies this thinking to customer feedback analysis.

Sydney Sloan, CMO of G2

“When I look at customer feedback, I don’t just ask AI to tell me what people are saying. I ask it to spot what we might be getting wrong. Like ‘Based on these reviews, what customer problems are we not solving that we think we are?’ or ‘What are customers actually using our product for that’s different from what we built it for?’ Those questions reveal gaps between what we assume and what’s really happening.”

The Bigger Picture

Whether you think you’ve mastered AI or you’re still struggling with it, you’re probably operating at 20% of what’s possible.

The biggest AI advantage comes from questioning what everyone else takes for granted, not from better tools or prompts.

When teammates say this is overthinking, show them the difference between your basic and thoughtful responses; the value becomes obvious quickly.

This also becomes essential as we build AI teammates. Teams that can’t think critically with these tools now won’t be able to build teammates that think critically later.

Pick one decision your team made in the last month. Ask AI to play devil’s advocate and identify risks you missed. Share those insights with your team. That’s how you demonstrate AI’s potential for critical thinking and get the most out of it.

AI makes it easier than ever to act fast. But it also makes it easier to be confidently wrong. Clear thinking still sets you apart.

Remember: you’re building critical thinking habits that improve every meeting and decision.


The Practical AI in Go-to-Market newsletter is designed to share practical learnings and insights in using AI responsibly. Subscribe today and let’s learn together on this AI journey!

For those who prefer more interactive learning, explore our applied AI workshops, designed to inspire teams with real-life use cases tailored to specific go-to-market functions.

We also guide teams through their AI transformation journey. Check out this team transformation case study and step-by-step playbook of how we helped transform a lean GTM team into a human-AI powerhouse with human and AI teammates.

Or, if audio-visual content is your style, here are virtual and in-person speaking events where I’ve covered a variety of AI topics. I’ve also keynoted at many organizational and corporate-wide events. Whether through the newsletter, multimedia content, or in-person events, I hope to connect with you soon.

How I Stay Grounded in the AI Era

Liza Adams · August 14, 2025 ·

With AI, a little balance, a little grace, a little perspective go a long way. The struggle as we learn and infuse AI in our daily lives is real. I feel it every single day. Here are just a few ways I keep myself grounded. Thought it might help others.

On Jobs

60% of the jobs we have today did not exist in 1940. We no longer have elevator operators and stenographers, but we now have software developers and social media managers.

There will be jobs lost but there will also be new jobs created. We will adapt again. But the difference today is the pace. The bigger question is… will we be able to upskill/reskill and create new jobs fast enough?

AI as teammates

I have terrible memory. Some days I’m exhausted. I worry about my aging mother in Manila or stress about my daughter heading to college. I’m human. I can’t be my best self every single day. AI remembers much more than I do. It isn’t affected by its environment. It doesn’t get distracted or have off days. But AI doesn’t have a moral compass and doesn’t have fire in the belly. I guide it so it knows right from wrong.

Together, we overcome each other’s limitations. That’s how my AI teammates and I work better together.

Be human first

Be an amazing human being first, then be an amazing business person. Don’t chase algorithms. Focus on being grounded in our values and authentic in how we show up.

AI is an amplifier of what’s already there, good or bad. When we’re genuine and focused on truly helping people, AI will amplify that. When we’re not, that shows up too.

AI Literacy

Regardless of how we feel about AI, being educated about it is foundational.

When we understand AI, we can make better decisions for ourselves, our families, our businesses, our careers, and our communities. When we don’t, we risk being influenced by others who may not hold the same values as us.

Give ourselves grace

We all need to give ourselves and others a lot of grace wherever we are on the AI journey. We all have different things happening in our work and personal lives. My husband is a published author of young adult sci-fi novels who hesitates to put his work in AI because he fears others will be able to write his next series.

But as a right-leg amputee due to childhood cancer, he embraces AI in robotics because he believes it can change how people work and live, especially those with disabilities.

A person’s acceptance and adoption of AI can be very personal and even vary by use case. We need to respect that.

Speaking of balance… this is me working with AI on the deck with my new CMO Coffee Talk with Matt & Lat mug (and matching sticker on my laptop) that reminds me of community. You can’t see it in the photo but I’m also on my walking pad.

Sometimes the best way to stay grounded is to literally get outside and remember that AI is just one part of our lives, not the whole thing.

What’s helping you find balance in this AI era?

Person working with a laptop outside on a deck, with a coffee mug and walking pad visible.

See original post here

Hiring Managers Want You & Your AI Teammates

Liza Adams · August 13, 2025 ·

Hiring managers are starting to evaluate more than just you. They want to see you and your AI teammates.

71% of executives already prefer less experienced candidates with AI skills over seasoned pros without them. They’re looking for people who can create workflows that deliver faster work, better quality, and completely reimagined processes.

I have terrible memory. Some days I’m exhausted. I worry about my aging mother in Manila or stress about my daughter heading to college. I’m human. I can’t be my best self every single day.

AI remembers much more than I do. It isn’t affected by its environment. It doesn’t get distracted or have off days.

But AI doesn’t have a moral compass and also doesn’t have fire in the belly. So I guide it so it knows right from wrong.

Together, we overcome each other’s limitations. That’s how my AI teammates and I work better together.

(See link below for more info about AI teammates.)

Teams using this approach see 50-75% faster content creation, 98% accuracy in lead qualification, and 35% better campaign performance.

I’m curious how many people have actually built and use AI teammates regularly. Vote in the poll below.

Companies want people who can work this way. Someone who can create workflows to analyze data faster and write more timely, insightful content. Someone who can make strategic decisions quickly and with confidence.

The opportunity is wide open. You can start small, learn as you go, and build the AI teammates that complement your strengths and cover your weaknesses.

Whether you’re hiring or looking for your next role, the question is the same: are you ready to work as a team with AI?

See original post here

Hiring for Hybrid AI Teams: Adaptability is Key

Liza Adams · August 12, 2025 ·

Your next hire might manage more AI teammates than human ones.

I’m working with a GTM team of 70+ people who built, trained, and manage 150+ AI teammates so far. That’s more than 2 AI teammates per human and it continues to grow.

What should you look for in candidates as you build your hybrid human + AI org?

The challenge is that many haven’t built and orchestrated AI teammates yet. So you may have to look for other signs to figure out fit. You might focus on people who adapt well rather than requiring deep AI experience.

Some ideas:

  • Learned new ways quickly when their company adopted new systems

  • Stepped outside their role to solve problems

  • Uses AI tools in their current work (even basic ones)

You have two options: hire for the ability to adapt quickly or find people who’ve done this before. Both have trade-offs.

The first approach gets you more candidates but needs training and patience. The second approach limits your options but gets you someone who can hit the ground running if you’re willing to wait and pay for proven experience.

If you want help creating specific interview questions for either approach, I built an AI-Ready Employee Hiring Guide GPT (see link in comments) that walks you through this decision and generates custom questions based on the role and priorities. Let me know how it worked for you.

How are you thinking about this in your hiring?

AI-Ready Employee Hiring Guide GPT


See original post here


Copyright © 2025 · GrowthPath Partners LLC