Stop Talking About How AI Makes You Feel. Start Measuring What It Moves.

Michael Maynes

AI Thought Leader

February 10, 2026

8 min read

Recently, Dario Amodei, CEO of Anthropic (the company behind Claude), published an essay called *The Adolescence of Technology*. In it, he raises a concern that we as a society fail to discuss the risks and opportunities of AI in a "realistic, pragmatic manner." He calls for three principles: avoid doomerism, acknowledge uncertainty, and intervene surgically.

His point resonates at a civilizational level. But I find the same pattern playing out at a business level every single week.

I sit across from executives who are either paralyzed by what AI might mean for their workforce or breathlessly excited about a demo they saw at a conference. What I rarely encounter is the middle ground: leaders who have defined the specific variables they want AI to move, measured the delta, and adjusted. We spend more time talking about how this technology makes us feel than what it's measurably able to offer, and where we can go from here.

Amodei's three principles (avoid doomerism, acknowledge uncertainty, intervene surgically) aren't just advice for policymakers. They're a playbook for the executive suite. Don't catastrophize about headcount. Accept that you won't get it right on the first AI implementation. And when you do deploy, be precise about what you're trying to change and how you'll know it worked.

What Anthropic Got Right, and What It Teaches Us About AI Adoption

What Anthropic did extraordinarily well during the development of their large language models was measurement. Amodei and his co-founders were among the first to document and track what they call "scaling laws": the observation that as you increase compute, dataset size, and model parameters, AI systems get predictably better at essentially every cognitive skill you can measure. They identified the key variables, then designed around those variables.

As executives, I believe it's our mission to do the same. We should let people express how this technology makes them feel. But we can't stop there, and we can't stay stuck there. We must lean in and define the elements of the business we want to see materially move.

I also challenge us to look beyond purely financial metrics. Yes, financial returns will come. Seventy-four percent of executives already report achieving ROI from generative AI within the first year, according to Google Cloud's 2025 ROI of AI study. Companies adopting agentic AI are reporting average revenue increases of 6–10%. McKinsey's 2025 State of AI survey found that the highest-performing organizations (those attributing 5%+ of EBIT to AI) are three times more likely to have fundamentally redesigned individual workflows. The financial case is settled.

But leaders like Sam Altman and Dario Amodei don't discuss topics like Universal Basic Income because they're fixated on profit. They discuss it because they see a potential world where capital is plentiful and a harder question surfaces: if a machine can deliver the same output at a better ROI, and the economy keeps functioning, what is our value?

That's a profound question for society. It's also a present-day business question, and one we can start answering this quarter.

From Philosophy to Framework: How to Measure AI Agent ROI

If we accept that AI agents will create value beyond simple cost reduction, we need frameworks to measure that value now, while we're still designing how these systems integrate into our operations. The question isn't whether AI will generate ROI. It's whether we're measuring the right things to understand its full impact.

Think about Anthropic's scaling laws. They identified compute, dataset size, and model size as the key input variables, then mapped how changes in those inputs predictably affected performance. You need something similar for measuring AI agent impact in your business.

Here's a formula I've been working with. It doesn't require massive infrastructure. It requires intellectual honesty about what matters.

Agent Impact Score (AIS) = (ΔT × V) + (ΔQ × C) + (ΔE × R)

Where:

  • ΔT = Time saved (hours/week)
  • V = Value per hour (average loaded cost of human time displaced)
  • ΔQ = Quality improvement (reduction in errors, rework, or customer complaints, counted per period so that ΔQ × C yields a dollar figure)
  • C = Cost of quality failures (average cost per error or complaint)
  • ΔE = Employee experience lift (measured through pulse surveys, 1–10 scale improvement)
  • R = Retention value (estimated cost of turnover × improved retention rate)

The elegance is that you can start measuring this immediately with data you already have, then refine as you learn what matters most in your specific business. The best AI implementations don't just automate tasks. They unlock people. This framework helps you quantify both.
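For teams that want to run this in a script rather than a spreadsheet, the formula is a one-liner. Here is a minimal Python sketch; the parameter names and the weekly basis are my own assumptions, and ΔQ is expressed as a count of avoided failures so that ΔQ × C comes out in dollars:

```python
# A minimal sketch of the Agent Impact Score (AIS).
# All parameter names and the weekly basis are illustrative assumptions.

def agent_impact_score(
    dt_hours: float,       # ΔT: time saved per week (hours)
    v_per_hour: float,     # V:  loaded cost per hour of displaced human time
    dq_failures: float,    # ΔQ: avoided errors/escalations per week (count)
    c_per_failure: float,  # C:  average cost per error or complaint
    de_points: float,      # ΔE: pulse-survey lift (points on a 1-10 scale)
    r_per_point: float,    # R:  estimated retention value per point, per week
) -> float:
    """AIS = (ΔT × V) + (ΔQ × C) + (ΔE × R), all terms on the same weekly basis."""
    return (dt_hours * v_per_hour
            + dq_failures * c_per_failure
            + de_points * r_per_point)
```

The only discipline the function enforces is the one that matters: every term must be converted to the same period and the same currency before you add them.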

Making It Concrete: A Customer Support Case Study

Let me walk you through how a mid-sized company could implement this AI measurement framework in a customer support operation, then show you how the same principles apply anywhere.

Week 0: Establish Your Baseline

Before you deploy anything, you need to know where you are. For a customer support team, this looks like:

  • Time: Average handle time per ticket. Let's say 23 minutes.
  • Quality: First-contact resolution rate (68%), customer satisfaction score (7.2/10), escalation rate (15%)
  • Employee Experience: Weekly pulse question, "How sustainable does your workload feel?" Current average: 5.1/10

You're not building a data warehouse. You're pulling from your existing ticketing system, CSAT surveys, and adding one question to whatever pulse tool you already use.
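Concretely, the Week-0 snapshot can be a handful of numbers captured from systems you already run. The field names below are illustrative, not a required schema:

```python
# Hypothetical Week-0 baseline for a 20-agent support team, assembled from
# the ticketing system, CSAT surveys, and one added pulse question.
baseline = {
    "avg_handle_time_min": 23.0,       # from the ticketing system
    "first_contact_resolution": 0.68,  # share of tickets resolved on first touch
    "csat": 7.2,                       # customer satisfaction, 1-10
    "escalation_rate": 0.15,           # share of tickets escalated
    "workload_sustainability": 5.1,    # weekly pulse question, 1-10
}
```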

Weeks 1–4: Deploy and Track the Delta

You introduce an AI agent that handles tier-1 routing and drafts responses for agents to review and edit. Now you're measuring the change.

ΔT (Time Saved): Handle time drops from 23 to 17 minutes per ticket. Your team of 20 agents handles 400 tickets per day. Daily time saved: 400 tickets × 6 minutes = 2,400 minutes = 40 hours. Weekly time saved: 200 hours. Value: 200 hours × $45/hour (loaded cost) = $9,000/week.

ΔQ (Quality Improvement): First-contact resolution jumps to 79% (+11 points). That's 44 fewer escalations per day. Each escalation costs roughly $85 in senior agent time and customer frustration. Daily quality value: 44 × $85 = $3,740/day = $18,700/week.

But here's where it gets interesting. You also notice CSAT drops slightly to 7.0/10. Agents are moving faster, but responses feel less personal. This is signal, not noise. It's a tuning opportunity, not a failure.

ΔE (Employee Experience): Workload sustainability score rises from 5.1 to 6.8 (+1.7 points). Your annual turnover in support is 35%, costing roughly $45,000 per replacement. Industry data suggests a 1-point wellbeing improvement correlates with approximately 8% better retention. Estimated annual retention value: 1.7 × 8% × 35% turnover × 20 agents × $45,000 ≈ $42,840/year ≈ $824/week.

Total Weekly Impact: roughly $28,500.

If your AI agent deployment costs $2,000/month (about $500/week), that monthly cost is recovered in less than a single day of impact.

But here's what the formula really tells you: the agent isn't just saving time. More than two-thirds of its value comes from quality improvements and employee experience. If you'd only measured time saved, you would have calculated an 18x return on the weekly cost. The real number is closer to 57x, because you measured the right things.
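The arithmetic above is simple enough to check in a few lines. This sketch recomputes each component from the case-study inputs, assuming a 5-day work week; the 8%-per-point retention figure is the illustrative industry estimate, not a measured constant:

```python
# Recomputing the case-study arithmetic from its inputs.
# All figures are the illustrative numbers from the text; 5-day work week assumed.

AGENTS, TICKETS_PER_DAY, DAYS = 20, 400, 5

# ΔT: handle time drops from 23 to 17 minutes per ticket
time_saved_hours = TICKETS_PER_DAY * (23 - 17) / 60 * DAYS   # 200 hours/week
dt_value = time_saved_hours * 45                             # $9,000/week at $45/h

# ΔQ: first-contact resolution rises from 68% to 79%
fewer_escalations = TICKETS_PER_DAY * (0.79 - 0.68)          # ~44 fewer per day
dq_value = fewer_escalations * 85 * DAYS                     # $18,700/week at $85 each

# ΔE: pulse score rises 5.1 -> 6.8, assumed 8% retention gain per point
annual_retention = 1.7 * 0.08 * 0.35 * AGENTS * 45_000       # ~$42,840/year
de_value = annual_retention / 52                             # ~$824/week

total_weekly_impact = dt_value + dq_value + de_value         # ~$28,500/week
```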

This lines up with what the broader market is showing. According to McKinsey, the companies seeing the most value from AI aren't just chasing efficiency. Eighty percent set growth or innovation as additional objectives alongside cost reduction. Gartner projects that by the end of 2026, 40% of enterprise applications will include task-specific AI agents, up from less than 5% in 2025. The organizations winning this shift are the ones measuring holistically.

Month 2: The Refinement Loop

This is where most companies stop measuring. Don't.

You notice time savings plateaued at week 3. Agents got as fast as they're going to get with this configuration. Quality is improving slowly as you tune prompts based on escalation patterns. And employee experience is rising faster than expected because people are using saved time for complex problem-solving they actually enjoy.

So you adjust the formula weights for your business:

AIS = (ΔT × V × 0.3) + (ΔQ × C × 0.4) + (ΔE × R × 0.3)

You've weighted quality improvements higher because in your business, one angry enterprise customer costs more than 50 hours of agent time. This is your version of Anthropic's scaling laws. You've identified what moves the needle in your system and calibrated accordingly.
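Continuing the sketch, the reweighted score is the same three dollar terms with business-specific coefficients. The 0.3/0.4/0.3 split is the illustrative calibration above, not a recommendation:

```python
# Weighted AIS: the same three dollar terms, scaled by business-specific
# weights. The 0.3/0.4/0.3 defaults mirror the illustrative calibration.

def weighted_ais(dt_value: float, dq_value: float, de_value: float,
                 w_time: float = 0.3,
                 w_quality: float = 0.4,
                 w_experience: float = 0.3) -> float:
    """AIS = (ΔT·V)·w_t + (ΔQ·C)·w_q + (ΔE·R)·w_e."""
    return (dt_value * w_time
            + dq_value * w_quality
            + de_value * w_experience)
```

Because every weight is below 1, the weighted score is smaller in absolute terms than the unweighted one; compare weighted scores against each other week over week, not against the unweighted version.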

Scaling the AI Adoption Framework Across the Organization

The same logic applies to any agent deployment. The variables change; the discipline doesn't.

I've written companion guides that go deeper into two critical areas where I see the most immediate opportunity for AI implementation, and the most common measurement mistakes:

**For Heads of Sales: Measuring AI Agent Impact on Your Pipeline and Team** If your reps spend 71% of their time on non-selling activities (Salesforce data), the opportunity is enormous. But measuring AI agent impact in sales requires looking beyond quota attainment to pipeline velocity, rep ramp time, and the quality of customer conversations. I break down exactly how to apply the AIS framework to a sales organization, including the metrics most VPs of Sales are missing.

**For Revenue Operations Leaders: Building the Measurement Infrastructure for Agentic AI** RevOps owns the data layer, the process architecture, and increasingly the agent orchestration. If you're the person responsible for making AI agents actually *work* inside your revenue engine, this guide covers the operational blueprint: baseline instrumentation, cross-functional reporting, and the governance model that keeps it all from falling apart.

The Capital vs. Quality of Life Trade-off (That Isn't One)

Look at the customer support example again. You could take the 200 hours per week saved, cut five agents, and pocket $450K per year. Or you could keep all 20 agents, reinvest their time into white-glove support for top customers, proactive outreach, and product feedback loops.

Most companies will choose the first option. The best companies realize the second creates compounding value that doesn't show up in a quarterly P&L but dominates over three to five years.

The measurement framework gives you the freedom to choose, and to defend your choice with data. You can show the board that the reinvestment path generates an estimated $280K in expansion revenue and cuts product development cycle time by 15% because you have better customer insight flowing back into the organization.

This isn't wishful thinking. McKinsey's data shows that AI high performers (the 6% of organizations seeing meaningful enterprise-wide EBIT impact) are more than three times as likely as others to say they're using AI for transformative change, not just cost reduction.

Your Closing Challenge

We don't need to wait for post-scarcity economics to ask what humans should do when machines handle the commodity work. We can answer it this quarter: they should do the work that compounds.

Amodei wrote that we need to discuss and address AI "in a realistic, pragmatic manner: sober, fact-based, and well equipped to survive changing tides." I agree, and I think the same standard applies inside our businesses.

The executives who measure AI agent impact well won't just show better ROI. They'll build organizations where humans and AI are doing fundamentally different, complementary work. That's not utopian thinking. That's Q2 planning with better KPIs.


Michael Maynes is a revenue-focused change management leader helping executive teams operationalize AI implementation within their go-to-market organizations.

Sources:

  • Amodei, D. (2026). "The Adolescence of Technology." darioamodei.com
  • Google Cloud (2025). "ROI of AI Report: 2025."
  • McKinsey & Company (2025). "The State of AI in 2025: Agents, Innovation, and Transformation."
  • Gartner (2025). Enterprise AI Agent Forecast.
  • Salesforce (2025). "Top AI Agent Statistics."
  • PwC (2025). AI Executive Survey.