Some companies are now tracking over 200 data points per employee.

Error rates. Output volume. Ticket velocity. And one metric that should make everyone in HR stop and think:

How much work was completed without AI assistance.

That number works against you.


The Review Has Been Inverted

The traditional performance review evaluated what you produced. Quality of output. Impact on the team. Growth over time.

The new system evaluates how you produced it — and whether the method matches the company’s current ideology.

Use AI extensively and demonstrate results: rated up. Produce excellent work through traditional methods: flagged.

That’s not performance management. That’s behavioral compliance with a technology agenda dressed up as performance management.

I’m not saying it’s wrong. I’m saying we should be honest about what it actually is.


The Question Nobody Is Asking

Fewer than 16% of managers and employees understand their company’s AI vision — and yet those same employees are now being evaluated against it.

You’re being graded on a test. Nobody gave you the study guide.

Nearly two-thirds of HR professionals report that fewer than half of managers at their organization effectively address underperformance. 43% say managers aren’t adequately trained to conduct reviews at all.

Now we’re adding AI adoption metrics on top of a system that was already broken.

The foundation wasn’t solid. We’re adding another floor anyway.


What’s Actually Being Measured

Let’s be precise.

When an AI-powered performance tool aggregates your contributions, it measures only what is legible to the system.

Activity that passes through monitored platforms: visible. Judgment calls made in a hallway conversation: invisible. The engineer who caught the architectural flaw before it shipped: invisible. The HR business partner who talked someone off a ledge at 4pm on a Thursday: invisible.

AI can analyze large volumes of employee data and identify performance trends — but only in the data it can see.

The system measures its own blind spots as zero. And zero looks like underperformance.
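
To make that concrete, here's a deliberately simplified sketch of the failure mode. Every source name and weight below is invented for illustration; no real tool is this crude, but the arithmetic at the bottom is the same.

```python
# Minimal sketch of the "blind spots read as zero" failure mode.
# All source names and weights are hypothetical, not any vendor's model.

LOGGED_SOURCES = {"tickets_closed", "commits", "ai_tool_sessions"}

def aggregate_score(events):
    """Sum the activity that flowed through monitored platforms.

    Work that never generated an event (the hallway judgment call,
    the flaw caught before it shipped) contributes exactly zero,
    and downstream reporting cannot tell zero from doing nothing.
    """
    return sum(e["weight"] for e in events if e["source"] in LOGGED_SOURCES)

# Two employees with identical real impact:
visible = [{"source": "ai_tool_sessions", "weight": 1.0}] * 40
invisible = []  # the same work, done where the system can't see it

print(aggregate_score(visible))    # 40.0
print(aggregate_score(invisible))  # 0, indistinguishable from idleness
```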


The Feedback Loop Problem

Here’s where it gets structural.

In organizations where performance outcomes increasingly reflect AI adoption — where those demonstrating “AI-driven impact” are eligible for higher rewards — a quiet loop forms.

The employees who get promoted are the ones who used AI most visibly. They then design the next generation of performance criteria. And those criteria reward the behavior that got them promoted.

The system trains itself to reproduce its own values.

That’s not neutral. That’s compounding. And it happens silently, inside the logic of the tool, while HR leaders are in a meeting debating rating scales.
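
If you want to see the compounding, run the loop as a toy model. The starting weights and the transfer rate below are invented; the point is the shape of the curve, not the numbers.

```python
# Toy model of the self-reinforcing criteria loop. The weights and
# the 0.25 transfer rate are illustrative, not calibrated to anything.

ai_weight = 0.55       # share of the rating tied to visible AI use
quality_weight = 0.45  # share tied to output quality

for cycle in range(5):
    # Promotions favor whoever scores highest under the current
    # weights, so the promoted cohort skews toward visible AI users.
    promoted_ai_affinity = ai_weight

    # That cohort then nudges the next cycle's criteria toward what
    # got them promoted, shifting a slice of the quality weight over.
    shift = 0.25 * quality_weight * promoted_ai_affinity
    ai_weight += shift
    quality_weight -= shift
    print(f"cycle {cycle + 1}: AI-visibility weight = {ai_weight:.2f}")

# The weight drifts toward 1.0 without anyone ever deciding it should.
```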


The Trust Gap Is Already Here

A 2026 Betterworks study shows a widening trust gap between executives and employees — exposing critical blind spots in readiness, clarity, and adoption.

Executives believe performance systems have kept pace with AI-driven work. Employees are still waiting for systems that match the moment.

That gap isn’t a communication problem. It’s an architecture problem.

The systems were designed for a different kind of work. The work changed faster than the systems did. And instead of redesigning the systems, most companies are bolting AI features onto existing frameworks and calling it transformation.

It isn’t.


What I Think This Actually Requires

Not less technology. More precision about what we’re trying to measure.

Define what good looks like before you measure it. If you can’t describe the outcome you want in plain language, no AI system will find it for you.

Audit what the system can’t see. Every performance tool has blind spots. Name them. Account for them. Don’t let the measurable crowd out the meaningful.
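
One lightweight way to do that audit, sketched below with hypothetical channel names: enumerate every channel people actually contribute through, mark which ones the tool can see, and route the rest to human assessment instead of letting them default to zero.

```python
# Blind-spot audit sketch. Channel names are hypothetical; the point
# is that unmeasured work gets named, not silently scored as zero.

CONTRIBUTION_CHANNELS = {
    "tickets_closed":           "measured",
    "code_merged":              "measured",
    "ai_tool_usage":            "measured",
    "design_review_judgment":   "unmeasured",  # flaws caught pre-ship
    "mentoring_and_unblocking": "unmeasured",
    "crisis_conversations":     "unmeasured",  # Thursday, 4pm
}

unmeasured = [name for name, status in CONTRIBUTION_CHANNELS.items()
              if status == "unmeasured"]
coverage = 1 - len(unmeasured) / len(CONTRIBUTION_CHANNELS)

print(f"Tool coverage: {coverage:.0%} of named channels")
print("Needs human assessment:", ", ".join(unmeasured))
```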

Be honest about what you’re actually incentivizing. If AI adoption is a business requirement, say so directly. Don’t embed it in a performance framework as if it’s a proxy for quality.

Keep a human in the interpretation layer. Not as a rubber stamp. As an actual check on what the data means.


91% of CHROs say AI is among their top concerns in 2026. Nearly half haven’t established clear productivity measurements yet.

So we’re evaluating people with systems we don’t fully understand, against criteria we haven’t fully defined, inside a trust gap we haven’t fully acknowledged.

That’s not the AI’s fault.

The AI is just running the process we gave it.

The question is whether we designed that process well enough to be proud of what it produces.

Most organizations haven’t answered that yet. They’re just running the review cycle.