Field Notes

Work Smarter

Exhaust Vent, Bentonville, Arkansas, 2023

Last week, I was called before a committee of lawyers, IT executives, and my own boss. It was not to win an award.

The week prior, I had used a non-approved AI to analyze a spreadsheet containing employee information. To my mind, the risk was negligible: it was a pivot table of people who had attended certain training sessions. I was trying to figure out whom to invite to a new course, and the file wasn't labeled "sensitive" — though, unknown to me, one field indicated which employees had been terminated.

My interim (European) boss flagged this as improper. I said fine, it won't happen again. I've been the "AI guy" in my department for a couple of years — hosting webinars with outside experts, regularly comparing how different AI tools respond to the same prompts. This was, however, the first time I'd uploaded a file for analysis. For what it's worth: Claude handled it better than Copilot — one prompt versus four — though both eventually gave me what I was looking for.

She called the next day to reiterate that what I'd done was improper, and that she was reporting it to the European security team. (We're a global company, and the EU has stricter data rules than we do domestically.) I thought this was a gross overreaction. I said nothing.

The day after that — five minutes before a class I was co-teaching — she called again: expect a meeting invitation from the investigative committee. The invitation arrived an hour later. I had to ask a colleague to take over the class.

The outcome: no one was concerned beyond the initial "don't do that again." That was it. All of this happened two weeks after my boss of more than a decade was fired.

Today, on our global HR call, the head of HR said three things that stood out:

This is framed as an inevitable business case. What it actually is: We have no values beyond what's expedient — so suck it, peasants.

Honestly? I could live with that message. What I can't live with is the combination of "work smarter" and "if you use AI in any way that isn't explicitly sanctioned, we will drag you before the Council of Corporate Elders."

Pick one. Encourage AI adoption, or police it. The committee hearing and the HR call happened in the same two-week window. No one in that organization seems to have noticed the contradiction.