Paranoid Exuberance, Intellectual Honesty, and the Anxiety of Choosing Both
#36 || The tools keep getting better. We are too. Really.
Some of us are building with AI. Most of us are learning. And all of us are anxious. This was confirmed after a week of conferences, networking events, and meetings with leaders. It wasn’t the formal presentations that revealed the underlying anxiety. It was the hallway conversations, the 1:1s in which I heard what I’ve been hearing for a while:
I’m overwhelmed. I can’t keep up. I don’t know how to continue to learn beyond good prompting for myself, let alone how to help my team and organization. When am I supposed to be learning all of this and rethinking work when I’m in back-to-back meetings all day?
So, the good news is that we are not alone. The not-so-good news is that the pace and scale of AI change are not slowing down any time soon. The question is what we do with both of those truths.
My “twofer” this week of learning from and with others came from attending Charter’s Leading with AI Summit. It is always full of thought-provoking speakers and this year was no exception. What I found humbling and validating is that none of the speakers pretended to be an expert. Nobody knows what’s coming with AI. The models we just got familiar with last week have already evolved to be stronger this week. This is all coming fast and it’s hard to metabolize the pace and scale of these changes.
Jessica Lessin, founder and CEO of The Information, named the mood perfectly: paranoid exuberance. Exuberance is that overflowing, can’t-contain-it energy. Run it alongside a steady undercurrent of paranoia and we feel the tension between the two, trying to acknowledge and manage both while we learn yet another AI tool.
I’ve called a similar feeling “scarcited,” being scared and excited at once. Physiologically, both feel the same, so why not reframe how we’re feeling toward the “excited” part? I’m scarcited every time I step on stage to speak. But with public speaking, I know what’s on the other side. I’ve done it before. Paranoid exuberance is different. With AI, we don’t know what’s on the other side. We can’t picture the endpoint. We can’t even prepare for it the way we have prepared for big shifts before.
So, what do we do? Here's what I took away.
Building Exuberantly
Helen Kupp, Co-Founder and CEO of Women Defining AI, shared one of the best lines of the day: Stop using AI. Start building with it. Shift from “how do I use this tool?” to “how can I make this better?” and better, and better. She walked us through how she took a single problem, solved it, then kept going. And going again. Problem after problem, until she had a full workflow that saved her hours and is now repeatable for every cohort she runs in WDAI (an organization I recommend to every woman I know).
Helen shared her work with AI through one of the best ways to communicate shifts, the “from/to”:
From one-off tools and processes to systems
From repetition to automation
From implicit thinking to explicit instructions
That last one, especially, is a skill that we have been refining for decades. So much of what makes a great leader, a great operator, a great manager lives in our head. The instincts, the pattern recognition, the “I just know.” AI forces us to codify it. To make the implicit explicit. And when you articulate your processes clearly, you can then delegate clearly to both humans and agents, which allows you to not just build it, but teach it, and scale it.
I’m experiencing this firsthand. A few weeks ago, I used Claude CoWork to build a weekly goals dashboard. It cut my time in half compared to the one I had been doing manually in Google Docs. This week, I moved that same dashboard into Claude Code and built an app that cut that time in half again. Each iteration forced me to get clearer about what I actually wanted, and each time the tool met me there. And I’m not doing this alone. I’m exchanging texts with former colleagues about whether we’re using Firebase for collaboration, sharing what’s working and what isn’t. The building is better when it’s social.
Hannah Pritchett, Chief People Officer at Anthropic, modeled this beautifully. When she couldn’t explain to her team what she wanted, she built a prototype to show them. They finally understood exactly what she wanted and were able to then build on her prototype. That’s the builder mindset in action. (Side note: Claude Code, Anthropic’s coding tool, started as an internal product. Someone inside the company built it because they needed it. That’s what high-agency cultures produce.)
AI is shrinking the distance between having an idea and making it real. You don’t need to be a coder. You need to be a builder. You need to make space to learn, experiment, fail, and sit with the discomfort of not knowing what you’re doing yet. Because if you are clear about what you want, you will get there. Maybe somewhere even better.
Leading Honestly
Brandon Sammut, Zapier’s Chief People and AI Transformation Officer, called for “intellectual honesty” in how we talk about AI transformation. Employees don’t just want the headline. They want to know what comes next. If this works, what changes? What does it mean for me? They want to see leaders show and share what they are learning in All Company meetings. But the real honesty happens in team meetings and 1:1s, where leaders can share what’s changing, what we know, and most importantly, what we don’t know.
Brandon was talking about honesty with AI. But for me, intellectual honesty goes further. It’s about admitting that most of what we’re trying to do with AI isn’t new. AI is just forcing us to finally implement it.
Jessica Swank, Senior Vice President and Chief People Officer at Box, put it simply: AI takes away the drudgery. And she's right. But eliminating drudgery so people can do meaningful work isn't a new idea. Neither is "manager as coach," team-based structures, collaborative pods organized around initiatives, breaking down silos. These are ideas we've been putting on slides, competency models, and strategy decks for years. We have automated a lot of work over the past 20 years, we have eliminated some processes altogether, we have streamlined our work. But AI is forcing us to do all of this at scale, as a system. Manager as coach, for example, has new meaning when AI tools are embedded in the flow of work, in Slack, in your project tools, handling the tasks managers used to spend their 1:1s tracking. Those conversations can shift even more from "what's the status" to "what are you learning and how can I help you grow?"
Donna Morris, Executive Vice President and Chief People Officer at Walmart, and Aneesh Raman, Chief Economic Opportunity Officer at LinkedIn, landed in the same place: the middle manager isn't disappearing. It's becoming what it always should have been. Donna framed Walmart's approach as people-led, tech-powered, and backed it up with a challenge: companies have a responsibility to bring their people with them, not optimize them out. Upskill them. Redesign roles around them. Create new paths, instead of just eliminating old ones.
The real opportunity isn't the technology. It's what human skills can do with it. But that's only good news if we're honest about what's gotten in the way every time we've tried before. We rolled out technology changes without training people on how to use them. Or we relied on individual skills to cover the gaps rather than building systems to close them.
Anthropic’s Hannah Pritchett shared another insight that I think is under-appreciated: her team tried enabling peer feedback requests through Claude, and it didn’t gain traction. Why? Because it didn’t align with existing social norms. The tech was fine. The culture wasn’t ready. Technology has to fit culture, not the other way around.
AI adoption isn't a technology problem. It's a management and culture problem. Brandon's version of intellectual honesty is building in public, showing your work, sharing what you're learning as you go. Mine is admitting that most of what we're trying to do isn't new, and that we haven't fully leveraged the tools to build the entire system that's needed. Both are needed. You can have the best tools in the world. But if leaders aren't building in public, if employees don't feel safe to experiment, if teams aren't sharing what's working and what isn't, nothing will change. We'll just be paying for very expensive technology licenses hoping that usage will go up next month.
Choosing Both
There’s a growing conversation about staying human in the age of AI. And it’s important. But I think we’re only talking about half of it. The conversation is usually framed as a counterweight: don’t forget the human side. As if building with AI and being deeply human are in competition. They’re not. In fact, our ability to hold both at the same time may be the most human thing about us. We can sit with contradictions, hold polarities, feel the anxiety and the exuberance simultaneously in ways our digital tools can’t. That’s not a limitation to protect. It’s a strength to leverage.
The anxiety between sessions wasn’t a sign that people aren’t ready. It was a sign that we know the stakes.
I’m feeling this daily. The pressure to keep up, to stay ahead. But I’m also forcing myself to have humanly delicious moments in between those pressures. Sitting next to my friend and former colleague, Lynee, in the front row of the conference, I was struck by how comforting that felt. Being able to look over with a smile when a provocative statement landed, or when someone said something we had talked about before and our eyes met to validate it. Feeling her energy and the energy in the room, nodding at the same moments. Having the 1:1 conversations about life transitions. These are experiences that require presence more than good prompts.
I'm building apps, I'm listening to my fav AI experts like @nate.b.jones and @hardfork, I’m collaborating with colleagues.
I'm also meeting with friends IRL instead of sending another text and taking that long beach hike to Black Sands Beach with my husband.
We're building our digital system and our human system at the same time. That tension creates anxiety, but it can also create exuberance. The goal isn't to eliminate the anxiety. It's to not freeze from it. To keep going with others, choosing practices that feed both systems, intentionally, together.