How Founders Are Rethinking Team Building in the AI Era

In our recent work with growth-stage leaders, we’re seeing a fundamental shift in how they approach hiring, development, and team structure. It’s not just about adding AI tools to existing processes—it’s about reimagining what makes teams effective when intelligence itself becomes more accessible.

The Transparency Question: What Happens to Middle Management?

One CEO made a striking move: eliminating an entire layer of middle management. Not because people weren’t performing, but because AI-enabled transparency changed what coordination actually required.

Here’s what drove that decision: When work becomes more visible—through automated progress tracking, collaborative AI documentation, and real-time synthesis of team outputs—the traditional information-brokering role shifts. The question isn’t whether middle managers add value (they do), but whether that value lies in coordination or in something harder to replicate.

Brian Elliott’s research with 10,000+ knowledge workers is crucial here: the executives who succeed aren’t the ones who double down on control. They’re the ones who get comfortable with transparency, use it to understand patterns rather than police activities, and invest in developing people rather than managing information flow. This matters because organizations where people trust their leaders are 11 times more likely to be high performers, according to research from the Institute for Corporate Productivity.

This leader redistributed those management resources into deeper technical expertise and coaching capacity. The result? Faster decisions, yes. But also higher pressure on senior leaders to actually develop people rather than coordinate them.


Hiring for the Unknown: Curiosity Over Credentials

“I’m hiring for curiosity now more than specific skills,” one founder told me. “The technical stack we need six months from now might not exist yet.”

This isn’t about abandoning expertise—these leaders still need people who can execute. But they’re adding a new filter: How does this person approach learning when the answers aren’t known yet?

One company now includes a specific interview module: “Walk us through how you’re using AI in your current role.” Not to test for sophistication (people are at wildly different adoption levels), but to surface openness to new ways of working. They’re looking for people who experiment, who can articulate what worked and what didn’t, who show genuine interest in capability-building rather than defensiveness about disruption.

The subtext: If you haven’t tried any AI tools in your domain yet, you’re probably not wired for the speed of change we’re navigating.


The Self-Check: Letting AI Audit Our Blind Spots

One executive I work with uses AI to reality-check his interview notes against job descriptions. In his words, he’s guarding against his “excitable inner child”—that tendency to fall for charisma or get excited about peripheral experiences that don’t actually map to role requirements.

This is a different kind of AI application. Not efficiency (though it’s faster), but intellectual honesty. The tool flags gaps he missed, highlights when he’s overweighting certain signals, and forces him to articulate why instinct should override data.

What’s interesting: This hasn’t made his hiring more robotic. It’s made his intuition more reliable. When he overrides the AI’s concerns, he now knows exactly why, and he’s building a track record of when those bets pay off.
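
To make this concrete, here is a minimal sketch of what such a self-check could look like, written in Python against the OpenAI API. Everything in it is illustrative rather than this executive’s actual setup: the model choice, the audit_interview helper, and the prompt wording are all assumptions about one way to implement the idea.

from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical audit prompt: the model compares the notes against the
# role's stated requirements rather than summarizing or scoring the candidate.
AUDIT_PROMPT = """You are auditing an interviewer's notes for blind spots.

Job description:
{job_description}

Interview notes:
{interview_notes}

1. List requirements from the job description that the notes never address.
2. Flag signals in the notes (charisma, shared background, impressive but
   peripheral experience) that may be overweighted relative to the role.
3. For each enthusiastic judgment in the notes, state what concrete
   evidence would confirm or contradict it.
"""

def audit_interview(job_description: str, interview_notes: str) -> str:
    """Return the model's list of gaps and overweighted signals."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": AUDIT_PROMPT.format(
                job_description=job_description,
                interview_notes=interview_notes,
            ),
        }],
    )
    return response.choices[0].message.content

The design choice that matters is in the prompt: the model is asked to surface gaps and overweighted signals, not to rank the candidate, so the human stays the decision-maker and the AI stays the auditor.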

Structural Innovation: Creating AI Scouts

Several leaders have recognized a pattern: Someone on their team is naturally interested in testing AI tools. Rather than let that interest diffuse across everyone (leading to scattered experimentation and duplicated learning), they’ve formalized it.

One gave this person a title reflecting their emerging expertise—not “AI Officer” (too heavy), but a designation that signals both skill and organizational mandate. That person now:

  • Tests new tools against team workflows
  • Documents what actually improves outcomes
  • Cascades learnings when tools make the cut
  • Protects the team’s focused time by filtering noise

This aligns with what Elliott identifies as a critical leadership behavior: “getting serious about eliminating the work that doesn’t matter.” When one person becomes the filter for AI experimentation, the rest of the team gains 3-5 hours weekly by not context-switching into every new tool that promises transformation.

The role isn’t about being the most technical person. It’s about having the judgment to distinguish between genuine capability and shiny objects, and the communication skills to help others adopt what matters.


Apprenticeship Returns: Side-by-Side Learning

With AI handling more routine analysis and documentation, several leaders are restructuring how junior people develop. Instead of the traditional model—junior person does the grunt work, senior person reviews—they’re moving toward side-by-side learning.

AI does the initial analysis. Junior and senior people review it together, in real time. The junior person learns not just what the right answer is, but how to interrogate AI outputs, what questions to ask, where the tools typically fail, and how to integrate machine speed with human judgment.

One VP described it as “compression of the learning curve.” Junior people are seeing senior-level thinking much earlier in their tenure. Senior people are forced to articulate their reasoning rather than just edit outputs.

The challenge: This requires senior people who can teach, not just do. Research from the Center for Creative Leadership shows that managers who are rated as empathetic by their team members are also rated as high performers by their own managers. The ability to develop people—not just produce work—is becoming the differentiating skill.

And it requires protecting time for development in cultures that historically optimized for productivity. Yet BCG research demonstrates the payoff: generative AI use increased by 89% when managers took the time to adjust their training approaches to account for team members’ different adoption mindsets.


What This Means for Team Building

We’re moving from “hire people who have done this job” to “hire people who can figure out jobs that don’t exist yet.” From “manage the work” to “develop the people.” From “everyone experiments” to “someone scouts, everyone learns.”

The leaders navigating this well share some common patterns:

  • They’re specific about what they’re testing and why
  • They’re honest about what’s working and what isn’t
  • They’re redesigning workflows, not just accelerating old ones
  • They’re investing in people who are energized by uncertainty
  • They’re building trust through transparency and reliability, not control

For leaders and future leaders thinking about team building: The question isn’t whether to use AI in hiring and development. It’s already happening. The question is whether you’re using it to reinforce old assumptions or to challenge them.

What we’re curious about: How are you thinking about team building as AI capabilities accelerate? Where are you seeing the most interesting experiments in hiring, development, or structure?