The AI Era Is a Scale Problem — And CISOs Can’t Solve It the Old Way
Every major technology shift forces security teams to relearn the same lesson:
Risk doesn’t grow linearly — it explodes.
AI is no different, except for one thing that is fundamentally new: scale.
If engineers are about to generate significantly more code, deploy more services, connect more systems, and move faster than ever, then the attack surface will grow accordingly. Not just a little more — orders of magnitude more. And no, security teams will not get orders of magnitude more headcount, budget, or time.
That means a simple truth we need to face now:
If we try to manage AI-era risk the way we managed risk before, we will fail.
The answer is not “AI security tools.”
The answer is changing how we think, where we invest, and when we say no.
AI Changes the Game Because It Changes the Rate of Change
In recent conversations around AI adoption (including Lenny’s Podcast and Newsletter), one thing stands out:
AI doesn’t just make people better — it makes them faster.
Prompt engineering in 2025 isn’t about clever wording. It’s about workflows, chaining, delegation, and agent-based execution. Engineers are already experimenting with systems that:
- Write code
- Test code
- Review code
- Deploy code
- Call external services
- Make decisions
Projects like CAMEL, AgentLayer, and experiments around agentic workflows show where this is heading:
software that acts, not just responds.
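To make that concrete, here is a toy sketch of what such a loop looks like. Nothing in it calls a real model or framework; every function is a placeholder, and the point is only to show how many distinct actions and permissions a single task can touch on its way through.

```python
# Toy agent loop: a task is pushed through tools that write, test, review,
# deploy, and call external services, with no human in the loop.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]   # each tool is an action with its own permissions

def write_code(task: str) -> str:    return f"code for: {task}"
def run_tests(code: str) -> str:     return "tests passed"
def review(code: str) -> str:        return "approved"
def deploy(code: str) -> str:        return "deployed to staging"
def notify_api(result: str) -> str:  return "external service notified"

PIPELINE = [Tool("write", write_code), Tool("test", run_tests),
            Tool("review", review), Tool("deploy", deploy),
            Tool("external", notify_api)]

def agent(task: str) -> None:
    artifact = task
    for tool in PIPELINE:            # every step is an autonomous decision
        artifact = tool.run(artifact)
        print(f"{tool.name:8} -> {artifact}")

agent("add an export endpoint")
```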
For security, this is the real inflection point.
More actions.
More permissions.
More inferred access.
More things happening without a human in the loop.
That’s the scale problem.
What Should CISOs Actually Do?
1. Get Educated About AI (Not “AI Security”)
CISOs and security teams need to understand how AI is being used, not just how it can be attacked.
This means:
- How teams are using foundation models
- How prompts, agents, tools, and plugins interact
- Where decisions are automated vs supervised
- What model providers already protect by default
Most companies are not training models from scratch. They are:
- Using existing models
- Fine-tuning them
- Wrapping them with workflows and agents
That matters — because model providers already absorb part of the risk. The real risk shifts to:
- How models are connected
- What they are allowed to access
- What actions they are allowed to take
Security teams that understand this early can participate in the transition instead of reacting to it later.
And that participation matters — because once AI workflows are embedded deep in engineering processes, retrofitting controls becomes nearly impossible.
2. Get Visibility — Rebuild the Attack Surface
Before controls, before tools, before policies — visibility.
Treat AI usage exactly like a new class of third-party systems:
- What AI services are being used today?
- What is planned for the next quarter?
- What data do they access?
- What actions can they perform?
- What credentials and permissions do they hold?
- What is externally exposed?
Rebuild the attack surface deliberately.
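One way to start, and this is a sketch rather than a prescription: keep the inventory as structured data that someone owns, so the questions above have answers you can actually query. The service names and fields below are invented for illustration.

```python
# Minimal AI usage inventory: the questions above, captured as data you can query.
from dataclasses import dataclass

@dataclass
class AIService:
    name: str
    owner: str
    data_accessed: list[str]        # what data it can reach
    actions: list[str]              # what it can actually do
    credentials: list[str]          # tokens and roles it holds
    externally_exposed: bool = False
    planned_only: bool = False      # not live yet, planned for next quarter

inventory = [
    AIService("support-chat-assistant", "cx-team",
              data_accessed=["ticket history"], actions=["draft replies"],
              credentials=["helpdesk-readonly"], externally_exposed=True),
    AIService("code-review-agent", "platform",
              data_accessed=["source repos"], actions=["comment on PRs"],
              credentials=["scm-app-token"], planned_only=True),
]

# First pass at the attack surface: anything exposed or holding credentials
# gets a named owner and a review.
for svc in inventory:
    if svc.externally_exposed or svc.credentials:
        print(f"review: {svc.name} (owner {svc.owner}) "
              f"data={svc.data_accessed} actions={svc.actions}")
```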
And resist the pressure for “black box” access. We’ve seen this movie before.
Remember when teams opened S3 buckets to the world because speed mattered more than security?
AI adoption has the same failure mode — just faster and harder to unwind.
Visibility is not about blocking innovation.
It’s about making informed tradeoffs before bad defaults become permanent.
3. Invest in Architecture, Not Just Controls
In an AI-driven system, architecture is the control plane.
The most important questions CISOs should push their teams to ask are:
- What is externally exposed?
- What is the minimum permission required for each action?
- Are identities separated between agents, services, and humans?
- Can permissions be inferred or escalated indirectly?
- What data is reachable without explicit intent?
Research around agentic AI, intent declaration, and structured planning (including ideas discussed in recent papers and experiments like Claude-based subagents enforcing TDD) points to the same conclusion:
Safety emerges from design, not from bolt-on tools.
This doesn’t require new security products.
It requires discipline:
- Clear separation of entities
- Explicit permission boundaries
- Intent-aware workflows
- Mandatory checkpoints where automation slows down
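To show the shape of the last two items (not a product), here is a rough sketch of a deny-by-default permission boundary per agent identity, with a mandatory checkpoint on the actions where automation should slow down. The identities and action names are assumptions made up for the example.

```python
# Deny-by-default permission boundaries per agent identity, plus a mandatory
# human checkpoint for the actions where automation slows down.

ALLOWED_ACTIONS = {
    "code-review-agent": {"read_repo", "comment_on_pr"},
    "deploy-agent":      {"read_repo", "deploy_staging", "deploy_production"},
}

CHECKPOINT_ACTIONS = {"deploy_production", "grant_access", "delete_data"}

def authorize(agent_id: str, action: str, human_approved: bool = False) -> bool:
    allowed = ALLOWED_ACTIONS.get(agent_id, set())  # unknown identities get nothing
    if action not in allowed:
        return False                                # explicit boundary, no inference
    if action in CHECKPOINT_ACTIONS:
        return human_approved                       # the checkpoint is not optional
    return True

assert authorize("code-review-agent", "comment_on_pr")
assert not authorize("code-review-agent", "deploy_staging")
assert not authorize("deploy-agent", "deploy_production")
assert authorize("deploy-agent", "deploy_production", human_approved=True)
```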
Control is important — but asking the right questions early is far more powerful.
4. Wait Before You Buy Tools
This is the uncomfortable part.
With hype come the ambulance chasers.
Many vendors are racing to sell “AI security” by:
- Validating prompts
- Filtering inputs
- Blocking outputs
This is fundamentally flawed.
The space of possible prompts is effectively unbounded. Prompt-level controls alone cannot scale, and they create a dangerous illusion of safety.
That doesn’t mean visibility and control aren’t possible — they are.
But effective control must live:
- In the development environment
- In the production workflow
- In identity, permissions, and data access layers
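As one concrete example of that last layer: a control in the data access path decides what an AI workflow can ever see, regardless of what any prompt asks for. A rough sketch, with fields invented for illustration:

```python
# Data-access-layer control: whatever the model or agent asks for, only the
# fields and records it is entitled to ever leave this function.

ALLOWED_FIELDS = {"ticket_id", "subject", "status"}   # no PII, no payment data

def fetch_for_agent(records: list[dict], team: str) -> list[dict]:
    visible = [r for r in records if r.get("team") == team]
    return [{k: v for k, v in r.items() if k in ALLOWED_FIELDS} for r in visible]

tickets = [
    {"ticket_id": 1, "subject": "login issue", "status": "open",
     "team": "cx", "customer_email": "a@example.com"},
    {"ticket_id": 2, "subject": "refund request", "status": "closed",
     "team": "billing", "customer_email": "b@example.com"},
]

print(fetch_for_agent(tickets, team="cx"))
# [{'ticket_id': 1, 'subject': 'login issue', 'status': 'open'}]
```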
Dedicated AI security tools are still early because the problem itself is not yet well-defined.
Before buying anything, CISOs should be brutally clear:
- Are we worried about external exposure?
- Data access?
- Permission sprawl?
- Autonomous actions?
Each of these has a different solution.
Buying tools before you understand your primary risk is not just risky — it can actively slow down the transformation you’re trying to secure.
So wait.
Observe.
Participate.
Architect.
Then buy — when you know exactly what problem you’re solving.
So far, this may sound defensive — control the risk, slow things down, avoid mistakes.
But that’s only half the story.
There is a massive opportunity here, and it’s one that security teams almost never get.
I believe, and I’m willing to predict, that AI will improve the effectiveness of security teams more than it increases the risk they face.
Not because AI is safer.
But because, for the first time, security can operate at the same scale and speed as engineering.
From Observation to Action
Historically, security teams have been mostly observational:
- We detect
- We alert
- We review
- We escalate
- We block (sometimes)
What we rarely do is take action continuously and contextually.
AI changes that.
Security teams can now:
- Analyze far more signals than humans ever could
- Reason across architecture, code, identity, and data
- Act consistently, every time, without fatigue
- Scale decisions without scaling headcount
This is not about replacing security engineers.
It’s about amplifying them.
Instead of reviewing one design doc, one PR, one exception at a time, security can embed intelligence directly into workflows — the same workflows engineers already use.
Bridging the Knowledge Gap — Automatically
One of the hardest problems in security has always been the knowledge gap:
- Engineers move fast
- Security knowledge is specialized
- Training is infrequent and quickly forgotten
Most organizations still rely on:
“Train developers twice a year to code securely”
We all know how well that works.
AI gives us a better option.
By embedding security-aware agents into development workflows — planning, coding, testing, and deployment — security guidance becomes:
- Continuous
- Context-aware
- Actionable
- Non-blocking
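Mechanically, “continuous and non-blocking” can be as simple as a step in the pipeline engineers already run: it asks a security-aware reviewer for findings on a change and posts them as advisory comments instead of failing the build. The checks below are toy heuristics standing in for whatever model or agent a team actually uses.

```python
# Sketch of a non-blocking security review step in an existing pipeline.
# security_review() is a stand-in for a real model- or agent-backed reviewer.

def security_review(diff: str) -> list[str]:
    findings = []
    if "shell=True" in diff:
        findings.append("Shelling out with shell=True: confirm the input is not attacker-controlled.")
    if "verify=False" in diff:
        findings.append("TLS verification disabled: is this intentional?")
    return findings

def review_step(diff: str) -> int:
    for finding in security_review(diff):
        print(f"[advisory] {finding}")   # posted as a comment, not a gate
    return 0                             # never fails the build: guidance, not enforcement

review_step('subprocess.run(user_input, shell=True)')
```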
Experiments like Claude-based subagents enforcing planning, TDD, and structured execution point to a powerful model:
- Security doesn’t lecture
- Security participates
- Security influences outcomes in real time
That’s a fundamentally different relationship with engineering.
Positive Influence Beats Enforcement
This is where AI can change security culture, not just tooling.
Instead of:
- “Security said no”
- “Security reviewed this late”
- “Security slowed us down”
We can move toward:
- “Security helped shape this design”
- “Security caught this early”
- “Security made the right thing the easy thing”
When security input is:
- Embedded
- Timely
- Automated
- Consistent
It stops feeling like enforcement and starts feeling like enablement.
That’s not something we’ve been able to do at scale before.
The Asymmetry Finally Favors Security
Attackers have always benefited from automation and scale.
AI flips that asymmetry.
Security teams can now:
- Reason faster
- Act broader
- Influence earlier
- Scale decisions without burning out people
Yes, the attack surface will grow.
But for the first time, our ability to manage risk can grow faster.
If we choose to use AI not just to observe risk, but to act on it.
Final Thought
The AI era doesn’t require CISOs to become machine learning experts.
It requires us to accept reality:
- The pace will increase
- The surface will grow
- Resources will not scale with risk
Success won’t come from doing more of the same.
It will come from rethinking where control actually lives — and moving security upstream, into understanding, architecture, and intent.
That’s how we stay relevant in the AI era.
