Do organizations think? Rarely, if ever.
It’s easy to fall into the trap of treating an organization, which can seem, at a distance, like a human-like actor, and ascribing decision-making power to it. What is usually happening instead is that humans are making decisions and taking actions, based on instructions they’ve received in the past, and learning from their own successes and failures how to adapt either their sensors or their choices.
As with humans, we can use the OODA Loop as a model for how teams within an organization behave. A team might be as small as a single individual or as large as a planet-spanning organization, as long as it responds to some input with an action. It’s often easier, and more helpful, to start by modeling smaller teams.
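To make that model concrete, here is a minimal sketch of a team as an OODA Loop, written in Python purely for illustration (the types and function names are invented, not a reference to any real system): sensors produce signals, and each signal is oriented, decided on, and acted upon.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Signal:
    """Anything a team's sensors pick up: a ticket, an alert, a directive."""
    source: str
    payload: str

def run_loop(
    sensors: Iterable[Callable[[], list[Signal]]],
    orient: Callable[[Signal], int],
    decide: Callable[[Signal, int], str],
    act: Callable[[Signal, str], None],
) -> None:
    """One pass through a team's OODA Loop."""
    for sensor in sensors:                     # Observe: poll each sensor for new signals
        for signal in sensor():
            priority = orient(signal)          # Orient: turn the signal into a priority
            action = decide(signal, priority)  # Decide: pick something from the playbook
            act(signal, action)                # Act: do the work (or defer or ignore it)
```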
It’s also worth remembering that what looks like a single action from the outside might actually be many different actions by different sub-teams. Publishing a blog post (like this one) might actually break down into sub-actions like these:
1. CMO expresses desire for a blog post on AI.
2. Advisory CISO pitches an idea.
3. CMO approves.
4. Content team puts it on the calendar.
5. Content team requests update from Advisory CISO (this step repeated many times between all remaining steps; redacted for clarity).
6. Advisory CISO writes first draft.
7. Content manager edits.
8. Advisory CISO rewrites.
9. SEO team reviews.
10. Advisory CISO rewrites again.
11. Content manager edits again.
12. Web team does a bunch of steps to publish.
13. Social team creates campaign.
14. Social team publishes, and notifies employees of the option to reshare/comment/like.
Observe
While humans rely on their senses to notice what’s happening around them, teams generally work at a step removed. They can (and sometimes do) act because a team member noticed something unusual in the world, but more often than not, a team has a set of sensors that trigger work. These might be inbound routine work requests, alerts from automated systems, or management directives. Sometimes a team might have a pre-filter, watching the world for specific things to trigger work on. Consider a security team instructed to “review all third parties.” It might have a work queue that Procurement or IT feeds with new third parties; but it might also have a sub-team that is expected to notice third parties on its own and bring them to the attention of the team – effectively running an OODA Loop as an input to the team’s OODA Loop.
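As a sketch of what those sensors might look like in the earlier Python model (the sources and payloads here are made up for illustration), note that the scouting sub-team’s entire Act step is just to emit a signal into the parent team’s queue:

```python
# Hypothetical sensors feeding one team's Observe step,
# using the Signal type from the earlier sketch.

def ticket_queue_sensor() -> list[Signal]:
    """Routine work requests filed by Procurement or IT."""
    return [Signal(source="ticketing", payload="New third party: example vendor")]

def alert_sensor() -> list[Signal]:
    """Alerts from automated systems."""
    return [Signal(source="monitoring", payload="Unreviewed SaaS tool detected on the network")]

def scouting_subteam_sensor() -> list[Signal]:
    """A sub-team running its own OODA Loop: it observes the world, decides
    something deserves the parent team's attention, and its Act step is
    simply to emit a signal into the parent team's Observe step."""
    return [Signal(source="scouting", payload="Marketing signed up for a new analytics tool")]

sensors = [ticket_queue_sensor, alert_sensor, scouting_subteam_sensor]
```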
Orient
For structured inputs from sensors, orientation can be thought of as “inputs to prioritization”: having received a signal that carries with it an implicit set of instructions, is this work important enough to do now? Is it deferrable to later (read: never)? Can we just ignore it? Since the work request is already recorded, this judgment is also implicitly recorded (which may shape the Decide step). If the team already has a published list of priorities, where does this work fall on the list?
For unstructured inputs, the team member first has to work out what the signal means in terms of their team’s mission (maybe this isn’t their job at all), what work would be necessary, and what priority that work is likely to receive if they take some action, before moving on to the Decide step.
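Continuing the earlier sketch, orientation might look something like this (the priority categories and the top-three threshold are invented for illustration, not a recommendation):

```python
from enum import Enum

class Priority(Enum):
    DO_NOW = "important enough to do now"
    DEFER = "deferrable to later (possibly never)"
    IGNORE = "not our job, or not worth doing"

def orient(signal: Signal, published_priorities: list[str]) -> Priority:
    """Turn a signal into a priority. For structured inputs the request is
    already recorded, so this judgment is implicitly recorded too."""
    if signal.source not in ("ticketing", "monitoring", "scouting"):
        return Priority.IGNORE  # maybe this isn't our job at all
    for rank, mission_item in enumerate(published_priorities):
        if mission_item in signal.payload:
            # Work matching one of the top few published priorities jumps the queue.
            return Priority.DO_NOW if rank < 3 else Priority.DEFER
    return Priority.DEFER  # unrecognized work lands in the backlog
```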
Decide
Decide is usually the easiest step for most teams, since they are already operating from a playbook. They have a set of actions that they are expected (or allowed) to take, and this step is just acting on the prioritization input from orientation. For many teams, especially if they are overworked, there is an additional choice available: defer the work by creating more work. When they’re unsure whether work is important enough to take priority, or so obviously unimportant that it can be parked, they can ask the requester to do more work to demonstrate priority (if the requester lets that go unanswered, clearly the work wasn’t that important to them).
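In the running sketch, Decide is mostly a mapping from the orientation output onto the playbook, plus the “make the requester prove it matters” escape hatch for an overloaded team (the action names are placeholders):

```python
def decide(signal: Signal, priority: Priority, team_is_overloaded: bool = True) -> str:
    """Pick a playbook action based on the orientation output."""
    if priority is Priority.IGNORE:
        return "close-without-action"
    if priority is Priority.DO_NOW:
        return "run-standard-playbook"
    # DEFER: an overloaded team can defer by creating more work for the
    # requester; if the request goes unanswered, it wasn't that important.
    if team_is_overloaded:
        return "ask-requester-to-justify-priority"
    return "park-in-backlog"
```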
Act
Act is the step where most teams spend the bulk of their time. Having standardized their sensors and inputs, and created methods of prioritization, now the team just needs to do something. And it’s something that they do regularly (at least at a high level; writing code is rarely the exact same task twice). But these actions fall into three widely different categories: Doing, Helping, and Stopping, although almost all teams think of themselves as Doers.
Doers
Doers have a really simple model of the world: they have tasks to accomplish, and they go do them. Maybe they are responding to support tickets, building software, or writing marketing copy. They observe a need, have to orient that need against their current priorities, decide how to solve it (or shelve it, delegate it, or ignore it), and then act to meet the need.
Helpers
Helpers are on the lookout for Doers who are engaged in specific types of tasks, and they help them out. Think of a release manager who coordinates software changes, an executive assistant who handles meetings and expense reports, or even the operations crew who cleans the office and refills the fridge. What makes Helpers an interesting subset of people and teams is that, by and large, they ought never to silently drop work: if they’re asked to do something, you can expect something to happen.
Stoppers
Who doesn’t love a good Stopper? Well, besides all of the Doers trying to get things done, that is. Stoppers exist because at some point, the organization failed in a bad way, and an executive said, “Let’s never do that again.” Some of those failures are so well known that organizations create Stoppers before they ever experience the failure themselves.
The Stopper’s primary job is to say, “No, don’t do that.” Whether it’s the lawyers making sure that bad contracts aren’t signed, or the HR team making sure labor law is followed … or the security team, trying to stop dangerous software, Stoppers all start from a mission of preventing work from happening. This puts them explicitly in tension with all of the Doers who are trying to get work done.
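One way to picture the three categories is as different default behaviors wrapped around the same Act step. This is a toy sketch, not a claim about how any particular team operates:

```python
class Doer:
    """Observes a need and acts directly to meet it."""
    def act(self, task: str) -> str:
        return f"did: {task}"

class Helper:
    """Watches for Doers engaged in certain tasks and pitches in;
    a Helper should never silently drop work it has been asked to do."""
    def act(self, task: str) -> str:
        return f"helped with: {task}"

class Stopper:
    """Starts from a mission of preventing bad work from happening."""
    def __init__(self, forbidden: set[str]):
        self.forbidden = forbidden
    def act(self, task: str) -> str:
        if task in self.forbidden:
            return f"blocked: {task}"  # "No, don't do that."
        return f"did not object to: {task}"
```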
Building Organizations
The fascinating thing about organizations is that individuals and teams are often implementing their own sensors, and part of their Act choices involves outputting information to other teams’ sensors. Consider the classic Board report on cybersecurity, which most CISOs now deliver monthly or quarterly. That report is a sensor being used by the executive team… but creating it is a decision and set of actions that the security team is taking based on its own sensors.
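In the terms of the running sketch, that report is just one team’s Act step producing a signal for another team’s Observe step (the contents here are placeholders):

```python
def security_team_act() -> Signal:
    """The security team's Act step: summarize its own sensors into a report."""
    findings = ["3 critical vulnerabilities patched", "2 new third parties reviewed"]
    return Signal(source="security-team", payload="; ".join(findings))

def executive_board_sensor() -> list[Signal]:
    """The executive team's sensor is simply the report another team produced."""
    return [security_team_act()]
```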
Most teams inside an organization will try to work as atomically as possible: they have sensors, and they engage their decision-making based on those sensors. Anything that no team’s sensors are tuned to pick up will quietly go unaddressed, so organizational leadership needs to understand what work is falling by the wayside when looking at the organization they have.
Coming Soon…
Now that we’ve explored a bit about how humans make decisions, and how organizations are a little like humans, next we’ll look into some of the types of AI, and how to incorporate those into your workflows.