Extending Value Stream Mapping Notation For Value Creation Networks
When I first started reading about and using lean in the context of helping improve software delivery performance, the tools I used were either classic lean techniques (e.g. value stream mapping) or agile practices in a slightly rebranded way (i.e. my scrum board is a kanban board). This was in large part because most of the literature I was exposed to was material straight out of the manufacturing world, or the various Poppendieck books on lean software delivery, which are in my opinion great reads, but primarily focused on explaining why agile works using a lean vocabulary, rather than providing ideas on how lean can be used to extend some of the practices already in agile.
Over the last several years I’ve had the opportunity to interact with some great minds within the lean systems and software community. Obvious names like David J. Anderson, Don Reinertsen, and Alan Shalloway come to mind, but more importantly the general discussion within the lean for systems and software community has really reshaped the way I think about building systems of work to support the delivery of high-quality software.
One tool that has fallen out of favor with a significant number of the community is value stream mapping. Interestingly, this is a tool I continually reach for when trying to define either the current state of the software delivery process, or to help various stakeholders work out what the handoffs, activities, and order of work should be for a future project, a future program, or a future organizational design. That being said, I tend to agree with those in the community who feel that value stream mapping notation as it exists right now contains some significant flaws. The notation is really meant for stable, serial/linear processes. There is no real way to describe different strategies for handling variability, or the non-homogeneous nature of the work entering the system.
I’ve since come up with a notation that extends value stream mapping with some simple symbols that allow me to model what I perceive to be a value creation network. The notation is certainly in its alpha form; I’m sure I’ve missed a whole bunch of concepts. I’m sure there’ll be others out there who think that the notation still leads to defining processes that are too serial in nature. I’m hoping to post some example diagrams showing popular methods and approaches that are nonlinear in nature (e.g. extreme programming), and I’ll tweak the notation as I go.
If anybody wants a stencil that I created for OmniGraffle, you can find it here. This will allow you to leverage the value creation network notation to build your own diagrams using your Mac or iPad.
Classical Value Stream Mapping Notation
The icons below represent classical value stream mapping notation. This should be familiar to anybody who has done value stream mapping before; I have limited myself to only a couple of essential artifacts.
Key:
Explicit Activity that needs to be completed for value to be created
one of my favorites, the need for multiple activities to be completed by a co-located team, in order to deliver value
of course activities need people to get things done, color-coded for different specialties.
When people work, artifacts are created, and inventory builds up (unfortunately).
Ye old push model. Build a bunch of stuff, when it’s finished push it to the next team, hopefully they have capacity because your top-down upfront plan was so perfect :-)
or we could always allow downstream teams to pull work when they have capacity to do so, triggering that an upstream team can start work
people don’t just move inventory around, they also pass along information; this is probably even more critical in knowledge work than in manufacturing
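The difference between the push and pull icons above can be made concrete with a toy sketch. This is a minimal illustration, not part of the notation itself; the class and item names are my own invention. The key idea is that in a pull system, work only moves downstream when the downstream stage has free capacity under its WIP limit:

```python
from collections import deque

class PullStage:
    """A downstream stage that pulls work only when it has capacity."""
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.in_progress = []

    def has_capacity(self):
        return len(self.in_progress) < self.wip_limit

    def pull_from(self, upstream_done):
        # The pull signal: work moves only while this stage has room.
        # Anything left in upstream_done waits until capacity frees up.
        while self.has_capacity() and upstream_done:
            self.in_progress.append(upstream_done.popleft())

# upstream has finished three items; downstream can hold only two at once
done = deque(["story-1", "story-2", "story-3"])
dev = PullStage("development", wip_limit=2)
dev.pull_from(done)
print(dev.in_progress)  # ['story-1', 'story-2']
print(list(done))       # ['story-3'] stays upstream until capacity frees
```

A push model would instead append all three items regardless of the limit, which is exactly how inventory builds up in front of an overloaded team.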
Extending Value Stream Mapping to Handle Non-Homogeneous Work
Of course one big difference between knowledge work and manufacturing is the number of different sources of work, and how the risk, size, and knowledge required to do the work can change dynamically. Work can also break up and merge repeatedly, and work can be completed in parallel.
Here are some notations that I’ve used to try to capture some of these concepts.
All knowledge work stems from a good idea
hopefully most work delivers business value, work will often come in different sizes...
occasionally there will be emergencies, let’s hope that these emergencies are small in size
business reality will dictate that some work will have a fixed date, this also varies in size quite often
any optimal system should also spend a certain portion of its work on fine-tuning/optimizing itself and keeping technologies up to date; this work can get really big (think platform upgrades)
work can become blocked, which requires management intervention
hopefully the system will also generate ideas on how to improve the work, not just generate work itself
knowledge work will frequently start as a very large concept, scatter into finer grained projects, and further scatter into releases/stories/features/use cases/etc. so that it can be completed in small atomic units
likewise knowledge work will aggregate from discrete specific units into something of business value, think of how individual user stories can merge into a business release that the customer wants
describing work in terms of features has become increasingly popular in the lean/kanban world
a minimum marketable feature set represents the smallest collection of features that could be deployed as a unit giving meaningful business value
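The scatter/gather behaviour described above can be sketched as a trivial fan-out/fan-in. This is purely illustrative (the concept, project, and story names are made up); it shows a large concept scattering into per-project stories, then the discrete stories gathering back into a single release of business value:

```python
# scatter: one large concept fans out into projects, then into small stories
concept = "online checkout"
projects = ["payments", "cart"]
stories = {p: [f"{p}-story-{i}" for i in (1, 2)] for p in projects}

# gather: the discrete stories merge back into one business release,
# much like user stories merging into a release the customer wants
release = [s for p in projects for s in stories[p]]
print(release)
# ['payments-story-1', 'payments-story-2', 'cart-story-1', 'cart-story-2']
```

The point of the two notations is exactly this shape: work is only completable in small atomic units, but only valuable once re-aggregated.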
Notations for Knowledge Worker Interaction Patterns
Here are some patterns that I’ve observed using various different delivery models, with some thrown in from my readings of kanban and flow
an agile favorite: let’s get a dedicated team, hopefully cross-functional, definitely co-located, and have that team swarm on the various activities necessary to create value
teams can be formed as necessary from a greater pool of similarly skilled resources
there are situations (and I will argue this to death with the agile purists) where a cross-functional team can leverage the help of external specialists for specific tasks. Think of security/vulnerability specialists, legacy subject matter experts, or masters of a very esoteric business domain. In this situation teams can pull from a pool of specialist resources for the duration required, after which the specialists go back into the pool. This is a great pattern for handling the need for resources that do not have enough stable demand within any specific team to be full-time members of that team, but who still need to work closely enough with the team that creating a downstream specialist team doesn’t make sense
a certain portion of the work requires differentiated knowledge because of business or application concerns, or there is a reasonable amount of variability in demand across numerous business channels. It may make sense to create channel-specific entry points for each business client, but to back the people representing the multiple entry points with a common pool that possesses the skill sets necessary to handle the different channels of demand. This only makes sense if the multiple entry points are supported by some kind of common/cross-cutting set of skills
Notations That Describe Various Strategies to Handle Variability in Demand
Again, reading the works of Reinertsen and Anderson has helped me articulate many of the strategies that we knowledge workers use to handle variability; having these explicitly described is, I think, very helpful for all the obvious reasons
if there’s one thing that lean and agile thinking teaches us, it’s not to underestimate the value of slack time. Smart agile teams reserve capacity explicitly by limiting how many hours in the day are reserved for coding. The greater the cost of delay for a particular piece of work, the more critical it is that capacity be reserved to handle variability
if work is coming into a system with various risk/priority profiles, then it makes sense to have an understanding of this demand and to load balance it. This will allow knowledge workers to drop low-priority work for higher-priority work whenever it comes into the system. I really like this pattern, as it allows knowledge workers to reserve capacity for high-priority work by keeping themselves busy with lower-priority work that is still essential for improvement
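The load-balancing pattern above amounts to a priority queue that workers draw from: low-priority work fills slack, and a high-priority arrival preempts it at the next pull. A minimal sketch (the queue class and work-item names are my own, purely for illustration):

```python
import heapq

class LoadBalancedQueue:
    """Workers always take the highest-priority item next;
    low-priority items act as preemptible filler work."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: FIFO within a priority class

    def add(self, priority, item):
        # lower number = higher priority
        heapq.heappush(self._heap, (priority, self._counter, item))
        self._counter += 1

    def next_work(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = LoadBalancedQueue()
q.add(3, "refactor module")      # low priority, still essential
q.add(3, "update docs")
q.add(1, "production incident")  # arrives last but jumps the queue
print(q.next_work())  # 'production incident'
print(q.next_work())  # 'refactor module'
```

The essential point is that the low-priority items are never thrown away; they simply wait, which is what makes them a capacity reserve rather than waste.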
A T-shaped resource is someone who takes the time to become competent in both upstream and downstream processes. As he gets more senior, the T-shaped resource not only gets better at his core responsibilities, he also makes sure that he can contribute in other areas as well. Creating these kinds of people costs money, and makes them less effective at their core capability than if they practiced only their primary concern. This however is a worthwhile economic trade-off in almost all aspects of knowledge work, due to the high degree of variability. It is very hard to predict the exact amount of work required for each skill set; requiring more senior people to have a balanced set of skills is one of the best strategies out there for dealing with fluctuations in demand.
supporting two or more different teams with a part-time expert is another excellent strategy for handling variability. The part-time expert spends enough time with each team to understand its context and make himself useful as a part-timer. When one of the teams faces a spike in demand, the part-timer switches to full time (plus overtime if necessary) and significantly increases the team’s capacity until things get under control
Some Other Symbols That I Wasn’t Sure How to Categorize
unfortunately some work has to get done through meetings; wouldn’t it be nice if this were never so...
and the evil stage gate never seems to completely go away...
decision rules are great ways of constraining the environment so that not every decision needs to be made over and over again. Of course great care needs to be taken that decision rules do not become inflexible and permanent. I find that an organization needs to be relatively sophisticated and mature to approach this concept in an appropriate way.
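One way to think about decision rules is as data rather than meetings: codify the decision once, apply it mechanically, and revisit the rule table (not the individual decisions) when circumstances change. A toy sketch, with made-up work-item attributes and lane names:

```python
# decision rules as data: an ordered list of (predicate, outcome) pairs,
# so routing decisions are not re-made from scratch for every item
rules = [
    (lambda item: item["type"] == "emergency", "expedite lane"),
    (lambda item: item["size"] == "large", "split before intake"),
    (lambda item: True, "standard lane"),  # default rule, always last
]

def route(item):
    for predicate, lane in rules:
        if predicate(item):
            return lane

print(route({"type": "emergency", "size": "small"}))  # 'expedite lane'
print(route({"type": "feature", "size": "large"}))    # 'split before intake'
```

Keeping the rules in one visible, ordered structure is also what guards against the inflexibility concern: the table can be inspected and amended, whereas decisions buried in individual heads cannot.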
Some of the best thinking from the agile world revolves around how to create automatic signaling that kicks off an automated activity based on certain events; think of continuous integration, test-driven development, and the like
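The automatic-signaling idea reduces to a simple event/handler registry: an event fires, and every activity subscribed to it runs without anyone having to remember to trigger it. A minimal sketch (real CI tools like Jenkins or GitHub Actions express this in configuration, not code; the event and handler names here are invented):

```python
# a tiny event bus: handlers register for an event, signal() runs them all
handlers = {}

def on(event):
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

def signal(event, payload):
    # fire every activity subscribed to this event, in registration order
    return [fn(payload) for fn in handlers.get(event, [])]

@on("commit")
def run_build(payload):
    return f"building {payload}"

@on("commit")
def run_tests(payload):
    return f"testing {payload}"

print(signal("commit", "rev-42"))  # ['building rev-42', 'testing rev-42']
```

The value is exactly what the notation is meant to capture: once the signal is wired, the activity costs no human attention at all.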
here is a picture of the entire stencil