Saturday, January 30, 2010

Tackling LEAN change week 1

Well, I have finished my first week of a two-month initiative to help a very large, conservative organization start its journey toward becoming a solution factory based on lean principles.
http://agileconsulting.blogspot.com/2010/01/tackling-agile-organizational-change.html

First of all, I want to thank the folks on the agile lean Yahoo group for some great advice and feedback.

http://tech.groups.yahoo.com/group/leanagile/message/4692

I have had a chance to talk to the CIO and get a sense of what he means by a solution factory.
-clear visibility into how things are being done and into progress (i.e. the catwalk over the factory floor)
-flexible assembly lines where parts of the supply chain can be interconnected in different ways to provide value
-an environment where workers can be proud of the work they do, and actually want to be more productive

I also had an opportunity to discuss lean with the architecture group. This was a great conversation. The group had a keen awareness that command and control and throwing things over the wall were not working. We had a great discussion on lean IT governance (ftp://ftp.software.ibm.com/software/rational/web/whitepapers/Lean_Development_Governance.pdf) a la Scott Ambler and Per Kroll. The architects really got it, and were generally enthusiastic about trying to break down the wall between architecture and delivery.

I asked the architects to give their opinions on how we should proceed in building a vision and roadmap, by asking them whether they leaned to the left or the right on the following:

visionary <*----> pragmatic
educational <*-----> self learning
e2e value stream <---*-> IT perspective
planning <-*---> doing

I was quite surprised by the almost unanimous desire to create an ideal state. I think the ideal state is important; it motivates and energizes people to push beyond the possible and truly excel.

I perceived the really strong desire for education over self-teaching as a general apprehension about what lean would mean to the organization. It's clear to me that people here are looking for answers. I agree that education from the outside is crucial, but I hope I can hammer home the concept that people need to get into a self-learning mode. (Baby steps.)

I'm fairly concerned about the IT focus versus end-to-end value. The rationale given is that the IT group already knows the problems of the business, and that the IT group would like to get its house in order before approaching the business with its desire to go in a lean direction. My major concern is that the idea that "IT already knows" is a root cause of IT-business misalignment. I'm also concerned that some leadership is emphasizing efficiency over effectiveness.

That being said, I think the evidence I've collected so far signifies a genuine desire to provide better value to their customers, a hunger for better collaboration, and a real understanding that it will be the people on the shop floor who will make this successful.

That's it until next week.



Location: Brock Ave, Toronto, Canada

Sunday, January 24, 2010

Tackling Agile Organizational Change

I'm just about to start a new project where I will have the chance to assist a client in setting up a "software solution factory" based on Lean principles.

The client has had some serious challenges relating to software delivery in the past, and is supporting systems with a significant backlog of defects which doesn't seem to be getting any smaller any time soon. The client is also in the initial stages of some very ambitious systems replacement initiatives, and would like to get things right this time.

Using a mixture of lean and agile principles clearly can offer a lot of value to the client. But historically the organization has many elements that could make lean change challenging; a tendency to rely on bureaucracy and collective bargaining are a few examples.

The client wants help with the typical things that are part of any transformation effort,
ie: establishing a vision, communicating the urgency of the problem, listing challenges, developing a target, and developing a plan.

I see a number of potential ways to tackle this problem, and would really appreciate other people submitting thoughts and ideas. The more fresh eyes I can get on this problem the better.


It should be noted that these ideas are not mutually exclusive. But there is a limit to what can be accomplished during this engagement, so I've tried to bucket options in a sensible way.

OPTION 1: Realist and Careful

-In this approach the initial effort would be focused on carefully cataloging the current state.

-Structure, work habits, technology, and HR would all be reviewed and assessed.

-These inputs would be used to identify the biggest issues, and a realistic roadmap would be created showing target states over time (1 year, 3 years, etc).

This approach has the benefits of being easy to scope, and allows the client some time to prepare the message in a way that minimally disrupts the way things currently work.

However, this is my exact issue with the approach: if things are broken, what exactly is wrong with disrupting them if a better outcome is the result? A conservative approach has the most chance of becoming shelfware. Disruption is a critical part of any change; we need to test the organization's resolve at some point, and in my mind the sooner the better.

OPTION 2: Idealist and Disruptive

In this approach I would

- spend much less time collecting empirical evidence, but do the minimum necessary to give me context.

- create an idealized vision of the way the organization should work. Every article I read on organizational change and lean seems to indicate that an idealized vision motivates people to stretch themselves toward excellence. This intuitively makes sense to me.

- quickly identify groups that could benefit from agile lean right now, and coach them to some degree of success.

- set up a "supply chain" of analysis, education, and adoption. The idea is to get a repeatable process instantiated that would allow my client to increase internal capability as quickly as possible.

- instrument adoption on as many parts of the organization as possible. Then learn from the experience, then optimize the education supply chain. Plan, Do, Check, Act

I really like this approach as it has the opportunity to offer real value, and it is inherently lean (use lean to bootstrap lean). However, this client is conservative, and may not be able to move this quickly. This approach is also really hard to scope; so much depends on the client stepping up to the table.

OPTION 3: Educate and Participate
- get context, focusing on skills and ethics gaps
- create an online education forum where practices, principles, and other educational material can be posted and improved in a structured and collaborative fashion.
- create a repeatable practice around self-serve education, slowly releasing the training reins to the client
- hold an organization-wide conference with the dual purpose of education and collaboratively developing a transition roadmap

This approach has the benefit of involving, or attempting to involve, a large portion of the organization. It also can help spread the message far and wide across the organization. My main issue with this approach is that education quickly becomes stale if it's not used. Also, education alone is not sufficient for adoption; hands-on mentoring is essential as well.

Again, insight from the community would sincerely be appreciated. I promise to post updates on my progress in the hopes that this will help others who are on the same journey. I am also hopeful that this level of public discourse will help my client get the most they can out of going in a lean direction.

Monday, December 28, 2009

Agile Documentation

As a new theme on my blog, each week I will pull out one of the LEAN practice cards to discuss some of the concepts behind it, as well as add any colour commentary I may have based on my own project experiences.

The first card I thought would be interesting to look at is a Management Practice titled "Agile Documentation". Documentation is often overemphasized by some project teams or ignored by others, depending on whether they subscribe to a waterfall or iterative process. There is no clear-cut answer on exactly how much documentation should be done, as it depends on a variety of scaling factors.

Regardless of which development process your team may subscribe to, if we go back to the "why" behind documentation, I think applying Agile Documentation will help any team minimize documentation waste while also providing the benefits of documentation (yes, documentation is a good thing; even agile teams need to document, and going agile is not an excuse to avoid documentation).

The LEAN Agile Documentation card states:

Document...
- When the business asks you to
- For clarity across teams working in different locations
- To establish appropriate business and technical context for the solution
- To define interfaces across systems
- To enable your next effort
- Informally
- To understand
- To communicate

Don't Document...
- To specify
- Because your process says so
- Unproven ideas (prototype it first)
- Without obtaining the structure from your intended audience
- Without considering maintenance and cost of ownership
- Implementation and design details that can be just as easily expressed in the solution itself

There are three main themes I want to pull out from this card that I think are worthwhile to discuss.

1) Document for an audience - Documentation requires time and effort, and the business is paying for it just like they are paying for each feature built into the solution. Every project should contain estimates of the time and effort (i.e. cost) going toward documentation. If you look at many organizations and their development processes, you will notice there is often a large set of documents that must be completed as part of a project. However, it's worth questioning: does the business really need each one of these documents? Do the technology teams building and supporting the solution need them? If the team is documenting...

- Because your process says so
- Without obtaining the structure from your intended audience

Then it's likely the team is not documenting for the business. Generally, the types of documentation I find invaluable to the business and to the technology teams supporting the solution are operations and support manuals (e.g. run books), developer setup and build manuals, end user manuals, and the delivery or release plan. This set may need to be expanded depending on the project.

2) Avoid documenting things that will change - Just like code and test cases, documentation is impacted by change. To avoid paying the "rework" cost, hold off on documenting until a steady state is achieved (document at the last responsible moment), and stay away from implementation and detailed design content that would require frequent changes to the documentation.

3) Document to understand, communicate and establish contracts - It's important to recognize when scaling factors impact your project; documentation is a great way to mitigate these risks and challenges. When your team is:

- Geographically distributed
- Large and difficult to manage
- Working in a complex business domain

Then documentation helps everyone speak the same language, communicate decisions and changes, and establishes contracts that help each member understand how their work interacts with everyone else's. The types of documentation I find myself often using to help the team navigate these challenges are the high-level requirements (e.g. use case hierarchy) produced from the initial requirements envisioning sessions, the ubiquitous language dictionary and high-level domain model, system context diagrams and bounded contexts, and system interfaces. An extremely effective practice that I often apply to documenting system interfaces or complex business domains is to use "executable requirements". Writing test cases as a form of documentation is a great way to precisely capture the details while also validating for correctness.
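To illustrate that last point, here is a minimal sketch of an executable requirement. The `order_total` function, its tax rates, and the test names are all invented for this example; the idea is that the test reads like documentation of a business rule while also being runnable.

```python
import unittest

def order_total(subtotal, province):
    """Hypothetical pricing rule, used only for illustration:
    Ontario orders are charged 13% HST, other provinces 5% GST."""
    rate = 0.13 if province == "ON" else 0.05
    return round(subtotal * (1 + rate), 2)

class OrderPricingContract(unittest.TestCase):
    """Each test documents one business rule, readably and verifiably."""

    def test_ontario_orders_include_13_percent_hst(self):
        self.assertEqual(order_total(100.00, "ON"), 113.00)

    def test_other_provinces_include_5_percent_gst(self):
        self.assertEqual(order_total(100.00, "AB"), 105.00)
```

Run with `python -m unittest`; if the rule changes, the "document" fails loudly instead of silently going stale.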

I have found these three themes combined with the bullet point checklists in the Agile Documentation practice card to be incredibly helpful in deciding what and when to document.

Wednesday, November 4, 2009

Agile isn't Just About Change

Over the last couple of months several clients of the firm I work for have stated that agile software delivery was not suitable for a particular development project because the need to continually embrace change was not there.

I hear this so often that I feel I need to emphatically point out that agile development practices can provide a lot of value to "fixed" projects. A few such practices are below.

Iterative development: Even if change is not the order of the day, breaking work into small chunks will go a long way toward mitigating a whole slew of implementation risks.

Test Driven Development: Anyone who has spent serious time practicing TDD can attest that TDD leads to better design than traditional development. It might not be practical in all situations, but where it is, TDD lets you safely take a second and third pass at your design.

Behaviour Driven Development: Having a consistent, business-friendly format that describes requirements and test cases while supporting automation is just plain common sense, and a great way to specify contracts between teams on large-scale projects.

Continuous Integration: Code integration is never a fun job, and the longer you put it off the harder it gets, regardless of how much change is expected in a project.

Planning Poker, Collaborative Modeling, Daily Standups, Retrospectives and Agile Planning Boards: Because "static" projects will also benefit from approaches that increase collaboration, penetrate organizational silos, and encourage the people actually doing the work to take part in the planning process.

Hopefully the point is made: software projects of almost any shape should strive to incorporate as many agile practices as possible.

Practicality and common sense should be used to determine which practices apply, but rate of change should not be the only value driver. Agile practices reduce project risk, increase quality, and reduce churn and rework.

Tuesday, November 3, 2009

Complex Up Front Estimation Tools Drive Me Crazy

Way too often I see both my colleagues and clients put way too much effort and stock into building the uber estimation toolkit.

I'm sure many of you have seen this type of tool: the spreadsheet that uses function points, component complexity factors, non-functional adjustments, and some really clever math to produce an impressive, but largely fictional and arbitrary, guess of what the actual effort of a software implementation will be.

You know you are using one of these tools when:
-you feel like you are doing requirements modeling in a spreadsheet
-you are doing architecture and design in your head so that you can estimate the exact detailed code components you think you need to build
-you are estimating at the hourly level for tasks that will be completed over 6 months from now.

I know that many of us have been the author and user of some of these tools (I know I have built some wicked estimation spreadsheets). But the problem with these tools is that they hide the fact that software development is an exercise in discovery and adaptation. Planning is essential, but creating uber-detailed estimates and uber-detailed plans for software development is a lot like creating an uber-detailed plan for a vacation. Setting in stone that you have to go to the beach on a particular afternoon can easily be waylaid if a storm comes in; planning to bring swimwear because you expect to go to the beach at some point just makes sense.

The same type of reasoning goes into software planning and estimation. Deciding that business requirement A is 50% harder to build than requirement B is a whole lot more sensible than trying to calculate the work involved in both requirements to the exact hour. The high-level estimate is likely to be just as accurate as the one developed by the complex tool, and way easier to change if it's wrong.
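To make the contrast concrete, here is a minimal sketch of relative estimation (the item names, point values, and velocity are all hypothetical): size items against a baseline, then let measured velocity turn points into a forecast, rather than calculating hours per component.

```python
# Relative estimation sketch: compare items to a baseline instead of
# summing hour-level guesses, then forecast from measured velocity.

backlog = {
    "requirement B": 8,    # baseline item: 8 points
    "requirement A": 12,   # judged 50% harder than B -> 8 * 1.5
    "requirement C": 5,
}

velocity = 10  # points the team actually completes per iteration, measured

total_points = sum(backlog.values())       # 25
iterations = -(-total_points // velocity)  # ceiling division -> 3

print(f"{total_points} points at velocity {velocity} ≈ {iterations} iterations")
```

If the sizing of A turns out to be wrong, you adjust one number; a hundred-row hourly breakdown would have to be reworked line by line.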

Here are some pitfalls I see with complex estimation toolkits:

1) Requirements, even when well written, always end up being notoriously inaccurate.

Blame it on users always changing their minds, subpar analysts, the immaturity of the field, etc. But what you based your estimates on will often end up being dramatically different from what you end up building.

2) Technology effort is based on platforms that are changing dramatically, and the diversity is overwhelming.

Don't think you can take your fancy web estimation toolkit based on J2EE version XXX, apply a few tweaks, and then use it for your upcoming Ruby on Rails project, especially if you don't have significant Rails experience. Likewise, your innovative portal/SOA estimator most likely won't cut it when trying to size an RIA/REST solution, especially if you are estimating at a very detailed coding-component level.

3) Development is not even close to a linear exercise.

The first few times a team builds a component of type X, it will take a lot longer and be a lot riskier than after the tenth. Over time the team should be building the solution in such a way that the tenth component of type X is way easier to implement, especially if they are working in an intelligent fashion.

4) A heavy dose of guesstimation fudge is applied to even the most rigorous of models.

Every complex estimation toolkit has a way to parameterize the output with a heavy dose of fudge. Frequently the estimator will look at the estimate produced by the wonderful tool, sniff, and apply a 40%+ adjustment to make it feel right.

5) These tools are incapable of incorporating the biggest inputs to software scheduling and effort variance: human factors.

Below is a tag cloud that shows some of the biggest factors to consider when estimating software delivery.

While I'm happy to debate the exact priorities of these factors, I'm pretty adamant on a couple of key points.

- Human factors trump development factors: Now I'm not saying that requirements complexity and target platform are not important (they are), but you can take two identical requirements on an identical platform, give them to two different teams, and see an exponential difference in output.

The same goes for the amount of organizational churn you have to go through to get something delivered. I have observed (and been guilty of being part of) teams whose failure to properly account for bureaucracy led to huge variances from original estimates. In simple terms, you won't know how long it takes a particular organization to digest and deploy new software until you have hands-on experience with that organization.


Business ownership is another huge one; more and more developers are realizing that software delivery is primarily an exercise in communication. Not having accountable business owners properly at the table will have a massive impact on estimates.

Another way of interpreting the above cloud is as follows:

Consistent tools don't lead to consistent results, consistent experience does

Good estimates come from experts who have done the work before, have used the technology in question, understand the business, understand the organizational context, and know the capability of their team members. In my experience this is a tall order, and as a result I have not heard of very many good estimates.

Good (and more achievable) estimates are done frequently, by a cross-functional team, by the people who will actually be doing the work, at a level of detail that matches how soon the work will be done (but that is a topic for another post).

Recently I was in one of 4 groups asked to estimate a web site. One approach used function points, one used component complexity, one used a page complexity tool, and the last used planning poker. The estimators were all experienced web developers.

They all came within 20% of each other. So thanks, but I'm sticking with planning poker.

Do estimation tools have any use?

Absolutely, yes.

They can provide structure to the thinking of an experienced craftsman.

They can help articulate the problem space and highlight any unknowns.

But a little goes a long way, and no estimate should be confused with fact, no matter how detailed it is.


Friday, October 30, 2009

Testing Architecture

There are many aspects to look at when trying to define or describe the architectural elements of a system:
-application architecture
-infrastructure architecture
-security architecture
Etc, etc, etc....

One aspect that I see completely missed by most software vendors, system developers, and integrators is what I call Testing Architecture.

I define testing architecture as a description of the testing tools, products, and other system components dedicated to supporting the testability of the solution.

Testing architecture describes the patterns used to support building testable components, and provides guidance on when and how to utilize stubs, mocks, data setup and cleaning services, automated testing frameworks and other testing artifacts.

Finally, testing architecture describes test coverage metrics for various aspects of the solution, and lists any constraints that may limit the use of automated testing.
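As a small illustration of the kind of guidance such an architecture might cover, here is a hedged sketch of a mock standing in for an external dependency, using Python's unittest.mock (the `CheckoutService` and its payment gateway are hypothetical; a real testing architecture would say where and how doubles like this are allowed):

```python
from unittest.mock import Mock

class CheckoutService:
    """Hypothetical service that depends on an external payment gateway."""

    def __init__(self, gateway):
        # Dependency is injected, so tests can substitute a test double.
        self.gateway = gateway

    def checkout(self, amount):
        if not self.gateway.charge(amount):
            raise RuntimeError("payment declined")
        return "receipt"

# In a test, a mock replaces the real (slow, external) gateway entirely:
gateway = Mock()
gateway.charge.return_value = True

service = CheckoutService(gateway)
assert service.checkout(25.00) == "receipt"
gateway.charge.assert_called_once_with(25.00)
```

The design choice that makes this possible (injecting the gateway rather than constructing it internally) is exactly the kind of testability pattern a testing architecture should prescribe.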

It's time for major software vendors to take a page from many OSS groups and make tests and testability far more prominent in their solutions.


Jef

Tuesday, October 13, 2009

Successive Planning Poker

While I think planning poker is an excellent way to estimate any software delivery work, I've noticed that actual development work is sometimes started too soon, supported by a backlog and release plan made up of largish items (12-20 ideal days).

While agile methods espouse getting started as soon as possible, I think a little more up-front analysis can actually help reduce rework and eliminate ambiguity.

I've come up with something I call successive planning poker. The premise is simple:

1: Conduct an initial round of planning poker

Frequently the estimating group will end up with lots of items that have big estimate ranges (ie 10+ ideal days). Large estimates represent work that is not well understood, contains many assumptions, and is riddled with unknowns.

2: Conduct analysis and research activities necessary to resolve ambiguities in any large estimates

During the initial planning poker session, make sure to keep careful track of ambiguities and uncertainties that could be resolved with a reasonable amount of research. Because the group doing planning poker is cross-functional, they should be skilled enough to answer questions ranging from existing legacy functionality to product limitations to existing business processes or policies. Give team members a day or two to come back with answers.

3: Conduct a successive round of planning poker

Go through each largish estimate from the initial planning round and use the answers and research completed by the team to break the work items into smaller, more manageable pieces, estimating these pieces independently, hopefully ending up with work items in the 1-8 ideal day range.

4: Repeat

Repeat successive rounds of estimating and researching until you have a quality backlog of items fine-grained enough to tackle within an iteration in a reasonably predictable manner.
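The loop above can be sketched roughly as follows. The 8-day threshold, the item names, and the `split` function are illustrative only; in practice the "splitting" is the team's research and re-estimation, not a formula.

```python
# Successive estimation sketch: items above a size threshold go back
# for research and splitting until the backlog is 1-8 day pieces.

TOO_BIG = 8  # ideal days; larger estimates hide assumptions and unknowns

def needs_research(item):
    return item["estimate"] > TOO_BIG

def refine(backlog, split):
    """split(item) -> smaller estimated items, produced after research."""
    while any(needs_research(i) for i in backlog):
        big = [i for i in backlog if needs_research(i)]
        small = [i for i in backlog if not needs_research(i)]
        backlog = small + [piece for item in big for piece in split(item)]
    return backlog

# Hypothetical outcome: research shows a 16-day item is really two 6-day items.
result = refine(
    [{"name": "billing", "estimate": 16}],
    lambda item: [{"name": item["name"] + "-a", "estimate": 6},
                  {"name": item["name"] + "-b", "estimate": 6}])
print([i["estimate"] for i in result])  # [6, 6]
```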

While the above approach might be obvious to some, I've been on several projects where we didn't do this and the first few iterations really suffered; of course, a couple of iterations of development did a lot to smooth out future iterations. Still, I think just a little more planning diligence (done in a collaborative and cross-functional way, of course) could have helped gain the same information faster and far more easily.

Sunday, July 19, 2009

A Couple of More Agile Flashcards

A couple of weeks ago Jeff Langr and Tim Ottinger asked the public at large to contribute a couple more ideas for agile flashcards on their site Agile in a Flash.

I decided to quickly create a couple which you can find at their site, but I've listed them again here just for reference purposes.

Of course, I probably need to spend a little bit more time on them to polish and clean up the quality of the grammar.

-Domain Driven Design (by Eric Evans)
-Agile Documentation (by Scott Ambler)
-Agile Modeling (by Scott Ambler)
-Product oriented development lifecycle
-Enterprise stakeholders

Here are some other ones I hope readers also find useful...
========================
Behavior Driven Development:
1) use story-driven scenarios to specify the acceptance criteria for an iteration

Given [some State]
And [some more state]...
When [an activity completes]
And [another activity completes]...
Then [an assertion is true]
And [another assertion is true]...

2) wire these acceptance criteria into an automated acceptance testing framework (like FitNesse)

3) have your developers write code for the iteration until the acceptance criteria pass

-make sure to use the ubiquitous language when defining your stories/tests
-focus your requirements and quality testing efforts on designing these executable requirements at the beginning of an iteration
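A minimal sketch of one such scenario in plain unit-test form (the account-transfer story is hypothetical; a real setup would wire the Given/When/Then text into an acceptance testing framework such as FitNesse rather than hand-coding it):

```python
import unittest

class Account:
    """Hypothetical domain object, for illustration only."""

    def __init__(self, balance):
        self.balance = balance

    def transfer_to(self, other, amount):
        self.balance -= amount
        other.balance += amount

class TransferScenario(unittest.TestCase):
    def test_transfer_between_accounts(self):
        # Given an account with a balance of 100
        savings = Account(100)
        # And an empty chequing account
        chequing = Account(0)
        # When 40 is transferred
        savings.transfer_to(chequing, 40)
        # Then the balances reflect the transfer
        self.assertEqual(savings.balance, 60)
        self.assertEqual(chequing.balance, 40)
```

Note how the comments preserve the Given/When/Then structure, so the test doubles as the story's acceptance criteria.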
========================
The Kano model of customer satisfaction:
-when creating a backlog for your customers, categorize items into:
-mandatory
-linear
-exciters
-reverse (don't implement these :-))

Do this by developing a questionnaire that asks different stakeholders how they would feel if a specific feature were present or not present:
1. I expect it to be this way
2. I like it that way
3. I am neutral
4. I can live with it this way
5. I dislike it this way

Both negative and positive answers can be cross-referenced to determine the category of the feature.

See http://agilesoftwaredevelopment.com/2006/12/kano-questionnaires for an example...
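The cross-referencing step can be sketched as a simple lookup. This is a simplified form of the usual Kano evaluation table, with categories phrased to match the card (mandatory, linear, exciter, reverse); the full table also distinguishes questionable answer pairs.

```python
# Each feature gets two answers on the 1-5 scale above: one for
# "feature present" (functional) and one for "feature absent" (dysfunctional).

LIKE, EXPECT, NEUTRAL, LIVE_WITH, DISLIKE = 1, 2, 3, 4, 5

def kano_category(functional, dysfunctional):
    if functional == LIKE and dysfunctional == DISLIKE:
        return "linear"     # more is better
    if functional == LIKE:
        return "exciter"    # delights when present, tolerated when absent
    if dysfunctional == DISLIKE:
        return "mandatory"  # expected; its absence causes dissatisfaction
    if dysfunctional == LIKE:
        return "reverse"    # stakeholders prefer NOT having it
    return "indifferent"

print(kano_category(LIKE, DISLIKE))     # linear
print(kano_category(NEUTRAL, DISLIKE))  # mandatory
print(kano_category(LIKE, NEUTRAL))     # exciter
```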

================================
Value Stream Mapping:
The best way to determine how much agile can help an IT delivery organization is to perform a value stream mapping analysis.

Measure the ratio of work time to wait/waste time for both the current and the suggested future development process.

Do this for
-larger features
-typical bug fixing
-emergency fixes

(Picture here)
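The work-to-wait ratio is often expressed as flow efficiency. A minimal sketch, with purely illustrative numbers for the three work types above:

```python
# Flow efficiency = hands-on work time / total elapsed (lead) time.
# All figures below are invented for illustration.

value_streams = {
    # work type: (work hours, wait/waste hours)
    "larger feature":  (40, 360),
    "typical bug fix": (4, 76),
    "emergency fix":   (3, 1),
}

for name, (work, wait) in value_streams.items():
    efficiency = work / (work + wait)
    print(f"{name}: {efficiency:.0%} flow efficiency")
```

Comparing these percentages before and after a process change makes the benefit (or lack of it) concrete.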

=========================
Web 2.0 Based Documentation:
Make sure all documentation, including technical specifications, standards, decisions, etc., is placed on a web-based collaboration platform.

This collaboration platform must
-allow all participants to modify content
-be available to internal staff, contractors, and outsourcing partners
-promote an open, collaborative approach to running projects

Excellent examples are wikis, blogs and user forums...

Wednesday, June 24, 2009

Appreciative Inquiry

A colleague of mine has suggested a new approach to getting delivery teams to improve the way they do their work.

It's called Appreciative Inquiry; the basic idea is that organizations should spend more time focusing on what works, and expanding those practices, rather than focusing on the negative.

It's pitched as being the opposite of problem solving; the following is a good writeup from Wikipedia:

"
AI focuses on how to create more of the occasional exceptional performance that is occurring because a core of strengths is aligned. The approach acknowledges the contribution of individuals, in order to increase trust and organizational alignment. The method aims to create meaning by drawing from stories of concrete successes and lends itself to cross-industrial social activities. It can be enjoyable and natural to many managers, who are often sociable people.
"

One thing that can be said about agile/lean practices is that there is a huge focus on what's broken, not on what is currently working: e.g. up-front requirements lead to rework, unfinished code leads to waste, untested code leads to fragility, etc. A different tactic using the AI approach would be to focus on the benefits of software in the first place, i.e. that software gives businesses agility, and use that as a metaphor to expand how we should approach delivering software for organizations in general.

Our team plans to use this approach on our next retrospective, I'll make sure to share the results here.

Monday, June 22, 2009

Experience Reports from the Agile Trenches: What's Really Working, What's Not Working, and Why


A couple of weeks back, I was invited to speak at the Rational Software Conference 2009.

I had the pleasure of taking part in the Experience Reports from the Agile Trenches: What's Really Working, What's Not Working, and Why panel hosted by Scott Ambler.

It was a great opportunity for me to discuss my experiences deploying agile techniques and tools as part of Deloitte. The panel consisted of Mark Lines, Nate Oster, Anthony Crain, Scott Ambler (facilitator), Jeff Anderson, Shawn Hinrichs, and Matt Heinrich.

There were a lot of interesting questions raised by the audience, including:
1) how do you deal with agile in a regulatory environment?
2) how does agile work in distributed/offshore scenarios?
3) how do you address skeptical business owners that agile can work?
4) where does agile work? And where doesn't it?
5) How does agile work on large teams?

Rather than go through the specifics of each question and answer, I thought I would point out what I thought was the most interesting disconnect between the answers given by the panel members, including myself.

Basically, the panel could be divided into individuals with a more Unified Process oriented bias, and those that seem to follow a more pure form of agile. (I fell into the latter camp.)

One perspective, raised by Anthony, was that there was nothing really wrong with the way the Rational Unified Process works in its current form. The agile counterpoint was that RUP, when performed incorrectly, results in an overly complex, waterfall-style approach (okay, a mini-waterfall).

Some members of the Rational camp (**ahem** Anthony) didn't particularly see any issue with this. They raised a valid point that many software organizations currently perform using a macro-waterfall strategy, typically deploying software every 3 to 4 months. Getting them to use the Rational Unified Process and deploy every 6 to 8 weeks is a huge improvement.

For my part, I can't disagree with this, but a miniature waterfall is not the best way to be agile. More importantly, IMHO this is missing the point of trying to be agile in the 1st place.

Agile and lean development processes are implemented because of their potential to reduce waste. According to lean manufacturing principles, inventory is one of the biggest causes of waste and inefficiency. In software delivery terms this inventory is unfinished code, which can take the form of:
1) documentation representing code that is not in production
2) a particular branch of code that has not been merged into the trunk
3) code that is not being used by users

I'm sure I've forgotten a couple of examples, but the point is plain: artifacts that are developed can quickly become a source of inefficiency if they lie around too long. The sooner you can take an idea through the lifecycle, the fewer "secondary artifacts" you have to manage; these include change requests, defects, test cases, and the list goes on and on and on...

Of course, this doesn't address my major issue with the RUP: namely, that it is simply too complicated. Rather than reiterate all my issues with the approach, I invite any reader to take a look at my article on the pros and cons of the Rational Unified Process for more details.

Also, I recommend taking a look at how I combine agile and RUP to create a process that is both agile and scalable.

I also had a conversation with one of the panelists afterwards (okay, it was Anthony again), who made the comment that agile wouldn't exist if we had been able to understand RUP in the first place. The important point here is that products, whether they be methodologies, software, or something else, need to be simple and well understood by consumers in order to be effective. This is another point fundamental to agile that I think is lost on many in the RUP world: things sometimes need to be complicated in order to be effective, but taking simplicity as a value, and making every effort to work towards simplicity, is essential to creating any effective product, especially a software development lifecycle framework.


That all being said, RUP, and the unified process in general, is still one of the best families of software processes out there. Let's hope it keeps receiving criticism, and improving as a result.

Monday, April 6, 2009

My Custom Agile Flashcards

In a previous post I mentioned how impressed I was by the concept of Agile Flashcards, so I started to put together some flashcards of my own to reflect some of the best practices that I use on my projects. Here they are below:


Domain Driven Design


by Eric Evans



  • Articulate and encapsulate the business logic of the system into one or more software models

  • Organize and abstract the knowledge of multiple business SMEs, partitioning business logic into:


    • Entities

    • Value Objects

    • Aggregates

    • Repositories

    • Factories

    • Services

    • Specifications

  • Refactor the model to reflect the realities and limitations of the target technical platform (avoid ivory tower models)

  • Build a model that both technology and business can understand, providing a Ubiquitous Language for the project
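As a companion to the card above, here is a minimal sketch of a few of these building blocks in Python. The Money value object, Customer entity, and CustomerRepository are hypothetical examples of my own, not taken from Evans' book; they only illustrate the entity / value object / repository distinction.

```python
from dataclasses import dataclass, field

# Value Object: identity is defined entirely by its attributes, so it is
# immutable and two instances with the same attributes are interchangeable.
@dataclass(frozen=True)
class Money:
    amount_cents: int   # stored in cents to avoid floating-point rounding
    currency: str

# Entity: identity is defined by a stable id that survives attribute changes.
@dataclass
class Customer:
    customer_id: int
    name: str
    balance: Money = field(default_factory=lambda: Money(0, "CAD"))

# Repository: hides storage behind a collection-like interface.
class CustomerRepository:
    def __init__(self):
        self._by_id = {}

    def add(self, customer):
        self._by_id[customer.customer_id] = customer

    def find(self, customer_id):
        return self._by_id.get(customer_id)

repo = CustomerRepository()
repo.add(Customer(1, "Alice"))
```

The same partitioning scales up: aggregates group entities under one root, while factories and services carry the creation and business logic that doesn't naturally live on a single entity.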

Agile Documentation


by Scott Ambler


Document...


When the Business Asks you to


For Clarity Across Teams working in different locations


To establish appropriate business and technical Context for the solution


To define Interfaces across systems


To Enable your Next Effort


Informally


To Understand


To Communicate


Don’t Document...


To Specify


Because your Process Says So


Unproven ideas (prototype it first)


Without obtaining the structure from your Intended Audience


Without Considering Maintenance and cost of ownership


Implementation and Design Details that can be just as easily expressed in the solution itself


Essential Documents


Operations and Support


Developer Setup & Manuals


User Manuals


Delivery Plan


Phases of a Product Based Lifecycle


Lightly apply the concepts of a product-based SDLC on large-scale, enterprise projects...



  • Inception generate a vision, build the business case and obtain funding

  • Elaboration address technical risks, clarify on certain requirements, experiment and prototype, and build the foundation

  • Construction expand the team, run the assembly line, and manufacture the complete solution

  • Transition train users, transition to operations, and hand over the solution

Remember that contributions from a combination of disciplines (requirements, design, build, test, etc.) are required in some shape or form through all phases of the lifecycle



Enterprise Stakeholders


On large projects extend the typical agile definition of a product owner to include the following stakeholders:



  • Operations Manager

  • Enterprise Architects

  • Developer Teams dependent on code you are building

Be sure to solicit their input when defining requirements for the system.


Agile Modeling


by Scott Ambler


Software modeling can be incredibly useful, but models are expensive to create, and incredibly expensive to maintain. To maximize ROI,


Model...



  • To communicate an idea

  • To understand the concept

  • Using Collaborative Techniques like CRC Cards

Don’t Model...



  • Without an audience in mind

  • Just because your process says so

Focus Modeling on...



  • Integration interfaces between teams and systems

  • Complex business logic

  • Code being reused across multiple teams

Thursday, March 19, 2009

Agile in a Flash

I try not to do too many "news message style blogs", ones where all the blogger does is repeat somebody else’s message and hyperlink to it.

I would rather the majority of my posts reflect my personal experiences and add some significant value.


Still, every once in awhile something comes along that impresses me enough that I can’t resist.


Jeff Langr and Tim Ottinger are blogging about a concept called Agile in a Flash, which brilliantly builds on the idea that real, tangible, concrete tools that you can touch often have far more impact than equivalent software solutions.


This is one of the base principles of agile. I.e. planning boards that are clearly visible to everybody who passes by are far better than software project plans, and using collaboration cards to build models with your stakeholders is far better than software models that you simply review.

Agile in a Flash builds on this concept by placing specific best practices on distinct cards, everything from technical practices like using meaningful names when creating code, to project management practices like daily standup meetings and retrospectives. What is really interesting is that power cards, such as extreme measures, are also included in the deck.


The two authors have positioned Agile in a Flash as a great way to anchor one's thoughts, as well as a mechanism to quickly teach these practices to people who are not yet aware of them. I personally like the idea of placing the specific cards being used by a team directly on the team's planning board. That way everybody in the organization who goes by the planning board will have a clear understanding of which practices the team is actually following.


I think tools like this are great enablers for change management, and personally, I want to brand some of these cards according to the look and feel of my client's logo, and start introducing them to the various technical people that I work with on a daily basis.





Sunday, March 8, 2009

The Dangers of Agile



In my last several posts, I've offered some fairly philosophical submissions on the benefits of agile, how to mix it with other practices, and, for the most part, described the benefits of the principles and methods that can collectively be termed "agile".


For the last several months, I've had the distinction of serving on a software project that has been following a "purer" form of agile than I typically experience. While I originally looked forward to this experience with relish, I now have to admit that agile brings very real risk. Not only to software development projects, but to world health and safety as well...

Skeptical? Read on...


1) clumsy developers put project planning at risk


Agile practices state that planning, modeling and other project artifacts are best represented as physical, tactile objects, like index cards pinned to particle boards. These artifacts are then placed in public places, where anybody can walk by and get a quick understanding of where the project currently stands.


Sounds great in theory, right? Of course, what makes it obvious to me that nobody with real-world development experience has actually tried this is the plain fact that developers are terminally out-of-shape, lumbering, unforgivably weak, clumsy animals. Have you ever seen a developer try to walk down the hallway without bumping into every third person, doorway or other inanimate object? In my experience, I could not go through a week of an agile project without some developer literally tripping over the agile planning board and sending the index cards flying in every direction. This effectively makes a project of any scale impossible to track and impossible to monitor.


clumsy developer



2) the business will always take away your daily standup/retrospectives /whatever room from you


Suppose for one second that we decide to mitigate against clumsy developers wreaking havoc on our planning boards by moving them to a slightly out-of-the-way location, such as a common planning room dedicated just to developers. Let's just face it: developers will always be at the bottom of the totem pole. No matter what you do to reserve rooms, facilities, etc., expect to have your room taken away from you... no access.


3) Agile practices (like co-location and pair programming) encourage the spreading of communicable diseases...


In many ways the inventors of agile delivery are living in a padded room, devoid of any news and completely unaware of the issues facing today's world and workforce. Let's face it: the corporate world encourages things like working in separate cubicles, on separate floors, and even in separate countries (i.e. offshoring) not because it's more efficient, or cheaper, but because it is SAFE. Agile practitioners seem to think they are very clever by trying to make everybody sit in one room, even going so far as to force developers to sit right beside each other and share computers, keyboards, and other peripheral devices. Everybody take heed!! This is unsafe and unsanitary. I have personally seen half of my development team taken out by one nasty virus because of these unhealthy practices espoused by agile development.


agile spreads viruses


4) agile modeling guarantees your models will be vandalized


Let’s face it, in any project of large-scale, software models, diagrams, and pretty pictures are more important than your code. No one can understand the code anyway, the business is certainly never going to understand it, and your project managers will only just pretend to. On the other hand, everyone can always understand a pretty picture. Developers probably won’t even understand your code, once new developers go onto your team there just going to rewrite it anyway, but no one will redraw a pretty picture/diagram, especially if it has pretty colors.


Using agile tools (i.e. whiteboards and CRC cards) guarantees that your pretty diagrams will only be temporary in nature; eventually they will be destroyed, even if there is a Please Leave On (PLO) marker. This is especially true if somebody from the business or project management gets hold of your whiteboard; they will erase your models just on principle...



diagram vandalization


5) Agile delivery is destroying our environment


Anybody knows the agile developer's answer to anything is "I know, let's put it on an index card...".



  • Need to do some modeling? Put it on an index card...


  • need to gather some requirements? Put it on an index card...

  • need to fill out a change request? You got it, put it on an index card...

I think it’s a given that the need for software development teams and software development in general is only growing at an exponential rate. If even a small fraction of these teams follow agile approach, the effect will be the destroying of our forests because of all this need for an exponentially larger amount of index cards. And nobody in the agile community has actually thought about this? Not only does agile helps spread disease, it’s also destroying the environment!



6) Agile development is also an assault against fashion


One more thing we as responsible developers need to be collectively aware of: software programmers can't dress. They have no sense of fashion, and look ridiculous even when viewed in isolation. This is one of the other many reasons that sensible project managers try to isolate developers from each other; putting two together for a prolonged period of time is just too much of an offence to the eyes. Then along come these agile developers and they say "hey, let's put two developers in front of one keyboard", like that's not going to be an assault on the senses. What makes it even worse is that developers tend to make the same fashion mistakes at the exact same time...
bad developer fashion sense


So what is my point?


Everybody knows that agile provides some great ideas if you want to add value to software delivery projects. But what I haven't seen is a fundamental look at the effects of agile on our health, on our sanity, and on the safety of our environment...


As responsible professionals we need to look beyond our need to successfully deliver valuable software to the business; we also need to be responsible citizens of the world.


Wednesday, March 4, 2009

DSLs and the new Analyst tester

Domain specific languages (DSLs) are quickly gaining momentum, and there is definite interest in how the industry can adopt DSLs in the requirements world. I think DSLs can act as a bridge across the divide between an "analyst" and a "tester". While I don't think the industry is ready to adopt DSLs in the large, there are several easy wins that the industry can gain if we start small.

1) Get the analysts involved with delivery - it's time to accept that the old-fashioned approach of writing up requirements and throwing them over the wall has failed. Instead, embed the analysts with the developers and testers. Make them work with the team, jointly capturing requirements and creating tests that validate the system.

2) Executable Requirements - start writing automated acceptance tests. These acceptance tests are described one level up from "code" and can start to look like natural language. Wikis like FitNesse (http://fitnesse.org/) are an excellent example.

3) Behaviour Driven Development - the next step up from executable requirements is to embrace a DSL-like mindset to drive design, development and testing. There are emerging frameworks that allow a tester or developer to write test cases in code that mirror the language of the business. See http://behaviour-driven.org/
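To make the given/when/then idea concrete, here is a minimal sketch in plain Python with no framework at all. The 18-year membership rule, the function names, and the error message are hypothetical examples for illustration; a real team would use one of the frameworks linked above.

```python
from datetime import date

MINIMUM_AGE = 18  # hypothetical business rule for this sketch

def age_on(birthday, today):
    """Whole years between birthday and today."""
    years = today.year - birthday.year
    if (today.month, today.day) < (birthday.month, birthday.day):
        years -= 1
    return years

def validate_birthday(birthday, today):
    """Return an error message if the customer is under age, else None."""
    if age_on(birthday, today) < MINIMUM_AGE:
        return "customers must be 18 years or older to be valid members of this website!"
    return None

# Scenario: customer enters an invalid birthday
# Given the customer enters a birthday
birthday = date(2000, 6, 1)
# When validation runs on a fixed "today" (the customer is 9 years old)
error = validate_birthday(birthday, today=date(2009, 7, 1))
# Then an error message is produced
assert error is not None
```

Notice that the comments read exactly like the scenario an analyst would write; a BDD framework just makes that mapping explicit and executable.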

If we can get these three practices right, then I think we are halfway up the hill towards understanding and using DSLs. Already there are many organizations and teams pushing the frontier with these practices. As they are adopted, the "analyst" will evolve into a capital-A "Analyst", lowercase-t "tester", and the two roles will merge into one.

Monday, March 2, 2009

TDD improved coding quality? Really?

This might seem obvious to those of us who actually do TDD, but now we have some empirical evidence to back us up...



http://www.infoq.com/news/2009/03/TDD-Improves-Quality

Leading Agile: The Reluctant Product Owner

A pretty good suite of posts are being put together on how to scale up the concept of an agile product owner...

I recommend checking it out:
Leading Agile: The Reluctant Product Owner

Friday, February 20, 2009

Behavior Driven Development - TDD and DDD rolled into one

To be honest, I like to think that I'm pretty good at keeping up with the cutting edge of what developers are doing to improve their effectiveness. I'm pretty well versed in agile software delivery approaches like domain driven design, test driven development, and agile modeling. I also have real experience with things like SOA, AOP, Web 2.0, REST, and patterns of all kinds...


And then every once in a while I am completely humbled to learn that I have been totally ignorant of some fantastic new innovation within the development community.

This time that innovation is behavior driven development, a fusion of test driven development , agile user stories, and domain driven design.


What behavior driven development allows you to do is take a set of user stories and supporting acceptance tests written in natural language and turn them into explicit, testable requirements that become part of your automated testing and build routine.

For example:


Take the following user story:

As a customer
I want to update my customer profile
So that my personal data is always accurate

Scenario 1: customer enters invalid birthday (must be 18 years or older)
Given that the customer enters a birthday
And the customer's age is calculated to be less than 18 years
When the test is run
Then the following error message should be displayed: "customers must be 18 years or older to be valid members of this website!"
Scenario 2: etc.
Using a behavior-driven specification framework like StoryQ, one develops acceptance tests (either automatically using a tool, or manually) like the following:
namespace StoryQ.Tests.Documentation
{
    [TestFixture]
    public class CustomerSpecification
    {
        [Test]
        public void ValidCustomerBirthdayScenario()
        {
            Story story = new Story("writing executable specification for customer birthday");

            story.AsA("customer")
                .IWant("update my customer profile")
                .SoThat("my personal data is always accurate")

                .WithScenario("invalid birthday scenario")
                .Given("the customer enters a birthday")
                .And("the age is calculated to be less than 18 years")
                .When("the test is run")
                .Then("display an error message saying that the customer must be 18 years or older to be a valid member of this website")

                .WithScenario("writing specifications again")
                .Given("that I have written everything in text")
                .And("still have no asserts")
                .And("have another condition")
                .When("the test is run")
                .And("I am happy")
                .Then("the test result should be pending")
                .And("here's just some more and's");

            story.Assert();
        }
    }
}
These user story specifications leverage the concept of a ubiquitous language from domain driven design, and are built just like regular user stories, leveraging concepts such as value-driven questions and responsibility-driven design.
IMHO, this is brilliant stuff, and it really ties together the concepts of testable requirements and executable specifications.
 
This kind of innovation coming from the open source world is also why, IMHO, big vendors are finding it increasingly difficult to match the pace of free products.

Friday, February 13, 2009

Agile over RUP Part 4


RUP Principles

As I have said before, one of the best parts of RUP is its principles and best practices. The fact that RUP has clearly articulated, explicit principles probably makes it a better methodology than many of the SDLCs I've seen put together, especially by those in the management consulting world. In fact, most SDLCs I have seen don't even bother having proper principles. Having a consumable set of principles means that development teams can largely guide their work by whether they are following a principle, not whether they are following a specific, detailed piece of process material.

I have listed below the major RUP principles along with some minor alterations in order to make them more agile.

The RUP Is Use Case Centric

RUP makes a big deal about how every single artifact in the process can be traced back to a particular use case. In RUP, use cases are collections of scenarios grouped together by their ability to help a specific user accomplish a specific goal. The idea of developing use cases and then hanging your design documents, plans, tests, and anything else you might need off of them is a great one.

Use case traceability is a complete waste of time...

Unfortunately, RUP takes a reasonable approach and pushes it to the extremes of absurdity. RUP recommends that you follow what is known as "use case traceability": the notion that you maintain a database of every artifact created in your SDLC (including code) that tracks each artifact's traceability to each use case and to each subsequently developed artifact. In my experience, this is a colossal waste of time. By doing this, you are in effect trying to track a multidimensional, real-time, many-to-many relationship. What makes this even harder is that the only person capable of properly managing the database of artifacts is someone with a reasonable understanding of the entire SDLC. This is most likely a very senior person on your team, and he has better things to do than traceability management. More likely, the person responsible for managing traceability is the person who drew the short straw on your project and is almost completely clueless about what he is actually managing.

But using use cases as a framework for managing a project is actually not a bad idea...

That being said, I almost always develop and maintain a use case hierarchy like the one prescribed by Alistair Cockburn. I do not always develop complete use cases, but I do use this hierarchy (usually stored in some kind of wiki) as an anchor to associate most of my other major SDLC elements. Of course, in the agile world there are far fewer SDLC artifacts, so usually I associate individual use case nodes with things like:

  • individual user stories

  • test cases (using tools like FIT)

  • simple models (as needed)

Traceability with working code is achieved by utilizing a revolutionary concept known as talking to the person responsible for developing or supporting the code. Relying on people seems to be something very scary in the manager world, but rather than putting a whole bunch of work into complex traceability, try spending more effort on making sure that there's always somebody who knows how your solution works.
The RUP Is Architecture Centric

The RUP also makes a big deal about being architecture centric. The RUP describes architecture as a filter on the various models necessary to build the system (e.g. requirements model, design model, implementation model, deployment model, etc.). This filter represents a common vision of the system, its common components, and its unifying elements. The RUP spends a lot of time describing what architecture is and is not, using terms that would probably baffle most of us, but the point is that, according to RUP, architecture is something that needs to be considered throughout all aspects of developing software. While most people who follow agile probably recoil at the heavyweight definitions of architecture offered by the RUP, what is refreshing about RUP compared to most other methodologies is its belief that architecture permeates all aspects of the system. Most other methods seem to put architecture "above and around" the actual implementation of the system. RUP, on the other hand, believes that architecture is represented in the system's requirements, its design, its code, and how it's deployed. In other words, architecture is more than a bunch of pretty diagrams with boxes and lines, and it doesn't cut any slack to architects who have no desire to be involved with implementation.

Architecture is important, but keep it lightweight...

Where RUP tends to fail is in the details. RUP has lots of advice on developing the detailed use case models, detailed use case realizations, and traceability matrices deemed necessary for modeling and maintaining the "architecture" of the system.

Keeping a focus on architecture is important on any large-scale project, but in order to be maintainable it needs to be lightweight and focused on value. What is valuable will change from project to project, but in my experience, make sure that effort is put into developing a set of architecture and coding standards, and that everyone on the team is aware of them.

Everything else is gravy. (And gravy is not a good thing if you're trying to stay lean)
The RUP Is Iterative

In terms of principles that add value, this one is a no-brainer to anybody trying to adopt agile. That being said, RUP does not offer very tangible advice on how long an iteration should be, or how to manage what goes on within one. Most of the milestones, gating criteria and metrics are based around the larger-scale phases.

While the RUP makes a big deal of iterating, the best advice you are going to get on how to manage iterations is going to come from things like Scrum or XP. Whenever anybody attempts to adopt RUP, the first thing I do is train them on how to manage a project using iterations, following the approach described in a really good text, Agile Estimating and Planning.

The RUP Is Model Driven

The RUP places a heavy emphasis on developing one or more diagrams/models to represent the system. When going through the RUP, one encounters guidelines on how to develop use case models, use case realization models, design models, deployment models, activity diagrams, etc.

In point of fact, it's probably a full-time job just to keep up with all of the UML diagrams recommended by RUP, as well as the approach to using them within RUP. The biggest issue with these models is maintenance. While not specifically saying so, RUP does imply that these models need to be developed and kept up to date using reverse- and forward-engineering principles.

Models have A Lot Of Value, But Use Them for What They're Worth, They Aren't the Solution

Models are actually a great thing: they help us communicate, they help us understand, and they help us abstract implementation details that can take an awfully long time to learn. But models are expensive to create, and incredibly expensive to maintain. Models are also of limited value if they are developed by the technical team in isolation from the business team. To be useful, agile modeling best practices need to supplement the RUP model-driven approach. In short, these practices are:
  • use collaborative modeling techniques (like CRC cards)

  • don't be afraid to throw models away

  • focus models on interfaces between teams, complex business logic, and code that is currently being reused by multiple teams

  • don't create a model without having an audience in mind

  • don't create a model because your process says so

When Using RUP to Scale Agile, Make Sure You Follow Agile Documentation Principles

Scott Ambler has some great advice on how to develop documentation in an agile manner. In a nutshell, Scott suggests that you should only document when:
  • you are satisfying a specific project stakeholder

  • you have a stakeholder who can help you build a table of contents and direct you on what he wants to see

  • when there is specific business value

  • above all, don't create a document because some piece of process material says so.

So if you've actually made it to the bottom of this document, you should have a pretty good idea of how I, and you, can scale agile using specific, modified portions of RUP in a pragmatic fashion. Scott Ambler also has some great posts on how to scale up agile with RUP.
Hopefully, anybody reading this will also have a better understanding of the particular approach I take to implementing large-scale software development projects.

Agile over RUP Part 3

In my previous post I mentioned that the Rational Unified Process, slightly modified, can offer good value to large-scale projects. In this submission I elaborate on some of the components of the process.

Product Lifecycle Phases and Skill-based Disciplines

One of the biggest differentiators of the Rational Unified Process is the way the method uses a two-dimensional grid to categorize work into one or more phases, as well as into specific disciplines or skill sets.

The problem with many structured development processes is that they categorize work along one dimension only, and that dimension is usually based on skill sets. In effect this means that work is organized around completing requirements, then completing design, then completing development, and so on. Given that the waterfall approach to methodology was first described as an anti-pattern almost 30 years ago, it's surprising how often I see software development methodologies popping up that subscribe to it, especially in the management consulting world. The RUP at least has the good sense to realize that if you're going to break things up by phases, you should create phases based on the natural lifecycle of a product.

While the RUP does categorize work into disciplines (requirements, design, etc.), it explicitly states that work from different disciplines can be conducted in parallel, with plenty of opportunity for overlap. Furthermore, the RUP provides advice on how to break phases up into multiple iterations.


RUP Phases

A brief description of each of the RUP phases follows:

Inception

At the beginning of a project there has to be a vision, a good idea that will benefit the business, and there has to be some method of securing the money. Agile doesn't talk about any of this; when you start an agile project, you're already into requirements, design and development. Somebody has to put together the business case and look at the solution from an enterprise technology perspective (should the solution be Java, or .NET? How about a package like PeopleSoft?). Stakeholders need to be lined up, and a high-level estimate of what the overall solution will cost needs to be put together. Someone also has to think about organizational change and training. Many agile projects are doomed to failure (IMHO, I don't have a stat for this or anything) if they don't spend at least a couple of weeks to a couple of months (depending on project size) working on inception. Think of inception as an iteration zero on steroids, where work is not done only by developers, although they do need to be heavily involved on the technical side.

Elaboration

Once the team has a general idea of what they are doing, and why they're doing it, the RUP recommends starting the SDLC process (i.e. requirements, design, development, test, deploy) in multiple iterations focusing specifically on technical uncertainties, scary requirements, and generally anything that makes the developers "stay up and shiver at night". Again, elaboration can be looked at as the enterprise version of multiple combined iteration zeros, supplemented with a comprehensive, planned set of spikes. One of the fundamental pillars of RUP is that any large development project should contain a phase where a subset of the eventual team can get together and experiment, prototype, and mitigate technical risk before applying a large-scale team to developing the entire solution. One interesting thing to note is that many companies adopting RUP confuse elaboration with design; in point of fact, elaboration requires one or more complete iterations of requirements, design, develop, test and deploy to be considered complete. The difference between elaboration and construction is that elaboration is focused on mitigating technical risk.

Construction

Once the majority of technical risk, whether from adopting a new platform, inheriting some legacy code, or understanding some complex requirements, has been mitigated through a number of completed development iterations, management is supposedly able to magically deem that all architectural risk has been eliminated, and it's time to start a bricklaying, brainless, assembly-line approach to completing the solution. Now that all risk has supernaturally disappeared, it's time to expand the team and optimize the delivery approach into an effective, efficient, manufacturing-style process. (Excuse me while I laugh hysterically.) As naïve a view of developing software as this is, what is even worse is that many organizations confuse construction with the phase where all software development is conducted. According to RUP, construction still requires revisiting requirements, revisiting design, and of course developing and testing; the emphasis is simply that more development than requirements or design work should take place.

Transition

At some point in time, the solution needs to be handed over to the client. Training needs to take place, consultants need to be replaced with counterparts within the organization, and the solution needs to be entrenched within the organization. This is all completely reasonable, and is something most SDLCs don't really talk about; in my opinion, they probably shouldn't. I'm not sure I see the value in having any SDLC try to provide advice on how to fundamentally deliver a software product to an organization. It's not that the transition phase is unimportant; I just think that the RUP provides extremely superficial advice here, and waters down its own strong points by trying to do too much. I personally work for a consulting company that has an extremely strong organizational change department, and trust me, software process geeks really have no idea how to approach this one. Again, the biggest mistake organizations make when adopting RUP is to confuse transition with testing and deployment; the two have nothing to do with each other. If anybody out there is really interested in how to tackle transition, I recommend reading The Heart of Change for a start.

Compared to other methodologies, RUP phases make sense, but...

Let's be honest, has anybody out there actually done any development work and said "hey, we're done the elaboration phase, it's time to start construction..."? Well, I have actually been on projects where we had an "elaboration team" which was responsible for putting together the architecture framework, setting up the technical foundation, and making sure that the platform was solid from a performance point of view and could handle the various "complex" requirements. We then had a "construction team", scheduled to start several months after the elaboration team, which was supposed to use the "common design components", patterns, and other pieces that the elaboration team had put together.

While this sounds great on paper, the reality is that the majority of elaboration work really started once the construction team had landed; it was only when they started using the "common components" to implement the "non-complex" requirements that the elaboration team really figured out how the common components should operate. In short, software development cannot be neatly broken out into elaboration and construction phases. What I see happening is that iterations tend to start out with what appear to be largely "elaboration style" activities: these early iterations tend to have more of the experimentation, prototyping, and spiking necessary to mitigate technical risk and figure out the intricacies of whatever new technologies are being used on the project. Subsequent iterations tend to have less and less elaboration-style work and more and more construction-style work. That being said, every once in a while a drastically new requirement comes into play, or the development team figures out a new approach that can drastically improve the overall solution but requires a dramatic rethinking of the way things are being done. In short, replace the RUP construction and elaboration phases with a single development phase, and plan for a decreasing but fluctuating amount of elaboration-style activity in each development iteration.

In my next post on this topic, I will complete this article by giving an overview of RUP principles and best practices, and how to modify them to make them more agile.