Friday, October 30, 2009

Testing Architecture

There are many aspects to look at when one is trying to define or describe the architectural elements of a system.
-application architecture
-infrastructure architecture
-security architecture
Etc, etc, etc....

One aspect that I see completely missed by most software vendors, system developers, and integrators is what I call Testing Architecture.




I define testing architecture as a description of the testing tools, products and other system components that are dedicated to supporting testability of the solution.

Testing architecture describes the patterns used to support building testable components, and provides guidance on when and how to utilize stubs, mocks, data setup and cleaning services, automated testing frameworks and other testing artifacts.

Finally, Testing Architecture describes test coverage metrics for various aspects of the solution and lists any constraints that may limit the use of automated testing.
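To make this a little more concrete, here is a minimal sketch in C# (using NUnit; the IPaymentGateway, OrderService and related names are hypothetical, not from any particular product) of the kind of artifacts a testing architecture would call out explicitly: a stub dedicated to testability, and data setup/cleanup hooks owned by the test itself.

using NUnit.Framework;

public interface IPaymentGateway
{
    bool Charge(string customerId, decimal amount);
}

// A stub dedicated to testability: no network calls, fully predictable.
public class PaymentGatewayStub : IPaymentGateway
{
    public bool ChargeResult = true;
    public bool Charge(string customerId, decimal amount) { return ChargeResult; }
}

public class OrderService
{
    private readonly IPaymentGateway gateway;
    public OrderService(IPaymentGateway gateway) { this.gateway = gateway; }

    public bool PlaceOrder(string customerId, decimal amount)
    {
        return gateway.Charge(customerId, amount);
    }
}

[TestFixture]
public class OrderServiceTests
{
    private PaymentGatewayStub stub;

    [SetUp]
    public void SetUpTestData()
    {
        // A data setup service would seed reference data here.
        stub = new PaymentGatewayStub();
    }

    [Test]
    public void OrderSucceedsWhenPaymentIsAccepted()
    {
        var service = new OrderService(stub);
        Assert.IsTrue(service.PlaceOrder("CUST-001", 99.95m));
    }

    [TearDown]
    public void CleanUpTestData()
    {
        // A data cleaning service would remove anything the test created.
    }
}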

It's time for major software vendors to take a page from many OSS groups and start making tests and testability far more prominent in their solutions.


Jef

Tuesday, October 13, 2009

Successive Planning Poker



While I think planning poker is an excellent way to estimate any software delivery work, I've noticed that actual development work is sometimes started too soon, supported by a backlog and release plan made up of largish items (12-20 ideal days).

While agile methods espouse getting started as soon as possible, I think a little more up-front analysis can actually help reduce rework and eliminate ambiguity.

I've come up with something I call Successive Planning Poker. The premise is simple:

1: Conduct an initial round of planning poker

Frequently the estimating group will end up with lots of items that have big estimate ranges (i.e. 10+ ideal days). Large estimates represent work that is not well understood, full of assumptions, and riddled with unknowns.

2: Conduct analysis and research activities necessary to resolve ambiguities in any large estimates

During the initial planning poker session, make sure to keep careful track of ambiguities and uncertainties that could be resolved with a reasonable amount of research. Because the group doing planning poker is cross-functional, they should be able to answer questions ranging from existing legacy functionality to product limitations to existing business processes and policies. Give team members a day or two to come back with answers.
3: Conduct a successive round of planning poker

Go through each largish estimate from the initial planning round and use the answers and research completed by the team to break the work items into smaller, more manageable pieces, estimating these pieces independently and hopefully ending up with work items in the range of 1-8 ideal days.

4: Repeat

Repeat successive rounds of estimating and researching until you have a quality backlog of items that are fine-grained enough to tackle within an iteration in a reasonably predictable manner.
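As a purely hypothetical example of how this plays out: an item like "customer self-service profile updates" might come out of round one at 16 ideal days with wide disagreement. After a day of research into the legacy profile service and the relevant privacy policy, a second round might split it into "update contact details" (5), "update marketing preferences" (3), "validate addresses against the postal service" (4), and "audit trail of profile changes" (3), each of which is small enough to fit comfortably inside an iteration.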

While the above approach might be obvious to some, I've been on several projects where we didn't do this, and the first few iterations really suffered. Of course, a couple of iterations of development did a lot to smooth out future iterations. Still, I think just a little bit more planning diligence (done in a collaborative and cross-functional way, of course) could have helped us gain the same information faster and far more easily.

Sunday, July 19, 2009

A Couple More Agile Flashcards

A couple of weeks ago Jeff Langr and Tim Ottinger asked the public at large to contribute a couple more ideas for agile flashcards to their site Agile in a Flash.

I decided to quickly create a couple, which you can find at their site, but I've listed them again here for reference purposes.

Of course, I probably need to spend a little bit more time on them to polish and clean up the quality of the grammar.

-Domain Driven Design (by Eric Evans)
-Agile Documentation (by Scott Ambler)
-Agile Modeling (by Scott Ambler)
-Product Oriented Development Lifecycle
-Enterprise Stakeholders

Here are some other ones I hope readers also find useful...
========================
Behavior Driven Development:
1) use story driven scenarios to specify the acceptance criteria for an iteration

Given [some State]
And [some more state]...
When [an activity completes]
And [another activity completes]...
Then [an assertion is true]
And [another assertion is true]...

2) wire these acceptance criteria into an automated acceptance testing framework (like FitNesse)

3) have your developers write code for the iteration until the acceptance criteria pass

-make sure to use the ubiquitous language when defining your stories/tests
-focus your requirements and quality testing efforts on designing these executable requirements at the beginning of an iteration
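As a hedged illustration of step 2, here is what one such scenario might look like wired into a plain NUnit test in C#; the account/withdrawal names are invented for the example, and a framework like FitNesse or a BDD library would normally replace the hand-rolled Given/When/Then methods:

using NUnit.Framework;

[TestFixture]
public class WithdrawCashScenario
{
    private decimal balance;
    private decimal dispensed;

    [Test]
    public void CustomerWithdrawsCashWithinBalance()
    {
        GivenAnAccountWithABalanceOf(100m);
        WhenTheCustomerWithdraws(40m);
        ThenTheDispensedAmountIs(40m);
        AndTheRemainingBalanceIs(60m);
    }

    private void GivenAnAccountWithABalanceOf(decimal amount) { balance = amount; }

    private void WhenTheCustomerWithdraws(decimal amount)
    {
        // Naive domain logic inlined purely for the sake of the sketch.
        if (amount <= balance) { dispensed = amount; balance -= amount; }
    }

    private void ThenTheDispensedAmountIs(decimal expected) { Assert.AreEqual(expected, dispensed); }

    private void AndTheRemainingBalanceIs(decimal expected) { Assert.AreEqual(expected, balance); }
}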
========================
the Kano model of customer satisfaction
-when creating a backlog for your customers, categorize items into:
-mandatory
-linear
-exciters
-reverse (don't implement these :-))

Do this by developing a questionnaire that asks different stakeholders how they would feel if a specific feature were present or absent:
1. I expect it to be this way
2. I like it that way
3. I am neutral
4. I can live with it this way
5. I dislike it this way

Both negative and positive answers can be cross-referenced to determine the category of the feature.

See http://agilesoftwaredevelopment.com/2006/12/kano-questionnaires for an example...
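As a rough sketch of how that cross-referencing is typically done (the exact evaluation table varies slightly between sources, so treat this mapping as an assumption and check the linked questionnaire example), something like the following C# could classify each feature:

public enum KanoAnswer { Expect, Like, Neutral, LiveWith, Dislike }
public enum KanoCategory { Mandatory, Linear, Exciter, Reverse, Indifferent, Questionable }

public static class KanoClassifier
{
    // whenPresent = answer to "how do you feel if the feature IS there?"
    // whenAbsent  = answer to "how do you feel if the feature is NOT there?"
    public static KanoCategory Classify(KanoAnswer whenPresent, KanoAnswer whenAbsent)
    {
        if (whenPresent == whenAbsent &&
            (whenPresent == KanoAnswer.Like || whenPresent == KanoAnswer.Dislike))
            return KanoCategory.Questionable;   // contradictory answers

        if (whenPresent == KanoAnswer.Like && whenAbsent == KanoAnswer.Dislike)
            return KanoCategory.Linear;         // more is better

        if (whenPresent == KanoAnswer.Like)
            return KanoCategory.Exciter;        // delighted when present, tolerant when absent

        if (whenPresent == KanoAnswer.Dislike || whenAbsent == KanoAnswer.Like)
            return KanoCategory.Reverse;        // customers prefer it NOT be there

        if (whenAbsent == KanoAnswer.Dislike)
            return KanoCategory.Mandatory;      // taken for granted, missed badly when absent

        return KanoCategory.Indifferent;
    }
}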

================================
Value Stream Mapping:
The best way to determine how much agile can help an IT delivery organization is to perform a value stream mapping analysis.

Measure the ratio of work to wait time/waste for both the current and suggested future development process

Do this for
-larger features
-typical bug fixing
-emergency fixes

(Picture here)
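As a purely hypothetical example of the ratio: if a typical feature takes about 10 days of actual hands-on work but spends 90 calendar days between the original request and production, the process cycle efficiency is roughly 10/90, or about 11%. If the proposed future-state process gets the same feature live in 30 calendar days, the ratio improves to about 33%, and the difference is the wait time and waste you have removed.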

=========================
Web 2.0 Based Documentation:
Make sure all documentation, including technical specifications, standards, decisions, etc., is placed on a web-based collaboration platform.

This collaboration platform must
-allow all participants to modify content
-be available to internal staff, contractors, and outsourcing partners
-promote an open, collaborative approach to running projects

Excellent examples are wikis, blogs, and user forums...

Wednesday, June 24, 2009

Appreciative Inquiry

A colleague of mine has suggested a new approach to getting delivery teams to improve the way they do their work.

It's called Appreciative Inquiry; the basic idea is that organizations should spend more time focusing on what works, and expanding those practices, rather than focusing on the negative.

It's pitched as being the opposite of problem solving; the following is a good writeup from Wikipedia:

"
AI focuses on how to create more of the occasional exceptional performance that is occurring because a core of strengths is aligned. The approach acknowledges the contribution of individuals, in order to increase trust and organizational alignment. The method aims to create meaning by drawing from stories of concrete successes and lends itself to cross-industrial social activities. It can be enjoyable and natural to many managers, who are often sociable people.
"

One thing that can be said about agile/lean practices is that there is a huge focus on what's broken, not on what is currently working: up-front requirements are wasteful, unfinished code leads to waste, untested code leads to fragility, etc. A different tactic using the AI approach would be to focus on the benefits of software in the first place, i.e. that software gives businesses a valuable abstraction for agility, and to use that as a metaphor for expanding how we approach delivering software to organizations in general.

Our team plans to use this approach in our next retrospective; I'll make sure to share the results here.

Monday, June 22, 2009

Experience Reports from the Agile Trenches: What's Really Working, What's Not Working, and Why


A couple of weeks back, I was invited to speak at the Rational Software Conference 2009.

I had the pleasure of taking part in the Experience Reports from the Agile Trenches: What's Really Working, What's Not Working, and Why panel hosted by Scott Ambler.

It was a great opportunity for me to discuss my experiences deploying agile techniques and tools as a part of Deloitte. The panel included Mark Lines, Nate Oster, Anthony Crain, Scott Ambler (facilitator), Jeff Anderson (me), Shawn Hinrichs, and Matt Heinrich.

There were a lot of interesting questions raised by the audience, some of these include:
1) how do you deal with agile in a regulatory environment?
2) how does agile work in distributed/offshore scenarios?
3) how do you address skeptical business owners that agile can work?
4) where does agile work? And where doesn't it?
5) How does agile work on large teams?

Rather than go through the specifics of each question and answer, I thought I would point out what I found to be the most interesting disconnect between the answers given by the panel members, myself included.

Basically, the panel could be divided into individuals with a more Unified Process oriented bias, and those that seem to follow a more pure form of agile. (I fell into the latter camp.)

One perspective, raised by Anthony, was that there is nothing really wrong with the way the Rational Unified Process works in its current form. The agile counterpoint was that RUP, when performed incorrectly, results in an overly complex, waterfall-style approach (okay, a mini-waterfall).

Some members of the Rational camp (**ahem** Anthony) didn't particularly see any issue with this. They raised a valid point: many software organizations are currently operating with a macro-waterfall strategy, typically deploying software every 3 to 4 months. Getting them to use the Rational Unified Process and deploying every 6 to 8 weeks is a huge improvement.

For my part, I can't disagree with this, but a miniature waterfall is not the best way to be agile. More importantly, IMHO this misses the point of trying to be agile in the first place.

Agile and lean development processes are implemented because of their potential to reduce waste. According to lean manufacturing principles, inventory is one of the biggest causes of waste and inefficiency. In software delivery terms, waste shows up as unfinished code, which can take the form of:
1) documentation representing code that is not in production
2) a particular branch of code that has not been merged into the trunk
3) code that is not being used by users

I'm sure I've forgotten a couple of examples, but the point is plain: artifacts that are developed can quickly become a source of inefficiency if they lie around too long. The sooner you can take an idea and get it through the lifecycle, the fewer "secondary artifacts" you have to manage; these include change requests, defects, test cases, and the list goes on and on...

Of course, this doesn't address my major issue with RUP: namely, that it is simply too complicated. Rather than reiterate all my issues with the approach, I invite any reader to take a look at my article on the pros and cons of the Rational Unified Process for more details.

Also, I recommend taking a look at how I combine agile and RUP to create a process that is both agile and scalable.

I also had a conversation with one of the panelists afterwards (okay, it was Anthony again), who raised the comment that agile wouldn't exist if we were able to understand RUP in the first place. I think the important point here is that products, whether they be methodologies, software, or something else, need to be simple and well understood by consumers in order to be effective. This is another point that is fundamental to agile and that I think is lost on many in the RUP camp: things sometimes need to be complicated in order to be effective, but taking simplicity as a value and making every effort to work towards simplicity is essential to creating any effective product, especially a software development lifecycle framework.


That all being said, RUP, and the Unified Process in general, is still one of the best families of software processes out there. Let's hope it keeps receiving criticism, and improving as a result.

Monday, April 6, 2009

My Custom Agile Flashcards

In a previous post I mentioned how impressed I was by the concept of Agile Flashcards, so I started to put together some of my own flashcards to reflect some of the best practices that I use on my projects. Here they are below:


Domain Driven Design


by Eric Evans



  • Articulate and encapsulate the business logic of the system into one or more software models

  • Organize and abstract the knowledge of multiple business SMEs, partitioning business logic into:


    • Entities

    • Value Objects

    • Aggregates

    • Repositories

    • Factories

    • Services

    • Specifications

  • Refactor the model to reflect the realities and limitations of the target technical platform (avoid ivory tower models)

  • Build a model that both technology and business can understand, providing a Ubiquitous Language for the project
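A tiny, hedged sketch of a few of these building blocks in C# (the Customer, Money, and ICustomerRepository names are invented purely for illustration):

using System;

// Value Object: identity is defined entirely by its attributes, and it is immutable.
public sealed class Money
{
    public decimal Amount { get; private set; }
    public string Currency { get; private set; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public override bool Equals(object obj)
    {
        var other = obj as Money;
        return other != null && other.Amount == Amount && other.Currency == Currency;
    }

    public override int GetHashCode() { return Amount.GetHashCode() ^ Currency.GetHashCode(); }
}

// Entity: has a thread of identity that survives changes to its attributes.
public class Customer
{
    public Guid Id { get; private set; }
    public string Name { get; set; }
    public Money CreditLimit { get; set; }

    public Customer(Guid id, string name)
    {
        Id = id;
        Name = name;
    }
}

// Repository: gives the illusion of an in-memory collection of aggregates.
public interface ICustomerRepository
{
    Customer FindById(Guid id);
    void Add(Customer customer);
}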

Agile Documentation


by Scott Ambler


Document...


When the Business Asks you to


For Clarity Across Teams working in different locations


To establish appropriate business and technical Context for the solution


To define Interfaces across systems


To Enable your Next Effort


Informally


To Understand


To Communicate


Don’t Document...


To Specify


Because your Process Says So


Unproven ideas (prototype it first)


Without obtaining the structure from your Intended Audience


Without Considering Maintenance and cost of ownership


Implementation and Design Details that can be just as easily expressed in the solution itself


Essential Documents


Operations and Support


Developer Setup & Manuals


User Manuals


Delivery Plan


Phases of a Product Based Lifecycle


Lightly apply the concepts of a product-based SDLC on large-scale, enterprise projects...



  • Inception: generate a vision, build the business case, and obtain funding

  • Elaboration: address technical risks, clarify uncertain requirements, experiment and prototype, and build the foundation

  • Construction: expand the team, run the assembly line, and manufacture the complete solution

  • Transition: train users, transition to operations, and hand over the solution

Remember that contributions from a combination of disciplines (requirements, design, build, test, etc.) are required in some shape or form through all phases of the lifecycle



Enterprise Stakeholders


On large projects extend the typical agile definition of a product owner to include the following stakeholders:



  • Operations Manager

  • Enterprise Architects

  • Developer Teams dependent on code you are building

Be sure to solicit their input when defining requirements for the system.


Agile Modeling


by Scott Ambler


Software modeling can be incredibly useful, but models are expensive to create and incredibly expensive to maintain. To maximize ROI...


Model...



  • To communicate an idea

  • To understand the concept

  • Using Collaborative Techniques like CRC Cards

Don’t Model...



  • Without an audience in mind

  • Just because your process says so

Focus Modeling on...



  • Integration interfaces between teams and systems

  • Complex business logic

  • Code being reused across multiple teams

Thursday, March 19, 2009

Agile in a Flash

I try not to do too many "news message style blogs", ones where all the blogger does is repeat somebody else’s message and hyperlink to it.

I would rather the majority of my posts reflect my personal experiences and add some significant value.


Still, every once in awhile something comes along that impresses me enough that I can’t resist.


Jeff Langr and Tim Ottinger are blogging about a concept called Agile in a Flash, which brilliantly builds on the idea that real, tangible, concrete tools that you can touch often have far more impact than equivalent software solutions.


This is one of the base principles of agile: planning boards that are clearly visible to everybody who passes by are far better than software project plans, and using collaboration cards to build models with your stakeholders is far better than software models that you simply review.

Agile in a Flash builds on this concept by placing specific best practices on distinct cards: everything from technical practices like using meaningful names when creating code, to project management practices like daily standup meetings and retrospectives. What is really interesting is that power cards, such as extreme measures, are also included in the deck.


The two authors have positioned Agile in a Flash as a great way to anchor one's thoughts, as well as a mechanism to quickly teach people who are not aware of some of these practices. I personally like the idea of placing the specific cards that are actually being used by a team on the team's planning board. That way everybody in the organization who goes by the planning board will have a clear understanding of which practices are being followed by the team.


I think tools like this are great enablers for change management, and I personally want to brand some of these cards according to the look and feel of my client's logo and start introducing them to the various technical people that I work with on a daily basis.





Sunday, March 8, 2009

The Dangers of Agile

The Dangers of Agile Development


For my last several posts, I’ve given some fairly philosophical submissions on the benefits of agile, how to mix it with other practices, and for the most part, described the benefits of the principles and methods that can collectively be termed "agile".


For the last several months, I've had the distinction of serving on a software project that has been following a "purer" form of agile than I typically experience. While I originally looked forward to this experience with relish, I now have to admit that agile brings very real risk. Not only to software development projects, but to world health and safety as well...

Skeptical? Read on...


1) clumsy developers put project planning at risk


Agile practices state that planning, modeling and other project artifacts are best represented as physical, tactile objects like index cards being placed on particle boards. These artifacts are then placed in public places, where anybody can walk by, and get a quick understanding of where the project is currently at.


Sounds great in theory, right? Of course, what makes it obvious to me that nobody with real-world development experience has actually tried this is the obvious fact that developers are terminally out-of-shape, lumbering, unforgivably weak, clumsy animals. Have you ever seen a developer try to walk down the hallway without bumping into every third person, doorway, or other inanimate object? In my experience, I could not go through a week of an agile project without some developer literally tripping against the agile planning board and sending the index cards flying in every direction. This effectively makes a project of any scale impossible to track and impossible to monitor.


(image: clumsy developer)



2) the business will always take away your daily standup/retrospectives/whatever room from you


Suppose that for one second we decide to mitigate against clumsy developers wreaking havoc on our planning boards by moving them into a slightly out-of-the-way location, such as a common planning room dedicated just to developers. Let's just face it, developers are always going to be lowest on the totem pole. No matter what you do to reserve rooms, facilities, etc., expect to have your room taken away from you.

(image: no access)


3) Agile practices (like co-location and pair programming) encourage the spreading of communicable diseases...


In many ways the inventors of agile delivery are living in a padded room, devoid of any news and completely unaware of the issues facing today's world and workforce environment. Let's face it: the corporate world encourages things like working in separate cubicles, on separate floors, and even in separate countries (i.e. offshoring) not because it's more efficient, or cheaper, but because it is SAFE. Agile practitioners seem to think that they are very clever by trying to make everybody sit in one room, and even go so far as forcing developers to sit right beside each other and share computers, keyboards, and other peripheral devices. Everybody take heed!! This is unsafe and unsanitary. I have personally seen half of my development team taken out by one nasty virus because of these unhealthy practices espoused by agile development.


(image: agile spreads viruses)


4) agile modeling guarantees your models will be vandalized


Let's face it, in any project of large scale, software models, diagrams, and pretty pictures are more important than your code. No one can understand the code anyway; the business is certainly never going to understand it, and your project managers will only pretend to. On the other hand, everyone can always understand a pretty picture. New developers probably won't even understand your code; once they join your team they're just going to rewrite it anyway, but no one will redraw a pretty picture/diagram, especially if it has pretty colors.


Using agile tools (i.e. whiteboards and CRC cards) guarantees that your pretty diagrams will only be temporary in nature; eventually they will get destroyed, even if there is a Please Leave On (PLO) marker. This is especially true if somebody from the business or project management gets hold of your whiteboard; they will erase your models just on principle...



(image: diagram vandalization)


5) Agile delivery is destroying our environment


Anybody knows the agile developer's answer to anything is "I know, let's put it on an index card...".



  • Need to do some modeling? Put it on an index card...


  • need to gather some requirements? Put it on an index card...

  • need to fill out a change request? You got it, put it on an index card...

I think it's a given that the need for software development teams, and software development in general, is only growing at an exponential rate. If even a small fraction of these teams follow an agile approach, the effect will be the destruction of our forests due to the exponentially larger amount of index cards required. And nobody in the agile community has actually thought about this? Not only does agile help spread disease, it's also destroying the environment!



6) Agile development is also an assault against fashion


One other thing we as responsible developers need to be collectively aware of: software programmers can't dress. They have no sense of fashion, and look ridiculous even when viewed in isolation. This is one of the other many reasons that sensible project managers try to isolate developers from each other; putting two together for a prolonged period of time is just too much of an offence to the eyes. Then along come these agile developers and they say "hey, let's put two developers in front of one keyboard," like that's not going to be an assault on the senses. What makes it even worse is that developers tend to make the same fashion mistakes at the exact same time...

(image: bad developer fashion sense)


So what is my point?


Everybody knows that agile provides some great ideas if you want to add value to software delivery projects. But what I haven’t seen is a fundamental look at the effects of agile on our health, on our sanity, and the safety of our environment...


As responsible professionals we need to look outside of our need to successfully deliver valuable software to the business, we also need to be responsible citizens of the world.


Wednesday, March 4, 2009

DSLs and the new Analyst tester

Domain specific languages (DSLs) are quickly gaining momentum and there is definite interest in how the industry can adopt DSLs in the requirements world. I think DSLs can act as a bridge to help cross the divide between an "analyst" and a "tester". While I don't think the industry is ready to adopt DSLs in the large, there are several easy wins that I think can be gained if we start small.

1) Get the analysts involved with delivery - it's time to accept that the old-fashioned approach of writing up requirements and throwing them over the wall has failed. Instead, embed the analysts with the developers and testers. Make them work with the team, jointly capturing requirements and creating tests that validate the system.

2) Executable Requirements - start writing automated acceptance tests. These acceptance tests are described one level up from "code" and can start to look like natural language. Wikis like FitNesse (http://fitnesse.org/) are an excellent example.

3) Behaviour Driven Development - the next step up from executable requirements is to embrace a DSL-like mindset to drive the design, development and testing. There are emerging frameworks that allow a tester or developer to write test cases in code that mirror the language of the business. See, http://behaviour-driven.org/
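To make the executable requirements idea from point 2 a little more concrete, here is a hedged sketch of what a FIT-style column fixture might look like in C#. The OrderDiscountFixture class and its discount rules are invented for the example, and the exact ColumnFixture base class and column-binding conventions depend on which FIT/FitNesse distribution you use, so treat the details as assumptions:

// Wiki table (the "requirement" the analyst writes in FitNesse):
//
// | OrderDiscountFixture                          |
// | order total | loyalty years | discount()      |
// | 100.00      | 0             | 0.00            |
// | 100.00      | 3             | 5.00            |
// | 1000.00     | 3             | 55.00           |

using fit;   // classic FIT library for .NET, assumed to be referenced

public class OrderDiscountFixture : ColumnFixture
{
    // Inputs bound from the table columns.
    public decimal OrderTotal;
    public int LoyaltyYears;

    // Output column: FIT calls this and compares it with the expected cell.
    public decimal Discount()
    {
        // The real call would go to the production code (e.g. a hypothetical
        // DiscountCalculator); the logic is inlined here for the sketch.
        decimal discount = 0m;
        if (LoyaltyYears >= 3) discount += OrderTotal * 0.05m;
        if (OrderTotal >= 1000m) discount += 5m;
        return discount;
    }
}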

If we can get these three practices right, then I think we are halfway up the hill towards understanding and using DSLs. Already there are many organizations and teams pushing the frontier with these practices. As these practices get adopted, the "analyst" will evolve into a capital-A "Analyst", lowercase-t "tester", and the two roles will merge into one.

Monday, March 2, 2009

TDD improved coding quality? Really?

This might seem obvious to those of us who actually do TDD, but now we actually have some empirical evidence to back us up...



http://www.infoq.com/news/2009/03/TDD-Improves-Quality

Leading Agile: The Reluctant Product Owner

A pretty good suite of posts is being put together on how to scale up the concept of an agile product owner...

I recommend checking it out:
Leading Agile: The Reluctant Product Owner

Friday, February 20, 2009

Behavior Driven Development - TDD and DDD rolled into one

To be honest, I like to think that I'm pretty good at keeping up with the cutting edge of what developers are doing to improve their effectiveness. I'm pretty well versed in agile software delivery approaches like domain driven design, test driven development, and agile modeling. I'm also pretty well versed in, and have real experience with, things like SOA, AOP, Web 2.0, REST, and patterns of all kinds...


And then every once in a while I am completely humbled by learning that I have been totally ignorant of some fantastic new innovation in the development world.

This time that innovation is behavior driven development, a fusion of test driven development, agile user stories, and domain driven design.


What behavior driven development allows you to do is take a set of user stories and supporting acceptance tests written in natural language and turn them into explicit, testable requirements that become part of your automated testing and build routine.

For example:


Take the following user story:

As a customer
I want to update my customer profile
So that my personal data is always accurate

Scenario 1: customer enters invalid birthday (must be 18 years or older)
Given that the customer enters a birthday
And that the customer's age is calculated to be less than 18 years
When the test is run
Then the following error message should be displayed: "customers must be 18 years or older to be valid members of this website!"
Scenario 2: etc.
Using a behavior driven specification framework like StoryQ, one develops acceptance tests (either automatically using a tool or manually) like the following:
using NUnit.Framework;

namespace StoryQ.Tests.Documentation
{
    [TestFixture]
    public class CustomerSpecification
    {
        [Test]
        public void ValidCustomerBirthdayScenario()
        {
            Story story = new Story("writing executable specification for customer birthday");

            story.AsA("customer")
                .IWant("to update my customer profile")
                .SoThat("my personal data is always accurate")

                .WithScenario("invalid birthday scenario")
                .Given("the customer enters a birthday")
                .And("the customer's age is calculated to be less than 18 years")
                .When("the test is run")
                .Then("display an error message saying that the customer must be 18 years or older to be a valid member of this website")

                .WithScenario("writing specifications again")
                .Given("that I have written everything in text")
                .And("still have no asserts")
                .And("have another condition")
                .When("the test is run")
                .And("I am happy")
                .Then("the test result should be pending")
                .And("here's just some more and's");

            story.Assert();
        }
    }
}
These user story specifications leverage the concept of a ubiquitous language from domain driven design, and are built just like regular user stories, leveraging concepts such as value driven questions and responsibility driven design.
IMHO, this is brilliant stuff and really ties together the concept of testable requirements and executable specifications.
 
This kind of innovation coming from the open source world is also why, IMHO, big vendors are finding it increasingly difficult to match the pace of free products.

Friday, February 13, 2009

Agile over RUP Part 4


RUP Principles

As I have said before, one of the best parts of RUP is its principles and best practices. The fact that RUP has explicit, clearly articulated principles probably makes it a better methodology than many of the SDLCs that I've seen put together, especially by those in the management consulting world. In fact most SDLCs I have seen don't even bother having proper principles. Having a consumable set of principles means that development teams can pretty much guide work around whether they are following a principle, not whether they are following a specific, detailed piece of process material.

I have listed below the major RUP principles along with some minor alterations in order to make them more agile.

The RUP is use case centric

RUP makes a big deal about how every single artifact in the process can be traced back to a particular use case. In RUP, use cases are a collection of scenarios grouped together according to their ability to help a specific user accomplish a specific goal. The idea of developing use cases and then hanging your design documents, plans, tests, and anything else you might need off of these use cases is a great one.

Use case traceability is a complete waste of time...

Unfortunately RUP takes what is a reasonable approach and pushes it to the extremes of absurdity. RUP recommends that you follow what is known as "use case traceability". Use case traceability is the notion that you can have a database of every artifact created in your SDLC (including code) that tracks each artifact and its traceability to each use case and each subsequently developed artifact. In my experience, this is a colossal waste of time. By doing this, you are in effect trying to track a multidimensional, real-time, many-to-many relationship. What makes this even harder is that the only person capable of properly managing the database of artifacts is someone who has a reasonable understanding of the entire SDLC. This is most likely a very senior person on your team, and he has better things to do than traceability management. More than likely, the person responsible for managing traceability is the person who drew the short straw in your project and is almost completely clueless about what he is actually managing.

But using use cases as a framework for managing a project is actually not a bad idea...

That being said, I almost always develop and maintain a use case hierarchy like the one prescribed by Alistair Cockburn. I do not always develop complete use cases, but I do use this hierarchy (usually stored in some kind of wiki) as an anchor to associate most of my other major SDLC elements with. Of course, in the agile world there are a lot fewer SDLC artifacts, so usually I associate individual use case nodes with things like:

  • individual user stories

  • test cases (using tools like FIT)

  • simple models (as needed)

Traceability with working code is achieved by utilizing a revolutionary concept known as talking to the person responsible for developing or supporting the code. Relying on people seems to be something very scary in the manager world, but rather than focusing a whole bunch of effort on complex traceability, try spending more effort on making sure that there's always somebody who knows how your solution works.
The RUP Is Architecture Centric

The RUP also makes a big deal about being architecture centric. The RUP describes architecture as a filter on the various models necessary to build the system (e.g. requirements model, design model, implementation model, deployment model, etc.). This filter represents a common vision of the system, its common components, and its unifying elements. The RUP spends a lot of time describing what architecture is and what it isn't, using terms that would probably baffle most of us, but the point is that according to RUP, architecture is something that needs to be considered throughout all aspects of developing software. While most people who follow agile probably recoil at the heavyweight definitions of architecture offered by the RUP, what is refreshing about RUP compared to most other methodologies is that RUP believes that architecture permeates all aspects of the system. Most other methods seem to put architecture "above and around" the actual implementation of the system. RUP, on the other hand, believes that architecture is represented in the requirements, the design, the code, and how the system is deployed. In other words, architecture is more than a bunch of pretty diagrams with boxes and lines, and it doesn't cut architects who have no desire to be involved with implementation any slack.

Architecture is important, but keep it lightweight...

Where RUP tends to fail is in the details. RUP has lots of advice on how to develop the detailed use case models, detailed use case realizations, and traceability matrices it considers necessary for modeling and maintaining the "architecture" of the system.

Keeping a focus on architecture on any large-scale project is important, but in order for it to be maintainable it needs to be lightweight and focused on value. What is valuable is going to change from project to project, but in my experience, make sure that effort is put into developing a set of architecture and coding standards, and that everyone on the team is aware of them.

Everything else is gravy. (And gravy is not a good thing if you're trying to stay lean)
The RUP Is Iterative

In terms of principles that add value, this is a no-brainer to anybody who's trying to adopt agile. That being said, RUP does not offer very tangible advice on how long an iteration should be, or how to manage what goes on within an iteration. Most of the milestones, gating criteria, and metrics are based around the larger-scale phases.

While the RUP makes a big deal of iterating, the best advice on how to manage iterations is going to come from things like Scrum or XP. Whenever anybody attempts to adopt RUP, the first thing I do is train them on how to manage a project using iterations, using the approaches described in a really good text, Agile Estimating and Planning.

The RUP Is Model Driven

The RUP places a heavy emphasis on developing one or more diagrams/models to represent the system. When going through the RUP, one will encounter guidelines on how to develop use case models, use case realization models, design models, deployment models, activity diagrams, etc.

In point of fact, it's probably a full-time job just to keep up with all of the UML diagrams recommended by RUP, as well as the approach to using these diagrams within RUP. The biggest issue in using these models is one of maintenance. While not specifically saying so, RUP does imply that these models need to be developed and kept up to date using reverse and forward engineering principles.

Models have A Lot Of Value, But Use Them for What They're Worth, They Aren't the Solution

Models are actually a great thing: they help us communicate, they help us understand, and they help us abstract implementation details that can take an awfully long time to learn. But models are expensive to create, and they are incredibly expensive to maintain. Models are also of limited value if they are developed by the technical team in isolation from the business team. In order to be useful, agile modeling best practices need to be used to supplement the RUP model driven approach. In short these practices consist of:
  • use collaborative modeling techniques (like CRC cards)

  • don't be afraid to throw away models

  • focus models on interfaces between teams, complex business logic, and code that is currently being reused by multiple teams

  • don't create a model without having an audience in mind

  • don't create a model because your process says so

When Using RUP to Scale Agile, Make Sure You Follow Agile Documentation Principles

Scott Ambler has some great advice on how to develop documentation in an agile manner. In a nutshell, Scott suggests that you should only document when:
  • you are satisfying a specific project stakeholder

  • you have a stakeholder who can help you build a table of contents and direct you on what he wants to see

  • when there is specific business value

  • above all, don't create a document because some piece of process material says so.

So if you've actually made it to the bottom of this document, you should have a pretty good idea of how I (and you) could scale agile using specific, modified portions of RUP in a pragmatic fashion. Scott Ambler also has some great posts on how to scale up agile with RUP.
Hopefully, anybody reading this will also have a better understanding of the particular approach I take to implementing large-scale software development projects.

Agile over RUP Part 3

In my previous post I mentioned that the Rational Unified Process, if slightly modified, can offer good value to large-scale projects; in this submission I elaborate on some of the components of the process.

Product Lifecycle Phases and Skill-based Disciplines

One of the biggest differentiators of the Rational Unified Process is the way that the method uses a two-dimensional grid to categorize work into one or more phases, as well as into specific disciplines or skill sets.

The problem with many structured development processes is that they categorize work along one dimension only, and this dimension is usually based on skill sets. What this in effect means is that work is organized around completing requirements, then completing design, then completing development, etc. Given that the whole waterfall approach to methodology was first described as an anti-pattern almost 30 years ago, it's surprising how often I see software development methodologies popping up that subscribe to it, especially in the management consulting world. The RUP at least has the good sense to realize that if you're going to break things up by phases, then you should create phases based on the natural lifecycle of a product.

While the RUP does categorize work into disciplines (requirements, design, etc.) the RUP explicitly states that this work from different disciplines can be conducted in parallel and that there is plenty of opportunity for the work to overlap. Furthermore, the RUP provides advice on how to break up phases into multiple iterations.


RUP Phases

A brief description of each of the RUP phases follows:

Inception

At the beginning of a project there has to be a vision, a good idea that will benefit the business, and there has to be some method of generating the money. Agile doesn't talk about any of this; when you start an agile project, you're already into requirements, design, and development. Somebody has to talk about putting together the business case and looking at the solution from an enterprise technology perspective (should the solution be Java, or .NET, or how about a package like PeopleSoft?). Stakeholders need to be lined up, and a high-level estimate of what the overall solution is going to cost needs to be put together. Someone also has to think about organizational change and training. Many agile projects are doomed to failure (IMHO, I don't have a stat for this or anything) if they don't spend at least a couple of weeks to a couple of months (depending on project size) working on inception. Think of inception as an iteration zero on steroids, where work is not done only by developers, although they do need to be heavily involved on the technical side.

Elaboration

Once the team has a general idea of what they are doing and why they're doing it, the RUP recommends starting the SDLC process (i.e. requirements, design, development, test, deploy) in multiple iterations focusing specifically on technical uncertainties, scary requirements, and generally anything that makes the developers "stay up and shiver at night". Again, elaboration can be looked at as the enterprise version of a combination of multiple iteration zeros, supplemented with a comprehensive, planned set of spikes. One of the fundamental pillars of RUP is that any large development project should contain a phase where a subset of a large development team can get together and experiment, prototype, and mitigate technical risk before applying a large-scale team to developing the entire solution. One interesting thing to note is that many companies adopting RUP confuse elaboration with design; in point of fact, elaboration requires one or more complete iterations of requirements, design, development, test, and deployment to be considered complete. The difference between elaboration and construction is that elaboration is focused on mitigating technical risk.

Construction

Once the majority of technical risk around adopting a new platform, integrating some legacy code, or understanding some complex requirements has been mitigated through a number of completed development iterations, management is supposedly able to magically deem that all architectural risk has been eliminated, and it's time to start a bricklaying, brainless, assembly-line approach to completing the solution. Now that all risk has supernaturally disappeared, it's time to expand the team and optimize the delivery approach so that it is an effective, efficient, manufacturing-style process. (Excuse me while I laugh hysterically.) As naïve a viewpoint on developing software as this is, what is even worse is that many organizations confuse construction with the phase where all software development is conducted. According to RUP, construction still requires revisiting requirements, revisiting design, and of course developing and testing. The emphasis is simply that more development than requirements or design work should take place.

Transition

At some point in time the solution needs to be handed over to the client. Training needs to take place, consultants need to be replaced with counterparts within the organization, and the solution needs to be entrenched within the organization. This is all completely reasonable, and is something agile doesn't really talk about, and in my opinion probably shouldn't. I'm not sure I see the value in having any SDLC try to provide advice on how to fundamentally deliver a software product to an organization. It's not that I don't find the transition phase important; I just think that the RUP provides extremely superficial advice, and waters down its own strong points by trying to do too much. I personally work for a consulting company that has an extremely strong organizational change department, and trust me, software process geeks really have no idea how to approach this one. Again, the biggest mistake organizations make when adopting RUP is to confuse transition with testing and deployment; the two have nothing to do with each other. If anybody out there is really interested in how to tackle transition, I recommend reading The Heart of Change for a start.

Compared to other methodologies RUP phases make sense but...

Let's be honest, has anybody out there actually done any development work and said "hey, we're done the elaboration phase, it's time to start construction..."? Well, I have actually been on projects where we had an "elaboration team" which was responsible for putting together the architecture framework, setting up the technical foundation, and making sure that the platform was solid from a performance point of view and could handle the various "complex" requirements. We then had a "construction team", scheduled to start several months after the elaboration team, which was supposed to use the "common design components", patterns, and other pieces that the elaboration team had put together.

While this sounds great on paper, the reality is that the majority of elaboration work really started once the construction team had landed; it was only when they started using the "common components" to implement the "non-complex requirements" that the elaboration team really figured out how the common components should operate. In short, software development cannot be neatly broken out into elaboration and construction phases. What I see happening is that iterations tend to start out with what appear to be largely "elaboration style" activities. These early iterations tend to have more of the experimentation, prototyping, and spiking necessary to mitigate technical risk and figure out the intricacies of whatever new technologies are being used on the project. Subsequent iterations tend to have less and less elaboration-style work and more and more construction-style work. That being said, every once in a while a drastically new requirement comes into play, or the development team figures out a new approach that can drastically improve the overall solution but requires a dramatic rethinking of the way things are being done. In short, replace the RUP construction and elaboration phases with a development phase, and plan to have a decreasing but fluctuating amount of elaboration-style activities in each development iteration.

In my next post on this topic, I complete this article by giving an overview of RUP principles and best practices and how to modify them to make them more agile.

Sunday, February 8, 2009

Agile over RUP Part 2

This post is a continuation of my previous post on my preferred development approach.

Reading about Agile Is Pretty Simple (which is a good thing...)

In my previous post on my preferred development approach, Agile over RUP, My Preferred Development Approach, I touched upon the idea that mixing agile with a more structured methodology like the Rational Unified Process (or the Unified Process in general) is a good way to combine what I see as an energetic, creative, and real-time approach with a structured, more methodical framework that can help to organize large-scale projects. I then went on to describe what agile means to me, and gave a summary of the agile best practices that I have found useful on projects that I have been a part of.

In general, talking about agile is relatively straightforward. The practices are deceptively simple, and relatively easy to explain. The real difficulty in agile is in the implementation. I have probably been attempting to use "agile development" since early 2000, and I'm still learning at an incredible rate. I doubt that this rate of learning is going to decrease anytime soon; getting agile right is actually quite hard to do, but then again, so is real success. IMHO this is what a good process should be: really easy to read, and incredibly hard to master.

Reading about the Rational Unified Process Is Pretty Complex (Which Isn't a Good Thing...)

The Rational Unified Process, on the other hand, is a fully featured software development lifecycle framework that tries to encompass all aspects of software delivery. Unfortunately this leaves consumers of RUP with the impossible task of trying to get their heads around a complex, dense, and sometimes contradictory piece of work.

It's important to note that the creators of RUP refer to the unified process as a process framework, meaning that the process should be customized to the particular needs of a project, program or IT organization. RUP should not be taken out of the box and applied as is.

The RUP framework is a complex web of disciplines (specific skill sets, i.e. requirements, design, development, etc.), phases (product lifecycle stages, i.e. inception, construction, etc.), process groupings, processes, artifacts, templates, gating criteria, and a bunch of other detailed process material, enough to make process geeks the whole world over cackle with glee, much to the detriment of those of us who are actually trying to get some work done. Here's a quick picture of all the pieces of RUP, written in UML, taken from my memory.

So what's wrong with the above picture? For one, its complexity and sheer volume of information make it incredibly difficult to actually find the valuable pieces of the process. Unfortunately, as of the last time I looked at the Rational Unified Process (which, to be fair, is ever evolving), it contained some incredibly good information (especially in the requirements and the analysis and design of object-oriented systems), but there was a huge amount of process detail that appeared to be ad hoc, added just to make the process complete. This seems to be a systemic problem with any detailed and "comprehensive" software methodology. The quality of different pieces of the process varies incredibly (the project management and testing material is laughably bad), and it takes a very intimate knowledge of the process framework and some very real experience using it to determine what to use and what to completely ignore.

But even if all aspects of the rational unified process were of proper quality, RUP is just too complex.

But RUP Still Has Incredible Value, It Just Needs to Be Pared down a Little...

Since RUP is a process framework, it's okay to tailor it to make it compatible with a useful style of delivery; in fact many of the fundamental pieces of RUP are completely compatible with an agile way of doing things.

Listed below are the portions of the Rational Unified Process that I consider actually valuable to a software development project...

  • Phases: like it or not, the kind of work you do during specific iterations is going to change over the length of any project, so lightly applying the notion of a lifecycle is a good idea

  • Disciplines: while agile projects categorize work into just a couple of roles, there really is more involved if you consider all aspects of delivery; RUP disciplines represent a reasonable way to organize the different types of work that need to be done, just be flexible in applying multiple roles to the same person

  • Principles and best practices: these are IMHO the most important part of the process. By tweaking the RUP best practices and principles to be a little more agile in nature, you're left with a solid foundation on which to base your development and delivery.

  • Roles: as stated previously, agile only has a couple of roles (e.g. Product Owner, Scrum Master, Developer). I personally like to see this list expanded, as long as everyone is clear that a single role does not equal one person; each person needs to fulfill many different roles.

  • Discipline-oriented guidelines: where many software development processes fail is that they offer either too little advice, or offer too much and present this advice as standards that must be followed. Detailed process material should be represented as a suite of advice, patterns, and lessons learned based on real project experience. Above all, this detailed process work needs to be presented as accelerators that can, but don't have to, be used. I think it's also okay to base guidance on the work of established authors, but make sure you test the waters on a real project before socializing the guidance across different project teams. So far, I have based my guidance, and implemented real projects, on work presented by the following authors; of course this is a continuing work in progress...

Please refer to Part 3 of this post for further details on other valuable components of RUP.


Sunday, February 1, 2009

You know you are not doing Agile if

A common misconception I find many people falling into when they are learning agile or using it for the first couple of times is that they think Agile = Waterfall + Iterations. I'll call the second approach "Iterative Waterfall" for the purposes of this discussion.

While both models may appear similar (both are iterative and consist of requirements, design, coding, and testing activities), there are significant differences that make them completely different animals. If we were to simply follow a waterfall-based approach with small iterations, then the flow of activities within an iteration would look like this:


As everyone is probably familiar with, this is the mini-waterfall (waterfalls within an iteration). The developers and customers follow a sequential set of activities as prescribed by the waterfall model. The flow of work is predictive and follows a set path. While this model is iterative, it is not agile. Why? The reason is that an agile model supports continuous learning and refinement not only between iterations but also within iterations. It does this by intertwining the requirements, design, coding, and testing activities into an unpredictable flow of work.



The developers and customers jump back and forth from one activity to another in an unpredictable manner. There is no set path. Instead, as they start implementing a user story or use case, they may discover new insights that require them to revisit and redefine their initial set of requirements, which in turn requires them to redesign the solution and create or rewrite tests and code. Iterative learning and change are enabled within each iteration. This allows the developers and customers to quickly and easily react to change and stay agile.

So, if you are doing agile right now, ask yourself which model you're using, and if you are not doing the second model then I strongly urge you to try it out. You will find that the agile practices fit better and allow your team to fluidly move from one activity to another. Remember, you are not doing agile if your team isn't continuously learning and able to react to new knowledge.