Friday, March 26, 2010

How to Measure Software Team Efficiency and Productivity


How do you communicate software productivity to non-technical executive peers? The productivity dilemma: what do we measure? Productivity is usually expressed as a ratio of output to input, but that assumes we know what the units of output and input are, and that both are continuous and linear.

If you can’t measure input and output, it is very difficult to measure productivity. Attempts have been made over the years, and some of the better-known studies are:

  • ITT - Advanced Technology Center (1984)
  • USC - System Factory (1990)
  • MIT - IT and Productivity (1995)

These studies conclude that the key factors that influence productivity are (yes, these are fairly obvious):
  • requirements and specification clarity
  • project size and complexity
  • teamwork and experienced personnel

However, they also gave us some very poor measures, such as lines of code. Any good engineer knows why, so I don’t need to elaborate on the failure of that measuring stick.

So we’re still left with the question, what do we measure?

We measure our process around key dependencies (how we build the software vs. the software itself). We look at the following:

  • Requirements clarity – is the problem to be solved well understood and documented?
  • Scope management – did we define scope well and manage inevitable change requests?
  • Reliability – does the software work?
  • Usability – does the customer adopt it?

To accomplish these goals we follow some type of methodology:
  • Waterfall, Agile, XP, Iterative, etc.

Why is a methodology important? It creates a clearly defined process that all team members can understand, thereby reducing risk and improving quality and productivity. I will not advocate any particular methodology as better than another; it depends on so many factors, including team size, market dynamics, project complexity, etc. Personally I prefer a hybrid Agile-Waterfall methodology, which I will outline in another article.

Another important factor in software development is defining and understanding the multiple roles involved in creating software. The roles include:

  • Product Management
  • Product Designers
  • Software Engineers
  • Project Management
  • Quality Assurance
  • Release Management
  • Product Marketing


Each role has a unique demand curve in the project lifecycle. I can’t stress enough that a key factor in productivity and performance is a properly balanced team (in terms of roles), so that there is minimal idle time and minimal bottlenecking on the critical path.
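To illustrate what a role-balance check might look like, here is a minimal sketch; the phases, capacities, and hour figures are invented assumptions rather than data from a real project:

```python
# Minimal sketch: flag idle and bottlenecked roles per project phase.
# All phase names, roles, and hour figures below are illustrative assumptions.

# Demand (people-hours) per role for each phase of the project lifecycle.
demand = {
    "Analysis": {"Product Management": 120, "Product Designers": 80,  "Software Engineers": 40,  "Quality Assurance": 10},
    "Design":   {"Product Management": 40,  "Product Designers": 160, "Software Engineers": 80,  "Quality Assurance": 20},
    "Code":     {"Product Management": 20,  "Product Designers": 40,  "Software Engineers": 320, "Quality Assurance": 80},
    "QA":       {"Product Management": 10,  "Product Designers": 20,  "Software Engineers": 80,  "Quality Assurance": 200},
}

# Hours each role can actually supply per phase, given current headcount.
capacity = {"Product Management": 80, "Product Designers": 80,
            "Software Engineers": 240, "Quality Assurance": 120}

for phase, needs in demand.items():
    for role, hours in needs.items():
        available = capacity[role]
        if hours > available:
            print(f"{phase}: {role} is a bottleneck ({hours}h needed, {available}h available)")
        elif hours < 0.5 * available:
            print(f"{phase}: {role} is largely idle ({hours}h needed, {available}h available)")
```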

Now we get to the heart of the matter. We look at what I call Software Units.

Software Units are designed to measure our process. A Unit is established at the most granular level of each role, grouped by its contribution to a complete project. A Unit represents people-hours contributed to a project, not a date measurement; dates must be calculated from units and total resources.
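To make that distinction concrete, here is a minimal sketch of how a Unit might be represented and turned into a calendar estimate; the role names, hour figures, and team sizes are assumptions for illustration, not a prescribed breakdown:

```python
# Minimal sketch: a Unit is a bundle of people-hours per role, not a date.
# Calendar duration is derived from the units required and the staff available.

# Hypothetical definition of one Unit (people-hours by role).
UNIT_HOURS = {
    "Product Management": 80,
    "Product Designers": 120,
    "Software Engineers": 400,
    "Quality Assurance": 160,
}

def estimate_weeks(units, headcount, hours_per_week=40):
    """Rough calendar estimate: the most heavily loaded role sets the pace."""
    weeks_per_role = []
    for role, hours in UNIT_HOURS.items():
        total_hours = hours * units
        people = headcount.get(role, 0)
        if people == 0:
            raise ValueError(f"No one staffed for {role}")
        weeks_per_role.append(total_hours / (people * hours_per_week))
    return max(weeks_per_role)

# A hypothetical 3-Unit project with a hypothetical team.
team = {"Product Management": 1, "Product Designers": 1,
        "Software Engineers": 4, "Quality Assurance": 2}
print(f"Estimated duration: {estimate_weeks(3, team):.1f} weeks")
```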

The Unit becomes the core measurement baseline.

To create this baseline, look at three sample project sizes based on your company’s recent history: small, medium, and large. Now pick a representative recent small project and determine all the roles, resources, and hours involved; that will equal one Unit. For example, you may have several recent two-month projects; if that is small for your organization, define it as such.

To help the rest of the organization relate to this, we take historical projects in recent memory and assign each a unit value. We do this for each project size. Now peers outside the software department can relate, and we have a common language.
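A minimal sketch of how that baseline assignment might work, assuming hypothetical project records (the project names and hour totals are invented for illustration):

```python
# Minimal sketch: derive the Unit baseline from one representative small project,
# then express other historical projects as multiples of that Unit.
# Project names and hour figures are hypothetical.

# Total people-hours recorded for some past projects, all roles combined.
history = {
    "Reporting widget (small)": 760,
    "Billing rework (medium)": 2300,
    "Platform migration (large)": 6100,
}

# Pick the representative small project as the baseline: 1 Unit.
baseline_hours = history["Reporting widget (small)"]

for project, hours in history.items():
    units = hours / baseline_hours
    print(f"{project}: {units:.1f} Units")
```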

By graphing each role’s contribution, in both total time and “demand over time”, we can establish the proper ratio of team members. This ratio is key to optimal productivity.
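As a rough illustration, the staffing ratio can be read straight off each role’s total hours in one Unit; the figures below are the same hypothetical numbers used in the earlier sketch:

```python
# Minimal sketch: derive a staffing ratio from each role's total hours in one Unit.
# Hour figures are illustrative assumptions.
from functools import reduce
from math import gcd

unit_hours = {
    "Product Management": 80,
    "Product Designers": 120,
    "Software Engineers": 400,
    "Quality Assurance": 160,
}

# Reduce the hours to the smallest whole-number ratio, e.g. 2 : 3 : 10 : 4.
divisor = reduce(gcd, unit_hours.values())
ratio = {role: hours // divisor for role, hours in unit_hours.items()}
print(" : ".join(f"{role} {share}" for role, share in ratio.items()))
```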


Now we can size projects using a consistent and understood method and can therefore measure our accuracy and consistency.

Next we look at other measurements, including:

  • MRDs, or inputs to the software team. The quality of an MRD can be measured by scope creep as well as by the Product Design time spent clarifying it and developing the PRD.
  • LOEs – how well are we estimating? (See the sketch after this list.)
  • P1, P2, and P3 bugs entering QA, and after GA.
  • Delivery date accuracy.
  • Scope change after MRD, PRD, code, and beta.
  • Relative time spent in each of the major activities (analysis, design, code, QA) between projects of equal size and complexity.
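Here is a minimal sketch of how a few of these measurements could be tracked per project; the field names and sample values are assumptions, not a prescribed schema:

```python
# Minimal sketch: track estimate accuracy, delivery slip, and scope change per project.
# Field names and sample values are hypothetical.
from datetime import date

projects = [
    {
        "name": "Reporting widget",
        "loe_weeks": 8, "actual_weeks": 9,           # LOE vs. reality
        "committed_date": date(2010, 1, 15),
        "shipped_date": date(2010, 1, 29),           # delivery date accuracy
        "scope_changes": {"MRD": 2, "PRD": 3, "CODE": 1, "BETA": 0},
    },
]

for p in projects:
    loe_error = (p["actual_weeks"] - p["loe_weeks"]) / p["loe_weeks"]
    slip_days = (p["shipped_date"] - p["committed_date"]).days
    total_changes = sum(p["scope_changes"].values())
    print(f'{p["name"]}: LOE off by {loe_error:.0%}, '
          f"slipped {slip_days} days, {total_changes} scope changes")
```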

How do we reliably hit delivery dates?
  • Proper market assessment of the product opportunity
  • Solid customer needs analysis to prioritize key functions and requirements
  • LOE – the level of effort produced by the software team is within weeks of the actual completion date
  • Establish priorities and do not deviate from them

“The plans of the diligent lead to profit as surely as haste leads to poverty” -- Proverbs 21:5
