Agile and CMMI

I have wanted to talk about this issue for a long time, but I didn't have much to share with you.

My first trigger for this post came a few months ago, when the Software Engineering Institute (SEI) of Carnegie Mellon University published a report under the name CMMI or Agile: Why Not Embrace Both!, which is a confirmation from the CMMI authors that you can satisfy the requirements of both.

The report summarizes the history of both paradigms and explains why CMMI is perceived as waterfall when it isn't. It also summarizes the misconceptions about both ideas and how to overcome them, and it ends with a call to action for both practitioners and trainers to bridge the gap between the two.

The report also addresses other issues such as:

  • The Origins of Agile Methods
  • The Origins of CMMI
  • Lack of Accurate Information
  • Terminology Difficulties
  • There Is Value in Both Paradigms
  • Challenges When Using Agile
  • Challenges When Using CMMI
  • Problems Not Solved by CMMI nor Agile

Yesterday I finished a two-day workshop on the same issue, where we worked through some real-life case studies to discover how we can come up with processes that satisfy both, and got useful feedback from experts in both schools.

If I can summarize what I learned in those two days, it is that you first have to be convinced of why you need Agile and why you need CMMI; once you are, you will be able to decide to what level you want to embrace each, and which practices would "add value" to your organization (not just individual projects).

Finally, I would like to refer you to this two-year-old post by Jeff Sutherland, in which he says that Scrum supports CMMI Level 5.

I have failed a sprint!

Welcome back, everyone. I have been busy for the last month, working hard to achieve a substantial success in this sprint, as it had special value to me: first, because it came after a long idle period, and second, because we had just resolved some external issues that had held us back in previous sprints.

At the beginning of the sprint we moved at a high velocity, without any overtime and without affecting quality, which impressed me. Later, some tasks required more effort than expected, so we started losing velocity. We were still in line with the optimal burn-down, but not for long.

I was still convinced that the remaining tasks could be squeezed into the final few days of the sprint, so we kept up the hard work, crossing our fingers to finish on time. Unfortunately, we had to extend the sprint by one day, drop some features, and jeopardize quality, and finally (as expected) the final build was rejected.

It is hard to admit failure, but the bitterness of failure is the medicine that heals your weaknesses. Our sprint retrospective was really fruitful and full of lessons learned, so here are some:

  1. Pairing people on a task does not always divide the effort (the famous example: nine women cannot give birth to a baby in one month).
  2. Instead of pairing, you can divide tasks into smaller ones, which has more benefits: more accurate estimation, a checklist that defines what is meant by "done", and fewer dependencies between tasks.
  3. It is better to be pessimistic in your estimation until your team proves otherwise than to commit to something you are not 100% sure you can achieve.

And the personal lesson I learned was:

  1. It feels good to be Superman, but it feels much worse when you fail to prove it.

Coding to the core

I remember that in my first job, I joined a project based on an infrastructure built by other developers who had left the company.

During my first days of training, I was told to "never touch" that infrastructure, which even made me write a funny poem about it on our whiteboard.

I was always encouraged to find workarounds rather than modify the infrastructure. And when a change was really needed, it had to be kept minimal, done under close supervision, treated with high suspicion, and always considered the first suspect for every bug that appeared later.

In the following years, I started to read about good software design and refactoring, so I was always leaning toward "butchering" old code, but I depended on my own manual testing to verify that I hadn't broken anything in my refactoring "rides". Manual testing was good about 90% of the time, but a few cases slipped past me, and hence I got that look: "we told you not to touch it!!". I was much more experienced by then, but that didn't give me the right to try something that had been considered a "taboo" before.

I still remember how terrified one of my junior colleagues was when I proposed adding a method to a base class instead of adding it to every subclass.

Today, one of my colleagues asked me about a problem, and after 5 or 10 minutes of discussion we found that one of the solutions was to add the new required functionality to the base class (maybe it was not the best solution, but it was better than the one she had implemented). I wondered why she hadn't thought of it in the first place, and, as I expected, she didn't want to "touch" the base class even though it was the straightforward solution, so she had gone looking for workarounds instead. I was happy I encouraged her not to fear the "core".
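To make the idea concrete, here is a minimal sketch of that kind of change (the class names are made up for illustration, not taken from our project): instead of duplicating the new behavior in every subclass, you add it once to the base class and every subclass inherits it.

```java
// Hypothetical example: a shared render() method is added to the base
// class once, rather than being copy-pasted into every subclass.

abstract class Report {
    abstract String body();

    // New shared functionality lives in the base class.
    String render(String title) {
        return "=== " + title + " ===\n" + body();
    }
}

class SalesReport extends Report {
    @Override
    String body() {
        return "sales figures...";
    }
}

class InventoryReport extends Report {
    @Override
    String body() {
        return "stock levels...";
    }
}

public class Demo {
    public static void main(String[] args) {
        // Both subclasses get render() for free.
        System.out.println(new SalesReport().render("Q1 Sales"));
        System.out.println(new InventoryReport().render("Warehouse A"));
    }
}
```

The workaround version of this would repeat the same formatting code in every subclass (or bolt it on somewhere outside the hierarchy), which is exactly the kind of duplication that makes the "core" scarier to touch over time.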

My point is: why do we keep building barriers to improvement? Development is an innovative process, and it is the duty of seniors to encourage juniors to learn and try. In addition, we have all the tools to help us in our mission: source control, refactoring helpers, clone detectors, code analysis, unit testing, and more. And believe me, the cost of maintaining a badly designed project is much, much higher than the cost of a bug that might slip out of the core.

I also have to stress that refactoring with freedom has to be coupled with extensive Unit Testing and Code Coverage, but that might be the topic for another post.
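As a small taste of that safety net, here is a tiny JUnit 5 style test for the hypothetical classes sketched above; a test like this would catch a regression in the shared base-class behavior before a refactoring "ride" breaks every subclass at once.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportRenderTest {

    // Guards the behavior that was pulled up into the base class, so a
    // future refactoring of Report.render() cannot silently break it.
    @Test
    void renderPrefixesTitleBeforeBody() {
        Report report = new SalesReport();
        assertEquals("=== Q1 Sales ===\nsales figures...",
                     report.render("Q1 Sales"));
    }
}
```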