Sunday, January 28, 2007

Managing a Test Manager or Seeing the Big Picture

I was waxing nostalgic about my old career as a test manager and found a blog entry making the same rant I used to make all the time: http://shrinik.blogspot.com/2006/12/why-counting-is-bad-idea.html

I wrote a comment on the blog, and although it’s written to other testers, I think its biggest value could be to the people who have to manage test managers.

When I was a test manager, almost without exception, I was managed by people who had no clue about a) the purpose of test, b) what test managers and test teams could do for them, and c) when or why test needed any of their time, except for ship weeks. I had to teach my bosses the answers to all three of those things, but it was often hard because they thought they already had them.

One of the lessons I learned as a test manager, combined with those I learned running my own business, can be summed up like this: achieving the big picture is more important than getting people to tell you about it. Yes, managers have to know the progress towards a goal, but there is a right way and a wrong way to get that. I think every manager needs to be reminded occasionally that the purpose of employees is not to make their job easier. And sometimes, as with a test team, it isn’t as straightforward as it seems. Understanding the underlying strategic purpose of each person’s or group’s job is really the key to success.


-------------------------------------


Ok, back to the post. “Why counting is a bad idea” basically says that anybody who thinks they are getting good information when test is forced to produce reports that say things like “Number of Test cases prepared: 1230, Number of Test cases Executed: 345, Number of Test cases Failed: 50” is wrong.


As an ex-test manager I disagree that numbers are bad. They are vital, but probably not for the reasons you think.

The purpose of testing is not to find a large number of bugs. I remember a project where I had two very different developers. Developer 1 was a little sloppy, did mostly UI-type programming, and his large weekly check-in was worth an easy 20 bugs. Developer 2, who was a god, did the deep architectural stuff and checked in perhaps once a month. We were lucky if we got one bug off of his code with a solid week of testing. Typically, all the bugs from the testers assigned to these developers were high-risk bugs, but 80 bugs to 1 bug was no way to judge the productivity of the testers or the stability of the product.

Testing’s job is not quality assurance, either. Everybody on the entire product team must own quality. If anything close to a “They will find this in test” attitude grows in your development team, that product is doomed to be low quality. It will never feel right and it will never be smooth, no matter how much time and effort the test team puts into it. The old joke among Microsoft Test Managers was “I own quality for the length of time it takes my boss to walk to my office.”

So, if the purpose of the test team isn’t to find a large number of bugs (even high-risk bugs) and it’s not to assure quality, what is it that they are asking us to do?

The purpose of the test team is to accurately communicate the status of the product.

The amount of testing applied and the number of bugs reported are the best way to know the current status, but only when that status is communicated in the context of the entire environment, which includes people, bugs and coverage, project goals and schedule. Those who try to simplify something this big and complex into “500 cases run and 4 bugs” are very likely to have a huge surprise waiting for them at ship time.

Or to put it another way: a spec isn’t done when it hits 100 pages, so why is it logical to think test is done at 100 test cases?

In the above example, I, the test manager, knowing what I knew about the changes the developers were making, seeing the number, location and types of bugs, and talking to the whole product team (especially the testers), could make a very good educated guess at the accurate product status.

I could say something like: on a scale of one to ten, we are a three. Last week we were a two, but there is a lot of new code coming in, so I suspect next week we will be a two again. After that we are doing a stability drive towards beta, and if things go as planned (and we are mostly on track for it), right now I have a confidence level of nine. These are the numbers they really want but don’t know how to ask for. This kind of roll-up needs to be done for all the different product areas, too.
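If it helps to see it written down, a roll-up like that might look something like the sketch below. This is purely an illustration; the field names and the one-to-ten scales are made up for the example, not a format anybody prescribed.

# A rough sketch of a weekly status roll-up, one entry per product area.
# The field names and the one-to-ten scales are just for illustration.

from dataclasses import dataclass

@dataclass
class AreaStatus:
    area: str        # product area, e.g. "UI" or "Core engine"
    readiness: int   # where the area stands today, one to ten
    last_week: int   # last week's readiness, so the trend is visible
    confidence: int  # confidence in hitting the next milestone, one to ten
    notes: str       # the context behind the numbers

weekly_rollup = [
    AreaStatus("UI", readiness=3, last_week=2, confidence=9,
               notes="Lots of new code landing; expect a dip next week."),
    AreaStatus("Core engine", readiness=7, last_week=7, confidence=8,
               notes="Stable; one long-running stress issue under watch."),
]

for status in weekly_rollup:
    trend = status.readiness - status.last_week
    print(f"{status.area}: {status.readiness}/10 ({trend:+d} vs last week), "
          f"confidence {status.confidence}/10 -- {status.notes}")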

If you do not give the rest of the product team that kind of black-and-white numerical status roll-up, they will find some horrid way to squeeze one out of you, and they generally go to test case counts or worse.

One exception to numbers that I have seen work is pictures. I’ve known some product unit managers who were happiest with something as simple as a picture of a stoplight: Green – things were humming along as planned; Yellow – there were some storm clouds gathering that should be addressed now; and Red – the product was blocked or there was some other huge catastrophe. It sounds simple, but it generally takes the test manager quite a few hours to color it accurately.
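To be clear, the coloring itself is trivial; a toy mapping like the one below would do, and the thresholds here are invented purely for illustration. The hours go into gathering and honestly weighing the inputs that feed it.

# A toy sketch of the stoplight roll-up. The thresholds are invented for
# illustration; the real work is producing honest inputs, not this mapping.

def stoplight(readiness, confidence, blocked):
    """Return 'Red', 'Yellow' or 'Green' for one product area."""
    if blocked or readiness <= 2:
        return "Red"      # blocked, or some other huge catastrophe
    if readiness <= 5 or confidence <= 6:
        return "Yellow"   # storm clouds gathering; address them now
    return "Green"        # humming along as planned

print(stoplight(readiness=3, confidence=9, blocked=False))  # Yellow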

The management team just wants some simple way to accurately know the status of the project. It’s not their desire for numbers that is wrong. In the absence of anything else, they ask for what they think they want. It works out much better if the test team can supply them with the status numbers they really need before they think to ask for those other abominations.
