
In my previous organization it took us about three years to really get good at Agile.

As the head of the QA department I had to make a lot of adjustments. Agile can be the best friend or the worst enemy of quality. The trick is to remember: garbage in, garbage out. Well-defined, detailed acceptance criteria evolved by the full team are the key. These criteria do not come from the 30-minute high-level planning meeting where the stories for the sprint are selected. They come from the subsequent detailed discussions on specific stories in which the BAs, customers/business reps, QA, and developers get into the weeds.

Enablers to making Agile work are: a good scrum master/process mentor; having as much of the team co-located as possible so they can meet frequently; good tools like Rally, VersionOne, JIRA, or QC to manage both the delivery and quality aspects; a rigorous test process which fully validates both the functional and non-functional acceptance criteria, including data and integration; and continuous integration of code as it is built, with automated code analysis to ensure code quality.

In terms of the transition from waterfall to Agile delivery, my advice is to keep the training wheels on: you may not be able to get requirements, development, and testing to be fully iterative in the near term. I recommend keeping the requirements phase, but including the full team in requirements discussions and performing static requirements analysis to drill down to acceptance criteria. It is easier and more effective to combine the dev/test cycle first, as these are usually controlled by pure IT resources. The first issue when combining dev/test cycles is usually the environment: where to test and how to build? If your shop is large enough, I recommend splitting the QA team into a functional team and a regression team.
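To make the acceptance-criteria point concrete, here is a minimal sketch of the kind of "ready" check I mean: a story should not enter a sprint until the full team has drilled down to both functional and non-functional criteria. The story names and criteria below are hypothetical, and the model itself is only illustrative, not a real tool.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A sprint story with team-evolved acceptance criteria (illustrative model)."""
    title: str
    functional_criteria: list = field(default_factory=list)
    nonfunctional_criteria: list = field(default_factory=list)  # perf, data, integration

    def is_ready(self, min_criteria: int = 1) -> bool:
        # 'Ready' only when both functional and non-functional acceptance
        # criteria have been detailed by the full team, not just a title
        # picked in the high-level planning meeting.
        return (len(self.functional_criteria) >= min_criteria
                and len(self.nonfunctional_criteria) >= min_criteria)

# Hypothetical backlog:
stories = [
    Story("Export report",
          functional_criteria=["CSV totals match on-screen totals"],
          nonfunctional_criteria=["Export of 10k rows completes in under 5s"]),
    Story("Vague login tweak"),  # no criteria yet -> garbage in, garbage out
]

not_ready = [s.title for s in stories if not s.is_ready()]
print(not_ready)  # ['Vague login tweak']
```

A check like this can be run against whatever tool holds the backlog (Rally, VersionOne, JIRA) before sprint planning closes.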
The functional team runs with development story by story, mostly manually with some value-added automation, while the regression team works in the higher-level (usually QA) environment, maintaining and running regression tests and building out automation asynchronously.

In terms of defects, the game changes a bit. For Agile quality metrics, I track requirement defects, blocking defects, defects that carry over across sprints, and stories and defects which do not close and either have to be carried over or are sent back to the backlog to be reassessed. I am a firm believer that a story should be cancelled if it has been incorrectly scoped or its dependencies are not met, but these items need to be managed tightly.

In terms of quality reporting, I like to monitor the burn-down of story points for each sprint, spikes where the story points increase dramatically due to defects, any defects that get carried over, and the burn-up for each release against the product backlog. I consider stories that do not close 'risks', and you need to define a tolerance level for these situations.
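The carry-over and tolerance-level idea above can be sketched in a few lines. The status names, defect counts, and the tolerance value are all made-up assumptions for illustration; the point is only that unclosed stories and carried defects are counted and compared against an explicit threshold at sprint end.

```python
from collections import Counter

# Hypothetical end-of-sprint records: each story has a status and a defect count.
sprint_stories = [
    {"id": "S-1", "status": "closed", "defects": 0},
    {"id": "S-2", "status": "carried_over", "defects": 3},
    {"id": "S-3", "status": "sent_to_backlog", "defects": 1},  # incorrectly scoped
    {"id": "S-4", "status": "closed", "defects": 2},
]

status_counts = Counter(s["status"] for s in sprint_stories)
open_stories = status_counts["carried_over"] + status_counts["sent_to_backlog"]
carried_defects = sum(s["defects"] for s in sprint_stories
                      if s["status"] != "closed")

RISK_TOLERANCE = 1  # assumed max unclosed stories per sprint before escalating
at_risk = open_stories > RISK_TOLERANCE

print(open_stories, carried_defects, at_risk)  # 2 4 True
```

In practice these numbers would come from the delivery tool's API rather than a hard-coded list, and the tolerance level is something each team has to agree on for itself.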
