Popular Posts

Tuesday, February 12, 2008

Finding critical bugs at the critical moments of a release

The issues in a product are typically revealed during the functional/regression test cycles. Most textbooks and web resources on SQA suggest following a properly designed test process in order to catch bugs as early as possible. I strongly agree with that. However, adhering to that kind of process is not always practical. In my experience, more bugs are uncovered and caught by test teams once the product has matured and the team is familiar with and well aware of the functionality of the product. One may argue that good test design should anticipate all possible scenarios. However, that is not always true. Most of the practical and useful test scenarios are captured during the test execution stage, NOT in the design phase.
I'm not going to say that QA doesn't have to be part of early planning and reviewing to find issues. That will definitely help to capture design-level defects as well as to prepare test cases and scenarios. However, even with all these efforts, you will encounter last-minute issues once the product is mature and fully integrated and QA has gained enough knowledge about the product from a user's perspective.
Therefore, last-minute issues are not strange, especially in modern agile development processes.
In my experience, one of the most important aspects of agile QA is to reveal critical issues as quickly as possible at any stage of the development cycle. In agile methodologies, QA does not get the luxury of a proper release cycle and a sufficient schedule for comprehensive testing. Therefore, QA should be in a position to capture critical bugs quickly whenever a new build is handed over for testing.
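One common way to make this practical is to keep a small, fast subset of tests tagged so it can run against every new build before anything else. A minimal sketch, assuming a pytest-style test suite — the `checkout()` function and test names are invented here purely for illustration:

```python
# Sketch: tag a fast "smoke" subset with a pytest marker so a quick
# critical-bug pass can run on every new build (pytest -m smoke).
# checkout() is a hypothetical stand-in for the real feature under test.
import pytest


def checkout(items):
    # Placeholder for the production code path being exercised.
    return {"status": "ok", "count": len(items)}


@pytest.mark.smoke
def test_checkout_happy_path():
    # Critical path: must pass before any deeper testing starts.
    assert checkout(["book"])["status"] == "ok"


def test_checkout_large_order():
    # Slower, exhaustive case; left for the full regression cycle.
    assert checkout(["book"] * 10_000)["count"] == 10_000
```

Running `pytest -m smoke` then executes only the tagged tests, giving QA a quick answer on build health even when there is no time for a full cycle.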
So, what does it mean by finding critical bugs at the critical moments of a release?

Suppose you are assigned to test a software product which has already undergone a number of functional and regression testing cycles. The critical issues in such a product should therefore be minimal. Assume a simple performance fix has been made in order to improve the concurrency handling of the product, and a minor release is expected within a day. In such a situation, a traditional QA process may suggest doing a smoke test to ensure that the existing functionality is not affected by the fix. Yes, that is important. However, QA should also try out a set of ad-hoc scenarios in order to capture hidden issues which may have arisen due to the performance fix. QA/test engineers should use their experience to carry out some random checks to ensure that the quality of the product is not affected by the fix. There are situations where QA does not even get enough time to do a full smoke test. In such situations, QA can act wisely, using product knowledge and common sense to find regression issues as quickly as possible.
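For a concurrency-related fix like the one above, one quick ad-hoc check is simply to hammer the changed code path from several threads and verify that nothing is lost. A rough sketch — the `Counter` class here is hypothetical, standing in for whatever shared state the performance fix touched:

```python
# Ad-hoc concurrency check (a complement to, not a substitute for, a
# smoke test): drive the fixed code path from many threads at once and
# assert the end state is consistent. Counter is an invented stand-in.
import threading


class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self):
        with self.lock:  # the kind of guard a concurrency fix might add
            self.value += 1


def hammer(counter, n):
    for _ in range(n):
        counter.increment()


counter = Counter()
threads = [threading.Thread(target=hammer, args=(counter, 10_000))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With correct locking, no increments are lost across the 8 threads.
assert counter.value == 8 * 10_000
```

A check like this takes minutes to write, yet it probes exactly the behaviour the fix was supposed to change — which is the point of ad-hoc testing at the critical moment of a release.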
In other words, QA should have a good understanding of exploratory testing. I will post a new blog entry on my experience with exploratory testing in due course.


Evan said...

It's really difficult to do a full round of testing within a day or two. After the code freeze, QA should be given at least 5 days to ensure the quality of the build. Without that, there is a huge possibility of bugs remaining when the product is released. So I agree with what you have blogged about.

About exploratory testing, yes, I agree with you on that.

Iranga said...

In my experience, we discover critical defects at the end of the test cycle due to the traditional focus on achieving test coverage. We shift ad-hoc/exploratory testing to the final day of testing, or to when we are satisfied that significant test coverage has been achieved and assume that the build is functionally stable. With agile, even having a day for exploratory testing is a luxury.

I believe the problem associated with ad-hoc testing is mapping it to test coverage: how do we quantify/justify the effort put into ad-hoc testing? Can we convince our clients to pay for ad-hoc testing? How would they see the test results? This is all because exploratory testing depends upon the tester's intuition, creativity and experience.

My opinion is that we should focus on exploratory testing at the beginning of the cycle, even in parallel with smoke testing. The challenge is to change the mindset of the non-QA folks.

Charitha said...

Yes, I agree with you, Iranga. We should focus on exploratory testing at the beginning of the cycle, even in parallel with smoke testing.
The testing effort and coverage can be quantified by designing traditional test cases and publishing the results. I don't say that these activities are not important. In a typical customer project, publishing test results and estimating the testing effort are extremely important. What I wanted to say was that QA should not adhere only to a set of pre-defined test cases/scenarios when testing a product.
Let me propose a solution which can be applied to a typical customer project.

If you have a team of 5 members assigned to test a product, at least one SMART bug hunter can be allocated to perform ad-hoc/exploratory testing throughout the development cycle. He/she should be kept away from the traditional test/documentation tasks. The rest of the team can be involved in designing test cases, making estimations, updating test cases with results and preparing the comprehensive result documents which will be submitted to the client. After some time, a new engineer should take over the ad-hoc/exploratory testing, so that the responsibility is rotated among the team. A new pair of eyes will look at the product in a different way and catch more bugs.