Often, during the release of new software, the QA team feels like the person holding the glass in the picture above: doing their best to catch all of the coffee, knowing they cannot quite catch all the bugs, and hoping that the release, when it ships, does not FAIL. To help change the industry, and to liberate the silenced voice of most QA teams (who have probably made these points in the past), here is our list of the top software testing mistakes a development team MUST avoid in order to ship healthy, stable releases of software.

Regression testing is not carried out


There is no excuse for not carrying out a comprehensive regression prior to releasing software. Tools are abundant, and some are free. The value of verifying that all existing functionality still works before a release is immense: if your latest release breaks existing functionality, almost no new feature or bug fix is worth the risk. Yet the number of software releases made with no or minimal regression testing far outweighs the number made with a complete regression.

Releasing software without a regression is a bit like playing Russian roulette: you load one bullet into a six-chamber revolver, point it at your head, and pull the trigger.

Regression testing takes too long or never completes

Regression testing can be time-consuming if it is carried out, for example, only by manual testers. A manual tester can execute roughly 50 test cases per day. With 1,000 test cases and a single tester, it is going to be a long wait for the regression to complete; in all likelihood the next release will have arrived before the regression finishes, and the regression has to start all over again.
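A rough back-of-the-envelope sketch of that math, assuming the roughly 50-cases-per-day throughput quoted above (the function name is purely illustrative):

```python
# Back-of-the-envelope estimate of manual regression duration.
# The throughput figure (~50 test cases per tester per day) is the
# assumption quoted in the paragraph above, not a universal constant.

def manual_regression_days(total_cases: int, testers: int,
                           cases_per_tester_per_day: int = 50) -> float:
    """Working days needed to execute a full manual regression."""
    return total_cases / (testers * cases_per_tester_per_day)

# 1,000 test cases with a single tester: 20 working days, roughly a month
# of calendar time, and often longer than the gap between releases.
print(manual_regression_days(total_cases=1000, testers=1))  # -> 20.0
```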

As a result, the regression is never completed and the software is released anyway. When defects are then found in production, fixing and re-releasing costs far more than a complete regression carried out before release would have.


Regression is only carried out with automation

If you use just automation, the execution time for test cases drops dramatically: you can run hundreds of test cases in a few hours. However, this is not a panacea. Here is why.


For example, if there are 300 test cases, the average execution failure rate is around 35%, and of those failures about 20% are false failures. Digging through those failures and figuring out which ones are valid and which are not takes between 13 and 15 person-hours. Furthermore, since a new release is usually a mix of new features, modified features, and bug fixes, some of the automation test cases are no longer valid.
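The triage arithmetic behind those figures can be sketched roughly as follows; the per-failure triage time is an assumed average chosen to land in the quoted 13-15 hour range, not a measured value:

```python
# Illustrative triage math for the figures quoted above: 300 test cases,
# ~35% execution failures, ~20% of those failures being false failures.
total_cases = 300
failure_rate = 0.35          # share of executed cases that fail
false_failure_rate = 0.20    # share of those failures that are false alarms

failures = total_cases * failure_rate            # 105 failed cases to investigate
false_failures = failures * false_failure_rate   # ~21 false alarms among them

minutes_per_failure = 8      # assumed average triage time per failed case
person_hours = failures * minutes_per_failure / 60

print(f"failures to triage: {failures:.0f}")        # 105
print(f"false failures:     {false_failures:.0f}")  # 21
print(f"triage effort (h):  {person_hours:.1f}")    # ~14.0
```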

Finally, test-case-based testing alone is not enough. Mixing in exploratory testing that goes beyond the scripted test cases can radically improve the quality of the build.

An automation-only regression is still far better than doing no regression at all, or only a partial one. But while raw execution time drops to mere hours, the overall effort can stretch to days, or never complete, once you include the time to identify false failures and to fix and re-execute test cases for features that have changed.

Regression is only carried out with ad hoc testing

Ad hoc testing is an informal and random style of testing performed by testers who are well aware of how the software functions. It is also referred to as random testing or monkey testing. The testers may refer to existing test cases and pick some at random to test the application, or they can design new cases on the fly. The testing steps and scenarios depend on the tester, and defects are found by random checking.

Relying on ad hoc testing alone is risky, as there is no thorough, consistent baseline of functionality being validated. In addition, it is almost always under-resourced, and there is a high likelihood of overlap if multiple testers are carrying out ad hoc testing simultaneously.

Read more: https://www.webomates.com/blog/top-5-software-testing-mistakes/
