Testing in Software Development
Mistakes and how to avoid them
A frequent argument against extensive testing in software development is that it is unnecessary because the company already places great value on quality work, and that money can therefore be saved on software tests. On closer inspection, however, it quickly becomes clear that both points are easily refuted.
Consequences: Missing features and dissatisfied end-users
Misunderstandings can arise at various steps in the communication between customer and developer, so that the released application ends up missing features or unable to do what it should. In addition, the human factor cannot be neglected: even people doing their best work make mistakes. Moreover, changes to an existing system can cause malfunctions elsewhere that only surface when an end user complains about them.
And this brings us to one of the consequences for companies that try to save on testing. Errors can lead to customer dissatisfaction and thus damage the company's image. In addition, bugs that surface after the release have to be fixed as quickly as possible. The developers are tied up during this time and cannot continue working on new features, so that further development of the project is delayed, consuming the very money that skipping the tests was supposed to save.
Do's: Timely start and dedicated roles
All this makes it clear that testing is an essential part of every software development project. But what should you pay attention to in software testing, and what should you avoid? It is important to create the test cases early, i.e. as soon as the user stories are written. Ideally, an independent test team that is not too closely involved in the development is responsible for this, in order to avoid a certain operational blindness. The same applies to test execution: ideally, no one executes their own test cases, only those created by someone else. This also prevents a bottleneck in which the product owner, who has to approve the release, must sign off on all test cases, which can lead to delays.
It is essential to ensure that the test cases are well designed. This means that they are created on the basis of the user stories and ideally cover all the steps listed there. Testing should begin immediately after the developer has implemented the user story, so that the release date can still be met. Each test should also produce a test report that records exactly what worked and what did not, in order to keep track of the errors in a development sprint. Automating test cases for standard use cases (e.g. "I can download a text") or for errors that occur again and again can be a great help.
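As an illustration, such a standard use case can be automated with a few pytest-style test functions. The `download_text` function below is a hypothetical stand-in for the real application code; only the test structure is the point of this sketch.

```python
# Hypothetical stand-in for the application code under test; a real
# project would import its own download function instead.
def download_text(document_id: str) -> str:
    documents = {"doc-1": "Hello, world"}
    if document_id not in documents:
        raise KeyError(f"unknown document: {document_id}")
    return documents[document_id]


# Automated tests for the recurring standard use case "I can download a text".
# A runner such as pytest discovers and executes these functions on every build.
def test_known_document_is_downloaded():
    assert download_text("doc-1") == "Hello, world"


def test_unknown_document_is_rejected():
    try:
        download_text("doc-404")
    except KeyError:
        pass  # expected: unknown documents must raise an error
    else:
        raise AssertionError("expected KeyError for an unknown document")
```

Running such tests automatically after every change catches the "a change elsewhere breaks something" class of errors described above without tying up a tester.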
Plan 20% of the project volume for software testing
Of course, testing should not get out of hand: a test coverage of 95% negates all cost savings, as such a rate can only be achieved through extremely intensive testing. Planning 20% of the project volume is a good guideline to follow when introducing a testing procedure. Trying to save money by testing less, however, is pointless: the possible damage to the company's image, the development resources tied up in bug fixing, and the resulting delays to subsequent releases quickly outweigh the savings.