In our last article, we talked about the importance of measuring the effectiveness of your organization's testing programs. Since testing alone can make up over a third of the budget for any IT project, it's important to know how well you're doing in this area.
In this post, we’ll be discussing the 14 must-haves for measuring your group’s testing success. Using these as a guide to continuously improve your testing operations will help ensure your team can deliver on schedule with consistent quality. Here are the 14 areas you should be watching:
Since testing, in its fullest sense, has a great influence on product quality and consists of complex activities that are usually carried out under tight schedules and high pressure, you need a well-trained, dedicated group in charge of this process. The test group oversees test planning, test execution and recording, defect tracking, test metrics, the test database, test reuse, test tracking, and validation.
Management and technical staff now realize that carrying out testing activities in parallel with all life-cycle phases is critical for software project quality. To that end, a tight coupling between configuration/development and testing groups is recommended.
For most payer organizations, the primary method for both quality assessment and improvement is the master test plan or test strategy. Test approaches should describe the high-level strategies and methodologies used to plan, organize, and manage testing of software projects within your company. The test approach should include descriptions of testing's role at each phase of the project development cycle.
You should take a close look at each area of your testing strategy and planning, then map out a path to reduce the number of issues that escape testing.
Also, make sure you set quality objectives. Setting explicit objectives drawn from both external and internal sources will help you push toward your goals of acceptable quality. Without explicit goals, your team may end up working toward goals other than the ones you expect.
Planning is essential for a process that is to be repeated, defined, and managed. Project test planning requires stating objectives, analyzing risks, outlining strategies, and developing test-design specifications and test cases. Test planning also involves documenting test-completion criteria to determine when the testing is complete. In addition, the test plan must address the allocation of resources, the scheduling of test activities, and the responsibilities for testing at the unit, integration, system, and acceptance levels.
Test execution calls for a clear distinction between testing and debugging. The goals, tasks, activities, and tools for each must be identified, and responsibilities must be assigned. Management must accommodate and plan for both processes. I recommend separating the two; doing so is essential for testing maturity growth, since they differ in both goals and methods. Managed testing is a planned activity with time frames, given resources, requirements, and quality goals. Managed debugging, however, is more complex, because it is difficult to predict the nature of the defects that will occur or how long they will take to repair. To reduce the unpredictability often caused by large-scale debugging activities, project managers must allocate time and resources for defect remediation, repair, and retest.
Project test cases should be entered into a testing repository, whether that is an Excel spreadsheet or a test case database such as Mercury. The repository should track and report the success or failure of each test case. Tracking should record when each test was run, by whom, and against which code version/build number, along with any comments logged after the run. The repository should also provide project management with reports.
Once you have a testing repository, all stakeholders need to review the test cases. A trace matrix should be implemented to map test cases back to requirements. This produces a coverage report that will expose any gaps in your testing.
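The trace-matrix idea can be sketched in a few lines. Here is a minimal illustration of producing a coverage gap report from a mapping of test cases to requirements; the requirement IDs, test case names, and mappings are hypothetical examples, not a prescribed schema.

```python
def coverage_gaps(requirements, trace_matrix):
    """Return requirements that no test case maps back to."""
    covered = {req for reqs in trace_matrix.values() for req in reqs}
    return sorted(set(requirements) - covered)

# Hypothetical requirements and trace matrix for illustration only.
requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]
trace_matrix = {
    "TC-101": ["REQ-001", "REQ-002"],  # each test case lists the
    "TC-102": ["REQ-002"],             # requirements it verifies
}

print(coverage_gaps(requirements, trace_matrix))  # ['REQ-003', 'REQ-004']
```

Anything the function returns is an untested requirement, which is exactly the gap a coverage report should surface to the stakeholders reviewing the repository.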
Lastly, once you have the repository and your coverage report, test data must be created. Each testing group should have a plan for what test data to use and where it will be stored. I recommend keeping control of all test cases; storing them so that they're reusable in the future is invaluable.
Most health plans use different software development life cycles (SDLCs) based on history and resources. Within any SDLC, a testing process is required in some form. Software testing should be an empirical validation of the business requirements configured in the system. Your testing team's objective should be an independent view of the configurations built to support your markets or lines of business. Too often, software testing is only an afterthought, and the processes in the testing group are a hodgepodge of standards patched together after past failures, with no thought given to the overall SDLC or its standards.
The difference between a patched process and one that is thought out and structured is its effectiveness and efficiency in delivering high quality products.
The SDLC contains dozens of subprocesses, each of which affects the quality of your products. Take a close look at each of those processes and make changes based on industry standards.
What types of tools are you using in your testing? Are you using them effectively, and are they the right tools for the job? Some of the tools you should look at are coverage tools, SCM tools, defect analyzers, and automation tools. Audit your toolset and make changes or redeploy as necessary.
Test automation handles repetitive tasks in your testing process, freeing up resources for other testing work. This can mean regression testing or loading claims into your testing environment. The hardest task is deciding which parts to automate. Analyze your current testing framework and automation approach: do you know what to automate, how to automate, and how to optimize your approach and framework?
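One common heuristic for deciding what to automate is a simple break-even comparison: automation pays off when its build-plus-maintenance cost falls below the cumulative cost of running the test manually. The sketch below illustrates that arithmetic; all hour figures and the run count are invented for illustration.

```python
def worth_automating(manual_hours, build_hours, maint_hours, runs):
    """True if automating is cheaper than running manually `runs` times."""
    manual_cost = manual_hours * runs            # cost of manual execution
    automated_cost = build_hours + maint_hours * runs  # build once, maintain per run
    return automated_cost < manual_cost

# A regression suite executed every release, e.g. 12 runs a year
# (hypothetical effort figures):
print(worth_automating(manual_hours=8, build_hours=40, maint_hours=1, runs=12))  # True
```

A test run twice a year rarely clears this bar, while a suite run every release usually does, which is why regression suites are typically the first automation candidates.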
Software regression testing: selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system still complies with its specified requirements. [IEEE 610]
Software regression testing: testing conducted for the purpose of evaluating whether or not a change to the system has introduced a new failure. Regression testing is often accomplished through the construction, execution, and analysis of product and system tests. [Testing Bible, 2003]
The above definitions from IEEE and the Testing Bible tell you what regression testing is, but the goal of the assessment is to find out how your company actually goes about it. Legacy and defect regression should be a central tenet throughout all testing phases.
How often do you find you have missed testing a type of claim or a piece of enrollment functionality because you thought it had not changed and was not supposed to be affected? Proper regression test planning can solve this problem.
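One way to make "what might be affected" explicit is selective regression testing: record which modules each test touches, then re-run every test whose modules intersect the changed set. This is a minimal sketch under that assumption; the module and test names are hypothetical.

```python
def select_regression_tests(test_modules, changed):
    """Pick tests whose covered modules intersect the changed set."""
    return sorted(
        test for test, modules in test_modules.items()
        if modules & set(changed)
    )

# Hypothetical mapping of tests to the modules they exercise:
test_modules = {
    "test_claims_pricing": {"claims", "pricing"},
    "test_enrollment":     {"enrollment"},
    "test_eligibility":    {"enrollment", "eligibility"},
}

print(select_regression_tests(test_modules, ["enrollment"]))
# ['test_eligibility', 'test_enrollment']
```

Because the selection is driven by recorded dependencies rather than someone's recollection of what changed, the "I thought it wasn't affected" gap disappears as long as the mapping is kept current.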
Ideally, systems should have at least three separate environments for development, testing, and production. The test and production environments should be as similar as possible, with the exception of size. If cost prohibits having three environments, testing and development could take place in the same environment, but development activity would need to be closely managed (stopped) during acceptance testing. In no case should untested code or development be in a production environment.
There is no one universal approach to performance testing; however, I can evaluate your ability to perform testing on your key performance indicators. What are your key performance indicators? What are your key business requirements and processing target times? What are your service level agreements and how confident are you of meeting them?
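A service level agreement only becomes testable once it is phrased as a measurable threshold, e.g. "at least 95% of claims adjudicate within 2 seconds." Here is a minimal sketch of checking measured timings against such a target; the sample timings and the 2-second, 95% figures are assumptions for illustration.

```python
def meets_sla(samples_ms, target_ms, percentile=95):
    """True if at least `percentile`% of samples finish within target."""
    within = sum(1 for s in samples_ms if s <= target_ms)
    return within / len(samples_ms) * 100 >= percentile

# Hypothetical adjudication times in milliseconds from a load test run:
samples = [1200, 1500, 1800, 1900, 1400, 1600, 2100, 1300, 1700, 1550]
print(meets_sla(samples, target_ms=2000))  # False: only 90% within 2s
```

Running this against real load test output answers the "how confident are you of meeting your SLAs" question with a number instead of a feeling.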
A test measurement program is essential for evaluating the quality and effectiveness of the testing process, assessing the productivity of the testing personnel, and monitoring test process improvement. A test measurement program must be carefully planned and managed. The data to be collected must be identified, and decisions must be made about how the data will be used and by whom.
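As one concrete example of such a measurement, defect detection percentage (DDP) compares defects your testing found against the total including those that escaped to production. The counts below are invented for illustration; the source does not prescribe this particular metric.

```python
def defect_detection_percentage(found_in_test, found_in_production):
    """DDP: share of all known defects that testing caught before release."""
    total = found_in_test + found_in_production
    return 100.0 * found_in_test / total if total else 0.0

# Hypothetical release: 90 defects caught in test, 10 escaped to production.
print(defect_detection_percentage(found_in_test=90, found_in_production=10))  # 90.0
```

Tracked release over release, a metric like this shows whether process changes are actually improving the testing group's effectiveness.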
The defect management process should deal with the activities carried out throughout the defect life cycle, from when a defect is logged until the defect is resolved and closed.
Generally, a defect life cycle is followed for defect logging. The terminology for defect statuses during this cycle can differ from one organization to another; it often differs even within a single organization. Do your people speak the same defect terminology?
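One way to make everyone speak the same terminology is to write the life cycle down as an explicit state machine and enforce it. The status names and transitions below are one common convention, not a standard your tooling must follow.

```python
# Allowed defect status transitions (a hypothetical, typical convention):
ALLOWED = {
    "New":      {"Open", "Rejected"},
    "Open":     {"Fixed", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopened"},
    "Reopened": {"Open"},
    "Deferred": {"Open"},
    "Rejected": {"Closed"},
    "Closed":   set(),
}

def transition(current, new):
    """Move a defect to a new status, enforcing the agreed life cycle."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

# Walk one defect from logging through to closure:
status = "New"
for step in ["Open", "Fixed", "Retest", "Closed"]:
    status = transition(status, step)
print(status)  # Closed
```

Whether the table lives in a spreadsheet macro or a commercial tool matters far less than the fact that it is agreed, written down, and enforced, which is the point of the paragraph that follows.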
Defect management can be achieved with a simple Excel sheet in which defects are tracked. Many organizations buy expensive tools and implement their defect management processes through them, but never work on the process itself, swapping tools with every fire that comes up. They end up with an inefficient jumble of a process that never seems to work right.
A technical training program ensures that a skilled staff is available to the testing group. The staff is trained in test planning, testing methods, standards, techniques, and tools. The training program also prepares the staff for the review process and for supporting user participation in testing and review activities.
There's a lot that goes into making your testing program efficient and effective. Keeping your sights on the 14 areas described above will help you keep things in shape. If you ever need help assessing or implementing any of the tools or methods above, give us a call. We'd be pleased to be your trusted partner in providing consulting resources.
In the next article we'll summarize the assessment process, giving you some insight into how it might look if you enlisted a consultant or group of consultants to review your testing process. We'll talk about the goals of an assessment team as well as the steps they should take to help you understand your team's full capabilities and limitations.