Friday 11 February 2022

Comprehensive testing and management of information systems

Software testing is a set of planned and systematic activities closely tied to software development, and, like development, it needs test models to guide practice. Because development and testing are so closely related, a corresponding test model can be derived from each development model.





2. V model:

advantages:

(1) the complex test work is divided into stages that mirror the development stages

(2) the system is tested from multiple angles, so more defects can be found


disadvantages:

(1) testing is easily misread as merely the last stage of software development

(2) problems introduced in the requirements and design stages are not detected until late

(3) the benefits of quality control and test efficiency are not fully realized




3. W model:

advantages:

(1) testing proceeds in parallel with development, which helps find problems as early as possible

(2) adds the idea of testing the system from a non-program angle, i.e. testing requirements and design documents as well as code

(3) test preparation and design start early, improving test quality and efficiency


disadvantages:

(1) treats software development as a series of serial activities (requirements, design, coding)

(2) development and testing keep a strictly linear, sequential relationship

(3) it cannot support iteration, spontaneity, or adjustment to change




4. H model:

advantages:

(1) testing that is independent of development makes it possible to study test methods in greater depth

(2) when several projects are tested at the same time, test techniques can be reused

(3) testers can be scheduled flexibly and efficiently

(4) defect fixing is not restricted to the project team's own members


disadvantages:

(1) an independent test team does not understand the system deeply enough

(2) this affects both the quality and the efficiency of testing




5. X model:

advantages:

(1) emphasizes the importance of unit testing and integration testing

(2) introduces exploratory testing, bringing the test model closer to practice

(3) defect fixing is not restricted to the project team's own members


disadvantages:

(1) only part of the test process is emphasized

(2) requirements testing, acceptance testing, etc. are not addressed

6. pre-test model: 

closely combines testing with development and recognizes the 3 elements of an acceptance test: test-based requirements, acceptance criteria, and an acceptance test plan. the pre-test model detects errors early at low cost and fully emphasizes the role of testing in ensuring a high-quality system.

7. according to the development stage, the software test types are divided into: unit test, integration test, system test, acceptance test.

unit test content: (smallest unit, i.e. program module)

(1) unit function test

(2) unit interface testing

(3) unit local data structure test

(4) important execution path tests in the unit

(5) various types of error handling path tests for the unit

(6) unit boundary condition test
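The unit-test checklist above can be illustrated with a small sketch in Python. The `clamp` function is a made-up example, and the checks map to items (1) unit function, (5) error-handling path, and (6) boundary conditions:

```python
def clamp(value, low, high):
    """Clamp value into the range [low, high]; reject an invalid range."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# (1) unit function test: typical input
assert clamp(5, 0, 10) == 5

# (6) unit boundary condition test: on, below, and above the limits
assert clamp(0, 0, 10) == 0
assert clamp(10, 0, 10) == 10
assert clamp(-1, 0, 10) == 0
assert clamp(11, 0, 10) == 10

# (5) error-handling path test: an invalid range must raise
try:
    clamp(5, 10, 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```

Each assertion exercises one kind of path the list above names; a real project would place these in a test framework rather than inline asserts.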

integration test content: (subsystem assembly test, joint test)

(1) inter-module interface testing

(2) data transfer between modules

(3) global data structure testing

system test content: (complete realistic simulation)

(1) verify the functionality of the system from the user's point of view
(2) non-functional verification, such as stress, security, fault tolerance, etc.

Acceptance test content: (delivery test, release test, confirmation test) including ease of use, compatibility, installation, documentation and other tests

(1) testing and review of the entire system
(2) analyze the test results according to the acceptance criteria
(3) decide whether to accept the system, and evaluate the testing

8. according to the test implementation organization,

the test types are divided into: developer test, user test, and third-party test.

development environment test content: (validation test, α test)

(1) controlled testing performed by internal users (not programmers or testers).
(2) confirm that the software meets the specified requirements
(3) pay attention to the interface and characteristics of the product

user environment test content: (β test); note that user test ≠ acceptance test

(1) verification by the end user at the customer premises
(2) not under the control of the developer
(3) pay attention to the support of the product

Third-party test content: requirements analysis review, design review, code review, unit test, functional test, performance test, concurrent test, robustness test, etc

(1) testing organized by a third party between the developer and the user
(2) ensure the objectivity of the test work
(3) review requirements, design, user documents
(4) unit test, functional test, performance test, etc

9. classification according to test technology:

black box test, white box test, gray box test

black box test content: (functional test)

(1) test the interface
(2) functional testing
(3) from the user's point of view

white box test content: (structural test)

(1) check whether all structures and paths are correct
(2) check whether the software's internal operations are performed according to the specification

gray box test content:

(1) pays attention to the correctness of the output for a given input
(2) also pays attention to internal behavior
(3) sits between black box and white box testing

10. classification according to test execution mode:

static test (code review, static structure analysis, code quality measurement), dynamic test (writing test cases, executing programs, analyzing the output results of programs)

11. classification according to test objects:

functional test, user interface test, process test, API interface test, installation test, document test, source code test, performance test (load, stress, stability, concurrency, large data volume), database test, network test.

12. document testing is divided into:

document testing for documents not delivered to users (requirements documents, test documents) and document testing for documents delivered to users (requirements documents, user manuals, installation manuals). user documentation testing mainly examines readership, terminology, correctness, completeness, consistency, ease of use, charts and interface screenshots, samples and examples, language, and printing and packaging.

13. two formulas:

average number of concurrent users: C = nL/T, where n is the number of login sessions (logged-in users) per day, L is the average session length, and T is the length of the period under examination

peak number of concurrent users (rough estimate): C' ≈ C + 3√C, where C is the average number of concurrent users from the first formula


in general, 10% of the number of users visited per day is used as the average number of concurrent users.
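The two formulas can be checked with a few lines of Python; the figures used below (400 sessions per day, 0.5 h average session, an 8 h window) are made-up example numbers:

```python
import math

def avg_concurrent_users(n, l, t):
    """Average concurrent users: C = n*L / T."""
    return n * l / t

def peak_concurrent_users(c):
    """Rough peak estimate: C' = C + 3 * sqrt(C)."""
    return c + 3 * math.sqrt(c)

# Example figures (assumptions): 400 sessions/day, 0.5 h each, over 8 h
c = avg_concurrent_users(400, 0.5, 8)   # 25.0
c_peak = peak_concurrent_users(c)       # 25 + 3*5 = 40.0
print(c, c_peak)
```

With these inputs the average is 25 concurrent users and the estimated peak is 40.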

14. classification according to quality attributes:

fault tolerance test, compatibility test, security test, reliability test, availability test, maintainability test, portability test, ease-of-use test.

compatibility testing is divided into software compatibility, hardware compatibility, and data compatibility; software compatibility is further divided into operating system compatibility, browser compatibility, resolution compatibility, database compatibility, and compatibility with other software.

15. software maintenance is divided into three categories:

(1) corrective maintenance. correct errors in the software
(2) adaptive maintenance. modify the software to adapt to changes in its environment
(3) perfective maintenance. improve performance or extend functionality

15. classification according to the test region: localization test, internationalization test

16. black box testing tries to find the following types of errors:

(1) whether there are any functions that are incorrect or missing
(2) whether the input can be accepted correctly on the interface and whether the correct output can be produced
(3) whether there is an error in accessing external information
(4) whether the performance meets the requirements
(5) whether there are user interface errors or layout and appearance problems
(6) initialization or termination error

17. the advantages of black box testing:

(1) relatively simple, do not need to understand the code and implementation inside the program
(2) it has nothing to do with the internal implementation of the software
(3) from the user's point of view, it is easy to know which functions the user will use and the problems encountered
(4) based on the software development documentation, it is possible to know which functions in the documentation the software implements
(5) it is more convenient to do software automation testing


the disadvantages of black box testing:

(1) it cannot cover all of the code; coverage typically reaches only about 30%
(2) the reusability of automated tests is low

18. use case design method of black box test:

(1) test area determination methods: equivalence class partitioning, boundary value analysis
(2) combination coverage methods: full combination coverage, pairwise combination coverage, orthogonal experimental design, data coverage
(3) logical inference methods: cause-effect diagram method, decision table method, outline method
(4) business path coverage methods: scenario analysis method, functional diagram method
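As a minimal illustration of boundary value analysis from the list above (the `boundary_values` helper and the age range [18, 60] are assumptions made for this example):

```python
def boundary_values(low, high):
    """Boundary value analysis for an integer range [low, high]:
    pick values just outside, on, and just inside each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Equivalence classes for an age field valid in [18, 60]:
# one invalid class below, one valid class, one invalid class above.
ages = boundary_values(18, 60)
print(ages)  # [17, 18, 19, 59, 60, 61]
```

The six values cover both boundaries of the valid equivalence class plus one representative of each invalid class, which is where boundary-related defects cluster.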

19. several principles that must be followed in the white box test method:

(1) ensure that all independent paths in a module are executed at least once
(2) all logical values need to be tested for both true and false cases
(3) check the internal data structure of the program to ensure its validity
(4) execute all loops at their upper and lower boundaries and within their operational ranges
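A minimal sketch of principles (2) and (4) in Python; `count_positives` is a made-up function, and the asserts exercise both outcomes of its decision as well as the loop's boundary cases:

```python
def count_positives(xs):
    """Count strictly positive numbers in a list."""
    n = 0
    for x in xs:          # loop: exercised zero, one, and many times
        if x > 0:         # decision: exercised for both true and false
            n += 1
    return n

# (2) the decision takes both the true and the false branch
assert count_positives([3, -1]) == 1
# (4) loop boundaries: empty list, a single pass, many passes
assert count_positives([]) == 0
assert count_positives([7]) == 1
assert count_positives(list(range(-5, 6))) == 5
```

White-box case design chooses inputs from the code's structure (branches and loops), whereas the black-box methods above choose them from the specification.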

20. classification of white box test methods: static white box test, dynamic white box test

21. the content of test management includes:

(1) the test objectives are clear, and the test plan and process monitoring guidelines are formulated
(2) test team construction and tester management
(3) monitoring of the test implementation process, including the tracking of the execution of the test plan and the work arrangement of the tester
(4) assessment of test risks and strategies for coping with them
(5) communication and coordination with parties outside the test team, and confirmation of test problems
(6) unified management of test assets and test products
(7) the formulation of test specifications
(8) the formulation and evaluation of test performance appraisal

22. the purpose of test monitoring management is to provide feedback information and visibility for test activities. test monitoring includes:

(1) progress of test case execution: = number of test cases executed / total number of test cases (indicates the success rate)
(2) defect survival time: = how long a defect stays open before it is closed (indicates efficiency)
(3) defect trend analysis: the number of defects is counted in chronological order (indicates the trend)
(4) defect distribution density: total number of defects for a requirement / total number of test cases for that requirement
(5) defect fix quality: the number of defects found after each fix (including reopened defects and new defects introduced by the fix)
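A few of these monitoring metrics can be sketched as simple calculations in Python; the function names and the sample figures are illustrative assumptions, not part of the notes:

```python
from datetime import date

def execution_progress(executed, total):
    """(1) share of test cases executed so far."""
    return executed / total

def avg_defect_lifetime(defects):
    """(2) average days a defect stays open; takes (opened, closed) date pairs."""
    days = [(closed - opened).days for opened, closed in defects]
    return sum(days) / len(days)

def defect_density(defects_found, test_cases):
    """(4) defects found per test case for one requirement."""
    return defects_found / test_cases

progress = execution_progress(150, 200)  # 0.75
lifetime = avg_defect_lifetime([(date(2022, 2, 1), date(2022, 2, 4)),
                                (date(2022, 2, 2), date(2022, 2, 3))])  # 2.0
density = defect_density(12, 48)         # 0.25
print(progress, lifetime, density)
```

Tracked over time, these numbers give the feedback and visibility that test monitoring is meant to provide.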

Note: configuration management during testing not only includes setting up a test environment that meets the requirements, but also obtaining the correct test and release versions.

23. the risks of the testing process mainly include:

(1) demand risk: the requirements are not accurately understood, and the requirements change, resulting in errors in the test method
(2) test case risk: the use case design is incomplete, ignoring boundaries, exceptions, etc
(3) defect risk: some defects occur occasionally and are difficult to reproduce
(4) code quality risk: the code quality is poor and defects are numerous, so defects are easily missed during testing
(5) test environment risk: the test environment and the production environment are not completely consistent, resulting in test errors
(6) testing technical risks: the project has technical difficulties, and the testing ability and level are low
(7) regression test risk: since the regression test does not run all test cases, there may be incomplete tests
(8) communication and coordination risk: there are misunderstandings and poor communication between testers and developers
(9) other unpredictable risks: force majeure and similar factors

24. the content of the tester's performance appraisal includes:

(1) assessment of testing work content: participation in software development process activities such as requirements review; preparation of test documents; execution of test work; defects remaining in the test results; assessment of the tester's communication skills.
(2) efficiency indicators in test design: document yield; use case yield.
(3) work quality indicators in test design: requirements coverage; document quality; document efficiency; use case efficiency; review issues.
(4) work efficiency indicators in test execution: execution efficiency; schedule deviation; defect discovery rate.
(5) work quality indicators in test execution: number of defects; number and rate of valid defects; rate of serious defects; rate of module defects; rate of missed defects; time of defect discovery and defect convergence rate; defect localization and readability.


No test model is perfect. Besides choosing the right test model, the test process and test work indicators must also be set up properly in order to improve the effectiveness of testing and, in turn, the quality of the software.
