Wednesday, January 30, 2008

Introduction to Software Testing

Testing is a process used to help identify the correctness, completeness, and quality of developed computer software. That said, testing can never completely establish the correctness of computer software.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing also connotes the dynamic analysis of the product: putting the product through its paces.

The quality of an application can, and normally does, vary widely from system to system, but some common quality attributes include reliability, stability, portability, maintainability, and usability. Refer to the ISO 9126 standard for a more complete list of attributes and criteria.

Testing helps in verifying and validating that the software is working as it is intended to work. This involves using both static and dynamic methodologies to test the application.

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.
Software Testing Fundamentals

Testing objectives include:

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that defects are present.

Tuesday, January 29, 2008

Black Box Testing

Also known as functional testing, this is a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design, the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any further knowledge of the program other than its specifications.
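
As a minimal Python sketch of the idea, suppose a hypothetical leap_year() function whose specification reads "divisible by 4, except century years not divisible by 400"; every test case below is derived from that specification alone, never from the code.

# Black-box test: (input, expected output) pairs taken from the specification.
def leap_year(year):
    # The implementation is irrelevant to the black-box tester; it is shown
    # only so the example runs.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

cases = [(2000, True), (1900, False), (1996, True), (1999, False)]
for year, expected in cases:
    assert leap_year(year) == expected, f"leap_year({year}) should be {expected}"
print("all black-box cases passed")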


The advantages of this type of testing include:
The test is unbiased because the designer and the tester are independent of each other.
The tester does not need knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
Test cases can be designed as soon as the specifications are complete.


The disadvantages of this type of testing include:
The test can be redundant if the software designer has already run a test case.
The test cases are difficult to design.
Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.


For a complete software examination, both white box and black box tests are required.

Monday, January 28, 2008

White Box Testing

Also known as glass box, structural, clear box and open box testing. A software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs. The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
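
As a small Python sketch (the classify() function and its rules are invented for illustration), white-box testing lets the tester read the code, identify every branch, and write one test per branch:

# White-box (branch coverage) tests: one test case per visible branch.
import unittest

def classify(amount):
    if amount < 0:           # branch 1: invalid input
        raise ValueError("amount must be non-negative")
    if amount > 1000:        # branch 2: large order
        return "review"
    return "auto-approve"    # branch 3: default path

class ClassifyBranchTests(unittest.TestCase):
    def test_negative_branch(self):
        with self.assertRaises(ValueError):
            classify(-1)

    def test_large_branch(self):
        self.assertEqual(classify(1001), "review")

    def test_default_branch(self):
        self.assertEqual(classify(10), "auto-approve")

if __name__ == "__main__":
    unittest.main()

A pure black-box tester, by contrast, could not guarantee that all three branches are exercised.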

Tuesday, January 15, 2008

Beta testing

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Verification and validation

Verification:
Have we built the software right (i.e., does it match the specification)? Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walkthroughs.

Validation:
Have we built the right software (i.e., is this what the customer wants)?

Integration testing

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.

Unit testing

Unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, or procedure, while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, an abstract class, or a derived/child class.
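
For example, a minimal sketch with Python's standard unittest module, treating a single (hypothetical) add_tax() function as the unit under test:

# The unit under test is one function; each test checks one behaviour.
import unittest

def add_tax(price, rate=0.18):
    """Return the price plus tax, rounded to two decimals."""
    return round(price * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    def test_default_rate(self):
        self.assertEqual(add_tax(100), 118.0)

    def test_zero_price(self):
        self.assertEqual(add_tax(0), 0.0)

if __name__ == "__main__":
    unittest.main()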

The spiral model

The spiral model, also known as the spiral lifecycle model, is a systems development lifecycle (SDLC) model used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favored for large, expensive, and complicated projects.

V-model

The V-model is a model in which testing is done in parallel with development. Each phase on the left (development) side of the V provides the input for a corresponding testing activity on the right side.

waterfall model

The waterfall model is a sequential software development model in which development is seen as flowing steadily downwards (like a waterfall) through several phases.

Test case

A test case is a set of conditions or variables under which a tester will determine whether a requirement or use case for an application is partially or fully satisfied.

Re-Testing

Re-testing verifies whether a particular test case or set of test cases has been carried out properly. Re-testing can be done at any time during the SDLC. Regression testing, in contrast, is done for all test cases to check for new bugs or the recurrence of old bugs.

Regression testing

Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. Regression testing is a normal part of the program development process and, in larger companies, is done by code testing specialists. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written. These test cases form what becomes the test bucket. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. They might not work because changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.
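
As a rough Python sketch, a regression suite keeps the test that originally exposed a defect and re-runs it against every new version; the function, the bug number, and the tests below are invented for illustration.

# format_name() version 2; version 1 failed on empty last names (bug #142),
# so a regression test now pins that behaviour alongside the original test.
def format_name(first, last):
    last = last or ""
    return f"{last.upper()}, {first}".strip(", ")

def test_normal_case():              # original functional test
    assert format_name("Ada", "Lovelace") == "LOVELACE, Ada"

def test_bug_142_empty_last_name():  # regression test for the old defect
    assert format_name("Ada", "") == "Ada"

test_normal_case()
test_bug_142_empty_last_name()
print("regression suite passed")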

Stress testing

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries.

Sanity testing

Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. Sanity testing is a subset of regression testing. It normally includes a set of core tests, such as basic GUI functionality, to demonstrate connectivity to the database, application servers, printers, etc.

Smoke testing

Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing, in which the device passed the test if it didn't catch fire the first time it was turned on. A daily build and smoke test is among industry best practices advocated by the IEEE (Institute of Electrical and Electronics Engineers).

The original version of smoke testing predates both hardware and software testing and is still used to test the integrity of a variety of systems by placing a smoke bomb inside some kind of a chamber to see if there are any leaks for the smoke to escape through.
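
In practice, a daily smoke test might be a short script that exercises only the most crucial functions. The Python sketch below assumes a hypothetical web application with a /health endpoint on localhost; the URL and port are placeholders, not part of any real product.

# Minimal smoke test: run against every new build before deeper testing.
import sys
import urllib.request

def smoke_test(base_url="http://localhost:8080"):
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
            assert resp.status == 200, "health endpoint did not return 200"
    except Exception as exc:
        print(f"SMOKE TEST FAILED: {exc}")
        sys.exit(1)
    print("smoke test passed - the build is worth deeper testing")

if __name__ == "__main__":
    smoke_test()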

Software Testing Life Cycle

Software Testing Life Cycle:

The test development life cycle contains the following components:

Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting

Use Case:

A typical interaction scenario from a user's perspective, used in system requirements studies or testing; in other words, "an actual or realistic example scenario". A use case describes the use of a system from start to finish. Use cases focus attention on aspects of a system useful to people outside of the system itself.
Users of a program are called users or clients.
Users of an enterprise are called customers, suppliers, etc.

A collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities, showing how the primary actor's goal might be delivered or might fail.

Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal, each step in a scenario is a sub (or mini) goal of the use case. As such each sub goal represents either another use case (subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.

This hierarchical relationship is needed to properly model the requirements of a system being developed. A complete use case analysis requires several levels. In addition to the level at which the use case is operating, it is important to understand the scope it is addressing. The level and scope are important to ensure that the language and granularity of scenario steps remain consistent within the use case.

There are two scopes that use cases are written from: Strategic and System. There are also three levels: Summary, User and Sub-function.
Scopes: Strategic and System

Strategic Scope:


The goal (Use Case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower-level (subordinate) use cases.

System Scope:


Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of the use cases are at system scope. These use cases are often steps in strategic-level use cases.
Levels: Summary Goal, User Goal and Sub-function.

Sub-function Level Use Case:


A sub goal or step is below the main level of interest to the user. Examples are "logging in" and "locate a device in a DB". Always at System Scope.
User Level Use Case:


This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question "Does your job performance depend on how many of these you do in a day?" For example, "Create Site View" or "Create New Device" would be user level goals, but "Log In to System" would not. Always at System Scope.
Summary Level Use Case:

Written for either strategic or system scope. They represent collections of User Level Goals. For example, the summary goal "Configure Data Base" might include as a step the user level goal "Add Device to database". Either at System or Strategic Scope.
Test Documentation

Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:
What to test? Test Plan
How to test? Test Specification
What are the results? Test Results Analysis Report

Monday, January 14, 2008

How to Write a Fully Effective Bug Report

To write a fully effective report you must:
- Explain how to reproduce the problem.
- Analyze the error so you can describe it in a minimum number of steps.
- Write a report that is complete and easy to understand.

Write bug reports immediately; the longer you wait between finding the problem and reporting it, the more likely it is the description will be incomplete, the problem not reproducible, or simply forgotten.

Writing a one-line report summary (Bug's report title) is an art. You must master it. Summaries help everyone quickly review outstanding problems and find individual reports. The summary line is the most frequently and carefully read part of the report. When a summary makes a problem sound less severe than it is, managers are more likely to defer it. Alternatively, if your summaries make problems sound more severe than they are, you will gain a reputation for alarmism. Don't use the same summary for two different reports, even if they are similar. The summary line should describe only the problem, not the replication steps. Don't run the summary into the description (Steps to reproduce) as they will usually be printed independently of each other in reports.

Ideally you should be able to write this clearly enough for a developer to reproduce and fix the problem, and another QA engineer to verify the fix without them having to go back to you, the author, for more information. It is much better to over communicate in this field than say too little. Of course it is ideal if the problem is reproducible and you can write down those steps. But if you can't reproduce a bug, and try and try and still can't reproduce it, admit it and write the report anyway. A good programmer can often track down an irreproducible problem from a careful description. For a good discussion on analyzing problems and making them reproducible, see Chapter 5 of Testing Computer Software by Cem Kaner.

The most controversial item in a bug report is often the bug's impact: Low, Medium, High, or Urgent. The report should show the priority which you, the bug submitter, believe to be appropriate.



Sunday, January 13, 2008

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

•Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

•Bug identifier (number, ID, etc.)

•Current bug status (e.g., 'Released for Retest', 'New', etc.)

•The application name or identifier and version

•The function, module, feature, object, screen, etc. where the bug occurred

•Environment specifics, system, platform, relevant hardware specifics

•Test case name/number/identifier

•One-line bug description

•Full bug description

•Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

•Names and/or descriptions of file/data/messages/etc. used in test

•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

•Was the bug reproducible?

•Tester name

•Test date

•Bug reporting date

•Name of developer/group/organization the problem is assigned to

•Description of problem cause

•Description of fix

•Code section/file/module/class/method that was fixed

•Date of fix

•Application version that contains the fix

•Tester responsible for retest

•Retest date

•Retest results

•Tester responsible for regression tests

•Regression testing results

Saturday, January 12, 2008

What are the different types of Bugs we normally see in any of the Project?

1. What are the different types of Bugs we normally see in any of the Project? Include the severity as well.

The Life Cycle of a bug in general context is:

Bugs are usually logged by the development team (during unit testing) and also by testers (during system or other types of testing).

So let me explain in terms of a tester's perspective:

A tester finds a new defect/bug and logs it using a defect tracking tool.

1. Its status is 'NEW', and it is assigned to the respective development team (team lead or manager).
2. The team lead assigns it to a team member, so the status becomes 'ASSIGNED TO'.
3. The developer works on the bug, fixes it, and re-assigns it to the tester for testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is fixed, he changes the status to 'VERIFIED'.
5. If the tester has the authority (depends on the company), he can, after verifying, change the status to 'FIXED'. If not, the test lead can verify it and change the status to 'FIXED'.
6. If the defect is not fixed, the tester re-assigns the defect back to the development team for re-fixing.

This is the life cycle of a bug.

1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Improper Service Levels (Control flow defects) - High
6. Interpreting Data Defects - High
7. Race Conditions (Compatibility and Intersystem defects) - High
8. Load Conditions (Memory Leakages under load) - High
9. Hardware Failures - High

Friday, January 11, 2008

Top Ten Tips for Bug Tracking !!!!

1. A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is extremely helpful for the programmer who has to find the bug.

2. Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.

Thursday, January 10, 2008

How to Write a Helpful Bug Report

Imagine a customer telling you that your product is not working. You would probably have many questions:

How serious is the situation?
Is there a defect in your product, or is the defect in another part of the customer system?
Is the issue already known?
Is a solution already available?
Now imagine thousands of such incidents from around the world. How would you analyze these incidents as quickly, efficiently, and accurately as possible? After all, each report is a chance to make the product better for everyone.

This scenario is the challenge that the WebBugs team faces every day. Ideally, the team would be able to investigate every bug report in person. But this approach is not realistic, so we have designed a bug-reporting site to help us gather the information we need.

Our goal is to assist developers in creating a report that is complete and compact. A complete report allows our team's developers to reproduce the problem locally. A compact report helps us focus the investigation so that we can quickly determine the cause of the behavior.

Let's tour the bug-reporting site to see how we pursue this goal. Note that this document applies only to bugs; it does not discuss the process for reporting documentation issues and requests for enhancements, which are other options on the bug-reporting site.

The Test Case in a Bug Report

Operating system and hardware: The platform (operating system and hardware) indicates which system will reproduce the problem. If the behavior occurs on more than one platform, pick one to remove any ambiguity.
Source code for an executable test case: The source code should be a stand-alone application that can compile without any external libraries. Again, shorter is better: anything longer than 20 lines makes it more difficult for engineers to find the actual cause and increases the probability that the problem is actually in the bug submitter's code. Although extracting a small test case from a gigantic deployed application is a challenge, submit as short a test case as possible. This exercise may reveal that the problem is in one's own code, which means that a solution is immediately available. (A minimal sketch follows this list.)
Steps to reproduce: Include the command-line invocation, even if it seems obvious.
Expected result: The expected behavior tells us how the test case should behave if no bug were present. If the test case produces this expected result, we know that a fix works.
Actual result: The actual behavior tells us the current results of the test case. If Sun engineering does not see this exact behavior, then the system configuration may be causing the problem.
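
Following those guidelines, a complete yet compact test case attached to a bug report might look like the Python sketch below; the split_amount() function and the defect itself are invented purely to illustrate the shape of such a report.

# report_case.py - stand-alone test case for a hypothetical bug report
def split_amount(total, people):
    return total / people      # function under test (hypothetical)

if __name__ == "__main__":
    # Steps to reproduce: python report_case.py
    # Expected result: a clear error message when people == 0
    # Actual result: an unhandled ZeroDivisionError traceback
    print(split_amount(100, 0))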

Bug Life Cycle

In the software development process, a bug has a life cycle. The bug should go through this life cycle in order to be closed. A specific life cycle ensures that the process is standardized. A bug attains different states during its life cycle, which are described below.





The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and he changes the state as “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. This specifies that the bug has been fixed and is released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in upcoming releases. There are many reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same problem, then one bug's status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects.

Bug Tracking Software Features

Do you know which bug tracking software is best for your development environment? Besides the obvious issues of compatibility, do you know what other features are important? Don’t let the sales pitches confuse you. Before you purchase bug tracking software, know what your software developers and software testers want and need. After all, they’re the ones who will be using this tool most.

Reliable bug tracking software is itself bug-free and stable. Select a product that has a proven track record of success. If possible, talk with actual users to see whether or not the software is meeting their needs.

You want your bug tracking software to be customizable so that it works the way your development team works. But you don’t want this task to be so difficult that you have to first take a course to learn how to accomplish this. A good interface is simple and straightforward, and at the same time, intuitive.

Select bug tracking software that fits your budget and works on the hardware configuration you already have in place. If it requires additional, costly hardware to operate, keep looking.

As for features, the ability to prioritize a reported bug helps managers allocate resources more efficiently. It’s important to capture additional information about each reported bug, and effective bug tracking software facilitates adding notes. For example, it’s important to record the steps that led to the discovery of the bug, the actions that should have happened, the actions that actually did happen, and the version that was in use at the time of the discrepancy. As the bug works its way from team member to team member, good software tracks that history and allows additional notes to be appended at any time.

Reliable bug tracking software doubles as a management tool. It produces useful reports including work that has already been assigned to each team member. Such a report helps management reprioritize and reassign work when necessary. Also important is a built-in method for notifying specific team members that new work has been assigned to them. As each team member completes work on the bug, they need the ability to update the status of the bug. The bug itself needs to be reassigned to another individual or marked as closed.

Be sure that the bug tracking software you select allows customization of the steps necessary to close an open bug report. Also make sure the software allows you to designate the person(s) who has the authority to close a bug report.

Why does software have bugs?

* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

* Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

* Programming errors - programmers, like anyone else, can make mistakes.

* Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

* time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

* egos - people prefer to say things like:

* 'no problem'

* 'piece of cake'

* 'I can whip that out in a few hours'

* 'it should be easy to update that old code'

* instead of:

* 'that adds a lot of complexity and we could end up making a lot of mistakes'

* 'we have no idea if we can do that; we'll wing it'

* 'I can't estimate how long it will take, until I take a close look at it'

* 'we can't figure out what that old spaghetti code did in the first place'

* If there are too many unrealistic 'no problem's', the result is bugs.

* poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

* software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.



Debugging approach !!!!

In general, three categories for debugging approaches may be proposed.
• Brute force
• Back tracking
• Cause elimination

The brute force category of debugging is probably the most common, and least efficient, method for isolating the cause of a software error. Brute force debugging methods are applied when all other methods of debugging fail. Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE statements. When this is done, one hopes to find a clue in the information produced that leads to the cause of the error.
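
A toy Python illustration of the "load the program with WRITE statements" idea, using print() tracing; the function and its off-by-one bug are invented for the example.

# Brute force debugging: add trace output until the fault reveals itself.
def average(values):
    total = 0
    for i in range(1, len(values)):        # bug: the first element is skipped
        total += values[i]
        print(f"TRACE i={i} values[i]={values[i]} total={total}")
    print(f"TRACE final: total={total}, count={len(values)}")
    return total / len(values)

print(average([10, 20, 30]))   # the trace shows that index 0 was never added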

Backtracking is a common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. This process becomes impractical as the number of source lines grows.

Cause Elimination is manifested by induction or deduction and introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes.

Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.

If initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug.

Defect Tracking by Test Lead !!!

The test lead categorizes the defects, after meetings with the clients, as follows:

Modify Cases: Test cases to be modified. This may arise when the tester's understanding is incorrect.

Discussion Items: Arises when there is a difference of opinion between the test team and the development team. This is marked to the domain consultant for the final verdict.

Change Technology: Arises when the development team has to fix the bug.

Data Related: Arises when the defect is due to data and not coding.

User Training: Arises when the defect is not severe or technically not feasible to fix, it is decided to train the user on the defect. This should ideally not be critical.

New Requirement: Inclusion of functionality after discussion

User Maintenance: Arises when master data or parameters maintained by the user cause the defect.

Observation: Any other observation, which is not classified in the above categories like a user perspective GUI defect.

Reporting is done for defect evaluation and also to ensure that the development team is aware of the defects found and is in the process of resolving the defects. A detailed report of the defects is generated everyday and given to the development team for their feedback on defect resolution. A summary report is generated for every report to evaluate the rate at which new defects are found and the rate at which the defects are tracked to closure.

Defect counts are reported as a function of time, creating a Defect Trend diagram or report, and as a function of one or more defect parameters, such as category or status, creating a Defect Density report. These two types of analysis provide perspectives on the trend and the distribution of defects, respectively, which together reveal the system’s reliability.
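
As a rough sketch of those two reports in Python, using made-up defect records:

# Defect counts as a function of time (trend) and of category (density).
from collections import Counter

defects = [
    {"id": 1, "week": "2008-W01", "category": "GUI",         "status": "Closed"},
    {"id": 2, "week": "2008-W01", "category": "Calculation", "status": "Open"},
    {"id": 3, "week": "2008-W02", "category": "GUI",         "status": "Open"},
    {"id": 4, "week": "2008-W03", "category": "Calculation", "status": "Closed"},
]

trend = Counter(d["week"] for d in defects)        # input for a Defect Trend report
density = Counter(d["category"] for d in defects)  # input for a Defect Density report

print("Defect trend:", dict(trend))
print("Defect density:", dict(density))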

It is expected that defect discovery rates will eventually diminish as the testing and fixing progresses. A threshold can be established below which the system can be deployed. Defect counts can also be reported based on the origin in the implementation model, allowing detection of “weak modules”, “hot spots”, parts of the system that are fixed again and again, indicating some fundamental design flaw.

Defects included in an analysis of this kind are confirmed defects. Not every reported defect represents an actual flaw, as some may be enhancement requests, out of the scope of the system, or descriptions of an already reported defect. However, there is value in looking at and analyzing why many of the defects being reported are either duplicates or not confirmed defects.

Types of Defects

Defects detected by the tester are classified into categories according to the nature of the defect. The following is the classification:

Showstopper (X): The impact of the defect is severe and the system cannot go into the production environment without resolving the defect since an interim solution may not be available.

Critical (C): The impact of the defect is severe, however an interim solution is available. The defect should not hinder the test process in any way.

Non critical (N): All defects that are not in the X or C category are deemed to be in the N category. These are also the defects that could potentially be resolved via documentation and user training. These can be Graphical User Interface (GUI) defects or some minor field-level observations.

What is the difference between exception and validation testing?

Validation testing aims to demonstrate that the software functions in a manner that can reasonably be expected by the customer, i.e., testing the software for conformance to the Software Requirements Specification.

Exception testing deals with handling exceptions (unexpected events) while the AUT (application under test) is running. Basically, this testing checks how the control flow of the AUT changes when an exception arises.

How can it be known when to stop Testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends

Bug Reports - Other Considerations

* If your bug is only randomly reproducible, just mention that in your bug report. But don't forget to file it. You can always add the exact steps to reproduce later, whenever you (or anyone else) discover them. This will also come to your rescue when someone else reports the issue, especially if it's a serious one.
* Mention the error messages in the bug report, especially if they are numbered. For example, error messages from the database.
* Mention the version numbers and build numbers in the bug reports.
* Mention the platforms on which the issue is reproducible. Precisely mention the platforms on which the issue is not reproducible. Also understand that there is a difference between the issue being not reproducible on a particular platform and its not having been tested on that platform; mixing the two up can lead to confusion.
* If you come across several problems having the same cause, write a single bug report. The fix of the problem will be only one. Similarly, if you come across similar problems at different locations requiring the same kind of fix but at different places, write separate bug reports for each of the problems. One bug report for only one fix.
* If the test environment on which the bug is reproducible is accessible to the developers, mention the details of accessing this setup. This will help them save time in setting up the environment to reproduce your bug.
* Under no circumstances should you hold on to any information regarding the bug. Unnecessary iterations of the bug report between the developer and the tester before it is fixed are just a waste of time caused by ineffective bug reporting.

Debugger Commands

The debugger recognizes the following commands. Most commands can be abbreviated to one or two letters; e.g. "h(elp)" means that either "h" or "help" can be used to enter the help command (but not "he" or "hel", nor "H" or "Help" or "HELP"). Arguments to commands must be separated by whitespace (spaces or tabs). Optional arguments are enclosed in square brackets ("[]") in the command syntax; the square brackets must not be typed. Alternatives in the command syntax are separated by a vertical bar ("|").

Entering a blank line repeats the last command entered. Exception: if the last command was a "list" command, the next 11 lines are listed.

Commands that the debugger doesn't recognize are assumed to be Python statements and are executed in the context of the program being debugged. Python statements can also be prefixed with an exclamation point ("!"). This is a powerful way to inspect the program being debugged; it is even possible to change a variable or call a function. When an exception occurs in such a statement, the exception name is printed but the debugger's state is not changed.

Multiple commands may be entered on a single line, separated by ";;". (A single ";" is not used as it is the separator for multiple commands in a line that is passed to the Python parser.) No intelligence is applied to separating the commands; the input is split at the first ";;" pair, even if it is in the middle of a quoted string.

The debugger supports aliases. Aliases can have parameters which allows one a certain level of adaptability to the context under examination.

If a file .pdbrc exists in the user's home directory or in the current directory, it is read in and executed as if it had been typed at the debugger prompt. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file.
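
For orientation, a minimal debugging session might look like the sketch below. The script is invented for illustration; the commands typed at the (Pdb) prompt are the ones documented in the list that follows.

# buggy.py - a tiny script to walk through with the debugger
def total(prices):
    result = 0
    for p in prices:
        result = result + p
    return result

if __name__ == "__main__":
    import pdb
    pdb.set_trace()            # drop into the debugger at this point
    print(total([1, 2, 3]))

# A typical interactive session might then be:
#   (Pdb) b total              <- set a breakpoint at the function
#   (Pdb) c                    <- continue until the breakpoint is hit
#   (Pdb) s                    <- step one line at a time
#   (Pdb) p result             <- print the current value of a variable
#   (Pdb) q                    <- quit the debugger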


h(elp) [command]
Without argument, print the list of available commands. With a command as argument, print help about that command. "help pdb" displays the full documentation file; if the environment variable PAGER is defined, the file is piped through that command instead. Since the command argument must be an identifier, "help exec" must be entered to get help on the "!" command.


w(here)
Print a stack trace, with the most recent frame at the bottom. An arrow indicates the current frame, which determines the context of most commands.


d(own)
Move the current frame one level down in the stack trace (to a newer frame).


u(p)
Move the current frame one level up in the stack trace (to an older frame).


b(reak) [[filename:]lineno|function[, condition]]
With a lineno argument, set a break there in the current file. With a function argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn't been loaded yet). The file is searched on sys.path. Note that each breakpoint is assigned a number to which all the other breakpoint commands refer.

If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored.

Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any.


tbreak [[filename:]lineno|function[, condition]]
Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as break.


cl(ear) [bpnumber [bpnumber ...]]
With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation).


disable [bpnumber [bpnumber ...]]
Disables the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled.


enable [bpnumber [bpnumber ...]]
Enables the breakpoints specified.


ignore bpnumber [count]
Sets the ignore count for the given breakpoint number. If count is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. When non-zero, the count is decremented each time the breakpoint is reached and the breakpoint is not disabled and any associated condition evaluates to true.


condition bpnumber [condition]
Condition is an expression which must evaluate to true before the breakpoint is honored. If condition is absent, any existing condition is removed; i.e., the breakpoint is made unconditional.


commands [bpnumber]
Specify a list of commands for breakpoint number bpnumber. The commands themselves appear on the following lines. Type a line containing just 'end' to terminate the commands. An example:


(Pdb) commands 1
(com) print some_variable
(com) end
(Pdb)

To remove all commands from a breakpoint, type commands and follow it immediately with end; that is, give no commands.

With no bpnumber argument, commands refers to the last breakpoint set.

You can use breakpoint commands to start your program up again. Simply use the continue command, or step, or any other command that resumes execution.

Specifying any command resuming execution (currently continue, step, next, return, jump, quit and their abbreviations) terminates the command list (as if that command was immediately followed by end). This is because any time you resume execution (even with a simple next or step), you may encounter another breakpoint, which could have its own command list, leading to ambiguities about which list to execute.

If you use the 'silent' command in the command list, the usual message about stopping at a breakpoint is not printed. This may be desirable for breakpoints that are to print a specific message and then continue. If none of the other commands print anything, you see no sign that the breakpoint was reached.

New in version 2.5.


s(tep)
Execute the current line, stop at the first possible occasion (either in a function that is called or on the next line in the current function).


n(ext)
Continue execution until the next line in the current function is reached or it returns. (The difference between "next" and "step" is that "step" stops inside a called function, while "next" executes called functions at (nearly) full speed, only stopping at the next line in the current function.)


r(eturn)
Continue execution until the current function returns.


c(ont(inue))
Continue execution, only stop when a breakpoint is encountered.


j(ump) lineno
Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don't want to run.

It should be noted that not all jumps are allowed -- for instance it is not possible to jump into the middle of a for loop or out of a finally clause.


l(ist) [first[, last]]
List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count.


a(rgs)
Print the argument list of the current function.


p expression
Evaluate the expression in the current context and print its value. Note: "print" can also be used, but is not a debugger command -- this executes the Python print statement.


pp expression
Like the "p" command, except the value of the expression is pretty-printed using the pprint module.


alias [name [command]]
Creates an alias called name that executes command. The command must not be enclosed in quotes. Replaceable parameters can be indicated by "%1", "%2", and so on, while "%*" is replaced by all the parameters. If no command is given, the current alias for name is shown. If no arguments are given, all aliases are listed.

Aliases may be nested and can contain anything that can be legally typed at the pdb prompt. Note that internal pdb commands can be overridden by aliases. Such a command is then hidden until the alias is removed. Aliasing is recursively applied to the first word of the command line; all other words in the line are left alone.

As an example, here are two useful aliases (especially when placed in the .pdbrc file):


#Print instance variables (usage "pi classInst")
alias pi for k in %1.__dict__.keys(): print "%1.",k,"=",%1.__dict__[k]

#Print instance variables in self
alias ps pi self
unalias name
Deletes the specified alias.


[!]statement
Execute the (one-line) statement in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command. To set a global variable, you can prefix the assignment command with a "global" command on the same line, e.g.:


(Pdb) global list_options; list_options = ['-l']
(Pdb)
q(uit)
Quit from the debugger. The program being executed is aborted.

Tuesday, January 1, 2008

Software Testing Glossary

These are a few of the terms you will encounter frequently in software testing. This glossary will continue to expand, so if you see a term missing, or would like to have a term defined, e-mail me.

Where applicable, the source of the definition is shown. One of the problems in testing is that the common body of knowledge is not standard, but is drawn from a variety of sources. For that reason, you may see a term defined differently here than you may use it. I will be happy to accommodate explanations of alternate usage where appropriate.

A

algorithm

(IEEE) (1) A finite set of well-defined rules for the solution of a problem in a finite number of steps. (2) Any sequence of operations for performing a specific task.

algorithm analysis


(IEEE) A software V&V task to ensure that the algorithms selected are correct, appropriate, and stable, and meet all accuracy, timing, and sizing requirements.

analysis

(1) To separate into elemental parts or basic principles so as to determine the nature of the whole (2) A course of reasoning showing that a certain result is a consequence of assumed premises. (3) (ANSI) The methodical investigation of a problem, and the separation of the problem into smaller related units for further detailed study.

anomaly

(IEEE) Anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents. See: bug, defect, error, exception, fault.

application software

(IEEE) Software designed to fill specific needs of a user; for example, software for navigation, payroll, or process control. Contrast with support software; system software.

architecture

(IEEE) The organizational structure of a system or component.

assertion

(NIST) A logical expression specifying a program state that must exist or a set of conditions that program variables must satisfy at a particular point during program execution.

assertion checking

(NIST) Checking of user-embedded statements that assert relationships between elements of a program. An assertion is a logical expression that specifies a condition or relation among program variables. Tools that test the validity of assertions as the program is executing or tools that perform formal verification of assertions have this feature.

audit

(1) (IEEE) An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. (2) (ANSI) To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of data security and data integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend any necessary changes.

audit trail

(1) (ISO) Data in the form of a logical path linking a sequence of events, used to trace the transactions that have affected the contents of a record. (2) A chronological record of system activities that is sufficient to enable the reconstruction, reviews, and examination of the sequence of environments and activities surrounding or leading to each event in the path of a transaction from its inception to output of final results.

B

baseline

(NIST) A specification or product that has been formally reviewed and agreed upon, that serves as the basis for further development, and that can be changed only through formal change control procedures.

basis path testing

batch

(IEEE) Pertaining to a system or mode of operation in which inputs are collected and processed all at one time, rather than being processed as they arrive, and a job, once started, proceeds to completion without additional input or user interaction. Contrast with conversational, interactive, on-line, real time.

batch processing

Execution of programs serially with no interactive processing. Contrast with real time processing.

benchmark

A standard against which measurements or comparisons can be made.

black-box testing

(1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Syn. functional testing, input/output driven testing. Contrast with white-box testing.

boolean

Pertaining to the principles of mathematical logic developed by George Boole, a nineteenth century mathematician. Boolean algebra is the study of operations carried out on variables that can have only one of two possible values, i.e., 1 (true) and 0 (false). As ADD, MULTIPLY, and DIVIDE are the primary operations of arithmetic, AND, OR, and NOT are the primary operations of Boolean Logic.

boundary value

(1) (IEEE) A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component. (2) A value which lies at, or just inside or just outside a specified range of valid input and output values.

boundary value analysis

(NBS) A selection technique in which test data are chosen to lie along "boundaries" of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
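
For example, for a hypothetical input field specified to accept quantities from 1 to 100, boundary value analysis selects values at, just inside, and just outside each boundary, as in this small Python sketch:

# Boundary value analysis for a field specified as 1..100 (hypothetical)
def accepts(quantity):
    return 1 <= quantity <= 100

boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"accepts({value}) should be {expected}"
print("all boundary cases passed")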

branch

An instruction which causes program execution to jump to a new point in the program sequence, rather than execute the next instruction. Syn: jump.

branch analysis

(Myers) A test case identification technique which produces enough test cases such that each decision has a true and a false outcome at least once. Contrast with path analysis.

branch coverage

(NBS) A test coverage criteria which requires that for each decision point each possible branch be executed at least once. Syn: decision coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.

bug

A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.

C

cause effect graphing

(1) (NBS) Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2) (Myers) A systematic method of generating test cases representing combinations of conditions. See: testing, functional.

change control

The processes, authorities for, and procedures to be used for all changes that are made to the computerized system and/or the system's data. Change control is a vital subset of the Quality Assurance [QA] program within an establishment and should be clearly described in the establishment's SOPs, See: configuration control.

change tracker

A software tool which documents all changes made to a program.

client/server

A term used in a broad sense to describe the relationship between the receiver and the provider of a service. In the world of microcomputers, the term client-server describes a networked system where front-end applications, as the client, make service requests upon another networked system. Client-server relationships are defined primarily by software. In a local area network [LAN], the workstation is the client and the file server is the server. However, client-server systems are inherently more complex than file server systems. Two disparate programs must work in tandem, and there are many more decisions to make about separating data and processing between the client workstations and the database server. The database server encapsulates database files and indexes, restricts access, enforces security, and provides applications with a consistent interface to data via a data dictionary.

code

See: program, source code.

code audit

(IEEE) An independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. Contrast with code inspection, code review, code walkthrough. See: static analysis.

code inspection

(Myers/NBS) A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection. See: static analysis.

code review

(IEEE) A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Contrast with code audit, code inspection, code walkthrough. See: static analysis.

code walkthrough

(Myers/NBS) A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code review. See: static analysis.

coding standards

Written procedures describing coding [programming] style conventions specifying rules governing the use of individual constructs provided by the programming language, and naming, formatting, and documentation requirements which prevent programming errors, control complexity and promote understandability of the source code. Syn: development standards, programming standards.

comparator

(IEEE) A software tool that compares two computer programs, files, or sets of data to identify commonalities or differences. Typical objects of comparison are similar versions of source code, object code, data base files, or test results.

compatibility testing

See: testing, compatibility.

completeness

(NIST) The property that all necessary parts of the entity are included. Completeness of a product is often used to express the fact that all requirements have been met by the product. See: traceability analysis.

complexity

(IEEE) (1) The degree to which a system or component has a design or implementation that is difficult to understand and verify. (2) Pertaining to any of a set of structure based metrics that measure the attribute in (1).

computer aided software engineering (CASE)

An automated system for the support of software development including an integrated tool set, i.e., programs, which facilitate the accomplishment of software engineering methods and tasks such as project planning and estimation, system and software requirements analysis, design of data structure, program architecture and algorithm procedure, coding, testing and maintenance.

computer system audit

(ISO) An examination of the procedures used in a computer system to evaluate their effectiveness and correctness and to recommend improvements. See: software audit.

computer system security

(IEEE) The protection of computer hardware and software from accidental or malicious access, use, modification, destruction, or disclosure. Security also pertains to personnel, data, communications, and the physical protection of computer installations.

configurable, off-the-shelf software (COTS)

Application software, sometimes general purpose, written for a variety of industries or users in a manner that permits users to modify the program to meet their individual needs.

configuration control

(IEEE) An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. See: change control.

configuration management

(IEEE) A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. See: configuration control, change control.

consistency

(IEEE) The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component.

consistency checker

A software tool used to test requirements in design specifications for both consistency and completeness.

control flow analysis

(IEEE) A software V&V task to ensure that the proposed control flow is free of problems, such as design or code elements that are unreachable or incorrect.

control flow diagram.

(IEEE) A diagram that depicts the set of all possible sequences in which operations may be performed during the execution of a system or program. Types include box diagram, flowchart, input-process-output chart, state diagram. Contrast with data flow diagram. See: call graph, structure chart.

corrective maintenance

(IEEE) Maintenance performed to correct faults in hardware or software. Contrast with adaptive maintenance, perfective maintenance.

correctness

(IEEE) The degree to which software is free from faults in its specification, design and coding. The degree to which software, documentation and other items meet specified requirements. The degree to which software, documentation and other items meet user needs and expectations, whether specified or not.

coverage analysis

(NIST) Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing relevant information have this feature. See: testing, branch; testing, path; testing, statement.

crash

(IEEE) The sudden and complete failure of a computer system or component.

critical control point

(CA) A function or an area in a manufacturing process or procedure, the failure of which, or loss of control over, may have an adverse effect on the quality of the finished product and may result in an unacceptable health risk.

critical design review

(IEEE) A review conducted to verify that the detailed design of one or more configuration items satisfies specified requirements; to establish the compatibility among the configuration items and other items of equipment, facilities, software, and personnel; to assess risk areas for each configuration item; and, as applicable, to assess the results of producibility analyses, review preliminary hardware product specifications, evaluate preliminary test planning, and evaluate the adequacy of preliminary operation and support documents. See: preliminary design review, system design review.

criticality

(IEEE) The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system. Syn: severity.

criticality analysis.

(IEEE) Analysis which identifies all software requirements that have safety implications, and assigns a criticality level to each safety-critical requirement based upon the estimated risk.

cyclic redundancy [check] code (CRC)

A technique for error detection in data communications used to assure a program or data file has been accurately transferred. The CRC is the result of a calculation on the set of transmitted bits by the transmitter which is appended to the data. At the receiver the calculation is repeated and the results compared to the encoded value. The calculations are chosen to optimize error detection. Contrast with check summation, parity check.
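
A minimal sketch of the transmit-and-verify idea using the CRC-32 routine from Python's standard library (the message content is invented):

import binascii

# Transmitter: compute the CRC over the data and append it to the message.
message = b"hello, receiver"             # hypothetical payload
crc = binascii.crc32(message)

# Receiver: repeat the calculation on the received bits and compare.
received = message                       # assume an error-free transfer here
assert binascii.crc32(received) == crc, "transmission error detected"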

cyclomatic complexity

(1) (McCabe) The number of independent paths through a program. (2) (NBS) The cyclomatic complexity of a program is equivalent to the number of decision statements plus 1.
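
Using the second form of the definition, the hypothetical function below contains two decision statements, so its cyclomatic complexity is 2 + 1 = 3, matching its three independent paths:

# Two decision statements, therefore cyclomatic complexity = 2 + 1 = 3.
def grade(score):
    if score >= 90:       # decision 1: path for an "A"
        return "A"
    if score >= 60:       # decision 2: path for a pass
        return "pass"
    return "fail"         # third path: neither decision taken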

D

data analysis

(IEEE) (1) Evaluation of the description and intended use of each data item in the software design to ensure the structure and intended use will not result in a hazard. Data structures are assessed for data dependencies that circumvent isolation, partitioning, data aliasing, and fault containment issues affecting safety, and the control or mitigation of hazards. (2) Evaluation of the data structure and usage in the code to ensure each is defined and used properly by the program. Usually performed in conjunction with logic analysis.

data corruption

(ISO) A violation of data integrity. Syn: data contamination.

data dictionary

(IEEE) (1) A collection of the names of all data items used in a software system, together with relevant properties of those items; e.g., length of data item, representation, etc. (2) A set of definitions of data flows, data elements, files, data bases, and processes referred to in a leveled data flow diagram set.

data flow analysis

(IEEE) A software V&V task to ensure that the input and output data and their formats are properly defined, and that the data flows are correct.

data flow diagram

(IEEE) A diagram that depicts data sources, data sinks, data storage, and processes performed on data as nodes, and logical flow of data as links between the nodes. Syn: data flowchart, data flow graph.

data integrity

(IEEE) The degree to which a collection of data is complete, consistent, and accurate. Syn: data quality.

data validation

(1) (ISO) A process used to determine if data are inaccurate, incomplete, or unreasonable. The process may include format checks, completeness checks, check key tests, reasonableness checks and limit checks. (2) The checking of data for correctness or compliance with applicable standards, rules, and conventions.

dead code

Program code statements which can never execute during program operation. Such code can result from poor coding style, or can be an artifact of previous versions or debugging efforts. Dead code can be confusing, and is a potential source of erroneous software changes.
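
A contrived Python illustration of the idea:

def absolute(x):
    if x >= 0:
        return x
    else:
        return -x
    print("finished")    # dead code: both branches return, so this line can never execute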

debugging

(Myers) Determining the exact nature and location of a program error, and fixing the error.

decision coverage

(Myers) A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.

decision table

(IEEE) A table used to show sets of conditions and the actions resulting from them.

defect

Nonconformance to requirements. See: anomaly, bug, error, exception, fault.

defect analysis

See: failure analysis.

design of experiments

A methodology for planning experiments so that data appropriate for [statistical] analysis will be collected.

design phase

(IEEE) The period of time in the software life cycle during which the designs for architecture, software components, interfaces, and data are created, documented, and verified to satisfy requirements.

design specification

(NIST) A specification that documents how a system is to be built. It typically includes system or component structure, algorithms, control logic, data structures, data set [file] use information, input/output formats, interface descriptions, etc. Contrast with design standards, requirement. See: software design description.

development methodology

(ANSI) A systematic approach to software creation that defines development phases and specifies the activities, products, verification procedures, and completion criteria for each phase. See: incremental development, rapid prototyping, spiral model, waterfall model.

diagnostic

(IEEE) Pertaining to the detection and isolation of faults or failures. For example, a diagnostic message, a diagnostic manual.

different software system analysis.

(IEEE) Analysis of the allocation of software requirements to separate computer systems to reduce integration and interface errors related to safety. Performed when more than one software system is being integrated. See: testing, compatibility.

dynamic analysis

(NBS) Analysis that is performed by executing the program code. Contrast with static analysis. See: testing.

E

embedded software

(IEEE) Software that is part of a larger system and performs some of the requirements of that system; e.g., software used in an aircraft or rapid transit system. Such software does not provide an interface with the user. See: firmware.

end user

(ANSI) (1) A person, device, program, or computer system that uses an information system for the purpose of data processing in information exchange. (2) A person whose occupation requires the use of an information system but does not require any knowledge of computers or computer programming. See: user.

entity relationship diagram

(IEEE) A diagram that depicts a set of real-world entities and the logical relationships among them. See: data structure diagram.

environment

(ANSI) (1) Everything that supports a system or the performance of a function. (2) The conditions that affect the performance of a system or function.

equivalence class partitioning

(Myers) Partitioning the input domain of a program into a finite number of classes [sets], to identify a minimal set of well selected test cases to represent these classes. There are two types of input equivalence classes, valid and invalid. See: testing, functional.
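
A sketch of the technique, assuming a field specified to accept ages from 18 to 65: the input domain falls into one valid class and two invalid classes, and one representative value per class is normally selected:

# Hypothetical specification: accepted ages are 18..65 inclusive.
def accepts(age):
    return 18 <= age <= 65

# One representative test value per equivalence class.
representatives = {
    "invalid, below range": 10,    # expected to be rejected
    "valid, in range": 40,         # expected to be accepted
    "invalid, above range": 80,    # expected to be rejected
}

for label, age in representatives.items():
    print(label, "->", accepts(age))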

error

(ISO) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, fault.

error analysis

See: debugging, failure analysis.

error detection

Techniques used to identify errors in data transfers. See: check summation, cyclic redundancy check [CRC], parity check, longitudinal redundancy.

error guessing

(NBS) A test data selection technique in which the tester picks values that seem likely to cause program errors.

error seeding

(IEEE) The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. Contrast with mutation analysis.
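
A back-of-the-envelope sketch of the usual estimate (all counts invented): if S faults are seeded, s of them are detected, and n indigenous faults are detected by the same testing, the indigenous total is commonly estimated as n * S / s:

# Hypothetical counts from one test campaign.
seeded_total = 20       # faults deliberately inserted
seeded_found = 15       # seeded faults detected by testing
indigenous_found = 30   # real (non-seeded) faults detected by the same testing

# Seeding estimate of the total number of indigenous faults.
estimated_total = indigenous_found * seeded_total / seeded_found
print(estimated_total)                      # 40.0 faults estimated in total
print(estimated_total - indigenous_found)   # 10.0 estimated to remain undetected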

event table

A table which lists events and the corresponding specified effect[s] of or reaction[s] to each event.

exception

(IEEE) An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception, protection exception, underflow exception. See: anomaly, bug, defect, error, fault.

exception conditions/responses table

A special type of event table.

execution trace

(IEEE) A record of the sequence of instructions executed during the execution of a computer program. Often takes the form of a list of code labels encountered as the program executes. Syn: code trace, control flow trace. See: retrospective trace, subroutine trace, symbolic trace, variable trace.

external test data

(NBS) Test data that is at the extreme or boundary of the domain of an input variable or which produces results at the boundary of an output domain. See: testing, boundary value.

F

Fagan inspection

See: code inspection.

fail-safe

(IEEE) A system or component that automatically places itself in a safe operational mode in the event of a failure.

failure

(IEEE) The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, fault.

failure analysis

Determining the exact nature and location of a program error in order to fix the error, to identify and fix other similar errors, and to initiate corrective action to prevent future occurrences of this type of error. Contrast with debugging.

Failure Modes and Effects Analysis

(IEC) A method of reliability analysis intended to identify failures, at the basic component level, which have significant consequences affecting the system performance in the application considered.

Failure Modes and Effects Criticality Analysis

(IEC) A logical extension of FMEA which analyzes the severity of the consequences of failure.

fault

An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, bug, defect, error, exception.

fault seeding

See: error seeding.

Fault Tree Analysis

(IEC) The identification and analysis of conditions and factors which cause or contribute to the occurrence of a defined undesirable event, usually one which significantly affects system performance, economy, safety or other required characteristics.

feasibility study

Analysis of the known or anticipated need for a product, system, or component to assess the degree to which the requirements, designs, or plans can be implemented.

flowchart or flow diagram

(1) (ISO) A graphical representation in which symbols are used to represent such things as operations, data, flow direction, and equipment, for the definition, analysis, or solution of a problem. (2) (IEEE) A control flow diagram in which suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another. Syn: flow diagram. See: block diagram, box diagram, bubble chart, graph, input-process-output chart, structure chart.

formal qualification review

(IEEE) The test, inspection, or analytical process by which a group of configuration items comprising a system is verified to have met specific contractual performance requirements. Contrast with code review, design review, requirements review, test readiness review.

function

(1) (ISO) A mathematical entity whose value, namely, the value of the dependent variable, depends in a specified manner on the values of one or more independent variables, with not more than one value of the dependent variable corresponding to each permissible combination of values from the respective ranges of the independent variables. (2) A specific purpose of an entity, or its characteristic action.

functional analysis

(IEEE) Verifies that each safety-critical software requirement is covered and that an appropriate criticality level is assigned to each software element.

functional configuration audit

(IEEE) An audit conducted to verify that the development of a configuration item has been completed satisfactorily, that the item has achieved the performance and functional characteristics specified in the functional or allocated configuration identification, and that its operational and support documents are complete and satisfactory. See: physical configuration audit.

functional decomposition

See: modular decomposition.

functional design

(IEEE) (1) The process of defining the working relationships among the components of a system. See: architectural design. (2) The result of the process in (1).

functional requirement

(IEEE) A requirement that specifies a function that a system or system component must be able to perform.

G

graph

(IEEE) A diagram or other representation consisting of a finite set of nodes and internode connections called edges or arcs. Contrast with blueprint. See: block diagram, box diagram, bubble chart, call graph, cause-effect graph, control flow diagram, data flow diagram, directed graph, flowchart, input-process-output chart, structure chart, transaction flowgraph.

graphic software specifications

Documents such as charts, diagrams, graphs which depict program structure, states of data, control, transaction flow, HIPO, and cause-effect relationships; and tables including truth, decision, event, state-transition, module interface, and exception conditions/responses tables necessary to establish design integrity.

H

hazard

(DOD) A condition that is prerequisite to a mishap.

hazard analysis

A technique used to identify conceivable failures affecting system performance, human safety or other required characteristics. See: FMEA, FMECA, FTA, software hazard analysis, software safety requirements analysis, software safety design analysis, software safety code analysis, software safety test analysis, software safety change analysis.

hazard probability

(DOD) The aggregate probability of occurrence of the individual events that create a specific hazard.

hazard severity

(DOD) An assessment of the consequence of the worst credible mishap that could be caused by a specific hazard.

I

inspection

A manual testing technique in which program documents [specifications [requirements, design], source code, or user's manuals] are examined in a very formal and disciplined manner to discover errors, violations of standards and other problems. Checklists are a typical vehicle used in accomplishing this technique. See: static analysis, code audit, code inspection, code review, code walkthrough.

installation and checkout phase

(IEEE) The period of time in the software life cycle during which a software product is integrated into its operational environment and tested in this environment to ensure that it performs as required.

Institute of Electrical and Electronic Engineers (IEEE)

345 East 47th Street, New York, NY 10017. An organization involved in the generation and promulgation of standards. IEEE standards represent the formalization of current norms of professional practice through the process of obtaining the consensus of concerned, practicing professionals in the given field.

instruction

(1) (ANSI/IEEE) A program statement that causes a computer to perform a particular operation or set of operations. (2) (ISO) In a programming language, a meaningful expression that specifies one operation and identifies its operands, if any.

instruction set

(1) (IEEE) The complete set of instructions recognized by a given computer or provided by a given programming language. (2) (ISO) The set of the instructions of a computer, of a programming language, or of the programming languages in a programming system. See: computer instruction set.

instrumentation

(NBS) The insertion of additional code into a program in order to collect information about program behavior during program execution. Useful for dynamic analysis techniques such as assertion checking, coverage analysis, tuning.

interface

(1) (ISO) A shared boundary between two functional units, defined by functional characteristics, common physical interconnection characteristics, signal characteristics, and other characteristics, as appropriate. The concept involves the specification of the connection of two devices having different functions. (2) A point of communication between two or more processes, persons, or other physical entities. (3) A peripheral device which permits two or more devices to communicate.

interface analysis

(IEEE) Evaluation of: (1) software requirements specifications with hardware, user, operator, and software interface requirements documentation, (2) software design description records with hardware, operator, and software interface requirements specifications, (3) source code with hardware, operator, and software interface design documentation, for correctness, consistency, completeness, accuracy, and readability. Entities to evaluate include data items and control items.

interface requirement

(IEEE) A requirement that specifies an external item with which a system or system component must interact, or sets forth constraints on formats, timing, or other factors caused by such an interaction.

invalid inputs

(1) (NBS) Test data that lie outside the domain of the function the program represents. (2) These are not only inputs outside the valid range for data to be input, i.e., when the specified input range is 50 to 100, but also unexpected inputs, especially when these unexpected inputs may easily occur; e.g., the entry of alpha characters or special keyboard characters when only numeric data is valid, or the input of abnormal command sequences to a program.

J

JCL.

job control language.

job.

(IEEE) A user-defined unit of work that is to be accomplished by a computer. For example, the compilation, loading, and execution of a computer program. See: job control language.

job control language.

(IEEE) A language used to identify a sequence of jobs, describe their requirements to an operating system, and control their execution.

L

latent defect.

See: bug, fault.

life cycle.

See: software life cycle.

life cycle methodology.

The use of any one of several structured methods to plan, design, implement, test, and operate a system from its conception to the termination of its use. See: waterfall model.

logic analysis.

(IEEE) (1) Evaluates the safety-critical equations, algorithms, and control logic of the software design. (2) Evaluates the sequence of operations represented by the coded program and detects programming errors that might create hazards.

low-level language.

See: assembly language. The advantage of assembly language is that it provides bit-level control of the processor, allowing tuning of the program for optimal speed and performance. For time-critical operations, assembly language may be necessary in order to generate code which executes fast enough for the required operations. The disadvantage of assembly language is the high level of complexity and detail required in the programming. This makes the source code harder to understand, thus increasing the chance of introducing errors during program development and maintenance.


M

MTBF.

mean time between failures.

MTTR.

mean time to repair.

MTTF.

mean time to failure.

machine code.

(IEEE) Computer instructions and definitions expressed in a form [binary code] that can be recognized by the CPU of a computer. All source code, regardless of the language in which it was programmed, is eventually converted to machine code. Syn: object code.

machine language.

See: machine code.

macro.

(IEEE) In software engineering, a predefined sequence of computer instructions that is inserted into a program, usually during assembly or compilation, at each place that its corresponding macroinstruction appears in the program.

mainframe.

Term used to describe a large computer.

maintainability.

(IEEE) The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment. Syn: modifiability

maintenance.

(QA) Activities such as adjusting, cleaning, modifying, overhauling equipment to assure performance in accordance with requirements. Maintenance to a software system includes correcting software errors, adapting software to a new environment, or making enhancements to software. See: adaptive maintenance, corrective maintenance, perfective maintenance.

mean time between failures.

A measure of the reliability of a computer system, equal to average operating time of equipment between failures, as calculated on a statistical basis from the known failure rates of various components of the system.
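
A simple worked example with invented figures:

# Hypothetical field data: total operating hours and observed failures.
operating_hours = 12000
failures = 8

mtbf = operating_hours / failures
print(mtbf)   # 1500.0 hours of operation between failures, on average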

mean time to failure.

A measure of reliability, giving the average time before the first failure.

mean time to repair.

A measure of reliability of a piece of repairable equipment, giving the average time between repairs.

measure.

(IEEE) A quantitative assessment of the degree to which a software product or process possesses a given attribute.

measurable.

Capable of being measured.

measurement.

The process of determining the value of some quantity in terms of a standard unit.

metric based test data generation.

(NBS) The process of generating test sets for structural testing based upon use of complexity metrics or coverage metrics.

metric, software quality.

(IEEE) A quantitative measure of the degree to which software possesses a given attribute which affects its quality.

mishap.

(DOD) An unplanned event or series of events resulting in death, injury, occupational illness, or damage to or loss of data and equipment or property, or damage to the environment. Syn: accident.

modular software.

(IEEE) Software composed of discrete parts. See: structured design.

modularity.

(IEEE) The degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components.

module.

(1) In programming languages, a self-contained subdivision of a program that may be separately compiled. (2) A discrete set of instructions, usually processed as a unit, by an assembler, a compiler, a linkage editor, or similar routine or subroutine. (3) A packaged functional hardware unit suitable for use with other components.

multiple condition coverage.

(Myers) A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.

mutation analysis.

(NBS) A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants [mutants] of the program. Contrast with error seeding.
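
A minimal sketch: the mutant below changes one relational operator; a test set that cannot distinguish the mutant from the original program is weak with respect to that class of fault (both functions are invented for the example):

def is_adult(age):           # original program
    return age >= 18

def is_adult_mutant(age):    # mutant: ">=" replaced by ">"
    return age > 18

# A test set without the boundary value cannot tell the two apart ("mutant survives");
# adding the boundary value 18 distinguishes them ("mutant killed").
weak_tests = [10, 30]
strong_tests = [10, 18, 30]
print(all(is_adult(a) == is_adult_mutant(a) for a in weak_tests))    # True  -> survives
print(all(is_adult(a) == is_adult_mutant(a) for a in strong_tests))  # False -> killed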

N

NIST.

National Institute of Standards and Technology.

National Institute of Standards and Technology.

Gaithersburg, MD 20899. A federal agency under the Department of Commerce, originally established by an act of Congress on March 3, 1901 as the National Bureau of Standards. The Institute's overall goal is to strengthen and advance the Nation's science and technology and facilitate their effective application for public benefit. The National Computer Systems Laboratory conducts research and provides, among other things, the technical foundation for computer related policies of the Federal Government.

noncritical code analysis.

(IEEE) (1) Examines software elements that are not designated safety-critical and ensures that these elements do not cause a hazard. (2) Examines portions of the code that are not considered safety-critical code to ensure they do not cause hazards. Generally, safety-critical code should be isolated from non-safety-critical code. This analysis is to show this isolation is complete and that interfaces between safety-critical code and non-safety-critical code do not create hazards.

O

OOP.

object oriented programming.

object.

In object oriented programming, a self-contained module [encapsulation] of data and the programs [services] that manipulate [process] that data.

object code.

(NIST) A code expressed in machine language ["1"s and "0"s] which is normally an output of a given translation process that is ready to be executed by a computer. Syn: machine code. Contrast with source code. See: object program.

object oriented design.

(IEEE) A software development technique in which a system or component is expressed in terms of objects and connections between those objects.

object oriented language.

(IEEE) A programming language that allows the user to express a program in terms of objects and messages between those objects. Examples include C++, Smalltalk and LOGO.

object oriented programming.

A technology for writing programs that are made up of self-sufficient modules that contain all of the information needed to manipulate a given data structure. The modules are created in class hierarchies so that the code or methods of a class can be passed to other modules. New object modules can be easily created by inheriting the characteristics of existing classes. See: object, object oriented design.

object program.

(IEEE) A computer program that is the output of an assembler or compiler.

octal.

The base 8 number system. Digits are 0, 1, 2, 3, 4, 5, 6, and 7.

on-line.

(IEEE) Pertaining to a system or mode of operation in which input data enter the computer directly from the point of origin or output data are transmitted directly to the point where they are used. For example, an airline reservation system. Contrast with batch. See: conversational, interactive, real time.

operating system.

(ISO) Software that controls the execution of programs, and that provides services such as resource allocation, scheduling, input/output control, and data management. Usually, operating systems are predominantly software, but partial or complete hardware implementations are possible.

operation and maintenance phase.

(IEEE) The period of time in the software life cycle during which a software product is employed in its operational environment, monitored for satisfactory performance, and modified as necessary to correct problems or to respond to changing requirements.

operation exception.

(IEEE) An exception that occurs when a program encounters an invalid operation code.

P

performance requirement.

(IEEE) A requirement that imposes conditions on a functional requirement; e.g., a requirement that specifies the speed, accuracy, or memory usage with which a given function must be performed.

peripheral device.

Equipment that is directly connected to a computer. A peripheral device can be used to input data, e.g., keypad, bar code reader, transducer, laboratory test equipment; or to output data, e.g., printer, disk drive, video system, tape drive, valve controller, motor controller. Syn: peripheral equipment.

physical requirement.

(IEEE) A requirement that specifies a physical characteristic that a system or system component must possess; e.g., material, shape, size, weight.

platform.

The hardware and software which must be present and functioning for an application program to run [perform] as intended. A platform includes, but is not limited to, the operating system or executive software, communication software, microprocessor, network, input/output hardware, any generic software libraries, database management, user interface software, and the like.

polling.

A technique a CPU can use to learn if a peripheral device is ready to receive data or to send data. In this method each device is checked or polled in-turn to determine if that device needs service. The device must wait until it is polled in order to send or receive data. This method is useful if the device's data can wait for a period of time before being processed, since each device must await its turn in the polling scheme before it will be serviced by the processor. Contrast with interrupt.
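
A rough sketch of a polling loop (the device objects and their methods are invented placeholders, not a real hardware API):

import time

class Device:
    """Hypothetical peripheral exposing a status check and a service routine."""
    def __init__(self, name):
        self.name = name
    def ready(self):
        return False              # placeholder: real hardware would report its status
    def service(self):
        print("servicing", self.name)

devices = [Device("keypad"), Device("sensor"), Device("printer")]

# Each device is checked in turn; a device must wait for its turn to be serviced.
for _ in range(3):                # a few polling rounds, for illustration only
    for dev in devices:
        if dev.ready():
            dev.service()
    time.sleep(0.1)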

production database.

The computer file that contains the establishment's current production data.

program.

(1) (ISO) A sequence of instructions suitable for processing. Processing may include the use of an assembler, a compiler, an interpreter, or another translator to prepare the program for execution. The instructions may include statements and necessary declarations. (2) (ISO) To design, write, and test programs. (3) (ANSI) In programming languages, a set of one or more interrelated modules capable of being executed. (4) Loosely, a routine. (5) Loosely, to write a routine.

program design language.

(IEEE) A specification language with special constructs and, sometimes, verification protocols, used to develop, analyze, and document a program design.

program mutation.

(IEEE) A computer program that has been purposely altered from the intended version to evaluate the ability of program test cases to detect the alteration. See: testing, mutation.

programming language.

(IEEE) A language used to express computer programs. See: computer language, high-level language, low-level language.

programming standards.

See: coding standards.

project plan.

(NIST) A management document describing the approach taken for a project. The plan typically describes work to be done, resources required, methods to be used, the configuration management and quality assurance procedures to be followed, the schedules to be met, the project organization, etc. Project in this context is a generic term. Some projects may also need integration plans, security plans, test plans, quality assurance plans, etc. See: documentation plan, software development plan, test plan, software engineering.

proof of correctness.

(NBS) The use of techniques of mathematical logic to infer that a relation between program variables assumed true at program entry implies that another relation between program variables holds at program exit.

protocol.

(ISO) A set of semantic and syntactic rules that determines the behavior of functional units in achieving communication.

prototyping.

Using software tools to accelerate the software development process by facilitating the identification of required functionality during analysis and design phases. A limitation of this technique is the identification of system or software problems and hazards. See: rapid prototyping.

pseudocode.

A combination of programming language and natural language used to express a software design. If used, it is usually the last document produced prior to writing the source code.

Q

QA.

quality assurance.

QC.

quality control.

qualification, installation.

(FDA) Establishing confidence that process equipment and ancillary systems are compliant with appropriate codes and approved design intentions, and that manufacturer's recommendations are suitably considered.

qualification, operational.

(FDA) Establishing confidence that process equipment and subsystems are capable of consistently operating within established limits and tolerances.

qualification, process performance.

(FDA) Establishing confidence that the process is effective and reproducible.

qualification, product performance.

(FDA) Establishing confidence through appropriate testing that the finished product produced by a specified process meets all release requirements for functionality and safety.

quality assurance.

(1) (ISO) The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements. (2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and procedures. (3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of confidence in data integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output. (4) (QA) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the product are working as expected individually and collectively.

quality assurance, software.

(IEEE) (1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. (2) A set of activities designed to evaluate the process by which products are developed or manufactured.

quality control.

The operational techniques and procedures used to achieve quality requirements.

R

rapid prototyping.

A structured software requirements discovery technique which emphasizes generating prototypes early in the development process to permit early feedback and analysis in support of the development process. Contrast with incremental development, spiral model, waterfall model. See: prototyping.

real time.

(IEEE) Pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results can be used to control, monitor, or respond in a timely manner to the external process. Contrast with batch. See: conversational, interactive.

real time processing.

A fast-response [immediate response] on-line system which obtains data from an activity or a physical process, performs computations, and returns a response rapidly enough to affect [control] the outcome of the activity or process; e.g., a process control application. Contrast with batch processing.

record.

(ISO) A group of related data elements treated as a unit. [A data element (field) is a component of a record, a record is a component of a file (database)].

record of change.

Documentation of changes made to the system. A record of change can be a written document or a database. Normally there are two records of change associated with a computer system: one for hardware and one for software. Changes made to the data are recorded in an audit trail.

regression analysis and testing.

(IEEE) A software V&V task to determine the extent of V&V analysis and testing that must be repeated when changes are made to any previously examined software products. See: testing, regression.

relational database.

Database organization method that links files together as required. Relationships between files are created by comparing data such as account numbers and names. A relational system can take any two or more files and generate a new file from the records that meet the matching criteria. Routine queries often involve more than one data file; e.g., a customer file and an order file can be linked in order to ask a question that relates to information in both files, such as the names of the customers that purchased a particular product. Contrast with network database, flat file.

release.

(IEEE) The formal notification and distribution of an approved version. See: version.

reliability.

(IEEE) The ability of a system or component to perform its required functions under stated conditions for a specified period of time. See: software reliability.

reliability assessment.

(ANSI/IEEE) The process of determining the achieved level of reliability for an existing system or system component.

requirement.

(IEEE) (1) A condition or capability needed by a user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents. (3) A documented representation of a condition or capability as in (1) or (2). See: design requirement, functional requirement, implementation requirement, interface requirement, performance requirement, physical requirement.

requirements analysis.

(IEEE) (1) The process of studying user needs to arrive at a definition of a system, hardware, or software requirements. (2) The process of studying and refining system, hardware, or software requirements. See: prototyping, software engineering.

requirements phase.

(IEEE) The period of time in the software life cycle during which the requirements, such as functional and performance capabilities for a software product, are defined and documented.

revalidation.

Relative to software changes, revalidation means validating the change itself, assessing the nature of the change to determine potential ripple effects, and performing the necessary regression testing.

review.

(IEEE) A process or meeting during which a work product or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include code review, design review, formal qualification review, requirements review, test readiness review. Contrast with audit, inspection. See: static analysis.

risk.

(IEEE) A measure of the probability and severity of undesired effects. Often taken as the simple product of probability and consequence.

risk assessment.

(DOD) A comprehensive evaluation of the risk and its associated impact.

S

safety.

(DOD) Freedom from those conditions that can cause death, injury, occupational illness, or damage to or loss of equipment or property, or damage to the environment.

safety critical.

(DOD) A term applied to a condition, event, operation, process, or item whose proper recognition, control, performance, or tolerance is essential to safe system operation or use; e.g., safety critical function, safety critical path, safety critical component.

safety critical computer software components.

(DOD) Those computer software components and units whose errors can result in a potential hazard, or loss of predictability or control of a system.

security.

See: computer system security.

side effect.

An unintended alteration of a program's behavior caused by a change in one part of the program, without taking into account the effect the change has on another part of the program. See: regression analysis and testing.

simulation.

(1) (NBS) Use of an executable model to represent the behavior of an object. During testing the computational hardware, the external environment, and even code segments may be simulated. (2) (IEEE) A model that behaves or operates like a given system when provided a set of controlled inputs. Contrast with emulation.

simulation analysis.

(IEEE) A software V&V task to simulate critical tasks of the software or system environment to analyze logical or performance characteristics that would not be practical to analyze manually.

simulator.

(IEEE) A device, computer program, or system that behaves or operates like a given system when provided a set of controlled inputs. Contrast with emulator. A simulator provides inputs or responses that resemble anticipated process parameters. Its function is to present data to the system at known speeds and in a proper format.

sizing.

(IEEE) The process of estimating the amount of computer storage or the number of source lines required for a software system or component. Contrast with timing.

software.

(ANSI) Programs, procedures, rules, and any associated documentation pertaining to the operation of a system. Contrast with hardware. See: application software, operating system, system software, utility software.

software characteristic.

An inherent, possibly accidental, trait, quality, or property of software; e.g., functionality, performance, attributes, design constraints, number of states, lines or branches.

software configuration item.

See: configuration item.

software design description.

(IEEE) A representation of software created to facilitate analysis, planning, implementation, and decision making. The software design description is used as a medium for communicating software design information, and may be thought of as a blueprint or model of the system. See: structured design, design description, specification.

software development notebook.

(NIST) A collection of material pertinent to the development of a software module. Contents typically include the requirements, design, technical reports, code listings, test plans, test results, problem reports, schedules, notes, etc. for the module. Syn: software development file.

software development plan.

(NIST) The project plan for the development of a software product. Contrast with software development process, software life cycle.

software development process.

(IEEE) The process by which user needs are translated into a software product. The process involves translating user needs into software requirements, transforming the requirements into design, implementing the design in code, testing the code, and sometimes installing and checking out the software for operational use.

software diversity.

(IEEE) A software development technique in which two or more functionally identical variants of a program are developed from the same specification by different programmers or programming teams with the intent of providing error detection, increased reliability, additional documentation or reduced probability that programming or compiler errors will influence the end results.

software documentation.

(NIST) Technical data or information, including computer listings and printouts, in human readable form, that describe or specify the design or details, explain the capabilities, or provide operating instructions for using the software to obtain desired results from a software system. See: specification; specification, requirements; specification, design; software design description; test plan; test report; user's guide.

software element.

(IEEE) A deliverable or in-process document produced or acquired during software development or maintenance. Specific examples include but are not limited to:

(1) Project planning documents; i.e., software development plans, and software verification and validation plans.

(2) Software requirements and design specifications.

(3) Test documentation.

(4) Customer-deliverable documentation.

(5) Program source code.

(6) Representation of software solutions implemented in firmware.

(7) Reports; i.e., review, audit, project status.

(8) Data; i.e., defect detection, test. Contrast with software item. See: configuration item.

software engineering.

(IEEE) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; i.e., the application of engineering to software. See: project plan, requirements analysis, architectural design, structured design, system safety, testing, configuration management.

software engineering environment.

(IEEE) The hardware, software, and firmware used to perform a software engineering effort. Typical elements include computer equipment, compilers, assemblers, operating systems, debuggers, simulators, emulators, test tools, documentation tools, and database management systems.

software hazard analysis.

(CDE, CDRH) The identification of safety-critical software, the classification and estimation of potential hazards, and identification of program path analysis to identify hazardous combinations of internal and environmental program conditions. See: risk assessment, software safety change analysis, software safety code analysis, software safety design analysis, software safety requirements analysis, software safety test analysis, system safety.

software item.

(IEEE) Source code, object code, job control code, control data, or a collection of these items. Contrast with software element.

software life cycle.

(NIST) Period of time beginning when a software product is conceived and ending when the product is no longer available for use. The software life cycle is typically broken into phases denoting activities such as requirements, design, programming, testing, installation, and operation and maintenance. Contrast with software development process. See: waterfall model.

software reliability.

(IEEE) (1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to, and use of, the system as well as a function of the existence of faults in the software. The inputs to the system determine whether existing faults, if any, are encountered. (2) The ability of a program to perform its required functions accurately and reproducibly under stated conditions for a specified period of time.

software requirements specification.

See: specification, requirements.

software review.

(IEEE) An evaluation of software elements to ascertain discrepancies from planned results and to recommend improvement. This evaluation follows a formal process. Syn: software audit. See: code audit, code inspection, code review, code walkthrough, design review, specification analysis, static analysis.

software safety change analysis.

(IEEE) Analysis of the safety-critical design elements affected directly or indirectly by the change to show the change does not create a new hazard, does not impact on a previously resolved hazard, does not make a currently existing hazard more severe, and does not adversely affect any safety-critical software design element. See: software hazard analysis, system safety

software safety code analysis.

(IEEE) Verification that the safety-critical portions of the design are correctly implemented in the code. See: logic analysis, data analysis, interface analysis, constraint analysis, programming style analysis, non-critical code analysis, timing and sizing analysis, software hazard analysis, system safety.

software safety design analysis.

(IEEE) Verification that the safety-critical portion of the software design correctly implements the safety-critical requirements and introduces no new hazards. See: logic analysis, data analysis, interface analysis, constraint analysis, functional analysis, software element analysis, timing and sizing analysis, reliability analysis, software hazard analysis, system safety.

software safety requirements analysis.

(IEEE) Analysis evaluating software and interface requirements to identify errors and deficiencies that could contribute to a hazard. See: criticality analysis, specification analysis, timing and sizing analysis, different software systems analyses, software hazard analysis, system safety.

software safety test analysis.

(IEEE) Analysis demonstrating that safety requirements have been correctly implemented and that the software functions safely within its specified environment. Tests may include unit level tests, interface tests, software configuration item testing, system level testing, stress testing, and regression testing. See: software hazard analysis, system safety.

source code.

(1) (IEEE) Computer instructions and data definitions expressed in a form suitable for input to an assembler, compiler or other translator. (2) The human readable version of the list of instructions [program] that cause a computer to perform a task. Contrast with object code. See: source program, programming language.

source program.

(IEEE) A computer program that must be compiled, assembled, or otherwise translated in order to be executed by a computer. Contrast with object program. See: source code.

spaghetti code.

Program source code written without a coherent structure. Implies the excessive use of GOTO instructions. Contrast with structured programming.

special test data.

(NBS) Test data based on input values that are likely to require special handling by the program. See: error guessing; testing, special case.

specification.

(IEEE) A document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristics of a system or component, and often, the procedures for determining whether these provisions have been satisfied. Contrast with requirement. See: specification, formal; specification, requirements; specification, functional; specification, performance; specification, interface; specification, design; coding standards; design standards.

specification analysis.

(IEEE) Evaluation of each safety-critical software requirement with respect to a list of qualities such as completeness, correctness, consistency, testability, robustness, integrity, reliability, usability, flexibility, maintainability, portability, interoperability, accuracy, auditability, performance, internal instrumentation, security and training.

specification, design.

(NIST) A specification that documents how a system is to be built. It typically includes system or component structure, algorithms, control logic, data structures, data set [file] use information, input/output formats, interface descriptions, etc. Contrast with design standards, requirement. See: software design description.

specification, formal.

(NIST) (1) A specification written and approved in accordance with established standards. (2) A specification expressed in a requirements specification language. Contrast with requirement.

specification, functional.

(NIST) A specification that documents the functional requirements for a system or system component. It describes what the system or component is to do rather than how it is to be built. Often part of a requirements specification. Contrast with requirement.

specification, interface.

(NIST) A specification that documents the interface requirements for a system or system component. Often part of a requirements specification. Contrast with requirement.

specification, performance.

(IEEE) A document that sets forth the performance characteristics that a system or component must possess. These characteristics typically include speed, accuracy, and memory usage. Often part of a requirements specification. Contrast with requirement.

specification, product.

(IEEE) A document which describes the as-built version of the software.

specification, programming.

(NIST) See: specification, design.

specification, requirements.

(NIST) A specification that documents the requirements of a system or system component. It typically includes functional requirements, performance requirements, interface requirements, design requirements [attributes and constraints], development [coding] standards, etc. Contrast with requirement.

specification, system.

See: requirements specification.

specification, test case.

See: test case.

specification tree.

(IEEE) A diagram that depicts all of the specifications for a given system and shows their relationship to one another.

spiral model.

(IEEE) A model of the software development process in which the constituent activities, typically requirements analysis, preliminary and detailed design, coding, integration, and testing, are performed iteratively until the software is complete. Syn: evolutionary model. Contrast with incremental development; rapid prototyping; waterfall model.

standard operating procedures.

(SOP) Written procedures [prescribing and describing the steps to be taken in normal and defined conditions] which are necessary to assure control of production and processes.

state.

(IEEE) (1) A condition or mode of existence that a system, component, or simulation may be in; e.g., the pre-flight state of an aircraft navigation program or the input state of a given channel.

state diagram.

(IEEE) A diagram that depicts the states that a system or component can assume, and shows the events or circumstances that cause or result from a change from one state to another. Syn: state graph. See: state-transition table.

statement coverage.

See: testing, statement.

state-transition table.

(Beizer) A representation of a state graph that specifies the states, the inputs, the transitions, and the outputs. See: state diagram.
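
For illustration (not part of the original glossary), a state-transition table for a hypothetical turnstile can be written down directly as a data structure and checked by simple tests; the states, inputs, and transitions below are invented:

    # State-transition table for a hypothetical turnstile:
    # (current state, input) -> next state.
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("locked", "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    def next_state(state, event):
        return TRANSITIONS[(state, event)]

    # Each row of the table becomes one expected transition to verify.
    assert next_state("locked", "coin") == "unlocked"
    assert next_state("unlocked", "push") == "locked"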

static analysis.

(1) (NBS) Analysis of a program that is performed without executing the program. (2) (IEEE) The process of evaluating a system or component based on its form, structure, content, documentation. Contrast with dynamic analysis. See: code audit, code inspection, code review, code walk-through, design review, symbolic execution.

static analyzer.

(ANSI/IEEE) A software tool that aids in the evaluation of a computer program without executing the program. Examples include checkers, compilers, cross-reference generators, standards enforcers, and flowcharters.

stepwise refinement.

A structured software design technique; data and processing steps are defined broadly at first, and then further defined with increasing detail.

structure chart.

(IEEE) A diagram that identifies modules, activities, or other entities in a system or computer program and shows how larger or more general entities break down into smaller, more specific entities. Note: The result is not necessarily the same as that shown in a call graph. Syn: hierarchy chart, program structure chart. Contrast with call graph.

structured design.

(IEEE) Any disciplined approach to software design that adheres to specified rules based on principles such as modularity, top-down design, and stepwise refinement of data, system structure, and processing steps. See: data structure centered design, input-processing-output, modular decomposition, object oriented design, rapid prototyping, stepwise refinement, structured programming, transaction analysis, transform analysis, graphical software specification/design documents, modular software, software engineering.

structured programming.

(IEEE) Any software development technique that includes structured design and results in the development of structured programs. See: structured design.

structured query language.

A language used to interrogate and process data in a relational database. Originally developed for IBM mainframes, there have been many implementations created for mini and microcomputer database applications. SQL commands can be used to work interactively with a database or can be embedded within a programming language to interface with a database.
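
As a minimal sketch of embedding SQL in a host programming language (using Python's built-in sqlite3 module; the table and data here are invented for illustration):

    import sqlite3

    # In-memory database; the "defects" table is purely illustrative.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE defects (id INTEGER PRIMARY KEY, severity TEXT)")
    conn.execute("INSERT INTO defects (severity) VALUES (?)", ("major",))
    conn.execute("INSERT INTO defects (severity) VALUES (?)", ("minor",))

    # SQL issued from within the program to interrogate the database.
    row = conn.execute("SELECT COUNT(*) FROM defects WHERE severity = 'major'").fetchone()
    print(row[0])  # prints 1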

stub.

(NBS) Special code segments that, when invoked by a code segment under test, will simulate the behavior of designed and specified modules not yet constructed.
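
For example (a sketch with invented names), a stub can stand in for a payment module that has not yet been written, so the ordering logic that calls it can still be exercised:

    def charge_card_stub(amount):
        # Stub for the not-yet-built payment module: simulates an approval
        # so that the calling code can be tested in isolation.
        return {"status": "approved", "amount": amount}

    def place_order(amount, charge=charge_card_stub):
        result = charge(amount)
        return result["status"] == "approved"

    assert place_order(10.0) is True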

subprogram.

(IEEE) A separately compilable, executable component of a computer program.

Note: This term is defined differently in various programming languages. See: co-routine, main program, routine, subroutine.

subroutine.

(IEEE) A routine that returns control to the program or subprogram that called it. Note: This term is defined differently in various programming languages. See: module.

subroutine trace.

(IEEE) A record of all or selected subroutines or function calls performed during the execution of a computer program and, optionally, the values of parameters passed to and returned by each subroutine or function. Syn: call trace. See: execution trace, retrospective trace, symbolic trace, variable trace.

support software.

(IEEE) Software that aids in the development and maintenance of other software; e.g., compilers, loaders, and other utilities.

symbolic execution.

(IEEE) A static analysis technique in which program execution is simulated using symbols, such as variable names, rather than actual values for input data, and program outputs are expressed as logical or mathematical expressions involving these symbols.

symbolic trace.

(IEEE) A record of the source statements and branch outcomes that are encountered when a computer program is executed using symbolic, rather than actual, values for input data. See: execution trace, retrospective trace, subroutine trace, variable trace.

syntax.

The structural or grammatical rules that define how symbols in a language are to be combined to form words, phrases, expressions, and other allowable constructs.

system.

(1) (ANSI) People, machines, and methods organized to accomplish a set of specific functions. (2) (DOD) A composite, at any level of complexity, of personnel, procedures, materials, tools, equipment, facilities, and software. The elements of this composite entity are used together in the intended operational or support environment to perform a given task or achieve a specific purpose, support, or mission requirement.

system administrator.

The person who is charged with the overall administration and operation of a computer system. The System Administrator is normally an employee or a member of the establishment. Syn: system manager.

system analysis.

(ISO) A systematic investigation of a real or planned system to determine the functions of the system and how they relate to each other and to any other system. See: requirements phase.

system design.

(ISO) A process of defining the hardware and software architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. See: design phase, architectural design, functional design.

system design review.

(IEEE) A review conducted to evaluate the manner in which the requirements for a system have been allocated to configuration items, the system engineering process that produced the allocation, the engineering planning for the next phase of the effort, manufacturing considerations, and the planning for production engineering. See: design review.

system documentation.

(ISO) The collection of documents that describe the requirements, capabilities, limitations, design, operation, and maintenance of an information processing system. See: specification, test documentation, user's guide.

system integration.

(ISO) The progressive linking and testing of system components into a complete system. See: incremental integration.

system life cycle.

The course of developmental changes through which a system passes from its conception to the termination of its use; e.g., the phases and activities associated with the analysis, acquisition, design, development, test, integration, operation, maintenance, and modification of a system. See: software life cycle.

system safety.

(DOD) The application of engineering and management principles, criteria, and techniques to optimize all aspects of safety within the constraints of operational effectiveness, time, and cost throughout all phases of the system life cycle. See: risk assessment, software safety change analysis, software safety code analysis, software safety design analysis, software safety requirements analysis, software safety test analysis, software engineering.

system software.

(1) (ISO) Application-independent software that supports the running of application software. (2) (IEEE) Software designed to facilitate the operation and maintenance of a computer system and its associated programs; e.g., operating systems, assemblers, utilities. Contrast with application software. See: support software.

T

test.

(IEEE) An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some aspect of the system or component.

testability.

(IEEE) (1) The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. (2) The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met. See: measurable.

test case.

(IEEE) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. Syn: test case specification. See: test procedure.

test case generator.

(IEEE) A software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines expected results. Syn: test data generator, test generator.

test design.

(IEEE) Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests. See: testing, functional; cause effect graphing; boundary value analysis; equivalence class partitioning; error guessing; testing, structural; branch analysis; path analysis; statement coverage; condition coverage; decision coverage; multiple-condition coverage.

test documentation.

(IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.

test driver.

(IEEE) A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results. Syn: test harness.
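
A minimal hand-written driver (illustrative only) that feeds inputs to a unit under test, checks the results, and reports them might look like this:

    def absolute(x):
        # Unit under test.
        return x if x >= 0 else -x

    def run_tests():
        cases = [(-3, 3), (0, 0), (7, 7)]  # (input, expected result)
        failures = 0
        for value, expected in cases:
            actual = absolute(value)
            if actual != expected:
                failures += 1
                print("FAIL: absolute(%s) = %s, expected %s" % (value, actual, expected))
        print("%d/%d tests passed" % (len(cases) - failures, len(cases)))

    if __name__ == "__main__":
        run_tests()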

test harness.

See: test driver.

test incident report.

(IEEE) A document reporting on any event that occurs during testing that requires further investigation. See: failure analysis.

test log.

(IEEE) A chronological record of all relevant details about the execution of a test.

test phase.

(IEEE) The period of time in the software life cycle in which the components of a software product are evaluated and integrated, and the software product is evaluated to determine whether or not requirements have been satisfied.

test plan.

(IEEE) Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning. See: test design, validation protocol.

test procedure.

(NIST) A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for each defined test. See: test case.

test readiness review.

(IEEE) (1) A review conducted to evaluate preliminary test results for one or more configuration items; to verify that the test procedures for each configuration item are complete, comply with test plans and descriptions, and satisfy test requirements; and to verify that a project is prepared to proceed to formal testing of the configuration items. (2) A review as in (1) for any hardware or software component. Contrast with code review, design review, formal qualification review, requirements review.

test report.

(IEEE) A document describing the conduct and results of the testing carried out for a system or system component.

test result analyzer.

A software tool used to test output data reduction, formatting, and printing.

testing.

(IEEE) (1) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e., bugs, and to evaluate the features of the software items. See: dynamic analysis, static analysis, software engineering.

testing, 100%. See: testing, exhaustive.

testing, acceptance.

(IEEE) Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. Contrast with testing, development; testing, operational. See: testing, qualification.

testing, alpha [a].

(Pressman) Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment with the developer observing and recording errors and usage problems.

testing, assertion.

(NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes. See: assertion checking, instrumentation.
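
A small sketch of the idea: assertions about the relationship between program variables are inserted into the code and checked while it executes (the banking example is invented):

    def withdraw(balance, amount):
        assert amount > 0, "withdrawal amount must be positive"
        new_balance = balance - amount
        # Assertion relating two program variables, evaluated at run time.
        assert new_balance <= balance, "balance must not increase on withdrawal"
        return new_balance

    withdraw(100, 30)    # assertions hold
    # withdraw(100, -5)  # would raise AssertionError during testing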

testing, beta [B].

(1) (Pressman) Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer. (2) For medical device software such use may require an Investigational Device Exemption [IDE] or Institutional Review Board [IRB] approval.

testing, boundary value.

A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above, the defined limits of an output domain. See: boundary value analysis; testing, stress.
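
For instance, if an input field hypothetically accepts ages from 18 to 65, boundary value tests pick values at and just beyond those limits:

    def is_valid_age(age):
        # Hypothetical specification: valid ages are 18 through 65 inclusive.
        return 18 <= age <= 65

    assert is_valid_age(17) is False  # just below the lower limit
    assert is_valid_age(18) is True   # at the lower limit
    assert is_valid_age(65) is True   # at the upper limit
    assert is_valid_age(66) is False  # just above the upper limit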

testing, branch.

(NBS) Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch [outcome] be executed at least once. Contrast with testing, path; testing, statement. See: branch coverage.
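
As a sketch, the function below has one decision point with two possible branches; a branch-adequate test set forces each outcome at least once:

    def classify(temperature):
        if temperature > 100:     # decision point
            return "boiling"      # branch taken when the condition is true
        return "not boiling"      # branch taken when the condition is false

    # Two test cases execute each branch at least once.
    assert classify(150) == "boiling"
    assert classify(20) == "not boiling"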

testing, compatibility.

The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems. See: different software system analysis; testing, integration; testing, interface.

testing, formal.

(IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

testing, functional.

(IEEE) (1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Syn: black-box testing, input/output driven testing. Contrast with testing, structural.

testing, integration.

(IEEE) An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated.

testing, interface.

(IEEE) Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with testing, unit; testing, system. See: testing, integration.

testing, interphase. See: testing, interface.

testing, invalid case.

A testing technique using erroneous [invalid, abnormal, or unexpected] input values or conditions. See: equivalence class partitioning.
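
A brief sketch (the parsing function is invented): invalid inputs are grouped into equivalence classes and one representative of each class is tested, expecting rejection:

    def parse_quantity(text):
        if not text.isdigit():
            raise ValueError("quantity must be a non-negative whole number")
        return int(text)

    # Representatives of invalid classes: empty, non-numeric, negative.
    for bad_input in ["", "abc", "-1"]:
        try:
            parse_quantity(bad_input)
            raise AssertionError("expected rejection of %r" % bad_input)
        except ValueError:
            pass  # the invalid input was correctly rejected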

testing, mutation.

(IEEE) A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations.
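
A hand-rolled sketch of the idea (real mutation tools generate the mutants automatically): the original program and a mutant are run against the same test cases, and a strong test set should distinguish, or "kill", the mutant:

    def maximum(a, b):
        # Original program.
        return a if a >= b else b

    def maximum_mutant(a, b):
        # Mutation: the comparison >= has been replaced by <=.
        return a if a <= b else b

    tests = [(2, 5, 5), (5, 2, 5), (3, 3, 3)]  # (a, b, expected)

    assert all(maximum(a, b) == expected for a, b, expected in tests)
    # At least one test case produces a different result on the mutant,
    # so this test set is able to detect ("kill") the mutation.
    assert any(maximum_mutant(a, b) != expected for a, b, expected in tests)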

testing, operational.

(IEEE) Testing conducted to evaluate a system or component in its operational environment. Contrast with testing, development; testing, acceptance. See: testing, system.

testing, component.

See: testing, unit.

testing, design based functional.

(NBS) The application of test data derived through functional analysis extended to include design functions as well as requirement functions. See: testing, functional.

testing, development.

(IEEE) Testing conducted during the development of a system or component, usually in the development environment by the developer. Contrast with testing, acceptance; testing, operational.

testing, exhaustive.

(NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
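
For a very small program exhaustive testing is possible, as the sketch below shows; for realistic input domains the number of combinations quickly becomes unmanageable:

    from itertools import product

    def implies(p, q):
        # Tiny function with a tiny input domain: two boolean arguments.
        return (not p) or q

    # All possible combinations of the inputs: 2 x 2 = 4 cases.
    expected = {(False, False): True, (False, True): True,
                (True, False): False, (True, True): True}
    for p, q in product([False, True], repeat=2):
        assert implies(p, q) == expected[(p, q)]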

testing, parallel.

(ISO) Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.

testing, path.

(NBS) Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes. One path from each class is then tested. Syn: path coverage. Contrast with testing, branch; testing, statement; branch coverage; condition coverage; decision coverage.

testing, performance.

(IEEE) Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.

testing, qualification.

(IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: testing, acceptance; testing, system.

testing, regression.

(NIST) Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.

testing, special case.

A testing technique using input values that seem likely to cause program errors; e.g., "0", "1", NULL, empty string. See: error guessing.

testing, statement.

(NIST) Testing to satisfy the criterion that each statement in a program be executed at least once during program testing. Syn: statement coverage. Contrast with testing, branch; testing, path; branch coverage; condition coverage; decision coverage; multiple condition coverage; path coverage.

testing, storage.

This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.

testing, stress.

(IEEE) Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Syn: testing, boundary value.

testing, structural.

(1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.

testing, system.

(IEEE) The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.

testing, unit.

(1) (NIST) Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. (2) (IEEE) Testing conducted to verify the implementation of the design for one software element; e.g., a unit or module; or a collection of software elements. Syn: component testing.
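
A minimal example using Python's standard unittest module; the unit and its expected behaviour are invented for illustration:

    import unittest

    def celsius_to_fahrenheit(c):
        # The unit (module) under test.
        return c * 9.0 / 5 + 32

    class TestConversion(unittest.TestCase):
        def test_freezing_point(self):
            self.assertEqual(celsius_to_fahrenheit(0), 32)

        def test_boiling_point(self):
            self.assertEqual(celsius_to_fahrenheit(100), 212)

    if __name__ == "__main__":
        unittest.main()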

testing, usability.

Tests designed to evaluate the machine/user interface. Are the communication device(s) designed in a manner such that the information is displayed in an understandable fashion, enabling the operator to correctly interact with the system?

testing, valid case.

A testing technique using valid [normal or expected] input values or conditions. See: equivalence class partitioning.

testing, volume.

Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.

testing, worst case.

Testing which encompasses upper and lower limits, and circumstances which pose the greatest chance of finding errors. Syn: most appropriate challenge conditions. See: testing, boundary value; testing, invalid case; testing, special case; testing, stress; testing, volume.

time sharing.

(IEEE) A mode of operation that permits two or more users to execute computer programs concurrently on the same computer system by interleaving the execution of their programs. May be implemented by time slicing, priority-based interrupts, or other scheduling methods.

timing.

(IEEE) The process of estimating or measuring the amount of execution time required for a software system or component. Contrast with sizing.

timing analyzer.

(IEEE) A software tool that estimates or measures the execution time of a computer program or portion of a computer program, either by summing the execution times of the instructions along specified paths or by inserting probes at specified points in the program and measuring the execution time between probes.

timing and sizing analysis.

(IEEE) Analysis of the safety implications of safety-critical requirements that relate to execution time, clock time, and memory allocation.

top-down design.

Pertaining to design methodology that starts with the highest level of abstraction and proceeds through progressively lower levels. See: structured design.

trace.

(IEEE) (1) A record of the execution of a computer program, showing the sequence of instructions executed, the names and values of variables, or both. Types include execution trace, retrospective trace, subroutine trace, symbolic trace, variable trace. (2) To produce a record as in (1). (3) To establish a relationship between two or more products of the development process; e.g., to establish the relationship between a given requirement and the design element that implements that requirement.

traceability.

(IEEE) (1) The degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another; e.g., the degree to which the requirements and design of a given software component match. See: consistency. (2) The degree to which each element in a software development product establishes its reason for existing; e.g., the degree to which each element in a bubble chart references the requirement that it satisfies. See: traceability analysis, traceability matrix.

traceability analysis.

(IEEE) The tracing of (1) Software Requirements Specifications requirements to system requirements in concept documentation, (2) software design descriptions to software requirements specifications and software requirements specifications to software design descriptions, (3) source code to corresponding design specifications and design specifications to source code. Analyze identified relationships for correctness, consistency, completeness, and accuracy. See: traceability, traceability matrix.

traceability matrix.

(IEEE) A matrix that records the relationship between two or more products; e.g., a matrix that records the relationship between the requirements and the design of a given software component. See: traceability, traceability analysis.

transaction.

(ANSI) (1) A command, message, or input record that explicitly or implicitly calls for a processing action, such as updating a file. (2) An exchange between an end user and an interactive system. (3) In a database management system, a unit of processing activity that accomplishes a specific purpose such as a retrieval, an update, a modification, or a deletion of one or more data elements of a storage structure.

transaction analysis.

A structured software design technique, deriving the structure of a system from analyzing the transactions that the system is required to process.

transaction flowgraph.

(Beizer) A model of the structure of the system's [program's] behavior, i.e., functionality.

transaction matrix.

(IEEE) A matrix that identifies possible requests for database access and relates each request to information categories or elements in the database.

transform analysis.

A structured software design technique in which system structure is derived from analyzing the flow of data through the system and the transformations that must be performed on the data.

trojan horse.

A method of attacking a computer system, typically by providing a useful program which contains code intended to compromise a computer system by secretly providing for unauthorized access, the unauthorized collection of privileged system or user data, the unauthorized reading or altering of files, the performance of unintended and unexpected functions, or the malicious destruction of software and hardware. See: bomb, virus, worm.

truth table.

(1) (ISO) An operation table for a logic operation. (2) A table that describes a logic function by listing all possible combinations of input values, and indicating, for each combination, the output value.
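
For example, the truth table for the two-input AND operation can be produced mechanically by listing every combination of inputs with its output:

    from itertools import product

    # Truth table for logical AND.
    print("p      q      p AND q")
    for p, q in product([False, True], repeat=2):
        print("%-6s %-6s %s" % (p, q, p and q))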

tuning.

(NIST) Determining what parts of a program are being executed the most. A tool that instruments a program to obtain execution frequencies of statements is a tool with this feature.

U

unambiguous.

(1) Not having two or more possible meanings. (2) Not susceptible to different interpretations. (3) Not obscure, not vague. (4) Clear, definite, certain.

unit.

(IEEE) (1) A separately testable element specified in the design of a computer software element. (2) A logically separable part of a computer program. Syn: component, module.

usability.

(IEEE) The ease with which a user can operate, prepare inputs for, and interpret outputs of a system or component.

user.

(ANSI) Any person, organization, or functional unit that uses the services of an information processing system. See: end user.

user’s guide.

(ISO) Documentation that describes how to use a functional unit, and that may include description of the rights and responsibilities of the user, the owner, and the supplier of the unit. Syn: user manual, operator manual.

V

V&V.

verification and validation.

VV&T.

validation, verification, and testing.

valid.

(1) Sound. (2) Well grounded on principles of evidence. (3) Able to withstand criticism or objection.

validate.

To prove to be valid.

validation.

(1) (FDA) Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes. Contrast with data validation.

validation, process.

(FDA) Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality characteristics.

validation, prospective.

(FDA) Validation conducted prior to the distribution of either a new product, or product made under a revised manufacturing process, where the revisions may affect the product's characteristics.

validation protocol.

(FDA) A written plan stating how validation will be conducted, including test parameters, product characteristics, production equipment, and decision points on what constitutes acceptable test results. See: test plan.

validation, retrospective.

(FDA) (1) Validation of a process for a product already in distribution based upon accumulated production, testing, and control data. (2) Retrospective validation can also be useful to augment initial pre-market prospective validation for new products or changed processes. Test data is useful only if the methods and results are adequately specific. Whenever test data are used to demonstrate conformance to specifications, it is important that the test methodology be qualified to assure that the test results are objective and accurate.

validation, software.

(NBS) Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements. Validation is usually accomplished by verifying each stage of the software development life cycle. See: verification, software.

validation, verification, and testing.

(NIST) Used as an entity to define a procedure of review, analysis, and testing throughout the software life cycle to discover errors, determine functionality, and ensure the production of quality software.

valid input.

(NBS) Test data that lie within the domain of the function represented by the program.

variable.

A name, label, quantity, or data item whose value may be changed many times during processing. Contrast with constant.

variable trace.

(IEEE) A record of the name and values of variables accessed or changed during the execution of a computer program. Syn: data-flow trace, data trace, value trace. See: execution trace, retrospective trace, subroutine trace, symbolic trace.

vendor.

A person or an organization that provides software and/or hardware and/or firmware and/or documentation to the user for a fee or in exchange for services. Such a firm could be a medical device manufacturer.

verifiable.

Can be proved or confirmed by examination or investigation. See: measurable.

verification, software.

(NBS) In general, the demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle. See: validation, software.

verify.

(ANSI) (1) To determine whether a transcription of data or other operation has been accomplished accurately. (2) To check the results of data entry; e.g., keypunching. (3) (Webster) To prove to be true by demonstration.

version.

An initial release or a complete re-release of a software item or software element. See: release.

version number.

A unique identifier used to identify software items and the related software documentation which are subject to configuration control.

virus.

The execution of a virus program compromises a computer system by performing unwanted or unintended functions which may be destructive. See: bomb, trojan horse, worm.

volume.

(ANSI) A portion of data, together with its data carrier, that can be handled conveniently as a unit; e.g., a reel of magnetic tape, a disk pack, a floppy disk.

W

WAN.

wide area network.

walkthrough.

See: code walkthrough.

waterfall model.

(IEEE) A model of the software development process in which the constituent activities, typically a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, and operation and maintenance, are performed in that order, possibly with overlap but with little or no iteration. Contrast with incremental development; rapid prototyping; spiral model.

whitebox testing. See: testing, structural.

workaround.

A sequence of actions the user should take to avoid a problem or system limitation until the computer program is changed. Workarounds may include manual procedures used in conjunction with the computer system.

worm.

An independent program which can travel from computer to computer across network connections, replicating itself in each computer. Worms do not change other programs, but compromise a computer system through their impact on system performance. See: bomb, trojan horse, virus.