Friday, February 20, 2009

Requirements Testing





Testing software is an integral part of building a system. However, if the software is based on inaccurate requirements, then despite well-written code, the software will be unsatisfactory. Most of the defects in a system can be traced back to wrong, missing, vague or incomplete requirements.

Requirements seem to be ephemeral. They flit in and out of projects; they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria.

The Quality Gateway

As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.

To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.

Make The Requirement Measurable

In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement.

"The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."

In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.

The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.

Quantifiable Requirements

Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.

Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?
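To make the quality measure operational we can automate it. The sketch below is illustrative only: the submit_enquiry function is a placeholder for whatever interface the real system exposes, and the three-minute limit is the quality measure agreed above.

```python
import time

MAX_RESPONSE_SECONDS = 3 * 60  # the agreed quality measure: three minutes


def submit_enquiry(text):
    """Placeholder for the real system call; replace with the actual interface."""
    time.sleep(0.1)
    return "response"


def test_enquiry_response_time():
    """Any solution that makes the customer wait longer than the quality
    measure fails this test and therefore does not fit the requirement."""
    start = time.monotonic()
    submit_enquiry("Where is my order?")
    elapsed = time.monotonic() - start
    assert elapsed <= MAX_RESPONSE_SECONDS, (
        "Enquiry took %.1f seconds, exceeding the quality measure" % elapsed
    )


test_enquiry_response_time()
```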

Non-quantifiable Requirements

Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.

Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.

An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.

Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.

Requirements Test 1

Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?

By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of the solution. Now let's look at other aspects of the requirement that we can test before deciding to include it in the requirements specification.

Requirements Test 2

Does the specification contain a definition of the meaning of every essential subject matter term within the specification?

When the allowable values for each attribute are defined, they provide data that can be used to test the implementation.
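As a rough sketch of how such definitions feed into testing, the allowable values below are invented for illustration; in practice they would come from the term definitions in the specification.

```python
# Allowable values taken (hypothetically) from the definitions in the
# requirements specification.
ALLOWED_ORDER_STATUS = {"pending", "shipped", "delivered", "cancelled"}
MIN_QUANTITY, MAX_QUANTITY = 1, 999


def check_order(order):
    """Return a list of violations of the defined allowable values."""
    problems = []
    if order["status"] not in ALLOWED_ORDER_STATUS:
        problems.append("status %r is not an allowed value" % order["status"])
    if not MIN_QUANTITY <= order["quantity"] <= MAX_QUANTITY:
        problems.append("quantity %d is outside the allowed range" % order["quantity"])
    return problems


# This order violates both definitions.
print(check_order({"status": "lost", "quantity": 0}))
```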

Requirements Test 3

Is every reference to a defined term consistent with its definition?

Requirements Test 4

Is the context of the requirements wide enough to cover everything we need to understand?

Requirements Test 5

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?

Requirements Test 5 (enlarged)

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?

Requirements Test 6

Is every requirement in the specification relevant to this system?

Requirements Test 7

Does the specification contain solutions posturing as requirements?

Requirements Test 8

Is the stakeholder value defined for each requirement?

Requirements Test 9

Is each requirement uniquely identifiable?

Requirements Test 10

Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
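One way to keep such traceability is a simple mapping from requirement identifiers to the parts of the system that implement them. The identifiers and component names below are purely illustrative.

```python
# Hypothetical traceability table: requirement id -> parts of the system.
TRACEABILITY = {
    "REQ-001": ["order_entry", "order_validation"],
    "REQ-002": ["order_entry", "reporting"],
    "REQ-003": ["reporting"],
}


def impacted_parts(requirement_id):
    """Which parts of the system are affected if this requirement changes?"""
    return TRACEABILITY.get(requirement_id, [])


def requirements_for(part):
    """Reverse lookup: which requirements does a part of the system implement?"""
    return [req for req, parts in TRACEABILITY.items() if part in parts]


print(impacted_parts("REQ-002"))      # ['order_entry', 'reporting']
print(requirements_for("reporting"))  # ['REQ-002', 'REQ-003']
```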

Conclusions

The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.

The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements, we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.

Testing starts at the beginning of the project, not at the end of the coding. We apply tests to assure the quality of the requirements. Then the later stages of the project can concentrate on testing for good design and good code. The advantages of this approach are that we minimise expensive rework by minimising requirements-related defects that could have been discovered, or prevented, early in the project's life.


I18N Testing





Products developed in one location are used all over the world, with different languages and regional standards. This gives rise to the need to test the product against different languages and different regional standards. Multilingual and localization testing can increase your product's usability and acceptability worldwide.

Internationalization testing is the process that ensures the product's functionality is not broken and all the messages are properly externalized when the product is used in different languages and locales. Internationalization testing is also called I18N testing, because there are 18 characters between the I and the N in Internationalization.
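As a minimal sketch of what "properly externalized" means, user-visible strings should be looked up from a message catalogue rather than hard-coded. The domain name and locale directory below are assumptions for illustration; Python's standard gettext module is used here simply as an example.

```python
import gettext

# Load a (hypothetical) German catalogue for the "myapp" domain; fall back to
# the source strings if no catalogue is installed.
translation = gettext.translation("myapp", localedir="locale",
                                  languages=["de"], fallback=True)
_ = translation.gettext


def greeting(name):
    # The English string is only a lookup key; the locale's catalogue, if
    # present, supplies the text the user actually sees.
    return _("Welcome, %s") % name


print(greeting("Anna"))
```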

Internationalization, Globalization and Localization are normally used together. Though these words share the same objective, to make sure that the product is ready for the global market, they serve different purposes and have different meanings.

I hope this article has given you better clarity about I18N testing.


Functional Testing: The Only Answer to Quality





Functional Testing refers to the type of testing that ensures all functional requirements are met, without any consideration of the final program structure. Functional Testing confirms that the application under development is capable of delivering as per user requirements. Functional Testing emulates the actions performed by the user and ensures that all execution paths operate as desired, and that we get the desired results for the inputs supplied to the system.

1) Perform Unit Testing & ensure proper execution of each & every line of the code:

Software developers tend to design code in isolation. In the absence of pair programming, full code reviews & highly experienced developers, there is bound to be the possibility of defects creeping into the new code. Such defects, if not detected during the early stages of the SDLC, are quite difficult as well as expensive to detect and fix later as the project moves on.

Hence a strong unit testing process is the backbone of the testing process, upon which the entire reliability of the product depends. Unit testing refers to the process of testing each & every unit of the code, going down to the single component level. The developer does the unit testing during the development of the component. It is the responsibility of the developer to ensure that each & every part of the code is logically correct. Unit tests usually provide the following types of coverage:

Function coverage: Ensures that every function or method is getting executed by at least one test case.

Statement coverage: Ensures that every line of the code gets covered by a minimum of one test case.

Path coverage: Ensures that every possible path through the code gets covered by a minimum of one test case. There can be more test cases if need be.

Unit testing helps the developers to regularly ensure that every unit of code performs as expected. Unit tests are modified on a continuous basis as the evolution of the software continues. This helps in maintaining up-to-date documentation as well.
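A minimal sketch of such a unit test is shown below. The shipping_cost function is an invented example; the three test cases together exercise every statement and every path through it.

```python
import unittest


def shipping_cost(weight_kg):
    """Toy function under test (illustrative only)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1.0:
        return 5.0
    return 5.0 + (weight_kg - 1.0) * 2.0


class ShippingCostTest(unittest.TestCase):
    # Each test drives a different path, so together they give function,
    # statement and path coverage of shipping_cost.
    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

    def test_flat_rate_up_to_one_kilogram(self):
        self.assertEqual(shipping_cost(0.5), 5.0)

    def test_surcharge_above_one_kilogram(self):
        self.assertEqual(shipping_cost(3.0), 9.0)


if __name__ == "__main__":
    unittest.main()
```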

2) Perform Functional Testing & ensure expected results from every function:

As a part of the testing strategy, all expected outcomes must be confirmed by functional testing.
Every function point must yield the expected outcome, in line with the functional specification described in the specification document.

Functional testing takes care of all concerns revolving around the proper implementation of functional requirements. Commonly known as black box testing, it requires no prior knowledge of the underlying code.

Functional test cases are created from requirement use cases, wherein every scenario becomes a functional test. As the individual software components are implemented and pass unit testing, the corresponding functional tests are run against them.

For some software projects it is not feasible to test every functional aspect. Instead, appropriate functional testing goals are defined, and critical and commonly used functions are prioritized according to the limitations of resources & time.
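As a rough sketch, the test below is derived from a hypothetical "place an order" use-case scenario. The OrderSystem class is only a stand-in so the example runs; in a real project the test would drive the application through its public interface, and the expected outcomes would come from the functional specification.

```python
import unittest
from dataclasses import dataclass


# Stand-in for the system under test, so the scenario below is runnable.
@dataclass
class Confirmation:
    order_number: str
    status: str


class OrderSystem:
    def __init__(self):
        self.basket = []

    def log_in(self, user, password):
        self.user = user

    def add_to_basket(self, item, quantity):
        self.basket.append((item, quantity))

    def checkout(self, payment):
        return Confirmation(order_number="ORD-0001", status="confirmed")


class PlaceOrderScenarioTest(unittest.TestCase):
    """Black-box test for the 'place an order' scenario: no knowledge of the
    underlying code, only inputs and expected outcomes."""

    def test_customer_can_place_an_order(self):
        system = OrderSystem()
        system.log_in("customer@example.com", "secret")
        system.add_to_basket(item="SKU-123", quantity=2)
        confirmation = system.checkout(payment="visa")
        self.assertTrue(confirmation.order_number)
        self.assertEqual(confirmation.status, "confirmed")


if __name__ == "__main__":
    unittest.main()
```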


Thursday, February 19, 2009

“Reporting bugs - a how-to guide”






When working with a developer or team of developers on an application – whether you are a designer working with developers or an end client hiring developers – you all want the same end result, a slick and bug free application. During the testing process of any application it is likely that some bugs or issues will show up and this article aims to explain how to report bugs and problems effectively so that your developers don’t need to spend time working out what the problem is before being able to fix it. This helps to ensure that projects stay on budget and that developers are spending their time adding features to the application rather than trying to get enough details to be able to reproduce and fix issues.

“It’s just not working!”

When you find a problem, it is very tempting to just fire off an email and presume that the developer will immediately be able to see the problem too. However, by taking a few minutes to describe the problem you have encountered accurately you can prevent any confusion occurring as to what the problem is and save both your time and the developer’s as she won’t need to get back to you to find out what actually happened, or spend a long time trying to reproduce the issue.

A good report

A good bug report tells your developer three vital things:

* What you expected to happen
* What actually happened
* What you did/were doing when it happened

What you expected to happen

There are two kinds of ‘bugs’, the first is where something breaks – you see an error message, your uploaded data disappears, you submit a form and the change isn’t saved. These bugs are generally pretty easy to report and identify as all your developer needs is to know exactly what you were doing or inputting at the time and they should be able to reproduce and fix the issue.

The second kind of bug is where the application doesn’t function as you expected. This might be because the developer has misinterpreted part of the specification or it could be that what you expect just isn’t how something can work. In this case the developer believes that it is working fine – and in fact it is ‘working’ even if it is incorrect. If your bug report is that the feature is broken, the developer may then spend time looking for some error in this part of the application when what they need to realize is that it isn’t working as you expected. By giving the information about what you expected to happen the developer can think ‘ah … you wanted it to do x and it is doing y’ and a resolution can be sorted out quickly.

What actually happened

What actually happened is very rarely ‘nothing’ yet bug reports often contain the phrase, ‘nothing happened’. If what happened was ‘nothing’ in terms of the intended result then explain that in a few more words, for example, if you clicked the submit button on a form and it didn’t submit and go onto the next page you could say,

“The form didn’t submit – it just remained on the same page.”

Or perhaps the form submitted and a blank page displayed,

“After submitting the form a blank page loaded.”

If an error message displays on the screen then include that in the report. Just copy and paste the error message.

If you use Internet Explorer then your browser may not display the error message generated by the server, instead showing a generic error page. You can ensure that IE displays the real error message by going to Tools > Internet Options > Advanced, then scrolling down to the browsing section and unchecking ‘Show Friendly HTTP error messages’.

What you were doing when it happened

Your developer wants to know this information – not because they want to tell you that you were doing something wrong, but because it is highly likely that the bug occurs only when a certain path of actions is followed, or when a certain type of data is entered. The more information you can give your developer the easier it will be for them to reproduce the problem you saw and fix it. Things you should include:

The steps taken

List exactly what you did, in the order you did it if possible. If you can go back and try the same steps again and the problem happens again, that is great – note down exactly how you made the problem occur. Your developer will be pleased, as you have just saved her time trying to reproduce the issue. Even if you can’t reproduce it, no-one is going to doubt that the problem happened – just describe as much as you can remember about how you got to the broken point.

Any data you were entering

If the problem happened after you added some data to a form, include the data with the bug report. If you were uploading something such as an image into the application then include that too.

It may also be helpful to copy and paste the URL out of the address bar of the browser so the developer knows exactly which page you were on at the time.

The browser and operating system you were using at the time

With web applications problems may only be occurring in one browser. Let your developer know exactly what you are using – including the version number - so they can create the same environment to test the problem.

Effective bug reporting can make a huge difference in how quickly problems can be resolved, and prevent frustration on both sides of the process. Including the above information, even if it doesn’t seem relevant, will be appreciated by the developer. You don’t need to write an essay, just a few clear lines explaining the key information of:

* what you expected to happen
* what actually happened and,
* what you did/were doing when it happened.

This will be enough to isolate all but the most complicated of issues, and once an issue can be reproduced it is well on its way to being fixed.
 
