
Wednesday, October 14, 2009

Iterative Model

An iterative lifecycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the following four phases in sequence:

A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.

A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.

An Implementation and Test phase, when the software is coded, integrated and tested.

A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required, and a fresh start has to be made.
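To make the cycle concrete, its control flow can be sketched as a loop. The following Python fragment is purely illustrative: the phase functions are toy stand-ins, and the fixed requirement set simulates a project whose requirements converge after a few cycles.

from dataclasses import dataclass

# Toy sketch of the iterative lifecycle's control flow.
# Each cycle "discovers" one more requirement until the set is complete.

@dataclass
class Review:
    requirements_complete: bool
    dead_end: bool = False   # True if the software cannot be enhanced as required

ALL_REQUIREMENTS = {"login", "search", "checkout"}   # made-up example requirements

def gather_requirements(known):                      # Requirements phase
    missing = ALL_REQUIREMENTS - known
    return known | ({missing.pop()} if missing else set())

def build(requirements):                             # Design + Implementation and Test
    return "software covering " + ", ".join(sorted(requirements))

def review(requirements):                            # Review phase
    return Review(requirements_complete=(requirements == ALL_REQUIREMENTS))

requirements, version = set(), None
while True:
    requirements = gather_requirements(requirements)
    version = build(requirements)
    result = review(requirements)
    if result.requirements_complete:
        break                                        # requirements met: deliver
    if result.dead_end:
        requirements, version = set(), None          # discard and make a fresh start
    # otherwise keep this version as the starting point for the next cycle

print("Delivered:", version)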

The iterative lifecycle model can be likened to producing software by successive approximation. Drawing an analogy with mathematical methods that use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution.

The key to successful use of an iterative software development lifecycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model. The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each cycle of the model produces software that requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.





Tuesday, October 6, 2009

Test Cases

Designing Test Cases


A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan.


A test case includes:

* The purpose of the test.

* Special hardware requirements, such as a modem.

* Special software requirements, such as a tool.

* Specific setup or configuration requirements.

* A description of how to perform the test.

* The expected results or success criteria for the test.



Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review.

Organizations take a variety of approaches to documenting test cases; these range from developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.

Most organizations prefer detailed test cases because determining pass or fail criteria is usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to automate than descriptive test cases. This is particularly important if you plan to compare the results of tests over time, such as when you are optimizing configurations. The trade-off is that detailed test cases are more time-consuming to develop and maintain; on the other hand, test cases that are open to interpretation are not repeatable and can require debugging, consuming time that would be better spent on testing.

Test Case Design

Test Case ID:

A unique number given to the test case so that it can be identified.

Test description:

A description of what the test case is going to test.

Revision history:

Each test case has to have a revision history in order to know when and by whom it was created or modified.

Function to be tested:

The name of the function to be tested.

Environment:

The environment in which the test is run.

Test Setup:

Anything that needs to be set up outside the application, for example printers, the network, and so on.

Test Execution:

A detailed description of every step of execution.

Expected Results:

The description of what you expect the function to do.

Actual Results:

Pass / Fail.

If the test passed - what actually happened when you ran the test.

If it failed - a description of what you observed.
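To tie these fields together, here is one way the template could be captured as a structure in Python. This is only a sketch: the field names simply mirror the headings above, and the example values are made up.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    test_case_id: str                     # unique identifying number
    description: str                      # what the test case tests
    revision_history: List[str]           # when and by whom created or modified
    function_to_be_tested: str
    environment: str                      # where the test is run
    test_setup: str                       # printers, network, and other externals
    execution_steps: List[str]            # detailed description of every step
    expected_results: str
    actual_results: Optional[str] = None  # filled in after the run
    passed: Optional[bool] = None

tc = TestCase(
    test_case_id="TC-042",
    description="Verify login with a valid username and password",
    revision_history=["2009-10-06: created"],
    function_to_be_tested="login",
    environment="Test server, Windows XP / IE6",
    test_setup="Test database seeded with user 'demo'",
    execution_steps=[
        "Open the login page",
        "Enter username 'demo' and password 'demo123'",
        "Click the Login button",
    ],
    expected_results="User is taken to the home page",
)
print(tc.test_case_id, "-", tc.description)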


Friday, October 2, 2009

Black Box Testing

Testing not based on any knowledge of the internal design or code. Tests are based on requirements and functionality.
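For example, a black box test exercises only the documented behaviour. In the minimal Python sketch below, discount() is a hypothetical function whose requirement is "orders of 100 or more get a 10% discount"; the tests know nothing about how it is implemented.

# Stand-in for the code under test; in real black box testing the
# tester never looks at this body, only at the requirement.
def discount(amount):
    return amount * 0.9 if amount >= 100 else amount

def test_discount_applied_at_threshold():
    assert discount(100) == 90.0        # requirement: 10% off at 100 or more

def test_no_discount_below_threshold():
    assert discount(99.99) == 99.99     # requirement: no discount below 100

test_discount_applied_at_threshold()
test_no_discount_below_threshold()
print("black box tests passed")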


Wednesday, September 30, 2009

Who should test? and what? – An Overview

Who should do the testing?

Software testing is not the job of one person; it is teamwork. The size of the team depends upon the complexity and size of the software being tested. The software developer should have a minimal role, or no role, in the testing process: as everyone knows, it is very difficult for the person who developed the software to pinpoint errors in his or her own creation.


Seven soft skills are crucial for a good tester. He or she must:


1) Be cautious
2) Be curious
3) Have patience
4) Have perseverance
5) Be analytical
6) Be critical, but without jumping to conclusions
7) Be a good communicator

Thus, “the more soft skills a tester has, the better tester he or she is.”

Above all, the most successful tester is the one:


1) Who is completely passionate about testing.

2) Who is always eager to learn more.

3) Who never gives up.

Various personnel and their roles during development & testing are:

A) Customer: Provides funding, provides requirements, approves changes and some of the test results.

B) Project Manager: Plans and manages the project.

C) Software Developer: Designs, codes, and builds the software.

D) Testing coordinator: Creates test plans and test specifications based on the requirements and functional and technical documents.

E) Testing person: Executes the tests and documents the results.


Role based demarcation exclusive to the field of Testing:

1) Junior Software Testers: Have good theoretical knowledge of testing and have typically participated in a good number of seminars on testing, or have passed some courses in testing. Such individuals are not expected to have testing experience; however, a little experience is desirable.

2) Software Testers: Have a good ability to understand testing packs, perform testing, and do documentation / defect logging. Such individuals usually do repetitive work and are invariably involved in front-end testing. Usually at least 6 months of testing experience is preferred for this role.

3) Senior Software Testers: Are responsible for all tasks usually performed by a tester. In addition, they are involved in back-end testing, and they also update test cases. Usually 1 to 2 years of testing experience is preferred for this role.

4) Testing Analysts: Have expertise in extracting requirements from documentation, verifying them thoroughly with the business, and ascertaining the correctness of all the information. Testing analysts also write test cases, execute tests, and report the findings. Usually 2 to 3 years of testing experience, along with at least 1 year of analysis experience, is preferred for this role.

5) Testing Managers: Have experience of all the tasks described above. In addition, they possess the ability to manage the entire testing process, the personnel, and the testing environment. Usually 3 to 4 years of testing experience, along with at least 1 year of managerial experience and sound expertise in project management, is ideal for this role.

6) Testing Consultants: Are senior people with experience of all the tasks described above. Consultants are usually good communicators with a proven ability in man management, which helps them effectively handle the client and the senior executives of the organization. Their work includes systematic analysis of the client's current testing process and guiding the client with expert comments and recommendations for improvement. Consultants have great expertise in manual testing in addition to sound knowledge of specialized fields like automation, usability, and security.

Generally, persons with 5 or more years of testing and project management experience in some senior capacity are appointed as testing consultants.

7) Test Automators: Have good development skills plus sound experience of manual testing, and have acquired a specialization in automated testing; as of now, though, they would prefer not to get involved in manual testing. Automators have thorough knowledge and experience of different automation tools. Usually 1 to 2 years of test automation experience is preferred for this role.

The next question is: what should be tested?

According to Myers (the great testing guru), for most programs it is impractical to attempt to test the program with all possible inputs, due to a combinatorial explosion. For those inputs selected, a testing oracle is needed to determine the correctness of the output for a particular test input.

Myers also says that, for most programs, it is impractical to attempt to test all execution paths through the product, due to combinatorial explosion. It is also not possible to develop an algorithm for generating test data for paths in an arbitrary product, due to the inability to determine path feasibility.

The point being emphasized here is that complete or exhaustive testing is simply not possible, because exhaustive testing would require every statement in the program, and every possible path combination, to be executed at least once.


Thursday, September 24, 2009

Free Download SQL Books

1) Teach Yourself SQL in 21 Days



Waterfall Model

Various software development approaches have been defined and designed for use during the software development process; these approaches are also referred to as "Software Development Process Models".

Waterfall Model

The waterfall approach was the first process model to be introduced and widely followed in software engineering to ensure the success of a project. In the waterfall approach, the whole process of software development is divided into separate phases.

The phases in the waterfall model are: Requirement Specification, Software Design, Implementation and Testing, and Maintenance. The phases cascade into each other, so that the second phase starts only when the defined set of goals for the first phase is achieved and signed off, hence the name "Waterfall Model". This makes all the methods and processes undertaken in the waterfall model highly visible.


The stages of "The Waterfall Model" are:

Requirement Analysis & Definition: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system. The requirements are gathered from the end-user by consultation; they are then analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design: Before actual coding starts, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements, and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding starts. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet their specifications.
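As a hedged illustration, unit testing one such unit might look like the following sketch, using Python's standard unittest module; the add_item function is a made-up stand-in for a real module.

import unittest

def add_item(cart, item):
    """Specification: append item to cart and return the cart."""
    cart.append(item)
    return cart

class AddItemTest(unittest.TestCase):
    # Verify the unit against its specification.
    def test_item_is_appended(self):
        self.assertEqual(add_item([], "book"), ["book"])

    def test_existing_items_are_kept(self):
        self.assertEqual(add_item(["pen"], "book"), ["pen", "book"])

if __name__ == "__main__":
    unittest.main()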

Integration & System Testing: As specified above, the system is first divided into units which are developed and tested for their functionality. These units are integrated into a complete system during the integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After the software is successfully tested, it is delivered to the customer.

Operations & Maintenance: This phase of the waterfall model is a virtually never-ending phase (very long). Generally, problems with the developed system (which were not found during the development life cycle) come up after its practical use starts, so issues related to the system are solved after deployment. Not all problems come to light immediately; they arise from time to time and need to be solved; hence this process is referred to as Maintenance.

Advantages and Disadvantages

Advantages

The advantage of waterfall development is that it allows for departmentalization and managerial control. A schedule can be set with deadlines for each stage of development and a product can proceed through the development process like a car in a carwash, and theoretically, be delivered on time. Development moves from concept, through design, implementation, testing, installation, troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order, without any overlapping or iterative steps.

Disadvantages

The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well-thought out in the concept stage. Alternatives to the waterfall model include joint application development (JAD), rapid application development (RAD), synch and stabilize, build and fix, and the spiral model.

Common Errors in Requirements Analysis

In the traditional waterfall model of software development, the first phase of requirements analysis is also the most important one. This is the phase which involves gathering information about the customer's needs and defining, in the clearest possible terms, the problem that the product is expected to solve.

This analysis includes understanding the customer's business context and constraints, the functions the product must perform, the performance levels it must adhere to, and the external systems it must be compatible with. Techniques used to obtain this understanding include customer interviews, use cases, and "shopping lists" of software features. The results of the analysis are typically captured in a formal requirements specification, which serves as input to the next step.

Well, at least that's the way it's supposed to work theoretically. In reality, there are a number of problems with this theoretical model, and these can cause delays and knock-on errors in the rest of the process. This article discusses some of the more common problems that project managers experience during this phase, and suggests possible solutions.

Problem 1: Customers don't (really) know what they want

Possibly the most common problem in the requirements analysis phase is that customers have only a vague idea of what they need, and it's up to you to ask the right questions and perform the analysis necessary to turn this amorphous vision into a formally-documented software requirements specification that can, in turn, be used as the basis for both a project plan and an engineering architecture.

To solve this problem, you should:

* Ensure that you spend sufficient time at the start of the project on understanding the objectives, deliverables and scope of the project.
* Make visible any assumptions that the customer is using, and critically evaluate both the likely end-user benefits and risks of the project.
* Attempt to write a concrete vision statement for the project, which encompasses both the specific functions or user benefits it provides and the overall business problem it is expected to solve.
* Get your customer to read, think about and sign off on the completed software requirements specification, to align expectations and ensure that both parties have a clear understanding of the deliverable.

Problem 2: Requirements change during the course of the project

The second most common problem with software projects is that the requirements defined in the first phase change as the project progresses. This may occur because, as development progresses and prototypes are developed, customers are able to see more clearly the problems with the original plan and make necessary course corrections; it may also occur because changes in the external environment require reshaping of the original business problem and hence necessitate a different solution than the one originally proposed.

Good project managers are aware of these possibilities and typically already have backup plans in place to deal with these changes.

To solve this problem, you should:

* Have a clearly defined process for receiving, analyzing and incorporating change requests, and make your customer aware of his/her entry point into this process.
* Set milestones for each development phase beyond which certain changes are not permissible -- for example, disallowing major changes once a module reaches 75 percent completion.
* Ensure that change requests (and approvals) are clearly communicated to all stakeholders, together with their rationale, and that the master project plan is updated accordingly.


Problem 3: Customers have unreasonable timelines

It's quite common to hear a customer say something like "it's an emergency job and we need this project completed in X weeks". A common mistake is to agree to such timelines before actually performing a detailed analysis and understanding both of the scope of the project and the resources necessary to execute it. In accepting an unreasonable timeline without discussion, you are, in fact, doing your customer a disservice: it's quite likely that the project will either get delayed (because it wasn't possible to execute it in time) or suffer from quality defects (because it was rushed through without proper inspection).

To solve this problem, you should:

* Convert the software requirements specification into a project plan, detailing tasks and resources needed at each stage and modeling best-case, middle-case and worst-case scenarios.

* Ensure that the project plan takes account of available resource constraints and keeps sufficient time for testing and quality inspection.

* Enter into a conversation about deadlines with your customer, using the figures in your draft plan as supporting evidence for your statements. Assuming that your plan is reasonable, it's quite likely that the ensuing negotiation will be productive and result in a favorable outcome for both parties.


Problem 4: Communication gaps exist between customers, engineers and project managers

Often, customers and engineers fail to communicate clearly with each other because they come from different worlds and do not understand technical terms in the same way. This can lead to confusion and severe miscommunication, and an important task of a project manager, especially during the requirements analysis phase, is to ensure that both parties have a precise understanding of the deliverable and the tasks needed to achieve it.

To solve this problem, you should:

* Take notes at every meeting and disseminate these throughout the project team.

* Be consistent in your use of words. Make yourself a glossary of the terms that you're going to use right at the start, ensure all stakeholders have a copy, and stick to it consistently.


Problem 5: The development team doesn't understand the politics of the customer's organization

The scholars Bolman and Deal suggest that an effective manager is one who views the organization as a "contested arena" and understands the importance of power, conflict, negotiation and coalitions. Such a manager is not only skilled at operational and functional tasks, but he or she also understands the importance of framing agendas for common purposes, building coalitions that are united in their perspective, and persuading resistant managers of the validity of a particular position.

These skills are critical when dealing with large projects in large organizations, as information is often fragmented and requirements analysis is hence stymied by problems of trust, internal conflicts of interest and information inefficiencies.

To solve this problem, you should:

* Review your existing network and identify both the information you need and who is likely to have it.
* Cultivate allies, build relationships and think systematically about your social capital in the organization.
* Persuade opponents within your customer's organization by framing issues in a way that is relevant to their own experience.
* Use initial points of access/leverage to move your agenda forward.






Kinds of Testing that should be considered for Websites.

Black box testing – Testing not based on any knowledge of the internal design or code. Tests are based on requirements and functionality.


Incremental Integration Testing – Continuous testing of a website as new functionality is added; requires that various aspects of a web application's functionality be independent enough to work separately before all parts of the program are completed.


Integration Testing – Testing that is done on combined parts of an application to determine whether they function together correctly. The ‘parts’ can be code modules, individual applications, pages of a website, etc.


Functional Testing – Black box type testing geared to the functional requirements of an application; testers should do this type of testing. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

System Testing – Black box type testing that is based on overall requirements specifications; covers all combined parts of a web application.

End-to-end Testing – Similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity Testing or Smoke Testing – Typically an initial testing effort to determine whether a new web application is performing well enough to accept it for a major testing effort. For example, if there are lots of missing links, missing images, missing validations, or corrupted databases, the website may not be in a ‘sane’ enough condition to warrant further testing in its current state.
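A smoke test can be as simple as a script that hits a handful of key pages and rejects the build if any of them fail. The sketch below assumes the third-party requests library and uses made-up URLs; it is only an illustration of the idea.

import requests  # third-party: pip install requests

PAGES = [                        # hypothetical key pages of the site under test
    "https://example.com/",
    "https://example.com/login",
    "https://example.com/search",
]

def smoke_test():
    for url in PAGES:
        resp = requests.get(url, timeout=10)
        if resp.status_code != 200:
            print("FAIL:", url, "returned", resp.status_code)
            return False         # not 'sane' enough for further testing
        print("OK:", url)
    return True

if __name__ == "__main__":
    print("Proceed with major test effort" if smoke_test() else "Reject this build")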


Regression Testing – re-testing after fixes or modifications of the website or its pages. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.


Acceptance Testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.


Load Testing – testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
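Dedicated tools are normally used for load testing, but the idea can be sketched in a few lines of Python: step up the number of concurrent requests and watch where response times degrade. The URL is a made-up example and the requests library is assumed; this is a toy, not a real load-testing harness.

import time
from concurrent.futures import ThreadPoolExecutor
import requests  # third-party: pip install requests

URL = "https://example.com/"     # hypothetical site under test

def timed_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

# Step up the load and report where response times start to degrade.
for users in (1, 5, 10, 25, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        durations = list(pool.map(timed_request, range(users)))
    print(users, "concurrent users: avg %.2fs, max %.2fs"
          % (sum(durations) / len(durations), max(durations)))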


Stress Testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance Testing – term often used interchangeably with ’stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.


Usability Testing – testing that is done for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.


Security Testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.


Compatibility Testing – testing how well the website performs in a particular hardware/software/operating system/browser/network environment.


Exploratory Testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the website as they test it.


Ad-hoc Testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the website before testing it.

User Acceptance Testing – determining whether the website is satisfactory to the end-user or customer.


Alpha Testing – testing of a web application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.


Beta Testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Wednesday, March 18, 2009

Bug Life Cycle






The steps in the defect life cycle vary from company to company, but the basic flow remains the same. Below I'm describing a basic flow for the bug life cycle:

* A tester finds a bug. Status --> Open
* The test lead reviews and authorizes the bug. Status --> Open
* The development team lead reviews the defect. Status --> Open
* The defect is either authorized or rejected by the development team. (The status of the defect/bug will be Open for authorized defects and Rejected for unauthorized defects.)
* The authorized bugs are then either fixed or deferred by the development team. The status of fixed bugs will be Fixed; the status of deferred bugs will be Deferred.
* Fixed bugs are re-tested by the testing team. Based on the closure of the bug, the status is set to Closed; if the bug still remains, it is re-raised and the status becomes Re-opened.

The above-mentioned cycle continues until all the bugs/defects in the application get fixed.
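The flow above is essentially a small state machine, and can be sketched as one. The transition table below mirrors the steps described and is illustrative only.

from enum import Enum

class Status(Enum):
    OPEN = "Open"
    REJECTED = "Rejected"
    FIXED = "Fixed"
    DEFERRED = "Deferred"
    RE_OPENED = "Re-opened"
    CLOSED = "Closed"

ALLOWED = {                      # which status changes the flow above permits
    Status.OPEN: {Status.REJECTED, Status.FIXED, Status.DEFERRED},
    Status.FIXED: {Status.CLOSED, Status.RE_OPENED},
    Status.RE_OPENED: {Status.FIXED, Status.DEFERRED},
    Status.DEFERRED: {Status.FIXED},
    Status.REJECTED: set(),
    Status.CLOSED: set(),
}

def transition(current, new):
    if new not in ALLOWED[current]:
        raise ValueError("cannot move a bug from %s to %s" % (current.value, new.value))
    return new

# Example: opened, fixed, re-opened after re-test, fixed again, closed.
status = Status.OPEN
for nxt in (Status.FIXED, Status.RE_OPENED, Status.FIXED, Status.CLOSED):
    status = transition(status, nxt)
    print("Bug status -->", status.value)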

Software Testing Bug Report Template






In continuation of my previous post, here I'm explaining a simple and effective software bug report.

If you are using any software testing management tool or bug reporting tool, such as Bugzilla, Test Director, Bughost, or any other online bug tracking tool, then the tool will automatically generate the bug report. If you are not using any tool, you may refer to the following template for your software bug report:

* Name of Reporter:
* Email Id of Reporter:
* Version or Build:
* Module or component:
* Platform / Operating System:
* Type of error:
* Priority:
* Severity:
* Status:
* Assigned to:
* Summary:
* Description:
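If you keep reports outside a tool, the same template can be captured as a simple structure, as in this sketch; the field names follow the list above and the example values are made up.

from dataclasses import dataclass

@dataclass
class BugReport:
    reporter_name: str
    reporter_email: str
    version_or_build: str
    module_or_component: str
    platform_os: str
    error_type: str
    priority: str
    severity: str
    status: str
    assigned_to: str
    summary: str
    description: str

bug = BugReport(
    reporter_name="A. Tester",
    reporter_email="tester@example.com",
    version_or_build="1.2.0 build 345",
    module_or_component="Login",
    platform_os="Windows XP",
    error_type="Functional",
    priority="High",
    severity="Major",
    status="Open",
    assigned_to="dev-team",
    summary="Login fails for valid users",
    description="Steps: open login page, enter valid credentials, click Login. "
                "Expected: home page. Actual: 'invalid user' error.",
)
print(bug.summary, "-", bug.status)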

Software Testing Bug Report






After you complete your Software Testing, it is good practice to prepare an effective bug report. Fixing a bug depends on how effectively you report it. Below are some tips to write a good software bug report:

* If you are doing manual software testing and reporting bugs without the help of any tool, assign a unique number to each bug report. This will help you identify the bug record.
* Clearly mention the steps to reproduce the bug. Do not assume or skip any reproducing step.
* Be specific and to the point.

Apart from these tips, below are some good practices:

* Report the problem immediately
* Reproduce the bug at least one more time before you report it
* Test for the same bug occurrence in other, similar modules of the application
* Read the bug report before you submit or send it
* Never criticize any developer or attack any individual

Validation vs Verification, Reviews, Inspections






Validation:
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. It is the actual testing of the application.

1. Am I building the right product?

2. Determining whether the system complies with the requirements, performs the functions for which it is intended, and meets the organization’s goals and user needs. It is traditional and is performed at the end of the project.

3. Am I accessing the right data (in terms of the data required to satisfy the requirement)?

4. A high-level activity.

5. Performed after a work product is produced, against established criteria, to ensure that the product integrates correctly into the environment.

6. Determination of the correctness of the final software product produced by a development project with respect to the user needs and requirements.


Verification:
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase.

The verification process helps in detecting defects early and preventing their leakage downstream. Thus, the higher cost of later detection and rework is eliminated. It includes:

1. Am I building the product right?

2. The review of interim work steps and interim deliverables during a project to ensure they are acceptable. To determine if the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.

3. Am I accessing the data right (in the right place; in the right way)?

4. A low-level activity.

5. Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.

6. Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.


Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. The main goal of reviews is to find defects. Reviews are a good complement to testing to help assure quality. A few purposes of SQA reviews are as follows:

Assure the quality of deliverables before the project moves to the next stage. Once a deliverable has been reviewed, revised as required, and approved, it can be used as a basis for the next stage in the life cycle.

Various types of reviews:

Management Reviews:
Management reviews are performed by those directly responsible for the system in order to monitor progress, determine the status of plans and schedules, and confirm requirements and their system allocation. The main objectives of management reviews can therefore be categorized as follows:

- Validate from a management perspective that the project is making progress according to the project plan.

- Ensure that deliverables are ready for management approvals.

- Resolve issues that require management’s attention.

- Identify any project bottlenecks.

- Keep the project in control.

Requirement Review: A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include the system requirements review and the software requirements review. Product management leads the requirement review; members from every affected department participate.

Input Criteria: The software requirement specification is the essential document for the review. A checklist can be used for the review.

Exit Criteria:
Exit criteria include the filled-in and completed checklist with the reviewers’ comments and suggestions, and re-verification of whether they have been incorporated in the documents.

Design Review: A process or meeting during which a system, hardware, or software design is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include critical design review, preliminary design review, and system design review.


Inspection: A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, testware inspections, etc. The participants in inspections assume one or more of the following roles:

- Inspection leader
- Recorder
- Reader
- Author
- Inspector

All participants in the review are inspectors. The author shall not act as inspection leader and should not act as reader or recorder. Other roles may be shared among the team members. Individual participants may act in more than one role.

Individuals holding management positions over any member of the inspection team shall not participate in the inspection.

Input to the inspection shall include the following:

- A statement of objectives for the inspection
- The software product to be inspected
- Documented inspection procedure
- Inspection reporting forms
- Current anomalies or issues list

Input to the inspection may also include the following:

- Inspection checklists: Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected.

- Hardware product specifications

- Hardware performance data

- Anomaly categories

The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.

The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate rework and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:

Accept with no or minor rework: The software product is accepted as is or with only minor rework (for example, rework that would require no further verification).

Accept with rework verification:
The software product is to be accepted after the inspection leader or a designated member of the inspection team (other than the author) verifies rework.

Re-inspect:
Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection, as well as side effects of those changes.

Software Testing Principles






Below are some basic Software Testing Principles:

- A necessary part of a test case is a definition of the expected output or result.

- A programmer should avoid attempting to test his or her own program.

- A programming organization should not test its own programs.

- Thoroughly inspect the results of each test.

- Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected (a small illustration follows this list).

- Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.

- Avoid throwaway test cases unless the program is truly a throwaway program.

- Do not plan a testing effort under the tacit assumption that no errors will be found.

- The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.

- Software Testing is an extremely creative and intellectually challenging task.
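As flagged in the list, here is a small illustration of the invalid-and-unexpected-inputs principle. parse_age is a hypothetical function, invented for this sketch, specified to accept whole numbers from 0 to 130 and reject everything else.

def parse_age(text):
    value = int(text)                   # raises ValueError for non-numbers
    if not 0 <= value <= 130:
        raise ValueError("age out of range: %d" % value)
    return value

# Valid and expected inputs:
assert parse_age("0") == 0
assert parse_age("42") == 42

# Invalid and unexpected inputs must be rejected, not silently accepted:
for bad in ("-1", "200", "forty", ""):
    try:
        parse_age(bad)
    except ValueError:
        pass                            # expected: the input is rejected
    else:
        raise AssertionError("%r should have been rejected" % bad)

print("all input-condition tests passed")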

Software Testing Process






Below is a very basic software testing process. Many companies use this process. Later on I'll post some more testing processes.

1. Understanding the business logic and analyzing the requirements: In this phase consider the following:

* Are the definitions and descriptions of the required functionalities precise?
* Is there clear delineation between the system and its environment?
* Can the requirements be realized in practice?
* Can the requirements be tested effectively?

2. Test Planning: During this phase the test strategy is defined and the test bed is created. The plan should identify:

* Which aspects of the system should be tested.
* The methods, techniques and tools to be used.
* Personnel responsible for the testing.
* Manual and Automation Testing
* Defect Management and Risk Management etc.


3. Test Environment Setup: A separate testing server is prepared where the application will be tested. It is an independent testing environment.



4. Test Design: Identify the test scenarios and prepare the test cases / scripts. Selection of test data is also done in this phase. If required, test designing is done with some automated tools like QTP or LoadRunner or with some other software testing tool. Designing framework, scripting, script integration, Review and approval will be undertaken in this phase.



5. Test Execution: Testers execute the test cases and report any errors found to the development team.



6. Defect Tracking: Raised defects are tracked using tools like Test Director or Bug Host.



7. Test Reports: As soon as testing is completed, the test lead or manager generates metrics and prepares final reports for the whole testing effort.

Friday, March 13, 2009

The Big Picture (Software testing)




All software problems can be termed as bugs. A software bug usually occurs when the software does not do what it is intended to do, or does something that it is not intended to do. Flaws in specifications, design, code or other causes can lead to these bugs. Identifying and fixing bugs in the early stages of the software is very important, as the cost of fixing bugs grows over time. So, the goal of a software tester is to find bugs, find them as early as possible, and make sure they are fixed.

Testing is context-based and risk-driven. It requires a methodical and disciplined approach to finding bugs. A good software tester needs to build credibility and possess the attitude to be explorative, troubleshooting, relentless, creative, diplomatic and persuasive.

Contrary to the perception that testing starts only after the completion of the coding phase, it actually begins even before the first line of code is written. In the life cycle of a conventional software product, testing begins at the stage when the specifications are written, i.e. with testing the product specification (or product spec). Finding bugs at this stage can save huge amounts of time and money.

Once the specifications are well understood, you are required to design and execute the test cases. Selecting an appropriate technique that reduces the number of tests needed to cover a feature is one of the most important considerations when designing these test cases. Test cases need to be designed to cover all aspects of the software, i.e. security, database, functionality (critical and general) and the user interface. Bugs surface when the test cases are executed.

As a tester you might have to perform testing under different circumstances: the application could be in its initial stages or undergoing rapid changes, you may have less than enough time to test, or the product might be developed using a life cycle model that does not support much formal testing or retesting. Further, testing across different operating systems, browsers and configurations has to be taken care of.

Reporting a bug may be the most important and sometimes the most difficult task that you as a software tester will perform. By using various tools and clearly communicating to the developer, you can ensure that the bugs you find are fixed.

Using automated tools to execute tests, run scripts and track bugs improves the efficiency and effectiveness of your tests. Also, keeping pace with the latest developments in the field will augment your career as a software test engineer.


Friday, February 20, 2009

Requirements Testing





Testing software is an integral part of building a system. However, if the software is based on inaccurate requirements, then despite well written code, the software will be unsatisfactory. Most of the defects in a system can be traced back to wrong, missing, vague or incomplete requirements.

Requirements seem to be ephemeral. They flit in and out of projects, they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria.

The Quality Gateway

As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.

To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.

Make The Requirement Measurable

In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement.

"The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."

In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.

The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.

Quantifiable Requirements

Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.
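Once a quality measure is quantified like this, it can be turned directly into an automated check. In the sketch below, answer_enquiry is a stand-in for the real system under test; only the three-minute measure comes from the requirement.

import time

QUALITY_MEASURE_SECONDS = 3 * 60        # "three minutes" from the requirement

def answer_enquiry(enquiry):
    time.sleep(0.1)                     # stand-in for the real system's work
    return "response to " + enquiry

def test_enquiry_response_time():
    start = time.perf_counter()
    answer_enquiry("order status")
    elapsed = time.perf_counter() - start
    # The solution fits the requirement only if it meets the measure.
    assert elapsed <= QUALITY_MEASURE_SECONDS, "too slow: %.1fs" % elapsed

test_enquiry_response_time()
print("quality measure met")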

Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?

Non-quantifiable Requirements

Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.

Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.

An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.

Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.

Requirements Test 1

Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?

By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of the solution. Now let's look at other aspects of the requirement that we can test before deciding to include it in the requirements specification.

Requirements Test 2

Does the specification contain a definition of the meaning of every essential subject matter term within the specification?

When the allowable values for each of the attributes are defined it provides data that can be used to test the implementation.

Requirements Test 3

Is every reference to a defined term consistent with its definition?

Requirements Test 4

Is the context of the requirements wide enough to cover everything we need to understand?

Requirements Test 5

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?

Requirements Test 5 (enlarged)

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?

Requirements Test 6

Is every requirement in the specification relevant to this system?

Requirements Test 7

Does the specification contain solutions posturing as requirements?

Requirements Test 8

Is the stakeholder value defined for each requirement?

Requirements Test 9

Is each requirement uniquely identifiable?

Requirements Test 10

Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?

Conclusions

The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.

The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements, we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.

Testing starts at the beginning of the project, not at the end of the coding. We apply tests to assure the quality of the requirements. Then the later stages of the project can concentrate on testing for good design and good code. The advantage of this approach is that we minimise expensive rework by minimising requirements-related defects that could have been discovered, or prevented, early in the project's life.


 
