Thursday, January 31, 2013

Selenium Vs QTP

1. Actual end-user simulation. Is a test conducted with this tool equivalent to an end user's actions?
Selenium performs actions in the background on the browser. It modifies the DOM structure of the HTML page in order to perform actions on it. To be more precise, it executes JavaScript on UI objects within the web page to perform actions like click, type and select. This is why you can execute tests with the browser minimized.
QTP claims to perform end-user simulation; in other words, executing QTP scripts is equivalent to a person performing those steps manually on the application.
2. Support for most UI Components
Selenium: certain events, methods and object properties are not supported, but broadly most UI components are.
QTP requires extra add-ins (paid plug-ins) to work with .NET components.
3. UI-Object management & storage
QTP comes with a built-in object repository, and object repository management in QTP is quite easy. Selenium has no such built-in feature, but objects can be managed using the UI-Element user extension; other custom solutions, such as properties files, can also be used. For all such solutions, however, the map file in Selenium has to be hand-coded, i.e. unlike QTP the objects won't be recorded/added automatically.
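A hand-coded object map of the kind mentioned above can be as simple as a properties-style file that maps friendly element names to locators. Here is a minimal sketch (the element names and locators are invented for illustration, not taken from any real project):

```python
# Sketch of a hand-coded UI object map: friendly names -> locators.
# The names and locators below are hypothetical examples.
OBJECT_MAP_TEXT = """\
login.username=id=txtUser
login.password=id=txtPass
login.submit=xpath=//input[@type='submit']
"""

def load_object_map(text):
    """Parse 'name=locator' lines into a dict, skipping blanks and comments."""
    objects = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the FIRST '=' only, so locators may themselves contain '='.
        name, _, locator = line.partition("=")
        objects[name.strip()] = locator.strip()
    return objects

object_map = load_object_map(OBJECT_MAP_TEXT)
# A test script would look up a locator by its friendly name:
print(object_map["login.submit"])  # xpath=//input[@type='submit']
```

The point of the exercise is that, unlike QTP's recorded repository, every entry in this map has to be typed in and maintained by hand.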
4. Support for Dialog Boxes
QTP supports all kinds of IE dialog boxes. Selenium has only partial support for dialog boxes; some actions, like retrieving the title of the dialog box, can't be performed in Selenium.
5. Support for web browsers
QTP supports IE and Firefox. Selenium supports IE, Firefox, Safari, Opera and a few more browsers. But both tools are far from full cross-browser support: don't expect scripts created using one browser to run flawlessly in another.
6. Object recognition parameters: recognition on the basis of WYSIWYG (what you see is what you get).
Selenium recognizes objects on the basis of the DOM structure of the HTML page. The UI objects in Selenium have vague descriptions and don't comply with the WYSIWYG principle.
QTP recognizes and names objects based on properties that are more visible and obvious, so the objects have user-friendly names.
7. Object-oriented language support and scalability (as in integration with external tools, utilities and libraries). Selenium supports Java, .NET and many other industry-standard programming languages. QTP supports VBScript only.
8. Integration with test management tool.
QTP integrates seamlessly with QC and TD. Test management and mapping the manual testing process to automation become a lot easier with this integration. I have not yet heard of any test management tool that integrates seamlessly with Selenium, but keep an eye out for Bromide.
9. Types of application supported
QTP wins this one hands down. This is one of the main reasons why Selenium can't even be considered in many cases. Imagine real-time applications like trading terminals or risk management applications built in Tcl/Tk and PowerBuilder; QTP supports most of these interfaces.
Selenium, on the other hand, can only work on applications that open inside a browser. But aren't most applications moving to browser-based platforms anyway? :)
10. Support for operating system/platforms
Selenium supports Java and hence can be used on Windows, Mac or UNIX; using Selenium you can test your web application on all of these platforms. QTP supports Windows only.
11. Ease of creation of Scripts
The Selenium IDE recorder is not as powerful as QTP's, but it is good for a free tool. Many actions are not recorded by the IDE and have to be entered manually.
12. Technical Support
QTP offers technical support by phone and mail, and HP also has a web forum. The QTP user community is vast, and questions posted on online forums get answered quickly. Selenium, being an open-source tool, has no official tech support; its user community is smaller and less active, and questions on forums seldom get answered. But the community is growing day by day as the tool gains acceptance.
13. Cost
QTP - very costly, at several thousand dollars per seat license. Many people want to switch to Selenium because it's free. But cost isn't really a factor when your client is an investment bank. :)
14. Test Development Environment
When you work with Selenium you can use a wide range of IDEs, such as Eclipse, NetBeans or Visual Studio, depending on your choice of development language. If you are a developer, you have probably developed a taste for rich IDEs, and switching to an environment dictated by a test tool may be hard for you.
QTP tests can only be developed in QTP.
15. Integration with development process
Tests developed using Selenium can easily be part of the development project. Using tools like CruiseControl, continuous integration is easier with Selenium. But don't get too caught up in this feature; being integrated with the development process is not essential, just nice to have.
16. Future in terms of usability and acceptance
The future bodes well for Selenium: because it is free and open source, it supports all programming languages and platforms and is immensely scalable and extensible. Many pundits have predicted that it will completely conquer the web testing market in the next 5 years. Being free makes a huge difference, especially when times are hard like the present.
QTP, on the other hand, is the current market leader, and I think it will keep its presence for a long time due to its user-friendliness and its support for interfaces other than the web.

Thursday, October 7, 2010

Software testing life cycle

Software

The software testing life cycle identifies what test activities to carry out and when is the best time to accomplish them. Even though testing differs between organizations, there is a common testing life cycle.

Software Testing Life Cycle consists of seven (generic) phases:

* Test Planning,
* Test Analysis,
* Test Design,
* Construction and verification,
* Testing Cycles,
* Final Testing and Implementation and
* Post Implementation.


Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement in the software testing life cycle is to control and deal with software testing – manual, automated and performance.
Test Planning

This is the phase where the Project Manager decides what needs to be tested, whether the budget is adequate, and so on. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point.

Activities at this stage include preparation of a high-level test plan. According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in this software test plan and revolve around it.
Test Analysis

Once the test plan is made and agreed upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at different stages of the SDLC; whether we need or plan to automate, and if so, when the appropriate time to automate is; and what specific documentation is needed for testing.

Proper and regular meetings should be held between the testing team, project managers, development teams and business analysts to check progress. This gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and helps refine the testing strategy created earlier. In this stage we start creating test case formats and the test cases themselves. We also need to develop a functional validation matrix based on the business requirements to ensure that every system requirement is covered by one or more test cases; identify which test cases to automate; and begin reviewing documentation such as the functional design, business requirements, product specifications and product externals. We also have to define areas for stress and performance testing.
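A functional validation matrix is, at its core, a mapping from requirements to the test cases that cover them; coverage gaps fall out of it directly. A minimal sketch (the requirement and test-case IDs are invented):

```python
# Hypothetical functional validation matrix: requirement -> covering test cases.
validation_matrix = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test case mapped yet: a coverage gap
}

def uncovered_requirements(matrix):
    """Return the requirements that have no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(validation_matrix))  # ['REQ-003']
```

In practice this matrix usually lives in a spreadsheet or test management tool, but the check it enables is exactly this one: every requirement must map to at least one test case.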
Test Design

Test plans and cases developed in the analysis phase are revised, and the functional validation matrix is revised and finalized. In this stage, risk assessment criteria are developed. If you plan to automate, you have to select which test cases to automate and begin writing scripts for them. Test data is prepared, standards for unit testing and pass/fail criteria are defined, the testing schedule is revised (if necessary) and finalized, and the test environment is prepared.
Construction and verification

In this phase we complete all the test plans and test cases, finish scripting the automated test cases, and complete the stress and performance testing plans. We support the development team in their unit testing phase, and bugs are reported as and when they are found. Integration tests are performed and errors (if any) are reported.
Testing Cycles

In this phase we run testing cycles until the test cases execute without errors or a predefined condition is reached: run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> fix bugs --> retest (test cycle 2, test cycle 3, ...).
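The cycle above (run, report, revise, retest) amounts to a loop that stops either on a clean run or when a predefined cycle limit is reached. A minimal sketch, with the failing runs simulated rather than taken from any real test tool:

```python
def run_test_cycles(run_tests, max_cycles=3):
    """Repeat test cycles until a clean run or the predefined cycle limit.
    run_tests() returns the list of bugs found in one cycle."""
    bugs = []
    for cycle in range(1, max_cycles + 1):
        bugs = run_tests()
        if not bugs:
            return cycle, []     # exit criterion met: a run with no errors
    return max_cycles, bugs      # predefined condition (cycle limit) reached

# Simulated cycle results: two failing cycles, then a clean run.
results = iter([["BUG-1", "BUG-2"], ["BUG-2"], []])
print(run_test_cycles(lambda: next(results)))  # (3, [])
```

The predefined condition matters: without a cycle limit (or an equivalent exit criterion agreed in the test plan), a project can loop through test cycles indefinitely.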
Final Testing and Implementation

In this phase we execute the remaining stress and performance test cases, complete and update the testing documentation, and provide and complete the various testing metrics. Acceptance, load and recovery testing are also conducted, and the application is verified under production conditions.
Post Implementation

In this phase, the testing process is evaluated and the lessons learnt from it are documented. A strategy to prevent similar problems in future projects is identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. The test environment is cleaned up and test machines are restored to their baselines in this stage.

MUTATION TESTING


A kind of testing in which small, deliberate modifications (mutants) are introduced into the application's code and the existing tests are run against each mutant. If the tests fail, the mutant is said to be killed; mutants that survive expose gaps in the test suite. Mutation testing thus measures how effective the tests are at detecting faults, and it also gives insight into which code and which coding strategy implement the functionality most robustly.

Wednesday, June 9, 2010

Proof of Concept & Interoperability Testing

Interoperability testing has become a requirement for companies that deploy multi-vendor networks. To satisfy this requirement, network and storage providers and managers have three options.

1. Set up an interoperability lab, an expensive and time-consuming project.

2. Use a third-party interoperability lab, such as ISOCORE or the University of New Hampshire.

3. Create a proof-of-concept lab, such as the labs at Cisco or Spirent Communications.

These labs typically connect the devices with a copper or fiber-optic patch cable and run the tests. Such testing reflects a best-case scenario that is useful for base-line interoperability testing but doesn’t represent how the devices will interoperate in an actual network.

It is analogous to testing the auto-pilot system of an airplane to see if it could land the plane in ideal weather conditions. While the test proves the plane can land itself on a perfect day, it is not a predictor of how the system will behave in the wide range of weather conditions under which a plane has to operate.

Sometimes spools of fiber are used to create the delay found in wide area networks. While this is an improvement over patch cables, there are three major limitations to proof of concept and interoperability testing with spools of fiber:

1. Spools of fiber cannot provide dynamic tests. The tests must be manually stopped and restarted to change out the spool of fiber for one with a different length.

2. Spools of fiber are expensive and impractical. Imagine the cost associated with moving a 50,000 km spool of fiber to another lab.

3. Spools of fiber only provide delay. They do not address the various other impairments that exist in a network.

It is as if we improved our auto-pilot system testing to include fixed amounts of wind from a single direction. Factors such as fog, rain, snow and wind shears are still ignored in the testing. Weather conditions are dynamic and multifaceted. Testing under a single condition is not a realistic test. As with weather, so it is with networks. Impairments in real networks do not limit themselves to a single issue.
www.qaamitahuja.blogspot.com

Wednesday, October 14, 2009

Iterative Model

An iterative lifecycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the following four phases in sequence:

A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.

A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.

An Implementation and Test phase, when the software is coded, integrated and tested.

A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required, and a fresh start has to be made.
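The keep-or-discard decision at the end of each cycle can be sketched as a simple loop; the review outcomes below are simulated, and the function names are invented for illustration:

```python
def iterate(review, max_cycles=10):
    """Repeat development cycles until a review reports the requirements
    complete. review(candidate) returns (keep_version, requirements_complete)."""
    version = 0
    for _ in range(max_cycles):
        candidate = version + 1      # build the next version of the software
        keep, complete = review(candidate)
        if keep:
            version = candidate      # keep as the next starting point
                                     # (incremental prototyping)
        if complete:
            break                    # requirements complete: deliver
    return version

# Simulated review outcomes: the first candidate is discarded,
# the next two are kept, and the last review reports completion.
outcomes = iter([(False, False), (True, False), (True, True)])
print(iterate(lambda v: next(outcomes)))  # 2
```

Note that a discarded candidate leaves `version` unchanged, so the next cycle starts again from the last kept version, which is exactly the incremental-prototyping decision described above.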

The iterative lifecycle model can be likened to producing software by successive approximation. Drawing an analogy with mathematical methods that use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution.

The key to successful use of an iterative software development lifecycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model. The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or waterfall lifecycle model. Each cycle of the model produces software that requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.







Amit Ahuja

Tuesday, October 6, 2009

Test Cases

Designing Test Cases


A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan. Figure 2.10 illustrates the point at which test case design occurs in the lab development and testing process.


A test case includes:

* The purpose of the test.

* Special hardware requirements, such as a modem.

* Special software requirements, such as a tool.

* Specific setup or configuration requirements.

* A description of how to perform the test.

* The expected results or success criteria for the test.



Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review.

Organizations take a variety of approaches to documenting test cases; these range from developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.

Most organizations prefer detailed test cases because determining pass or fail criteria is usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to automate than descriptive test cases. This is particularly important if you plan to compare the results of tests over time, such as when you are optimizing configurations. Detailed test cases are more time-consuming to develop and maintain. On the other hand, test cases that are open to interpretation are not repeatable and can require debugging, consuming time that would be better spent on testing.

Test Case Design

Test Case ID:

A unique number given to the test case so that it can be identified.

Test description:

A description of what the test case is going to test.

Revision history:

Each test case should have a revision history so that it is known when and by whom it was created or modified.

Function to be tested:

The name of the function to be tested.

Environment:

The environment in which the test is run.

Test Setup:

Anything that needs to be set up outside the application, for example printers, the network and so on.

Test Execution:

A detailed description of every step of execution.

Expected Results:

The description of what you expect the function to do.

Actual Results:

Pass / Failed.

If passed - what actually happened when you ran the test.

If failed - a description of what you observed.
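The fields above can be captured in a simple record, which makes a detailed test case easy to review, reproduce and automate. A minimal sketch (all values are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str           # unique identifier
    description: str            # what the case tests
    function_under_test: str
    environment: str
    setup: str                  # anything needed outside the application
    execution_steps: list       # detailed step-by-step procedure
    expected_result: str
    actual_result: str = ""     # filled in after execution
    status: str = "not run"     # "pass" / "fail"

tc = TestCase(
    test_case_id="TC-101",
    description="Login with valid credentials",
    function_under_test="login",
    environment="staging",
    setup="Test user account provisioned",
    execution_steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User lands on the dashboard",
)
tc.actual_result = "User landed on the dashboard"
tc.status = "pass"
print(tc.test_case_id, tc.status)  # TC-101 pass
```

Because every step and expectation is explicit data, such detailed cases are reproducible and straightforward to feed into an automation harness, which is the advantage over descriptive cases noted earlier.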




