Friday, September 5, 2008

Test Director:

TestDirector is a single, Web-based application for all essential aspects of test management — Requirements Management, Test Plan, Test Lab, and Defects Management. You can leverage these core modules either as a standalone solution or integrated within a global Quality Centre of Excellence environment. TestDirector supports high levels of communication and collaboration among IT teams. Whether you are coordinating the work of many disparate QA teams or working with a large, distributed Centre of Excellence, this test management tool helps facilitate information access across geographical and organizational boundaries.

Quality Center is the latest version of TestDirector: up to version 8.0 the product was called TestDirector, and after that it is called Quality Center (QC). Quality Center keeps the same four modules that TestDirector had; that did not change. Quality Center also integrates with the functional testing tools, which now include QTP and WinRunner (this integration was possible with TestDirector as well). As for tabs, there is one additional tab in Quality Center that was not there before, called Dashboard (for project management).




WinRunner:

WinRunner is Mercury Interactive's functional test automation tool.

How many types of Run Modes are available in WinRunner?

WinRunner provides three run modes:

  • Verify Mode
  • Debug Mode
  • Update Mode

What’s the Verify Mode?

In Verify mode, WinRunner compares the application's current results against its expected results.

What’s the Debug Mode?

In Debug mode, WinRunner runs the test so you can find and fix defects in the test script itself.

What’s the Update Mode?

In Update mode, WinRunner updates the expected results of the test script.
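
The practical difference between the three run modes shows up at checkpoints. As a rough TSL sketch (the window name, checklist file, and expected-results name below are illustrative, not from a real project), a recorded GUI checkpoint looks something like this:

    set_window ("Login", 10);                         # make the Login window active
    win_check_gui ("Login", "list1.ckl", "gui1", 1);  # compare the window against stored expected results

In Verify mode this checkpoint compares the window's current state with the expected results saved as "gui1" and reports any mismatch as a failure; in Update mode the same statement overwrites "gui1" with the current state; in Debug mode the script is run mainly to flush out scripting errors rather than to produce formal verification results.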

How many types of recording modes are available in WinRunner?

WinRunner provides two recording modes:

  • Context Sensitive
  • Analog

What’s the Context Sensitive recording?

In Context Sensitive recording, WinRunner records operations in terms of the GUI objects and windows they act on, along with the keyboard input and mouse clicks performed on them.
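
For example, logging in to an application in Context Sensitive mode typically produces TSL statements that address each object by its logical name (the window, field, and values below are illustrative):

    set_window ("Login", 10);             # make the Login window active
    edit_set ("Agent Name", "mercury");   # type into the edit field, identified by name
    button_press ("OK");                  # press the OK button, identified by name

Because the script refers to objects by name, it keeps working even if the objects move on the screen.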

What’s the Analog recording?

Analog recording captures keyboard input, mouse clicks, and mouse movements as raw input; it does not capture GUI objects or windows.
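
The same login recorded in Analog mode is stored as raw mouse tracks and keystrokes tied to screen coordinates, roughly as in the sketch below (the track number and timing values are illustrative), which is why Analog scripts are sensitive to window position and screen resolution:

    move_locator_track (1);    # replay the recorded mouse movements (track 1)
    mtype ("<T30><kLeft>+");   # press the left mouse button
    mtype ("<T30><kLeft>-");   # release the left mouse button
    type ("mercury");          # raw keyboard input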


Monday, September 1, 2008

Why do we test for performance?

What a simple question! I even have a simple answer.

"To determine or estimate various performance characteristics under various conditions."

The problem is that answer is virtually useless unless we also know what performance characteristics are interesting to whom and for what purpose. Worse, more often than not, the folks who ask us to do the performance testing fundamentally don't know what they want to know and don't know what we can reasonably provide. They also don't understand that how the results are going to be used significantly impacts which tests we run and how we design them.

In my experience, when I ask stakeholders what the goals of the performance testing effort are, I generally get one of three answers:

  • You're the performance tester, you tell me.
  • Tell me how many users/orders/customers we can handle in production.
  • Make sure it will be fast enough.

Needless to say, these answers are not only as useless as my response about determining or estimating performance characteristics; the second two are also practically impossible, since we almost never have either the data or the equipment available to accomplish those missions reliably. Somewhere between "virtually useless" and "practically impossible" there must be some reasons for testing performance that are both useful and possible. If there weren't useful and possible reasons for testing performance, we wouldn't still be doing it. (I hope!)

As it turns out, the key to my coming up with a model to explore that middle ground was to stop thinking about performance testing as a "testing effort" and start thinking about it as a "business effort." Once I made that shift, I was quickly able to identify four groups of "for whom" with common "for what purposes." Since then, I've found that using this model to frame conversations about prioritizing performance testing objectives fundamentally changes the discussion and increases the value of the performance testing effort. It also reduces wasted effort from designing tests to collect one class of results, only to find out that an entirely different test was needed to provide the class of results that stakeholders wanted but were unable to articulate until after they were presented with the less valuable results.

 
