Tuesday, August 19, 2008

Whitebox Testing - Is it really white?

  • The popular myth about Blackbox and Whitebox Testing comes from the names: it is black because we cannot see inside it (we do not have access to the code), and it is white because we have access to all of the code. But even within the code there are many black boxes whose code base we cannot reach:
  • We do not have access to the code of a language API. Most applications are built on top of an API and assume that the API works fine.
  • Most applications integrate third-party tools over their APIs. We do not have access to that code base.
  • We do not have access to the code of the compiler.
  • We do not have access to the code of the runtime engine that executes our application code.
  • We do not have access to the code of the operating system services on top of which the application runs.

The list goes on, and there are many black boxes inside our code too. We are only testing the code written for the application, so it is better to call it Code-Based Testing rather than Whitebox Testing.

– Happy Testing..

How to get a job in Software Testing quickly?

I came across this article while surfing the net, so I thought I should share it with you all. In recent days this is the question readers ask me most often: How do I get a software testing job? How do I get into the software testing field? Can I get a job in testing?
All these questions are similar, and I want to give a similar answer to all of them. I have written a post on choosing software testing as your career, where you can analyze your abilities and learn which skills are most important for software testing.
I will keep saying it: "know your interest before going into any career field". Jumping into software testing, or any other hot career, just because it is hot is wrong, and it may cost you both your interest in the job and the job itself.
So now you know your abilities, skills and interests, and you have decided to go for a software testing career because it is the one you like and suit best. Here is a guideline for getting a good job in the software testing field.
If you are a fresher who has just passed out of college, or will pass out in the coming months, you need to prepare well on software testing methodologies. Prepare all the manual testing concepts. If possible, get some hands-on experience with automation and bug-tracking tools like WinRunner and Test Director. It is always a good idea to join a software testing institute or class, which will give you a good start and a direction for your preparation. You can join a four-month software testing course or do a diploma in software testing, which typically runs from six months to one year. Keep preparing throughout the course; this will let you start giving interviews right after the course is over.
If you have some previous IT experience and want to switch to software testing, it is somewhat simpler for you. Show your previous IT experience in your resume while applying for software testing jobs. If possible, do a crash course to get an idea of the software testing concepts I mentioned for freshers above. Keep in mind that because you have IT experience, you should be prepared for some tough interview questions.
Since companies always prefer relevant experience for any software job, it is better if you have relevant experience in software testing and QA. This may be hands-on experience with software testing tools or a testing course from a reputed institute.
Please always keep this in mind: do not add fake experience of any kind. It can ruin your career forever. Wait and keep trying to get the job on your own abilities instead of falling into the trap of fake experience.
One last important word: software testing is not an "anyone can do it" career! Remove this attitude from your mind if someone has told you such a foolish thing. Testing requires in-depth knowledge of the SDLC, out-of-the-box thinking, analytical skill and some programming-language skill, apart from the software testing basics.

So best luck and start preparation for your rocking career!

What Do You Mean By SRS

  • A set of precisely stated properties or constraints that a software system must satisfy.
  • A software requirements document establishes boundaries on the solution space of the problem of developing a useful software system.

A software requirements document allows a design to be validated - if the constraints and properties specified in the document are satisfied by the software design, then that design is an acceptable solution to the problem.


Six requirements which a software requirements document should satisfy

  1. it should specify only external system behaviour,
  2. it should specify constraints on the implementation,
  3. it should be easy to change,
  4. it should serve as a reference tool for system maintainers,
  5. it should record forethought about the life cycle of the system, and
  6. it should characterize acceptable responses to undesired events


Characteristics of a Software Requirements Specification

A good SRS is

  1. unambiguous,
  2. complete,
  3. verifiable,
  4. consistent,
  5. modifiable,
  6. traceable, and
  7. usable during the operation and maintenance phase.

Unambiguous

  • Every requirement has only one interpretation.
  • Each characteristic of the final product is described using a single unique term.
  • A glossary should be used when a term used in a particular context could have multiple meanings.

Complete

    A complete SRS must possess the following qualities:
    1. inclusion of all significant requirements,
    2. definition of the responses of the software to all realizable classes of input,
    3. conformity to any standard that applies to it,
    4. full labelling and referencing of all tables and diagrams and the definition of all terms.

Verifiable

  • Every requirement must be verifiable.
  • There must exist some finite cost-effective process with which a person or machine can check that the software meets the requirement.
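As a small illustration (my example, not from the original text): the requirement "a username shall be 3 to 20 alphanumeric characters" is verifiable, because a finite, cheap check exists that a person or a machine can run.

    # Hypothetical check for a verifiable requirement:
    # "a username shall be 3-20 alphanumeric characters"
    import re

    def meets_username_requirement(username: str) -> bool:
        return re.fullmatch(r"[A-Za-z0-9]{3,20}", username) is not None

    assert meets_username_requirement("tester01")   # satisfies the requirement
    assert not meets_username_requirement("a!")     # violates it

A requirement like "the system shall be user-friendly", by contrast, is not verifiable until it is restated in measurable terms.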

Consistent

  • No set of individual requirements described in the SRS can be in conflict.
  • Types of likely conflicts:
    1. Two or more requirements describe the same real world object in different terms.
    2. The specified characteristics of real world objects might conflict.
    3. There may be a logical or temporal conflict between two specified actions.

Modifiable

  • The structure and style of the SRS are such that any necessary changes to the requirements can be made easily, completely and consistently.
  • Requirements:
    1. a coherent and easy-to-use organization (including a table of contents, index and cross-referencing),
    2. not be redundant - this can lead to errors.

Traceable

  • The origin of each requirement must be clear.
  • The SRS should facilitate the referencing of each requirement in future development or enhancement documentation.
  • Types:
    1. Backward traceability
      • Each requirement must explicitly reference its source in previous documents.
    2. Forward traceability
      • Each requirement must have a unique name or reference number.

Usable during the operation and maintenance phase

  • The SRS must address the needs of the operation and maintenance phase, including the eventual replacement of the software.

Wednesday, August 13, 2008

Feasibility and Requirement Analysis

The purpose of the feasibility study is to determine whether the project can go ahead or not. If the project can go ahead, the feasibility study produces the project plan and budget for the future stages of development. The main goal of this phase is to identify the exact requirements for the system. This study is sometimes performed jointly by market organizations, the developer and the client. If a system already exists, the goal of this phase is to identify the parts of the system that have to be automated. Developing a system that does not exist is a more difficult task because it requires more creative thinking. Once the requirements are identified, they are recorded in a document called the SRS, and a specific specification language is chosen for this purpose. The document produced at the end of this phase is known as the SRS.

Amit Ahuja

Tuesday, August 12, 2008

Difference between Testing Types and Testing Techniques

Testing types deal with what aspect of the computer software would be tested, while testing techniques deal with how a specific part of the software would be tested.

That is, testing types mean whether we are testing the function or the structure of the software. In other words, we may test each function of the software to see if it is operational or we may test the internal components of the software to check if its internal workings are according to specification.

On the other hand, "testing technique" means what methods or ways would be applied, or what calculations would be done, to test a particular feature of the software (sometimes we test the interfaces, sometimes the segments, sometimes loops, etc.).

Amit Ahuja

Inspection

A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, testware inspections, etc.
The participants in Inspections assume one or more of the following roles:
a) Inspection leader
b) Recorder
c) Reader
d) Author
e) Inspector

All participants in the review are inspectors. The author shall not act as inspection leader and should not act as reader or recorder. Other roles may be shared among the team members. Individual participants may act in more than one role.
Individuals holding management positions over any member of the inspection team shall not participate in the inspection.

Input to the inspection shall include the following:
a) A statement of objectives for the inspection
b) The software product to be inspected
c) Documented inspection procedure
d) Inspection reporting forms
e) Current anomalies or issues list
Input to the inspection may also include the following:
f) Inspection checklists
g) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
h) Hardware product specifications
i) Hardware performance data
j) Anomaly categories
The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate rework and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is or with only minor rework (for example, rework that would require no further verification).
b) Accept with rework verification. The software product is to be accepted after the inspection leader or a designated member of the inspection team (other than the author) verifies the rework.
c) Re-inspect. Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection, as well as side effects of those changes.
Amit Ahuja

Walkthrough

A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems.
The objectives of a walkthrough can be summarized as follows:
· Detect errors early.
· Ensure that (re)established standards are followed.
· Train and exchange technical information among the project teams that participate in the walkthrough.
· Increase the quality of the project, thereby improving morale of the team members.
The participants in Walkthroughs assume one or more of the following roles:
a) Walk-through leader
b) Recorder
c) Author
d) Team member
To consider a review as a systematic walk-through, a team of at least two members shall be assembled. Roles may be shared among the team members. The walk-through leader or the author may serve as the recorder. The walk-through leader may be the author.
Individuals holding management positions over any member of the walk-through team shall not participate in the walk-through.

Input to the walk-through shall include the following:
a) A statement of objectives for the walk-through
b) The software product being examined
c) Standards that are in effect for the acquisition, supply, development, operation, and/or maintenance of the software product
Input to the walk-through may also include the following:
d) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
e) Anomaly categories

The walk-through shall be considered complete when
a) The entire software product has been examined
b) Recommendations and required actions have been recorded
c) The walk-through output has been completed
Amit Ahuja

Objective Testing

Objective testing is mainly used in systems where the data can be recorded while the simulation is running. This testing technique relies on the application of statistical and automated methods to the data collected.
Statistical Methods
Statistical methods are used to provide an insight into the accuracy of the simulation. These methods include hypothesis testing, data plots, principal component analysis and cluster analysis.
Automated Testing
Automated testing requires a knowledge base of valid outcomes for various runs of simulation. This knowledge base is created by domain experts of the simulation system being tested. The data collected in various test runs is compared against this knowledge base to automatically validate the system under test. An advantage of this kind of testing is that the system can continually be regression tested as it is being developed.
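As a rough sketch of the idea (the metric names, values and tolerances below are hypothetical, purely for illustration), the data collected from a run can be compared against an expert-built table of expected outcomes:

    # Sketch: validate collected simulation data against a knowledge base
    # of expected outcomes. All names and numbers are illustrative only.
    EXPECTED_OUTCOMES = {
        # metric: (expected value, allowed tolerance)
        "max_altitude_m": (10000.0, 150.0),
        "fuel_burn_kg": (820.0, 25.0),
    }

    def validate_run(collected: dict) -> list:
        """Return a list of deviations from the knowledge base."""
        failures = []
        for metric, (expected, tolerance) in EXPECTED_OUTCOMES.items():
            actual = collected.get(metric)
            if actual is None or abs(actual - expected) > tolerance:
                failures.append(f"{metric}: expected {expected} +/- {tolerance}, got {actual}")
        return failures

    # Run this after every build to regression-test the simulation.
    for problem in validate_run({"max_altitude_m": 10110.0, "fuel_burn_kg": 846.0}):
        print(problem)

Because the comparison is mechanical, it can be re-run on every build, which is what makes continuous regression testing of the simulation possible.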
Amit Ahuja

Subjective Testing

Subjective testing mainly depends on an expert's opinion. An expert is a person who is proficient and experienced in the system under test. Conducting the test involves test runs of the simulation by the expert and then the expert evaluates and validates the results based on some criteria.
One advantage of this approach over objective testing is that it can test those conditions which cannot be tested objectively. For example, an expert can determine whether the joystick handling of the flight simulation feels "right".

One disadvantage is that the evaluation of the system is based on the "expert's" opinion, which may differ from expert to expert. Also, if the system is very large then it is bound to have many experts. Each expert may view it differently and can give conflicting opinions. This makes it difficult to determine the validity of the system. Despite all these disadvantages, subjective testing is necessary for testing systems with human interaction.

Amit Ahuja

Monday, August 11, 2008

Black Box and Gray Box Testing

Black box testing is based on the Software's specifications or requirements, without reference to its internal workings. Gray box testing combines white box techniques with black box input testing [Hoglund 04]. This method of testing explores paths that are directly accessible from user inputs or external interfaces to the software. In a typical case, white box analysis is used to find vulnerable areas, and black box testing is then used to develop working attacks against these areas. The use of gray box techniques combines both white box and black box testing methods in a powerful way.

Amit Ahuja

White Box Testing

The purpose of any Security testing method is to ensure the robustness of a system in the face of malicious attacks or regular Software failures. White box testing is performed based on the knowledge of how the system is implemented. White box testing includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test the intended and unintended software behavior. White box testing can be performed to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities.

White box testing requires access to the source code. Though white box testing can be performed any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase.

White box testing requires knowing what makes software secure or insecure, how to think like an attacker, and how to use different testing tools and techniques. The first step in white box testing is to comprehend and analyze source code, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. The three requirements do not work in isolation, but together.

Amit Ahuja

Saturday, August 9, 2008

Kinds of testing that should be considered for Websites.

Black box testing - testing not based on any knowledge of the internal design or code. Tests are based on requirements and functionality.

Incremental integration testing - continuous testing of a website as new functionality is added; requires that various aspects of a web application's functionality be independent enough to work separately before all parts of the program are completed.

Integration testing - testing that is done on combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, pages in a website etc.

Functional testing - black box type testing geared to functional requirements of an application; testers should do this type of testing. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing.)

System testing - black box type testing that is based on overall requirements specifications; covers all combined parts of a web application.

End-to-end testing - similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing or smoke testing - typically an initial testing effort to determine if a new web application is performing well enough to accept it for a major testing effort. For example, if there are lots of missing links, missing images, missing validations, or corrupted databases, the website may not be in a "sane" enough condition to warrant further testing in its current state.

Regression testing - re-testing after fixes or modifications of the website or its pages. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
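As a minimal sketch of such automation (the URLs and expected strings are placeholders, and it assumes the Python requests library is available), even a small script can re-run the same checks after every fix:

    # Tiny automated regression check for a website (illustrative only).
    import requests

    PAGES = {
        "https://example.com/login": "Sign in",
        "https://example.com/search": "Search",
    }

    def run_regression_checks() -> bool:
        ok = True
        for url, expected_text in PAGES.items():
            response = requests.get(url, timeout=10)
            if response.status_code != 200 or expected_text not in response.text:
                print(f"FAIL: {url} (status {response.status_code})")
                ok = False
            else:
                print(f"PASS: {url}")
        return ok

    if __name__ == "__main__":
        run_regression_checks()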

Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing - testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
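A very rough sketch of the idea (the URL is a placeholder; a real load test would normally use a dedicated tool such as JMeter rather than a hand-rolled script) is to send increasing numbers of concurrent requests and watch how the response time changes:

    # Rough load-test sketch: increase concurrency, record average response time.
    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests

    URL = "https://example.com/"

    def timed_request(_):
        start = time.time()
        requests.get(URL, timeout=30)
        return time.time() - start

    for users in (1, 5, 10, 25, 50):
        with ThreadPoolExecutor(max_workers=users) as pool:
            durations = list(pool.map(timed_request, range(users)))
        print(f"{users:>3} concurrent users -> avg {sum(durations) / len(durations):.2f}s")

The point at which the average time starts to climb sharply, or requests start failing, is the degradation point the definition above refers to.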

Stress testing - term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing - term often used interchangeably with ’stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing - testing that is done for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility testing - testing how well the website performs in a particular hardware/software/operating system/browser/network etc. environment.

Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the website as they test it.

Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the website before testing it.

User acceptance testing - determining if the website is satisfactory to an end-user or customer.

Alpha testing - testing of a web application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.


Amit Ahuja

Testing Policy VS. Quality Policy

Testing Policy:

A testing policy is management's definition of testing for a department. A testing policy involves the following four criteria:

Definition of testing: a clear, brief and unambiguous definition of testing

Standards: The standards against which testing will be measured

Testing system: The method through which testing will be achieved and enforced

Evaluation: How the testing team will measure and evaluate testing.

Quality Policy:

A quality policy is, again, a management definition of providing customer satisfaction the first time and every time. Understanding the definition of excellence and quality is important because it is the starting point for any management team contemplating the implementation of a quality policy.

Amit Ahuja

What if Automation finds bugs ....? Good thing or bad thing?

Now, let us trace the trail of what happens when a bug is discovered.
In automation, the situation can be a bit tricky, especially when the automated tests and their logs are large. An error or bug reported by an automated test needs to be checked to see whether it is a bug in the automation code, a bug in the application, a bug in the data setup, or some timing or synchronization problem (in a GUI automation scenario). Say you have a 5-7 page log file: you will have to scan through the log file and locate the bug. You might also have to execute the failed automated test manually (with the corresponding data setup, etc.).
In manual testing, a human tester can easily trace and follow the bug trail and document the bug. At a high level, bug investigation and isolation tasks tend to take relatively little effort.

Hence, when automation discovers a bug - things get really problematic.

If one were to cut down cycle time by automation or otherwise, they HAVE to make sure that either "no bugs are discovered", or "any discovered bugs are IGNORED", or "bugs that are discovered are fixed but not tested again, and no other regression testing is done" ...

Can automation control or influence any of the above events: prevent bugs from being discovered, ignore bugs if they are accidentally discovered, or mandate that bug fixes will not be subsequently tested?

For the sake of argument, let us suppose that both the human test cycle and automation find the same number of bugs, and take the "bugs" portion out of the test cycle. How can automation save test cycle time? On what parameters does this cycle-time reduction by automation depend?

Type of test - the nature of the interactions between the "test execution agent" (a human or an automated script) and the nature of the verifications (during and after execution).
  • GUI forms with lots of data input fields can result in quick form-fill tests when automated (zero think time).
  • Tests that require longer processing time cannot gain from automation, as automation cannot speed up the processing itself.
  • Tests that require visual inspection - window titles, error messages, colors, tool tips and other GUI-centric elements - are better tested manually, as programmatic checks would mean a lot of investment. Human testers are quicker and cheaper in such cases.
  • Result verification that requires detailed analysis, text-file processing, database checks, etc. is a good candidate for gaining cycle time (see the sketch below).
Thus, there are parameters that are beyond the reach of automation ... hence the notion of cycle-time reduction really has to be taken with "caution".
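For instance, the kind of result verification mentioned in the last bullet above is something automation does well. Here is a minimal sketch (the file paths are placeholders) that diffs a generated output file against an approved baseline:

    # Sketch: verify a generated output file against an approved baseline.
    import difflib

    def verify_output(actual_path: str, baseline_path: str) -> bool:
        with open(actual_path) as f_actual, open(baseline_path) as f_baseline:
            actual = f_actual.readlines()
            baseline = f_baseline.readlines()
        diff = list(difflib.unified_diff(baseline, actual,
                                         fromfile="baseline", tofile="actual"))
        if diff:
            print("".join(diff))  # the diff itself localizes the failure
            return False
        return True

A check like this runs in seconds on every cycle, whereas the GUI-inspection cases above still need a human.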

Amit Ahuja

Are all best practices "worthless"? Testing Best Practices

The other day I was quoting the following passage from Jerry's new book on testing to one of my colleagues, who is a "best practice" proponent.

…..The risks in these two situations are vastly different, so do you think I recommended the same testing process I used for finding a personal web-writing application? Do you imagine I recommended that my client install random freeware pacemakers into live patients until he found something he liked, or didn't dislike? Why not?

I took the above sentences as a reference and told him: "Can you use the software testing strategy that one uses for a web-writing application for the embedded software in a heart pacemaker? Hence best practices are such a junk thing ..."

He was silent for a while, then answered: "I agree with your point that the test strategy or approach used for a web application cannot be applied to embedded software in a pacemaker. But how about picking the practice from the same field or domain – will that not save time, energy and effort for my client? Let us say I develop a list of practices for a given field (embedded software used in human bodies) and keep 'selling' them as best practices (a jump-start kit) to clients who deal with such software. What is your opinion? Would you still say best practices (in a context) are junk?"

I did not have a good answer for him. Then we discussed "universal best practices" (I am not sure such a phrase exists, since all best practices are by default universal in nature and context-free?) such as "walking is good for health", "test considering end-user scenarios", "do unit testing", "do code reviews", "aspirin is good for the heart", "drunken driving leads to accidents", "do meditation to calm your mind", etc. I told him about at least three contexts for each of these best practices where following them can lead to harmful effects.

After listening to me, he said: "Shrini, you appear to be 'making up' all these contexts to prove your point. I want you to answer my question: are all generic best-practice recommendations worthless or fake? When customers want something ready-made that will help them jump-start the work, they would like to see whether I, as a consultant, can bring some 'best practices' from my previous similar experiences. Is that expectation unreasonable?"

I am still thinking ... I do not have a good answer for him. Do you? I hope Jerry would have some answer ...

Are there any "universal best practices", or are all best practices by default universal and context-free? Will a best practice cease to remain a best practice once it comes with a context?

[Update] Quoting from Jerry's book again: "As humans, we are not perfect thinkers; we are affected by emotions, and we are not clones. We are imperfect, irrational, value-driven, diverse humans - hence we test software and test our testing, AND hence we test the 'best practices' that sales and marketing folks associate with software testing."

Amit Ahuja

Software - A game of questions and answers

"The most serious mistakes are not being made as a result of wrong answers. The truly dangerous thing is asking the wrong question."
— Peter Drucker

"Testing is a questioning process in order to evaluate software" - James Bach

"Computers are useless. They can only give you answers." - Pablo Picasso

"One who asks a question is a fool for five minutes; one who does not ask a question remains a fool forever." - A Chinese proverb

The other day I was discussing with one of my colleagues ... somehow our discussion turned to interpreting, in simple terms, the whole "game" of software development and testing. Here is how we ended up agreeing on a "simple" model to describe software and the software lifecycle (not the SDLC but the SLC).


Software development is about coming up with answers (and demonstrating those answers with an example) to the questions raised by testers, end users and other stakeholders.

Software testers ask questions about the claims and capabilities of what the software is supposed to do, take those questions to the developers and project managers, and ask for answers.

The project manager or project sponsors scan these questions, pick up those they think are worth "answering", prioritize them, and pass them on to the developers to provide answers and ways to demonstrate those answers. Before releasing the software to testers, developers do some question-and-answer sessions with buddy developers and leads (peer testing, unit testing and code reviews).

Developers then get on a mission to analyze the questions and develop or construct answers in the form of capabilities in the software, and "release" it to testers to check whether the answers are "satisfactory". When developers do not have answers, or feel that it will take a relatively long time to find them, they turn to the project manager with their analysis of why an answer cannot be made available immediately. The project manager then takes the decision to "defer" those "unanswered" questions to future releases.
At times, testers act on behalf of end users and other stakeholders.

Testers verify those answers and check that they are OK. Sometimes there are follow-up questions or new questions (regression bugs or issues), and these are routed to the developers via the project manager. This cycle repeats as long as there are new questions to be answered by the developers.

So, as long as there are questions to be answered about the software, there will be a need for developers (who will provide the answers) and a need for project managers (to prioritize and decide which questions need to be answered), and hence a software development project.

Guess what: it is the software testers who drive the whole thing, by asking relevant and important questions about the software - about its claims and capabilities.

So an important trait of a tester is to practice asking "good" questions.


Amit Ahuja




 
