All You Wanted To Know On Software Testing. Interview Questions. Software Testing Terminologies.
Monday, October 18, 2010
Verification Versus Validation
Verification ensures that the application complies with standards and processes. It answers the question "Did we build the system right?" Examples: design reviews, code walkthroughs and inspections.
Validation ensures that the application meets the specified requirements. It answers the question "Did we build the right system?" Examples: unit testing, integration testing, system testing and UAT.
Sunday, October 17, 2010
Endurance And Volume Testing
Load testing can be conducted in two ways. Longevity testing, also called endurance testing, evaluates a system's ability to handle a constant, moderate work load for a long time. Volume testing, on the other hand, subjects a system to a heavy work load for a limited time. Either approach makes it possible to pinpoint bottlenecks, bugs and component limitations. For example, a computer may have a fast processor but a limited amount of RAM (random-access memory). Load testing can provide the user with a general idea of how many applications or processes can be run simultaneously while maintaining the rated level of performance.
Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme work loads or when some of its hardware or software has been compromised. The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.
Thursday, October 14, 2010
What is Spike Testing?
Spike testing is a testing process in which an application is tested with sudden increases and decreases in load. The system is suddenly loaded and unloaded. It is done to see how the system reacts to an unexpected rise and fall in the number of users.
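A minimal sketch of the idea, assuming a Python test harness; the target URL, user counts and phase lengths below are invented placeholders, not anything from the original post:

# Spike-test sketch (illustrative only): normal load, a sudden spike, then back to normal.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/"   # hypothetical application under test

def hit_once(_):
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            return resp.status
    except Exception:
        return None

def run_phase(users, seconds):
    """Fire `users` concurrent requests repeatedly for `seconds` seconds."""
    end = time.time() + seconds
    results = []
    with ThreadPoolExecutor(max_workers=users) as pool:
        while time.time() < end:
            results.extend(pool.map(hit_once, range(users)))
    ok = sum(1 for r in results if r == 200)
    print(f"{users:>4} users for {seconds:>3}s -> {ok}/{len(results)} OK")

if __name__ == "__main__":
    run_phase(5, 30)      # normal load
    run_phase(200, 10)    # sudden spike
    run_phase(5, 30)      # back to normal: does the system recover?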
What is heuristic testing?
Testing the application based on experience is referred to as heuristic testing.
Friday, October 8, 2010
Difference Between Re-testing and Regression Testing?
Re-testing is running a previously failed test again, on a new build, with the intent of confirming that the reported bug has actually been fixed.
Regression testing is testing a new build to check whether bug fixes or other changes have introduced any new bugs; basically you are checking that functionality which already worked still works.
What Is Bug Triage?
Bug triage is the formal meeting held to review reported bugs and classify them on the basis of severity and priority.
Sunday, October 3, 2010
Source Code Testing Tools
- BoundsChecker by NuMega
- PureCoverage by Rational
- Purify by Rational
- JProbe by Sitraka Software
- ATTOL Coverage by ATTOL Software
What is middleware?
Middleware is software that allows software components to talk to each other. For example, WebSphere MQ (MQSeries) is a middleware product available in the market.
Middleware can also be defined as software that mediates between an application program and a network. It manages the interaction between disparate applications across heterogeneous computing platforms. The Object Request Broker (ORB), software that manages communication between objects, is an example of a middleware program.
Wednesday, September 29, 2010
What is Quality Control?
Quality control measures a product against the existence of its required attributes:
• Checks whether the product conforms to defined standards and procedures
• Quality control is a detection activity
• Testing is an example of QC
What is Quality Assurance?
• A set of activities that establishes processes and sets up measurement programs to evaluate those processes
• Processes "assure" the same level of quality in all the products produced by that process
• It is oriented towards 'prevention'.
• Training is an example of QA
What is Unit Testing?
Testing of individual units or groups of related code is known as unit testing. A unit is tested against a Unit Test Plan (UTP).
Unit testing is testing of the smallest testable part of the software. It can be an individual program or a group of programs forming a unit; the definition of 'unit' can be left to the project team. In some cases a single program is considered a unit, and in other cases a set of related programs is. This discretion can be left to individual project teams. In some cases it is also referred to as component testing.
It is very important that unit testing is carried out with the help of a unit test plan. Note that this unit test plan should be prepared during the detailed design stage, not after coding! The reason is that if you prepare a unit test plan based on your code, any defect in the code will creep into the unit test plan too, defeating the very purpose of unit testing.
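As a small illustration, here is what a unit test might look like with Python's built-in unittest module; the discount() function is a hypothetical "unit" invented for this sketch:

# Minimal unit-test sketch: one small unit, a few focused test cases.
import unittest

def discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(discount(99, 0), 99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100, 150)

if __name__ == "__main__":
    unittest.main()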
Three Tier Architecture
In three-tier client/server, the processing is divided between two or more servers, one typically used for application processing and another for database processing. This is common in large enterprises.
It decomposes an application into three sets of services: UI, business and data. In this architecture the UI resides in the client, the business logic resides in the business logic server and the data in the database server.
A three-tier architecture introduces a server (or an "agent") between the client and the server. The role of the agent is manifold. It can provide translation services (as in adapting a legacy application on a mainframe to a client/server environment), metering services (as in acting as a transaction monitor to limit the number of simultaneous requests to a given server), or intelligent agent services (as in mapping a request to a number of different servers, collating the results, and returning a single response to the client).
What is two tier architecture ?
A two-tier architecture is one in which a client talks directly to a server, with no intervening server. It is typically used in small environments.
For example, you may have a database server talking to a thick-client machine.
When is Automation Preferred?
- Complex and time-consuming tests
- Tests requiring a great deal of precision
- Tests involving many simple, repetitive steps
- Tests involving many data combinations
What is Alpha and Beta Testing?
- Both are forms of acceptance testing.
- Testing is done in the production environment.
- Alpha testing is performed by end users within the company but outside the development group.
- Beta testing is performed by a subset of actual customers outside the company.
What is Acceptance Testing?
Acceptance testing is one of the last phases of testing and is typically done at the customer's site. Testers perform tests that are ideally derived from the User Requirements Specification, to which the system should conform. The focus is on a final verification of the required business functions and the flow of the system.
What is Installation Testing?
Installation testing typically covers:
- Basic installation
- Installation of various configurations
- Installation on various platforms
- Regression testing of basic functionality
What is Sanity Testing?
Sanity testing is a type of functional system testing with:
- A very basic, minimal number of tests to verify the product for feature / protocol compliance
- Typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort
What are the Types Of Integration Testing?
Big-bang Integration (non-incremental)
- All components are combined in advance.
- Correction is difficult because isolation of causes is complicated.
Incremental Integration
- Incremental integration can be defined as continuous testing of an application by constructing and testing small components.
Top-down Strategy
- An incremental approach which can be done depth-first or breadth-first.
- Stubs are used until the actual program is ready.
Bottom-up Strategy
- The process starts with low-level modules; critical modules are built first.
- A cluster approach and test drivers are used.
- Often works well in less structured applications.
What are drivers?
Drivers are simple programs designed specifically for testing the calls to lower layers. Test drivers permit generation of data in external form to be entered automatically into the system.
What is a stub?
Stubs are pieces of code that have the same interface as the low-level functionality.
They do not perform any real computation or data manipulation.
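A minimal sketch showing a stub and a test driver together; pay_order(), the stubbed payment call and the amounts are hypothetical names invented for illustration:

def charge_card_stub(card_number, amount):
    """Stub: same interface as the real payment-gateway call,
    but no real computation -- it just returns a canned result."""
    return {"status": "approved", "amount": amount}

def pay_order(order_total, card_number, charge_fn):
    """Unit under test: applies a service fee, then calls the lower layer."""
    total = round(order_total * 1.02, 2)          # 2% service fee
    return charge_fn(card_number, total)

def driver():
    """Driver: a throw-away program that feeds test data into the unit
    and checks what it passes down to the lower layer."""
    result = pay_order(100.00, "4111-1111-1111-1111", charge_card_stub)
    assert result["status"] == "approved"
    assert result["amount"] == 102.00
    print("driver: pay_order behaved as expected")

if __name__ == "__main__":
    driver()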
Tuesday, September 28, 2010
Integration Testing Explained
Once unit testing is complete, the modules are integrated together, and the testing done to check whether the integration is seamless is called integration testing. This may again be done by the developers themselves or independently among them. Integration between modules / systems can happen in different ways:
1. Program-to-program integration (Intra-module interface)
2. Module (set of programs)-to-module integration (Inter-module interface)
3. System-to-system integration (External interface)
What Is Integration Testing?
• Combining and testing multiple components together
• Integration of modules, programs and functions
• Tests Internal Program interfaces
• Tests External interfaces for modules
What Constitutes System Testing?
These are the types of system testing:
- Interface testing – focusing on the client application
- Server testing – to test the attributes of the server
- Middleware testing – to ensure that the middleware is capable of interacting with all the machines connected in the network
- Network testing – to check whether the network can handle the load of the application
Monday, September 27, 2010
What is a test oracle ?
An oracle is a mechanism used by software testers and software engineers to determine whether a test has passed or failed. It is used by comparing the output(s) of the system under test, for a given test-case input, to the outputs that the oracle determines the product should have. Oracles are always separate from the system under test.
Common oracles include:
* specifications and documentation
* other products (for instance, an oracle for a software program might be a second program that uses a different algorithm to evaluate the same mathematical expression as the product under test)
* a heuristic oracle that provides approximate results or exact results for a set of a few test inputs,
* a statistical oracle that uses statistical characteristics,
* a consistency oracle that compares the results of one test execution to another for similarity,
* a model-based oracle that uses the same model to generate and verify system behavior
* or a human being's judgment (i.e. does the program "seem" to the user to do the correct thing?).
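As a small sketch of the "other products" style of oracle, an independent (slower but obviously correct) implementation can judge the output of the code under test; both functions below are hypothetical examples, not from the original post:

# Oracle sketch: a second, independent algorithm decides pass/fail.
import random

def fast_sum_of_squares(n):
    # code under test: closed-form formula
    return n * (n + 1) * (2 * n + 1) // 6

def oracle_sum_of_squares(n):
    # oracle: a different, obviously-correct algorithm
    return sum(i * i for i in range(1, n + 1))

for _ in range(100):
    n = random.randint(0, 1000)
    expected = oracle_sum_of_squares(n)
    actual = fast_sum_of_squares(n)
    assert actual == expected, f"mismatch for n={n}: {actual} != {expected}"
print("all oracle checks passed")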
What is equivalence partitioning?
A systematic process of identifying a set of input conditions to be tested. The two distinct steps involved are:
- Identify equivalence classes
- Identify test cases
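A tiny sketch of both steps for a hypothetical "age" field that must be between 18 and 60; the field and its limits are invented for illustration:

def is_valid_age(age):
    return 18 <= age <= 60

# Step 1: equivalence classes for the input condition "18 <= age <= 60".
# Step 2: one representative test case per class.
classes = [
    ("below range (invalid)", 10, False),   # any value < 18 should behave the same
    ("within range (valid)",  35, True),    # any value 18..60 should behave the same
    ("above range (invalid)", 75, False),   # any value > 60 should behave the same
]

for name, representative, expected in classes:
    assert is_valid_age(representative) == expected, name
    print(f"{name:<25} age={representative:<3} -> {is_valid_age(representative)}")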
What is Black Box Testing ?
Black box testing is derived from functional design specifications, without regard to the internal program structure. This test is done without any internal knowledge of system / product being tested. Functional tests examine the observable behavior of software as evidenced by its outputs, without any reference to internal functions. This is the essence of ‘black box’ testing.
• Black box tests better attack the quality target. It is an advantage to create the quality criteria from this point of view from the beginning.
• In black box testing, software is exercised over a full range of inputs and the outputs are observed for correctness. How those outputs are achieved, or what is inside the box are immaterial.
• Black box testing techniques can be applied once unit and integration testing are complete, i.e. each line of code has been covered through white-box testing
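For instance, black-box test cases are derived purely from the functional spec; in the hypothetical sketch below the only knowledge used is the made-up rule "shipping is free for orders of 50 or more, otherwise 5", never the implementation:

def shipping_cost(order_total):
    # stands in for the system under test; the tests below never look inside it
    return 0 if order_total >= 50 else 5

# Cases taken straight from the (hypothetical) spec, including the boundary.
spec_cases = [
    (10, 5),     # small order pays shipping
    (49.99, 5),  # just under the free-shipping threshold
    (50, 0),     # exactly at the threshold
    (200, 0),    # well above the threshold
]

for order_total, expected in spec_cases:
    assert shipping_cost(order_total) == expected, order_total
print("all black-box cases pass")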
Explain Static Testing
In static testing, the requirement specification, design, code or any document may be inspected and reviewed against a stated requirement and against standards using a guideline or checklist. It is part of the verification technique and is not necessarily restricted to the testing phase of the project lifecycle.
Static testing includes:
• Walk-through: code reading and inspections with a team
• Reviews: formal techniques for specification and design, code reading and inspections
Guidelines for conducting a code walkthrough:
• Have the review meeting chaired by the project manager.
• The programmer presents his/her work to the reviewers.
• The programmer walks through the code in detail, focusing on the logic of the code.
• Reviewers ask to walk through specific test cases.
• The chair resolves disagreements if the review team cannot reach agreement among themselves and assigns duties, usually to the programmer, for making specific changes.
• A second walk-through is scheduled.
Many studies show that the single most cost-effective defect reduction process is the classic structural test: the code inspection or walk-through. Code inspection is like proof-reading; it can find the mistakes the author missed, the typos and logic errors that even the best programmers can produce.
What is White Box Testing?
White box testing is done based on the structure of the code – the conditions, loops, branches, etc. in it.
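A minimal white-box sketch: the test cases below are chosen from the structure of a hypothetical grade() function so that every branch is executed at least once:

def grade(score):
    if score < 0 or score > 100:   # branch 1: invalid input
        return None
    if score >= 50:                # branch 2: pass
        return "pass"
    return "fail"                  # branch 3: fail

# One test per branch gives full branch coverage of the function above.
assert grade(-5) is None     # exercises the invalid-input branch
assert grade(75) == "pass"   # exercises the pass branch
assert grade(30) == "fail"   # exercises the fail branch
print("all branches covered")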
What is dynamic testing?
Testing that involves execution of the code is called 'dynamic testing'.
What is static testing?
Did you know that 'testing' does not always involve 'execution of code / programs'? Testing can also be done without executing even a single line of code, by means of 'reviews' – requirements specification review, design document review, functional specification review and so on. This too is a form of testing, and it is called 'static testing'.
What are Testing Techniques?
The testing techniques are:
- Static testing
- Dynamic testing
- White box testing
- Black box testing
What does the test bed phase consist of?
Test bed installation and configuration is one of its key activities. A place to copy the code, executables, other installables, etc. needs to be decided at this stage. Setting up a database (Oracle, DB2, SQL Server, etc.), creating tables, inserting data and defining access controls are other major activities in this phase. Configuring the network connectivity will also take considerable effort if testing involves multiple locations and teams.
What does the test design phase consist of?
Designing for test involves activities like:
- scoping out test coverage (one may use a traceability matrix for this),
- identifying different testing scenarios and appropriate test cases,
- preparation of test scripts and test data,
- base-lining these as per configuration management requirements.
Test scripts will be required if any testing tools are used. In most cases we may have the luxury of copying existing production data to the test bed for testing, but there are cases where production data may not be available, due to security or other reasons, and the project team has to prepare the test data. If the test data to be generated is huge, a decision has to be taken whether to purchase a tool for it or develop it in-house.
What does the test plan phase consist of?
This phase involves defining the test scope, test environment setup, network connectivity, deciding the different test phases, approaches and methodologies, planning for manual vs. automated testing, effort and person-hours estimation, the defect tracking mechanism, configuration management and risk management (both from the testing perspective), and evaluation and identification of testing and defect tracking tools.
Note that all of this cannot be done on day one of the project. Strategic decisions involving investments, such as setting up test environments (which may involve separate test beds / servers, hardware and software, say purchase of desktops, testing tools, etc.), will have to be made in the early stages.
Requirements for testing are identified based on the software requirements specification, functional spec, design spec and use case documents as appropriate.
What is the Cost of Ineffective Testing?
- Time – projects need to be reworked or abandoned
- Money – defects are 100 to 1000 times more costly to find and repair after deployment
- Quality – products are released with undiscovered or unresolved defects
Why Testing?
• Verifies that all requirements are implemented correctly (both for positive and negative conditions)
• Identifies defects before software deployment
• Helps improve quality and reliability
• Makes software predictable in behavior
• Reduces incompatibility and interoperability issues
• Helps marketability and retention of customers
IEEE definition of testing
IEEE definition of testing: “Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.”
Other Dimensions of Quality
• Portability – a measure of the effort needed to move software from one environment to another. The effort may also involve data migration, documentation and other similar activities.
– Hardware independence: the ability of a system or software to produce the same set of results independent of the hardware it is installed on.
– Interoperability: the ability of two or more systems, subsystems or components to exchange information and to make further use of the information exchanged.
• Reliability – according to ANSI, software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. However, it is not a direct function of time. It is one of the most important attributes of quality and becomes harder to achieve as the complexity of the software increases.
– Error tolerance: the ability of a system or component to continue normal operation despite the presence of erroneous inputs, whether from the user or from a different interfacing system.
– Availability: the proportion of time that the system is up and running, i.e. available to its intended users. It can be measured as Uptime / (Uptime + Downtime); a short worked example follows below.
• Usability – a measure of how easy or difficult it is to use the software. It refers to the efficiency, comfort, safety and satisfaction with which the intended users of the software, under a variety of conditions, can perform their tasks.
– Understandability: the measure of effort required to understand the overall functionality of a system.
– Learnability: how easily users can accomplish basic tasks the first time they encounter the system's features.
– Operability: the ability to keep a system in a functioning and operating condition.
– Communicativeness: the ability to express or communicate the messages or status of the system to the external world in a clear and unambiguous way.
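As a small worked example of the availability measure above (the uptime and downtime hours are made-up figures):

# Availability = Uptime / (Uptime + Downtime), computed for one 30-day month.
uptime_hours = 718.5      # hours the system was up
downtime_hours = 1.5      # hours of outage in the same month

availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"Availability = {availability:.4f} ({availability * 100:.2f}%)")
# -> Availability = 0.9979 (99.79%)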
What are dimensions of Software Quality ?
Quality has many dimensions; these are considered here from the software engineering point of view. Here are their details:
• Functionality – the core dimension, which focuses on the required functionality of the software.
– Completeness: all requirements that are specified are implemented in the software.
– Correctness: all features implemented are as per the specified requirements and are correct.
– Compatibility: the software is compatible with other operating systems, browsers, databases, clients, hardware, systems, etc. as specified.
• Performance – the dimension focusing on the overall performance of the system from response time, resource consumption and other perspectives.
– Time: response time for a given transaction; the speed with which the system responds to the user's input, usually expressed in milliseconds.
– Resources: a measure of the utilization of resources like CPU, memory and disk – which resources are critical and what their parameters are.
• Maintainability – the ease with which a software system or its components can be modified to correct defects (if any), improve performance or other attributes, adapt to a new or changed environment, or add new features.
– Correctability: the measure of effort required to correct software defects and to cope with user complaints.
– Expandability: the measure of effort required to modify or improve the efficiency of a system or its components. Note that it may be very difficult to expand software that is not modular by design.
– Testability: the measure of effort required to test the software efficiently. If the requirements are not clear or are ambiguous, it becomes very difficult to test the software, and the same is true of haphazardly designed software.
What Is Quality?
“The degree to which a system, component or process meets customer or user needs or expectations.”
“The degree to which a system, component or process meets requirements.”
– IEEE
Sunday, September 26, 2010
What Is a Requirements Traceability Matrix?
A requirements traceability matrix is the mapping between customer requirements and the test cases created. It is basically used to ensure that all requirements are tested and that test coverage is 100%.
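A tiny sketch of the idea, mapping hypothetical requirement IDs to hypothetical test-case IDs and flagging anything left uncovered:

# Requirements-traceability sketch: requirement -> covering test cases.
traceability = {
    "REQ-001 Login with valid credentials":  ["TC-01", "TC-02"],
    "REQ-002 Lock account after 3 failures": ["TC-03"],
    "REQ-003 Password reset by email":       [],          # not yet covered
}

for requirement, test_cases in traceability.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement:<45} -> {status}")

uncovered = [r for r, tcs in traceability.items() if not tcs]
coverage = 100 * (len(traceability) - len(uncovered)) / len(traceability)
print(f"Requirement coverage: {coverage:.0f}%")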
What is monkey Testing?
Monkey testing is the art of generating random tests via automation. There are smart monkeys and dumb monkeys. Dumb monkeys randomly exercise functionality in an application, for instance randomly clicking UI elements or randomly inserting data, rarely validating the output.
They're often used to expose issues like memory leaks. Smart monkeys, on the other hand are, well, smarter! They randomly interact with the software being tested, but have state or model information which allows them to validate the results of interaction. Some smart monkeys are so smart that they actually queue up new tests based on the results of previous tests.
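A "dumb monkey" sketch: random inputs are thrown at a hypothetical parse_quantity() function with no model of the expected output, the only check being that it never fails in an unexpected way:

import random
import string

def parse_quantity(text):
    """Hypothetical unit under test."""
    return int(text.strip())

for _ in range(1000):
    # generate a random string of printable junk of random length
    junk = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 20)))
    try:
        parse_quantity(junk)
    except ValueError:
        pass                      # a cleanly rejected input is fine
    except Exception as exc:      # anything else is the kind of crash a monkey finds
        print(f"crash on input {junk!r}: {exc!r}")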
What Is Test Strategy?
A testing strategy, on the other hand, is a holistic view or high-level approach to how you will test a product: the approach you will take and the tools (and methodologies) you will use to deliver the highest possible quality at the end of a project.
What Is Testing Methodology?
A testing methodology is a tool or method used to test an application; some methodologies include automated UI testing, regression testing, and so forth.
Some might argue that testing techniques such as pairwise-combinatorial interdependence modeling or model-based testing are also methodologies.
What is Smoke Testing?
Smoke testing is a non-exhaustive way of testing the software, ascertaining that most of the crucial functions of the application work, without bothering with finer details. For example, before a build is assigned for testing you may subject the software to smoke testing or sanity testing.
Saturday, September 25, 2010
What is Compatibility Testing
Compatibility testing is the process of testing whether software is compatible with other elements such as a mix of browsers, operating systems or hardware.
Testing Terms : What is Interoperability Testing?
Testing the ability of the application under test to interoperate with existing systems.
Suppose you are installing the application under test in a new environment where a set of software already exists; installation of the new software should not affect the existing software.
What is Usability Testing ?
Usability testing is also called 'testing for user-friendliness'. This testing is done when the user interface of the application is an important consideration and needs to be suited to a specific type of user.
Basically, in usability testing you are checking how easy the application is to use.
What is Navigation Testing?
Navigation testing ensures that each link goes to the place where it is supposed to go and that there are no broken links.
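A minimal broken-link check sketch in Python; the start URL is a placeholder, and a real site would also need crawl depth, de-duplication and authentication handling:

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START_URL = "http://example.com/"   # hypothetical page under test

class LinkCollector(HTMLParser):
    """Collect the href of every anchor tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START_URL, value))

page = urllib.request.urlopen(START_URL, timeout=10).read().decode("utf-8", "replace")
collector = LinkCollector()
collector.feed(page)

for link in collector.links:
    try:
        status = urllib.request.urlopen(link, timeout=10).status
        print(f"{status}  {link}")
    except Exception as exc:
        print(f"BROKEN {link}: {exc}")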
How do you test a World Wide Web application?
Web sites are essentially client/server applications - with web servers and 'browser' clients.
Consideration should be given to the interactions between html pages, web services, encrypted communications, Internet connections, firewalls, applications that run in web pages (such as javascript, flash, other plug-in applications), the wide variety of applications that could run on the server side, etc.
Additionally, there are a wide variety of servers and browsers, mobile platforms, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
- Navigation Testing : Testing the breadcrumbs
- Content Testing : Test Images and other text contents
- What are the expected loads on the server [ Load Testing ]
- What kind of performance is required under such loads (such as web server response time, database query response times).
- What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, load generation appliances, etc.)?
- Who is the target audience? What kind and version of browsers will they be using, and how extensively should testing be for these variations? What kind of connection speeds will they by using? Are they intra- organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wider variety of connection speeds and browser types)
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should flash, applets, etc. load and run)?
- What kinds of security (firewalls, encryption, passwords, functionality, etc.) will be required and what is it expected to do? How can it be tested?
- What internationalization / localization / language requirements are there, and how are they to be verified?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.
- Which HTML and related specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics, 508 compliance, etc. throughout a site or parts of a site?
- Will there be any development practices/standards utilized for web page components and identifiers? These can significantly impact test automation.
- How will internal and external links be validated and updated? how often?
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
- How are flash, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
What is a Test Plan
A test plan is a document detailing a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be for testing the product.
What is Extreme Programming?
Extreme Programming (XP) is a software development methodology intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles (timeboxing), which is intended to improve productivity and introduce checkpoints where new customer requirements can be adopted.
Testing Basics: What is testing?
Exercising a system or product with the intent of finding problems or bugs is referred to as testing.
Friday, September 24, 2010
WinRunner: What is contained in the WinRunner GUI Map file?
WinRunner stores information it learns about a window or object in a GUI map. When WinRunner runs a test, it uses the GUI map to locate objects: it reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each object in the GUI map file has a logical name and a physical description.