sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======           September 2002            =======+

QTN is distributed to subscribers worldwide to support the Software
Research, Inc. (SR), TestWorks, QualityLabs, and eValid user
communities and other interested parties, and to provide information
of general use to the worldwide internet and software quality and
testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2002 by Software Research, Inc.


                       Contents of This Issue

   o  QW2002: Conference Highlights

   o  IEEE Computer: Special Issue on Web Services Computing

   o  Load Testing Terminology, by Scott Stirling

   o  Tenth International Symposium on the Foundations of Software
      Engineering (FSE-10)

   o  eValid: A Compact Buyers' Guide

   o  Special Issue on Web Services For The Wireless World

   o  3rd International Conference on Web Information Systems
      Engineering (WISE 2002)

   o  SR/Institute's Software Quality Hotlist

   o  QTN Article Submittal, Subscription Information


                   QW2002: Conference Highlights

                      Fifteenth International
          Software and Internet Quality Week Conference,
            3-6 September 2002, San Francisco, CA  USA

* Best Paper, Presentation Awards

  Here are the QW2002 Best Paper and Best Presentation contest
  results:

    > Best Paper.  The Best Paper award went to Mr. Giri
      Vijayaraghavan for his paper "Bugs in Your Shopping Cart: A
      Taxonomy".  See the paper abstract at:

    > Best Presentation.  The Best Presentation award went to Dick
      Hamlet for his presentation "Science, Computer 'Science',
      Mathematics, and Software Engineering".  See the paper
      abstract at:

* Keynote Speaker Slides Available

  For those of you who asked, here are the URLs for the
  presentations by Keynote Speakers:

    > Fred Baker: "Internet Reliability Under Stress"

    > Robert Binder: "Achieving Very High Reliability for Ubiquitous
      Information Technology"

    > Don O'Neill: "Competitiveness Versus Security"

* Photos Available

  A selection of photos from QW2002 can be seen at:


       IEEE Computer: Special Issue on Web Services Computing

Computer (IEEE), the flagship publication of the IEEE Computer
Society, invites articles relating to integration architectures for
Web Services and/or application case studies that use Web Services
technology, to appear in August 2003.  Guest editors are Jen-Yao
Chung, IBM T. J. Watson Research Center; Kwei-Jay Lin, University
of California, Irvine; and Rick Mathieu, Saint Louis University.

Web services are Internet-based, modular applications that perform a
specific business task and conform to a  particular technical
format. The technical format ensures each of these self-contained
business services is an application that will easily integrate with
other services to create a complete business process. This
interoperability allows businesses to dynamically publish, discover,
and aggregate a range of Web services through the Internet to more
easily create innovative products, business processes and value
chains. Typical application areas are business-to-business
integration, content management, e-sourcing, composite Web Service
creation, and design collaboration for computer engineering.

** Proposed topics include

- Web Services architecture and security; Frameworks for building
  Web Service applications; Composite Web Service creation and
  enabling infrastructures
- Web Services discovery; Resource management for web services;
  Solution Management for Web Services
- Dynamic invocation mechanisms for Web Services; Quality of service
  for Web Services; Web Services modeling; UDDI enhancements; SOAP
- Case studies for Web Services; E-Commerce applications using Web
  Services; Grid based Web Services applications

Submissions should be 4,000 to 6,000 words long and should follow
the magazine's guidelines on style and presentation. All submissions
will be anonymously reviewed in accordance with normal practice for
scientific publications. Submissions should be received by 15
January 2003 to receive full consideration.

Author guidelines are available on the magazine's web site.  Please
submit your electronic manuscript in PDF or Postscript.  All
submissions will be peer reviewed.  Send queries to any of the
guest editors.

** Important Dates

Submission Deadline:         January 15, 2003
Notification of Acceptance:  April 26, 2003
Final Version of Paper:      May 24, 2003
Publication Date:            August 2003

** Related Web Links

CFP for Computer (IEEE) Special Issue on Web Services Computing
Computer (IEEE) Home Page
IEEE Task Force on E-Commerce
IEEE Conference on E-Commerce (CEC'03)

Martin Bichler, PhD
Deep e-Commerce, IBM T.J. Watson Research Center
tel: 914-945-3310  (T/L 862-3310)  fax: 914-945-2141


                      Load Testing Terminology
                           Scott Stirling


What is the difference between load, stress and performance testing?
Why do these three types of testing seem to belong together, perhaps
with others such as scalability testing and benchmarking?  Questions
such as these, which I have encountered in my own career as a
sometime load tester, on Web forums, and in discussions in the
workplace, are what this article proposes to answer.  It would be
nice to point to some seminal piece of QA literature that settles
these questions definitively, but surprisingly, this is not the
case.
The use of these terms in the QA literature varies to the point that
sometimes load testing is defined as a type of performance test, or
performance test is a type of load test, or (somewhat common,
actually) load testing is not worth mentioning explicitly.  To what
degree definitions proposed in the literature have been arrived at
independently versus influenced by or directly based on previous
published definitions is impossible to tell.

The signature characteristic of background, performance and stress
testing is that they all require some kind of definite workload (the
terms "workload" and "load" are interchangeable in this context)
exercising the system or component under test (SUT or CUT) during
testing.  Load is an indispensable part of the test setup for all
these types of testing.  There are also cases where simulated load
is crucial to other types of testing, such as aspects of security
testing, where it may be required to simulate load-related
conditions giving rise to security problems (such as a buffer
overrun or a denial-of-service).

Reliability testing sometimes requires load as a prerequisite to
measuring realistic, time-dependent phenomena such as mean-time-
between-failures (MTBF) in transaction or batch processing systems.
But reliability is a concern for other criteria, such as functional
accuracy and consistency, where load may be incidental or
irrelevant.  Reliability and security are both cross-cutting
concerns.
                            Load Testing

Definition: Any type of testing where the outcome of a test is
dependent on a workload (realistic or hyper-realistic) explicitly
characterized, simulated and submitted to the SUT.

Discussion: "Load testing" is a generic term, rarely to be found
defined in a way that really makes sense when put side by side with
performance and stress testing.  It's not often found in the QA or
engineering literature, but is common in everyday speech and in the
names of popular load generating test tools (WebLOAD, LoadRunner,
e-Load, QALoad, etc.).   Sometimes it is mentioned along with
performance testing but not defined separately.  Sometimes it is
conspicuously absent, such as from the IEEE and British Computer
Society's glossaries of software engineering and testing terminology
(see references below).

Nevertheless, here is one definition from John Musa's Software
Reliability Engineering: "Load test involves executing operations
simultaneously, at the same rates and with the same other
environmental conditions as those that will occur in the field. Thus
the same interactions and impact of environmental conditions will
occur as can be expected in the field.  Acceptance test and
performance test are types of load test" (p. 8).

Is acceptance test really a type of load test, as Musa claims?  I
argue that it is not, although one might include a load test,
probably a performance test, as part of an acceptance test suite,
which would normally include other types of tests such as
installation and functional tests.

Robert Binder's excellent "Testing Object-Oriented Systems: Models,
Patterns, and Tools" mentions load testing as a variation on
performance testing (p. 744), with no justification or discussion.
It's worth mentioning that the performance section of Binder's book
is a cursory overview in a massive tome that covers just about every
conceivable topic in QA.

Glenford Myers' classic "The Art of Software Testing" does not
define load testing, but does define performance, stress, and
volume testing.
The closest thing I know of to a bible of performance testing and
analysis, Raj Jain's "The Art of Computer Systems Performance
Analysis," doesn't mention load testing at all.

The recent "Performance Solutions, a Practical Guide to Creating
Responsive Scalable Software," by Connie U. Smith and Lloyd G.
Williams, talks a lot about performance analysis and scalability,
and never mentions load testing (this is more of a development book
than a testing book, but it includes sections on performance
evaluation and testing).
Before moving on, it is worth discussing the term "load" or
"workload" in the context of software testing.  To understand it, we
can conceptualize the system or component under test as a provider
of one or more services.  Services, depending on the software, may
include things such as retrieving data from a database, transforming
data, performing calculations, sending documents over HTTP,
authenticating a user, and so on.  The services a system or
component provides are performed in response to requests from a
user, or possibly another system or component.  The requests are the
workload.  Raj Jain defines workloads as: "The requests made by the
users of the system" (Jain, 1990, p. 4).  If interested in further
reading, Jain's book contains three chapters on workload selection
and characterization.
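
A minimal sketch of the "workload as requests" idea above: the SUT
is modeled as a single service function, and the workload is simply
the list of requests submitted to it.  All names here are
illustrative assumptions, not part of any real tool or system.

```python
import time

def sut_service(request):
    """Hypothetical stand-in for the system under test (SUT)."""
    return request.upper()

# The workload, per Jain's definition: the requests made by the
# users of the system.
workload = ["login", "query", "update", "logout"]

def submit(workload):
    """Submit each request to the SUT and record its response time."""
    timings = []
    for request in workload:
        start = time.perf_counter()
        sut_service(request)
        timings.append(time.perf_counter() - start)
    return timings

timings = submit(workload)
```

Characterizing a real workload means deciding what requests appear
in that list, at what rates, and in what mix; Jain's chapters on
workload selection cover exactly that question.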

                         Background Testing

Definition: Background testing is the execution of normal functional
testing while the SUT is exercised by a realistic work load.  This
work load is being processed "in the background" as far as the
functional testing is concerned.

Discussion: Background testing differs from normal functional
testing in that another test driver submits a constant "background"
workload to the SUT for the duration of the functional tests.  The
background workload can be an average or moderate number of
simulated users, actively exercising the system.

The reason for doing background testing is to test the system's
functionality and responsibilities under more realistic conditions
than single-user or simulated single-user functional testing allows.
It is one thing to verify system functionality while a system is
virtually idle and isolated from all interference and interaction
with multiple simultaneous users.  But it's more realistic to test
functionality while the system is being exercised more as it would
be in production.

The only place where I have seen background testing described is
Boris Beizer's "Software System Testing and Quality Assurance."
Beizer includes background, stress and performance testing in the
same chapter, noting that "all three require a controlled source of
transactions with which to load the system" (p. 234).
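
Beizer's background-testing idea can be sketched as follows: one
thread keeps a steady "background" workload on the SUT while
ordinary functional assertions run in the foreground.  The
`sut_service` function is a hypothetical stand-in, not a real API.

```python
import threading

def sut_service(x):
    """Hypothetical stand-in for the system under test."""
    return x * 2

stop = threading.Event()

def background_load():
    # Constant background workload, submitted for the duration
    # of the functional tests.
    while not stop.is_set():
        sut_service(1)

loader = threading.Thread(target=background_load)
loader.start()
try:
    # Normal functional tests, run while the background load is active.
    assert sut_service(2) == 4
    assert sut_service(0) == 0
finally:
    stop.set()
    loader.join()
```

In a real background test the loader would be a separate test
driver (or a second tool instance) rather than a thread in the
same process, but the structure is the same.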

                           Stress Testing

Definition: Stress testing submits the SUT to extremes of workload
processing and resource utilization.  Load is used to push the
system to and beyond its limits and capabilities.

Discussion: The general approach is to drive one or more components
or the entire system to processing, utilization, or capacity limits
by submitting as much workload as is necessary (all at once or
increasing incrementally) to achieve that end, even forcing it to
its breaking point.

The purpose of stress testing is to reveal defects in software that
only appear, or are more easily noticed, when the system is stressed
to and beyond its limits.  In practice, one may take note of
performance characteristics when the system is in extremis in order
to supplement intuitions or test assumptions about performance
limits.  If so, one is engaging in a type of performance analysis in
conjunction with stress testing.  "Spike testing," to test
performance or recovery behavior when the SUT is stressed with a
sudden and sharp increase in load, should be considered a type of
load test.  Whether it deserves its own classification or should be
a subtype of stress or performance testing is an interesting
question.  Like most natural-language classification terms, these
have fuzzy boundaries at the edges of the classes.  Some terms are
better understood as describing different aspects of the same
activity.
While some sources, such as Myers, define stress and volume testing
separately, in practice they are different shades of the same type
of test.
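
The incremental approach described above can be sketched like this.
The response-time model is invented purely for illustration; a real
stress test would submit actual load to the SUT and measure.

```python
def response_time(load):
    """Hypothetical response-time model: flat up to capacity,
    degrading sharply beyond it."""
    capacity = 100
    if load <= capacity:
        return 0.01
    return 0.01 * (load - capacity)

def find_breaking_point(limit_seconds, step=10, max_load=300):
    """Increase the offered load step by step until the response
    time breaches the limit -- the system's breaking point."""
    load = step
    while load <= max_load:
        if response_time(load) > limit_seconds:
            return load
        load += step
    return None

breaking_point = find_breaking_point(limit_seconds=0.05)
```

Submitting the maximum load all at once instead of stepping up
incrementally turns the same harness into a spike test.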

                        Performance Testing

Definition: Performance testing proves that the SUT meets (or does
not meet) its performance requirements or objectives, specifically
those regarding response time, throughput, and utilization under a
given (usually, but not necessarily, average or normal) load.

Discussion: Performance testing is one term that is used and defined
pretty consistently throughout the QA literature.  Yet,
conspicuously missing from some resources is a definition of
"performance" (e.g., Jain, 1990 and BS-7925-1 never explicitly
define "performance"). Maybe it is taken for granted that everyone
knows what is meant by "performance."  In any case, the IEEE
Standard Glossary of Software Engineering Terminology has a fine
definition:
"The degree to which a system or component accomplishes its
designated functions within given constraints, such as speed,
accuracy, or memory usage."

The key words here are "accomplishes" and "given constraints." In
practical use the "given constraints" are expressed and measured in
terms of throughput, response time, and/or resource utilization
(RAM, network bandwidth, disk space, CPU cycles, etc.).  "Accuracy"
is an interesting example, which would seem to better belong with
reliability testing, where reliability testing is to functional
testing as scalability testing is to performance testing.  The
problem with including accuracy as a performance metric is that it
is easy to confuse with accuracy as a functional requirement.  By
stating that performance is "the degree to which a system or
component accomplishes its designated functions within given
constraints," the notion of functional accuracy is tacitly assumed
in "accomplishes."

Whereas "load testing" is rarely mentioned or discussed separately,
performance testing methodology, case studies, standards, tools,
techniques and, to a lesser extent, test patterns abound.  An
interesting project to track for performance test patterns and a
standardized approach to performance testing is the British Computer
Society's work in progress on performance testing.  This draft, in
version 3.4 currently, includes two documents with a few test
patterns and some useful information on performance testing
terminology and methodology, including a glossary (rough and in its
early stages).  It can be found on the BCS SIGST web site, from
the "Work in Progress" link in the left-hand frame.

One of the best, brief overviews of performance testing (and load
generation) is in Boris Beizer's "Software System Testing and
Quality Assurance."  For example:

"Performance testing can be undertaken to: 1) show that the system
meets specified performance objectives, 2) tune the system, 3)
determine the factors in hardware or software that limit the
system's performance, and 4) project the system's future load-
handling capacity in order to schedule its replacements" (Beizer,
1984, p. 256.).

Performance testing doesn't have to take place only at system scope,
however.  For example, performance can be tested at the unit level,
where one may want to compare the relative efficiency of different
sorting or searching algorithms, or at the subsystem level, where
one may want to compare the performance of different logging
implementations.
Scalability testing is a subtype of performance test where
performance requirements for response time, throughput, and/or
utilization are tested as load on the SUT is increased over time.

Benchmarking is a specific type of performance test with the purpose
of determining performance baselines for comparison.  Baseline
comparisons are most useful for evaluating performance between
alternate implementations, different implementation versions, or
different configurations.  It is extremely important to explicitly
quantify workloads for benchmark tests, even if the workload is no
load. Likewise, the load must be consistent and repeatable.
Otherwise, reliable comparisons between results cannot be made.
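
The benchmarking points above can be sketched as follows: two
alternate implementations are timed against the same explicitly
quantified, repeatable workload, so the comparison is meaningful.
The search functions are an illustrative example, not from any
source cited here.

```python
import bisect
import timeit

def linear_search(items, target):
    """First implementation: scan the list element by element."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(items, target):
    """Alternate implementation: binary search on a sorted list."""
    i = bisect.bisect_left(items, target)
    return i if i < len(items) and items[i] == target else -1

# An explicitly quantified, repeatable workload: the same sorted
# list and the same target for both implementations.
workload = list(range(10_000))
target = 9_999

t_linear = timeit.timeit(lambda: linear_search(workload, target), number=100)
t_binary = timeit.timeit(lambda: binary_search(workload, target), number=100)
```

Because the workload is fixed and repeatable, re-running the
benchmark after a code or configuration change yields results that
can be compared directly against the baseline.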

The tools for performance testing are generally the same as those
used for stress testing or background testing.  The COTS tools for
stress testing can usually be used for performance testing and
vice-versa (hence the names of the popular WebLoad and LoadRunner,
which make reference to the general load generating capabilities of
the tools rather than a particular subtype of load testing).  But
whereas a simple URL "hammering" tool such as Apache ab (included in
the Apache HTTPD server distribution) or Microsoft WAST will often
suffice for stress testing of very straightforward or simplified
applications and components, performance testing often requires
more complex user
simulation and control over parameters of the workload being sent to
the SUT (think time, randomized delay, realistic emulation of
virtual users, authentication handling and client session
management), greater verification capabilities (such as parsing and
verification of HTML responses or HTTP headers), and robust metrics
gathering capabilities to validate response time, throughput, and
utilization objectives.
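
The metrics-gathering side mentioned above can be sketched as a
simple objective check: collect per-request response times, then
verify throughput and a 95th-percentile response-time objective.
The thresholds, the nearest-rank percentile formula, and the sample
measurements are all illustrative assumptions.

```python
def check_objectives(response_times, elapsed_seconds,
                     max_p95=0.5, min_throughput=10.0):
    """Return (passed, p95, throughput) for the given measurements."""
    ordered = sorted(response_times)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]       # 95th percentile
    throughput = len(response_times) / elapsed_seconds  # requests/second
    passed = p95 <= max_p95 and throughput >= min_throughput
    return passed, p95, throughput

# Illustrative measurements: 100 requests completed in 5 seconds.
times = [0.1] * 95 + [0.4] * 5
ok, p95, throughput = check_objectives(times, elapsed_seconds=5.0)
```

Commercial load tools report the same kinds of aggregates; the point
is that a performance test passes or fails against stated numeric
objectives, not against an impression of "fast enough."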

                       Sources and References

"Art of Computer Systems Performance Analysis, The: Techniques for
Experimental Design, Measurement, Simulation, and Modeling," Raj
Jain, John Wiley & Sons: New York, 1990

"Art of Software Testing, The," Glenford Myers, John Wiley & Sons:
New York, 1979

"Performance Solutions, a Practical Guide to Creating Responsive
Scalable Software," Connie U. Smith and Lloyd G. Williams, Addison
Wesley: Boston, 2002

"Software Reliability Engineering," John Musa, McGraw-Hill: New
York, 1999

"Software System Testing and Quality Assurance," Boris Beizer, Van
Nostrand Reinhold: New York, 1984

"Testing Object-Oriented Systems: Models, Patterns, and Tools,"
Robert V. Binder, Addison Wesley: Boston, 1999

"BS 7925-1, Vocabulary of terms in software testing, version 6.2,"
<>, British Computer Society
Specialist Interest Group in Software Testing (BCS SIGST)

"Performance Testing Draft Document V3.4,"
British Computer Society Specialist Interest Group in Software
Testing (BCS SIGST)

"610.12-1990, IEEE Standard Glossary of Software Engineering
Terminology," <>


                          ACM SIGSOFT 2002
                Tenth International Symposium on the
            Foundations of Software Engineering (FSE-10)

                        November 18-22, 2002
                    Westin Francis Marion Hotel
                  Charleston, South Carolina, USA


SIGSOFT 2002 brings together researchers and practitioners from
academia and industry to exchange new results related to both
traditional and emerging fields of software engineering. FSE-10
features seventeen paper presentations on such topics as mobility,
dynamic and static program analysis, aspect-oriented programming,
requirements analysis, modeling, and dynamic response systems. Three
keynotes, a student research forum with posters, and a reception
round out the FSE-10 program. SIGSOFT 2002 also features two
workshops and six half-day tutorials.


  * Keynote Speakers:

    Gregory D. Abowd, Georgia Institute of Technology
    Gerard J. Holzmann, Bell Laboratories, 2001 ACM SIGSOFT
      Outstanding Research Award Winner
    Gary McGraw, Chief Technology Officer, Cigital

  * FSE-10 Student Research Forum, Wednesday November 20

  * SIGSOFT Distinguished Paper Award

  * Educator's Grant Program, designed to help increase the
    participation of women and minorities in software engineering.
    Application DEADLINE: October 1, 2002

  * Student Support, through the SIGSOFT Conference Attendance Program
    for Students (CAPS). <>

TUTORIALS, all half day

 "Viewpoint Analysis and Requirements Engineering"
 Bashar Nuseibeh, The Open University, UK and Steve Easterbrook,
 University of Toronto

 "Micromodels of Software: Modeling and Analysis with Alloy"
 Daniel Jackson, MIT

 "Software Engineering Education: Integrating Software Engineering
 into the Undergraduate Curriculum"
 Thomas Horton, University of Virginia and W. Michael McCracken,
 Georgia Institute of Technology

 "Software Model Checking"
 Matthew Dwyer, Kansas State University

 "Internet Security"
 Richard A. Kemmerer, University of California, Santa Barbara

 "Software Engineering Education: New Concepts in
 Software Engineering Education"
 Thomas Horton, University of Virginia; Michael Lutz, Rochester
 Institute of Technology; W. Michael McCracken, Georgia Institute of
 Technology; Laurie Williams, North Carolina State University


 Workshop on Self-Healing Systems (WOSS '02), November 18-19

 Workshop on Program Analysis for Software Tools and Engineering
 (PASTE '02), November 18-19

SIGSOFT 2002/FSE-10 General Chair
   Mary Lou Soffa, Univ. of Pittsburgh,

SIGSOFT 2002/FSE-10 Program Chair
   William Griswold, Univ. of Calif., San Diego,

Sponsored by ACM SIGSOFT; in cooperation with ACM SIGPLAN.


                 eValid -- A Compact Buyers' Guide

The eValid website testing solution has natural superiority over
competing methods and technologies for a wide range of WebSite
testing and quality assessment activities.  eValid is the product of
choice for QA/Testing of any application intended for use from a
Web browser.
Read What People Are Saying about eValid!  For the full story see
the Frequently Asked Questions (FAQs).

* Product Benefits

  See the eValid general Product Overview at:

  There is a short summary of eValid's main Product Advantages at:

  There is a more detailed table giving a Features and Benefits
  summary at:
  For specific kinds of problems see the Solution Finder at:

  For more details on commercial offerings look at the Comparative
  Product Summary at:

* Technology Aspects

  See also the description of eValid as an Enabling Technology:

  From a technology perspective look at the Test Technology
  Comparison at:

  Also, you may be interested in the Comparative Testing Levels at:

  eValid is very cost effective, as illustrated in the Return on
  Investment (ROI) guide at:

  For quick reference, look at this short General Summary of eValid
  features and capabilities:

* Link Checking and Site Analysis

  There is a compact Site Analysis Summary:

  After that, take a look at a good summary of eValid's inherent
  advantages as a Site Analysis engine:

  As a link checker eValid has 100% accuracy -- presenting a
  uniformly client-side view and immune to variations in page
  delivery technology.  Here is a complete Link Check Product
  description:
  eValid offers a unique 3D-SiteMap capability that shows how sets
  of WebSite pages are connected.  See:

* Functional Testing

  There is a compact Functional Testing summary at:

  For a good summary of inherent eValid functional testing
  advantages see:

  eValid's capabilities for Validation of WebSite Features are
  unparalleled, as illustrated in this page:

* Server Loading

  eValid loads web servers using multiple copies of eValid to assure
  100% realistic test playback.  There is a concise Load Testing
  Summary that hits the high points at:

  This table summarizes the inherent eValid advantages for loading:

  How many simultaneous playbacks you get on your machine is a
  function of your Machine Adjustments.  Your mileage will vary but
  you can almost certainly get 50 simultaneous full-fidelity users.
  See this page for details:

  In parallel with that loading scenario, you may wish to use the
  eVlite playback agent to generate a lot of activity, as described
  at:
* Page Timing/Tuning

  There is a short Timing/Tuning Summary at:

  As a browser, eValid can look inside individual pages and provide
  timing and tuning data not available to other approaches.  See the
  example Page Timing Chart to see the way timing details are
  presented in eValid's unique timing chart:

  Here is an explanation of the Page Tuning Process with a live
  example:
* Pricing and Licensing.

  Here are the official Suggested List Prices for the eValid product
  suite:  <>

  The detailed Feature Keys description shows how eValid license
  keys are organized:

  See also the details on Maintenance Subscriptions:

* Current Product Version and Download Details.

  Features added in eValid's current release are given in the
  Product Release Notes for Ver 3.2 at:

  For an evaluation copy of eValid go to the Product Download
  Instructions at:


        Special Issue on Web Services for the Wireless World
         IEEE Transactions on Systems, Man, and Cybernetics

               Submission deadline: January 31, 2003.

The growth and widespread adoption of Internet technologies have
enabled a wave of innovations that are having an important impact
on the way businesses deal with their partners and customers.  To
remain competitive, traditional businesses are under pressure to
take advantage of the information revolution the Internet and the
web have brought about.  Most businesses are moving their
operations to the web for more automation, efficient business
processes, personalization, and global visibility.

Web services are one of the promising technologies that could help
businesses become more web-oriented.  In fact, Business-to-Customer
(B2C) cases identified the first generation of web services.  More
recently, businesses started using the web as a means to connect
their business processes with other partners, e.g. creating B2B
(Business-to-Business) web services.  Despite all the efforts that
are spent on web services research and development, many businesses
are still struggling with how to put their core business competences
on the Internet as a collection of web services. For instance,
potential customers need to retrieve these web services from the web
and understand their capabilities and constraints if they wish to
fuse them into combinations of new value-adding web services.  The
advent of wireless technologies, such as palmtops and advanced
mobile telephones, has opened the possibility to provide facilities
on the spot, no matter where these customers are located (anytime
and anywhere).  Businesses that are eager to get engaged in the
market of wireless web services (also denoted m-services) are
facing complicated technical, legal, and organizational challenges.

This special issue aims at presenting recent and significant
developments in the general area of wireless web services. We seek
original and high quality submissions related (but not limited to)
to one or more of the following topics:
- Composition of m-services vs. web services.
- Description, organization, and discovery of m-services.
- Web service/M-service ontologies and semantic issues.
- Personalization of m-services.
- Security support for m-services.
- Agent support for m-services.
- M-service brokering.
- Pricing and payment models.
- M-service agreements and legal contracts.
- Content distribution and caching.
- Technologies and infrastructures for m-services vs. web services.
- Wireless communication middleware and protocols for m-services.
- Interoperability of m-services with web services.

Guest editors
- Boualem Benatallah
  The University of New South Wales, Sydney, Australia.
- Zakaria Maamar
  Zayed University, Dubai, United Arab Emirates.


                The 3rd International Conference On
          Web Information Systems Engineering (WISE 2002)
       Workshops: 11th Dec 2002, Conference: 12-14th Dec 2002
                       Grand Hyatt, Singapore


Held in conjunction with the International Conference on Asian
Digital Library ICADL2002.  Organised by School of Computer
Engineering, Nanyang Technological University, & WISE Society.

The aim of this conference is to provide an international forum for
researchers, professionals, and industrial practitioners to share
their knowledge in the rapidly growing area of Web technologies.

The WISE 2002 conference will feature two keynote speeches (12th and
13th December) and thirty-four paper presentations spread over three
days (12-14th December).

Participants may also attend the keynote speech (13th December) of
the ICADL2002 conference which will be hosted in the same hotel.

Tutorial topics are as follows:

  * "Web Mining : A Bird's Eyeview", Sanjay Kumar Madria, Department
    of Computer Science, University of Missouri-Rolla, USA.
  * "XML: The Base Technology for the Semantic Web", Erich J.
    Neuhold Fraunhofer IPSI and Tech. Univ. Darmstadt, Germany.
  * "Business-to-Business (B2B) E-Commerce: Issues and Enabling
    Technologies", Boualem Benatallah, University of New South
    Wales, Sydney, Australia.
  * "Web Services for the Technical Practitioner", Jan Newmarch,
    School of Network Computing, Monash University, Australia.


              SR/Institute's Software Quality HotList

SR/Institute maintains this list of links to selected organizations
and institutions which support the software quality and software
testing area.  Organizations and other references are classified by
type, by geographic area,  and then in alphabetic order within each
geographic area.

Our aim in building and maintaining the Software Quality HotList is
to bring to one location a complete list of technical,
organizational, and related resources.  The goal is to have the
Software Quality HotList be the first stop in technical development
related to software testing and software quality issues.

The material in the Software Quality HotList is collected from a
variety of sources, including those mentioned here and the ones
that they mention in turn, and many other sources as well.

Software Quality is a big and growing field; there are many hundreds
of entries in the list as of the last revision.  Obviously it is
impossible to include everything and everybody.  Our apologies in
advance if you, your favorite link, or your favorite site has been
missed.  If we missed you, please take a moment and suggest a new
URL using the form that can be found on the HotList top page.

    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation to the editor.

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All
other systems are either trademarks or registered trademarks of
their respective companies.

        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:


As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

Please, when using either method to subscribe or unsubscribe, type
the phrase exactly and completely.  Requests to unsubscribe that do
not match an email address on the subscriber list are ignored.
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Web:       <>