sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======            December 2001            =======+

QTN is distributed to subscribers worldwide to support the Software
Research, Inc. (SR), TestWorks, QualityLabs, and eValid user
communities and other interested parties, and to provide information
of general use to the worldwide internet and software quality and
testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2001 by Software Research, Inc.


                       Contents of This Issue

   o  Test Automation Issues, by Boris Beizer

   o  QWE2002 Details

   o  Major Issues Facing Internet/Software Quality, by Edward
      Miller (QWE2002 Chair)

   o  SAFECOMP 2002

   o  eValid Ver. 3.1 Early Release Available

   o  FSE 2002

   o  QTN Article Submittal, Subscription Information


           *     We Extend Our Best Wishes For The      *
           *               Holiday Season               *
           *                  And For                   *
           *    Peace And Prosperity In The New Year    *


                       Test Automation Issues
                            Boris Beizer

      Note:  This article is taken from a collection of Dr.
      Boris Beizer's essays "Software Quality Reflections" and
      is reprinted with permission of the author.  We plan to
      include additional items from this collection in future
      months.  You can contact Dr. Beizer at

Automation has been a driving force among the leaders of the testing
community for decades.  Software test automation isn't new -- it's
as old as software development.  What is new is the realization that
without automation it will become increasingly difficult to achieve
the minimum quality standards our users demand.

                      Why The Automation Drive

The drive for automation, be it of widget production or software,
has been the desire to reduce the labor content of products and
services.  Automation seeks to improve human productivity and,
consequently, the competitiveness of the entity that produces the
product or provides the service.  For software, automation tools
support increased productivity by reducing the direct human labor
content, especially in test execution.  For most software
developers, the increased efficiency of automation provides a marked
labor reduction when compared to manual methods -- a labor reduction
that justifies the capital investment and training cost that any
automation method demands.

While improved productivity may be hard to see in new software
development it is obvious in the maintenance phase of a software
product where much of the labor content is spent in continual
repetition of regression tests from unit- to system-levels.

Reduced labor content and improved productivity are poor reasons for
investing in test automation.  The reason to adopt test automation
is that contemporary software development is almost impossible
without it.  There are many precedents for this in the computer
field, especially in hardware.  For example, could integrated
circuits be designed and produced by manual methods? No! No human
craftsman has ever met, or ever will meet, the precision required,
say, of a step-and-repeat machine used to expose integrated circuit
masks.  The mere presence of humans in crucial steps of the process
would so pollute the hyper-clean environment of the chip fabrication
line that the yield would be zero.  The same is true for wire
bonding and many other steps in chip fabrication.  Look at your
desktop and you'll see many objects whose manual manufacture is
impossible.  Even Benvenuto Cellini, the greatest goldsmith the
world has known, couldn't make and install the tiny ball in the tip
of your 25-cent ball point pen.

But is this true for software? After all, we've been building good
software for decades without automating test execution or test
design.  I contend that the software isn't as good as you think it
is and that even if it were, you'd never be able to prove it if you
are restricted to manual testing methods.

The erroneous belief that manual testing can work in today's
software development climate comes from leaving two crucial things
out of the equation: 1) the bug statistics of a product under
maintenance and 2) the errors in the test process itself.

Software under maintenance undergoes two distinct processes: (1)
corrective changes by which the software's ability to do old jobs is
improved and (2) progressive changes by which new features are
introduced.  In corrective maintenance, if that were the only
maintenance activity, the number of defects remaining in the
software and the number of defects introduced by the corrections
eventually stabilize to comparable levels.  As software improves,
issues get subtler and it becomes increasingly difficult to
distinguish between true bugs, seeming bugs, and test execution and
evaluation errors.

Unlike the software that is continually improving through
maintenance, manual testing errors do not decrease as the product
matures -- if anything, manual testing error rates increase as the
tedium of doing yet another boring test run increases.
Consequently, the latent bugs eventually become masked by the test
execution errors and it is no longer possible to determine the
latent bug density.

Progressive maintenance has a similar situation.  One part of
progressive maintenance is equivalency testing -- testing old
features to make sure they have not changed.  Those tests are also
subject to the uncertainty that plagues corrective maintenance if
they are done manually.  But even if you're not overly concerned
about the latent bug rate, take note that the almost universal
experience with manual regression testing (whether or not people
will admit it) is that it just doesn't get done.  Either progressive
or equivalency testing or some part of both is sacrificed.
"Regression testing has exhausted us," they say, "Therefore it must
have been exhaustive."

A major goal of testing is to expose bugs.  If the test process
itself has bugs, this creates an uncertainty about the bugs that are
found.  Real bugs are missed.  False bugs are discovered and the
software "fixed," thereby introducing bugs.  We can get useful
insights into the test execution bug rate by looking at manual
keypunching.  For decades, starting in 1910, professional keypunch
operators punched cards that were then verified by a second keypunch
operator on a verifying machine.  The error rate per keystroke for
unverified cards was about 1-10/1,000 and after verification and
correction, 0.1-1/1,000.  These were trained operators who were a
lot better at this task than is the typical test engineer.  Also,
comparison (i.e., the verification process) was automated while test
result verification today in a manual test environment is still
manual (or I should say, visual).  If the typical test case has 100
significant keystrokes, if our testers were as good as the
unverified keypunch operators, we would have a best-case test
execution error rate of 1 test case in 10 and more likely, every
test case execution will be wrong.  How does one reconcile that
uncertainty with a goal of one defect per 100,000 lines of code --
or converting it to key-strokes -- one defect per 2,500,000
keystrokes? It doesn't compute.
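The arithmetic above can be sketched as a minimal model (my own
illustration, assuming independent per-keystroke errors; the
function name is hypothetical):

```python
# Sketch of the error-rate arithmetic above: if each keystroke fails
# independently with probability p, a test case of k keystrokes is
# executed cleanly with probability (1 - p) ** k.
def prob_clean_run(p_per_keystroke: float, keystrokes: int = 100) -> float:
    """Probability that a manual test case is executed without error."""
    return (1 - p_per_keystroke) ** keystrokes

# Best case: 1 error per 1,000 keystrokes (the unverified-keypunch floor).
best = prob_clean_run(0.001)   # ~0.905: roughly 1 test case in 10 is wrong
# Worst case: 10 errors per 1,000 keystrokes.
worst = prob_clean_run(0.010)  # ~0.366: most test case executions are wrong
```

Under these assumptions the model reproduces the essay's best-case
figure of about one erroneous execution per ten test cases.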

The only known way to eliminate testing uncertainty is to make the
testing process itself behave like the software it is used to test:
to make the tests a permanent product that is continually improved
by maintenance.  The only way we know how to do that is to embed the
testing itself into software -- that is what automation, at heart,
is all about.

                      Combinatorial Complexity

As software matures and stabilizes, an increasing percentage of the
bugs can be attributed to unforeseen interactions of: the software
with itself, the software with its environment, and the software's
features with each other.  Of these, the feature interaction problem
is the most severe.  For example, we can test a word processor's
justification and hyphenation features separately.  Each of these
features are straightforward when taken by themselves.  However,
combine the two and there's a massive complexity escalation -- not
only in the software that implements the combination but in the
combination of the features themselves and in the tests we must run
to validate those combinations.  So far it has neither been
practical nor possible for developers to analyze and test every
possible feature interaction in a complex product.  Frequently-used
feature combinations are thoroughly analyzed and tested, but it is
rarely those combinations that have bugs.  Testing feature
combinations, by all known methods today, requires that we do just
that -- try the combination.

Testing the popular combinations saves us from embarrassment but
does little to improve the software's quality over time.
Furthermore, as the software evolves and more features are added,
the likelihood of bugs caused by unforeseen (and untested) feature
combinations increases to the point where these bugs dominate all
other types.  Thus, the feature interaction bugs become the limiting
factor to improved quality.

Note that if you add features to a stable product and merely
maintain the old quality level for single features, then as you add
features and increase the software's complexity, the quality
degrades.  The quality degrades because the untested feature
interaction bugs continue to increase non-linearly while, with
respect to the other bugs, the software only marginally improves.

The feature interaction bugs grow by at least the square of the
length of the software's feature list because every feature
potentially interacts with every other feature.  It is worse than
square because features interact in pairs, triplets, four-at-a-time,
and so on.
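The growth described above can be made concrete with a quick
combinatorial count (an illustrative sketch of mine; the function
name is not from the article):

```python
import math

# Counting potential feature interactions for n features.  Pairs alone
# grow as n*(n-1)/2 -- the "square law" -- and triplets and larger
# combinations grow faster still, which is why the growth is "worse
# than square".
def interactions(n_features: int, order: int) -> int:
    """Number of distinct feature combinations of a given size."""
    return math.comb(n_features, order)

pairs_10 = interactions(10, 2)    # 45 pairs among 10 features
pairs_20 = interactions(20, 2)    # 190: features doubled, pairs ~4x
triples_20 = interactions(20, 3)  # 1140: higher orders dominate quickly
```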

You need another square-law process or better to contain a square-
law process.  Human beings are not only not a square law process,
but they don't improve at all.  I'm sure that given proper training
a person from the stone age 20,000 years ago would be as good a
tester as anybody alive today.  We don't have to decry the supposed
moral decay of our youth or weep over the putative decline of the
work ethic in the West -- such things, whether false or true, don't
affect the fact that people just don't get better.  The productivity
difference between a bad manual tester and a good manual tester is
probably less than 2:1.  It's likely that the best manual testers
will turn out to be idiot-savants with operative IQs of 60 or less.

It takes a square-law process to beat a square-law process and we
have one -- computers.  Someday we may learn how to avoid testing
combinations to bring feature interaction bugs under control, but
until that day, until that new theoretical breakthrough is made,
we'll have to stick to what we have.

Computational power per unit cost has been growing and continues to
grow in a square-law fashion right from Univac I until today.  We
may not like having to buy ever better workstations and whiz-bangs
for our testers, but right now, it's the only choice we have if we
want to continue to satisfy the users' seemingly endless quest for
more features and their interactions without any reduction in
perceived quality.

                         Automation Methods

We can divide automation methods into two main groups:  test
execution automation and test design automation.  These are
discussed in further detail below.

Whatever your long-term automation plans are, be sure that you plan
to implement and stabilize test execution automation before you
attempt test design automation because most test design automation
methods yield vast numbers of tests very rapidly.  But even if you
continue manual test design for a while, you must address the issue
of test-suite configuration control and test suite maintenance from
the start.  The idea behind automation, as we saw above, is to
convert tests and the testing process into an ever-improving object.
That can only happen if the test suites are maintained as rigorously
as your software is maintained.  If tests are to be maintained, then
they too must be under rigorous configuration control (i.e., version
control).  For most organizations, simple adaptations of the tools
and methods they now use for software configuration control will
work for test configuration control.  So even before you've brought
in the first automation tool you have process changes to make.  If
you don't make these process changes (test suite maintenance, test
suite configuration control, and test bug reporting and correction)
then the best tools in the world will fail and you won't realize
their potential.

                     Test Execution Automation

Test execution automation pays off first by reducing test execution
labor and more important, by reducing test execution error rates.
The latter payoff is harder to measure than the first, especially in
the early stages of an automation effort.  But it's easy to prove
the labor content reduction --  keep an honest count of how often
test cases are rerun from the time they're first tried until the
release is used in production.  Typical values are 15-25 times.
Even if the test design is bug-free and the execution is without
error, if the test reveals a bug it will be rerun at least six times
(three by the tester and three by the developer whose software is at
fault) before the bug is acknowledged.  It doesn't take a PhD in
higher math to see the payoff.

The primary test execution automation tool is the test driver.  Test
drivers are available as stand-alone products or bundled with more
comprehensive tools such as test data generators.

The most popular driver is the playback feature of a
capture/playback tool.  Capture/playback tools come in a wide
variety with a wide range of features --  they are the single most
popular test automation product.

Drivers: execute test cases in a prescribed order; compare actual
outcome to predicted outcome; and report any discrepancies, usually
by exception.  Drivers pass the software's status information to
debug packages so debugging can follow testing without interruption.
Tests for drivers may be generated by a capture/playback system, by
programming the test cases in a scripting language, or by use of a
test data generator.
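As an illustration only (not any particular product; all names here
are hypothetical), the driver behavior just described -- run test
cases in a prescribed order, compare actual to predicted outcomes,
and report by exception -- might be sketched like this:

```python
# Minimal test-driver sketch: execute test cases in order, compare
# actual outcome to predicted outcome, and report discrepancies only
# ("by exception").
def run_suite(test_cases):
    """Each test case is (name, func, args, expected).  Returns failures."""
    failures = []
    for name, func, args, expected in test_cases:
        actual = func(*args)
        if actual != expected:  # silent when the outcome matches
            failures.append((name, expected, actual))
    return failures

suite = [
    ("add-ok",  lambda a, b: a + b, (2, 3), 5),
    ("add-bad", lambda a, b: a + b, (2, 2), 5),  # deliberate failure
]
# run_suite(suite) reports only the one discrepancy: ("add-bad", 5, 4)
```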

                       Test Design Automation

Test design automation tools produce test cases by a variety of
formal and heuristic means.  The simplest and most popular is the
capture/playback tool, which records the tester's keystrokes and
the software's responses for later editing and/or playback.
Capture/playback tools reduce test design labor content by passing
the test data (inputs and outcomes) to an editor, say a word
processor.
Because most of the keystrokes and responses vary only by a few
characters from test case to test case, you can exploit the word
processor's ability to replicate the test data and use macros and
automatic search-and-replace features to create the variants needed
for each test case.  This, as simple as it is, results in a
continually improving product and process, which as we saw, is
essential for meeting our quality goals.
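The variant-creation idea above can be sketched in modern terms (the
script format and function are my own hypothetical example):

```python
# Sketch of creating test-case variants by search-and-replace: a
# captured script is replicated, and a few characters are swapped to
# form each new test case.
base_script = "open invoice.doc\ntype amount 100\nverify total 100\n"

def make_variant(script: str, replacements: dict) -> str:
    """Replicate a captured script, swapping values for a new test case."""
    for old, new in replacements.items():
        script = script.replace(old, new)
    return script

variant = make_variant(base_script, {"100": "250"})
# Both the input ("type amount") and the outcome ("verify total")
# are updated in one edit, keeping input and expected result in step.
```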

The second major class of design automation aids is the test data
generator, which automatically creates combinations of input values
from specifications.  The specifications are typically linearly
proportional to the number of features, but the generators create
sets of tests based on combinations and therefore provide the
essential square-law test suite size we need.

For example, suppose the inputs to a product consist of a set of
numbers that are specified in terms of the smallest and largest
values for each number.  The tool then generates valid test cases
consisting of all the possible combinations of extreme values, and
combinations of values in which some numbers are outside the
prescribed ranges.
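That scheme can be sketched as follows (a hypothetical generator;
the field names and the choice of one out-of-range probe per end of
each range are my assumptions, not the article's):

```python
import itertools

# Extreme-value test data generation: each input number is specified
# by its smallest and largest valid value; the generator enumerates
# all combinations of the extremes plus just-out-of-range values.
def extreme_value_cases(specs):
    """specs: {field: (lo, hi)}.  Yields dicts covering all combinations."""
    fields = list(specs)
    # valid extremes, plus one value just outside each end of the range
    choices = [(lo, hi, lo - 1, hi + 1) for lo, hi in specs.values()]
    for combo in itertools.product(*choices):
        yield dict(zip(fields, combo))

cases = list(extreme_value_cases({"qty": (1, 99), "priority": (0, 9)}))
# 4 choices per field, 2 fields -> 16 generated test cases
```

Note how a linear specification (two bounds per field) yields a
combinatorial test set, which is the square-law leverage the essay
calls for.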

More elaborate tools exist that work directly from formal
requirement specifications and use that information to generate test
cases based on logic, on regular expressions, on finite state
machines, and on a variety of other formal and heuristic methods.


My father, who was a skilled goldsmith, used to say: "A poor workman
always blames his tools."  What worker can acquire skill without
training? The single biggest cause of automation failure is not
providing enough time and money to permit the worker to acquire the
skills essential to making the tool pay off.  Without training, the
best tool in the world becomes mere shelfware.  A simple tool such
as a capture/playback tool or a driver will require a week or two of
effort to master, or about $3,000 per user.  A complex test data
generator may take several weeks or months to master, for a cost of
about $10,000 per user.

With training and mastery costs like these, the purchase price of
the tool is a small part of the cost.  We should be bold and honest
about training expenses.  Stop hiding these costs under the table in
the hope that no one will notice the initial productivity drop when
training is underplanned and underfunded.  It's no good trying to
hide these costs because if they're hidden, the tool will go unused
and therefore be worthless, no matter what its potential might have
been.  We must be up-front with management, let them know the true
costs of automation, and let them decide to go ahead with it or to
get out of the software development business.

                       What the Future Holds

The commercially available test tools such as those that will be
featured at the conference are but a small part of the huge library
of experimental, research, and prototype tools that have yet to make
it to the marketplace.  The commercial availability of advanced
testing products has been hindered by lack of a demonstrable market
willing to pay for them --  not by a lack of technology.


5th Annual International Software & Internet Quality Week Europe (QWE2002)
                  Conference Theme: Internet NOW!

                          11-15 March 2002
                       Brussels, Belgium  EU


         * * * QWE2002 Conference Brochure Available * * *

The program brochure is now available from the Quality Week web site.
Download your own copy of the full-color brochure in *pdf format from:


      For an overview of the entire program, and for detailed
descriptions of the multiple tracks,  visit the web site at:


The QWE2002 International Advisory Board selected speakers who are
truly knowledgeable and passionate about their subject.  Each
speaker has her/his own sub-page with a photo and descriptions of
their professional background on the web.  Discover the state-of-
the-art in software and internet QA and testing from around the
world.  Just look at their topics listed in the daily program.  If
you click on a paper title, you will find detailed descriptions,
short abstracts, and key points on the author's sub-webpage.

                   * * * Program Highlights * * *

* Pressing questions and issues discussed by a distinguished lineup
  of six Industrial and Academic Keynote Speakers including such
  well known speakers as:

   > Dr. Linda Rosenberg (NASA, USA) "Independent Verification and
     Validation Implementation at NASA"
   > Mr. Eric Simons (Intel Corporation, USA) "Power Testing"
   > Dr. Koenraad Debackere (KUL Leuven, Belgium) "Organizing for
     High Tech Innovation"

* Over two intensive, hard-working days, we offer 18 pre-conference
  Tutorials conducted by the foremost experts in their fields.

* Parallel Tracks that cover the broad field of software quality
  with the latest developments:

   + Internet: E-commerce experience, Internet Time and Site
   + Technology: From browser-based website testing to UML methods
   + Applications: Hear solutions from researchers and practitioners
   + Management: Managing Testing, Quality Improvement, Process
   + Tools and Solutions: the latest solutions and newest tools from
     the Exhibitors

                   * * * Industry Exhibitors * * *

* Industry Exhibitors will showcase their services and latest
  products at the Two-Day Trade Show (Expo: 11-15 March 2002).
  Exhibitors including: CEDITI, CMG, Computer Associates, eValid,
  Gitek, I2B, Pearson Education, ps_testware, Rational, SIM Group,
  Software Research, Veritest, and more.

* You will take home a CD-ROM with all the paper topics presented at
  the conference and with all the contact information for the
  exhibitors.  This will enable you to pass on the information to
  your colleagues and use it as a ready training tool.

                     * * * Special Events * * *

* Special Events: Business can be enjoyable: during lunch, the
  breaks, and the special networking events, you will have ample
  opportunity to network, exchange information, and find valuable
  business partners.

    * Welcome reception at the Cantillon Brewery, where the third
      generation of family brewers is producing the famous Gueuze,
      using the age-old artisan methods.
    * Cocktail Party with the Exhibitors
    * Conference Dinner at a Famous Art Nouveau Cafe, where stock
      brokers and journalists have met since 1903.
    * Visit a family chocolate factory, or
    * Tour the beautiful new Musical Instruments Museum, listening
      to performances with infra-red headphones.

  All the details on the Special Events at:


Mark your calendars *NOW* for QWE2002: 11-15 March 2002.  Join us in
the newly refurbished, beautiful downtown Brussels, Belgium, the
Capital of Europe.

Register early on-line and receive Early Bird Special Pricing at:


We look forward to seeing you in Brussels!

Rita Bral,
Conference Director


           Major Issues Facing Internet/Software Quality,
                  by Edward Miller (QWE2002 Chair)

As part of my preparation for each Quality Week conference that we
run, I generally ask all of the conference speakers, and the
Conference Advisory Board as well, to outline what they think are
the biggest problems and issues facing the Internet & Software
Quality community.

After all, the Advisory Board members themselves represent the nexus
of the technical community, and the speakers, who have been elected
into the conference program by the Board's votes, are a kind of
vanguard of quality issues and technology.  These are the people who
are totally in the know about the activity in the community.  If it
is happening, and it's important, this group knows about it.

On the other hand, it goes without saying that so many high quality
thinkers will, to put it mildly, differ a very great deal about what
is and isn't important.

This year is, it seems to me, a critical one for the Internet and
Software Quality community.  The aftermath of 9/11, the collapse of
the "dot coms", and the realignments that all of us have experienced
because of the lagging economy -- these issues, and more, weigh on
everyone in the field.

Here is a suggestion as to what appears to be important in our field
-- summarized from the responses I've received so far from the
QWE2002 speakers and Advisory Board members.

Here were my basic questions:

    > What are the MAIN internet and software quality issues
      facing the community today?  How do availability, or
      process, or reliability, or performance rank in
      importance?  Where are people really feeling the heat?

    > In these three main areas of technology, applied
      technology, and process knowledge, what are the major
      unsolved problems or major difficulties:

      * Control software (which rides in devices and helps
        something operate, like an airplane or a train)?

      * Conventional Client/Server applications, developed on
        Windows or UNIX?

      * Web applications (including XML, DHTML, everything)?

      * Did I leave a major category of software out?

    > Where do you think is the greatest RISK from quality
      failures?  Is it in eCommerce, or in the internet back shops
      (the routers, etc.), or where?

Here is a summary of the responses, excerpted and in no particular
order.  I hope you find it interesting to see what the patterns are.

Issue: What should you, can you (conference participant) do to
combat the tendency of management to first cut QA/QC -- all in the
light of the economic downturn?

Risk: If only that software were tested using the intellectual and
software tools that have long been available -- and that, for the
most part, have been rotting on the shelf.

Risk: There is a serious security and privacy issue with these
systems, and most security and privacy testing is at about the
fictional James Bond level.

Growth:  Third party, independent testing companies that know how to
exploit the tools (intellectual and mechanical).

Pressure: Heat is felt in management: where do we invest our
effort?  If we only have so many $$, how do we spend them wisely to
meet Quality objectives?

Need: Risk-based approaches for the development process.

Growth: Risk-based methods for deciding how to spread testing effort
across test objectives.

Risk: My biggest threat to quality is security. Hackers are getting
very subtle and we don't "check" our website constantly to see if
someone has broken in and made a change that we are not aware of.

Issue: The main issues facing eCommerce are Time to Market (TTM),
usability, reliability, and accuracy (no particular order I can
discern).  In embedded systems, concerns include performance,
reliability, and safety - especially as software is extended into
new devices and uses.

Issue: With the advent of "lite" methods, there is much interest in
flexibility and tailorability of a product development lifecycle and
quality activities. We are still trying to answer a canonical
question:  How much can a lifecycle be tailored or adapted without
losing its fundamental principles (and therefore its value)?

Risk: Increasing complexity. Just about all of the easy, simple
products have been done.  What's left is hard. Legacy and
interoperability compound the problem.

Risk: Product segmentation. Called hypersegmentation by one
colleague, this is mass-customization run amok. Everyone wants
everything their way, on their OS, with their old or brand new
browser, database, security software, etc.  This is a significant
challenge from many perspectives, including code forks, SCM, product
line management, testing, and others.

Risk: eCommerce has some pretty large financial risks, especially
for a company like Intel that does a billion dollars a month in B2B.

Issue: Safety risk & potential loss of life in embedded systems from
advancing avionics to automotive applications, control systems, etc.

Concern: Brand equity is a very valuable thing. Anything that erodes
it can be a serious threat to long-term financial success. When
quality is low, brand equity drops and is difficult and time
consuming to recover.

Opportunity: Quality as valued partner rather than policeman.
Fundamental understanding of the relationship between quality, time,
and money (including brand equity). Big Q versus little q (see my
slides). Defect prevention versus defect removal.

Issue: As far as I can judge, IT is evolving from a classical model
to an ASP model; more and more classical applications or services
get "webbed", with the big risk of forgetting the new constraints
that go along with this:

  - connection type and bandwidth
  - security issues
  - performance issues
  - people being tired of paying for wrong services and poor quality

The ASP model has a brilliant future but IT providers first need to
understand the real needs and issues.

Concern: Maturity towards testing is increasing in both traditional
and new sectors, and quality concerns are ranking a bit higher than
in the past.

Risk: The incredible speed at which new versions are released these
days.  Most popular software is released at least once a year, if
not twice or even more often.

Issue: Web site usability (perform consumer- or user-centric tests,
especially at the level of business processes) and performance.  Web
developers often tend to miss real user concerns, and web sites are
often the victims of their own success.

Issue: One-shot consultancy projects are gone; long live Quality
Service Provisioning and global test services that are conceived for
the long run from the beginning.

Issue: Internet profitability, speed, and content. Especially the
first: you see companies down-sizing Internet activities as it
appears difficult to make them profitable.

Software quality: Agility of new developments: the key challenge is
providing solutions that make life better instead of pushing new
technologies of which it is still unclear what their benefits are.

Risk: Becoming faster, less bureaucratic, and focusing on the
different quality demands for different products in different
markets.

Issue: Web based applications may succumb to the temptation of
providing an ever-increasing number of features without first
focusing on improving reliability.

Risk: "quality roulette" i.e., companies throwing poorly tested
software onto the Internet, often under the "Internet-time"
delusion.  Test managers report down-sizing in spite of increasing
workloads and management dictates of ever-faster cycle times.

Concern: Effective project management remains a large obstacle to
quality.  Unrealistic schedules, insufficient staffing, ever-shorter
cycle times, along with signs that adoption of so-called "agile
methods" may be driven by unrealistic and often dangerous management
expectations along the "silver bullet" lines.

Concern: Applying the techniques that we know work in a methodical,
careful way to get incremental but meaningful improvements in
quality, where quality is defined as satisfying the customers and
users with reliable, quick, "fit for use" systems rather than adding
glitzy bells and whistles no one cares about.

Issue: As far as I can see, we are now entering a consolidation
period following the internet storm of the past few years. Companies
are trying to integrate the old and new technologies. Quality has
become an issue again.

Risk: The greatest risks seem to lie in the vulnerability of the
internet and the difficulty of testing security. There have been
major incidents here in Europe where private bank accounts have been
made accessible to all. IT People should realize that Internet
Programming is no game.

Risk: Better test planning and estimation skills, and the
application of project management discipline.

Concern: The return of the "low budget approach".  Companies have
spent a lot of money to get through Y2K, and then some more to build
a presence on the Net.  Now, and even more under economic recession
pressure, they are cutting costs.  This means that money and time
are directed to development activities and that quality activities
are neglected.

Risk: The "lack of challenge" situation.  The teams that built the
"traditional" systems that companies used to become Y2K-compliant
have often been left out of the "run to the Net" effort, because
they were still busy with post-production release activities and
there was no time to train them for Web-based development.

Risk: The "skill/experience mismatch".  Companies are making the
transition to Web-based development (Java, thin client, multi-tiered
applications), but on the one hand they have young developers, many
fresh out of school.  They know the new tools but do not have the
software engineering experience that helps you build long-lasting
software (like architecture or comments).


                           SAFECOMP 2002
                The 21st International Conference on
             Computer Safety, Reliability and Security
              Catania, Italy, 10-13 September 2002

SAFECOMP 2002 will be held on 10-13 September 2002 in Catania,
Italy.  The European Workshop on Industrial Computer Systems, TC 7
on Reliability, Safety, and Security (EWICS TC7) established
SAFECOMP in 1979:  <>.  SAFECOMP is an annual
2.5-day event covering the state of the art, experience and new
trends in the areas of computer safety, reliability and security
regarding dependable applications of computer systems.  SAFECOMP
provides ample opportunity to exchange insights and experience on
emerging methods and practical applications across the borders of
the disciplines represented by participants.

SAFECOMP focuses on safety-critical computer applications and is a
platform for knowledge and technology transfer between academia,
industry and research institutions.  Papers are invited on all
aspects of dependability and survivability of critical computer-
based systems of systems and infrastructures.  Due to the increasing
integration and evolution of hybrid systems, SAFECOMP emphasizes
work in relation to human factors, system evolution, dependability
and survivability.  Practical experience increasingly points to the
need for multidisciplinary approaches to deal with the nature of
complex critical settings.  SAFECOMP, therefore, is open to
multidisciplinary work enhancing our understanding across
disciplines.  SAFECOMP welcomes original work, neither published nor
submitted elsewhere, on both industrial and research experience.
Examples of industrial and research topics include, but are not
limited to:

Industrial Sectors: Accident Reports and Management - Aerospace and
Avionics - Automotive - Banking and E-commerce - Bluetooth Devices -
Critical National Information Infrastructures - Distributed and
Real-time Systems - Euro Currency Transaction - Firewalls - Medical
Systems - Networking and Telecommunication - Open Source Software -
Power Plants - Programmable Electronic Systems - Railways -
Responsibility in Socio-Technical Systems - Robotics - Safety
Guidelines and Certification - Safety Standards - Safety-Critical
Systems and Infrastructures - Smart Card Systems

Research Areas: Fault Tolerance - Commercial-Off-The-Shelf and
Safety - Dependability and Survivability Analysis and Modeling -
Dependability Benchmarking - Design for Dependability and
Survivability - Diversity - Empirical Analyses - Evolution,
Maintenance, Dependability and Survivability - Formal Methods,
Dependability and Survivability - Human Factors, Dependability and
Survivability - Internet Dependability - Intrusion Detection -
System Modeling and Engineering - Qualitative Approaches for
Dependability and Survivability - Quantitative Approaches for
Dependability and Survivability - Safety, Reliability and Security -
Safety and Risk Assessment - Dependable and Survivable Structures -
System Dependability and Survivability - Verification, Validation
and Testing


Paper submissions are via the conference web site.  The language for
all paper submissions and presentations at the conference is
English, and no simultaneous translation will be provided.

After acceptance by the Program Committee, the final paper must be
submitted in electronic form, following the file templates provided
by the publisher.  Accepted papers will appear in the conference
proceedings, which will be distributed at the conference.
Springer-Verlag will publish the conference proceedings in the
series Lecture Notes in Computer Science (LNCS):

Extended and revised versions of the best papers accepted for the
Conference will be peer-reviewed and published in a Special Issue of
the international journal Reliability Engineering and System Safety
(RESS) published by Elsevier: <>.

      Contact: Massimo Felici
      Laboratory for Foundations of Computer Science
      Division of Informatics
      The University of Edinburgh
      James Clerk Maxwell Building
      The King's Buildings
      Mayfield Road
      Edinburgh EH9 3JZ
      United Kingdom

      Telephone:    +44 - 131 - 6505899
      Fax:          +44 - 131 - 6677209
      URL address:


             eValid Ver. 3.1 Adds Powerful New Features

We're excited about the changes and additions to eValid in Ver. 3.1.
Complete details on this new eValid release are at:

We invite you to try out Ver. 3.1 of eValid before the formal
product release.  We'll send you a Ver. 3.1 key along with complete
download instructions.  Just reply to  with your
request.  Please include your CID number if you know it.  We'll do
the rest!

             o       o       o       o       o       o

Here is a quick summary of the many new features in eValid:

* eVlite.  This is a playback-only, thin eValid client that reads
  eValid scripts and plays back their navigation events.  It can run
  up to 1000 threads per copy, and you can easily get 10 copies
  running at one time -- for up to 10,000 simulated users -- on one
  eValid driver machine.

  Check out:
  for details on eVlite.

  Also, this new feature permits -- for the first time -- true
  variable-fidelity LoadTest scripts.

* Enhanced Record/Play Modes.  We've added record/play modes to help
  overcome problems on even the most difficult-to-test WebSites:

   > Sub-Window Record/Play.  eValid now supports true parent/child
     single-script recording.

   > Desktop Window Recording.  eValid scripts can now record
     actions you take on any application that is launched from the
     eValid browser but runs on the desktop.

   > Application Mode Record/Play.  Now _ANY_ subwindow has
     Application Mode recording available to it, and these
     recordings are made in the parent script.

   > Multiple-window Multiple Playback in LoadTest.  Tests that use
     multiple subwindows now play back from multiple instances of
     eValid in LoadTest mode (but Lock/Unlock is needed to prevent
     focus stealing).

* eVinteractive Demo.  This is a new demo that illustrates eValid
  interactive mode with a simple GUI on the desktop.  Type in a
  command and the attached eValid instance will execute it
  interactively.

* Log Dialog Prompts.  The latest release provides a prompt at the
  end of a test playback.  You are asked to select which reports you
  want to see.  This helps prevent you from missing important data.

* Improved Memory Usage and Performance.  We have made a number of
  changes and improvements to the way eValid uses RAM and virtual
  memory, particularly during multi-browser playback.

* Improved LoadTest Reporting.  In addition to adjustments to
  provide for eVlite playbacks in a LoadTest scenario we have
  improved the on-screen reporting of LoadTest runs.

* Improved SiteMap Reporting.  We have simplified some of the
  reports and added a new report that is just a list of the URLs
  encountered during a Site Analysis Run.

* XP Operation.  We have confirmed that eValid runs just fine on the
  new Microsoft Windows XP operating system.  We have also
  previously confirmed eValid operation with IE 6.0.  No problems
  were encountered in either case.
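The eVlite approach described above -- one lightweight playback
thread per simulated user, up to 1000 threads per copy and ten
copies per driver machine -- can be sketched roughly as follows.
This is a minimal, hypothetical Python illustration of the thread-
per-user load model, not eValid code; every name in it (NAV_SCRIPT,
play_script, simulate_users) is invented for the example.

```python
import threading

# Hypothetical sketch of eVlite-style load generation: one lightweight
# playback thread per simulated user, each replaying a recorded list
# of navigation events.  Names here are illustrative, not eValid API.

NAV_SCRIPT = [
    "http://www.example.com/",
    "http://www.example.com/login",
    "http://www.example.com/checkout",
]

def play_script(script, results, idx):
    # A real thin client would issue the HTTP requests for each URL;
    # this stub just counts the navigation events it would replay.
    results[idx] = len(script)

def simulate_users(n_users):
    results = [0] * n_users
    threads = [
        threading.Thread(target=play_script,
                         args=(NAV_SCRIPT, results, i))
        for i in range(n_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)  # total navigation events replayed

# Scaled down here; one eVlite copy at 1000 threads would simulate
# 1000 users, and ten copies on one driver machine would give the
# 10,000 simulated users quoted above.
total_events = simulate_users(200)
```

Threads suit this kind of navigation-only playback because each
simulated user spends most of its time waiting on network I/O rather
than computing, which is why a single driver machine can carry so
many of them.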

eValid runs the way a website test engine should: inside the
browser, the way clients see the website.  The easy, all-natural
way.
Complete information on eValid is at:
  <> or <>.


                          ACM SIGSOFT 2002
                Tenth International Symposium on the
            Foundations of Software Engineering (FSE-10)

                        November 20-22, 2002
                    Westin Francis Marion Hotel
                  Charleston, South Carolina, USA


SIGSOFT 2002 brings together researchers and practitioners from
academia and industry to exchange new results related to both
traditional and emerging fields of software engineering. We invite
submission of technical papers which report results of theoretical,
empirical, and experimental work, as well as experience with
technology transition.

TOPICS OF INTEREST. We encourage submissions in any field of
software engineering, including, but not limited to:

  * Component-Based Software Engineering
  * Distributed, Web-Based, and Internet-scale Software Engineering
  * Empirical Studies of Software Tools and Methods
  * Feature Interaction and Cross-cutting Concerns
  * Generic Programming and Software Reuse
  * Requirements Engineering
  * Software Analysis and Model Checking
  * Software Architectures
  * Software Configuration Management
  * Software Engineering and Security
  * Software Engineering Tools and Environments
  * Software Information Management
  * Software Metrics
  * Software Performance Engineering
  * Software Process and Workflow
  * Software Reengineering
  * Software Reliability Engineering
  * Software Safety
  * Software Testing
  * Specification and Verification
  * User Interfaces

General Chair
  Mary Lou Soffa, Univ. of Pittsburgh,

Program Chair
  William Griswold, Univ. of Calif., San Diego,

    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------

QTN is E-mailed around the middle of each month to over 9000
subscribers worldwide.  To have your event listed in an upcoming
issue E-mail a complete description and full details of your Call
for Papers or Call for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All
other systems are either trademarks or registered trademarks of
their respective companies.

        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:


As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

Please, when using either method to subscribe or unsubscribe, type
the  exactly and completely.  Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.
	       Software Research, Inc.
	       1663 Mission Street, Suite 400
	       San Francisco, CA  94103  USA
	       Phone:     +1 (415) 861-2800
	       Toll Free: +1 (800) 942-SOFT (USA Only)
	       Fax:       +1 (415) 861-9801
	       Web:       <>