sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======              April 2000             =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) (Previously Testing Techniques
Newsletter) is E-mailed monthly to subscribers worldwide to support the
Software Research, Inc. (SR), TestWorks, QualityLabs, and eValid WebTest
Services user community and to provide information of general use to the
worldwide software and internet quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  (c) Copyright 2000 by Software Research, Inc.


========================================================================

   o  QUALITY WEEK 2000 -- Early Bird Deadline is 28 April 2000

   o  Mars Lander -- Added Comments, by Boris Beizer

   o  Making 'IT' Work, By Marco Dekkers

   o  Call for Papers: Web Engineering Minitrack at HICSS-34

   o  Plea for Cooperation to Prevent Denial Of Service Attacks, by Alan
      Paller

   o  Automated Software Testing -- A Perspective (Part 1 of 2), by
      Kerry Zallar

   o  CAPBAK/Web Product Demo Offer

   o  Call For Papers -- Mutation 2000 in Silicon Valley

   o  Instructions for Life

   o  Call For Participation -- TWIST 2000

   o  Call For Papers: Special Issue on Web Engineering

   o  QTN SUBMITTAL, SUBSCRIPTION INFORMATION

========================================================================

       QUALITY WEEK 2000 -- Early Bird Deadline is 28 April 2000

There are only six working days left until the registration fees for
Quality Week 2000 go up.  To take advantage of the Early Bird discount
rates, registration (along with payment) must be received on or before
28 April 2000.  Act now and you will save money.  Register on-line at
<http://www.soft.com/QualWeek/QW2K/qw2k.register.html>, call us
IMMEDIATELY at +1 (415) 861-2800, or FAX your registration to +1 (415)
861-9801.

Join your colleagues at this premier conference.  You will gain the
latest insights and experiences from the brightest QA, IT, and Internet
professionals.  Learn techniques from over 100 presentations to make
your job more effective!

Exciting highlights at Quality Week 2000:

We have a superior lineup of Industrial and Academic Keynote Speakers:

  * Sanjay Jejurikar (Director of Windows 2000 Testing, Microsoft
    Corporation) The Engineering Process of Windows 2000 (10P2)
  * Gene Spafford (CERIAS / Purdue University) Information Security
    Requires Assurance (10P3)
  * Stu Feldman (IBM Corporation) Internet and E-Commerce: Issues and
    Answers (1P)
  * Bill Gilmore (Intel Corporation) The Intel Corporate Software
    Quality Network (1P2)
  * Leon Osterweil (University of Massachusetts) Determining the Quality
    of Electronic Commerce Processes (5P1)
  * Rainer Pirker (IBM Austria) The Need for Quality e-Business
    Performance Testing (5P2)
  * Marcelo Dalceggio (Banco Rio de la Plata Argentina) Automated
    Software Inspection Process (10P1) [QWE'99 Best Presentation]

Fourteen Tutorials given by the foremost experts in their fields:

  * Johanna Rothman (Rothman Consulting Group) Life as a New Test
    Manager (A1)
  * Norman Schneidewind (Naval Postgraduate School) A Roadmap to
    Distributed Client-Server Software Reliability Engineering (B1)
  * Michael Deck (Cleanroom Software Engineering, Inc) Requirements
    Analysis Using Formal Methods (C1)
  * Bill Deibler (SSQC) Making the CMM Work: Streamlining the CMM for
    Small Projects and Organizations (D1)
  * Ross Collard (Collard & Company) Test Planning Workshop (E1) NEW
  * G. Bazzana & E. Fagnoni (ONION s.r.l.) Testing Web-based
    Applications: Techniques for Conformance Testing (F1) NEW
  * Edward Kit (Software Development Technologies) Testing In the Real
    World (G1)
  * Robert Binder (RBSC) How to Write a Test Design Pattern (A2) NEW
  * John Musa (Consultant) Developing More Reliable Software Faster and
    Cheaper (B2)
  * Tom Gilb (Result Planning Limited) Requirements Engineering for
    Software Developers and Testers (C2)
  * Tim Koomen (IQUIP Informatica BV) Stepwise Improvement of the
    Testing Process using TPI tm (D2)
  * Linda Rosenberg, Ruth Stapko, & Albert Gallo (NASA GSFC) Risk-Based
    Object Oriented Testing (E2) NEW
  * Adrian Cowderoy (MMHQ) Cool Q - Quality Improvement for Multi-
    Disciplinary Tasks in Website Development (F2)
  * Chris Loosley & Eric Siegel Web Application Performance (G2)

Four new Post-Conference Workshops:

  * Douglas Hoffman (Software Quality Methods LLC) Oracle Strategies
    for Automated Testing (W1)
  * Cem Kaner (Attorney at Law) Bug Advocacy Workshop (W2)
  * Edward Miller (Software Research, Inc.) Achieving WebSite Quality
    (W3)
  * Robert Sabourin (Purkinje, Inc.) The Effective SQA Manager -
    Getting Things Done (W4)

**BOFs: Our Birds of a Feather sessions are an added benefit at Quality
Week 2000.  Meet, talk, and debate your favorite topics with your peers
in these informal sessions.  Check our website for more details:
<http://www.soft.com/QualWeek/QW2K/qw2k.bofs.html>.  It is the perfect
opportunity to meet with others who share your interests.  If you are
interested in moderating an existing or new topic, contact Mark Wiley.

Pick Top Quality Industry Experts' Brains During Three Special Panel
Sessions

**Debate with the experts in three panel sessions.  The Ask The Quality
Experts panel, a special QW2000 session, works interactively with you to
get your key questions answered!  If you have a burning question about
any aspect of software or Internet quality, click on Ask The Quality
Experts!  <http://msoftweb.rte.microsoft.com>

  * Ask The Quality Experts! - Nick Borelli from Microsoft will take
    your questions before the conference so you come prepared to debate.
  * Protecting Intellectual Property In An Open Source World - Doug
    Whitney will tell how Intel does it.
  * How Can I Tell When My Project Is In Trouble? - Brian Lawrence,
    Johanna Rothman, and other management experts will explain.

Submit your votes for the most important questions or post a new
question today: <http://www.soft.com/QualWeek/Papers/8P.html>.  There
you will see the current set of questions posed to the Panel of Experts,
rank-ordered by the number of votes each question has received.

========================================================================

                     Mars Lander -- Added Comments

                                   by

                              Boris Beizer

This morning's newspaper had an important story about the two Mars
lander failures.  The first ($125 million) was caused by inadequate
system engineering and coordination.  We can certainly attribute this to
a wholly inadequate set of design reviews and software inspection
procedures.  The second, worth a cool $250 million, was attributed to a
failure to do proper regression testing.  Here's the gist of the story
as best I can make it out through the garbling introduced by the
reporters.

1.  A hardware design glitch causes various sensors to give spurious
signals.  More specifically, the sensors are those that determine that
the lander has landed.  As the landing legs are being deployed, the
vibration causes these signals to be sent, so that the software thinks
the lander has landed while it is actually still in its descent mode.
Having "landed," it starts sprouting antennas, removing protective
covers, etc., with the expected destructive effect on these components.
This flaw is discovered in testing -- prior to launch.

2.  Hardware and/or software changes are made, presumably to mitigate
these spurious signals.  The story is garbled here, because if the
spurious signals were eliminated, software changes should not have been
needed.  So we're not sure which changes were made or why.  However, the
story does say that at any point prior to the actual landing, they could
have corrected the problem remotely.  So the likely scenario is that
hardware changes were made to minimize some of these signals and
software changes were (should have been?) made to ignore the spurious
signals that remained.

3.  No regression tests of the software or the system were made.  This
is explicitly stated in the story. The failure to do this regression
testing is attributed to pressure on NASA to "get it done as cheaply as
possible" -- why didn't they just drop all testing, I wonder?

4.  We can read between the lines here and conclude that much of the key
regression testing, especially of the software, was not automated.  We
can also infer that there was no suitable system simulator under which
the software could be tested -- and a whole lot of other missing
components in their entire testing regime.  We can infer that all this
essential testing infrastructure is missing because if it existed, doing
a complete regression test of the software would have been a low-cost,
fully automated, no-brainer.  And such a minor budget item that it would
not have even appeared on the cost-cutting radar screen.
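
To make the point concrete, here is a minimal sketch in Python of the
kind of regression test that would have caught the failure.  The state
machine and all names are invented for illustration; this is
emphatically not NASA's actual flight software.

    # Hedged sketch: the state machine and names are invented for
    # illustration; this is not NASA's actual flight software.
    class Lander:
        def __init__(self):
            self.mode = "DESCENT"

        def on_touchdown_signal(self, legs_deployed):
            # The fix under test: ignore touchdown signals that arrive
            # before leg deployment is complete (vibration transients).
            if self.mode == "DESCENT" and legs_deployed:
                self.mode = "LANDED"

    def test_transient_during_leg_deployment_is_ignored():
        lander = Lander()
        lander.on_touchdown_signal(legs_deployed=False)  # spurious signal
        assert lander.mode == "DESCENT"

    def test_real_touchdown_is_honored():
        lander = Lander()
        lander.on_touchdown_signal(legs_deployed=True)
        assert lander.mode == "LANDED"

    if __name__ == "__main__":
        test_transient_during_leg_deployment_is_ignored()
        test_real_touchdown_is_honored()
        print("regression suite passed")

Run after every hardware or software change, a suite like this costs
minutes, not missions.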

        So add this to your list of examples of the value of automated
regression testing -- and of regression testing itself, and of the use
of system simulators to support testing of complex systems.  $250
million buys a lot of testing.  I figure 2,500 testers working full
time for a year.

        Boris Beizer Ph.D.                 Seminars and Consulting
        1232 Glenbrook Road                on Software Testing and
        Huntingdon Valley, PA 19006        Quality Assurance

        Email: bbeizer@sprintmail.com bbeizer@acm.org bbeizer@ieee.org

========================================================================

                            Making 'IT' Work

                            By Marco Dekkers
                        Product Manager, Testing
                        KZA kwaliteitszorg B.V.
                        e-mail: mdekkers@kza.nl

Software development companies experience great difficulty in satisfying
their customers.  Articles regularly reach the press about customers of
IT services holding their suppliers liable for the failure of projects.
On the other hand, suppliers grow more and more irritated with customers
'who don't know what they want'.  The background of these problems often
lies in fuzziness about project goals and requirements.  Clearly, this
kind of incident can be prevented.  Defining and maintaining a clear and
measurable set of requirements stimulates communication between customer
and supplier and increases the chances of successful completion of the
project.

According to various studies, two-thirds of software development efforts
do not succeed in delivering a good quality product on time and within
budget.  How is this possible?  When we consider that the ranks of
(most) software development companies are made up of highly trained,
motivated, and ambitious people, these results are hard to believe.
Furthermore, modern development tools reduce the chance of making
mistakes in designing and building software (at least in theory).  With
the rise of new ways of developing software, such as Rapid Application
Development, the goal is to develop software faster, better, and
cheaper.  Considering all these elements, how is it still possible that
most software development efforts fail?

This is not a simple question.  There is no single factor that
completely explains the failure of projects.  A complex combination of
factors contributes to the ultimate demise of projects.  Examples of
such factors are:

 * Lack of an experienced project manager with a clear mandate
 * Too optimistic planning in the start-up phase of the project (raising
   expectations beyond what is attainable)
 * Lack of user involvement
 * Lack of experience on the part of developers

This list certainly is not complete; however, it does illustrate the
types of problems projects are faced with.  The main reason for project
failure is missing from this list: all too often it is unclear what the
business goals of the project are and which requirements have to be met
by the final product.  I ran into a typical example of this during a
project where I was the test manager.  The
customer in question had ordered the development of a workflow
application on the basis of a contract consisting of two pages. No
specification or requirement documents were drawn up during the course
of the project. All was done in good faith and on the basis of the
understanding that by means of prototyping users would deliver input to
developers regarding desirable functionality. My involvement with the
project started approximately one year after development started. It
took me less than two weeks to bring to light that the development staff
and the customer had completely different ideas about the desired end
result. On the basis of my analysis a decision was made to document the
requirements which had to be met by the system. It goes without saying
this was a time consuming effort. Furthermore, the outcome of this
process made it evident that structural re-engineering of the code was
necessary.  It will not come as a surprise that the project missed its
target date (by a factor of three).  Had there been a clear
understanding of the project goals from the start, a great deal of time,
effort and money could have been saved.

This example is certainly not unique. During numerous projects it is not
until the acceptance test phase that the customer concludes that the
system does not meet his expectations.  Sometimes it is too late to
correct the situation and the project is abandoned. More likely,
delivery is pushed back and users are confronted with a system that
barely supports them in their work.  The core of the problem is that
developers and customers do not speak the same language when it comes to
IT.  The remedy is as simple as it is effective.  Customer and supplier
have to identify project stakeholders early on.  Each group has to be
asked which requirements it has regarding the completed software
product.  This can be done by organizing one or more workshops with
stakeholders.  Requirements are then prioritized in consultation with
all the parties concerned.  Thus it becomes evident which requirements
are essential to software product acceptance.  Using the extended ISO
9126 standard, requirements are translated into measurable indicators.
The usefulness of ISO 9126 lies in the fact that it describes a way of
defining measurable characteristics of software.  Quality is defined in
terms of functionality, reliability, efficiency, portability, usability,
and maintainability.  For each of these it is possible to determine
indicators that describe how measurement is to take place.  The
advantage of using ISO 9126 to describe product quality is that
requirements are formulated in a way that leaves no room for ambiguity.
The possibility of misunderstanding when using the standard is minimal.
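
As a purely illustrative sketch (the six characteristics come from ISO
9126; the notation and numbers below are invented), here is what a
requirement expressed as a measurable indicator might look like in
executable form:

    # Invented example: a requirement stated as a measurable indicator
    # rather than as "the system must be fast".  The characteristics
    # come from ISO 9126; this notation and these numbers do not.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        characteristic: str   # functionality, reliability, efficiency, ...
        metric: str           # how measurement is to take place
        threshold: float      # unambiguous pass/fail boundary

        def satisfied_by(self, measured: float) -> bool:
            return measured <= self.threshold

    response_time = Indicator(
        characteristic="efficiency",
        metric="95th-percentile response time (seconds) at 50 users",
        threshold=2.0,
    )

    print(response_time.satisfied_by(1.7))   # True  -> requirement met
    print(response_time.satisfied_by(3.4))   # False -> requirement missed

Stated this way, customer and supplier can verify the requirement with a
measurement instead of an argument.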

Requirements can be coupled to the phase(s) of system development during
which they are met.  Doing so identifies the earliest moment at which
verification can take place.  If the quality assurance activities are in
tune with this, defects are detected at the earliest possible moment
(immediately after injection).  This reduces both the cost of fixing
defects and development time.

The combination of a clear understanding of requirements and early
detection of defects enhances the chances of project success.

Ultimately it comes down to this.  If you don't know where you are
going, it is very unlikely you will end up somewhere you want to be.  On
the other hand, although defining measurable requirements together with
project stake-holders does not guarantee success, it at least gives you
a head start in the race towards project success.

========================================================================

                  Call for Papers for Web Engineering

           Part of the Internet and the Digital Economy Track

                       of the Thirty-Fourth Annual
      Hawaii International Conference on Systems Sciences (HICSS)
                   Maui, HI  USA -- January 3-6, 2001

The growth of the World Wide Web is having phenomenal impact on
business, commerce, industry, finance, education, government,
entertainment, and other sectors. The extent of its use is changing our
personal and working lives.  The Web's anticipated scope as an
environment for knowledge exchange has changed dramatically. Many
applications and systems are being migrated to the Web, and a whole
range of new applications is emerging in the Web environment.

Without major modifications to its primary mechanisms, the Web has
turned into a platform for distributed applications.  The originally
simple and well-defined document-oriented implementation model of the
Web hinders today's Web application development.  Nevertheless, the
development of Web applications is still mostly ad hoc: it generally
lacks disciplined and systematic approaches, and it neglects hypermedia
concepts and manageable structures for the information space.

Applying Software Engineering practice to development for the Web, which
is also referred to as Web Engineering, and especially the systematic
reuse of artifacts during the evolution of Web applications, is a main
goal.  In order to ensure the integrity and quality of Web applications,
and to facilitate more cost-effective design, implementation,
maintenance and evolution, and federation of such Web applications,
rigorous approaches to Web Engineering are required.

This is the second Minitrack on Web Engineering. Topics of special
interest include, but are not limited to:

- Design Models & Methods
- Software Development Processes
- Frameworks & Architectures
- Web Patterns & Pattern Mining
- Reuse, Integration, and Federation Techniques
- OO-Technology, Component-based Web Engineering
- Semi-structured Data & Data Models

for Web-applications, specifically in Electronic Commerce and similar
strategic areas. Further, an active discussion with focus on Web
Engineering and its influence on other communities is anticipated.

Minitrack Chairs

        Martin Gaedke
        Telecooperation Office (TecO)
        University of Karlsruhe
        Vincenz-Priessnitz Str.1
        76131 Karlsruhe
        Germany
        Ph.: +49 (721) 6902-79
        e-mail: gaedke@teco.edu

        Daniel Schwabe
        Departamento de Informatica
        University of Rio de Janeiro (PUC-RIO)
        R. M. de S. Vicente, 225
        Rio de Janeiro, RJ 22453-900
        Brasil
        e-mail: schwabe@inf.puc-rio.br

        Gustavo Rossi
        LIFIA-UNLP
        University of La Plata
        Calle 9, Nro 124.
        (1900) La Plata
        Buenos Aires, Argentina
        Ph.: +54 (221) 4236585
        e-mail: gustavo@sol.info.unlp.edu.ar

Homepage of the Minitrack
        <http://www.webengineering.org/hicss34/>

Homepage of the HICSS-34
        <http://www.hicss.hawaii.edu/>

========================================================================

       Plea for Cooperation to Prevent Denial Of Service Attacks
                                   by
                              Alan Paller
                        SANS Director of Research
                         Email: sansro@sans.org

This is an urgent request for your cooperation to slow down the wave of
denial of service attacks.

As you may know, denial of service (DOS) attacks are virulent and still
very dangerous. These are the attacks responsible for the many outages
reported recently in the press and others that have been kept more
secret.  DOS attacks are a source of opportunities for extortion and a
potential vehicle for nation-states or anyone else to cause outages in
the computer systems used by business, government, and academia.  DOS
attacks, in a nutshell, comprise a world-wide scourge that has already
been unleashed and continues to grow in sophistication and intensity.

One effective defense for these attacks is widely available and is
neither expensive nor difficult to implement, but requires Internet-wide
action; that's why we're writing this note to request your cooperation.

The defense involves straightforward settings on routers that stop key
aspects of these attacks and, in doing so, reduce their threat
substantially. These settings will not protect you from being attacked,
but rather will stop any of the computers in your site from being used
anonymously in attacking others. In other words, these settings help
protect your systems from being unwitting assistants in DOS attacks, by
eliminating the anonymity upon which such attacks rely.  If everyone
disables the vehicles for anonymity in these attacks, the attacks will
be mitigated or may cease entirely for large parts of the net.
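
As a purely illustrative sketch -- the real defense is a router setting,
described at the SANS URL below, not application code -- the following
Python fragment shows the egress rule those settings implement.  The
address prefix is an invented example.

    # Illustration only: the actual defense is configured on routers
    # (see the SANS URL below).  The rule: forward an outbound packet
    # only if its source address really belongs to your own network,
    # so your hosts cannot take part in an attack anonymously.
    import ipaddress

    SITE_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # your addresses

    def permit_outbound(source_address: str) -> bool:
        return ipaddress.ip_address(source_address) in SITE_PREFIX

    print(permit_outbound("192.0.2.17"))   # True  -> legitimate source
    print(permit_outbound("10.9.8.7"))     # False -> spoofed, drop it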

The simple steps can be found on the SANS website at
<http://www.sans.org/dosstep/index.htm>; they will keep your site from
contributing to the DOS threat.  Tools will soon be publicly posted to
determine which organizations have and have not protected their users
and which ones have systems that still can be used as a threat to the
rest of the community.

More than 100 organizations in the SANS community have tested the
guidelines, which were drafted by Mark Krause of UUNET with help from
security experts at most of the other major ISPs and at the MITRE
organization. The testing has improved them enormously. (A huge thank-
you goes to the people who did the testing.)

We hope you, too, will implement these guidelines and reduce the global
threat of DOS attacks.

We also urge you to ask your business partners and universities and
schools with which you work to implement these defenses.  And if you use
a cable modem or DSL connection, please urge your service provider to
protect you as well.

As in all SANS projects, this is a community-wide initiative. If you can
add to the guidelines to cover additional routers and systems, we
welcome your participation.


========================================================================

       Automated Software Testing -- A Perspective (Part 1 of 2)

                                   by

                              Kerry Zallar

      Note from the author: My perspective on most things is that
the 'glass is half full' rather than half empty. This attitude carries
over to the advice I suggest on automated software testing as well. I
should point out, however, there is an increasing awareness from others
experienced in this field, as well as from my own experience, that many
efforts in test automation do not live up to expectations. A lot of
effort goes into developing and maintaining test automation, and even
once it's built you may or may not recoup your investment. It's very
important to perform a good cost/benefit analysis on whatever manual
testing you plan to automate. The successes I've seen have mostly been
on focused areas of the application where it made sense to automate,
rather than complete automation efforts. Also, skilled people were
involved in these efforts and they were allowed the time to do it right.

Test automation can add a lot of complexity and cost to a test team's
effort, but it can also provide some valuable assistance if it's done by
the right people, in the right environment, and where it makes sense to
do so.  I hope that by sharing some pointers I feel are important,
you'll find some value that translates into saved time and money, and
less frustration, in your efforts to implement test automation back on
the job.

                               Key Points

I've listed the "key points" up front instead of waiting until the end.
The rest of the article will add detail to some of these key points.

o  First, it's important to define the purpose of taking on a test
   automation effort.  There are several categories of testing tools,
   each with its own purpose.  Identifying what you want to automate,
   and where in the testing life cycle, is the first step in developing
   a test automation strategy.  Just wishing that everything should be
   tested faster is not a practical strategy.  You need to be specific.

o  Developing a test automation strategy is very important in mapping
   what's to be automated, how it's going to be done, how the scripts
   will be maintained and what the expected costs and benefits will be.
   Just like every testing effort should have a testing strategy, or
   test plan, so should there be a 'plan' built for test automation.

o  Many of the testing 'tools' provided by vendors are very
   sophisticated and use existing or proprietary coding 'languages'.
   The effort of automating an existing manual testing effort is no
   different from a programmer using a coding language to write programs
   to automate any other manual process.  Treat the entire process of
   automating testing as you would any other software development
   effort.  This includes defining what should be automated (the
   requirements phase), designing the test automation, writing the
   scripts, testing the scripts, etc.  The scripts need to be maintained
   over the life of the product just as any program would require
   maintenance.  Other components of software development, such as
   configuration management, also apply.

o  The effort of test automation is an investment.  More time and
   resources are needed up front in order to obtain the benefits later
   on.  Sure, some scripts can be created which will provide immediate
   payoff, but these opportunities are usually small in number relative
   to the effort of automating most test cases.  What this implies is
   that there usually is not a positive payoff for automating the
   current release of the application.  The benefit comes from running
   these automated tests in every subsequent release (see the break-even
   sketch after this list).  Therefore, ensuring that the scripts can be
   easily maintained becomes very important.

o  Since test automation really is another software development effort,
   it's important that those performing the work have the correct skill
   sets.  A good tester does not necessarily make a good test automator.
   In fact, the job requirements are quite different.  Good testers are
   still necessary to identify and write test cases for what needs to be
   tested.  A test automator, on the other hand, takes these test cases
   and writes code to automate the process of executing those tests.
   From what I've seen, the best test automation efforts have been led
   by developers who have put their energies into test automation.
   That's not to say that testers can't learn to be test automators and
   be successful; it's just that those two roles are different and the
   skill sets are different.
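
Here is the break-even sketch referred to above: a back-of-envelope
model, with invented numbers, of when the automation investment pays for
itself.

    import math

    # All numbers invented for illustration.
    build_cost      = 400.0  # hours to automate the regression suite
    manual_run_cost = 80.0   # hours per release to run it by hand
    auto_run_cost   = 10.0   # hours per release to maintain/run scripts

    saving_per_release = manual_run_cost - auto_run_cost   # 70 hours
    breakeven = math.ceil(build_cost / saving_per_release)
    print(f"automation pays for itself after release {breakeven}")  # 6

Note that if script maintenance costs balloon, the saving per release
shrinks and break-even recedes -- which is why maintainability matters
so much.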

                                 Points

Here are some other important points to consider:

When strategizing for test automation, plan to achieve small successes
and grow. It's better to incur a small investment and see what the
effort really takes before going gung ho and trying to automate the
whole regression suite. This also gives those doing the work the
opportunity to try things, make mistakes and design even better
approaches.

Many software development efforts are underestimated, sometimes grossly
underestimated.  This applies to test automation as well, especially if
the effort is not looked upon as software development.  Test automation
is not something that can be done on the side, and care should be taken
when estimating the amount of effort involved.  Again, by starting small
and growing, you can gauge what the work really takes.

When people think of testing tools, many first think of the system test.
There are several types of testing tools which can be applied at various
points of code integration.  Test automation can be applied at each of
the levels of testing, including unit testing, one or more layers of
integration testing, and system testing (another form of integration).
The sooner tests can be executed after the code is written, before too
much code integration has occurred, the less likely it is that bugs will
be carried forward.  When strategizing for test automation, consider
automating these tests as early as possible, as well as later in the
testing life cycle.

Related to this last point is the idea that testers and software
developers need to work as a team to make effective test automation
work. I don't believe testing independence is lost when testers and
developers work together, but there can be some excellent advantages
that I'll point out later.

Testing tools, as sophisticated as they have become, are still dependent
upon consistency in the test environment. This should be quite obvious,
but having a dedicated test environment is absolutely necessary. If
testers don't have control of their test environment and test data, the
required setup for tests may not meet the requirements of those tests.
When testing is done manually, testers may sometimes 'work around' test
setup issues.  Automated test scripts are less flexible and require
specific setup scenarios, and therefore need more control.
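
As a small sketch of what that control can look like in practice (the
application and table names are invented), each automated test below
rebuilds its own data in a fresh database rather than relying on
whatever a previous run left behind:

    import sqlite3
    import unittest

    class OrderQueryTest(unittest.TestCase):
        def setUp(self):
            # Each test rebuilds its own data in a fresh database, so
            # playback never depends on what a previous run left behind.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
            self.db.execute("INSERT INTO orders VALUES (1, 'OPEN')")

        def tearDown(self):
            self.db.close()

        def test_open_order_count(self):
            row = self.db.execute(
                "SELECT COUNT(*) FROM orders WHERE status = 'OPEN'"
            ).fetchone()
            self.assertEqual(row[0], 1)

    if __name__ == "__main__":
        unittest.main()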

Test automation is not the only answer to delivering quality software.
In fact, test automation in many cases is a last gasp effort in an
attempt to find problems after they've been made instead of eliminating
the problems as they are being created. Test automation is not a
substitute for walkthroughs, inspections, good project management,
coding standards, good configuration management, etc. Most of these
efforts produce higher pay back for the investment than does test
automation. Testing will always need to be done and test automation can
assist, but it should not be looked upon as the primary activity in
producing better software.

The truth is that developers can produce code faster and faster with
more complexity than ever before. Advancements in code generation tools
and code reuse are making it difficult for testers to keep up with
software development. Test automation, especially if applied only at the
end of the testing cycle, will not be able to keep up with these
advances.  We must pull out all the stops along the development life
cycle to build in good quality and test as early and often as possible
with the assistance of test automation.

                                Benefits

To many people, the benefits of automation are pretty obvious. Tests can
be run faster, they're consistent, and tests can be run over and over
again with less overhead. As more automated tests are added to the test
suite more tests can be run each time thereafter. Manual testing never
goes away, but these efforts can now be focused on more rigorous tests.

There are some common 'perceived' benefits that I like to call 'bogus'
benefits.  Since test automation is an investment, it is rare that the
testing effort will take less time or fewer resources in the current
release.
Sometimes there's the perception that automation is easier than testing
manually. It actually makes the effort more complex since there's now
another added software development effort. Automated testing does not
replace good test planning, writing of test cases or much of the manual
testing effort.

                                 Costs

Costs of test automation include personnel to support test automation
for the long term. As mentioned, there should be a dedicated test
environment as well as the costs for the purchase, development and
maintenance of tools. All of the efforts to support software
development, such as planning, designing, configuration management, etc.
apply to test automation as well.

                              Common View

Now that some of the basic points have been noted, I'd like to talk
about the paradigm of test automation. When people think of test
automation, the 'capture/playback' paradigm is commonly perceived. The
developers create the application software and turn it over to the
testing group. The testers then busily use capture/playback
functionality of the testing tool to quickly create test scripts.
Capture/playback is used because it's easier than 'coding' scripts.
These scripts are then used to test the application software.

There are some inherent problems with this paradigm. First, test
automation is only applied at the final stage of testing when it is most
expensive to go back and correct the problem. The testers don't get a
chance to create scripts until the product is finished and turned over.
At this point there is a tremendous pull on resources to just test the
software and forgo the test automation effort. Just using
capture/playback may be temporarily effective, but using
capture/playback to create an entire suite will make the scripts hard to
maintain as application modifications are made.
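
A minimal sketch of the maintenance argument, with all names invented: a
raw captured script repeats the low-level steps verbatim in every test,
while a small hand-coded layer factors them out, so a renamed control
becomes a one-line fix instead of a mass edit.

    # All names invented.  A raw captured script repeats low-level steps
    # verbatim in every test; factoring them into one hand-coded routine
    # makes a renamed control a one-line fix instead of a mass edit.
    class FakeUI:
        """Stand-in for a capture/playback tool's engine."""
        def __init__(self):
            self.screen = "login"
            self.fields = {}

        def type_into(self, field, text):
            self.fields[field] = text

        def click(self, button):
            if button == "btnOK" and self.fields.get("txtPass") == "secret":
                self.screen = "main_menu"

    def login(ui, user, password):
        # The only place that knows how the login screen works.
        ui.type_into("txtUser", user)
        ui.type_into("txtPass", password)
        ui.click("btnOK")

    def test_valid_login():
        ui = FakeUI()
        login(ui, "pat", "secret")
        assert ui.screen == "main_menu"

    if __name__ == "__main__":
        test_valid_login()
        print("ok")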

                        Test and Automate Early

From observations and experience, a different paradigm appears to be
more effective. Just as you would want to test early and test often if
you were testing manually, the same applies to test automation. The
first level of testing is the unit testing performed by the developer.
From my experience unit testing can be done well or not done well
depending on the habits and personality of the developer. Inherently,
developers like to develop, not write test cases. Here's where an
opportunity for developers and testers to work together can begin to pay
off. Testers can help document unit tests and developers can write
utilities to begin to automate their unit tests. Assisting in
documenting test cases will give a better measurement of unit tests
executed.  Much success of test automation comes from homegrown
utilities. This is because they integrate so well with the application
and there is support from the developer to maintain the utilities so
that they work with the application. More effective and efficient unit
testing, through the use of some automation, provides a significant bang
for the buck in trying to find bugs in the testing life cycle. Static
analyzers can also be used to identify which modules have the most code
complexity and may require more testing.
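
As an invented illustration of such a homegrown utility, here is a tiny
table-driven harness a developer might write once, after which testers
add unit test cases as data rows rather than code:

    # Invented example of a "homegrown utility": a table-driven harness
    # the developer writes once; testers then add unit test cases as
    # data rows instead of code.
    def discount(order_total):
        """Unit under test: 10% off orders of 100 or more."""
        return order_total * 9 // 10 if order_total >= 100 else order_total

    CASES = [
        # (input, expected)
        (50,   50),    # below threshold: no discount
        (100,  90),    # boundary: discount applies
        (200, 180),
    ]

    for total, expected in CASES:
        actual = discount(total)
        assert actual == expected, f"discount({total})={actual}, want {expected}"
    print(f"{len(CASES)} unit test cases passed")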

                           (To Be Continued)

========================================================================

                     CAPBAK/Web Product Demo Offer

CAPBAK/Web(tm) is a Test Enabled Web Browser(tm) for Windows 95/98 and
Windows NT/2000.

CAPBAK/Web performs essentially all functions needed for detailed
WebSite static and dynamic testing, QA/Validation, and load generation.
CAPBAK/Web has native capabilities that handle WebSite features that are
difficult, awkward, or even impossible with other methods such as those
based on viewing a website from the Windows OS level.

A new FREE Demo Version can be downloaded from:

  <http://www.soft.com/Products/Downloads/down.capbakweb.html#DEMO>

CAPBAK/Web has a very rich feature set:

 * Intuitive on-browser GUI.
 * Recording and playback of sessions in combined true-time and object
   mode.
 * Fully editable recordings/scripts.
 * Pause/SingleStep/Resume control for script checkout.
 * Performance timings to 1 msec resolution.
 * Content validation: HTML document features, URLs, selected text
   fragments, selected images, and all images and applets.
 * Event, timing charts, performance, and history charts.
 * Wizards that create scripts to exercise all links on a page, push all
   buttons on a FORM, and manipulate a FORM's complete contents.
 * JavaScript and VBScript fully supported.
 * Advanced recording features for Java applets and ActiveX controls.
 * LoadTest feature to chain scripts into realistic load testing
   scenarios.
 * Logfiles are spreadsheet ready.
 * Cache management (play back tests with no cache or an initially empty
   cache).

A feature/benefit analysis of CAPBAK/Web is at:

 <http://www.soft.com/Products/Web/CAPBAK/features.benefits.html>

The LoadTest feature is described at:

 <http://www.soft.com/Products/Web/CAPBAK/Documentation.IE/CBWeb.load.html>

Take a quick look at the CAPBAK/Web GUI and other material about the
product at:

 <http://www.soft.com/Products/Web/CAPBAK/Documentation.IE/CBWeb.quickstart.html>

Download the latest CAPBAK/Web release at:

 <http://www.soft.com/Products/Downloads/down.capbakweb.html#DEMO>

which does NOT require a license key, or at:

 <http://www.soft.com/Products/Downloads/down.capbakweb.html#FULL>

which does require a license key but has full capabilities.

========================================================================

           Call For Papers -- Mutation 2000 in Silicon Valley

                A Symposium on Mutation Testing in the
                Twentieth and the Twenty First Centuries

              October 6-7, 2000, San Jose, California, USA

        <http://www.research.telcordia.com/society/mutation2000>

In collaboration with the IEEE International Symposium on Software
Reliability Engineering, ISSRE '2000 (October 8-11)

The objective of this symposium, the first of its kind, is to bring
together researchers and practitioners of mutation testing from all over
the world. These individuals will share their experiences and insights
into one of the most fascinating and powerful techniques for software
testing, namely, Mutation! The originators of mutation testing will be
keynote speakers. Research talks will focus on various practical and
theoretical aspects of mutation testing. Symposium proceedings will be
published. These proceedings will document the work done in mutation
testing, and hence will serve as a long lasting reference for
researchers, educators, and students. In addition, selected papers will
be published in a special issue of the Journal of Software Testing,
Verification, and Reliability.
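
For readers new to the technique, here is a minimal invented
illustration of mutation testing: a mutant is the program with one small
seeded fault, and a test suite is adequate with respect to that mutant
only if some test "kills" it.

    # Invented example.  A mutant is the program with one small seeded
    # fault; a test "kills" the mutant if it distinguishes the mutant's
    # output from the original's.
    def max_of(a, b):        # original unit under test
        return a if a > b else b

    def mutant(a, b):        # seeded fault: ">" mutated to "<"
        return a if a < b else b

    # A weak test does NOT kill this mutant: both versions return 3.
    assert max_of(3, 3) == 3 and mutant(3, 3) == 3

    # Adding this case kills it: mutant(5, 3) returns 3, not 5, so the
    # strengthened suite is adequate with respect to this mutant.
    assert max_of(5, 3) == 5
    assert mutant(5, 3) != 5
    print("mutant killed")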


Topics:  Researchers and practitioners are invited to submit original
manuscripts of either full paper or short paper describing work in any
of the following areas:

    * Mutation-based test adequacy criteria
    * Effectiveness of mutation
    * Comparison of mutation with other testing techniques
    * Tools for mutation
    * Experience with mutation
    * Mutation applied to OO programs
    * Novel applications of mutation

Submissions:  Submitted manuscripts should be in English and no longer
than 5000 words or 20 double-spaced pages for full papers, and 2500
words or 10 double-spaced pages for short papers.  All submissions must
be made electronically in Word, PDF, or PostScript format.  Each
submission should include the title, all authors' names, affiliations,
and complete postal and electronic mail addresses, together with an
abstract not exceeding 200 words and a list of 4-5 keywords.  Final
versions of accepted papers will be limited to 10 pages for long papers
and 5 pages for short papers in the IEEE proceedings format described at
<http://computer.org/cspress/instruct.htm>.


Important Dates:
    Jun.  1, 2000    Papers due
    Jun.  1, 2000    Proposals for Tool demo due
    Jul. 15, 2000    Acceptance Notification
    Aug. 15, 2000    Camera Ready Manuscript due

Organizers:
    Honorary Chair
           Richard A. DeMillo, Telcordia Technologies
           Richard J. Lipton,  Princeton University
    General Chair
           Aditya P. Mathur, Purdue University
           Email: apm@cs.purdue.edu
    Program Chair
           W. Eric Wong, Telcordia Technologies
           Email: ewong@research.telcordia.com
    ISSRE 2000 Liaison
           Allen P. Nikora, Jet Propulsion Laboratory, NASA

Program committee:
     John A. Clark, University of York, UK
     Bob Horgan, Telcordia Technologies, USA
     Bill Howden, University of California at San Diego, USA
     Kim N. King, Georgia State University, USA
     Jose C. Maldonado, University of Sao Paulo at Sao Carlos, Brazil
     Mike McCracken, Georgia Tech, USA
     Jeff Offutt, George Mason University
     Martin Woodward, University of Liverpool, UK

Sponsors:
     IEEE Reliability Society
     Telcordia Technologies
     Software Engineering Research Center

========================================================================

                         INSTRUCTIONS FOR LIFE

Sent by George Lindamood, with instructions to send it on to at least
15 people to preserve the good karma.  Um, QTN circulation is 9K+,
so...

 1. Take into account that great love and great achievements involve
    great risk.

 2. When you lose, don't lose the lesson.

 3. Follow the three R's: Respect for self, Respect for others and
    Responsibility for all your actions.

 4. Remember that not getting what you want is sometimes a wonderful
    stroke of luck.

 5. Learn the rules so you know how to break them properly.

 6. Don't let a little dispute injure a great friendship.

 7. When you realize you've made a mistake, take immediate steps to
    correct it.

 8. Spend some time alone every day.

 9. Open your arms to change, but don't let go of your values.

10. Remember that silence is sometimes the best answer.

11. Live a good, honorable life. Then when you get older and think back,
    you'll be able to enjoy it a second time.

12. A loving atmosphere in your home is the foundation for your life.

13. In disagreements with loved ones, deal only with the current
    situation. Don't bring up the past.

14. Share your knowledge. It's a way to achieve immortality.

15. Be gentle with the earth.

16. Once a year, go someplace you've never been before.

17. Remember that the best relationship is one in which your love for
    each other exceeds your need for each other.

18. Judge your success by what you had to give up in order to get it.

19. Approach love and cooking with reckless abandon.

========================================================================

                         Call for Participation

   The Workshop on Internet-scale Software Technologies (TWIST 2000)

      "Organizational and Technical Issues in the Tension Between
      Centralized and Decentralized Applications on the Internet"

                            July 13-14, 2000

                    Institute for Software Research
                    University of California, Irvine
                        Irvine, California, USA

                  <http://www.isr.uci.edu/twist2000/>


The goal of TWIST 2000 is to substantively explore design tensions
between centralizing and decentralizing forces on the Internet, the pros
and cons of centralized and decentralized architectures, and the
long-term implications that lead architects to design one way or the
other.

Many of the most successful applications on the Internet today are
architecturally centralized. Among these are eBay, AOL, and Amazon.com.
The success of these centralized architectures is surprising to some,
given the fundamentally decentralized way the Internet itself and the
World Wide Web work.

Alternatively, many companies and research projects have advocated
decentralized applications. Such applications are touted as having the
advantages of robustness, scalability based upon replication (rather
than just raw speed), resource sharing, and ability to span trust
domains. Applications of the decentralized approach include SETI@Home
(parallel scientific computing) and the Air Traffic Control system
(distributed command and control).

Many applications employ a mixed strategy, including financial trading
and email. Consider how Travelocity, for example, is implemented as a
decentralized Web application wrapping the centralized Sabre
reservations service. Other applications exhibit both strategies
depending on the layer of abstraction considered: the Domain Name
Service is a centralized monopoly of names in a decentralized database,
or how Akamai appears as a single global Web cache to a browser but
internally relies on globally distributed servers, or eBay, a
centralized service enabling wildly decentralized marketplaces.

We seek answers to such questions as:

  - Can centralized applications continue to scale with the growth of
    Internet users, traffic, types of services, and customer base?
  - Can existing centralized approaches continue to grow unabated, or
    will they reach hard limits?
  - If they can grow unabated, then how can this be accomplished and how
    does it impact decentralized application architecture and
    development?

Issues to consider include:

  - At what levels of an application's design should distribution be
    employed?
  - What are the key distinguishing characteristics of services
    (applications) for which centralized architectures (exploiting
    Moore's Law) will continue to suffice?
  - Under what circumstances are decentralized architectures superior?
    Necessary?
  - What sort of application spaces do applications such as Internet
    phones/smartphones have?

Axes of influence include:

  - Economic and business models.
  - Trust.
  - Robustness/fault-tolerance.
  - Scale.
  - Problem characteristics.
  - Democratization. Participants often vote their resources by deciding
    to share information or compute cycles.

Attendance

Attendance at the workshop is by invitation only, based on submission of
an informal statement of your interests.  We also encourage submission
of 1-5 page position papers, which will be distributed to the workshop
attendees in advance.  Submitters of position papers will be given
priority for attendance invitations.

Submission details and deadlines are available at the workshop web site:

        <http://www.isr.uci.edu/twist2000/>

Workshop Report

The workshop organizers will produce a report after the workshop, which
will be submitted for widespread publication.  No proceedings will be
produced.

Sponsored by

UC Institute for Software Research         <http://www.isr.uci.edu/>

For More Information

Debra A. Brodbeck ISR Technical Relations Director brodbeck@uci.edu
(949) 824-2260

========================================================================

           Call for Papers: Special issue on Web Engineering

IEEE Multimedia seeks submissions for a special issue on Web
Engineering, to appear in early 2001.

The issue aims to assess the problems of Web-based application system
development and to present approaches, methods, and tools for systematic
development of complex Web-based applications. It would also address the
challenges of developing Web-based front-end and back-end systems,
interfaces to legacy systems, distributed databases and real-time
systems, and the management of distributed development.

Submissions are due electronically by 1 June 2000.

Guest Editors:
Athula Ginige, Univ of Western Sydney;  a.ginige@uws.edu.au
San Murugesan, Univ of Western Sydney; s.murugesan@uws.edu.au

For further details see the Web page at
<http://vision.macarthur.uws.edu.au/multimedia-WebE/> or contact the
Guest Editors.

Dr San Murugesan
Dept of Computing and Information Systems
University of Western Sydney Macarthur
Campbelltown NSW 2560, Australia
 Phone: +61-2-4620 3513
 Fax:   +61-2-4626 6683
 email: s.murugesan@uws.edu.au
 web page: <http://fistserv.macarthur.uws.edu.au/san/>

========================================================================
------------>>>          QTN SUBMITTAL POLICY            <<<------------
========================================================================

QTN is E-mailed around the 15th of each month to subscribers worldwide.
To have your event listed in an upcoming issue E-mail a complete
description and full details of your Call for Papers or Call for
Participation to "ttn@sr-corp.com".

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the January issue of
  QTN On-Line should be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc.; items may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; QTN disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR
logo are trademarks or registered trademarks of Software Research, Inc.
All other systems are either trademarks or registered trademarks of
their respective companies.

========================================================================
----------------->>>  QTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to QTN, to CANCEL a current subscription, to CHANGE an
address (a CANCEL and a SUBSCRIBE combined) or to submit or propose an
article, use the convenient Subscribe/Unsubscribe facility at:

         <http://www.soft.com/News/QTN-Online/subscribe.html>.

Or, send E-mail to "qtn@sr-corp.com" as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

           subscribe your-E-mail-address

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

           unsubscribe your-E-mail-address

   NOTE: Please, when subscribing or unsubscribing via email, type YOUR
   email address, NOT the phrase "your-E-mail-address".

               QUALITY TECHNIQUES NEWSLETTER
               Software Research, Inc.
               1663 Mission Street, Suite 400
               San Francisco, CA  94103  USA

               Phone:          +1 (415) 861-2800
               Toll Free:      +1 (800) 942-SOFT (USA Only)
               Fax:            +1 (415) 861-9801
               E-mail:         qtn@sr-corp.com
               WWW:            <http://www.soft.com/News/QTN-Online>

                               ## End ##