Senate House

3rd Workshop on Advances in Model Based Testing (A-MOST 2007)

co-located with the
ISSTA 2007 
International Symposium on Software Testing and Analysis

London, United Kingdom

July 9-12, 2007


Workshop Organizers

Christopher Robinson-Mallett
Fraunhofer IESE, Germany
Robert M. Hierons
Brunel University, UK
Steve Counsell
Brunel University, UK

Steering Committee
Auguston, Mikhail
Naval Postgraduate School, USA

Briand, Lionel
Carleton University, Canada

Bernstein, Larry
Stevens Institute of Technology, USA

Dalal, Siddhartha R.
Xerox Corporation, USA

Harrold, Mary Jean
Georgia Tech, USA

Hartman, Alan
IBM Research, Israel

Heitmeyer, Constance
Naval Research Laboratory, USA

Jain, Ashish
Telcordia Technologies, USA

Mathur, Aditya
Purdue University, USA

Poore, Jesse
University of Tennessee, USA

Pretschner, Alexander
ETH Zürich, Switzerland

Program Committee
Alexander, Roger   
Washington State University, USA
Auguston, Mikhail   
Naval Postgraduate School, USA
Baker, Paul
Motorola, UK
Baudry, Benoit
INRIA/IRISA Rennes, France
Bernstein, Larry
Stevens Institute of Technology, USA
Binder, Robert   
mVerify, USA
Bogdanov, Kirill
University of Sheffield, UK
Colbourn, Charles
Arizona State University, USA
Conrad, Mirko
The MathWorks, Munich, Germany
Frantzen, Lars
Radboud University Nijmegen, NL
Gao, Jerry
San Jose State University, USA
Ghosh, Sudipto
Colorado State University, USA
Grieskamp, Wolfgang
Microsoft Research, Redmond, USA
Groz, Roland
Hartman, Alan
IBM Research, Israel
Heitmeyer, Constance
Naval Research Laboratory, USA
Jain, Ashish
Telcordia Technologies, USA
Leppänen, Sari
Nokia Research Center, Finland
Liggesmeyer, Peter
University of Kaiserslautern, Germany
Nielsen, Brian
Aalborg University, Denmark
Nunez Garcia, Manuel
Universidad Complutense de Madrid, Spain
Offutt, Jeff
George Mason University, USA
Oshana, Robert
Texas Instruments, USA
Pretschner, Alexander
ETH Zürich, Switzerland
Reid, Stuart
Cranfield University, UK
Robinson, Harry
Google, USA
Utting, Mark
University of Waikato, New Zealand
Weise, Carsten
RWTH Aachen, Germany
Willcock, Colin
Nokia, Finland

Important Dates

Submission deadline: 20th April 2007 (submission closed on 25th April 2007)
Notification: 25th May 2007 (delayed until 31st May 2007)
Workshop: 9th July 2007

For workshop registration, hotel reservations, and visa letter requests, please refer to the ISSTA website.

Workshop proceedings will be published in the ACM Digital Library. Selected authors will be invited to submit an extended version of their paper to a journal after the workshop. For detailed information on submission, refer to the Submission Guidelines at the bottom of this page.

Workshop Program

9:00-9:30 Welcome

9:30-11:00 Session 1: Test Generation

Kicillof, Grieskamp, Tillmann, Braberman
Achieving Both Model and Code Coverage with Automated Gray-Box Testing

Masson, Julliand, Plessis, Jaffuel, Debois
Automatic Generation of Model Based Tests for a Class of Security Properties

Combining Test Case Generation for Component and Integration Testing

11:00-11:30 Coffee

11:30-13:00 Session 2: Regression Testing and Prioritization

Korel, Koutsogiannakis, Tahat
Model-Based Test Prioritization Heuristic Methods and their Evaluation

Farooq, Iqbal, Malik, Nadeem
An Approach for Selective State-Machine based Regression Testing

Chen, Probert, Ural
Model-based Regression Test Suite Generation Using Dependence Analysis

13:00-14:30 Lunch

14:30-16:00 Session 3: Temporal Logics and Model Checking

Fraser, Wotawa
Using LTL Rewriting to Improve the Performance of Model-Checker Based Test-Case Generation

Wijesekera, Sun, Ammann, Fraser
Relating Counterexamples to Test Cases in CTL Model Checking Specifications

Test Case Generation from Formal Models through Abstraction Refinement and Model Checking

16:00-16:30 Coffee

16:30-18:00 Session 4: Supporting MBT

Bouquet, Grandpierre, Legeard, Peureux, Utting, Vacelet
A subset of precise UML for Model-based Testing

Naslavsky, Ziv, Richardson
Towards Traceability of Model-based Testing Artifacts

Weiglhofer, Aichernig, Peischl, Wotawa
Test Purpose Generation in an Industrial Application

Workshop Topics and Goals

The increasing use of software and the growing complexity of systems, in terms of size, heterogeneity, autonomy, physical distribution, and dynamism, make focused software system testing a challenging task. Recent years have seen increasing industrial and academic interest in the use of models for designing and testing software. Success has been reported using a range of model types and a variety of specification formats, notations, and formal languages, such as UML, SDL, and Z. A-MOST 07 will bring together researchers and practitioners interested in Model Based Testing (MBT).
The use of models for designing and testing software is currently one of the most salient industrial trends with significant impact on the development and testing processes. Model-based tools and methods from object-oriented software engineering, formal methods, and other mathematical and engineering disciplines have been successfully applied and continue to converge into comprehensive approaches to software and system engineering.
The execution of software using test cases or sequences derived manually or automatically from models, often referred to as MBT, is a promising scientific and industrial approach to coping with growing software system complexity. Modelling requires a substantial investment, and practical, scalable MBT solutions can help leverage this investment. The testing models may be adapted from system design models or devised specifically to support MBT. Naturally, the greatest benefits are often obtained when test generation is automated, but many practitioners report that the modelling process itself is of value, often uncovering requirements issues.
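As a minimal illustration of the idea, and not any particular tool or technique presented at the workshop, the following Python sketch derives test sequences from a hypothetical finite-state model of a login component. The state names, events, and the transition-coverage criterion are all assumptions chosen for this example:

```python
from collections import deque

# Hypothetical behavioural model: (current state, event) -> next state.
TRANSITIONS = {
    ("LoggedOut", "login_ok"):   "LoggedIn",
    ("LoggedOut", "login_fail"): "Locked",
    ("LoggedIn",  "logout"):     "LoggedOut",
    ("Locked",    "reset"):      "LoggedOut",
}

def shortest_path(fsm, start, goal):
    """BFS over the FSM graph; returns the shortest event sequence
    leading from `start` to `goal` (empty list if start == goal)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for (src, event), dst in fsm.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [event]))
    raise ValueError(f"{goal} is unreachable from {start}")

def transition_coverage_tests(fsm, initial):
    """One abstract test case per transition: drive the machine from the
    initial state to the transition's source state, then fire its event."""
    tests = []
    for (src, event), _dst in fsm.items():
        tests.append(shortest_path(fsm, initial, src) + [event])
    return tests
```

Each generated event sequence would then be mapped onto concrete API calls or user actions against the implementation under test; stronger criteria (e.g., transition-pair coverage) follow the same pattern with longer paths.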
The scale of industrial software demands the model-based construction of software and systems as compositions of independent and reusable components. In this engineering paradigm, complex system functionality arises from the composition of many component services. For such systems, model-based testing may significantly improve component acceptance and move component integration testing towards a canonical validation and certification of complete systems.
Automation of software development and software testing on the basis of executable models and simulation promises significant reductions in fault-removal cost and development time, a goal also promoted by the Object Management Group (OMG) as part of its drive towards Model Driven Architecture (MDA). Automating MBT, in turn, requires changes in requirements analysis, development, and testing processes, which demand combined efforts from research and industry towards a broadly accepted solution.
A-MOST will focus on three main areas: the models used in MBT; the processes, techniques, and tools that support MBT; and evaluation, which includes both the evaluation of software using MBT and the evaluation of MBT itself. These areas can be broken down into the following topics.

Models

  • Models for component, integration and system testing
  • Product-line models
  • (Hybrid) embedded system models
  • Systems-of-systems models
  • Executable models and simulation
  • Environment and use models
  • Non-functional models
  • Testing from architectural models

Processes, Methods and Tools

  • Model-based test generation algorithms
  • Tracing from requirements model to test models
  • Performance and predictability of model-driven development
  • Test model evolution during the software lifecycle
  • Risk-based approaches for MBT
  • Generation of testing-infrastructures from models
  • Combinatorial approaches for MBT
  • Statistical testing

Experiences and Evaluation

  • Non-functional/Quantitative MBT
  • Estimating dependability (e.g., security, safety, reliability) using MBT
  • Coverage metrics and measurements for structural and (non-)functional models
  • Cost of testing, economic impact of MBT
  • Empirical validation, experiences, case studies using MBT

The goal of this workshop is to bring together researchers and practitioners to discuss the current state of the art and practice, as well as future prospects, for model-based software testing. We invite submissions of full-length papers that describe new research, tools, technologies, and industry experience, as well as position papers.

Submission Guidelines
Papers should be submitted in PDF format and should not exceed 11 pages, including all text, figures, references, and appendices. Please clearly indicate whether the paper is a research paper, an experience report, or a position paper. The results described must be unpublished and must not be under review elsewhere. Each submitted paper must conform to the ACM Format and Submission Guidelines. At least three members of the Program Committee will review each submission.