ISERN Program

Sunday 16th September
18.00-20.00 Registration and Get-together
 
DAY 1: Monday 17 September
8.30-9.00 Registration
9.00-10.00 Welcome and new introductions
Chair: Dieter Rombach
10.00-10.30 Coffee break
10.30-12.00 Top Ten Unsolved Problems in ESE: Introduction
Chair: Natalia Juristo
  Introduction to the session and its goal: to elaborate a list of the 10 most important problems the ESE community should focus on in the coming years. The list will be elaborated following the Delphi approach. In this first session, participants are asked to propose at least 5 problems they think ESE should try to solve in the near future.
12.00-13.30 Lunch
13.30-15.00 Ongoing collaborations
Chair: Claes Wohlin
  Short presentations describing collaboration among ISERN members:

GQM+Strategies. Fraunhofer IESE (Germany), University of Maryland/Fraunhofer Center-Maryland (USA), University of Oulu (Finland)

European Master on Software Engineering. Blekinge Institute of Technology (Sweden), University of Bolzano-Bozen (Italy), University of Kaiserslautern (Germany), Universidad Politécnica de Madrid (Spain), Universidade de São Paulo - São Carlos (Brazil).

Evolution of Features and their Dependencies - An Explorative Study in OSS. University of Bolzano-Bozen (Italy), University of Calgary (Canada).

A Framework for Improving Technology Adoption Decision Making. Fraunhofer IESE (Germany), Universidad Politécnica de Madrid (Spain)
15.00-15.30 Coffee break
15.30-16.30 Searching for collaboration
Chair: Laurie Williams
 

Poster session. Each participant has 5 minutes to present a poster describing a specific topic on which they seek collaboration. For the rest of the session, attendees are free to talk to the presenters.

A proposal for a collaborative experiment on integrated project management, using a cloud environment with an in-process measurement facility. Yoshiki Mitani. Information-Technology Promotion Agency (IPA), Japan

Proposal for replication of an experiment on real-time machine translation. Filippo Lanubile. University of Bari, Italy.

GQM+Strategies. Vic Basili, University of Maryland/Fraunhofer Center-Maryland, USA (4 posters)

Empirical investigation of safety engineering methods. Stefan Wagner. University of Stuttgart, Germany.

16.30-17.15 ISERN recommended core reading list
Chair: Per Runeson
 

During the Banff meeting, Per Runeson and Sira Vegas were tasked with compiling a core reading list for ISERN members. Although the list is aimed mainly at new members, it is of interest to the whole community. The initial list will be presented, and participants will be asked to provide feedback.

17.15-17.30 Wrap-up and plan for Tuesday
17.30-18.30 ISERN SC Meeting (by invitation)
19.00-23.00 ISERN Dinner
 
DAY 2: Tuesday 18 September
9.00-10.00 Top Ten Unsolved Problems in ESE: The Voting
Chair: Natalia Juristo

Presentation of the list of suggested open problems in ESE. Each member will vote on the proposed problems.
10.00-10.30 Coffee break
10.30-12.00 Parallel workshops:

GQM+Strategies (Restricted)
Chairs: Vic Basili, Markku Oivo, Jens Heidrich

Empirically founded Requirements Engineering Improvement (Open)
Chairs: Stefan Wagner, Daniel Mendez

What are (currently) the best practices in SE? (Open)
Chair: Andreas Jedlitschka

Description (GQM+Strategies): GQM+Strategies is an evolved version of GQM that aims at aligning and communicating an organization's goals and strategies across all levels: generating goals from strategies, making all goals measurable, and interpreting high-level measures in terms of the measurable goals that support them. Several organizations have been developing, evolving, and applying the approach in different environments. The goal of this workshop is to coordinate what is being done at the various organizations and to lay out a research and application plan for future work based on what has been learned so far.
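
As a purely illustrative aside (not part of the workshop materials), the following minimal Python sketch shows one hypothetical way to model the linkage the description refers to: an organizational goal is pursued through strategies, each strategy is supported by measurable (GQM-style) goals, and the high-level strategy is interpreted via the measures beneath it. All class names, goals, and numbers are invented.

    # Hypothetical sketch of the GQM+Strategies goal/strategy/measure linkage.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class MeasurementGoal:
        question: str                  # GQM question refining the goal
        metric: str                    # metric that answers the question
        target: float                  # threshold used for interpretation
        value: Optional[float] = None  # measured value, filled in later

    @dataclass
    class Strategy:
        name: str
        goals: list = field(default_factory=list)

        def satisfied(self):
            # Interpret the strategy in terms of its measurable goals.
            return all(g.value is not None and g.value >= g.target
                       for g in self.goals)

    @dataclass
    class OrganizationalGoal:
        name: str
        strategies: list = field(default_factory=list)

    # Invented example: one top-level goal, one strategy, one measure.
    top = OrganizationalGoal("Improve product quality")
    review = Strategy("Introduce systematic code reviews")
    review.goals.append(MeasurementGoal(
        question="Is review coverage sufficient?",
        metric="reviewed changes / all changes",
        target=0.8, value=0.85))
    top.strategies.append(review)

    for s in top.strategies:
        print(f"{top.name} / {s.name}: satisfied={s.satisfied()}")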

Description (Empirically founded Requirements Engineering Improvement): The workshop aims at elaborating existing methods, and potential fields of further investigation, in empirical software engineering that can be transferred to software process improvement in requirements engineering. The overall goal is to build a thematic map and, on the basis of this map, to discuss potential new research collaborations.

Description (What are (currently) the best practices in SE?): If we compare ourselves to the evidence-based approach in medicine, we need to accept that the provision of evidence-based best practices cannot be placed on the shoulders of just a few organizations (people). In addition, it might be necessary to go beyond the borders of ESE and involve domain experts and practitioners. The session therefore aims at discussing and identifying the general interest in a community endeavor. Criteria for deciding what counts as a best practice, or when a technology earns the label "best practice", need to be discussed. Finally, we are interested in ideas on potential SE topics that could serve as pilots, as well as in forms of cooperation and means of dissemination.

12.00-13.30 Lunch
13.30-15.00 Parallel workshops:

Qualitative syntheses (Restricted)
Chairs: Carolyn Seaman, Liliana Guzmán

Coding Contests – A Foundation for Large Scale Experiments in Empirical Software Engineering? (Open)
Chairs: Stefan Biffl, Dietmar Winkler, Christoph Steindl, Martin Kitzler

Professional ethics when writing scientific papers (Open)
Chair: Reidar Conradi
 

Description (Qualitative syntheses): Research synthesis aims at analyzing, combining, and summarizing the results of individual empirical studies on a research question. It attempts to confirm or build hypotheses and to support the prioritization and planning of future research. Research synthesis may also help practitioners understand the effects of a software technology, the contexts in which those effects can be expected, and how to adopt it more effectively.
In this session, we want to explore how to assess qualitative syntheses in SE. For this purpose, we plan to introduce and evaluate the reliability and validity of a checklist designed for assessing qualitative syntheses in SE.

Description (Coding Contests): In 2007, Catalysts started to organize coding contests [1] to investigate the research question of whether test-driven development is faster and leads to fewer defects. The contests have grown over the years from initially 60 participants to nearly 300 participants in 2012. Other organizations also run coding contests, whether for recruiting purposes (like Google's CodeJam and Challenge24), for crowd-sourcing purposes (like TopCoder), or simply to provide a stage for the best coders (like ACM coding contests). Since 2011, Catalysts has been gathering contest data (about the progress of each contestant, including source code per level, etc.) and has made that data available via public data servers [2].

Description (Professional ethics): Some issues:

1. Plagiarism (of others' work) [refs]. Very, very serious. Hard numbers would be useful here.
2. The Vancouver rules [refs: from medicine] state minimum conditions for being listed as a co-author. How common is "sneaking in" here? Note the nasty combination of #1 and #2.
3. Intentional omission of key information in scientific papers.

Based on recent experience with paper reviewing, some recurring situations are listed below.

3.1 Self-plagiarism. Context: the author(s) have already, before the actual CfP deadline, submitted, had accepted, or even published a *very* similar paper on the same subject, yet nothing is mentioned about this in the drafted or finished paper. How big must the "delta" be to warrant another paper on essentially the same subject?
    - Must the delta be over 50 percent?
    - Is it usually acceptable to upgrade a conference paper to a journal paper?

3.2 Wrong method. Example: "probably" applying a wrong statistical method to the given data, but with insufficient or muddled data to decide (see the sketch after these notes):
    - scale mix-up: ordinal vs. absolute scale (non-parametric vs. parametric tests?);
    - non-normal distribution for a t-test (skewed data?).

3.3 Heterogeneous data population. Example: data has been collected from several sources and/or with different criteria. This is seldom properly discussed when running tests on aggregated data that are not as homogeneous as claimed.

Lastly: how do we "document" actual cases of misconduct? Here, #1 and #3.1 are the easiest cases. Indeed, do we need a "witness protection program" for ESE? Are we all semi-crooks, and why? And is it better or worse in other disciplines?
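
As a side illustration of the 3.2 pitfall, here is a minimal Python sketch with invented data and thresholds (not part of the workshop materials): check the t-test's normality assumption first, and fall back to a rank-based non-parametric test on clearly skewed data.

    # Hypothetical sketch: choosing a parametric vs. non-parametric test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Invented, skewed samples, e.g. defect-fix times in hours for two groups.
    a = rng.lognormal(mean=1.0, sigma=0.8, size=40)
    b = rng.lognormal(mean=1.3, sigma=0.8, size=40)

    # Shapiro-Wilk tests the normality assumption behind the t-test.
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)

    if p_a > 0.05 and p_b > 0.05:
        stat, p = stats.ttest_ind(a, b)     # parametric: assumes normality
        name = "t-test"
    else:
        stat, p = stats.mannwhitneyu(a, b)  # non-parametric: rank-based
        name = "Mann-Whitney U"

    print(f"{name}: statistic={stat:.2f}, p={p:.3f}")
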
15.00-15.30 Coffee break
15.30-16.30 Workshops summary session
Chairs: Jeff Carver and Sira Vegas

A single session in which each workshop group describes its conclusions, outcomes, and future actions in 10 minutes.
16.30-17.00 Top Ten Unsolved Problems in ESE: Final results
Chair: Natalia Juristo

Final list of the 10 most-voted problems that the ESE community should solve within the next few years.
17.00-17.30 ISERN business
Chair: Dieter Rombach

This session includes the results of the “ISERN recommended core reading list” session.