2012

NEWS

December 3, 2012

Conference photos

Conference photos are available online.

October 25–28, 2012 • Loucky monastery • Znojmo • Czech Republic

Invited speakers

Dirk Beyer

University of Passau, Germany

CPAchecker: The Configurable Software-Verification Platform

CPAchecker is an open-source software-verification framework that is
built on the concepts of Configurable Program Analysis (CPA).
CPAchecker integrates most of the state-of-the-art technologies for
software model checking as independent components in the framework.
Examples are counterexample-guided abstraction refinement (CEGAR),
lazy predicate abstraction, interpolation-based refinement,
large-block encoding, and bounded model checking. Every abstract
domain, together with the corresponding operations, implements the
interface of configurable program analysis. The main algorithm
is configurable to perform a reachability analysis on user-specified
combinations of existing CPAs. In software verification, it takes a
considerable amount of effort to convert a verification idea into
actual experimental results; we aim to accelerate this process.
CPAchecker makes it possible to study the influence of different
concepts and ideas on verification performance in isolation from
other parameters, and we will demonstrate this on a few examples,
in particular using our predicate analysis.
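
The core of such a configurable reachability analysis can be sketched in a
few lines of Python. The sketch below is purely illustrative (it is not
CPAchecker code, and all names are made up); it shows the waitlist loop
that is parameterized by the transfer, merge, and stop operators of a CPA.

# Illustrative sketch of a CPA-style reachability loop (not CPAchecker
# code). A concrete analysis supplies the three operators.

def cpa_reachability(initial, transfer, merge, stop):
    """Explore abstract states until a fixpoint is reached.

    transfer(state)       -> iterable of abstract successor states
    merge(new, old)       -> combined state (or old, for no merging)
    stop(state, reached)  -> True if state is covered by reached
    """
    reached = {initial}
    waitlist = [initial]
    while waitlist:
        state = waitlist.pop()
        for succ in transfer(state):
            # Let the successor be merged into already-reached states.
            for old in list(reached):
                merged = merge(succ, old)
                if merged != old:
                    reached.discard(old)
                    reached.add(merged)
                    waitlist.append(merged)  # (a full version also drops old)
            # Add the successor only if it is not already covered.
            if not stop(succ, reached):
                reached.add(succ)
                waitlist.append(succ)
    return reached

# Example: plain graph reachability with merge-sep and stop-by-membership.
edges = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(sorted(cpa_reachability(
    1,
    transfer=lambda s: edges[s],
    merge=lambda new, old: old,            # merge-sep: never combine states
    stop=lambda s, reached: s in reached,  # covered if seen before
)))  # prints [1, 2, 3, 4]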

Dieter Gollmann

Technische Universität Hamburg-Harburg

Security for Cyber-physical Systems

Cyber-physical systems are characterized by an IT infrastructure controlling effects in the physical world.
Attacks are intentional actions that try to cause undesired effects in the physical world. We will examine to
what extent traditional IT security techniques can protect against attacks on cyber-physical systems, and
which additional measures could be deployed to strengthen their security.

Said Hamdioui

Delft University of Technology

Testing Embedded Memories in the Nano-Era: from
Defects to Built-In Self-Test

The cost of memory testing increases with every generation of new memory chips. New technologies
are introducing new defect mechanisms that were unknown in the past. Precise fault modeling to
design efficient tests is therefore essential in order to keep the test cost and test time within
economically acceptable limits, while keeping a high product quality.

The objective is to provide attendees with an overview of fault modeling, test design, and BIST for memory devices in the nano-era. Traditional fault modeling and recent developments in fault models for current and future technologies are covered. Systematic methods for designing and optimizing tests are presented, supported by industrial results from different companies. The impact of algorithmic (e.g., data-background) and non-algorithmic (e.g., voltage) stresses is explored in order to gain better insight into test effectiveness. BIST architectures are covered. Finally, future challenges in memory testing (failure mechanisms, test, diagnosis, etc.) are highlighted.
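
As a toy illustration of the algorithmic side of memory testing, the
following Python sketch (illustrative only, not material from the talk)
runs the classical March C- test over a small memory model and shows how
the read checks expose a stuck-at-0 cell.

# Illustrative simulation of the March C- memory test (not from the talk).
# A stuck-at-0 cell ignores writes of 1 and is caught by the read checks.

class FaultyMemory:
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.stuck = stuck_at_zero  # address of a stuck-at-0 cell, or None

    def write(self, addr, value):
        self.cells[addr] = 0 if addr == self.stuck else value

    def read(self, addr):
        return self.cells[addr]

def march_c_minus(mem, size):
    """March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0)}.
    Returns a list of (address, expected, got) mismatches."""
    errors = []
    up, down = range(size), range(size - 1, -1, -1)

    def element(order, ops):
        for addr in order:
            for op, val in ops:
                if op == 'w':
                    mem.write(addr, val)
                else:  # read and compare against the expected value
                    got = mem.read(addr)
                    if got != val:
                        errors.append((addr, val, got))

    element(up,   [('w', 0)])
    element(up,   [('r', 0), ('w', 1)])
    element(up,   [('r', 1), ('w', 0)])
    element(down, [('r', 0), ('w', 1)])
    element(down, [('r', 1), ('w', 0)])
    element(up,   [('r', 0)])
    return errors

mem = FaultyMemory(size=8, stuck_at_zero=3)
print(march_c_minus(mem, 8))  # reports address 3 twice: [(3, 1, 0), (3, 1, 0)]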

Colin McDiarmid

Corpus Christi College Oxford

Quicksort and large deviations

Quicksort is probably the most familiar and important randomised algorithm in computer science. It is well known
that the expected number of comparisons on any input of $n$ distinct keys is asymptotically $2 n \ln n$. Crucially,
the probability of a large deviation above this value is very small. This probability was well estimated in 1996 by
Hayward and the speaker, with rather an ad-hoc proof: we shall revisit this result in the light of more recent
results on concentration.
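
To make the $2 n \ln n$ figure concrete, a small Python experiment
(illustrative only) can count the comparisons made by randomised quicksort
and compare the empirical average with $2 n \ln n$; for moderate $n$ the
two values already have the same order of magnitude.

# Illustrative experiment: count comparisons of randomised quicksort
# and compare the average with the asymptotic value 2 * n * ln(n).
import math
import random

def quicksort_comparisons(keys):
    """Sort a list of distinct keys; return the number of key comparisons."""
    if len(keys) <= 1:
        return 0
    pivot = random.choice(keys)
    smaller = [k for k in keys if k < pivot]
    larger = [k for k in keys if k > pivot]
    # Each non-pivot key is compared with the pivot once.
    return (len(keys) - 1) \
        + quicksort_comparisons(smaller) + quicksort_comparisons(larger)

n, trials = 10_000, 20
avg = sum(quicksort_comparisons(random.sample(range(10 * n), n))
          for _ in range(trials)) / trials
print(avg, 2 * n * math.log(n))  # comparable values; the ratio tends to 1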

Peter Bro Miltersen

Aarhus University

Recent results on Howard's algorithm

Howard's algorithm (a.k.a. policy iteration) is a standard
algorithm for solving Markov decision processes. Furthermore,
generalizations of the algorithm solve various kinds of two-player games,
including parity games, concurrent reachability games and Shapley's
stochastic games. The worst case time complexity of the algorithm was
until recently poorly understood, but in recent years a surge of
publications on the topic has given us an almost complete understanding
of its complexity in the various settings in which it applies. We shall
give a survey of these results and explain how they unexpectedly led to
the answer to a classical open question concerning the complexity of a
randomized variant of the simplex algorithm for linear programming. We
shall also present the open problems that remain.
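
A minimal sketch of Howard's algorithm for a discounted Markov decision
process is given below (illustrative only; the MDP, discount factor, and
all names are made up): evaluate the current policy exactly by solving a
linear system, switch every state to a greedy action, and stop when the
policy no longer changes.

# Illustrative sketch of Howard's policy iteration on a tiny discounted MDP.
# P[s][a] is a list of (probability, next_state, reward) triples.
import numpy as np

def policy_iteration(P, gamma=0.9):
    n = len(P)
    policy = [0] * n  # start from an arbitrary policy

    def evaluate(policy):
        # Solve (I - gamma * T_pi) v = r_pi for the value of the current policy.
        T = np.zeros((n, n))
        r = np.zeros(n)
        for s in range(n):
            for prob, s2, reward in P[s][policy[s]]:
                T[s, s2] += prob
                r[s] += prob * reward
        return np.linalg.solve(np.eye(n) - gamma * T, r)

    while True:
        v = evaluate(policy)
        improved = [
            max(range(len(P[s])),
                key=lambda a: sum(p * (rew + gamma * v[s2])
                                  for p, s2, rew in P[s][a]))
            for s in range(n)
        ]
        if improved == policy:
            return policy, v
        policy = improved

# Two states, two actions: action 1 in state 0 risks a bad state but pays more.
P = [
    [[(1.0, 0, 1.0)], [(0.5, 0, 3.0), (0.5, 1, 0.0)]],  # state 0
    [[(1.0, 1, 0.0)], [(1.0, 0, 0.0)]],                 # state 1
]
print(policy_iteration(P))  # optimal policy [1, 1] with its value vector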

Simon Perdrix

CNRS, Laboratoire d'Informatique de Grenoble

Graph-based Quantum Secret Sharing

Secret sharing is a protocol, introduced independently by Shamir and
Blakley, for distributing a secret among n players. A secret sharing
protocol has a threshold k if any group of at least k players can recover
the secret, whereas any group of at most k-1 players has no information
about the secret. In this presentation, we will mainly focus on the case
where the protocol is described by a graph and the shared secret is a
quantum state (protocols introduced by Markham and Sanders). This family of
graph-based protocols is very promising in terms of physical implementation.
Unanimity graph-based protocols (i.e. the threshold is the total number of
players) are known. We present quasi-unanimity protocols (the ratio k/n
tends to 1 as n tends to infinity), and we prove the existence of protocols
with threshold k < 0.811n. This last result is proved using probabilistic
methods (the Lovász Local Lemma) and is non-constructive; however, we show
that a random graph has threshold at most 0.811n with high probability.
Finally, we show that computing the threshold of a given graph is hard (the
corresponding decision problem is NP-complete).
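
For contrast with the quantum graph-based protocols, the classical
threshold idea of Shamir can be sketched in a few lines of Python
(illustrative only): the secret is the constant term of a random
polynomial of degree k-1 over a prime field, so any k shares determine it
while fewer than k reveal nothing.

# Illustrative sketch of Shamir's k-out-of-n threshold secret sharing over
# a prime field (classical scheme; the graph-based quantum protocols in the
# talk work quite differently).
import random

PRIME = 2_147_483_647  # a prime modulus large enough for toy secrets

def share(secret, k, n):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(secret=42, k=3, n=5)
print(reconstruct(shares[:3]))  # 42: any 3 shares suffice
print(reconstruct(shares[:2]))  # an unrelated value: 2 shares reveal nothing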
