MEMICS
Annual Doctoral Workshop on Mathematical and
Engineering Methods in Computer Science
 
organized jointly by Masaryk University
and Brno University of Technology, Czechia
 
October 22–24, 2010, Hotel Galant, Mikulov, Czechia
 

Levels of Realism: From Virtual Reality to Real Virtuality - Perception and Virtual Time
Alan Chalmers (University of Warwick, United Kingdom)
Andrej Ferko (Comenius University, Slovakia)
Realism in real time has long been a "holy grail" of the computer graphics community. While real-time performance is typically accepted as 25 fps and above, the definition of realism remains less clear. If we were able to simulate the physics of the real world in minute detail, it would be possible to achieve physically correct images. However, the amount of computation required for such physical accuracy in complex scenes precludes any possibility of achieving such images in reasonable time, let alone in real time, on a desktop computer for many years to come. Furthermore, there is no guarantee of realism, as these images do not take into account how humans perceive this information. Our perception of an environment is not only what we see; it may be significantly influenced by other sensory input, including sound, smell, touch, and even taste. If virtual environments are ever to be regularly used as a valuable tool for experimenting in the virtual world with confidence that the results are the same as would be experienced in the real world, then we need to be able to compute these environments so that they are perceptually equivalent to being "there" in the real world: so-called "there-reality", or real virtuality. We also discuss virtual time issues. This paper surveys promising efforts to date and identifies and discusses future research directions.

New Approaches to Fault Tolerant Systems Design
Andreas Steininger (Vienna University of Technology, Austria)
Fault tolerance is achieved by introducing redundancy. Redundancy can appear in different forms: space redundancy (additional hardware), information redundancy (additional information that helps to verify some data), and time redundancy (multiple sequential executions of the same code, and/or execution of additional verification code). Combinations of these redundancy types are also possible. Fault-tolerant system design based on space redundancy has a long tradition, and many generic architectures and concepts have been developed that have proven effective in traditional safety-critical application fields such as aerospace or (nuclear) power plants. However, the ongoing introduction of microelectronic systems for safety-relevant functions in cars raises new problems that cannot be solved by simply applying the existing approaches. Two main reasons for this are (i) the enormous cost pressure in the automotive industry and (ii) the large number of variants and configuration options for these systems. In my talk I will report on our experiences in this context. More specifically, I will present a dual-core architecture that we have developed and optimized together with the automotive industry. I will use this example to touch upon the topics of error detection in hardware, memory protection, comparison of (very simple) fault-tolerant computer architectures, common-cause faults, and fault-tolerance evaluation by means of fault injection.
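To make the redundancy types concrete, the following Python sketch illustrates two of them under simple assumptions: time redundancy as repeated execution of a deterministic computation with a comparison of the results, and information redundancy as a single parity bit guarding a data word. The function names are illustrative only and are not taken from any real safety framework or from the architecture discussed in the talk.

# Illustrative sketch of two redundancy patterns: time redundancy
# (repeated execution plus comparison) and information redundancy
# (a parity bit guarding a data word). All names are hypothetical.

def run_with_time_redundancy(computation, *args, runs=2):
    """Execute `computation` several times and compare the results.

    A mismatch indicates a transient fault during one of the runs;
    the caller must then retry or fall back to a safe state.
    """
    results = [computation(*args) for _ in range(runs)]
    if any(r != results[0] for r in results[1:]):
        raise RuntimeError("redundant executions disagree: transient fault suspected")
    return results[0]


def with_parity(word: int) -> int:
    """Information redundancy: append a single parity bit to a data word."""
    parity = bin(word).count("1") & 1
    return (word << 1) | parity


def check_parity(coded: int) -> int:
    """Verify the parity bit and return the original word, or fail loudly."""
    word, parity = coded >> 1, coded & 1
    if (bin(word).count("1") & 1) != parity:
        raise RuntimeError("parity mismatch: data word corrupted")
    return word


if __name__ == "__main__":
    # Duplicate execution of a deterministic control computation.
    print(run_with_time_redundancy(lambda x: 2 * x + 1, 20))   # -> 41
    # Round-trip a word through the parity encoding.
    print(check_parity(with_parity(0b101101)))                 # -> 45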

Recent Results on DFA Minimization and Other Block Splitting Algorithms
Antti Valmari (Tampere University of Technology, Finland)
Hopcroft's famous DFA minimization algorithm runs in O(nα log n) time, where n is the number of states and α is the number of different labels. In 2008, an improvement to Hopcroft's algorithm was published that runs in O(m log n) time, where m is the number of transitions. This is an improvement, because m is at most nα and is often much smaller. The improvement was later applied to the so-called Paige–Tarjan algorithm, yielding an O(m log n) time algorithm for bisimulation minimization in the presence of transition labels. Another improvement, published in 2010, significantly simplified an existing O(m log n) time algorithm for optimizing Markov chains. All these algorithms are block splitting algorithms. This talk presents these results and some other small improvements that apply to block splitting algorithms.
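As a concrete illustration of block splitting, the Python sketch below implements the simpler Moore-style partition refinement rather than Hopcroft's worklist-based O(nα log n) algorithm: starting from the accepting/non-accepting split, blocks are repeatedly split according to the blocks reached on each symbol until the partition is stable. The identifier minimize_dfa and the example automaton are assumptions made for this sketch.

def minimize_dfa(states, alphabet, delta, accepting):
    """Partition `states` into Myhill-Nerode equivalence classes.

    `delta` maps (state, symbol) -> state, `accepting` is the set of
    accepting states.  Returns a dict mapping each state to a class id.
    """
    # Initial partition: accepting vs. non-accepting states.
    block = {q: (q in accepting) for q in states}
    while True:
        # Signature of a state: its current block plus the blocks reached
        # on every symbol.  Two states stay together iff signatures match.
        signatures = {
            q: (block[q],) + tuple(block[delta[(q, a)]] for a in alphabet)
            for q in states
        }
        ids, new_block = {}, {}
        for q in states:
            new_block[q] = ids.setdefault(signatures[q], len(ids))
        if len(ids) == len(set(block.values())):
            return new_block   # no block was split: the partition is stable
        block = new_block


if __name__ == "__main__":
    # Small example: states 1 and 2 are equivalent and end up in one block.
    states = [0, 1, 2, 3]
    alphabet = ("a", "b")
    delta = {(0, "a"): 1, (0, "b"): 2,
             (1, "a"): 3, (1, "b"): 3,
             (2, "a"): 3, (2, "b"): 3,
             (3, "a"): 3, (3, "b"): 3}
    print(minimize_dfa(states, alphabet, delta, accepting={3}))
    # -> {0: 0, 1: 1, 2: 1, 3: 2}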

Model-Based Segmentation of Biomedical Images
Stefan Wörz (University of Heidelberg, BIOQUANT, IPMB, and DKFZ Heidelberg, Germany)
A central task in biomedical image analysis is the segmentation and quantification of 3D image structures. A large variety of segmentation approaches is based on deformable models, which make it possible, for example, to incorporate a priori information about the image structures. This talk gives an overview of different types of deformable models such as active contour models, active shape models, active appearance models, and analytic parametric models. Moreover, the talk presents in more detail 3D parametric intensity models, which are utilized in different approaches for high-precision localization and quantification of 3D image structures. In these approaches, the intensity models are used in conjunction with an accurate, efficient, and robust model-fitting scheme. These segmentation approaches have been successfully applied to different biomedical applications such as the localization of 3D anatomical point landmarks in 3D MR and 3D CT images, the quantification of vessels in 3D MRA and 3D CTA images, and the segmentation and quantification of cells and subcellular structures in 3D microscopy images.
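As a minimal illustration of fitting a parametric intensity model, the Python sketch below fits a simple 3D Gaussian blob model (background plus an isotropic Gaussian) to an image patch by least squares using SciPy. The model, parameter names, and synthetic data are assumptions made for the example; the models and fitting scheme used in the talk's applications (e.g. for vessels or anatomical landmarks) are more elaborate.

# Hypothetical sketch: least-squares fitting of a 3D Gaussian intensity
# model to an image patch, yielding a sub-voxel position estimate.
import numpy as np
from scipy.optimize import least_squares

def gaussian_blob(params, coords):
    """Intensity model: background + amplitude * isotropic 3D Gaussian."""
    x0, y0, z0, sigma, amplitude, background = params
    x, y, z = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2
    return background + amplitude * np.exp(-r2 / (2.0 * sigma ** 2))

def fit_blob(patch, initial_params):
    """Fit the intensity model to a 3D image patch (numpy array)."""
    coords = np.meshgrid(*[np.arange(s) for s in patch.shape], indexing="ij")
    def residuals(p):
        return (gaussian_blob(p, coords) - patch).ravel()
    return least_squares(residuals, initial_params)

if __name__ == "__main__":
    # Synthetic patch: a blurred spot centred at (7, 9, 8) plus noise.
    truth = (7.0, 9.0, 8.0, 2.0, 100.0, 10.0)
    grid = np.meshgrid(*[np.arange(16)] * 3, indexing="ij")
    patch = gaussian_blob(truth, grid) + np.random.normal(0, 1, (16, 16, 16))
    fit = fit_blob(patch, initial_params=(8, 8, 8, 1.5, 80.0, 0.0))
    print("estimated centre:", fit.x[:3])   # close to (7, 9, 8)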

 
 
Copyright © FI MU and FIT VUT, Brno, 2005–2010