
Publications

This is a collection of all peer-reviewed publications which I (co-)authored, mostly with ACM and a few with IEEE, Springer, etc. I've taken care that all papers hosted here are as close as possible to the "official copy of record" (or whatever publishers call it); from approx. 2020 on, all publications are also open-access, so the copy hosted here is 100% identical to the publisher's version.

2024
PDF Bib
Y. Zhang, C.M. Guldbæk, C.F.D. Jensen, N.B. Hansen, F. Echtler
TableCanvas: Remote Open-Ended Play in Physical-Digital Environments
TEI '24: Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction. (2024-02-11)

Remote video communication is now part of everyday life, also for families. At the same time, children encounter digital devices at an early age, but studies indicate that physical play is still vital for their development. To support physical play at a distance, we introduce two prototypes built around projection displays, WallWizard and TableCanvas. Both allow users to play together remotely by combining physical and digital elements on shared surfaces. We evaluated our prototypes through an expert review, and, based on this study, we elaborate further on TableCanvas. We conducted a second qualitative user study with an updated prototype, focused on evaluating the remote aspect. The overall feedback was positive and suggested that the concept could facilitate and promote open-ended play, as well as support a successful remote play experience. Users also indicated additional potential use cases for board gaming, education, and work-related tasks.

PDF Bib
C. Getschmann, F. Echtler
LensLeech: On-Lens Interaction for Arbitrary Camera Devices
TEI '24: Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction. (2024-02-11)

Cameras provide a vast amount of information at high rates and are part of many specialized or general-purpose devices. This versatility makes them suitable for many interaction scenarios, yet they are constrained by geometry and require objects to keep a minimum distance for focusing. We present the LensLeech, a soft silicone cylinder that can be placed directly on or above lenses. The clear body itself acts as a lens to focus a marker pattern from its surface into the camera it sits on. This allows us to detect rotation, translation, and deformation-based gestures such as pressing or squeezing the soft silicone. We discuss design requirements, describe fabrication processes, and report on the limitations of such on-lens widgets. To demonstrate the versatility of LensLeeches, we built prototypes to show application examples for wearable cameras, smartphones, and interchangeable-lens cameras, extending existing devices by providing both optical input and output for new functionality.
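
As a rough illustration of the kind of computation involved (not the authors' actual detection pipeline), the sketch below estimates a 2D rotation and translation between two sets of matched marker points using the standard SVD-based (Kabsch) method; the toy data and all names are assumptions made for the example.

```python
# Minimal sketch: estimate a 2D rigid transform (rotation + translation)
# between two sets of matched marker points, e.g. dots tracked on the
# LensLeech surface across two frames. Illustrative only -- not the
# authors' actual pipeline.
import numpy as np

def rigid_transform_2d(src, dst):
    """Return (R, t) such that dst ~= src @ R.T + t (Kabsch method)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: points rotated by 10 degrees and shifted
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.random.rand(20, 2)
moved = pts @ R_true.T + np.array([0.03, -0.01])
R_est, t_est = rigid_transform_2d(pts, moved)
print(np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))  # ~10.0
```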

2023
PDF Bib
F. Echtler, V. Maierhöfer, N.B. Hansen, R. Wimmer
SurfaceCast: Ubiquitous, Cross-Device Surface Sharing
Proceedings of the ACM on Human-Computer Interaction (2023-11-01)

Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration, and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we present various example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction.

PDF Bib
F. Echtler, L. Besançon, J. Vornhagen, C. Wacharamanotham
Toward a consensus on research transparency for HCI
Interactions (2023-08-25)

During the past few years, the COVID-19 pandemic has resulted in an unprecedented amount of research being conducted and published in a very short timeframe to successfully analyze SARS-CoV-2, its vaccines, and its treatments. Concurrently, the pandemic also highlighted the limitations of our publication system, which enables and incentivizes rapid dissemination and questionable research practices.

While HCI research is usually not a foundation for life-and-death decisions, we face similar problems. HCI researchers and CHI community members have long criticized a lack of methodological and statistical rigor and a lack of transparent research practices in quantitative and qualitative works. Research transparency can alleviate these issues, as it facilitates the independent verification, reproduction, and—wherever appropriate—replication of claims. Consequently, we argue that the CHI community needs to move toward a consensus on research transparency.

PDF Bib
L. Besançon, F. Echtler, M. Kay, C. Wacharamanotham
The Journal of Visualization and Interaction
The Journal of Visualization and Interaction (2023-04-19)

The Journal of Visualization and Interaction (JoVI) is a venue for publishing scholarly work related to the fields of visualization and human-computer interaction. Contributions to the journal include research in:
• how people understand and interact with information and technology,
• innovations in interaction techniques, interactive systems, or tools,
• systematic literature reviews,
• replication studies or reinterpretations of existing work,
• and commentary on existing publications.
Cross-disciplinary work from other fields such as statistics or psychology, which is relevant to the fields of visualization or human-computer interaction, is also welcome.

2022
PDF Bib
R. Van Koningsbruggen, S. Shalawadi, E. Hornecker, F. Echtler
Frankie: Exploring How Self-Tracking Technologies Can Go from Data-Centred to Human-Centred
MUM '22: Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia (2022-11-27)

Self-tracking technologies have long promised to enhance our well-being. However, our initial work and that of others show that most of these technologies focus on data, not the user. Based on interviews, development of mood boards, and the creation of a research product, we propose an alternative approach to self-tracking: re-humanising self-tracking technologies. Our work shows that feelings play an important role in engaging with data, and that data are temporal and associated with work and utility. We interpret four design criteria, which are applied in the creation of Frankie: a human-centred tracking device which records both quantitative (number of activities) and qualitative (perceived weight of the activity and spoken reflections) data to foster self-reflection. Through this design case we add to the discussion on re-imagining self-tracking technologies to go beyond data-centric artefacts.

PDF Bib
N. Yaghoubisharif, C. Getschmann, F. Echtler
HeadsUp: Mobile Collision Warnings through Ultrasound Doppler Sensing
MUM '22: Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia (2022-11-27)

Smartphone-using pedestrians are often distracted, leading to frequent accidents of varying severity with a risk of both awkwardness and injury. We introduce HeadsUp, a mobile app designed to warn the user of imminent collisions with solid obstacles. HeadsUp runs on unmodified commodity smartphones without additional hardware and uses active ultrasound sensing based on the Doppler effect. We contribute an analysis of the ultrasound audio characteristics of six different smartphone models to verify the feasibility of our approach across vendors and device classes, and a description of two implementation variants of our signal processing pipeline. We evaluate our system both in a lab environment and under real-world conditions, and we conclude that HeadsUp can effectively work at a range of up to 3 meters, even though overall performance is heavily dependent on both the individual user and the environment characteristics.
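
To illustrate the general principle of active Doppler sensing described above, here is a toy sketch (not the HeadsUp pipeline; the 20 kHz carrier, window length, and thresholds are assumptions) that looks for spectral energy shifted away from the emitted tone, which indicates relative motion.

```python
# Illustrative sketch of Doppler-based motion detection around an
# (assumed) 20 kHz carrier; not the HeadsUp implementation itself.
import numpy as np

FS = 48000          # sample rate (Hz)
CARRIER = 20000     # emitted tone (Hz)
N = 4096            # analysis window length

def doppler_energy(window, guard_hz=50, band_hz=400):
    """Ratio of spectral energy near (but not at) the carrier."""
    spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), 1.0 / FS)
    off = np.abs(freqs - CARRIER)
    shifted = spec[(off > guard_hz) & (off < band_hz)].sum()
    carrier = spec[off <= guard_hz].sum() + 1e-9
    return shifted / carrier

# Synthetic test: pure tone vs. tone plus a Doppler-shifted reflection
t = np.arange(N) / FS
still = np.sin(2 * np.pi * CARRIER * t)
approaching = still + 0.2 * np.sin(2 * np.pi * (CARRIER + 150) * t)
print(doppler_energy(still))        # low ratio
print(doppler_energy(approaching))  # noticeably higher ratio
```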

PDF Bib
C. Getschmann, E.M. Mthunzi, F. Echtler
MirrorForge: Rapid Prototyping of Complex Mirrors for Camera and Projector Systems
TEI '22: Proceedings of the Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction. Best Paper Award (2022-02-14)

From small tangibles to large tabletops, mirrors with complex geometries can be invaluable tools to reflect or form light for cameras and projectors. While many fabrication techniques are available for prototyping physical or electronic components, creating mirrors requires manual computation and industrial manufacturing equipment and is therefore considerably slower and more expensive. We propose a technique for fabricating mirror surfaces based on thermoplastic sheets with a laminated metallization layer and 3D-printed fixtures for bending or vacuum forming. With our toolchain, mirrors for cameras and projectors can be simulated by rendering the CAD model, which allows design iterations to be fabricated and evaluated quickly, making the process reasonably accessible for research. Finally, we show two prototypes for tangible interfaces based on our mirrors for projection and camera-based interaction, discussing advantages and limitations.

2021
PDF Bib
S. Shalawadi, A. Alnayef, N. van Berkel, J. Kjeldskov, F. Echtler
Rainmaker: A Tangible Work-Companion for the Personal Office Space
MobileHCI '21: Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (2021-09-27)

Routines are an important element of day-to-day work life, supporting people in structuring their day around required tasks. Effectively managing these routines is, however, experienced as challenging by many – an issue further amplified by the current work from home lockdown measures. In this paper we present Rainmaker, a tangible device to support people in their working life in the context of their own homes. We evaluate and iterate on our prototype through two qualitative studies, spanning respectively three days (N = 11) and 15 days (N = 2). Our results highlight the perceived advantages of the use of a primarily physical rather than digital tool for work support, allowing users to stay focused on their tasks and reflect on their work achievements. We present lessons for future work in this area and publicly release the software and hardware used in the construction of Rainmaker.

PDF Bib
C. Getschmann, F. Echtler
DesPat: Smartphone-Based Object Detection for Citizen Science and Urban Surveys
i-com - Journal of Interactive Media - Special Issue on E-Government and Smart Cities (2021-08-26)

Data acquisition is a central task in research and one of the largest opportunities for citizen science. Especially in urban surveys investigating traffic and people flows, extensive manual labor is required, occasionally augmented by smartphones. We present DesPat, an app designed to turn a wide range of low-cost Android phones into a privacy-respecting camera-based pedestrian tracking tool to automatize data collection. This data can then be used to analyze pedestrian traffic patterns in general, and identify crowd hotspots and bottlenecks, which are particularly relevant in light of the recent COVID-19 pandemic. All image analysis is done locally on the device through a convolutional neural network, thereby avoiding any privacy concerns or legal issues regarding video surveillance. We show example heatmap visualizations from deployments of our prototype in urban areas and compare performance data for a variety of phones to discuss the suitability of on-device object detection for our use case of pedestrian data collection.
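
As a minimal sketch of how detections could be aggregated into heatmaps of the kind mentioned above (illustrative only; the grid size, frame size, and foot-point heuristic are assumptions, not DesPat internals):

```python
# Minimal sketch: accumulate pedestrian detections into a heatmap grid.
# Detector output is assumed to be bounding boxes in image coordinates;
# this is illustrative and not the DesPat code itself.
import numpy as np

GRID_W, GRID_H = 64, 48          # heatmap resolution
IMG_W, IMG_H = 640, 480          # camera frame size

heatmap = np.zeros((GRID_H, GRID_W), dtype=np.int32)

def add_detection(box):
    """box = (x_min, y_min, x_max, y_max); count the foot point."""
    x_min, y_min, x_max, y_max = box
    foot_x = (x_min + x_max) / 2.0        # horizontal center
    foot_y = y_max                        # bottom edge ~ ground contact
    gx = min(int(foot_x / IMG_W * GRID_W), GRID_W - 1)
    gy = min(int(foot_y / IMG_H * GRID_H), GRID_H - 1)
    heatmap[gy, gx] += 1

for box in [(100, 50, 160, 230), (300, 80, 350, 260), (310, 90, 360, 270)]:
    add_detection(box)
print(heatmap.sum(), heatmap.max())   # 3 detections, hotspot count
```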

PDF Bib
T. Tobollik, S. Shalawadi, C. Getschmann, F. Echtler
Exploring Epileptic Seizure Detection with Commercial Smartwatches
Proceedings of the 2021 IEEE International Conference on Pervasive Computing and Communications - Workshops and other Affiliated Events (2021-03-26)

Some forms of epilepsy can randomly trigger severe seizures that degrade patients’ quality of life and may even lead to death. Specially trained dogs that go through a long learning process can sometimes help to warn patients of an imminent seizure, possibly due to an ability to sense subtle changes in the subject’s blood oxygen level. In this paper, we present our exploratory study of using a commercial smartwatch with an oxygen sensor to continuously capture data and detect changes in blood oxygen saturation during or ahead of a seizure to warn the patient or emergency contacts. Our data shows a possible correlation between reported oxygen level and a seizure incident, but higher-frequency readings will be required in the future to determine whether accurate prediction with smartwatches is indeed possible.
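
A toy sketch of the general idea of flagging drops in blood-oxygen readings is shown below; the rolling-baseline approach and all thresholds are made-up assumptions for illustration and are neither the paper's method nor clinically meaningful.

```python
# Illustrative sketch only: flag sustained drops in blood-oxygen
# readings relative to a rolling baseline. Thresholds are assumptions,
# not taken from the paper and not clinically validated.
import numpy as np

def flag_drops(spo2, baseline_window=30, drop_threshold=4.0):
    """Return indices where SpO2 falls `drop_threshold` points below
    the median of the preceding `baseline_window` samples."""
    spo2 = np.asarray(spo2, dtype=float)
    flagged = []
    for i in range(baseline_window, len(spo2)):
        baseline = np.median(spo2[i - baseline_window:i])
        if baseline - spo2[i] >= drop_threshold:
            flagged.append(i)
    return flagged

readings = [97] * 40 + [96, 93, 91, 90, 92, 95, 97]
print(flag_drops(readings))   # indices of the suspicious dip
```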

PDF Bib
J. Roth, J. Ehlers, C. Getschmann, F. Echtler
TempoWatch: a Wearable Music Control Interface for Dance Instructors
TEI '21: Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (2021-02-14)

We present TempoWatch, a wearable smartwatch-based interface designed to allow dance instructors to control music playback and tempo directly on their wrist via touch gestures using a circular watch display. Dance instructors have unique requirements with respect to music playback in their classes, in particular the ability to stay in position while controlling the playback, and to change speed via time-stretching. However, common stereo decks and mobile music player apps do not support these requirements well. We present the design and architecture of our system, and a qualitative evaluation performed with 9 semi-professional instructors in their own dance classes. Dance instructors were involved in this project from the very beginning to match the system and interface design to its prospective use cases. Results show that instructors are able to use TempoWatch productively after only a short learning phase.

PDF Bib
C. Getschmann, F. Echtler
Seedmarkers: Embeddable Markers for Physical Objects
TEI '21: Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (2021-02-14)

We present Seedmarkers, shape-independent topological markers that can be embedded in physical objects manufactured with common rapid-prototyping techniques. Many markers are optimized for technical performance while visual appearance or the feasibility of permanently merging marker and physical object is not considered. We give an overview of the aesthetic properties of a wide range of existing markers and conducted a short online survey to assess the perception of popular marker designs. Based on our findings, we introduce our generation algorithm, which makes use of weighted Voronoi diagrams for topological optimization. With our generator, Seedmarkers can be created from technical drawings during the design process to fill arbitrary shapes on any surface. Given dimensions and manufacturing constraints, different configurations for 3- or 6-degrees-of-freedom tracking are possible. We propose a set of application examples for shape-independent markers, including 3D-printed tangibles, laser-cut plates and functional markers on printed circuit boards.
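
To give a flavour of the weighted-Voronoi idea mentioned above, here is a toy raster-based sketch (not the actual Seedmarkers generator; the shape, seeds, and weights are assumptions): seeds with larger weights claim larger cells, and one Lloyd-style relaxation step moves each seed to its cell centroid.

```python
# Toy sketch of a weighted Voronoi partition on a pixel raster, in the
# spirit of (but not identical to) the topological optimization the
# paper describes.
import numpy as np

H, W = 200, 200
ys, xs = np.mgrid[0:H, 0:W]
inside = (xs - W / 2) ** 2 + (ys - H / 2) ** 2 < (W / 2 - 5) ** 2  # circular shape

seeds = np.array([[60.0, 60.0], [140.0, 70.0], [100.0, 150.0]])    # (y, x)
weights = np.array([10.0, 30.0, 20.0])                             # cell size bias

def assign(seeds, weights):
    """Label every inside pixel with its nearest seed under a
    power-diagram-style distance d^2 - w."""
    d2 = ((ys[None] - seeds[:, 0, None, None]) ** 2 +
          (xs[None] - seeds[:, 1, None, None]) ** 2 - weights[:, None, None])
    labels = np.argmin(d2, axis=0)
    labels[~inside] = -1
    return labels

labels = assign(seeds, weights)
print(np.bincount(labels[labels >= 0]))   # pixels per cell

# One Lloyd-style relaxation step: move each seed to its cell centroid.
for k in range(len(seeds)):
    mask = labels == k
    seeds[k] = [ys[mask].mean(), xs[mask].mean()]
labels = assign(seeds, weights)           # re-partition with relaxed seeds
```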

2020
PDF Bib
S. Shalawadi, E. Hornecker, F. Echtler
A dynamic representation of physical exercises on inflatable membranes: Making walking fun again!
ETIS '20: Proceedings of the 4th European Tangible Interaction Studio (2020-11-16)

Humans have been relying on multiple sensory channels to generate enriched meaning to reflect experiences. Imagine a tangible object that can change its texture and size based on the amount of walking done by users and engage their touch and vision sensory channels simultaneously for self-reflection. This idea has been realised with a prototype that uses pneumatics on inflatable membranes to produce haptic and visual effects for users to experience their walking through step counts mapped on inflatables. An evaluation of the prototype is used to derive interpretations that could serve as a conceptual guideline for representing physical exercises using pneumatics.

PDF Bib
B. Schulte, S. Shalawadi, M. van Kleek, F. Echtler
Cloudless Skies? Decentralizing Mobile Interaction
MobileHCI '20: Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (2020-10-05)

Mobile interaction is now almost exclusively dependent on an opaque cloud infrastructure beyond the users’ control, resulting in multiple issues related to security, privacy, and availability. While alternative approaches based on decentralized network architectures exist, these have their own set of issues such as lack of usability or content moderation. In this workshop, we bring together researchers from related, but rarely interconnected fields to discuss issues and opportunities related to decentralization. We will develop and discuss current and future scenarios for decentralized mobile interaction, both utopian and dystopian. We invite researchers interested in implementation and networking aspects of mobile decentralization as well as those focusing on ethical, legal and social implications (ELSI) of these alternative approaches to the mobile ecosystem.

PDF Bib
P. Riehmann, G. Molina Leon, J. Reibert, F. Echtler, B. Fröhlich
Short-Contact Touch-Manipulation of Scatterplot Matrices on Wall Displays
Computer Graphics Forum, Volume 39, 2020 (2020-07-18)

This paper presents a multitouch interaction vocabulary for scatterplot matrices (SPLOMs) on wall-sized displays. The consistent arrangement of plots, efficient specification and two-dimensional propagation of two-tiered focus + context regions, fling-based paternoster navigation, and axis-centered coordinated selection aid in overcoming the interaction challenges of such large displays including long swipes on blunt surfaces, frequent physical navigation by walking for accessing screen areas beyond arm’s reach in horizontal direction and uncomfortable or even impossible access to screen areas in vertical direction. An expert review and a subsequent user study confirmed the potential and general usability of our seamlessly integrated multitouch interaction techniques for SPLOMs on large vertical displays.

PDF Bib
J. Odenwald, S. Bertel, F. Echtler
Tabletop teleporter: evaluating the immersiveness of remote board gaming
Proceedings of the 9th ACM International Symposium on Pervasive Displays (2020-06-04)

Communication with remote persons over a video link is common today, e.g. to connect with family members abroad, particularly during the COVID-19 pandemic. However, social activities such as board games are rarely shared in this way, as common video chat software does not support this scenario well. In contrast, interactive tabletops provide inherent support for natural tangible interaction with items on the tabletop surface.

We present the Tabletop Teleporter, a setup designed to merge two remote locations into a single shared interaction space. We evaluate the system using a board game, focusing on the perceived immersion and connectedness of participants. Our evaluation shows that most measures for the social quality of a remotely shared game are not significantly different from one played with co-located participants, and that players prefer our setup over a pure videochat scenario.

PDF Bib
C. Wacharamanotham, L. Eisenring, S. Haroz, F. Echtler
Transparency of CHI Research Artifacts: Results of a Self-Reported Survey
Proceedings of the ACM CHI 2020 Conference on Human Factors in Computing Systems. Best Paper Award (top 1%) (2020-04-25)

Several fields of science are experiencing a “replication crisis” that has negatively impacted their credibility. Assessing the validity of a contribution via replicability of its experimental evidence and reproducibility of its analyses requires access to relevant study materials, data, and code. Failing to share them limits the ability to scrutinize or build upon the research, ultimately hindering scientific progress.

Understanding how the diverse research artifacts in HCI impact sharing can help produce informed recommendations for individual researchers and policy-makers in HCI. Therefore, we surveyed authors of CHI 2018-2019 papers, asking if they share their papers’ research materials and data, how they share them, and why they do not. The results (N = 460/1356, 34% response rate) show that sharing is uncommon, partly due to misunderstandings about the purpose of sharing and reliable hosting. We conclude with recommendations for fostering open research practices.

2019
PDF Bib
J. Hartmann, M. Schirmer, F. Echtler
BinarySwipes: Fast List Search on Small Touchscreens
MuC '19 - Proceedings of Mensch & Computer (2019-09-08)

Smartwatches and other wearables generally have small screens, thereby complicating touch-based interaction. Selection from a long list, e.g. to locate a contact or a music track, is particularly cumbersome due to the limited interaction space. We present BinarySwipes, an interaction technique based on binary search which is designed to speed up list search tasks on space-constrained screens. We evaluate a prototypical implementation of BinarySwipes on a smartwatch. Results from our evaluation with 21 participants show improved performance over a plain linear search on lists with 100, 200 and 500 entries, but also increased mental load on the users.
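
The core binary-search idea can be sketched in a few lines; in the sketch below the user's swipe is stood in for by a callback that says whether the target comes after the currently shown pivot entry (an assumption made purely for illustration, not the BinarySwipes implementation).

```python
# Minimal sketch of the binary-search idea behind BinarySwipes:
# each swipe tells the system whether the target lies before or
# after the currently shown pivot entry. Illustrative only.
def binary_swipe_search(entries, is_after):
    """entries: sorted list; is_after(pivot) -> True if the target
    comes after `pivot` (this stands in for the user's swipe)."""
    lo, hi = 0, len(entries) - 1
    steps = 0
    while lo < hi:
        mid = (lo + hi) // 2
        steps += 1
        if is_after(entries[mid]):
            lo = mid + 1
        else:
            hi = mid
    return entries[lo], steps

contacts = sorted(["Ada", "Bela", "Chris", "Dana", "Emil", "Fay", "Gus", "Hana"])
target = "Fay"
found, steps = binary_swipe_search(contacts, lambda pivot: target > pivot)
print(found, steps)   # 'Fay' after log2(8) = 3 swipes
```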

PDF Bib
T. Dressel, E. List, F. Echtler
SecuriCast: Zero-Touch Two-Factor Authentication using WebBluetooth
EICS '19 - Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (2019-06-18)

Simple username/password logins are widely used on the web, but are susceptible to multiple security issues, such as database leaks, phishing, and password re-use. Two-factor authentication is one way to mitigate these issues, but suffers from low user acceptance due to (perceived) additional effort. We introduce SecuriCast, a method to provide two-factor authentication using WebBluetooth as a secondary channel between an unmodified web browser and the user’s smartphone. Depending on the usage scenario and the desired level of security, no device switch and only minimal additional interaction is required from the user. We analyse SecuriCast based on the framework by Bonneau et al., briefly report on results from a user study with 30 participants demonstrating performance and perceived usability of SecuriCast, and discuss possible attack scenarios and extensions.

2018
PDF Bib
F. Echtler
SurfaceStreams: A Content-Agnostic Streaming Toolkit for Interactive Surfaces
ACM UIST 2018 Adjunct Proceedings (2018-10-04)

We present SurfaceStreams, an open-source toolkit for recording and sharing visual content among multiple heterogeneous display-camera systems. SurfaceStreams clients support on-the-fly background removal and rectification on a range of different capture devices (Kinect & RealSense depth cameras, SUR40 sensor, plain webcam). After preprocessing, the raw data is compressed and sent to the SurfaceStreams server, which can dynamically receive streams from multiple clients, overlay them using the removed background as mask, and deliver the merged result back to the clients for display. We discuss an exemplary usage scenario (3-way shared interactive tabletop surface) and present results from a preliminary performance evaluation.
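
Two of the steps mentioned above, depth-based background removal and mask-based overlaying of streams, can be sketched roughly as follows (illustrative only; array shapes, the surface distance, and the tolerance are assumptions, not SurfaceStreams code):

```python
# Rough illustration of background removal on a depth image and
# mask-based overlay of two streams. Not the SurfaceStreams code.
import numpy as np

def foreground_mask(depth_mm, surface_mm=1000, tolerance_mm=15):
    """Pixels noticeably closer than the empty surface are foreground."""
    return depth_mm < (surface_mm - tolerance_mm)

def composite(base_rgb, overlay_rgb, overlay_mask):
    """Paste overlay pixels onto the base frame where the mask is set."""
    out = base_rgb.copy()
    out[overlay_mask] = overlay_rgb[overlay_mask]
    return out

depth = np.full((480, 640), 1000, dtype=np.uint16)   # empty surface
depth[200:280, 300:400] = 900                        # a hand above it
mask = foreground_mask(depth)

local = np.zeros((480, 640, 3), dtype=np.uint8)      # local client frame
remote = np.full((480, 640, 3), 255, dtype=np.uint8) # remote client frame
merged = composite(local, remote, mask)
print(mask.sum(), merged[240, 350])                  # masked pixels, white patch
```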

PDF Bib
S. Stickert, H. Hiller, F. Echtler
Companion - A Software Toolkit for Digitally Aided Pen-and-Paper Tabletop Roleplaying
ACM UIST 2018 Adjunct Proceedings (2018-10-04)

We present Companion, a software tool tailored towards improving and digitally supporting the pen-and-paper tabletop role-playing experience. Pen-and-paper role-playing games (P&P RPG) are a concept known since the early 1970s. Since then, the genre has attracted a massive community of players while branching out into several genres and P&P RPG systems to choose from. Due to the highly interactive and dynamic nature of the game, a participant's individual impact on narrative and interactive aspects of the game is extremely high. The diversity of scenarios within this context unfolds a variety of players' needs, as well as factors limiting and enhancing game-play. Companion offers an audio management workspace for creation and playback of soundscapes based on visual layouting. It supports interactive image presentation and map exploration which can incorporate input from any device providing TUIO tracking data. Additionally, a mobile app was developed to be used as a remote control for media activation on the desktop host.

PDF Bib
M. Kaltenbrunner, F. Echtler
The TUIO 2.0 Protocol: An Abstraction Framework for Tangible Interactive Surfaces
Proceedings of the ACM on Human-Computer Interaction - EICS (2018-06-01)

Since its introduction in 2005, the TUIO protocol has been widely employed within a multitude of usage contexts in tangible and multi-touch interaction. While its simple and versatile design still covers the core functionality of interactive tabletop systems, the conceptual and technical developments of the past decade also led to a variety of ad-hoc extensions and modifications for specific scenarios. In this paper, we present an analysis of the strengths and shortcomings of TUIO 1.1, leading to the constitution of an extended abstraction model for tangible interactive surfaces and the specification of the second-generation TUIO 2.0 protocol, along with several example encodings of existing tangible interaction concepts.
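
Since TUIO is layered on top of Open Sound Control, a client can emit TUIO-2.0-style messages with any OSC library. The sketch below uses the third-party python-osc package; the /tuio2/frm, /tuio2/ptr and /tuio2/alv addresses follow the TUIO 2.0 naming, but the argument lists are deliberately simplified, so consult the specification for the normative message formats.

```python
# Rough illustration of sending TUIO-2.0-style OSC messages with the
# third-party python-osc package. Argument layouts are simplified and
# not normative -- see the TUIO 2.0 specification for the exact format.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 3333)   # default TUIO port

frame_id = 1
session_id = 42
x, y = 0.25, 0.75          # normalized surface coordinates

client.send_message("/tuio2/frm", [frame_id])            # frame header
client.send_message("/tuio2/ptr", [session_id, x, y])    # a pointer/touch
client.send_message("/tuio2/alv", [session_id])          # alive list
```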

PDF Bib
F. Echtler, M. Häussler
Open Source, Open Science, and the Replication Crisis in HCI
CHI '18 Extended Abstracts on Human Factors in Computing Systems (2018-04-21)

The open-source model of software development is an established and widely used method that has been making inroads into several scientific disciplines which use software, thereby also helping much-needed efforts at replication of scientific results. However, our own discipline of HCI does not seem to follow this trend so far. We analyze the entire body of papers from CHI 2016 and CHI 2017 regarding open-source releases, and compare our results with the discipline of bioinformatics. Based on our comparison, we suggest future directions for publication practices in HCI in order to improve scientific rigor and replicability.

2017
PDF Bib
M. Heinz, S. Bertel, F. Echtler
TouchScope: A Hybrid Multitouch Oscilloscope Interface
Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI '17) (2017-11-13)

We present TouchScope, a hybrid multitouch interface for common off-the-shelf oscilloscopes. Oscilloscopes are a valuable tool for analyzing and debugging electronic circuits, but are also complex scientific instruments. Novices are faced with a seemingly overwhelming array of knobs and buttons, and usually require lengthy training before being able to use these devices productively.

In this paper, we present our implementation of TouchScope which uses a multitouch tablet in combination with an unmodified off-the-shelf oscilloscope to provide a novice-friendly hybrid interface, combining both the low entry barrier of a touch-based interface and the high degrees of freedom of a conventional button-based interface. Our evaluation with 29 inexperienced participants shows a comparable performance to traditional learning materials as well as a significantly higher level of perceived usability.

PDF Bib
T. Weißker, E. Genc, A. Berst, F. D. Schreiber, F. Echtler
ShakeCast: using handshake detection for automated, setup-free exchange of contact data
Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '17) (2017-09-04)

We present ShakeCast, a system for automatic peer-to-peer exchange of contact information between two persons who just shook hands. The accelerometer in a smartwatch is used to detect the physical handshake and implicitly triggers a setup-free information transfer between the users’ personal smartphones using Bluetooth LE broadcasts. An abstract representation of the handshake motion data is used to disambiguate between multiple simultaneous transmissions and to prevent accidental data leakage.

To evaluate our system, we collected individual wrist acceleration data from 130 handshakes, performed by varying combinations of 20 volunteers. We present a systematic analysis of possible data features which can be used for disambiguation, and we validate our approach using the most salient features. Our analysis shows an expected match rate between corresponding handshakes of 92.3%.
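
One plausible way to compare two candidate handshake recordings, sketched below for illustration, is a normalized cross-correlation of the acceleration magnitudes; this is not the feature set analyzed in the paper, and the synthetic data are assumptions.

```python
# Sketch of one plausible matching step (not the paper's feature set):
# compare the acceleration-magnitude profiles of two candidate handshake
# recordings via normalized cross-correlation.
import numpy as np

def handshake_similarity(acc_a, acc_b):
    """acc_*: arrays of shape (n, 3) with wrist accelerometer samples."""
    mag_a = np.linalg.norm(acc_a, axis=1)
    mag_b = np.linalg.norm(acc_b, axis=1)
    mag_a = (mag_a - mag_a.mean()) / (mag_a.std() + 1e-9)
    mag_b = (mag_b - mag_b.mean()) / (mag_b.std() + 1e-9)
    corr = np.correlate(mag_a, mag_b, mode="full") / len(mag_a)
    return corr.max()    # peak of the normalized cross-correlation

rng = np.random.default_rng(0)
shake = rng.normal(size=(100, 3))
same_pair = shake + 0.1 * rng.normal(size=(100, 3))    # the matching partner
other = rng.normal(size=(100, 3))                      # an unrelated handshake
print(handshake_similarity(shake, same_pair))   # close to 1
print(handshake_similarity(shake, other))       # clearly lower
```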

PDF Bib
H. Sahibzada, E. Hornecker, F. Echtler, P. T. Fischer
Designing Interactive Advertisements for Public Displays
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (2017-05-06)

Although public displays are increasingly being deployed in everyday situations, they are still mostly used as auto-active information sources. Adding interactivity can help to attract and engage users. We report on the design and in-the-wild evaluation of an interactive advert for a public display in a tourist information center. We evaluate and compare 3 different variants - non-interactive, interaction using body tracking, and interaction using personal mobile devices - with respect to attracting attention and interaction from passersby. We further compare these variants with an iterated version of the body tracking system with an extended tracking area. Our findings include an unexpected reluctance of passersby to use their mobile devices in public, and that the increased interactive area for body interaction resulted in increased engagement and spontaneous multi-user interaction while removing the so-called ‘landing effect’. Based on our findings, we suggest guidelines for interactive adverts on public displays.

2016
PDF Bib
F. Echtler, M. Kaltenbrunner
SUR40 Linux: Reanimating an Obsolete Tangible Interaction Platform
Proceedings of the 2016 ACM on Interactive Surfaces and Spaces (ACM ISS '16) (2016-11-06)

Optical sensing technologies are among the most versatile hardware solutions for interactive surfaces, as they are capable of recognizing touch as well as (limited) hover state in addition to printed tokens. One widely used system is the Pixelsense/SUR40, currently one of very few devices which provides these capabilities in the form factor of a regular table, thereby allowing working at the device in a sitting position. Unfortunately, the device has been discontinued by the manufacturer, provides only an unsupported SDK on an outdated operating system, and has gathered a reputation for high latency as well as sensitivity to environment light. In this paper, we present our research into modernizing and extending the SUR40 system. By switching to a Linux operating system running a custom video driver, we are able to provide lower latency, support other types of optical tags and improve the system’s robustness, particularly regarding external lighting conditions. We present an analysis of the device’s internals, a comparison of quantitative performance measurements, and an outlook into extending the tangible interaction capabilities with an improved cross-platform development framework.

PDF Bib
T. Weißker, A. Berst, J. Hartmann, F. Echtler
The Massive Mobile Multiuser Framework: Enabling Ad-hoc Realtime Interaction on Public Displays with Mobile Devices
Proceedings of the 5th International Symposium on Pervasive Displays (PerDis'16) (2016-06-20)

In this paper, we present the Massive Mobile Multiuser (M³) framework, a software platform designed to enable setup-free, real-time, concurrent interaction with shared public displays through large numbers of personal mobile devices. This work is motivated by the fact that simultaneous interaction of multiple persons with public displays requires either dedicated tracking hardware to detect gestures or touch, or a way for users to interact through their personal mobile devices. The latter option provides more flexibility but also presents a heightened entry barrier as it often requires installation of custom software.

To address these issues, M³ enables immediate interaction through the mobile browser without requiring prior setup on the user side, and real-time interaction suitable for fast multiplayer games. We present a detailed analysis of latency sources and findings from two real-world deployments of our framework in public settings with up to 17 concurrent users. Despite a resource-constrained environment and an unpredictable selection of client devices, M³ consistently delivers performance suitable for real-time interaction.

PDF Bib
T. Weißker, A. Berst, J. Hartmann, F. Echtler
MMM Ball: Showcasing the Massive Mobile Multiuser Framework
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (2016-05-07)

We present the Massive Mobile Multiuser (M³) framework, a platform designed to enable setup-free, real-time, concurrent interaction with shared public displays through large numbers of mobile devices. Simultaneous interaction of multiple persons with public displays requires either dedicated tracking hardware to detect gestures, or a way for users to interact through their personal mobile devices. The latter option provides more flexibility, but also presents a heightened entry barrier as it often requires installation of custom software. To address these issues, M³ enables immediate interaction through the mobile browser without requiring prior setup on the user side, and real-time interaction suitable for fast multiplayer games. We present a live demonstration of our framework at the example of a public game which has already been shown to support up to 17 concurrent users. Despite a resource-constrained environment and an unpredictable selection of client devices, M³ consistently delivers performance suitable for real-time interaction.

PDF Bib
F. Echtler
CalendarCast: Setup-Free, Privacy-Preserving, Localized Sharing of Appointment Data
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (2016-05-07)

We introduce CalendarCast, a novel method to support the common task of finding a suitable time and date for a shared meeting among co-located participants using their personal mobile devices. In this paper, we describe the Bluetooth-based wireless protocol and interaction concept on which CalendarCast is based, present a prototypical implementation with Android smartphones and dedicated beacons, and report on results of a user study demonstrating improved task performance compared to unaugmented calendars.

The motivating scenario for CalendarCast occurs quite often in a variety of contexts, for example at the end of a prior meeting or during ad-hoc conversations in the hallway. Despite a large variety of digital calendar tools, this situation still usually involves a lengthy manual comparison of free and busy time slots. CalendarCast utilizes Bluetooth Low Energy (BTLE) advertisement broadcasts to share the required free/busy information with a limited, localized audience, on demand only, and without revealing detailed personal information. No prior knowledge about the other participants, such as email addresses or account names, is required.
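
As a rough sketch of the idea of squeezing coarse free/busy information into a very small broadcast payload (BLE advertisements only carry on the order of a few dozen bytes), the example below packs a day of half-hour slots into a bitmap; the slot granularity and packing scheme are assumptions, not the actual CalendarCast protocol.

```python
# Sketch of broadcasting coarse free/busy data in a tiny payload.
# The half-hour granularity and bit layout are assumptions made for
# illustration, not the CalendarCast wire format.
def pack_free_busy(busy_slots, n_slots=48):
    """Encode a day as `n_slots` half-hour slots, 1 bit per slot."""
    bits = 0
    for slot in busy_slots:
        bits |= 1 << slot
    return bits.to_bytes((n_slots + 7) // 8, "little")

def unpack_free_busy(payload, n_slots=48):
    bits = int.from_bytes(payload, "little")
    return [slot for slot in range(n_slots) if bits & (1 << slot)]

# Busy 09:00-10:00 and 14:30-15:00 (half-hour slots from midnight)
payload = pack_free_busy({18, 19, 29})
print(len(payload), payload.hex())        # 6 bytes for a whole day
print(unpack_free_busy(payload))          # [18, 19, 29]
```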

2015
PDF Bib
A. Kreskowski, J. Wagner, J. Bossert, F. Echtler
MoBat: Sound-Based Localization of Multiple Mobile Devices on Everyday Surfaces
Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces (ITS '15), Funchal, Madeira (2015-11-15)

We present MoBat, a combined hard- and software system designed to locate and track multiple unmodified mobile devices on any regular table using passive acoustic sensing. Barely audible sound pulses are emitted from mobile devices, picked up by four microphones located in the corners of the surface and processed in a low-latency pipeline to extract position data. We demonstrate an average positional accuracy and precision of about 3 cm on a table of 1 m x 2 m size, and discuss possible usage scenarios regarding proxemics and tangible interaction.
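
The localization principle can be illustrated with a small time-difference-of-arrival (TDOA) sketch: given the arrival-time differences of a pulse at four corner microphones, search the table surface for the position that best explains them. The geometry below matches the 1 m x 2 m table from the abstract, but everything else (the grid search and its resolution) is an assumption rather than the MoBat pipeline.

```python
# Toy sketch of the localization principle: grid-search the table
# surface for the position best explaining the arrival-time differences.
# Signal processing details of MoBat itself are omitted.
import numpy as np

SPEED_OF_SOUND = 343.0                       # m/s
MICS = np.array([[0.0, 0.0], [2.0, 0.0],     # corners of a 1 m x 2 m table
                 [0.0, 1.0], [2.0, 1.0]])

def locate(tdoas, resolution=0.01):
    """tdoas[i] = arrival time at mic i minus arrival time at mic 0."""
    best, best_err = None, np.inf
    for x in np.arange(0, 2.0, resolution):
        for y in np.arange(0, 1.0, resolution):
            d = np.linalg.norm(MICS - [x, y], axis=1)
            predicted = (d - d[0]) / SPEED_OF_SOUND
            err = np.sum((predicted - tdoas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate a device at (1.2, 0.4) and recover its position
true_pos = np.array([1.2, 0.4])
dists = np.linalg.norm(MICS - true_pos, axis=1)
tdoas = (dists - dists[0]) / SPEED_OF_SOUND
print(locate(tdoas))    # approximately (1.2, 0.4)
```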

PDF Bib
V. Krauß, E. Fuchkina, G. Molina Leon, E. Popescu, F. Echtler, S. Bertel
pART bench: a Hybrid Search Tool for Floor Plans in Architecture
Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces (ITS '15), Funchal, Madeira (2015-11-15)

Architectural databases often contain thousands of different floor plans which have either been collected from historical designs or, more recently, auto-generated by suitable algorithms. Searching for a floor plan that fits specific requirements in such a database involves setting a large number of parameters, such as lines of sight, lighting levels, room types and many more.

We present pART bench, a hybrid tabletop/tablet tool which allows the use of intuitive touch commands and tangible objects to quickly adjust search parameters, view resulting floor plans and iteratively refine the search. We report on a comprehensive requirements analysis with practising architects, on the design process, and describe our prototypical implementation of the system, both on a tablet and on a PixelSense tabletop device.

PDF Bib
M. Schirmer, J. Hartmann, S. Bertel, F. Echtler
Shoe me the Way: A Shoe-Based Tactile Interface for Eyes-Free Urban Navigation
Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2015), Copenhagen, Denmark (2015-08-24)

We present Shoe me the Way, a novel tactile interface for eyes-free pedestrian navigation in urban environments. Our prototypical implementation can be fully integrated into users’ own, regular shoes without permanent modifications. Interface use does not distract users from their surroundings. It thereby adds to users’ safety and enables them to explore their environments more freely than is possible with prevailing mobile map-based pedestrian navigation systems. We evaluated our prototype using two different navigation modes in a study with 21 participants and report on significant differences in user performance and preferences between the modes. Study results also show that even our prototypical implementation is already stable, functional and has high usability.

PDF Bib
M. Spreitzenbarth, T. Schreck, F. Echtler, D. Arp, J. Hoffmann
Mobile-Sandbox: combining static and dynamic analysis with machine-learning techniques
International Journal of Information Security, Springer, April 2015 (2015-04-01)

Smartphones in general and Android in particular are increasingly shifting into the focus of cyber criminals. For understanding the threat to security and privacy, it is important for security researchers to analyze malicious software written for these systems. The exploding number of Android malware calls for automation in the analysis. In this paper, we present Mobile-Sandbox, a system designed to automatically analyze Android applications in novel ways: First, it combines static and dynamic analysis, i.e., results of static analysis are used to guide dynamic analysis and extend coverage of executed code. Additionally, it uses specific techniques to log calls to native (i.e., non-Java) APIs, and last but not least it combines these results with machine-learning techniques to cluster the analyzed samples into benign and malicious ones. We evaluated the system on more than 69,000 applications from Asian third-party mobile markets and found that about 21% of them actually use native calls in their code.

2014
PDF Bib
M. Kaltenbrunner, F. Echtler
TUIO Hackathon
Proceedings of the 9th ACM International Conference on Interactive Tabletops and Surfaces (ITS '14) (2014-11-16)

TUIO is an open framework that defines a common protocol and API for tangible and multitouch surfaces. The protocol is based on Open Sound Control and allows the platform-independent encoding and transmission of an abstract description of interactive surfaces, including touch events and tangible object states. While the original TUIO specification has been implemented for various hardware and software environments, there are not as many feature-complete reference implementations of the next TUIO generation, although its specification has been finalized and partially implemented by community members. The TUIO Hackathon at the International Conference for Interactive Tabletops and Surfaces addresses expert users and developers of hardware and software environments for surface-based tangible user interfaces who are interested in experimenting with this new framework, with the goal of initiating the development and integration of new TUIO implementations.

PDF Bib
F. Echtler, R. Wimmer
The Interactive Dining Table, or: Pass the Weather Widget, Please
Proceedings of the 9th ACM International Conference on Interactive Tabletops and Surfaces (ITS '14) (2014-11-16)

Large-scale interactive surfaces are nearly ubiquitous in research labs and showrooms around the world today. However, unlike other recent interactive technologies such as smartphones, they have not yet found their way into people’s everyday lives. Possible reasons include high cost as well as a lack of suitable applications. In this paper, we present our prototypical implementation of a low-cost, unobtrusive interactive surface, integrated with the dining table in a real-world living room. To motivate our approach, we explore three scenarios highlighting potential applications for our system and present their prototypical implementations. In the first scenario, we extend regular board games with interactive components without sacrificing their unique haptic experience. In the second scenario, we investigate ambient notifications which provide a method for delivering information to the users subconsciously. Finally, we explore the concept of augmented dining in which the appearance of food placed on the table is augmented or modified by the system.

PDF Bib
F. Echtler, D. Kammer, D. Vanacken, L. Hoste, B. Signer
Engineering Gestures for Multimodal User Interfaces
Engineering Gestures for Multimodal Interfaces - Workshop at EICS 2014, Rome, Italy (2014-06-17)

Despite increased presence of gestural and multimodal user interfaces in research as well as daily life, development of such systems still mostly relies on programming concepts which have emerged from classic WIMP user interfaces. This workshop proposes to explore the gap between attempts to formalize and structure development for multimodal interfaces in the research community on the one hand and the lack of adoption of these formal languages and frameworks by practitioners and other researchers on the other hand.

PDF Bib
A. Bazo, F. Echtler
Phone proxies: effortless content sharing between smartphones and interactive surfaces
EICS 2014 - ACM Symposium on Engineering Interactive Computing Systems (2014-06-17)

We present Phone Proxies, a technique for effortless content sharing between mobile devices and interactive surfaces. In such a scenario, users often have to perform a lengthy setup process before the actual exchange of content can take place. Phone Proxies uses a combination of custom NFC (near-field communication) tags and optical markers on the interactive surface to reduce the user interaction required for this setup process to an absolute minimum. We discuss two use cases: “pickup”, in which the user wants to transfer content from the surface onto their device, and “share”, in which the user transfers device content to the surface for shared viewing. We introduce three possible implementations of Phone Proxies for each of these use cases and discuss their respective advantages.

PDF Bib
H. Glücker, F. Raab, F. Echtler, C. Wolff
EyeDE: gaze-enhanced software development environments
CHI '14 Extended Abstracts on Human Factors in Computing Systems (2014-04-26)

This paper introduces EyeDE, a prototypical system enabling gaze interaction for assistance in integrated development environments (IDE). By utilizing an eye tracking device, we have enhanced an IDE prototype with gaze-controlled interaction methods for source code navigation. A qualitative evaluation shows that users welcome the ability to quickly look up documentation or to jump to method declarations just by looking at triggers placed in the code. Although inaccuracies inherent in eye tracking technology and discomforting sitting positions for users impede successful implementation of more advanced IDE features, the interaction paradigm appears to be acceptable within the software development context and seems promising as eye tracking technology is being further improved.

PDF Bib
E. Baumer, C. Hordatt, S. Ross, J. Ahn, A. Krüger, K. Rust, M. Bie, S. Maidenbaum, J. Schöning, E. Bonsignore, M. Malu, M. Silberman, A. Börütecene, B. McNally, B. Tomlinson, O. Buruk, M. Muller, J. Yip, T. Clegg, L. Norooz, A. Druin, J. Norton, F. Echtler, O. Özcan, D. Gruen, D. Patterson, M. Guha, A. Riener
CHI 2039: speculative research visions
CHI '14 Extended Abstracts on Human Factors in Computing Systems (2014-04-26)

This paper presents a curated collection of fictional abstracts for papers that could appear in the proceedings of the 2039 CHI Conference. It provides an opportunity to consider the various visions guiding work in HCI, the futures toward which we (believe we) are working, and how research in the field might relate with broader social, political, and cultural changes over the next quarter century.

2013
PDF Bib
F. Raab, C. Wolff, F. Echtler
RefactorPad: Editing Source Code on Touchscreens
Proceedings of the 5th ACM SIGCHI symposium on Engineering interactive computing systems (EICS '13) (2013-06-24)

Despite widespread use of touch-enabled devices, the field of software development has only slowly adopted new interaction methods for available tools. In this paper, we present our research on RefactorPad, a code editor for editing and restructuring source code on touchscreens. Since entering and modifying code with on-screen keyboards is time-consuming, we have developed a set of gestures that take program syntax into account and support common maintenance tasks on devices such as tablets. This work presents three main contributions: 1) a test setup that enables researchers and participants to collaboratively walk through code examples in real-time; 2) the results of a user study on editing source code with both finger and pen gestures; 3) a list of operations and some design guidelines for creators of code editors or software development environments who wish to optimize their tools for touchscreens.

PDF Bib
R. Wimmer, F. Echtler
Exploring the Benefits of Fingernail Displays
Extended Abstracts of CHI 2013, Paris, France. (2013-04-27)

Fingers are an important interface both to the physical and the digital world. We propose research on artificial fingernails which contain tiny displays and sensors. These fingernail displays greatly supplement other input and output channels, offering novel interaction possibilities. We present three contributions: (1) the general concept and use cases for fingernail displays, (2) a technique for capturing touch events at the fingernails and interaction methods supported by this technique, and (3) an overview of relevant research questions.

PDF Bib
F. Echtler, R. Wimmer
The Interactive Dining Table
CHI 2013 Workshop on Blended Interaction (Blend13), Paris, France. (2013-04-27)

Large-scale interactive surfaces are nearly ubiquitous in research labs and showrooms around the world today. However, unlike other recent interaction technologies such as smartphones, they have not yet found their way into people’s everyday lives. Possible reasons include high cost as well as a lack of suitable applications. In this paper, we present our prototypical implementation of a low-cost, unobtrusive interactive surface, integrated with the dining table in a real-world living room. To motivate our approach, we explore three scenarios highlighting potential applications for our system and present their prototypical implementations. In the first scenario, we extend regular board games with interactive components without sacrificing their unique haptic experience. In the second scenario, we investigate ambient notifications which provide a method for delivering information to the users subconsciously. Finally, we introduce and explore the concept of augmented dining, in which the appearance of food placed on the table is augmented or modified through the system.

PDF Bib
M. Spreitzenbarth, F. Freiling, F. Echtler, T. Schreck, J. Hoffmann
Mobile-Sandbox: having a deeper look into Android applications
Proceedings of ACM SAC 2013, Coimbra, Portugal. (2013-03-18)

Smartphones in general and Android in particular are increasingly shifting into the focus of cybercriminals. For understanding the threat to security and privacy it is important for security researchers to analyze malicious software written for these systems. The exploding number of Android malware calls for automation in the analysis. In this paper, we present Mobile-Sandbox, a system designed to automatically analyze Android applications in two novel ways: (1) it combines static and dynamic analysis, i.e., results of static analysis are used to guide dynamic analysis and extend coverage of executed code, and (2) it uses specific techniques to log calls to native (i.e., “non-Java”) APIs. We evaluated the system on more than 36,000 applications from Asian third-party mobile markets and found that 24% of all applications actually use native calls in their code.

2012
PDF Bib
E. Artinger, P. Maier, T. Coskun, S. Nestler, M. Mähler, E. Yildirim-Krannig, F. Wucholt, F. Echtler, G. Klinker
Creating a common operation picture in realtime with user-centered interfaces for mass casualty incidents
4th international workshop for Situation recognition and medical data analysis in Pervasive Health environments (PervaSense), PervasiveHealth 2012 (2012-07-03)

Accurate, accessible, and realtime information on the number, location, and medical condition of patients is critical for the successful management of mass casualty incidents (MCIs), where the number of patients exceeds the capacity of the emergency management service (EMS). We present a concept of a collaborative infrastructure which generates a common operation picture in realtime. A complex, stressful and uncommon situation like an MCI creates strong psychological influences and burdens on the rescue workers. Based on our psychological findings we derived eleven special requirements for efficient and intuitive user interfaces in unstable, time-critical emergency situations. Taking the requirements into consideration we developed a concept to overcome the MCI through the combination of multiple devices. The devices are carefully chosen according to the task of the EMS personnel in the field as well as in the incident command post. Three different interfaces (PDAs for the rescue units in the field, tablet PCs for the incident commanders, and a multitouch table in the incident command post) help the entire rescue team to gain efficient situational awareness.

PDF Bib
J. Kramer, N. Burrus, F. Echtler, D. Herrera, M. Parker
Hacking The Kinect
ISBN 978-1430238676, Apress Media, Mar. 2012 (2012-04-02)

Hacking the Kinect is the technogeek’s guide to developing software and creating projects involving the groundbreaking volumetric sensor known as the Microsoft Kinect. Microsoft’s release of the Kinect in the fall of 2010 startled the technology world by providing a low-cost sensor that can detect and track body movement in three-dimensional space. The Kinect set new records for the fastest-selling gadget of all time. It has been adopted worldwide by hobbyists, robotics enthusiasts, artists, and even some entrepreneurs hoping to build businesses around the technology.

Hacking the Kinect introduces you to programming for the Kinect. You’ll learn to set up a software environment, stream data from the Kinect, and write code to interpret that data. The progression of hands-on projects in the book leads you even deeper into an understanding of how the device functions and how you can apply it to create fun and educational projects. Who knows? You might even come up with a business idea.

PDF Bib
F. Echtler, A. Butz
GISpL: Gestures Made Easy
TEI 2012, Kingston, ON, Canada. (2012-02-19)

We present GISpL, the Gestural Interface Specification Language. GISpL is a formal language which allows both researchers and developers to unambiguously describe the behavior of a wide range of gestural interfaces using a simple JSON-based syntax. GISpL supports a multitude of input modalities, including multi-touch, digital pens, multiple regular mice, tangible interfaces or mid-air gestures.

GISpL introduces a novel view on gestural interfaces from a software-engineering perspective. By using GISpL, developers can avoid tedious tasks such as reimplementing the same gesture recognition algorithms over and over again. Researchers benefit from the ability to quickly reconfigure prototypes of gestural UIs on-the-fly, possibly even in the middle of an expert review.

In this paper, we present a brief overview of GISpL as well as some usage examples of our reference implementation. We demonstrate its capabilities by the example of a multi-channel audio mixer application being used with several different input modalities. Moreover, we present exemplary GISpL descriptions of other gestural interfaces and conclude by discussing its potential applications and future development.

2011
PDF Bib
D. Pustka, M. Huber, C. Wächter, F. Echtler, P. Keitler, J. Newman, D. Schmalstieg, G. Klinker
Automatic Configuration of Pervasive Sensor Networks for Augmented Reality
IEEE Pervasive Computing, Volume 10, Number 3, July-Sept. 2011, pp. 68-79.

The ubiquitous tracking (Ubitrack) approach uses spatial relationship graphs and patterns to support a distributed software architecture for augmented reality (AR) systems in which clients can produce, transform, transmit, and consume tracking data.

PDF Bib
N. Klügel, M. R. Friess, G. Groh, F. Echtler
An Approach to Collaborative Music Composition
International Conference on New Interfaces for Musical Expression (NIME 2011), May 30th, 2011, Oslo, Norway.

This paper provides a discussion of how the electronic, solely IT-based composition and performance of electronic music can be supported in realtime with a collaborative application on a tabletop interface, mediating between single-user style music composition tools and co-located collaborative music improvisation. After having elaborated on the theoretical backgrounds of prerequisites of co-located collaborative tabletop applications as well as the common paradigms in music composition/notation, we will review related work on novel IT approaches to music composition and improvisation. Subsequently, we will present our prototypical implementation and the results.

PDF Bib
A. Dippon, F. Echtler, G. Klinker
Multi-Touch Table as Conventional Input Device
HCI International Extended Abstracts, July 9 - 14, Orlando, Florida, USA, pp. 237-241.

In order to improve the functionality of multi-touch devices, the possibility of using them as input devices for other computers needs to be reviewed. The idea is to get rid of many different peripherals (e.g. keyboard, mouse, multi-touch pad) by using a single multi-touch display. Furthermore, the display can be used as an additional monitor to show, for example, toolbars which can be directly manipulated through multi-touch gestures. We implemented a prototype which provides an adaptive keyboard that shows shortcut keys for different applications, a multi-touch pad, as well as the option to drag&drop standard Windows widgets onto the multi-touch table, where they can be controlled by direct touch input. A user study was conducted to test the current system and to get information about the further approach to this concept.

PDF Bib
E. Artinger, T. Coskun, M. Schanzenbach, F. Echtler, S. Nestler, G. Klinker
Exploring Multi-touch Gestures for Map Interaction in Mass Casualty Incidents
3. Workshop zur IT-Unterstützung von Rettungskräften im Rahmen der GI-Jahrestagung Informatik 2011 ()

In mass casualty incidents, a common operation picture, which gives an overview of the current situation, is critical information for managing the emergency. In order to support the collaboration between different incident commanders, a multi-touch table placed in the incident command post is used to present the current operation picture on a map. To place as little additional mental load as possible on the users, any interaction with this map interface should be natural and intuitive. Therefore, we investigated in a user study several alternative multi-touch gestures, combined into five sets, for the tasks of modifying the map view and selecting map objects in an emergency management scenario. The gesture sets contained widely known as well as new, promising gestures.

2010
PDF Bib
P. Keitler, D. Pustka, M. Huber, F. Echtler, G. Klinker
Management of Tracking for Mixed and Augmented Reality Systems
The Engineering of Mixed Reality Systems, E. Dubois, P. Gray, L. Nigay (eds.), Human-Computer Interaction Series, Springer Verlag, 2010. ()

Position and orientation tracking is a major challenge for Mixed/Augmented Reality applications, especially in heterogeneous and wide-area sensor setups. In this article, we describe trackman, a planning and analysis tool which supports the AR engineer in the setup and maintenance of the tracking infrastructure. A new graphical modeling approach based on spatial relationship graphs (SRGs) eases the specification of known relationships as well as the deduction of new relationships between entities in the scene. Modeling is based on reusable patterns representing the underlying sensor drivers or algorithms. Recurring constellations in the scene can be condensed into reusable meta-patterns. The process is further simplified by semi-automatic modeling techniques which automate trivial steps. Dataflow networks can be generated automatically from the SRG and are guaranteed to be semantically correct. Furthermore, generic tools are described that allow for the calibration/registration of static spatial transformations as well as for the live monitoring of tracking accuracy. In summary, this approach tremendously reduces the amount of expert knowledge needed for the administration of tracking setups.

PDF Bib
M. R. Friess, M. Kleinhans, F. Forster, F. Echtler, G. Groh
A Tabletop Interface for Generic Creativity Techniques
International Conference on Interfaces and Human Computer Interaction (IHCI 2010), July 26, 2010, Freiburg, Germany. ()

In this paper, a multi-touch based tabletop application supporting a generic model for creativity techniques is introduced. To this end, requirements are derived from related work and transferred into a concept and a prototypical implementation. After the application has been described in detail, an evaluation conducted with the system is presented. The results of this evaluation are discussed with reference to a survey and observations. Finally, conclusions and prospects for future research are given.

PDF Bib
F. Echtler, T. Pototschnig, G. Klinker
An LED-based Multitouch Sensor for LCD Screens
TEI 2010, January 25 - 27, Cambridge, MA, USA, pp. 227-230. ()

In recent years, a large number of multitouch sensor concepts have been presented. Optical sensors in particular are highly popular due to their versatility. However, camera-based systems often require a significant amount of space behind the screen and are not well suited to flatscreen-based setups. While integrated sensors for flatscreens have already been presented, they are mostly complex, expensive or both.

To address these problems, a novel type of multitouch sensor is presented which extends a common LCD monitor with multitouch capabilities without significant depth requirements. The sensor consists of a homogeneous matrix of cheap, mass-produced infrared LEDs. The LCD surface remains unmodified, resulting in a pleasant haptic experience for the user.

PDF Bib
F. Echtler, G. Klinker, A. Butz
Features, Regions, Gestures: Components of a Generic Gesture Recognition Engine
Workshop on Engineering Patterns for Multitouch Interfaces, June 20, 2010, Berlin, Germany. ()

In recent years, research in novel types of human-computer interaction, for example multi-touch or tangible interfaces, has increased considerably. Although a large number of innovative applications have already been written based on these new input methods, they often have significant deficiencies from a developer’s point of view. Aspects such as configurability, portability and code reuse have been largely overlooked. A prime example of these problems is the topic of gesture recognition. Existing implementations are mostly tied to a certain hardware platform, tightly integrated into user interface libraries and monolithic, with hard-coded gesture descriptions. Developers are therefore time and again forced to reimplement crucial application components.

To address these drawbacks, we propose a clean separation between user interface and gesture recognition. In this paper, we present a widely applicable, generic specification of gestures which enables the implementation of a hardware-independent standalone gesture recognition engine for multi-touch and tangible interaction. The goal is to allow the developer to focus on the user interface itself instead of on internal components of the application.

PDF Bib
F. Echtler, G. Klinker, A. Butz
Towards a Unified Gesture Description Language
13th International Conference on Humans and Computers (HC 2010), December 8, 2010, Düsseldorf, Germany, pp. 177-182. ()

Proliferation of novel types of gesture-based user interfaces has led to considerable fragmentation, both in terms of program code and in terms of the gestures themselves. Consequently, it is difficult for developers to build on previous work, thereby consuming valuable development time. Moreover, the flexibility of the resulting user interface is limited, particularly with respect to users wishing to customize the interface. To address this problem, we present a generic and extensible formal language to describe gestures. This language is applicable to a wide variety of input devices, such as multi-touch surfaces, pen-based input, tangible objects and even free-hand gestures. It enables the development of a generic gesture recognition engine which can serve as a backend to a wide variety of user interfaces. Moreover, rapid customization of the interface becomes possible by simply swapping gesture definitions - an aspect which has considerable advantages when conducting UI research or porting an existing application to a new type of input device. Developers benefit from the reduced amount of code, while users benefit from the increased flexibility through customization afforded by this approach.

PDF Bib
F. Echtler, M. Häussler, G. Klinker
BioTISCH: the interactive molecular biology lab bench
CHI 2010 Works-in-Progress ()

In a molecular biology lab, scientists often need to execute strictly defined sequences of operations, typically mixing specific amounts of reagents. The exact steps require information from various sources, such as manuals, websites and the scientists’ own notes. Direct access to a computer at the bench would be highly desirable but is rarely implemented, as computers do not fit well into a wet lab environment. In this paper, we present BioTISCH, an interactive workbench for molecular biology laboratories. We show a prototypical setup of an interactive table which provides a sterile user interface for access to existing documentation and for common tasks such as unit conversions. The example illustrates that interactive tables blend very well into a modern biological laboratory and could improve access and exchange of information in this environment.

PDF Bib
E. Artinger, M. Schanzenbach, F. Echtler, S. Nestler, T. Coskun, G. Klinker
Beyond Pinch-to-Zoom: Exploring Alternative Multi-touch Gestures for Map Interaction
Technical Report TUM-I1006 ()

Interaction with virtual maps is a common task on tabletop interfaces, particularly in the context of command-and-control applications. In nearly all cases, widely known gestures such as pinch-to-zoom are employed. To explore alternatives and variations of this mode of interaction, we have defined five alternative gesture sets for the tasks of modifying the map view and selecting map objects in an emergency management scenario. We present the results of an explorative study conducted with user interface experts, domain experts and inexperienced randomly selected users.

2009
PDF Bib
M. Tönnis, F. Echtler, M. Huber, G. Klinker
Low Cost 3D Rotational Input Devices: the stationary Spinball and the mobile Soap3D
The 11th Symposium on Virtual and Augmented Reality (SVR), Porto Alegre, Brazil, May 25 - 28, 2009. Best short paper ()

This paper introduces a new approach enabling intuitive input of rotational data with low-cost optical sensors. Two devices are built: a stationary one for desktop applications and a mobile wireless device for use anywhere. The desktop variant, called Spinball, makes the virtual trackball a real device. The mobile device, called Soap3D, uses a two-layered approach for the device casing, enabling closed-hand control. Benefits and construction are illustrated, and results of expert reviews are discussed.

PDF Bib
J. Schöning, J. Hook, N. Motamedi, P. Olivier, F. Echtler, P. Brandl, M. Muller, F. Daiber, O. Hilliges, M. Löchtefeld, N. Roth, U. von Zadow
Building Interactive Multi-Touch Surfaces
Journal of Graphics, GPU, & Game Tools ()

Multi-touch interaction with computationally enhanced surfaces has received considerable attention in recent years. Hardware implementations of multitouch interaction such as frustrated total internal reflection (FTIR) and diffused illumination (DI) have allowed for the low-cost development of surfaces. Although many of these technologies and associated applications have been presented in academic settings, the practicalities of building a high-quality multi-touch-enabled surface, both in terms of the software and hardware required, are not widely known. We draw upon our extensive experience as developers of multi-touch technology to provide practical advice in relation to building and deploying applications upon multi-touch surfaces. This includes technical details of the construction of optical multi-touch surfaces, including infrared illumination, silicone compliant surfaces, projection screens, cameras, filters, and projectors, and an overview of existing software libraries for tracking.

PDF Bib
S. Nestler, M. Huber, F. Echtler, A. Dollinger, G. Klinker
Development and evaluation of a virtual reality patient simulation (VRPS)
The 17th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG'09) ()

In disasters and mass casualty incidents (MCIs), paramedics initially determine the severity of all patients’ injuries during the so-called triage. In order to enhance disaster preparedness, continuous training of all paramedics is indispensable. Due to the fact that large disaster control exercises are laborious and expensive, additional training on a small scale makes sense. Therefore, we designed and developed a virtual reality patient simulation (VRPS) to train paramedics in this disaster triage. The presented approach includes gesture-based interactions with the virtual patients in order to simulate the triage process as realistically as possible. The evaluated approach focuses on the training of paramedics in disaster triage according to the mSTaRT (modified Simple Triage and Rapid Treatment) triage algorithm on a multi-touch table top device. At the Munich fire department, fully qualified paramedics performed 160 triage processes with the triage simulation. The accuracy of the triage processes was compared to previous disaster control exercises with real mimes. The presented results of this explorative evaluation will be the basis for future, larger evaluations.

PDF Bib
F. Echtler, S. Nestler, A. Dippon, G. Klinker
Supporting Casual Interactions between Board Games on Public Tabletop Displays and Mobile Devices
Personal and Ubiquitous Computing, Special Issue on Public and Private Displays, Springer Verlag, 2009, pp. 609-617. ()

As more interactive surfaces enter public life, casual interactions from passersby are bound to increase. Most of these users can be expected to carry a mobile phone or PDA, which nowadays offers significant computing capabilities of its own. This offers new possibilities for interaction between these users’ private displays and large public ones.

In this paper, we present a system which supports such casual interactions. We first explore a method to track mobile phones that are placed on a horizontal interactive surface by examining the shadows which are cast on the surface. This approach detects the presence of a mobile device, as opposed to any other opaque object, through the signal strength emitted by the built-in Bluetooth transceiver without requiring any modifications to the devices’ software or hardware.

We then go on to investigate interaction between a Sudoku game running in parallel on the public display and on mobile devices carried by passing users. Mobile users can join a running game by placing their devices on a designated area. The only requirement is that the device is in discoverable Bluetooth mode. After a specific device has been recognized, client software is sent to the device which then enables the user to interact with the running game. Finally, we explore the results of a study which we conducted to determine the effectiveness and intrusiveness of interactions between users on the tabletop and users with mobile devices.
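As a rough illustration of the detection idea described above, the sketch below pairs phone-sized shadow blobs with discoverable Bluetooth devices; the helper functions, the RSSI threshold and all data are hypothetical placeholders, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class ShadowBlob:
    x: float
    y: float
    area: float

def detect_shadow_blobs() -> list[ShadowBlob]:
    """Placeholder: phone-sized shadow blobs segmented from the camera image."""
    return [ShadowBlob(x=0.42, y=0.61, area=55.0)]

def discover_bluetooth_devices() -> dict[str, int]:
    """Placeholder: discoverable device addresses mapped to RSSI values (dBm)."""
    return {"AA:BB:CC:DD:EE:FF": -48}

RSSI_NEAR_THRESHOLD = -60  # assumed cutoff for "device is on or near the table"

def match_phones():
    """Treat a shadow as a phone only if a device is discoverable nearby."""
    blobs = detect_shadow_blobs()
    nearby = {addr for addr, rssi in discover_bluetooth_devices().items()
              if rssi >= RSSI_NEAR_THRESHOLD}
    return [(blob, addr) for blob in blobs for addr in nearby]

if __name__ == "__main__":
    print(match_phones())
```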

PDF Bib
F. Echtler
Tangible Information Displays
Dissertation, Fakultät für Informatik, Technische Universität München, Dec. 2009 ()

The goal of this thesis is to provide a generic architecture and software framework for graphical multi-touch and multi-user interfaces.

In recent years, research in novel types of computer-human interaction has increased considerably. Particularly multi-touch and multi-user interfaces have received a lot of interest, partly due to the availability of robust and affordable sensor hardware. This trend has been accelerated by the emergence of commercial products which have made these interaction concepts available to a wide user base in a surprisingly short timeframe.

Although a considerable number of useful applications have already been written based on these new modalities, they share some deficiencies from a developer’s point of view. Even when source code is available, most of these applications are written in a monolithic fashion, making reuse of code difficult. Furthermore, they duplicate large amounts of core functionality such as gesture recognition and are often locked to a single type of input hardware.

To address this lack of reusability and portability, a layered architecture is presented in this thesis to describe an interactive application in a generalised fashion. As part of this architecture, a formal description of gestures will also be specified.

A reference implementation of this architecture, libTISCH, is presented. When using this framework, a developer should not require more time for creating a novel user interface than for a conventional one. The same applies to integration of new types of input hardware - existing software should “just work” after a suitable adapter has been provided. A number of example applications have been created with libTISCH and tested on various input sensors. The results show the suitability of libTISCH for the intended tasks regarding software development and hardware integration.

PDF Bib
F. Echtler, T. Sielhorst, M. Huber, G. Klinker
A Short Guide to Modulated Light
TEI 2009, February 16 - 18, Cambridge, United Kingdom, pp. 393-396. ()

Many types of tangible interaction systems, such as interactive surfaces and gesture-based interfaces, are based on various kinds of optical tracking, using infrared illuminators and cameras. One drawback of these setups is that they suffer from problems common to optical trackers, such as sensitivity to stray environmental light from artificial and natural sources. In this paper, we present a method to significantly enhance tracking robustness for those systems which employ active illumination. Through the addition of a small electronic circuit which modulates the LEDs used to illuminate the scene, contrast can be significantly increased.
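The following is an illustrative sketch (not the paper's circuit or code) of why modulation helps: if the illumination LEDs are switched in sync with alternating camera frames, subtracting an "LEDs off" frame from an "LEDs on" frame cancels most of the constant ambient light and keeps mainly the actively illuminated content.

```python
import numpy as np

def demodulate(frame_on: np.ndarray, frame_off: np.ndarray) -> np.ndarray:
    """Subtract the unlit frame from the lit frame to suppress ambient light."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Toy example: ambient light (value 80) appears in both frames, the touch
# signal (value 120) only while the LEDs are on.
ambient = np.full((4, 4), 80, dtype=np.uint8)
touch = ambient.copy()
touch[1:3, 1:3] += 120
print(demodulate(touch, ambient))
```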

PDF Bib
F. Echtler, A. Dippon, M. Tönnis, G. Klinker
Inverted FTIR: Easy Multitouch Sensing for Flatscreens
ITS 2009, November 23 - 25, Banff, Canada, pp. 29-32. ()

The increased attention which multitouch interfaces have received in recent years is partly due to the availability of cheap sensing hardware such as FTIR-based screens. However, this method has so far required a bulky projector-camera setup behind the screen. In this paper, we present a new approach to FTIR sensing by “inverting” the setup and placing the camera in front of the screen. This allows the use of unmodified flat screens as displays, thereby dramatically shrinking the space required behind the screen and enabling the easy construction of new types of interactive surfaces.

2008
PDF Bib
J. Schöning, P. Brandl, F. Daiber, F. Echtler, O. Hilliges, J. Hook, M. Löchtefeld, N. Motamedi, M. Muller, P. Olivier, N. Roth, U. von Zadow
Multi-Touch Surfaces: A Technical Guide
Technical Report TUM-I0833 ()

Multi-touch interaction with computationally enhanced surfaces has received considerable recent attention. Approaches to the implementation of multi-touch interaction such as Frustrated Total Internal Reflection (FTIR) and Diffused Illumination (DI) have allowed for the low cost development of such surfaces, leading to a number of technology and application innovations. Although many of these techniques have been presented in an academic setting, the practicalities of building a high quality multi-touch enabled surface, both in terms of the software and hardware, are not trivial. This document aims to summarize the knowledge and experience of developers of multi-touch technology who gathered at the Bootcamp on Construction & Implementation of Optical Multi-touch Surfaces at Tabletop 2008 in Amsterdam, and seeks to provide hints and practical advice to people seeking to “build your own” multi-touch surface. We mostly focus on technical aspects that are important in the construction of optical multi-touch surfaces, including: infrared illumination, silicone compliant surfaces, projection screens, cameras, filters, and projectors. In addition, we outline how to integrate this hardware to allow users to create a solid multi-touch surface, and provide an overview of existing software libraries for the implementation of multi-touch applications. In addition, we discuss the problem of latency introduced by the different parts of the system. A brief description of most of the common technologies to realize (multi-) touch surfaces is provided; however, the main focus is on those that utilise optical approaches.

PDF Bib
S. Nestler, F. Echtler, A. Dippon, G. Klinker
Collaborative problem solving on mobile hand-held devices and stationary multi-touch interfaces
PPD'08. Workshop on designing multi-touch interaction techniques for coupled public and private displays ()

This paper focuses on the coupling of mobile hand-helds with a stationary multi-touch table top device for collaborative purposes. For different fields of application, such as the health care domain, the coupling of these two technologies is promising. Using the example of Sudoku puzzles, we evaluated the collaboration between multi-touch table top devices and mobile hand-helds. During the small-scale evaluation, we focused on the differences between face-to-face collaboration and remote collaboration when solving problems collaboratively on table top devices and hand-helds.

PDF Bib
F. Echtler, M. Huber, D. Pustka, P. Keitler, G. Klinker
Splitting the Scene Graph - Using Spatial Relationship Graphs Instead of Scene Graphs in Augmented Reality
3rd International Conference on Computer Graphics Theory and Applications (GRAPP '08) ()

Scene graphs have been a core element of 3D graphics since the publication of Inventor. However, in Virtual and Augmented Reality applications, 3D graphics are often interleaved with and controlled by real-world data provided by pose trackers, cameras and other kinds of sensors. In such a setup, the generalized concept of a Spatial Relationship Graph (SRG) might be better suited as an underlying data structure to describe the application and its components. In this paper, we will give an overview of the SRG concept, describe how it differs from a scene graph and provide an example AR application built upon an SRG-based tracking library.
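As a toy sketch of the SRG idea (not the actual tracking library's API): nodes are coordinate frames, edges carry known transforms, and a new spatial relationship is derived by composing transforms along a path between two frames. All frame names and values below are made up.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous transform containing only a translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Known relationships: camera -> marker, marker -> tool tip.
srg_edges = {
    ("camera", "marker"): translation(0.0, 0.0, 1.2),
    ("marker", "tooltip"): translation(0.1, 0.0, 0.05),
}

def compose(path):
    """Derive the transform along a path of frames by chaining the edges."""
    result = np.eye(4)
    for a, b in zip(path, path[1:]):
        result = result @ srg_edges[(a, b)]
    return result

# Derived relationship camera -> tooltip, as an SRG pattern would infer it.
print(compose(["camera", "marker", "tooltip"]))
```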

PDF Bib
F. Echtler, M. Huber, G. Klinker
Shadow Tracking on Multi-Touch Tables
AVI '08: 9th International Working Conference on Advanced Visual Interfaces, Naples, Italy, pp. 388-391. ()

Multi-touch interfaces have been a focus of research in recent years, resulting in the development of various innovative UI concepts. Support for existing WIMP interfaces, however, should not be overlooked. Although several approaches exist, there is still room for improvement, particularly regarding the implementation of the “hover” state, commonly used in mouse-based interfaces.

In this paper, we present a multi-touch system which is designed to address this problem. A multi-touch table based on FTIR (frustrated total internal reflection) is extended with a ceiling-mounted light source to create shadows of hands and arms. By tracking these shadows with the rear-mounted camera which is already present in the FTIR setup, users can control multiple cursors without touching the table and trigger a “click” event by tapping the surface with any finger of the corresponding hand.

An informal evaluation with 15 subjects found an improvement in accuracy when compared to an unaugmented touch screen.

PDF Bib
F. Echtler, G. Klinker
A Multitouch Software Architecture
Proc. of the 5th Nordic Conference on Human-Computer Interaction: Using Bridges (NordiCHI 08), Lund, Sweden, pp. 463-466. ()

In recent years, a large amount of software for multitouch interfaces with various degrees of similarity has been written. In order to improve interoperability, we aim to identify the common traits of these systems and present a layered software architecture which abstracts these similarities by defining common interfaces between successive layers. This provides developers with a unified view of the various types of multitouch hardware. Moreover, the layered architecture allows easy integration of existing software, as several alternative implementations for each layer can co-exist. Finally, we present our implementation of this architecture, consisting of hardware abstraction, calibration, event interpretation and widget layers.
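A rough sketch of the layering described here is shown below; the class and method names are illustrative, not taken from the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    x: float  # sensor or screen coordinates, depending on the layer
    y: float

class HardwareLayer:
    """Abstracts a concrete sensor; emits raw contact points in [0, 1]."""
    def poll(self) -> list[Contact]:
        return [Contact(0.25, 0.75)]  # stand-in for real sensor data

class CalibrationLayer:
    """Maps normalized sensor coordinates to screen coordinates."""
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
    def transform(self, c: Contact) -> Contact:
        return Contact(c.x * self.width, c.y * self.height)

class InterpretationLayer:
    """Turns calibrated contacts into higher-level events (here: trivial taps)."""
    def interpret(self, contacts: list[Contact]) -> list[str]:
        return [f"tap at ({c.x:.0f}, {c.y:.0f})" for c in contacts]

# The widget layer would consume the events; here we just print them.
hw, cal, interp = HardwareLayer(), CalibrationLayer(1920, 1080), InterpretationLayer()
events = interp.interpret([cal.transform(c) for c in hw.poll()])
print(events)
```

Because each layer only talks to its neighbours through a small interface, alternative implementations of any layer (a different sensor, a different recognizer) can be swapped in without touching the rest.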

PDF Bib
F. Echtler, G. Klinker
Tracking Mobile Phones on Interactive Tabletops
MEIS '08: Workshop on Mobile and Embedded Interactive Systems, Munich, Germany, pp. 285-290. ()

The number of “interactive surface” systems in public spaces, especially tabletop interfaces, is increasing. As more and more casual users interact with such systems, they may wish to use existing computing infrastructure, such as their mobile phone, in conjunction with the new interface. In this paper, we present a method to reliably detect and track unmodified mobile phones that are placed upon an interactive tabletop. Range detection and data exchange are done via Bluetooth, while the location of the phone is tracked through its shadow on the surface.

2007
PDF Bib
D. Pustka, M. Huber, F. Echtler, P. Keitler
UTQL: The Ubiquitous Tracking Query Language v1.0
Technical Report TUM-I0718, Oct. 11, 2007. ()

Centrally coordinated Ubiquitous Tracking (Ubitrack) setups consist of a Ubitrack server and many clients. The server maintains a central Spatial Relationship Graph (SRG) that describes coordinate frames, trackers and trackable objects. Using Spatial Relationship Patterns, the server derives new spatial relationships from the SRG and generates data flow descriptions that are executed by the clients to satisfy their own or other client’s queries.

In this context, a language is needed for the communication between the clients and the server to exchange descriptions of trackers, tracked objects, tracked or otherwise known spatial relationships, client capabilities and data flow networks. This document describes the Ubiquitous Tracking Query Language (UTQL), an XML-based description and query language for Ubitrack systems that fulfils these requirements.

PDF Bib
D. Pustka, P. Keitler, M. Huber, F. Echtler, G. Klinker
Ubiquitous Tracking for Quickly Solving Multi-Sensor Calibration and Tracking Problems
Demonstration at the Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, Nov. 14 - 16, 2007 ()

Tracking with multiple sensors, despite being well studied in research, lacks widespread use in the absence of reusable tools for calibration and sensor fusion. We try to fill this gap by providing a uniform theoretical and software framework called Ubitrack that addresses these issues.

The proposed demo will include a number of different trackers. To show the usefulness of our approach, we will let viewers select from a list of tracking and calibration problems arising from the tracker combination. We will then demonstrate how easily the given problem is graphically modeled and implemented using our Ubitrack toolbox.

PDF Bib
J. Newman, A. Bornik, D. Pustka, F. Echtler, M. Huber, D. Schmalstieg, G. Klinker
Tracking for Distributed Mixed Reality Environments
IEEE VR 2007 Workshop on 'Trends and Issues in Tracking for Virtual Environments', Charlotte, NC, USA, Mar 11, 2007 ()

By developing a taxonomy of existing systems, libraries and frameworks used for developing Mixed Reality and Ubiquitous Computing environments, it can be seen that little effort has been made to research applications that share the characteristics of these two fields. Solutions have focussed on meeting the needs of each project in turn, rather than developing a generalised middleware, which would not only have the advantage of generality, but also facilitate cooperation and collaboration in what ought to be a cross-disciplinary area. We have designed just such a solution that can be used to support a wide range of Mixed Reality applications and allow them to inter-operate in a distributed fashion, whilst simultaneously supporting straightforward porting of legacy applications.

PDF Bib
S. Nestler, A. Dollinger, F. Echtler, M. Huber, G. Klinker
Design and Development of Virtual Patients
Vierter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, Weimar, 15. Juli 2007 ()

This paper discusses the design and prototypical implementation of virtual patients to train paramedics in disaster operations. These virtual patients are presented on a multi-touch table top. The current approach focuses on disaster triage according to the mSTaRT triage algorithm. In order to enhance disaster preparedness continuous training of all paramedics is indispensable. Due to the fact that large disaster control exercises are laborious and expensive, training on a small scale makes sense. The presented approach includes gesture based interactions to simulate the patients as realistically as possible. This more intuitive interaction results in a more realistic simulation of the patient and a better preparation for the real situation.

PDF Bib
G. Klinker, F. Echtler
3D Visualization and Exploration of Relationships and Constraints at the Example of Sudoku Games
Technical Report TUM-I0722, Nov. 1, 2007, pp. 1-8. ()

In recent years, many systems for visualizing and exploring massive amounts of data have emerged. A topic that has not yet been investigated much concerns the analysis of constraints that different objects impose on one another regarding co-existence. In this paper, we present concepts toward visualizing mutual constraints between objects in a three-dimensional virtual and AR-based setting. We use 3x3 Sudoku games as suitable examples for investigating the underlying, more general concepts of helping users visualize and explore the constraints between different settings in rows, columns and blocks of matrix-like arrangements. Constraints are shown as repulsive and magnetic forces making objects keep their distance or seek proximity. We expect our concepts to be valuable beyond gaming, both as didactic tools for teaching and research, and in computational steering scenarios where problems are too complex to be solved by the computer alone in a reasonable amount of time. We are in the process of developing several prototypical visualization and interaction environments for Sudoku puzzles, allowing users to explore constraints in virtual 3D settings on a desktop and in an Augmented Reality-based environment. This paper reports on the underlying visualization principles.
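The force metaphor can be sketched numerically as follows; this toy snippet is not the paper's visualization, and the conflict rule and constants are simplifying assumptions.

```python
import numpy as np

def conflict(a, b):
    """Two candidate digits conflict if they share a value in the same row, column or block."""
    same_unit = (a["row"] == b["row"] or a["col"] == b["col"]
                 or (a["row"] // 3, a["col"] // 3) == (b["row"] // 3, b["col"] // 3))
    return same_unit and a["value"] == b["value"]

def force(a, b, pos_a, pos_b, strength=1.0):
    """Repel conflicting objects; leave compatible ones untouched."""
    direction = pos_a - pos_b
    dist = np.linalg.norm(direction) + 1e-6
    if conflict(a, b):
        return strength * direction / dist**2  # repulsion, falling off with distance
    return np.zeros(3)

a = {"row": 0, "col": 0, "value": 5}
b = {"row": 0, "col": 4, "value": 5}  # same row, same value -> conflict
print(force(a, b, np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```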

PDF Bib
M. Huber, D. Pustka, P. Keitler, F. Echtler, G. Klinker
A System Architecture for Ubiquitous Tracking Environments
The Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, Nov. 13 - 16, 2007, pp. 211-214. ()

Ubiquitous tracking setups, covering large tracking areas with many heterogeneous sensors of varying accuracy, require dedicated middleware to facilitate development of stationary and mobile applications by providing a simple interface and encapsulating the details of sensing, calibration and sensor fusion.

In this paper we present a centrally coordinated peer-to-peer architecture for ubiquitous tracking, where a server computes optimal data flow configurations for sensor and application clients, which are directly exchanging tracking data with low latency using a light-weight data flow framework. The server’s decisions are inferred from an actively maintained central spatial relationship graph (SRG) using spatial relationship patterns.

The system is compared to a previous Ubitrack implementation using the highly distributed DWARF middleware. It exhibits significantly better performance in a reference scenario.

PDF Bib
F. Echtler, M. Huber, G. Klinker
Hand tracking for enhanced gesture recognition on interactive multi-touch surfaces
Technical Report TUM-I0721, Nov. 1, 2007. ()

Recently, interactive surfaces with multi-touch sensors based on frustrated total internal reflection (FTIR) have seen increased attention in research and commerce. In this paper, we present a new method of gathering data about the users’ gestures on an interactive table beyond simple binary touch information.

In addition to the infrared light emitters at the rim of the interaction surface, a second infrared light source is placed above an interactive table to create shadows of hands and arms. By tracking these shadows with the same rear-mounted camera, several consecutive and disjoint surface contacts can be traced back to the same user, thereby enabling new interaction techniques. We demonstrate this approach using the example of a virtual whiteboard with persistent color assignments for each participant.
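The association step can be illustrated with a small sketch: each touch point is assigned to the shadow region that contains it, so successive touches from the same hand keep, for example, the same pen color. The shadow polygons, user ids and coordinates below are made-up placeholders.

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Two hand/arm shadows as coarse polygons, keyed by user id.
shadows = {
    "user_a": [(0, 0), (40, 0), (40, 30), (0, 30)],
    "user_b": [(60, 0), (100, 0), (100, 30), (60, 30)],
}

def assign_touch(px, py):
    """Return the user whose shadow contains the touch point, if any."""
    for user, poly in shadows.items():
        if point_in_polygon(px, py, poly):
            return user
    return None

print(assign_touch(20, 10))  # -> "user_a"
print(assign_touch(75, 12))  # -> "user_b"
```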

2006
PDF Bib
F. Echtler
Using Semantic Web Languages in Argumentation Models
Forschungsberichte Künstliche Intelligenz FKI-253-06, TU München, 2006 ()

Recent research has created a multitude of argumentation models with varying degrees of formality. In this paper, we look at the feasibility of using semantic web languages like OWL and SWRL as an unbiased template for developing such models. We then present the Argumentation Ontology (ArgOn) as an example.

2005
PDF Bib
J. Georgii, F. Echtler, R. Westermann
Interactive Simulation of Deformable Bodies on GPUs
Simulation and Visualisation Conference Proceedings, 2005 ()

We present a mass-spring system for interactive simulation of deformable bodies. For the number of springs we target, numerical time integration of spring displacements needs to be accelerated and the transfer of displaced point positions for rendering must be avoided. To fulfill these requirements, we exploit features of recent graphics accelerators to simulate spring elongation and compression in the graphics processing unit (GPU), saving displaced point masses in graphics memory, and then sending these positions through the GPU again to render the deformed body. This approach allows for interactive simulation and rendering of about one hundred thousand elements, and it enables the display of internal properties of the deformed body. To further increase the physical realism of this simulation, we have integrated volume preservation and additional physics based constraints into the GPU mass-spring system.
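For orientation, here is a CPU-side NumPy sketch of one explicit integration step of a mass-spring system, just to illustrate the kind of computation that is moved onto the GPU; the constants and the simple integration scheme are chosen for brevity and are not taken from the paper.

```python
import numpy as np

def step(pos, vel, springs, rest_len, k=50.0, mass=1.0, damping=0.98, dt=0.01):
    """Advance positions and velocities by one explicit time step."""
    forces = np.zeros_like(pos)
    for (i, j), l0 in zip(springs, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - l0) * d / length  # Hooke's law along the spring
        forces[i] += f
        forces[j] -= f
    vel = damping * (vel + dt * forces / mass)
    return pos + dt * vel, vel

# Two point masses connected by one stretched spring.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, springs=[(0, 1)], rest_len=[1.0])
print(pos)
```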

2004
PDF Bib
G. Klinker, H. Najafi, T. Sielhorst, F. Sturm, F. Echtler, M. Isik, W. Wein, C. Truebswetter
FixIt: An Approach towards Assisting Workers in Diagnosing Machine Malfunctions
Proc. of the International Workshop on Design and Engineering of Mixed Reality Systems - MIXER 2004, Funchal, Madeira, CEUR Workshop Proceedings ()

Augmented Reality (AR) is a newly emerging user interface paradigm that is currently under rapid development. AR is still in its infancy. Only very few cases exist in which AR technology has come out of the laboratories and has been evaluated [3] or used [8] in real industrial settings. At the Technical University of Munich, we address some of these open questions annually in three-month laboratory classes for graduate students in their junior years. By posing the problems within industrial settings (e.g., by involving an industrial sponsor), we try to ensure a realistic setting within which AR solutions are considered. Students investigate the posed problem, suggest a solution and build a very rough, prototypical demonstrator that illustrates the key ideas. The result is a prototypical system, illustrating the key points of using AR in a specific problem context. At the same time, the system design process has also laid out (and partially addressed) both system building and very general system design issues, which will be fed back into the research agenda of the university and picked up in due time.

PDF Bib
F. Echtler
Realization of efficient mass-spring simulations on graphics hardware
Diploma Thesis, TU München, 2004 ()

Physics-based simulation of deformable objects is a valuable tool for creating realistic and plausible computer graphics. However, even the calculations involved for simple abstractions like mass-spring models are highly demanding, especially when real-time simulation is desired. This diploma thesis explores the possibility of using the powerful vector computation engine present in modern 3D graphics hardware for these calculations and shows a significant performance gain over CPU-based solutions, which in turn allows the use of larger and more detailed models.

2003
PDF Bib
F. Echtler, F. Sturm, K. Kindermann, G. Klinker, J. Stilla, J. Trilk, H. Najafi
The Intelligent Welding Gun: Augmented Reality for Experimental Vehicle Construction
Chapter 17 in Virtual and Augmented Reality Applications in Manufacturing (VARAM), S.K. Ong and A.Y.C. Nee, eds., Springer Verlag, 2003, pp. 333-360. ()

This chapter presents the prototypical design and implementation of an Intelligent Welding Gun to help welders in the automotive industry shoot studs with high precision in experimental vehicles. A presentation of the stud welding scenario and the identified system requirements is followed by a thorough exploration of the design space of potential system setups, analyzing the feasibility of different options to place sensors, displays and landmarks in the work area. The setup yielding the highest precision for stud welding purposes is the Intelligent Welding Gun: a regular welding gun with a display attachment, a few buttons for user interactions, and reflective markers to track the gun position from stationary cameras. While welders operate and move the gun, the display shows three-dimensional stud locations on the car frame relative to the current gun position. Navigational metaphors, such as notch and bead and a compass, are used to help welders place the gun at the planned stud positions with the required precision. The setup has been tested by a number of welders. It shows significant time improvements over the traditional stud welding process. It is currently in the process of being modified and installed for production use.

Icons based on CC-BY work by Adrien Coquet and Pedro Lalli from the Noun Project.