About

I am trained as a computer scientist and human-computer interaction researcher. I currently work in Munich at Fujitsu Enabling Software Technology on user interfaces for cloud services and data visualization. I enjoy building software and hardware prototypes, ranging from interactive visualizations of large data sets to Arduino projects with sensors. I am also experienced in a large repertoire of ethnographic and exploratory user studies as well as controlled experiments.

I used to work in HCI research and built up a large network of researchers from around the globe. I am glad to keep discussing human-computer interaction research with them. These people keep inspiring me, push my HCI knowledge beyond its limits, and motivate me to do my best in changing how we interact with technology.

I love what I do, and I like to spark my passion for human-computer interaction design in the people around me.

SLAP Widgets
SLAP Widgets are tangible, translucent controls for multi-touch tabletops. Using rear projection, SLAP Widgets can be relabeled dynamically. People perform more accurate input on touch-enabled surfaces with SLAP Widgets.
BodyScape
To enable on-body touch interaction techniques, I built a glove with a force sensor in each fingertip. The sensors are connected to an Arduino LilyPad, and an additional XBee Wi-Fi module sends all touch events to a computer controlling a display.
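The host-side half of this pipeline (force readings in, touch events out) can be sketched as a simple threshold-with-state detector. This is an illustrative reconstruction, not the original code; the threshold value, finger names, and event format are assumptions.

```python
# Minimal sketch of turning per-fingertip force readings into touch events.
# TOUCH_THRESHOLD is a hypothetical ADC value separating touch from rest.
TOUCH_THRESHOLD = 300

def detect_events(samples, threshold=TOUCH_THRESHOLD):
    """Turn a stream of (finger, force) readings into down/up events."""
    touching = {}  # finger -> currently pressed?
    events = []
    for finger, force in samples:
        pressed = force >= threshold
        if pressed and not touching.get(finger, False):
            events.append((finger, "down"))      # rising edge: touch begins
        elif not pressed and touching.get(finger, False):
            events.append((finger, "up"))        # falling edge: touch ends
        touching[finger] = pressed
    return events

readings = [("index", 120), ("index", 450), ("index", 480), ("index", 90)]
print(detect_events(readings))  # [('index', 'down'), ('index', 'up')]
```

Tracking the previous state per finger debounces the stream: repeated readings above the threshold produce a single "down" event rather than one per sample.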
Finger Identification on Multi-touch Tables
With Alix Goguey and Géry Casiez, I built a multi-touch table that distinguishes between the fingers touching its surface. We designed several interaction techniques and compared their performance in the context of a drawing application.
WILD Room
The WILD room features an interactive high-resolution wall-sized display and a VICON tracking system. I developed pan-and-zoom navigation techniques for astrophysicists navigating large image data of the Milky Way.
BiPad
I created a software framework enabling the design of bimanual interaction techniques for hand-held tablets. The hand holding the tablet can interact on the side or corner of the tablet, enabling powerful bimanual input techniques.
Windshield Displays
We showed that displaying potential hazard warnings on windshields, on top of the real world, leads to safer driving behavior than displaying warnings in head-up displays.
Multi-finger Chords
I built a classifier that recognizes multi-finger chords on off-the-shelf hand-held tablets and designed several interaction techniques enabling fast touch-based application shortcuts on tablets.
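A chord classifier of this kind can be sketched as a nearest-template matcher over touch layouts. This toy version is not the recognizer described above (which is hand-centric and identifies individual fingers); the template names, offsets, and distance metric are invented for illustration.

```python
# Toy chord recognizer: each chord template is a set of touch offsets
# relative to the chord's centroid (units are arbitrary, e.g. millimetres).
import math

def _normalize(points):
    """Center a touch layout on its centroid and sort for stable matching."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sorted((x - cx, y - cy) for x, y in points)

def classify(points, templates):
    """Return the name of the template whose normalized layout is closest."""
    obs = _normalize(points)
    best, best_cost = None, math.inf
    for name, tpl in templates.items():
        if len(tpl) != len(points):
            continue  # a chord must match the number of touching fingers
        cost = sum(math.dist(a, b) for a, b in zip(obs, _normalize(tpl)))
        if cost < best_cost:
            best, best_cost = name, cost
    return best

# Hypothetical templates: two fingers close together vs. wide apart.
TEMPLATES = {
    "index+middle": [(0, 0), (18, 2)],
    "thumb+index": [(0, 0), (55, 30)],
}
print(classify([(100, 100), (119, 101)], TEMPLATES))  # "index+middle"
```

Normalizing to the centroid makes the match translation-invariant, so the chord is recognized wherever it lands on the screen; a real recognizer would also need rotation and hand-size invariance.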
Camera Motion
Simply automating camera operation does not meet the needs of camera operators: they are artists and need to feel in control. We interviewed and observed nine operators on film sets. Based on our findings, we created interaction concepts in which camera motion control transitions back and forth between humans and machines.
Bibberball
Bibberball is a proof-of-concept implementation by my student Julia Wayrauther for a new era of physical gaming. Bibberball contains the user's personal smartphone, makes use of its sensors, storage, and network connection, and directs light (and sound) to the object's surface via fiberglass.
MIME
MIME is a concept for revealing gestures to users and for designing a gesture-command mapping that is easy to memorize. Concepts like MIME are important for making gesture input a true alternative to conventional input devices.
Perspective-dependent Gestures
We designed gestural input techniques for edge-loop scaling and extrusion tasks on touch-based input surfaces and tested them with a set of users.
CrowdView
In collaboration with Tobias Seitz (LMU Munich) and Simon Dittmann, I created https://crowdview.dk - more than 30,000 people provided anonymous location data at the Roskilde Festival in Denmark.

Work

Fujitsu Enabling Software Technology

Senior User Experience Specialist

I work as a developer and UX consultant on several Fujitsu products. One of my favorite projects is PICCO, an interactive cost-visualization tool for monitoring and optimizing cloud-service costs in organizations.

September 2015 - Present
Munich, Germany

Ludwig-Maximilians-Universität

Post-doc: teaching interaction techniques to end users

To make gesture languages a real alternative to common input devices, people need to remember a set of gestures and the commands those gestures are mapped to. I developed gesture languages that can be taught to people using little screen space, and showed in long-term experiments that the gesture-command mappings I designed are also easy for end users to remember.

June 2013 - June 2015
Munich, Germany

Télécom ParisTech

Post-doc: detecting multi-finger poses on off-the-shelf tablets

I implemented a recognizer that enables finger-identification through multi-finger chords performed on off-the-shelf capacitive touch tablets.

October 2012 - May 2013
Paris, France

PhD in Interaction Design (computer science)

INRIA, Université Paris Sud

I worked in a multi-surface environment featuring an interactive wall-sized display assembled from 32 screens. This interactive wall displays large data sets at very high resolution. I designed, implemented, and evaluated several interaction techniques for astrophysicists, extreme users who work collaboratively on large stitched images, e.g. of the Milky Way.

September 2009 - September 2012
Paris, France

Education

Internship at the InSitu Research Group

INRIA, Université Paris Sud

May 2009 - August 2009
Paris, France

Computer Science (Diplom Informatiker)

RWTH Aachen University, Germany

September 2002 - March 2009
Aachen, Germany

Publications

Perspective-dependent Indirect Touch Input for 3D Polygon Extrusion

Henri Palleis, Julie Wagner, Heinrich Hussmann

In adjunct proceedings of the 28th ACM User Interface Software and Technology Symposium, UIST '15. Charlotte, NC, USA, November 8 - 11, 2015. ACM, New York, NY, USA.

We present a two-handed indirect touch interaction technique for the extrusion of polygons within a 3D modeling tool that we have built for a horizontal/vertical dual touch screen setup. In particular, we introduce perspective-dependent touch gestures: using several graphical input areas on the horizontal display, the non-dominant hand navigates the virtual camera and thus continuously updates the spatial frame of reference within which the dominant hand performs extrusions with dragging gestures.

UIST, 2015
Charlotte, NC, USA

Quantifying Object- and Command-oriented Interaction

Alix Goguey, Julie Wagner, Géry Casiez

INTERACT '15: 15th International Conference on Human-Computer Interaction

In spite of previous work showing the importance of understanding users’ strategies when performing tasks, i.e. the order in which users perform actions on objects using commands, HCI researchers evaluating and comparing interaction techniques remain mainly focused on performance (e.g. time, error rate). This can be explained to some extent by the difficulty of characterizing such strategies. We propose metrics to quantify whether an interaction technique introduces a more object- or command-oriented task strategy, depending on whether users favor completing the actions on an object before moving to the next one or, in contrast, are reluctant to switch between commands. On an interactive surface, we compared Fixed Palette and Toolglass with two novel techniques that take advantage of finger identification technology, Fixed Palette using Finger Identification and Finger Palette. We evaluated our metrics with previous results on both existing techniques. With the novel techniques we found that (1) minimizing the required physical movement to switch tools does not necessarily lead to more object-oriented strategies and (2) increased cognitive load to access commands can lead to command-oriented strategies.

INTERACT, 2015
Bamberg, Germany

Contact-analog Warnings on Windshield Displays promote Monitoring the Road Scene

Renate Häuslschmid, Laura Schnurr, Julie Wagner, Andreas Butz

In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Automotive UI '15), Nottingham, UK, September 1-3, 2015.

Drivers attend to a lot of information at various locations inside and outside the car as well as on external devices (e.g. smartphones). Head-Up Displays (HUDs) support keeping drivers’ visual focus directed towards the street, as they present virtual information in the windshield area on top of the physical world within the field of view of the driver. Displayed information, however, is often spatially dissociated from its cause in the physical world: for example, a warning is displayed, yet drivers still require time to search for the hazard causing it. Windshield displays (WSDs) allow virtual warnings to be displayed at the position of the hazard. We compared HUD and WSD with a no-display baseline and found that drivers demonstrate calmer gaze behavior with WSDs; they keep their visual attention focused on the leading car for, on average, 1.5 s longer. However, we found no significantly faster reaction times compared to HUDs. We discuss our findings comparing HUDs to WSDs, present potential limitations of our study, and point out future steps to further investigate the advantages of WSDs.

Automotive UI, 2015
Nottingham, UK

MIME: Teaching Mid-Air Pose-Command Mappings

Simon Ismair, Julie Wagner, Ted Selker, Andreas Butz

MobileHCI '15: 17th International Conference on Human-Computer Interaction with Mobile Devices and Services

Mid-air gestures are initial hand poses with a subsequent movement. Existing gesture guides reveal this dynamic part of a gesture. Initial poses, however, are either revealed by space-consuming cheat sheets or time-consuming demonstration videos. Mime is a novel interaction concept that (1) reveals how to form complex hand poses and (2) teaches pose-command mappings: Mime reduces hand poses to space-efficient line figures that users mime with their hands; these abstract lines are embedded into command icons or names to create a mnemonic. We present several applications of the Mime concept, and implemented a prototype based on mid-air back-of-device interaction on off-the-shelf mobile phones. We compared both mnemonics, iconic and textual, to a baseline without embedding to test learnability and memorability of a 12-item vocabulary. Users in the iconic condition required significantly less training than both other conditions and recalled significantly more items after one week compared to the no-cue baseline.

MobileHCI, 2015
Copenhagen, Denmark

Delegation Impossible? - Towards Novel Interfaces for Camera Motion

Axel Hoesl, Julie Wagner, Andreas Butz

In Extended Abstracts of the 33rd SIGCHI Conference on Human Factors in Computing Systems, CHI '15. Seoul, Korea, April 18 - 23, 2015. ACM, New York, NY, USA.

When watching a movie, the viewer perceives camera motion as an integral movement of a viewport in a scene. Behind the scenes, however, there is a complex and error-prone choreography of multiple people controlling separate motion axes and camera attributes. This strict separation of tasks has mostly historical reasons, which we believe could be overcome with today’s technology. We revisit interface design for camera motion starting with ethnographic observations and interviews with nine camera operators. We identified seven influencing factors for camera work and found that automation needs to be combined with human interaction: Operators want to be able to spontaneously take over in unforeseen situations. We characterize a class of user interfaces supporting (semi-)automated camera motion that take both human and machine capabilities into account by offering seamless transitions between automation and control.

CHI, 2015
Seoul, Korea

Out of Shape, Out of Style, Out of Focus: Wie sich Computer besser in unseren Alltag integrieren (lassen)

Andreas Butz, Gilbert Beyer, Alina Hang, Doris Hausen, Fabian Hennecke, Felix Lauber, Sebastian Loehmann, Henri Palleis, Sonja Rümelin, Bernhard Slawik, Sarah Tausch, Julie Wagner, Heinrich Hussmann

In Informatik Spektrum: Organ der Gesellschaft für Informatik e.V. und mit ihr assoziierter Organisationen. Online, May 2014.

Informatik Spektrum, 2014

Multi-finger Chords for Hand-held Tablets: Recognizable and Memorable

Julie Wagner, Eric Lecolinet, Ted Selker

In Proceedings of the 32nd SIGCHI Conference on Human Factors in Computing Systems. Toronto, Canada, April 26 - May 1, 2014. ACM, New York, NY, USA.

Despite the demonstrated benefits of multi-finger input, today's gesture vocabularies offer a limited number of postures and gestures. Previous research designed several posture sets, but does not address the limited human capacity of retaining them. We present a multi-finger chord vocabulary, introduce a novel hand-centric approach to detect the identity of fingers on off-the-shelf hand-held tablets, and report on the detection accuracy. A between-subjects experiment comparing ’random’ to a ‘categorized’ chord-command mapping found that users retained categorized mappings more accurately over one week than random ones. In response to the logical posture-language structure, people adapted to logical memorization strategies, such as ‘exclusion’, ‘order’, and ‘category’, to minimize the amount of information to retain. We conclude that structured chord-command mappings support learning, short-, and long-term retention of chord-command mappings.

CHI, 2014
Toronto, Canada

A Body-centric Design Space for Multi-surface Interaction

Julie Wagner, Mathieu Nancel, Sean Gustafson, Stéphane Huot, Wendy Mackay

In Proceedings of the 31st ACM International Conference on Human Factors in Computing Systems - CHI 2013, Paris, France, April 2013

We introduce BodyScape, a body-centric design space that allows us to describe, classify and systematically compare multi-surface interaction techniques, both individually and in combination. BodyScape reflects the relationship between users and their environment, specifically how different body parts enhance or restrict movement within particular interaction techniques, and can be used to analyze existing techniques or suggest new ones. We illustrate the use of BodyScape by comparing two free-hand techniques, on-body touch and mid-air pointing, first separately, then combined. We found that touching the torso is faster than touching the lower legs, since the latter affects the user’s balance, and that touching targets on the dominant arm is slower than touching targets on the torso because the user must compensate for the applied force.

CHI, 2013
Paris, France

Left-over Windows Cause Window Clutter... But What Causes Left-over Windows?

Julie Wagner, Wendy Mackay, Stéphane Huot

Ergo'IHM 2012-24th French Speaking Conference on Human-Computer Interaction

Sleep mode lets users go for days or weeks without rebooting, supporting work on multiple tasks that they can return to later. However, users also struggle with window clutter, facing an increasing number of 'left-over windows' that get in the way. Our goal is to understand how users create and cope with left-over windows. We conducted a two-week field study with ten notebook users. We found that they work in very short sessions, switching often between computer-based and external tasks. 34% of left-over windows remain untouched for a day or more, increasing in quantity until they all disappear after a reboot. Some users reboot as a deliberate 'clean-up' strategy, whereas others lose left-over windows after an unexpected system crash. Users intentionally keep left-over windows as to-do lists, as reminders of upcoming tasks, and for facilitating future access; the rest are simply forgotten. Tools for visualizing and managing left-over windows should help users reduce window clutter, while maintaining the benefits of interruptible work sessions.

ErgoIHM, 2012
Biarritz, France

BiTouch and BiPad: designing bimanual interaction for hand-held tablets

Julie Wagner, Stéphane Huot, Wendy Mackay

In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12. ACM, New York, NY, USA.

Despite the demonstrated benefits of bimanual interaction, most tablets use just one hand for interaction, to free the other for support. In a preliminary study, we identified five holds that permit simultaneous support and interaction, and noted that users frequently change position to combat fatigue. We then designed the BiTouch design space, which introduces a support function in the kinematic chain model for interacting with hand-held tablets, and developed BiPad, a toolkit for creating bimanual tablet interaction with the thumb or the fingers of the supporting hand. We ran a controlled experiment to explore how tablet orientation and hand position affect three novel techniques: bimanual taps, gestures and chords. Bimanual taps outperformed our one-handed control condition in both landscape and portrait orientations; bimanual chords and gestures in portrait mode only; and thumbs outperformed fingers, but were more tiring and less stable. Together, BiTouch and BiPad offer new opportunities for designing bimanual interaction on hand-held tablets.

CHI, 2012
Austin, Texas

Multisurface Interaction in the WILD Room

Michel Beaudouin-Lafon, Olivier Chapuis, James Eagan, Tony Gjerlufsen, Stéphane Huot, Clemens Klokmose, Wendy Mackay, Mathieu Nancel, Emmanuel Pietriga, Clement Pillias, Romain Primet, Julie Wagner

In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12. ACM, New York, NY, USA.

CHI, 2012
Austin, Texas

Mid-air pan-and-zoom on wall-sized displays

Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay

In Proceedings of the 29th ACM International Conference on Human Factors in Computing Systems

Very-high-resolution wall-sized displays offer new opportunities for interacting with large data sets. While pointing on this type of display has been studied extensively, higher-level, more complex tasks such as pan-zoom navigation have received little attention. It thus remains unclear which techniques are best suited to perform multiscale navigation in these environments. Building upon empirical data gathered from studies of pan-and-zoom on desktop computers and studies of remote pointing, we identified three key factors for the design of mid-air pan-and-zoom techniques: uni- vs. bimanual interaction, linear vs. circular movements, and level of guidance to accomplish the gestures in mid-air. After an extensive phase of iterative design and pilot testing, we ran a controlled experiment aimed at better understanding the influence of these factors on task performance. Significant effects were obtained for all three factors: bimanual interaction, linear gestures and a high level of guidance resulted in significantly improved performance. Moreover, the interaction effects among some of the dimensions suggest possible combinations for more complex, real-world tasks.

CHI, 2011
Vancouver, Canada

Exploring sustainable design with reusable paper

Julie Wagner, Wendy Mackay

In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10. ACM, New York, NY, USA.

This paper explores the need for sustainable design with paper: how people really print and how we can take advantage of novel, reusable paper technology. We conducted two studies to investigate users' printing behavior. A key finding of the first study was that users often need an intermediate state between the electronic and physical forms of their documents. The second study examined users' predictions of which types of documents required this intermediate state. We formulate these findings into design guidelines that take into account: examination phase, transitions, cognitive and emotional reasons, and task- and event-relevant documents. Finally, we discuss how the different physical characteristics of reusable paper affect the user interface and could effectively support sustainable design.

CHI, 2010
Atlanta, Georgia

SLAPbook: tangible widgets on multi-touch tables in groupware environments

Malte Weiss, Julie Wagner, Roger Jennings, Yvonne Jansen, Ramsin Khoshabeh, James D. Hollan, Jan Borchers

In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, TEI '09. ACM, New York, NY, USA.

We present SLAPbook, an application using SLAP, translucent and tangible widgets for use on vision-based multi-touch tabletops in Single Display Groupware (SDG) environments. SLAP stands for Silicone ILluminated Active Peripherals and includes widgets such as sliders, knobs, keyboards, and buttons. The widgets add tactile feedback to multi-touch tables while simultaneously providing dynamic relabeling of tangible objects using the table's rear projection. SLAPbook provides multiple users the ability to add and edit content in a guestbook, browse other people's entries, and access personal data using a token-based personalization system. Interaction with the table takes place in personal and public space, so that users can make use of personal and shared controls to perform separate and coordinated actions.

TEI, 2009
Cambridge, United Kingdom

SLAP widgets: bridging the gap between virtual and physical controls on tabletops

Malte Weiss, Julie Wagner, Roger Jennings, Yvonne Jansen, Ramsin Khoshabeh, James D. Hollan, Jan Borchers

In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09. ACM, New York, NY, USA.

We present Silicone iLluminated Active Peripherals (SLAP), a system of tangible, translucent widgets for use on multi-touch tabletops. SLAP Widgets are cast from silicone or made of acrylic, and include sliders, knobs, keyboards, and buttons. They add tactile feedback to multi-touch tables, improving input accuracy. Using rear projection, SLAP Widgets can be relabeled dynamically, providing inexpensive, battery-free, and untethered augmentations. Furthermore, SLAP combines the flexibility of virtual objects with physical affordances. We evaluate how SLAP Widgets influence the user experience on tabletops compared to virtual controls. Empirical studies show that SLAP Widgets are easy to use and outperform virtual controls significantly in terms of accuracy and overall interaction time.

CHI, 2009
Boston, MA, USA

Honors and Awards

Best Paper Award

Top 1% of papers
CHI, 2011


Honorable Mention

Top 5% of Papers
CHI, 2013


Honorable Mention

Top 5% of Papers
CHI, 2014