I currently work at Monash University as a research fellow, after a postdoctoral position at Linköping University and a PhD student position with Inria and LIMSI/CNRS at Université Paris-Sud.
I was supervised by Tobias Isenberg from Inria and Mehdi Ammi from LIMSI/CNRS.
My main interest lies at the intersection of human-computer interaction and interactive scientific visualization. My focus has been on bridging the gap between several novel interaction paradigms often used in scientific visualization (tactile interaction and tangible interaction). Overall, I am deeply interested in 3D interaction as well as new interaction paradigms in general and their applications to specific scientific domains such as medicine or fluid dynamics. To validate my work, I have mainly relied on controlled experiments with users (domain experts or out-of-the-lab participants).
Bat. 660, Office 1040
Université Paris-Sud XI
91405 Orsay Cedex France
PhD in HCI • 2014-2017: I was supervised by Mehdi Ammi and Tobias Isenberg and worked on building An Interaction Continuum for 3D Dataset Visualization. With this thesis work, I demonstrated the potential of an interaction continuum for visualization by proposing hybrid interaction paradigms in easy-to-maintain, easy-to-integrate, and affordable setups. It provides the necessary initial steps for an interaction continuum that will hopefully inspire the creation of more hybrid interaction techniques for 3D data interaction.
Master of Research in HCI • 2013-2014: This year of study was done in parallel with my last year at Polytech Paris Sud, giving me the opportunity to also obtain a Master of Research.
Master of Computer Science • 2013-2014: I had the opportunity to study for one year at the University of Hong Kong (HKU) thanks to an exchange program. I attended classes in the faculty of engineering as well as the faculty of computer science. I did not receive a diploma from the University of Hong Kong, as it was not included in the exchange program. List of courses taken:
Master of Engineering • 2011-2014: I received my Master of Engineering diploma in 2014 after three years of study at Polytech Paris Sud, where I specialized as a Computer Science Engineer.
Classe Préparatoire at Polytech Paris Sud • 2009-2011: Two intense years studying many scientific fields such as:
|2019||Lonni Besançon, Mickael Sereno, Lingyun Yu, Mehdi Ammi, Tobias Isenberg||"Hybrid Touch/Tangible Spatial 3D Data Selection" Computer Graphics Forum, Wiley, In press, Eurographics Conference on Visualization (EuroVis 2019), 38.|
|2019||Xiyao Wang, Lonni Besançon, Mehdi Ammi, Tobias Isenberg||"Augmenting Tactile 3D Data Navigation With Pressure Sensing" Computer Graphics Forum, Wiley, In press, Eurographics Conference on Visualization (EuroVis 2019), 38.|
|2019||Mickael Sereno, Lonni Besançon, Tobias Isenberg||"Supporting Volumetric Data Visualization and Analysis by Combining Augmented Reality Visuals with Multi-Touch Input" EuroVis Extended Abstracts, Jun 2019, Porto, Portugal.|
|2019||Kahin Akram Hassan, Yu Liu, Lonni Besançon, Jimmy Johansson, and Niklas Rönnberg.||"A Study on Visual Representations for Active PlantWall Data Analysis" In MDPI Data, 2019.|
|2019||Jouni Helske, Matthew Cooper, Anders Ynnerman, Lonni Besançon||"The Significant Effect of Visual Representations on Dichotomous Thinking" Journées Visu 2019, Agro Paris Tech, France, May 2019.|
|2019||Lonni Besançon, Matthew Cooper, Anders Ynnerman, Frédéric Vernier.||"Surimpression de population sur choroplèthes" Journée Visu 2019, May 2019, Paris, France.|
|2019||Lonni Besançon, Pierre Dragicevic||"The Continued Prevalence of Dichotomous Inferences at CHI" ACM CHI 2019 (alt.chi). May 4 - 7, 2019, Glasgow, UK.|
|2019||Tanja Blascheck, Lonni Besançon, Anastasia Bezerianos, Bongshin Lee, Petra Isenberg||"Glanceable Visualization: Studies of Data Comparison Performance on Smartwatches" IEEE Transactions on Visualization and Computer Graphics, Institute of Electrical and Electronics Engineers, Jan. 2019|
|2018||Lonni Besançon, Amir Semmo, David Biau, Bruno Frachet, Virginie Pineau, El Hadi Sariali, Rabah Taouachi, Tobias Isenberg, Pierre Dragicevic||"Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization." Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering, Aug 2018, Victoria, Canada. pp.13.|
|2018||Tanja Blascheck, Lonni Besançon, Anastasia Bezerianos, Bongshin Lee, Petra Isenberg||"Perception des visualisations sur smartwatch." Journées Visu 2018, May 2018, EDF-Saclay|
|2018||Xiyao Wang, Lonni Besançon, Mehdi Ammi, Tobias Isenberg||"Navigation Tactile 3D Augmentée pour Mobiles." Journées Visu 2018, May 2018, EDF-Saclay|
|2018||Lonni Besançon||"An Interaction Continuum for 3D Dataset Visualization.", PhD Thesis, 2018.|
|2018||Tanja Blascheck, Anastasia Bezerianos, Lonni Besançon, Bongshin Lee, Petra Isenberg||"Preparing for Perceptual Studies: Position and Orientation of Wrist-worn Smartwatches for Reading Tasks.", Workshop on Data Visualization on Mobile Devices, ACM CHI, 2018, Montréal, Canada|
|2017||Lonni Besançon, Pierre Dragicevic||"La Différence Significative entre Valeurs p et Intervalles de Confiance", 29ème Conférence Francophone sur l'Interaction Homme-Machine, August 2017, Poitiers, France.|
|2017||Xiyao Wang, Lonni Besançon, Mehdi Ammi, Tobias Isenberg||"Augmenting Tactile 3D Data Exploration With Pressure Sensing", IEEE VIS 2017, Oct 2017, Phoenix, Arizona, United States. 2017|
|2017||Lonni Besançon, Paul Issartel, Mehdi Ammi, Tobias Isenberg||"Interactive 3D Data Exploration Using Hybrid Tactile/Tangible Input", Journées Visu 2017, Jun 2017, Rueil-Malmaison, France|
|2017||Lonni Besançon, Mehdi Ammi, Tobias Isenberg||"Pressure-Based Gain Factor Control for Mobile 3D Interaction using Locally-Coupled Devices", CHI 2017 - ACM CHI Conference on Human Factors in Computing Systems, May 2017, Denver, United States. pp.1831-1842.|
|2017||Lonni Besançon, Paul Issartel, Mehdi Ammi, Tobias Isenberg||"Usability Comparison of Mouse, Touch and Tangible Inputs for 3D Data Manipulation", CHI 2017 - ACM CHI Conference on Human Factors in Computing Systems, May 2017, Denver, United States. pp.4727-4740|
|2016||Mickael Sereno, Mehdi Ammi, Tobias Isenberg, Lonni Besançon||"Tangible Brush: Tactile-Tangible Hybrid 3D Selection", Extended Abstract IEEE VIS, October 23–28, Baltimore, Maryland, USA, 2016.|
|2016||Paul Issartel, Lonni Besançon, F. Guéniat, Tobias Isenberg, Mehdi Ammi||"Preference Between Allocentric and Egocentric 3D Manipulation in a Locally Coupled Configuration", ACM 4th Symposium on Spatial User Interaction (SUI)|
|2016||Lonni Besançon, Paul Issartel, Mehdi Ammi, Tobias Isenberg||"Hybrid Tactile/Tangible Interaction for 3D Data Exploration", IEEE Transactions on Visualization and Computer Graphics, 23(1), January 2017. (10 pages)|
|2016||Paul Issartel, Lonni Besançon, Tobias Isenberg, Mehdi Ammi||"A Tangible Volume for Portable 3D Interaction", International Symposium on Mixed and Augmented Reality (ISMAR), 2016|
We present the first empirical study on using color manipulation and stylization to make surgery images more palatable. While aversion to such images is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques, and tested them both on surgeons and lay people. While many artistic methods were found unusable by surgeons, edge-preserving image smoothing gave good results both in terms of preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). Color manipulation turned out to be less effective.
Stylization, affect, empirical study, surgery.
Small devices such as smartwatches change the way we look at data. Instead of prolonged stares, we only briefly glance at them. These devices also happen to be particularly well suited to support visualizations of personal data (activity tracking, food, weather, ...). We investigate how to design glanceable visualizations for these specific devices.
Glanceable Visualization, Smartwatches, Data Comparison
The work presented in this thesis demonstrates the potential of an interaction continuum for visualization by proposing hybrid interaction paradigms in an easy-to-maintain, easy-to-integrate, and affordable setup. It provides the necessary initial steps for an interaction continuum that will hopefully inspire the creation of more hybrid interaction techniques for 3D data interaction.
Tactile Interaction, Pressure Input, 3D Interaction, Tangible Interaction, Thesis
In addition to the wrong interpretations it often causes, binary significance testing tends to generate a false impression of confidence in scientific publications. Estimation techniques offer more information and better lend themselves to nuanced interpretations. We discuss the limits of binary significance testing and suggest practical guidelines on how to use estimation techniques in scientific publications, from paper writing to presentation.
Statistical Analysis, NHST, Estimation
We present a pressure-augmented tactile interaction technique to improve 3D object/view manipulation tasks on mobile devices. Existing tactile techniques for mobile data exploration either make use of up to four fingers to control all the needed degrees of freedom (DOF) for 3D manipulation or simultaneously adjust multiple DOF together to reduce the number of fingers needed for interaction. Yet the small display size of mobile devices limits the number of fingers that should simultaneously be used. Controlling each DOF for 3D data exploration separately, however, gives users more control. We address this contradiction by combining tactile and pressure input. We thus use pressure to intuitively switch between different tactile interaction modes. In this extended abstract we describe our interaction design as well as our rationale for the input mappings.
Tactile Interaction, Pressure Input, 3D Interaction
We present the design and evaluation of pressure-based interactive control of 3D navigation precision. Specifically, we examine the control of gain factors in tangible 3D interactions using locally-coupled mobile devices. By focusing on pressure as a separate input channel we can adjust gain factors independently from other input modalities used in 3D navigation, in particular for the exploration of 3D visualizations. We present two experiments. First, we determined that people strongly preferred higher pressures to be mapped to higher gain factors. Using this mapping, we compared pressure with rate control, velocity control, and slider-based control in a second study. Our results show that pressure-based gain control allows people to be more precise in the same amount of time compared to established input modalities. Pressure-based control was also clearly preferred by our participants. In summary, we demonstrate that pressure facilitates effective and efficient precision control for mobile 3D navigation.
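To illustrate the idea behind pressure-based gain control, the sketch below maps a normalized pressure reading to a navigation gain factor so that firmer presses yield coarser, faster navigation. The linear transfer function and the gain range are assumptions made purely for illustration, not the mapping used in the paper.

```python
def pressure_to_gain(pressure, g_min=0.1, g_max=2.0):
    """Map a pressure reading in [0, 1] to a navigation gain factor.

    The linear mapping and the [g_min, g_max] range are illustrative
    assumptions; the study's actual transfer function may differ.
    """
    pressure = max(0.0, min(1.0, pressure))  # clamp noisy sensor values
    return g_min + pressure * (g_max - g_min)

# Light touch -> fine, precise navigation; firm press -> coarse, fast navigation.
fine_gain = pressure_to_gain(0.0)    # 0.1
coarse_gain = pressure_to_gain(1.0)  # 2.0
```

Mapping higher pressure to higher gain follows the preference participants expressed in the first experiment.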
Tangible Interaction, Pressure Input, Gain Factor
We study user preference between allocentric and egocentric 3D manipulation on mobile devices, in a configuration where the motion of the device is applied to an object displayed on the device itself. We first evaluate this preference for translations and for rotations alone, then for full 6-DOF manipulation. We also investigate the role of contextual cues by performing this experiment in different 3D scenes. Finally, we look at the specific influence of each manipulation axis. Our results provide guidelines to help interface designers select an appropriate default mapping in this locally coupled configuration.
Tangible Interaction, Interaction Mappings
We present the design of a 6-DOF tangible controller for 3D spatial data selection. Such selection is a primary and fundamental task in scientific visualization: it is performed prior to many other interactions. Many datasets are defined in 3D space, yet selection is often performed based on 2D input. While 2D selection may be efficient for datasets with explicit shapes, it is less efficient for data without such objects. We address this issue by combining 2D tactile with 3D tangible input to perform 3D selection in volumetric datasets.
Tangible Interaction, Tactile Interaction, Design Space Exploration, Scientific Visualization
We present the design and evaluation of an interface that combines tactile and tangible paradigms for 3D visualization. While studies have demonstrated that both tactile and tangible input can be efficient for a subset of 3D manipulation tasks, we reflect here on the possibility to combine the two complementary input types. Based on a field study and follow-up interviews, we present a conceptual framework of the use of these different interaction modalities for visualization both separately and combined---focusing on free exploration as well as precise control. We present a prototypical application of a subset of these combined mappings for fluid dynamics data visualization using a portable, position-aware device which offers both tactile input and tangible sensing. We evaluate our approach with domain experts and report on their qualitative feedback.
Tangible Interaction, Tactile Interaction, Design Space Exploration, Scientific Visualization
We present a new approach to achieve tangible object manipulation with a single, fully portable and self-contained device. Our solution is based on the concept of a “tangible volume”. We turn a tangible object into a handheld fish-tank display. The tangible volume represents a volume of space that can be freely manipulated within a virtual scene. This volume can be positioned onto virtual objects to directly grasp them, and to manipulate them in 3D space. We investigate this concept through two user studies. The first study evaluates the intuitiveness of using a tangible volume for grasping and manipulating virtual objects. The second study evaluates the effects of the limited field of view on spatial awareness. Finally, we present a generalization of this concept to other forms of interaction through the surface of the volume.
Tangible Interaction, Augmented Reality, Fish-Tank Display
We evaluate the performance and usability of mouse-based, touch-based, and tangible interaction for manipulating objects in a 3D virtual environment. This comparison is a step toward a better understanding of the limitations and benefits of these existing interaction techniques, with the ultimate goal of facilitating the integration of different 3D data exploration environments into a single interaction continuum. For this purpose we analyze participants' performance in 3D manipulation using a docking task. We measured completion times, docking precision, as well as subjective criteria such as fatigue, workload, and preference. Our results show that the three input modalities provide similar levels of precision but require different interaction times. We also discuss our qualitative observations as well as people's preferences and put our findings into context of the practical application domain of 3D data analysis environments.
User Interfaces, Input devices and strategies
|2017||Marie Cheng||Polytech Paris-Sud||Bachelor|
|2016-2017||Mickael Francisco Sereno||Polytech Paris-Sud||Student Project|
|Year||Course Title||Location||Level||Role||# hours||# students|
|2014--2018||Interactive Information Visualization||Université Paris Saclay||Graduate||Lab Sessions||~14||~30|
|2016, 2017||Algorithms, Complexity, Graph Theory||Polytech Paris Sud||Graduate||Lectures + Tutorials||~20||~50|
|2014--2016||Algorithms and data structures||Polytech Paris Sud||Undergraduate||Lab Sessions & Tutorials||~30||~20|
|2015, 2016||Academic Internship Tutor||Polytech Paris Sud||Graduate||//||//||2|
|2015||Computer & Network Security||Polytech Paris Sud||Graduate||Lab Sessions||~15||~30|
|2014||Assembly Language||Polytech Paris Sud||Undergraduate||Lab Session||4||28|
An exhaustive list (with paper count) is available here (to come later), on my resume, or by clicking on the picture on the left. I review almost every year for (in alphabetical order): 3DUI (now IEEE VR), CHI, ICMI, EuroVis, IHM, ISMAR, ISS, MobileHCI, NordiCHI, OzCHI, SUI, TEI, UIST, VRST, VIS, VR.
Research Engineer Intern • March 2014 - September 2014: Design, implementation, and evaluation, on a driving simulator, of an interface for semi-autonomous cars.
Software Developer Intern • June 2013 - September 2013
Development of an application, in a research project that also involved Ecole Centrale Paris, to help generate and evaluate new antenna concepts using Bayesian networks, constraint satisfaction problems (CSPs), and several Java APIs.
The application uses Bayesian networks to represent and reason about all the requirements an antenna must satisfy. Users can add requirements, possible values for different components, and performance constraints, then ask the program to solve the resulting problem, which is done with a fast algorithm.
The application lets engineers from Thales create Bayesian networks without ever manipulating them directly, so they need no Bayes-net background to solve their complex problems. Finding the ideal architecture for a new antenna thus becomes easier: engineers only need to formulate their problem (constraints, performance, requirements, available pieces ...), and the program handles the resolution itself.
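To give a flavor of the kind of problem the tool solved, the sketch below brute-forces a tiny, invented antenna-configuration CSP: pick component values so that every requirement holds. The component names, domains, and constraints are all hypothetical; the real application relied on Bayesian networks and dedicated Java APIs rather than this naive enumeration.

```python
from itertools import product

# Hypothetical component domains (invented for illustration).
domains = {
    "band": ["S", "X"],
    "elements": [16, 32, 64],
    "amplifier": ["A1", "A2"],
}

def satisfies(config):
    """Invented requirements: X-band needs at least 32 elements,
    and amplifier 'A1' only supports S-band."""
    if config["band"] == "X" and config["elements"] < 32:
        return False
    if config["amplifier"] == "A1" and config["band"] != "S":
        return False
    return True

def solve(domains):
    """Brute-force enumeration of all valid configurations;
    real solvers prune the search space far more cleverly."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        config = dict(zip(names, values))
        if satisfies(config):
            yield config

solutions = list(solve(domains))
```

Each solution is a complete component assignment that meets every stated requirement, which mirrors how the engineers only formulated the problem and let the program resolve it.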
Network Engineer Intern • May 2011 - July 2011
Development of a network monitoring system based on Nagios, mainly using FAN (Fully Automated Nagios), for a private hospital.
The system had to monitor different machines such as switches, computers, servers, and printers, and had to be platform independent. Among other things, the internship involved working with Unix commands and several networking skills, such as understanding and using the SNMP protocol.
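The core of such monitoring boils down to Nagios-style threshold logic: a check polls a metric (for instance over SNMP) and maps it to one of the standard Nagios exit codes. The sketch below shows that logic with illustrative thresholds; the exit codes themselves are the ones Nagios plugins conventionally use.

```python
# Standard Nagios plugin exit codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_threshold(value, warn, crit):
    """Map a polled metric (e.g., disk usage or printer toner level,
    retrieved via SNMP) to a Nagios status. Thresholds are illustrative."""
    if value >= crit:
        return CRITICAL
    if value >= warn:
        return WARNING
    return OK

# e.g., disk usage at 85% with warn=80, crit=95 -> WARNING
status = check_threshold(85, warn=80, crit=95)
```

Nagios (and FAN on top of it) schedules such checks across all monitored hosts and raises alerts when a check leaves the OK state.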