AVI 2018

International Conference on Advanced Visual Interfaces

Resort Riva del Sole, Castiglione della Pescaia, Grosseto (Italy), 29 May - 1 June 2018

Keynote Speakers

Toyoaki Nishida, Kyoto University / RIKEN Center for Advanced Intelligence Project, Japan

Envisioning Conversation - Toward understanding and augmenting common ground

When: Wednesday, 30 May 2018

Abstract: Our intellectual life draws on daily conversations that allow us to communicate thoughts, ideas, emotions, and more. To conduct smooth and reliable interactions, participants need to share a solid common ground prior to conversation, consisting of knowledge, beliefs, and suppositions regarding the topics to discuss. The importance of common ground applies to artificial agents as well: the capability to jointly build and maintain common ground with people on the fly is indispensable for establishing a productive relationship. Understanding and augmenting common ground is challenging, as it is both tacit and dynamic: the common ground for a situation contains many tacit dimensions, and it is dynamically updated as the interaction proceeds.

Our approach leverages the idea of conversation as a window into common ground. We characterize conversation as a continuous update of common ground as thoughts and feelings are expressed, attempting to portray common ground by eliciting cues about it and accumulating them in a larger context. We place more emphasis on envisioning the common ground beneath the surface than on building autonomous conversational agents.

Conversational informatics is the study of conversational interaction from an interdisciplinary perspective. On the scientific side, it aims to unveil how people interact with each other to share thoughts and feelings using social signals, even when their mental processes differ greatly because they rely on different cultural backgrounds. On the engineering side, it aims to establish a computational theory of conversational interactions, encompassing the measurement, analysis, and modeling of conversational interactions among agents and their situation. Synthetic evidential study (SES) builds on conversational informatics to help a group of people envision the mental processes underlying conversational interactions by combining dramatic role play, producing an agent play, and criticizing it from different angles.

Conversation envisioning is a computational framework for facilitating continuous collaboration and longitudinal conversational effort by participants and meta-participants, such as directors who produce conversational content and instructors who adapt it to teach students. Conversation envisioning helps us understand other people and ourselves in the context of conversation. We are developing a computational platform for conversation envisioning that permits people and artificial agents to express their thoughts and ideas in a situated fashion, so that other participants can perceive them directly and conversationally. The framework is also designed to support collaborative data collection, analysis, and production by different categories of stakeholders, such as students, instructors, and directors.

Our preliminary evaluation with a bargaining scenario has demonstrated that timely and explicit presentation of relevant pieces of the common ground through conversation envisioning can intrinsically motivate people to engage in situated interactions with rich socio-cultural aspects in cross-cultural communication, where participants have limited or no shared background.

A potential application of conversation envisioning is education, such as computer-assisted language learning, in which students' participatory active learning is a key to success. Conversation envisioning helps stakeholders of different categories actively participate in and exploit digital storytelling, contributing to various aspects of learning subjects that involve expressing and interpreting mental processes. Future challenges include automating the process, from content production to building autonomous interactive agents, to significantly reduce the overhead for stakeholders and permit them to concentrate on the essence.

Bio: Toyoaki Nishida is Professor at the Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University. He received the B.E., the M.E., and the Doctor of Engineering degrees from Kyoto University in 1977, 1979, and 1984, respectively. His research centers on artificial intelligence and human-computer interaction. He opened up a new field of research called conversational informatics in 2003 and has edited and co-authored three books on conversational informatics and related topics, published by Wiley and Springer. Currently, he leads the Human-AI Communication (HAIC) team at the RIKEN Center for Advanced Intelligence Project (AIP). He is an associate editor of the AI & Society journal. He serves as a senior member of the Conference Toward AI Network Society, Ministry of Internal Affairs and Communications (MIC), Japan.

Alfred Inselberg, Tel Aviv University, Israel

Visual Analytics for High Dimensional Data

When: Thursday, 31 May 2018

Abstract: A dataset with M items has 2^M subsets, any one of which may be the one satisfying our objective. With a good data display and interactivity, our remarkable pattern-recognition ability defeats this combinatorial explosion by extracting insights from the visual patterns. This is the core reason for data visualization. With parallel coordinates, the search for relations in multivariate data is transformed into a 2-D pattern-recognition problem. Together with criteria for good query design, we illustrate this on several real datasets (financial, process control, credit score, and one with hundreds of variables) with stunning results. A geometric classification algorithm yields the classification rule explicitly and visually. The minimal set of variables (features) is found and ordered by predictive value. A model of a country’s economy reveals sensitivities, the impact of constraints, trade-offs, and economic sectors unknowingly competing for the same resources. An overview of the methodology provides foundational understanding: learning the patterns corresponding to various multivariate relations. These patterns are robust in the presence of errors, which is good news for applications. A topology of proximity emerges, opening the way for visualization in Big Data.
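The core mapping behind parallel coordinates can be sketched in a few lines: each N-dimensional data item becomes a polyline whose vertex on the i-th vertical axis is the item's i-th value. This is a minimal illustrative sketch, not material from the talk; the function name and the min-max normalization choice are the author's assumptions here.

```python
# Minimal sketch of the parallel-coordinates mapping: an N-dimensional
# point becomes a polyline with one vertex per (vertical) axis.
# Values are min-max normalized per variable so all axes share [0, 1].

def to_polyline(point, mins, maxs):
    """Map an N-dimensional point to (axis_index, normalized_value) vertices."""
    return [
        (i, (v - lo) / (hi - lo) if hi != lo else 0.5)
        for i, (v, lo, hi) in enumerate(zip(point, mins, maxs))
    ]

# Three hypothetical 4-dimensional items. Relations between variables
# show up as visual patterns in the resulting polylines (for instance,
# linearly related variables produce line segments that cross or run
# in parallel between adjacent axes).
data = [(1.0, 20.0, 3.0, 0.1),
        (2.0, 40.0, 6.0, 0.2),
        (3.0, 60.0, 9.0, 0.3)]
mins = [min(col) for col in zip(*data)]
maxs = [max(col) for col in zip(*data)]
polylines = [to_polyline(p, mins, maxs) for p in data]
```

Rendering each polyline against N equally spaced vertical axes gives the familiar parallel-coordinates display, turning the multivariate relation search into the 2-D pattern-recognition problem the abstract describes.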

Bio: Alfred received a Ph.D. in Mathematics and Physics from the University of Illinois (Champaign-Urbana) and continued there as Research Professor at the Biological Computer Laboratory, working on Neural Networks, Cognition, Population Dynamics, and Models of Complex Nonlinear Systems. He then held senior research positions at IBM, where he developed a Mathematical Model of the Ear (Cochlea) (TIME, Nov. 1974) and later Collision-Avoidance Algorithms for Air Traffic Control (3 USA patents). Concurrently he held joint appointments at UCLA, USC, the Technion, and Ben Gurion University. Since 1995 he has been Professor at the School of Mathematical Sciences of Tel Aviv University. He was elected Senior Fellow at the San Diego Supercomputing Center in 1996, and Distinguished Visiting Professor at Korea University in 2008 and at the National University of Singapore in 2011. He invented and developed the multidimensional visualization methodology of Parallel Coordinates, for which he received numerous patents and awards. His textbook, "Parallel Coordinates: VISUAL Multidimensional Geometry", was published by Springer and praised by Stephen Hawking among many others.

Mary Czerwinski, Microsoft Research, USA

Using Technology for Health and Wellbeing

When: Friday, 1 June 2018

Abstract: How can we create technologies to help us reflect on and change our behavior, improving our health and overall wellbeing? In this talk, I will briefly describe the last several years of work our research team has been doing in this area. We have developed wearable technology to help families manage tense situations with their children, mobile phone-based applications for handling stress and depression, as well as logging tools that can help you stay focused or recommend good times to take a break at work. The goal in all of this research is to develop tools that adapt to the user so that they can maximize their productivity and improve their health.

Bio: Dr. Mary Czerwinski is a Principal Researcher and Research Manager of the Visualization and Interaction (VIBE) Research Group. Mary's latest research focuses primarily on emotion tracking, intervention design and delivery, information-worker task management, and health and wellness for individuals and groups. Her research background is in visual attention and multitasking. She holds a Ph.D. in Cognitive Psychology from Indiana University in Bloomington. Mary was awarded the ACM SIGCHI Lifetime Service Award, was inducted into the CHI Academy, and became an ACM Distinguished Scientist in 2010. She also received the Distinguished Alumni award from Indiana University's Brain and Psychological Sciences department in 2014, and from the College of Arts and Sciences in 2018. Mary became a Fellow of the ACM in 2016. More information about Dr. Czerwinski can be found at her website.
