Martin Arend

General Manager Automotive Security, Data Services, Connected Car E/E Architecture
BMW Group, Germany

“Automotive Security – Behind the Scenes of Automobile Cyber and Information Security at BMW”

Abstract: Global mega trends are scaling up the functionality and comfort of the modern premium-brand vehicle, but also its system complexity and the number of attack vectors. The focus of this talk is security targets and holistic end-to-end approaches to implementing them, complemented by data sovereignty and transparency for the customer.


Shiro Kumano
Senior Research Scientist – NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan

“Machine Reading People’s Subjective Evaluation from Their Self-Reports and Facial Expression”

Abstract: The first step in developing a machine that has emotions and can interact empathetically with people is to teach the machine how to understand people’s emotional and cognitive states. Two main methods that people use in daily life are 1) to directly ask the target about their current state in order to obtain a self-report (e.g., “What do you think?”), and 2) to observe their behavior (e.g., facial expression), assuming a relation between the behavior and the internal state. One problem with these methods is that both pieces of information can easily be distorted, either intentionally or unintentionally. For example, some people exaggerate their responses; others may try to control their behavior so that they appear friendlier. In this talk, I will introduce our recent studies addressing these issues. First, I will introduce our technique for removing people’s response styles from their subjective evaluations to obtain unbiased results. Next, I will show how accurately humans and machines can distinguish between posed and spontaneous smiles. We are not specifically targeting the automotive industry, but I hope this talk will provide you with some insight.
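
As a simple illustration of the response-style issue, one elementary correction is per-rater standardization, which removes each rater’s personal offset (leniency) and scale (exaggeration) before scores are pooled. The sketch below only shows this minimal idea in Python; the technique presented in the talk is more sophisticated, and all names here are hypothetical.

```python
import numpy as np

def normalize_ratings(ratings):
    """ratings: (n_raters, n_items) matrix of subjective scores.
    Z-normalizing each rater's row removes that rater's personal
    offset (leniency) and scale (exaggeration) before pooling."""
    mu = ratings.mean(axis=1, keepdims=True)
    sd = ratings.std(axis=1, keepdims=True) + 1e-8  # guard against constant rows
    return (ratings - mu) / sd

raw = np.array([[4.0, 5.0, 5.0],   # an exaggerating rater
                [2.0, 3.0, 3.0]])  # a conservative rater
print(normalize_ratings(raw))      # both rows land on a common scale
```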


Bao-Liang Lu
Professor, Department of Computer Science and Engineering, Shanghai Jiao Tong University, China

“Multimodal Emotion Recognition Using Deep Learning and Transfer Learning”

Abstract: The field of affective computing aspires to narrow the communicative gap between the highly emotional human and the emotionally challenged computer by developing computational systems that recognize and respond to human emotions. The detection and modeling of human emotions are the primary subjects of study in affective computing. Among the various approaches to emotion recognition, electroencephalography (EEG)-based models are more reliable because of their high accuracy and objective evaluation in comparison with external appearance cues such as facial expression and gesture. Various psychophysiological studies have demonstrated correlations between human emotions and EEG signals. In this talk, we will present our recent work on identifying critical frequency bands and critical channels, investigating patterns that are stable over time, and developing a multimodal emotion recognition approach with deep learning and transfer learning.
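
A representation frequently used in this line of EEG emotion work is the differential entropy (DE) feature computed per channel and per frequency band. The sketch below shows the general idea using the Gaussian closed form 0.5 log(2*pi*e*var); the band boundaries, filter order, and sampling rate are illustrative assumptions, not the speaker’s exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG bands (Hz); exact boundaries vary across studies.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy_features(eeg, fs=200.0):
    """eeg: (n_channels, n_samples). Returns (n_channels, n_bands)
    differential-entropy features using the Gaussian closed form
    0.5 * log(2*pi*e*var) on each band-passed channel."""
    feats = np.empty((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        feats[:, j] = 0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=1))
    return feats

rng = np.random.default_rng(0)
segment = rng.standard_normal((62, 200))             # 1 s of 62-channel EEG at 200 Hz
print(differential_entropy_features(segment).shape)  # (62, 5)
```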


Daniel McDuff
Researcher – Microsoft, USA

“Building Perceptive AI: From Sensing to Synthesis”

Abstract: Emotions play an important role in our everyday lives. Humans have evolved highly adaptive behaviors that help us survive and thrive. AI can help us advance the fundamental understanding of human behavior and emotions, build smarter technology, and ultimately help people. In this talk, I will present novel methods for physiological and behavioral measurement via ubiquitous hardware. Then I will present state-of-the-art approaches for emotion synthesis that can be used to create rich human-agent or human-robot interactions. Finally, I will show examples of new human-computer interfaces and autonomous systems that leverage behavioral and physiological signals, including emotion-aware natural language conversation systems and vehicles with intrinsic emotional drives. This technology presents many opportunities for building natural user interfaces and more intelligent machines; however, it also raises questions about the ethics of designing emotionally aware artificial systems. Throughout the talk I will comment on many of these questions and propose design principles to help address them.
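
One representative technique in camera-based physiological measurement is remote photoplethysmography (rPPG), which recovers pulse rate from subtle skin-color changes in ordinary video. The following is a minimal sketch of the spectral-peak idea on a simulated green-channel trace; real pipelines add face tracking, illumination compensation, and more robust signal separation.

```python
import numpy as np

def pulse_rate_from_trace(green_trace, fps=30.0):
    """Estimate pulse rate (BPM) from the mean green-channel intensity of
    a facial region over time: remove the mean, then pick the dominant
    FFT frequency inside a plausible heart-rate band (0.7-4 Hz)."""
    x = np.asarray(green_trace) - np.mean(green_trace)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Simulated 10 s trace: a 1.2 Hz (72 BPM) pulse component plus sensor noise
t = np.arange(0, 10, 1.0 / 30.0)
trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.005 * np.random.randn(len(t))
print(round(pulse_rate_from_trace(trace)))  # ~72
```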


Stefan Scherer
CTO of Embodied, Inc. / Associate Director of Neural Information Processing and Research Assistant Professor, University of Southern California, Institute for Creative Technologies, USA

“Learning Representations of Human Behavior”

Abstract: Advances in computational behavior analyses and machine learning provide an opportunity to profoundly impact a wide range of professions that heavily rely on the interpretation of human behavior (e.g., customer profiling, targeted advertising, interpersonal skill training, and mental health screening). For example, automatic algorithms to detect impairments in social-emotional functioning from individuals’ behavior have the potential to increase the objectivity, accessibility, and efficiency of mental health care. In other words, automatic techniques can provide professionals with a different set of eyes and ears that produce quantified and objective assessments where otherwise only subjective information would be available.

Within this presentation, I discuss how we can automatically learn meaningful and discriminative representations of human behavior by leveraging both (1) explicit descriptors of human behavior motivated by top-down knowledge about human nature and (2) advanced machine learning techniques that derive representations automatically and directly from the data. Combining the two gives us the best of both worlds: (1) better interpretability of human behavior and (2) exceptional performance in tasks with large quantities of data.
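
As a minimal sketch of this hybrid idea, the two representation types can simply be normalized and concatenated before a downstream classifier; the names and dimensions below are illustrative only.

```python
import numpy as np

def fuse_representations(expert_feats, learned_feats):
    """Z-normalize each view, then concatenate into one feature vector
    that a downstream classifier can consume."""
    def znorm(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.hstack([znorm(expert_feats), znorm(learned_feats)])

expert = np.random.rand(100, 20)     # e.g. speaking-rate, gaze, posture statistics
learned = np.random.rand(100, 128)   # e.g. embeddings learned directly from data
print(fuse_representations(expert, learned).shape)  # (100, 148)
```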


Björn Schuller
Full Professor and Head of the “Chair of Embedded Intelligence for Health Care and Wellbeing”, University of Augsburg, Germany / Professor of Artificial Intelligence and Head of GLAM, the Group on Language, Audio & Music, Imperial College London, UK / Chief Scientific Officer (CSO) and Co-founding CEO, audEERING GmbH, Germany

“Audiovisual Affect Assessment and Autonomous Automobiles”

Abstract: Emotion and a broader range of affective driver states can be a life-decisive factor on the road. While this aspect has been investigated repeatedly, the advent of autonomous automobiles puts a new perspective on the role of computer-based emotion recognition in the car – that of the passenger. This includes, among other things, monitoring wellbeing during the drive, for example to adjust the driving style or to adapt the information and entertainment offerings. This talk aims to anticipate the corresponding challenges and to outline potential avenues towards affect modelling in a multimodal “audiovisual plus x” on-the-road context. From the technical end, this concerns holistic passenger modelling and (deep) automatic machine learning routes towards representation learning directly from raw data. As multimodal data and its synergistic integration can place a particular burden on synchronization and on the design of suitable shared network topologies, appropriate algorithms are also presented. In addition, techniques for acquiring affective data with reliable labels are introduced. In conclusion, automated affect analysis has only just matured to the point of applicability in autonomous vehicles in a first set of selected use cases, which will be discussed towards the end.


Midori Sugaya
Professor, Laboratory for Emotion Estimation by Biometrics and Artificial Intelligence Applied to Robot Platforms, Department of Information Science and Engineering, College of Engineering, Shibaura Institute of Technology, Japan

“Emotion Aware Sensing and Applications”

Abstract: Emotion-aware technologies, which aim to understand a person’s internal state through various methods and sensors, have attracted much attention recently. If a human’s emotional state can be applied to a variety of applications, robots, and devices, it should be possible to increase quality of life (QoL) through emotionally satisfying communication with emotion-aware artificial entities. Various technologies are required to achieve this. We will introduce our approach, which collects data from biometric sensors such as EEG and pulse sensors and estimates emotions based on psychological models and biometric indices. So far we have developed an emotion visualizer, a voice-casting robot, a personal-space-aware robot, and other applications. We will introduce the approach and its applications, along with related topics and issues concerning real-time feedback and embedded and networked systems.
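
As a rough illustration of mapping biometric indices onto a psychological model, the sketch below places two hypothetical indices (heart-rate variability from a pulse sensor and an EEG-derived attention index) on a Russell-style valence-arousal plane. The axis assignments, baselines, and quadrant labels are assumptions for illustration, not the laboratory’s published calibration.

```python
import math

def estimate_emotion(pnn50, eeg_attention, base_pnn50, base_attention):
    """Map deviations of two biometric indices from their resting
    baselines onto a valence-arousal plane and name the quadrant."""
    valence = pnn50 - base_pnn50              # assumed: higher HRV -> more pleasant
    arousal = eeg_attention - base_attention  # assumed: higher attention -> more aroused
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    quadrants = ["happy/excited", "angry/stressed", "sad/bored", "relaxed/calm"]
    return quadrants[int(angle // 90)]

print(estimate_emotion(0.45, 0.7, 0.30, 0.5))  # -> happy/excited
```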


Dongrui Wu
Full Professor, Key Laboratory of the Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China

“Driver Drowsiness Estimation from EEG Signals”

Abstract: This talk considers an important regression problem in brain-computer interface (BCI), namely, online driver drowsiness estimation from EEG signals. By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR. Using a simulated driving dataset with 15 subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller estimation errors than several other approaches.
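
The core idea behind such domain-adaptation regression can be illustrated by a weighted ridge regression that pools source-subject data with the few target-subject calibration samples, up-weighting the latter. This is a greatly simplified sketch: OwARR additionally integrates fuzzy sets and adaptation-regularization terms, and all names and weights below are illustrative.

```python
import numpy as np

def weighted_ridge(X_src, y_src, X_tgt, y_tgt, w_tgt=2.0, lam=1.0):
    """Fit ridge regression on pooled source + target data, up-weighting
    the few target-subject calibration samples. Closed form:
    w = (X^T D X + lam*I)^(-1) X^T D y, with D a diagonal weight matrix."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    d = np.concatenate([np.ones(len(y_src)), np.full(len(y_tgt), w_tgt)])
    XtD = X.T * d  # equivalent to X.T @ diag(d)
    return np.linalg.solve(XtD @ X + lam * np.eye(X.shape[1]), XtD @ y)

rng = np.random.default_rng(0)
X_src, X_tgt = rng.standard_normal((200, 10)), rng.standard_normal((20, 10))
true_w = rng.standard_normal(10)
y_src, y_tgt = X_src @ true_w, X_tgt @ true_w
print(weighted_ridge(X_src, y_src, X_tgt, y_tgt).shape)  # (10,)
```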


Michael Würtenberger
Vice President E/E Architecture and Technology, BMW Research, Innovations, New Technology, Germany

“Vehicle Interiors as Sensate Environments – Tactile Robotics and Artificial Intelligence Transforming the Future In-Car Experience”

Abstract: A vehicle’s interior can already be considered a highly interactive, sensate environment. With the advent of highly automated driving, even more emphasis will be placed on this smart space and the corresponding user experience. New interior layouts become possible, with attention shifting from the driver to the wellbeing and comfort of passengers in highly reconfigurable interiors. Smart materials, arrays of sensors, robotic mechatronics, and intelligent perception will give vehicles more capabilities to sense and interact with their human counterparts. Tactile intelligence in particular will add an exciting new modality and help address the challenges of safe human-robot coexistence.


Bo Xiao
Senior Applied Scientist – Amazon, USA

“A Smart Agent Feeling for You: Computational Analysis and Simulation of Empathy”

Abstract: Empathy is an important psychological process that modulates emotion and facilitates human communication. Enhancing empathy has profound significance in a range of applications based on human-human or human-computer interactions. This talk will first introduce the emotion regulation process of empathy and empirical evidence of improved outcomes from promoting empathy in interactions. It will then briefly review computational analysis of empathy based on multimodal signal processing, as well as approaches for simulating empathy in smart agents. In the end, we will discuss open challenges in empathy modeling and simulation, and paths towards application in consumer products.


Ziping Zhao
Associate Professor and Vice Dean of the College of Computer and Information Engineering, Tianjin Normal University, China

“When Attention Meets Speech Applications: Speech Emotion Recognition & Speech-based Depression Analysis Perspective”

Abstract: Recently, attention mechanisms have been widely adopted in the deep learning community. In the context of deep learning, attention is a family of algorithms that enable a network to dynamically select subsets of input attributes given a particular context (input-output pair). The overall aim of applying attention is, of course, to improve decision accuracy; since its introduction in 2015 for machine translation, attention has been successfully applied to speech applications in a number of different forms. This talk presents how attention mechanisms have been applied to the tasks of speech emotion recognition and automatic assessment of depression from speech. We also detail the attentive strategies that have been successful in both tasks and discuss challenging issues in practice.
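
At its simplest, attention for utterance-level speech tasks reduces to a learned weighted average over frame-level features. The sketch below shows plain dot-product attention pooling; in practice the scoring vector w would be learned jointly with the rest of the network, and the dimensions are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attention_pool(frames, w):
    """frames: (T, d) frame-level acoustic features; w: (d,) learnable
    scoring vector. Scores each frame, softmax-normalizes the scores
    over time, and returns the weighted mean as the utterance embedding."""
    scores = frames @ w   # (T,) one scalar relevance score per frame
    alpha = softmax(scores)  # attention weights sum to 1 over time
    return alpha @ frames    # (d,) weighted average of the frames

T, d = 120, 40                          # 120 frames of 40-dim acoustic features
frames = np.random.rand(T, d)
w = np.random.rand(d)
print(attention_pool(frames, w).shape)  # (40,)
```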