MEET OUR CODE_N CONTEST FINALISTS 2018: Tawny from Germany
Human-to-human interaction relies heavily on empathy, a trait artificial intelligence lacks. By combining human data with AI, TAWNY gives machines the ability to become empathic. Sebastian Schröder enlightens us on their Emotion AI, a solution that makes all interactions more empathic and efficient.
Jessica: What is TAWNY all about?
Sebastian: TAWNY provides Emotion AI, a solution that makes all interactions more empathic and efficient.
We use human data from various sensors (camera, wearables, smartphone, depending on the use case), and our AI detects relevant affective states, such as emotions, reactions, or patterns tied to a specific desired outcome. This information is delivered to the system our AI is embedded in, which can then adapt its behavior to the human being.
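The pipeline described above (sensor data in, affective state inferred, host system adapts) can be sketched in a few lines. This is a minimal, hypothetical illustration of the concept; all names, thresholds, and signals are assumptions for the example, not TAWNY's actual API or model.

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    """Raw signals from a wearable (hypothetical fields for illustration)."""
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens

def infer_affective_state(sample: SensorSample) -> str:
    """Toy stand-in for the Emotion AI model: maps raw signals to a state label.
    A real system would use a trained classifier, not fixed thresholds."""
    if sample.heart_rate > 100 and sample.skin_conductance > 5.0:
        return "stressed"
    return "calm"

def adapt_system(state: str) -> str:
    """The embedding system reacts to the inferred affective state."""
    return "slow down, simplify UI" if state == "stressed" else "proceed normally"

sample = SensorSample(heart_rate=110, skin_conductance=6.2)
state = infer_affective_state(sample)
print(state, "->", adapt_system(state))  # prints: stressed -> slow down, simplify UI
```

The key design point is the separation: the sensing and inference layer produces a compact affective-state signal, and the host system decides how to act on it for its own use case.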
Jessica: How did you come up with the idea?
Sebastian: Marco and Chadly both did their Ph.D.s in the area of affective computing at LMU. In parallel, Michael realized that affective computing would become a disruptive market and initiated a project in this area. The team first met two years ago at an event, and the idea was born. After some highly promising and successful first projects, the company was founded one year ago.
Jessica: What are you trying to solve?
Sebastian: Interactions between humans and machines are not as smooth as interactions between humans. Human-human interaction relies heavily on empathy and therefore requires a substantial amount of emotional intelligence, which machines completely lack. We give machines the ability to become empathic and thus make up for the productivity losses in their interactions with humans.
Jessica: With your application, you strive to create truly emotionally intelligent human-machine interactions. In which fields can Emotion AI be applied?
Sebastian: We adapt our AI to the business needs of our customers. Our technology can be found in customer interactions, production, human-machine interfaces, recommender systems, gaming, and smart products.
- In call centers, we can predict successful calls with a simple wearable worn by the agent.
- In automotive production, we can predict errors with the help of a simple wearable worn by a factory worker.
- In biathlons, we can predict whether an athlete will achieve their target with the help of a wearable.
- In chatbot and e-commerce scenarios (which are essentially recommender systems), we can detect the affective state of a user and adapt the content they are shown simply by adding smartphone sensors, which leads to higher conversion rates.
- In gaming, we can detect the flow state of gamers and adjust difficulty levels to improve the experience.
- In human-machine interfaces, we can automate interactions based on simple video input, for example auto-swipe in Tinder or auto-like in Facebook, and we can provide a wearable that adds auto-select to Spotify playlists.
- Our technology is embedded in a smart solution made by a producer of baby products targeted at new mothers. Because the product is empathic, it interacts better with users and significantly reduces stress, which is especially valuable for young mothers.
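The gaming use case, adjusting difficulty from a detected flow state, can be sketched as a simple control rule. Everything here is a hypothetical illustration: the flow score, its scale, and the thresholds are assumptions, not how TAWNY's detection actually works.

```python
def adjust_difficulty(flow_score: float, current_level: int) -> int:
    """Illustrative difficulty controller driven by an affective signal.

    flow_score is a hypothetical value in [0, 1]:
    low = bored (under-challenged), high = anxious (over-challenged),
    mid-range = in flow.
    """
    if flow_score < 0.3:
        return current_level + 1          # bored: raise the challenge
    if flow_score > 0.7:
        return max(1, current_level - 1)  # anxious: ease off
    return current_level                  # in flow: leave the level alone

print(adjust_difficulty(0.2, 3))  # prints: 4  (bored, step up)
print(adjust_difficulty(0.5, 3))  # prints: 3  (in flow, no change)
print(adjust_difficulty(0.9, 3))  # prints: 2  (anxious, step down)
```

The same pattern (an affective score steering a single control knob) maps onto the other use cases above, such as adapting content in a recommender or pacing a chatbot.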
Our goal for this year is to standardize our products and find a scalable use case. We would be happy to find it with your help.
Jessica: Thank you for the interview, Sebastian!