MEET OUR CODE_N CONTEST FINALISTS 2018: RealSynth from Germany

08/22/2018  |  Digital Transformation, Interview, Startups

A virtual gym to train your AI – sounds like pie in the sky? Berlin-based CODE_n CONTEST finalist RealSynth has been turning this scenario into reality since 2017! The startup builds fully tagged, realistic virtual environments where training and testing AI can be done faster, cheaper, and safer than with current manual processes. Backed by TechStars and next47 – the independent investment arm of Siemens – these pioneers are truly devoted to minimizing the need for field data and field tests… and to taking the load off companies that are obliged to test their systems and verify them rigorously under a huge variety of scenarios and conditions before they can safely be deployed! Check out our interview with co-founder Bernhard Prantl and come visit the RealSynth booth at the new.New Festival 2018.

Lisa: Hi Bernhard, what is RealSynth all about? What are you trying to solve?

Bernhard: RealSynth is a virtual gym for AI. Our proprietary software solution reinvents the AI training and machine-learning process. We generate photorealistic image data in real time. Our data comes pre-tagged with pixel-level accuracy and can be used directly with any machine learning platform. This allows our customers to capitalize on endless volumes of training data while also controlling and repeating test cases fully and automatically. We can cover any scenario, including changes in weather and lighting conditions or customer-specific edge cases. We already deliver synthetic image data at a fraction of the cost of comparable real image data. In the future, we will create a virtual universe that simulates the real world at the push of a button. We will provide an off-the-shelf SaaS solution, enabling all companies to leverage simulated environments for training and testing their algorithms.
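To give a concrete picture of what "pre-tagged with pixel-level accuracy" means in practice, here is a minimal sketch of how such synthetic frames and their label masks could be fed into an off-the-shelf training pipeline. The file layout, class IDs, and the SyntheticSceneDataset class are illustrative assumptions for this example, not RealSynth's actual interface.

    # Minimal sketch: pairing rendered RGB frames with their per-pixel class-ID
    # masks so any standard segmentation model can train on them. Paths and
    # directory layout are assumptions for illustration only.
    from pathlib import Path

    import numpy as np
    from PIL import Image
    from torch.utils.data import Dataset


    class SyntheticSceneDataset(Dataset):
        """Loads (image, pixel-level label mask) pairs from a synthetic dataset."""

        def __init__(self, root: str):
            self.image_paths = sorted(Path(root, "images").glob("*.png"))
            self.mask_paths = sorted(Path(root, "masks").glob("*.png"))
            assert len(self.image_paths) == len(self.mask_paths)

        def __len__(self) -> int:
            return len(self.image_paths)

        def __getitem__(self, idx: int):
            image = np.asarray(Image.open(self.image_paths[idx]).convert("RGB"))
            mask = np.asarray(Image.open(self.mask_paths[idx]))  # one class ID per pixel
            return image, mask

Because a renderer can write the label mask at the same moment it writes the frame, pixel-accurate annotations come essentially for free with synthetic data.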

Lisa: How did you come up with the idea?

Bernhard: Artificial intelligence – and specifically machine learning – is a growing driver of innovation in the industry, particularly for autonomous vehicles such as self-driving cars, drones, and fully automated trains. The main fuel for growth is data! In an open-source world where the algorithms, software, and computational power required for machine learning are accessible to anyone, data provides the only way for companies to gain a competitive advantage. During our time at Siemens, where we developed advanced driver assistance systems for railway vehicles, we experienced this thirst for data first-hand. Capturing large volumes of field data with cameras, lidars, radars, and other sensors was a tremendous challenge – especially data covering a sufficiently broad range of conditions and scenarios. We realized that under certain circumstances, the real world could be replaced by a simulation, which lets us generate huge amounts of annotated data at the push of a button. Hooked on the idea, we started to build a prototype using the latest video gaming technology, which generates annotated, photorealistic training data for railway scenarios.

Lisa: What exactly makes the current approach so challenging?

Bernhard: The process of collecting and annotating data from various sensors in the field (e.g., cameras, radars, and lidars) can be highly challenging, complex, and time-consuming. First, hundreds of thousands or even millions of images covering a wide range of conditions and scenarios have to be collected in the field. However, certain situations are extremely rare, and some scenarios are impossible or too dangerous to replicate in the real world (e.g., footage of a child running in front of a car). Moreover, collecting images is not enough: all of these images need to be annotated, which is a slow and labor-intensive process, particularly for the kind of pixel-level annotations required by modern machine learning algorithms. Annotating an image by hand can easily take several minutes of human time, whereas generating a synthetic image only requires a few milliseconds of computer time. Because field data and field tests have to cover a wide range of circumstances and scenarios – even ones that are extremely rare in the real world – our product lets customers minimize the need to acquire and annotate field data and avoid extensive field testing.
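To put the "minutes versus milliseconds" comparison into perspective, here is a rough back-of-the-envelope calculation; the figures of 5 minutes per hand-labeled image and 10 ms per synthetic frame are assumptions for illustration, not measured numbers from RealSynth.

    # Rough cost comparison for annotating one million images, using assumed
    # figures (5 minutes of human labeling per image vs. 10 ms of render time
    # per synthetic frame); only the order of magnitude matters here.
    num_images = 1_000_000

    manual_person_hours = num_images * 5 / 60            # ~83,000 person-hours
    synthetic_machine_hours = num_images * 0.010 / 3600   # ~2.8 machine-hours

    print(f"Manual annotation:    {manual_person_hours:,.0f} person-hours")
    print(f"Synthetic generation: {synthetic_machine_hours:,.1f} machine-hours")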

Lisa: Can you give an example of a virtual test field you have created and the kind of autonomous system that was tested in it? In which countries do you plan to roll out your solution?

Bernhard: In a pilot project with a railway manufacturer, we accurately reconstructed several kilometers of a real railroad within our simulated environment. We then used this virtual representation of the railroad as a data source for generating huge amounts of data covering a wide range of variations and parameters (e.g., different lighting and weather conditions). The project demonstrated that machine learning algorithms can be trained with this data and that what they learn transfers from the virtual to the physical world. In practice, the best approach is to use a blend of real and synthetic data during training: we showed that this yields algorithms that perform better than those relying on real or synthetic data alone. We are initially concentrating on the European automotive and rail vehicle markets and are already doing business with German corporates today. We are also speaking to further potential clients in Europe and the United States. By mid-2019, we will have started pilot projects in China.
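As a rough illustration of what "a blend of real and synthetic data during training" can look like, here is a small sketch that oversamples a scarce real-world set so each training batch mixes both sources roughly evenly. The stand-in tensor datasets and the 50/50 mixing ratio are assumptions for the example, not details of the pilot project described above.

    # Sketch: mixing a small real dataset with a large synthetic one so that
    # each batch is roughly half real, half synthetic. The random tensors stand
    # in for real camera frames and rendered frames with their label masks.
    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

    real_ds = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 5, (100, 32, 32)))
    synthetic_ds = TensorDataset(torch.randn(1_000, 3, 32, 32), torch.randint(0, 5, (1_000, 32, 32)))

    combined = ConcatDataset([real_ds, synthetic_ds])

    # Weight samples so the scarce real frames are drawn as often as the synthetic ones.
    weights = torch.cat([
        torch.full((len(real_ds),), 0.5 / len(real_ds)),
        torch.full((len(synthetic_ds),), 0.5 / len(synthetic_ds)),
    ])
    sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
    loader = DataLoader(combined, batch_size=16, sampler=sampler)

    for images, masks in loader:
        pass  # train the segmentation model on the mixed batch here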

Lisa: Thank you for the interview, Bernhard!

Meet RealSynth at the new.New Festival 2018 this fall in Stuttgart!