June 6, 2024, Stuttgart, Germany
At the STARTUP AUTOBAHN EXPO2024, we are thrilled to present a successfully completed pilot project that involved porting and optimizing a deep learning model for multiple hardware targets. Using Embedl’s Model Optimization SDK, an automotive Tier 1 supplier optimized their proprietary deep learning model for both a CPU and a DSP. The project shows what the SDK can do in the hands of the supplier's own engineers, delivering significant improvements in model performance across different hardware platforms.
The pilot project set out to optimize a proprietary deep learning model developed by the Tier 1 supplier, with the goal of improving its performance on multiple hardware targets (a CPU and a DSP from different vendors). The optimization proceeded in two main steps: an initial optimization for an existing hardware platform, followed by re-optimization for a different hardware target. By the end of the project, the engineers could choose the optimal target for their model, achieving a substantial speedup while preserving accuracy.
The first step was to optimize the supplier's interior sensing model for an existing hardware platform using Embedl’s Model Optimization SDK. Within a week of receiving the software, the engineers at the Tier 1 supplier produced an optimized model that cut latency in half. The improvement came from techniques such as hardware-aware neural architecture search (NAS), pruning, and mixed precision quantization. The Tier 1 supplier's engineers handled the entire optimization process themselves, without Embedl having access to the original data or models, demonstrating how approachable and effective the SDK is.
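For readers who want a feel for what "latency cut in half" means in practice, here is a minimal benchmarking sketch in plain PyTorch. It is not part of Embedl's SDK, and the figures in this post come from the supplier's own measurements on their target hardware; the toy model, warmup and iteration counts, and timing helper below are purely illustrative assumptions.

```python
# Illustrative only: a simple way to compare CPU inference latency before and
# after optimization. The project's results come from the supplier's own
# measurements on their target hardware, not from this script.
import time
import torch
import torch.nn as nn

def mean_latency_ms(model, example, warmup=10, iters=100):
    """Average wall-clock inference time in milliseconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(example)
        start = time.perf_counter()
        for _ in range(iters):
            model(example)
        return (time.perf_counter() - start) / iters * 1000.0

# Toy stand-in model; the supplier's interior sensing model is proprietary.
baseline = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
example = torch.randn(1, 512)
print(f"baseline latency: {mean_latency_ms(baseline, example):.2f} ms")
```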
Following the successful initial optimization, the model was transferred and re-optimized for a second hardware target from a different vendor, which required a different toolchain. This step highlighted the flexibility of Embedl’s SDK in adapting to varied hardware environments. The engineers efficiently re-optimized the model for the performance characteristics of the second target, maintaining the speedup and accuracy gains achieved in the first step. This adaptability is crucial for deploying deep learning models across diverse operational scenarios and demonstrates the SDK's robustness.
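As a rough illustration of what moving a model between vendor toolchains can look like, the sketch below exports a toy PyTorch model to ONNX, a common vendor-neutral interchange format that many hardware compilers accept. The specific toolchains, formats, and model used in this project are not disclosed, so the model, file name, and export settings here are hypothetical and do not represent Embedl's SDK or the supplier's actual workflow.

```python
# Illustrative only: exporting a model to ONNX as a vendor-neutral handoff
# point between toolchains. Model, file name, and settings are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 4),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "interior_sensing_stub.onnx",  # hypothetical file name
    input_names=["image"],
    output_names=["logits"],
    opset_version=17,
)
# Each vendor's compiler (for CPU or DSP) can then consume the exported graph
# and apply its own target-specific lowering and quantization.
```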
Several techniques were employed during the optimization process:

- Hardware-aware neural architecture search (NAS), which tailors the model architecture to the latency characteristics of each target device.
- Pruning, which removes redundant weights to reduce compute and memory footprint.
- Mixed precision quantization, which lowers numerical precision where it does not harm accuracy, yielding smaller and faster models.
These techniques collectively contributed to significant improvements in model performance, demonstrating the power and versatility of Embedl’s Model Optimization SDK.
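For readers unfamiliar with these techniques, the sketch below shows what pruning and quantization look like in generic, off-the-shelf PyTorch on a toy model. It is only a conceptual illustration of the techniques named above, not Embedl's SDK or the supplier's model; plain post-training dynamic quantization stands in for mixed precision quantization, and hardware-aware NAS is omitted because it depends on target-specific latency feedback.

```python
# Illustrative only: generic PyTorch pruning and post-training dynamic
# quantization on a toy model. This is NOT Embedl's SDK API, just a sketch of
# the kind of transformations these techniques apply.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in model; the supplier's interior sensing model is proprietary.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Unstructured magnitude pruning: zero out the 50% smallest weights per layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Post-training dynamic quantization: int8 weights for Linear layers on CPU
# (a simple stand-in for the mixed precision quantization named above).
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Quick sanity check on a dummy input.
x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```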
The Tier 1 supplier experienced several benefits from this project:

- A roughly 2x reduction in latency on the first hardware target, with accuracy preserved.
- A working optimized model within a week of receiving the SDK.
- The ability to re-optimize the same model for a second hardware target and toolchain from a different vendor.
- The freedom to compare hardware platforms and choose the optimal deployment target for their models.
- Full control of the workflow: the supplier's own engineers ran the optimization without sharing data or models with Embedl.
These benefits underscore the value of Embedl’s Model Optimization SDK in improving deep learning models for various hardware targets.
The success of this pilot project, which we will present at the STARTUP AUTOBAHN EXPO2024, highlights the transformative potential of Embedl’s Model Optimization SDK. The Tier 1 supplier's ability to optimize and re-optimize their deep learning model for multiple hardware platforms demonstrates the SDK's flexibility, efficiency, and effectiveness. Looking ahead, we anticipate further advances in model optimization techniques, enabling even greater performance improvements and broader applications across industries. Embedl remains committed to supporting our partners in achieving their technological goals and driving innovation in deep learning model optimization.