Events

STARTUP AUTOBAHN EXPO2024

Written by Embedl | May 29, 2024 11:33:02 AM

June 6, 2024 | Stuttgart, Germany

At STARTUP AUTOBAHN EXPO2024, we are thrilled to present a successfully completed pilot project that involved porting and optimizing a deep learning model for multiple hardware targets. Using Embedl’s Model Optimization SDK, an automotive Tier 1 supplier optimized their proprietary deep learning model for both CPU and DSP. This project demonstrates the remarkable capabilities of our SDK in the hands of skilled engineers, leading to significant improvements in model performance across different hardware platforms.


Overview of the Pilot Project

The pilot project aimed to optimize a proprietary deep learning model developed by the Tier 1 supplier. The goal was to enhance the model's performance on multiple hardware targets (a CPU and a DSP from different vendors). The optimization process was divided into two main steps: initial optimization for an existing hardware platform and subsequent re-optimization for a different hardware target. The project concluded with the engineers able to choose the optimal target for their models, achieving substantial speedups while preserving accuracy.


Step 1: Initial Optimization for Existing Hardware

The first step involved optimizing the supplier's interior sensing model for an existing hardware platform using Embedl’s Model Optimization SDK. Within a week of receiving the software, the engineers at the Tier 1 supplier produced an optimized model that cut latency in half. This improvement was achieved through advanced techniques such as hardware-aware neural architecture search (NAS), pruning, and mixed precision quantization. The Tier 1 supplier's engineers handled the entire optimization process themselves, without Embedl ever having access to the original data or models, demonstrating the SDK's effectiveness and ease of use.


Step 2: Re-Optimization for a Second Hardware Target

Following the successful initial optimization, the model was transferred and re-optimized for a second hardware target from a different vendor, which required a different toolchain. This step highlighted the flexibility of Embedl’s SDK in adapting to various hardware environments. The engineers efficiently re-optimized the model for the performance characteristics of the second target, maintaining the speedup and accuracy gains achieved in the first step. This adaptability is crucial for deploying deep learning models in diverse operational scenarios and demonstrates the SDK's robustness.
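
The project's specific toolchains and export formats are not disclosed, but a common way to hand a trained model over to a different vendor toolchain is to export it to an interchange format such as ONNX. The snippet below is a minimal, hypothetical sketch of that step in plain PyTorch; it is not part of the Embedl SDK, and the model and file name are made up.

    # Illustrative only: export a PyTorch model to ONNX as a vendor-neutral
    # starting point for a second toolchain. Model and file name are made up.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 32 * 32, 4),
    ).eval()

    dummy_input = torch.randn(1, 3, 32, 32)  # example input with the expected shape
    torch.onnx.export(
        model,
        dummy_input,
        "interior_sensing_stand_in.onnx",
        input_names=["input"],
        output_names=["output"],
        opset_version=17,
    )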


Technologies and Techniques Used

Several techniques were employed during the optimization process:

  • Hardware-Aware Neural Architecture Search (NAS): This technique allows for the automatic design of neural network architectures optimized for specific hardware constraints.
  • Pruning: Reducing the number of parameters in the model to enhance efficiency. Done correctly, this can be achieved without compromising performance.
  • Mixed Precision Quantization: Using different precision levels for various parts of the model to accelerate computation and reduce memory usage.

These techniques collectively contributed to significant improvements in model performance, demonstrating the power and versatility of Embedl’s Model Optimization SDK.
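
Embedl’s SDK automates these techniques behind a single workflow, but the underlying ideas can be illustrated with plain PyTorch. The sketch below is not the Embedl API: under simplifying assumptions, and with a made-up stand-in model, it shows how magnitude pruning and INT8 quantization of selected layers (a crude proxy for mixed precision) might be applied.

    # Illustrative only: plain-PyTorch magnitude pruning plus dynamic quantization.
    # This is NOT the Embedl Model Optimization SDK; the model below is a stand-in.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical stand-in for a proprietary interior sensing model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 4),
    )

    # Pruning: zero out the 30% smallest-magnitude weights in each prunable layer.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # make the sparsity permanent

    # Quantization: dynamic INT8 quantization of the Linear layers only, so that
    # different parts of the model run at different precisions.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    # Sanity check: the optimized model still produces outputs of the expected shape.
    x = torch.randn(1, 3, 32, 32)
    print(quantized(x).shape)  # torch.Size([1, 4])

In the pilot project itself, such steps were combined with hardware-aware NAS and guided by latency measurements on the actual CPU and DSP targets, which this generic sketch does not capture.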


Benefits Realized by the Tier 1 Supplier

The Tier 1 supplier experienced several benefits from this project:

  • Significant Speedup: The optimized models exhibited reduced on-device latency, enhancing the real-time performance of the system.
  • Flexibility: The ability to optimize the model for different hardware targets provided greater flexibility in deployment and potential cost savings.
  • User-Friendly SDK: The supplier's engineers managed the entire optimization process independently, showcasing the SDK's ease of use.
  • Maintained Accuracy: Despite the extensive optimizations, the models retained their accuracy, ensuring reliable performance in practical applications.

These benefits underscore the value of Embedl’s Model Optimization SDK in improving deep learning models for various hardware targets.

Conclusion and Future Prospects

The success of this pilot project, which we will present at STARTUP AUTOBAHN EXPO2024, highlights the transformative potential of Embedl’s Model Optimization SDK. The Tier 1 supplier's ability to optimize and re-optimize their deep learning models for multiple hardware platforms demonstrates the SDK's flexibility, efficiency, and effectiveness. Looking ahead, we anticipate further advances in model optimization techniques, enabling even greater performance improvements and broader applications across different industries. Embedl remains committed to supporting our partners in achieving their technological goals and driving innovation in deep learning model optimization.