“What hardware should we use?”

Written by Hans Salomonsson | Mar 7, 2023 9:28:34 AM

In our guide, Overcome 4 main challenges when deploying deep learning in embedded systems, we list the most common challenges you face when leading a deep learning (DL) project. Here is one of the challenges you can find in the guide:

Problem

Every cent in your BOM matters when selecting the best hardware for your system. Deep learning computations often require expensive hardware, and selecting the best-fitting hardware is tricky because you have to balance product margins and quality. You need to consider not only theoretical numbers such as memory and compute capacity, but also the measured real-time performance of your model – and the actual utilization might be low. Sometimes you get to evaluate 3-5 hardware candidates; more than that takes too much time. And you can’t just choose the one that looks best on paper, or the one you’ve worked with before out of habit. You need to see each candidate in action and get convincing proof that it works, and that’s a very time-consuming process. Your team is drowning in manual work, and you wonder: isn’t there a better way to do this?
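To make the utilization point concrete, here is a minimal sketch of the kind of back-of-the-envelope check involved. All numbers (model compute cost, peak throughput, measured latency) are made-up, illustrative values, not results from a real benchmark:

```python
# Rough utilization estimate: how much of the datasheet's peak compute
# a model actually uses, given its measured latency on the device.
# Illustrative numbers only; units are treated as comparable ops/s.

model_gflops_per_inference = 8.0   # compute for one forward pass (GFLOPs)
hardware_peak_tops = 4.0           # peak throughput from the datasheet (TOPS)
measured_latency_ms = 25.0         # latency measured on the actual device

# Effective throughput the model achieves on this hardware.
achieved_gflops = model_gflops_per_inference / (measured_latency_ms / 1000.0)
achieved_tops = achieved_gflops / 1000.0

utilization = achieved_tops / hardware_peak_tops
print(f"Achieved: {achieved_tops:.2f} TOPS, utilization: {utilization:.1%}")
# -> Achieved: 0.32 TOPS, utilization: 8.0%
```

In this example the chip that looks generous on paper is only 8% utilized in practice, which is exactly why the datasheet numbers alone cannot settle the hardware choice.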

Solution

Luckily, there’s a better way! With an automated hardware evaluation, you can measure your model’s real performance on all candidates simultaneously, with minimal (or no) manual work. That way, you remove the guesswork and only have to decide once you really know your options.
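As a rough illustration of what "automated" means here, the sketch below loops one model over a list of candidate devices and collects latency statistics. The run_on_device callable is hypothetical: it stands in for whatever mechanism you use to deploy the model to a board and time one inference (vendor benchmark tools, an SSH harness, a device farm), and the board names are placeholders.

```python
import statistics
from typing import Callable, Dict, List

def evaluate_candidates(
    model_path: str,
    candidates: List[str],
    run_on_device: Callable[[str, str], float],  # (model_path, device) -> latency in ms
    runs: int = 100,
) -> Dict[str, Dict[str, float]]:
    """Benchmark one model on every hardware candidate and collect the results."""
    results = {}
    for device in candidates:
        latencies = sorted(run_on_device(model_path, device) for _ in range(runs))
        results[device] = {
            "mean_ms": statistics.mean(latencies),
            "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
        }
    return results

# Example usage with made-up board names and a user-supplied benchmark harness:
# report = evaluate_candidates(
#     "model.tflite",
#     ["board_a", "board_b", "board_c"],
#     run_on_device=my_benchmark_harness,
# )
```

Once the loop runs unattended, adding a fourth or fifth candidate is just another entry in the list rather than another week of manual benchmarking.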