Frequently Asked Questions
Have questions about the Embedl Model Optimization SDK? Here are answers to some of the most common ones.
Product
What is a Python SDK?
A Python SDK, or software development kit, gives developers a toolbox of tools, libraries, and algorithms for building software applications. By offering a consistent and reliable set of building blocks, an SDK lets developers create software more efficiently and effectively.
Which Python versions does Embedl support?
Embedl recommends using a stable version of Python, i.e. one with maintained or bugfix status. The current status of each Python version is listed at https://devguide.python.org/versions/. (Sometimes the latest Python version is not yet compatible with TensorFlow or PyTorch, in which case there may be a delay before Embedl supports it; see the release documentation for the current status of each release.) In general, Embedl supports Python versions that still receive security fixes and drops support for versions that have reached end of life.
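As a quick sanity check, a script can verify at startup that the interpreter falls inside a supported range before importing the SDK. The version bounds below are placeholders invented for this sketch; the authoritative list lives in the release documentation.

```python
import sys

# Hypothetical bounds for illustration only -- consult the Embedl release
# documentation for the versions your SDK release actually supports.
MIN_SUPPORTED = (3, 9)   # assumption: oldest version still receiving security fixes
MAX_SUPPORTED = (3, 12)  # assumption: newest version with TensorFlow/PyTorch support

def check_python_version():
    """Return True if the running interpreter is inside the assumed supported range."""
    return MIN_SUPPORTED <= sys.version_info[:2] <= MAX_SUPPORTED

if not check_python_version():
    print(f"Warning: Python {sys.version_info.major}.{sys.version_info.minor} "
          "may not be supported; see the Embedl release documentation.")
```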
What is the Embedl SDK?
The Embedl SDK is a set of tools and resources that helps developers optimize their deep learning models for deployment on resource-constrained devices or in real-time applications. It includes pre-built algorithms for pruning, quantizing, and compressing models, which can significantly reduce model size and speed up inference. The SDK is built on a modular approach, allowing developers to interchange and tweak components for specific applications and to transfer their domain knowledge into the SDK for better results. It also provides visualization tools that let developers track the changes made to a model during optimization. Overall, the Embedl SDK is a powerful tool for improving the efficiency and accuracy of deep learning models.
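To make the terms concrete, here is a minimal, framework-free sketch of two of the techniques mentioned above: magnitude pruning (zeroing out the smallest weights) and uniform 8-bit quantization (mapping floats to a small set of integer levels). It illustrates the general ideas only; it is not the Embedl SDK's API, whose functions and names are documented separately.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    # Indices sorted by absolute value, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

def quantize_uniform_int8(weights):
    """Map floats to int8 codes with a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid div-by-zero for all-zero input
    codes = [round(w / scale) for w in weights]        # integer codes in [-127, 127]
    dequantized = [c * scale for c in codes]           # the values inference would see
    return codes, scale, dequantized

w = [0.9, -0.05, 0.4, 0.01, -0.7]
print(prune_by_magnitude(w, 0.4))  # → [0.9, 0.0, 0.4, 0.0, -0.7]
```

After pruning, 40% of the weights are exactly zero (enabling sparse storage and compute); after quantization, each weight is an 8-bit integer plus one shared scale, roughly a 4x size reduction versus 32-bit floats.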
Which operating systems are supported?
Embedl recommends Ubuntu Long Term Support (LTS) releases that still receive maintenance and security support (see https://endoflife.date/ubuntu), provided they are used with a supported version of Python (see above). Other Linux distributions can sometimes be supported if necessary (contact sales).
Which hardware targets are supported?
Our goal is to support every available hardware target, and we typically add new targets at customers' request. Currently supported hardware includes Xilinx FPGAs, NVIDIA GPUs, Texas Instruments DSPs, Arm CPUs, NXP NPUs, and Intel CPUs, GPUs, and FPGAs.
Do you support my inference engine?
Yes, we support any inference engine.
Can optimized models be deployed without additional dependencies?
Yes. The end product you get after optimizing your deep learning model has no additional runtime dependencies. The only difference between the pre-optimized model and the model optimized by the Embedl Model Optimization SDK is the contents of the model itself. You can therefore deploy optimized models exactly as you deploy your current models, with no dependence on Embedl at runtime.
Which deployment platforms does Embedl support?
Embedl supports a wide variety of hardware platforms used for deploying deep learning models, including CPU, GPU, FPGA, and MCU platforms.
Do your customers need deep learning competence in-house?
Yes. We typically work with companies that have in-house DL competence, ranging from a couple of data scientists or engineers focusing on DL to teams with hundreds of DL engineers and researchers.
Research
Which fields can benefit from Neural Architecture Search?
Fields such as computer vision, natural language processing, and robotics can benefit from the use of Neural Architecture Search.
What does the future hold for Neural Architecture Search?
The future of Neural Architecture Search looks promising, with continued research and development likely to lead to further advancements in the field.
How does Neural Architecture Search differ from traditional architecture design?
Neural Architecture Search (NAS) is a technique for automatically designing neural network architectures, whereas traditional architecture design relies on manual work by human experts.
What are the major challenges of Neural Architecture Search?
The computational cost of the search process and the limited interpretability of the resulting architectures are two major challenges of Neural Architecture Search.
How can NAS improve the performance of machine learning models?
NAS can improve the performance of machine learning models by designing neural network architectures tailored specifically to the task at hand, resulting in improved accuracy and efficiency.
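As a toy illustration of the idea behind NAS (not of any production system), a search can be framed as sampling candidate architectures from a search space and keeping the one that scores best under some evaluation. The search space and the scoring proxy below are invented for this sketch; real NAS replaces the proxy with trained-model accuracy or a learned performance predictor.

```python
import random

# Hypothetical search space: depth, width, and kernel size of a small conv net.
SEARCH_SPACE = {
    "depth": [2, 4, 6, 8],
    "width": [16, 32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate architecture uniformly from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def proxy_score(arch):
    """Stand-in objective: reward model capacity, penalize compute cost (invented)."""
    capacity = arch["depth"] * arch["width"]
    cost = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return capacity - 0.01 * cost

def random_search(n_trials, seed=0):
    """Simplest NAS strategy: sample n_trials architectures, keep the best scorer."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(n_trials)]
    return max(candidates, key=proxy_score)

best = random_search(100)
print(best)
```

Random search is the baseline that more sophisticated NAS strategies (reinforcement learning, evolutionary search, differentiable relaxations) are measured against; the expensive part in practice is the evaluation, not the sampling.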
Still have questions?
We are more than happy to answer any questions.