Semiconductor IP News and Trends Blog
What Will It Take to Bring DNN to Embedded?
If you missed Michelle Mao’s presentation at the recent Autosens conference in Detroit, “What Will It Take to Bring DNN to Embedded?”, you missed an important evaluation of how designers can do four fundamental things to lower the power budget and bring deep neural networks (DNNs) to embedded systems:
- Optimize the network architecture
- Optimize the problem definition
- Minimize the number of bits per computation
- Use optimized DNN hardware
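Of these four levers, the third — minimizing bits per computation — is the easiest to illustrate in a few lines. As a rough sketch only (this is not Cadence's method; the function names and the symmetric 8-bit scheme are illustrative assumptions), quantizing float32 weights down to int8 cuts memory traffic by 4x while keeping the reconstruction error bounded by half a quantization step:

```python
import numpy as np

def quantize_symmetric(weights, bits=8):
    """Map float weights onto signed integers with a single scale factor."""
    qmax = 2 ** (bits - 1) - 1              # 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax  # one scale per tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)

q, scale = quantize_symmetric(w)
w_hat = dequantize(q, scale)

# Round-to-nearest bounds the per-weight error by scale / 2.
max_err = np.max(np.abs(w - w_hat))
```

Production flows are more elaborate (per-channel scales, quantization-aware training), but the principle is the same: fewer bits per multiply-accumulate means less memory bandwidth and lower power, which is exactly what embedded budgets demand.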
Paul McLellan, Cadence’s Breakfast Bytes blogger, wrote two in-depth posts on her talk. The first, “CactusNet: One Network to Rule Them All,” introduces CactusNet, Cadence’s state-of-the-art CNN benchmark optimized for embedded applications, and explains how it is used to optimize DNNs. There’s too much to cover here, so please read Paul’s post.
His second post, “CactusNet: Moving Neural Nets from the Cloud to Embed Them in Cars,” discusses how to optimize the problem definition, minimize the number of bits per computation, and use optimized DNN hardware. Again, it is well worth reading, with lots of meaty information.