weberlo commented on issue #2563: URL: https://github.com/apache/incubator-tvm/issues/2563#issuecomment-670653285
> What do you think of the difference between MicroTVM and MCUNet?

Hi @wang-y-z. I wasn't aware of this work until now. Thanks for the pointer! It's a bit embarrassing to see them compare against the old runtime, which was designed purely for AutoTVM purposes (it only _happened_ to be able to run entire models). Because of that design goal, it makes no use of flash memory, so it runs out of memory very quickly 😅.

I'd say TinyNAS isn't comparable to µTVM, since µTVM doesn't currently do any architecture search. You could imagine using TinyNAS to produce a model, then importing the result and running it with µTVM.

TinyEngine is a more interesting point of comparison, since it uses a codegen-based approach, and that is the approach we want to move towards. For the past few months, we've focused on strengthening support for autotuning and deployment with the C graph runtime. However, as we look at smaller devices, many mechanisms in the graph runtime cause unnecessarily high memory usage (e.g., runtime data structures and JSON graph parsing). With the prototype Relay AoT compiler being merged soon (#6219), we'll have a good starting point for an entirely codegen-based approach. Though the codegen approach seems to give them most of their benefit (Figure 4), the model-adaptive/memory-aware optimizations in TinyEngine look compelling as well, and it would certainly be interesting to see how they could be implemented in TVM.

> By the way, can you tell me what's going on with MicroTVM on RISC-V devices, and whether you plan to support user-defined extensions for RV?

We haven't prioritized RISC-V-specific features, since we're still building up the device-agnostic infrastructure. Is there a use case for user-defined extensions that you have in mind?
