MichaelJKlaiber commented on code in PR #12087:
URL: https://github.com/apache/tvm/pull/12087#discussion_r925598695


##########
python/tvm/relay/backend/contrib/uma/tutorial.md:
##########
@@ -0,0 +1,195 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+Making your hardware accelerator TVM-ready with UMA 
+=============================================
+
+**Disclaimer**: *This is an early preliminary version of this tutorial. Feel free to ask questions or give feedback via the UMA thread in the TVM
+discussion forum [[link](https://discuss.tvm.apache.org/t/rfc-uma-universal-modular-accelerator-interface/12039)].*
+
+
+This tutorial will give you step-by-step guidance on how to use UMA to
+make your hardware accelerator TVM-ready.
+While there is no one-size-fits-all solution for this problem, UMA aims to provide a stable and Python-only
+API to integrate a number of hardware accelerator classes into TVM.
+
+In this tutorial you will get to know the UMA API in three use cases of increasing complexity.
+In these use cases the three mock accelerators
+**Vanilla**, **Strawberry** and **Chocolate** are introduced and
+integrated into TVM using UMA.
+
+
+Vanilla
+===
+**Vanilla** is a simple accelerator consisting of a MAC array, with no internal memory.
+It can ONLY process Conv2D layers; all other layers are executed on a CPU, which also orchestrates **Vanilla**.
+Both the CPU and **Vanilla** use a shared memory.
+
+For this purpose **Vanilla** has a C interface `vanilla_conv2dnchw`, which accepts pointers to the input feature map *ifmap*,
+*weights* and *result* data, as well as the parameters of `Conv2D`: `oc`, `iw`, `ih`, `ic`, `kh`, `kw`.
+```c
+int vanilla_conv2dnchw(float* ifmap, float* weights, float* result, int oc, int iw, int ih, int ic, int kh, int kw);
+```
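To make the expected semantics of this interface concrete, here is a hypothetical naive reference implementation in plain C. It assumes batch size 1, stride 1, no padding (so the output is `(ih-kh+1) × (iw-kw+1)` per output channel), and a return value of 0 on success; the real Vanilla kernel may define these details differently.

```c
#include <stddef.h>

/* Hypothetical naive reference for the Vanilla C interface.
 * Layout: ifmap   [ic][ih][iw]   (NCHW with batch size 1)
 *         weights [oc][ic][kh][kw]
 *         result  [oc][ih-kh+1][iw-kw+1]
 * Assumes stride 1 and no padding; returns 0 on success. */
int vanilla_conv2dnchw(float* ifmap, float* weights, float* result,
                       int oc, int iw, int ih, int ic, int kh, int kw) {
  int oh = ih - kh + 1;  /* output height */
  int ow = iw - kw + 1;  /* output width  */
  for (int o = 0; o < oc; ++o)
    for (int y = 0; y < oh; ++y)
      for (int x = 0; x < ow; ++x) {
        float acc = 0.0f;
        /* accumulate over input channels and the kernel window */
        for (int c = 0; c < ic; ++c)
          for (int ky = 0; ky < kh; ++ky)
            for (int kx = 0; kx < kw; ++kx)
              acc += ifmap[(c * ih + y + ky) * iw + (x + kx)] *
                     weights[((o * ic + c) * kh + ky) * kw + kx];
        result[(o * oh + y) * ow + x] = acc;
      }
  return 0;
}
```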
+
+The script `uma_cli` creates code skeletons with API calls into the UMA API for new accelerators.
+For **Vanilla** we use it like this:
+
+```bash
+cd tvm/python/tvm/relay/backend/contrib/uma
+python uma_cli.py --add-accelerator vanilla_accelerator --tutorial vanilla
+```

Review Comment:
   @kslavka , nice catch! Thanks. Fixed it



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
