This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
     new dc247816f0 [Doc] Refactor install docs (#17287)
dc247816f0 is described below

commit dc247816f0b6be770a39064286d9723df6782a86
Author: Siyuan Feng <hzfen...@sjtu.edu.cn>
AuthorDate: Wed Aug 21 20:52:51 2024 +0800

    [Doc] Refactor install docs (#17287)
    
    * [Doc] Refactor install docs
    
    The major updates include:
    
    1. remove nnpack installation guide
    2. refactor building guide into step-by-step instructions
    
    * update for ci
---
 docs/install/from_source.rst | 421 +++++++++++++++++--------------------------
 docs/install/index.rst       |   3 +-
 docs/install/nnpack.rst      | 118 ------------
 3 files changed, 163 insertions(+), 379 deletions(-)

diff --git a/docs/install/from_source.rst b/docs/install/from_source.rst
index 4dc14863a8..a963d06ab5 100644
--- a/docs/install/from_source.rst
+++ b/docs/install/from_source.rst
@@ -19,240 +19,239 @@
 
 Install from Source
 ===================
-This page gives instructions on how to build and install the TVM package from
-scratch on various systems. It consists of two steps:
+This page gives instructions on how to build and install the TVM package from 
source.
 
-1. First build the shared library from the C++ codes (`libtvm.so` for linux, 
`libtvm.dylib` for macOS and `libtvm.dll` for windows).
-2. Setup for the language packages (e.g. Python Package).
+.. contents:: Table of Contents
+    :local:
+    :depth: 2
 
-To get started, download tvm source code from the `Download Page 
<https://tvm.apache.org/download>`_.
+.. _install-dependencies:
 
-Developers: Get Source from Github
-----------------------------------
-You can also choose to clone the source repo from github.
-It is important to clone the submodules along, with ``--recursive`` option.
+Step 1. Install Dependencies
+----------------------------
 
-.. code:: bash
+Apache TVM requires the following dependencies:
 
-    git clone --recursive https://github.com/apache/tvm tvm
+- CMake (>= 3.24.0)
+- LLVM (recommended >= 15)
+- Git
+- A recent C++ compiler supporting C++ 17, at the minimum
+    - GCC 7.1
+    - Clang 5.0
+    - Apple Clang 9.3
+    - Visual Studio 2019 (v16.7)
+- Python (>= 3.8)
+- Conda (optional, but strongly recommended)
 
-For windows users who use github tools, you can open the git shell, and type 
the following command.
+The easiest way to manage dependencies is via conda, which maintains a set of toolchains
+including LLVM across platforms. To create the environment with those build dependencies,
+one may simply use:
 
 .. code:: bash
 
-   git submodule init
-   git submodule update
+    # make sure to start with a fresh environment
+    conda env remove -n tvm-build-venv
+    # create the conda environment with build dependencies
+    conda create -n tvm-build-venv -c conda-forge \
+        "llvmdev>=15" \
+        "cmake>=3.24" \
+        git \
+        python=3.11
+    # enter the build environment
+    conda activate tvm-build-venv
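+
+Optionally, you can confirm that the toolchain is visible inside the freshly
+activated environment. This is only a quick sanity check, and the exact versions
+printed depend on what conda resolved:
+
+.. code-block:: bash
+
+    # both should report versions satisfying the constraints listed above
+    llvm-config --version
+    cmake --version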
 
 
-.. _build-shared-library:
+Step 2. Get Source from Github
+------------------------------
+Clone the source repository from Github:
 
-Build the Shared Library
-------------------------
+.. code:: bash
 
-Our goal is to build the shared libraries:
+    git clone --recursive https://github.com/apache/tvm tvm
 
-   - On Linux the target library are `libtvm.so` and `libtvm_runtime.so`
-   - On macOS the target library are `libtvm.dylib` and `libtvm_runtime.dylib`
-   - On Windows the target library are `libtvm.dll` and `libtvm_runtime.dll`
+.. note::
+    It's important to use the ``--recursive`` flag when cloning the TVM 
repository, which will
+    automatically clone the submodules. If you forget to use this flag, you 
can manually clone the submodules
+    by running ``git submodule update --init --recursive`` in the root 
directory of the TVM repository.
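+
+As a quick sanity check after cloning, you can list the submodule status from the
+root of the repository. The exact set of submodules may change over time, so treat
+the output as illustrative only:
+
+.. code-block:: bash
+
+    # run from the root of the cloned repository; each line should show a commit
+    # hash followed by a submodule path such as 3rdparty/dlpack
+    git submodule status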
 
-It is also possible to :ref:`build the runtime <deploy-and-integration>` 
library only.
+Step 3. Configure and Build
+---------------------------
+Create a build directory and run CMake to configure the build. Start by creating the
+build directory and seeding it with the default configuration file:
 
-The minimal building requirements for the ``TVM`` libraries are:
+.. code:: bash
 
-   - A recent C++ compiler supporting C++ 17, at the minimum
-      - GCC 7.1
-      - Clang 5.0
-      - Apple Clang 9.3
-      - Visual Studio 2019 (v16.7)
-   - CMake 3.18 or higher
-   - We highly recommend to build with LLVM to enable all the features.
-   - If you want to use CUDA, CUDA toolkit version >= 8.0 is required. If you 
are upgrading from an older version, make sure you purge the older version and 
reboot after installation.
-   - On macOS, you may want to install `Homebrew <https://brew.sh>`_ to easily 
install and manage dependencies.
-   - Python is also required. Avoid using Python 3.9.X+ which is not 
`supported <https://github.com/apache/tvm/issues/8577>`_. 3.7.X+ and 3.8.X+ 
should be well supported however.
+    cd tvm
+    rm -rf build && mkdir build && cd build
+    # Specify the build configuration via CMake options
+    cp ../cmake/config.cmake .
 
-To install the these minimal pre-requisites on Ubuntu/Debian like
-linux operating systems, execute (in a terminal):
+We want to specifically tweak the following flags by appending them to the end 
of the configuration file:
 
 .. code:: bash
 
-    sudo apt-get update
-    sudo apt-get install -y python3 python3-dev python3-setuptools gcc 
libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev
-
-
-Note that the version of CMake on apt may not be sufficiently up to date; it 
may be necessary to install it directly from `Kitware's third-party APT 
repository <https://apt.kitware.com/>`_.
+    # controls default compilation flags (Candidates: Release, Debug, 
RelWithDebInfo)
+    echo "set(CMAKE_BUILD_TYPE RelWithDebInfo)" >> config.cmake
 
+    # LLVM is a required dependency for the compiler side of TVM
+    echo "set(USE_LLVM \"llvm-config --ignore-libllvm --link-static\")" >> 
config.cmake
+    echo "set(HIDE_PRIVATE_SYMBOLS ON)" >> config.cmake
 
-On Fedora/CentOS and related operating systems use:
+    # GPU SDKs, turn on if needed
+    echo "set(USE_CUDA   OFF)" >> config.cmake
+    echo "set(USE_METAL  OFF)" >> config.cmake
+    echo "set(USE_VULKAN OFF)" >> config.cmake
+    echo "set(USE_OPENCL OFF)" >> config.cmake
 
-.. code:: bash
+    # cuBLAS, cuDNN, cutlass support, turn on if needed
+    echo "set(USE_CUBLAS OFF)" >> config.cmake
+    echo "set(USE_CUDNN  OFF)" >> config.cmake
+    echo "set(USE_CUTLASS OFF)" >> config.cmake
 
-    sudo dnf update
-    sudo dnf groupinstall -y "Development Tools"
-    sudo dnf install -y python-devel ncurses-compat-libs zlib-devel cmake 
libedit-devel libxml2-devel
 
-Use Homebrew to install the required dependencies for macOS running either the 
Intel or M1 processors. You must follow the post-installation steps specified by
-Homebrew to ensure the dependencies are correctly installed and configured:
+.. note::
+    ``HIDE_PRIVATE_SYMBOLS`` is a configuration option that enables the 
``-fvisibility=hidden`` flag.
+    This flag helps prevent potential symbol conflicts between TVM and 
PyTorch. These conflicts arise due to
+    the frameworks shipping LLVMs of different versions.
 
-.. code:: bash
+    `CMAKE_BUILD_TYPE <https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html>`_
+    controls the default compilation flags:
 
-    brew install gcc git cmake
-    brew install llvm
-    brew install python@3.8
+    - ``Debug`` sets ``-O0 -g``
+    - ``RelWithDebInfo`` sets ``-O2 -g -DNDEBUG`` (recommended)
+    - ``Release`` sets ``-O3 -DNDEBUG``
 
-If you are on macOS with an M1 Processor you may need to use conda to manage 
dependencies while building. Specifically you may need, `Miniforge 
<https://github.com/conda-forge/miniforge>`_ to ensure that the dependencies 
obtained using pip are compatible with M1.
+Once ``config.cmake`` is edited accordingly, kick off the build with the commands below:
 
-.. code:: bash
+.. code-block:: bash
 
-    brew install miniforge
-    conda init
-    conda create --name tvm python=3.8
-    conda activate tvm
+    cmake .. && cmake --build . --parallel $(nproc)
 
-We use cmake to build the library.
-The configuration of TVM can be modified by editing `config.cmake` and/or by 
passing cmake flags to the command line:
+.. note::
+    ``nproc`` may not be available on all systems; if it is missing, replace it with
+    the number of cores on your system (see the example below).
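+
+For instance, a portable sketch (assuming a macOS or BSD shell where ``nproc`` is
+not provided) could fall back to ``sysctl``:
+
+.. code-block:: bash
+
+    # use nproc when present, otherwise query the core count via sysctl (macOS/BSD)
+    cmake .. && cmake --build . --parallel $(nproc 2>/dev/null || sysctl -n hw.ncpu)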
 
+A successful build should produce ``libtvm`` and ``libtvm_runtime`` under the
+``build/`` directory.
 
-- First, check the cmake in your system. If you do not have cmake,
-  you can obtain the latest version from `official website 
<https://cmake.org/download/>`_
-- First create a build directory, copy the ``cmake/config.cmake`` to the 
directory.
+After leaving the build environment ``tvm-build-venv``, there are two ways to
+install the successful build into your environment:
 
-  .. code:: bash
+- Install via environment variable
 
-      mkdir build
-      cp cmake/config.cmake build
+.. code-block:: bash
 
-- Edit ``build/config.cmake`` to customize the compilation options
+    export TVM_HOME=/path-to-tvm
+    export PYTHONPATH=$TVM_HOME/python:$PYTHONPATH
 
-  - On macOS, for some versions of Xcode, you need to add ``-lc++abi`` in the 
LDFLAGS or you'll get link errors.
-  - Change ``set(USE_CUDA OFF)`` to ``set(USE_CUDA ON)`` to enable CUDA 
backend. Do the same for other backends and libraries
-    you want to build for (OpenCL, RCOM, METAL, VULKAN, ...).
-  - To help with debugging, ensure the embedded graph executor and debugging 
functions are enabled with ``set(USE_GRAPH_EXECUTOR ON)`` and 
``set(USE_PROFILER ON)``
-  - To debug with IRs, ``set(USE_RELAY_DEBUG ON)`` and set environment 
variable `TVM_LOG_DEBUG`.
+- Install via pip as a local project
 
-      .. code:: bash
+.. code-block:: bash
 
-          export TVM_LOG_DEBUG="ir/transform.cc=1,relay/ir/transform.cc=1"
+    conda activate your-own-env
+    conda install python # make sure python is installed
+    cd /path-to-tvm/python
+    pip install -e .
 
-- TVM requires LLVM for CPU codegen. We highly recommend you to build with the 
LLVM support on.
+Step 4. Validate Installation
+-----------------------------
 
-  - LLVM 4.0 or higher is needed for build with LLVM. Note that version of 
LLVM from default apt may lower than 4.0.
-  - Since LLVM takes long time to build from source, you can download 
pre-built version of LLVM from
-    `LLVM Download Page <http://releases.llvm.org/download.html>`_.
+Using a compiler infrastructure with multiple language bindings can be error-prone,
+so it is highly recommended to validate the Apache TVM installation before use.
 
-    - Unzip to a certain location, modify ``build/config.cmake`` to add 
``set(USE_LLVM /path/to/your/llvm/bin/llvm-config)``
-    - You can also directly set ``set(USE_LLVM ON)`` and let cmake search for 
a usable version of LLVM.
+**Step 1. Locate the TVM Python package.** The following command can help confirm that
+TVM is properly installed as a Python package and report its location:
 
-  - You can also use `LLVM Nightly Ubuntu Build <https://apt.llvm.org/>`_
+.. code-block:: bash
 
-    - Note that apt-package append ``llvm-config`` with version number.
-      For example, set ``set(USE_LLVM llvm-config-10)`` if you installed LLVM 
10 package
+    >>> python -c "import tvm; print(tvm.__file__)"
+    /some-path/lib/python3.11/site-packages/tvm/__init__.py
 
-  - If you are a PyTorch user, it is recommended to set ``(USE_LLVM 
"/path/to/llvm-config --link-static")`` and ``set(HIDE_PRIVATE_SYMBOLS ON)``
-    to avoid potential symbol conflicts between different versions LLVM used 
by TVM and PyTorch.
+**Step 2. Confirm which TVM library is used.** When maintaining multiple builds or
+installations of TVM, it becomes important to double-check that the Python package is
+using the proper ``libtvm`` with the following command:
 
-  - On supported platforms, the `Ccache compiler wrapper 
<https://ccache.dev/>`_ may be helpful for
-    reducing TVM's build time.  There are several ways to enable CCache in TVM 
builds:
+.. code-block:: bash
 
-    - Leave `USE_CCACHE=AUTO` in `build/config.cmake`. CCache will be used if 
it is found.
+    >>> python -c "import tvm; print(tvm._ffi.base._LIB)"
+    <CDLL '/some-path/lib/python3.11/site-packages/tvm/libtvm.dylib', handle 
95ada510 at 0x1030e4e50>
 
-    - Ccache's Masquerade mode. This is typically enabled during the Ccache 
installation process.
-      To have TVM use Ccache in masquerade, simply specify the appropriate 
C/C++ compiler
-      paths when configuring TVM's build system.  For example:
-      ``cmake -DCMAKE_CXX_COMPILER=/usr/lib/ccache/c++ ...``.
+**Step 3. Reflect TVM build options.** Sometimes when a downstream application fails,
+the cause may be a wrong TVM commit or wrong build flags. The following command will
+help find it out:
 
-    - Ccache as CMake's C++ compiler prefix.  When configuring TVM's build 
system,
-      set the CMake variable ``CMAKE_CXX_COMPILER_LAUNCHER`` to an appropriate 
value.
-      E.g. ``cmake -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ...``.
+.. code-block:: bash
 
-- We can then build tvm and related libraries.
+    >>> python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in 
tvm.support.libinfo().items()))"
+    ... # Omitted less relevant options
+    GIT_COMMIT_HASH: 4f6289590252a1cf45a4dc37bce55a25043b8338
+    HIDE_PRIVATE_SYMBOLS: ON
+    USE_LLVM: llvm-config --link-static
+    LLVM_VERSION: 15.0.7
+    USE_VULKAN: OFF
+    USE_CUDA: OFF
+    CUDA_VERSION: NOT-FOUND
+    USE_OPENCL: OFF
+    USE_METAL: ON
+    USE_ROCM: OFF
 
-  .. code:: bash
 
-      cd build
-      cmake ..
-      make -j4
+**Step 4. Check device detection.** Sometimes it can be helpful to check whether TVM
+can detect your device at all, using the following commands:
 
-  - You can also use Ninja build system instead of Unix Makefiles. It can be 
faster to build than using Makefiles.
+.. code-block:: bash
 
-  .. code:: bash
+    >>> python -c "import tvm; print(tvm.metal().exist)"
+    True # or False
+    >>> python -c "import tvm; print(tvm.cuda().exist)"
+    False # or True
+    >>> python -c "import tvm; print(tvm.vulkan().exist)"
+    False # or True
 
-      cd build
-      cmake .. -G Ninja
-      ninja
+Please note that the commands above verify the presence of an actual device on the
+local machine for the TVM runtime (not the compiler) to execute properly. However,
+the TVM compiler can perform compilation tasks without requiring a physical device.
+As long as the necessary toolchain, such as NVCC, is available, TVM supports
+cross-compilation even in the absence of an actual device.
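+
+As a quick illustration (assuming CUDA is the toolchain you care about and that the
+toolkit's ``bin`` directory is on ``PATH``), you can verify that the compiler needed
+for cross-compilation is present even when no GPU is attached:
+
+.. code-block:: bash
+
+    # prints the CUDA compiler version if the toolkit is installed, regardless of
+    # whether a physical GPU is present on this machine
+    nvcc --version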
 
-  - There is also a makefile in the top-level tvm directory that can
-    automate several of these steps.  It will create the build
-    directory, copy the default ``config.cmake`` to the build
-    directory, run cmake, then run make.
 
-    The build directory can be specified using the environment
-    variable ``TVM_BUILD_PATH``.  If ``TVM_BUILD_PATH`` is unset, the
-    makefile assumes that the ``build`` directory inside tvm should be
-    used.  Paths specified by ``TVM_BUILD_PATH`` can be either
-    absolute paths or paths relative to the base tvm directory.
-    ``TVM_BUILD_PATH`` can also be set to a list of space-separated
-    paths, in which case all paths listed will be built.
+Step 5. Extra Python Dependencies
+---------------------------------
+Building from source does not ensure the installation of all necessary Python 
dependencies.
+The following commands can be used to install the extra Python dependencies:
 
-    If an alternate build directory is used, then the environment
-    variable ``TVM_LIBRARY_PATH`` should be set at runtime, pointing
-    to the location of the compiled ``libtvm.so`` and
-    ``libtvm_runtime.so``.  If not set, tvm will look relative to the
-    location of the tvm python module.  Unlike ``TVM_BUILD_PATH``,
-    this must be an absolute path.
+* Necessary dependencies:
 
-  .. code:: bash
-
-     # Build in the "build" directory
-     make
+.. code:: bash
 
-     # Alternate location, "build_debug"
-     TVM_BUILD_PATH=build_debug make
+    pip3 install numpy decorator attrs
 
-     # Build both "build_release" and "build_debug"
-     TVM_BUILD_PATH="build_debug build_release" make
+* If you want to use the RPC Tracker:
 
-     # Use debug build
-     TVM_LIBRARY_PATH=~/tvm/build_debug python3
+.. code:: bash
 
-If everything goes well, we can go to :ref:`python-package-installation`
+    pip3 install tornado
 
-.. _build-with-conda:
+* If you want to use the auto-tuning module:
 
-Building with a Conda Environment
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. code:: bash
 
-Conda is a very handy way to the necessary obtain dependencies needed for 
running TVM.
-First, follow the `conda's installation guide 
<https://docs.conda.io/projects/conda/en/latest/user-guide/install/>`_
-to install miniconda or anaconda if you do not yet have conda in your system. 
Run the following command in a conda environment:
+    pip3 install tornado psutil 'xgboost>=1.1.0' cloudpickle
 
-.. code:: bash
 
-    # Create a conda environment with the dependencies specified by the yaml
-    conda env create --file conda/build-environment.yaml
-    # Activate the created environment
-    conda activate tvm-build
+Advanced Build Configuration
+----------------------------
 
-The above command will install all necessary build dependencies such as cmake 
and LLVM. You can then run the standard build process in the last section.
+Ccache
+~~~~~~
+On supported platforms, the `Ccache compiler wrapper <https://ccache.dev/>`_ 
may be helpful for
+reducing TVM's build time, especially when building with `cutlass 
<https://github.com/NVIDIA/cutlass>`_
+or `flashinfer <https://github.com/flashinfer-ai/flashinfer>`_.
+There are several ways to enable Ccache in TVM builds (see the sketch after this list):
 
-If you want to use the compiled binary outside the conda environment,
-you can set LLVM to static linking mode ``set(USE_LLVM "llvm-config 
--link-static")``.
-In this way, the resulting library won't depend on the dynamic LLVM libraries 
in the conda environment.
+    - Leave ``USE_CCACHE=AUTO`` in ``build/config.cmake``. CCache will be used 
if it is found.
 
-The above instructions show how to use conda to provide the necessary build 
dependencies to build libtvm.
-If you are already using conda as your package manager and wish to directly 
build and install tvm as a conda package, you can follow the instructions below:
+    - Ccache's Masquerade mode. This is typically enabled during the Ccache 
installation process.
+      To have TVM use Ccache in masquerade, simply specify the appropriate 
C/C++ compiler
+      paths when configuring TVM's build system.  For example:
+      ``cmake -DCMAKE_CXX_COMPILER=/usr/lib/ccache/c++ ...``.
 
-.. code:: bash
+    - Ccache as CMake's C++ compiler prefix.  When configuring TVM's build 
system,
+      set the CMake variable ``CMAKE_CXX_COMPILER_LAUNCHER`` to an appropriate 
value.
+      E.g. ``cmake -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ...``.
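+
+As a minimal sketch of the first and third options above (assuming the same
+``build/`` directory layout as in Step 3; the values shown are illustrative):
+
+.. code-block:: bash
+
+    # Option 1: let the build pick up Ccache automatically when it is installed
+    echo "set(USE_CCACHE AUTO)" >> config.cmake
+
+    # Option 3: pass Ccache as the C++ compiler launcher at configure time
+    cmake .. -DCMAKE_CXX_COMPILER_LAUNCHER=ccache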
 
-   conda build --output-folder=conda/pkg  conda/recipe
-   # Run conda/build_cuda.sh to build with cuda enabled
-   conda install tvm -c ./conda/pkg
 
 Building on Windows
 ~~~~~~~~~~~~~~~~~~~
 TVM supports building via MSVC using CMake. You will need to obtain a Visual
 Studio compiler. The minimum required VS version is **Visual Studio Enterprise
 2019** (NOTE: we test against GitHub Actions' `Windows 2019 Runner
 <https://github.com/actions/virtual-environments/blob/main/images/win/Windows2019-Readme.md>`_,
 so see that page for full details).
-We recommend following :ref:`build-with-conda` to obtain necessary 
dependencies and
+We recommend following :ref:`install-dependencies` to obtain necessary 
dependencies and
 get an activated tvm-build environment. Then you can run the following command
 to build:
 
 .. code:: bash
@@ -279,117 +278,21 @@ Currently, ROCm is supported only on linux, so all the 
instructions are written
- You need to first install the HIP runtime from ROCm. Make sure ROCm is installed
  on the system.
- Install the latest stable version of LLVM (v6.0.1) and LLD, and make sure
  ``ld.lld`` is available on the command line (see the check below).
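
A quick check (assuming LLD was installed through your package manager and is on
``PATH``) is simply:

.. code:: bash

   # should print the LLD version; if the command is missing, revisit the LLD install
   ld.lld --version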
 
-.. _python-package-installation:
-
-Python Package Installation
----------------------------
-
-TVM package
-~~~~~~~~~~~
-
-Depending on your development environment, you may want to use a virtual 
environment and package manager, such
-as ``virtualenv`` or ``conda``, to manage your python packages and 
dependencies.
-
-The python package is located at `tvm/python`
-There are two ways to install the package:
-
-Method 1
-   This method is **recommended for developers** who may change the codes.
-
-   Set the environment variable `PYTHONPATH` to tell python where to find
-   the library. For example, assume we cloned `tvm` on the directory
-   `/path/to/tvm` then we can add the following line in `~/.bashrc`.
-   The changes will be immediately reflected once you pull the code and 
rebuild the project (no need to call ``setup`` again)
-
-   .. code:: bash
-
-       export TVM_HOME=/path/to/tvm
-       export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
-
-
-Method 2
-   Install TVM python bindings by `setup.py`:
-
-   .. code:: bash
-
-       # install tvm package for the current user
-       # NOTE: if you installed python via homebrew, --user is not needed 
during installaiton
-       #       it will be automatically installed to your user directory.
-       #       providing --user flag may trigger error during installation in 
such case.
-       export MACOSX_DEPLOYMENT_TARGET=10.9  # This is required for mac to 
avoid symbol conflicts with libstdc++
-       cd python; python setup.py install --user; cd ..
-
-Python dependencies
-~~~~~~~~~~~~~~~~~~~
-
-Note that the ``--user`` flag is not necessary if you're installing to a 
managed local environment,
-like ``virtualenv``.
-
-   * Necessary dependencies:
-
-   .. code:: bash
-
-       pip3 install --user numpy decorator attrs
-
-   * If you want to use ``tvmc``: the TVM command line driver.
-
-   .. code:: bash
-
-       pip3 install --user typing-extensions psutil scipy
-
-   * If you want to use RPC Tracker
-
-   .. code:: bash
-
-       pip3 install --user tornado
-
-   * If you want to use auto-tuning module
-
-   .. code:: bash
-
-       pip3 install --user tornado psutil 'xgboost>=1.1.0' cloudpickle
-
-Note on M1 macs, you may have trouble installing xgboost / scipy. scipy and 
xgboost requires some additional dependencies to be installed,
-including openblas and its dependencies. Use the following commands to install 
scipy and xgboost with the required dependencies and
-configuration. A workaround for this is to do the following commands:
-
-    .. code:: bash
-
-        brew install openblas gfortran
-
-        pip install pybind11 cython pythran
-
-        export OPENBLAS=/opt/homebrew/opt/openblas/lib/
-
-        pip install scipy --no-use-pep517
-
-        pip install 'xgboost>=1.1.0'
-
-Install Contrib Libraries
--------------------------
-
-.. toctree::
-   :maxdepth: 1
-
-   nnpack
-
-
 .. _install-from-source-cpp-tests:
 
 Enable C++ Tests
-----------------
+~~~~~~~~~~~~~~~~
 We use `Google Test <https://github.com/google/googletest>`_ to drive the C++
 tests in TVM. The easiest way to install GTest is from source.
 
-   .. code:: bash
-
-       git clone https://github.com/google/googletest
-       cd googletest
-       mkdir build
-       cd build
-       cmake -DBUILD_SHARED_LIBS=ON ..
-       make
-       sudo make install
+.. code:: bash
 
+    git clone https://github.com/google/googletest
+    cd googletest
+    mkdir build
+    cd build
+    cmake -DBUILD_SHARED_LIBS=ON ..
+    make
+    sudo make install
 
 After installing GTest, the C++ tests can be built and started with 
``./tests/scripts/task_cpp_unittest.sh`` or just built with ``make cpptest``.
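
For convenience, the two entry points mentioned above are shown below. The working
directory is an assumption (the document does not state it); running them from the
TVM source tree after a successful build is a reasonable starting point:

.. code:: bash

    # build and run the C++ unit tests
    ./tests/scripts/task_cpp_unittest.sh
    # or only build them
    make cpptest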
diff --git a/docs/install/index.rst b/docs/install/index.rst
index ab2e06d0de..6bc2da97e1 100644
--- a/docs/install/index.rst
+++ b/docs/install/index.rst
@@ -21,11 +21,10 @@ Installing TVM
 ==============
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    from_source
    docker
-   nnpack
 
 Visit the :ref:`install TVM from source <install-from-source>` page to install 
TVM from the source code. Installing
 from source gives you the maximum flexibility to configure the build 
effectively from the official source releases.
diff --git a/docs/install/nnpack.rst b/docs/install/nnpack.rst
deleted file mode 100644
index c5516235a3..0000000000
--- a/docs/install/nnpack.rst
+++ /dev/null
@@ -1,118 +0,0 @@
-..  Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-..    http://www.apache.org/licenses/LICENSE-2.0
-
-..  Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
-
-
-NNPACK Contrib Installation
-===========================
-
-`NNPACK <https://github.com/Maratyszcza/NNPACK>`_ is an acceleration package
-for neural network computations, which can run on x86-64, ARMv7, or ARM64 
architecture CPUs.
-Using NNPACK, higher-level libraries like _MXNet_ can speed up
-the execution on multi-core CPU computers, including laptops and mobile 
devices.
-
-.. note::
-
-   AS TVM already has natively tuned schedules, NNPACK is here mainly for 
reference and comparison purpose.
-   For regular use prefer native tuned TVM implementation.
-
-TVM supports NNPACK for forward propagation (inference only) in convolution, 
max-pooling, and fully-connected layers.
-In this document, we give a high level overview of how to use NNPACK with TVM.
-
-Conditions
-----------
-
-The underlying implementation of NNPACK utilizes several acceleration methods,
-including fft and winograd.
-These algorithms work better on some special `batch size`, `kernel size`, and 
`stride` settings than on other,
-so depending on the context, not all convolution, max-pooling, or 
fully-connected layers can be powered by NNPACK.
-When favorable conditions for running NNPACKS are not met,
-
-NNPACK only supports Linux and OS X systems. Windows is not supported at 
present.
-
-Build/Install NNPACK
---------------------
-
-If the trained model meets some conditions of using NNPACK,
-you can build TVM with NNPACK support.
-Follow these simple steps:
-
-build NNPACK shared library with the following commands. TVM will link NNPACK 
dynamically.
-
-Note: The following NNPACK installation instructions have been tested on 
Ubuntu 16.04.
-
-Build Ninja
-~~~~~~~~~~~
-
-NNPACK need a recent version of Ninja. So we need to install ninja from source.
-
-.. code:: bash
-
-   git clone git://github.com/ninja-build/ninja.git
-   cd ninja
-   ./configure.py --bootstrap
-
-
-Set the environment variable PATH to tell bash where to find the ninja 
executable. For example, assume we cloned ninja on the home directory ~. then 
we can added the following line in ~/.bashrc.
-
-
-.. code:: bash
-
-   export PATH="${PATH}:~/ninja"
-
-
-Build NNPACK
-~~~~~~~~~~~~
-
-The new CMAKE version of NNPACK download `Peach 
<https://github.com/Maratyszcza/PeachPy>`_ and other dependencies alone
-
-Note: at least on OS X, running `ninja install` below will overwrite 
googletest libraries installed in `/usr/local/lib`. If you build googletest 
again to replace the nnpack copy, be sure to pass `-DBUILD_SHARED_LIBS=ON` to 
`cmake`.
-
-.. code:: bash
-
-   git clone --recursive https://github.com/Maratyszcza/NNPACK.git
-   cd NNPACK
-   # Add PIC option in CFLAG and CXXFLAG to build NNPACK shared library
-   sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
-   sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
-   mkdir build
-   cd build
-   # Generate ninja build rule and add shared library in configuration
-   cmake -G Ninja -D BUILD_SHARED_LIBS=ON ..
-   ninja
-   sudo ninja install
-
-   # Add NNPACK lib folder in your ldconfig
-   echo "/usr/local/lib" > /etc/ld.so.conf.d/nnpack.conf
-   sudo ldconfig
-
-
-Build TVM with NNPACK support
------------------------------
-
-.. code:: bash
-
-   git clone --recursive https://github.com/apache/tvm tvm
-
-- Set `set(USE_NNPACK ON)` in config.cmake.
-- Set `NNPACK_PATH` to the $(YOUR_NNPACK_INSTALL_PATH)
-
-after configuration use `make` to build TVM
-
-
-.. code:: bash
-
-   make
