[GitHub] [incubator-tvm] FrozenGene commented on issue #4803: [Frontend] Asymmetric padding of convolution support

2020-02-02 Thread GitBox
FrozenGene commented on issue #4803: [Frontend] Asymmetric padding of 
convolution support
URL: https://github.com/apache/incubator-tvm/pull/4803#issuecomment-581273697
 
 
   Currently our QNN does not seem to accept asymmetric padding passed directly 
to it. Changing this to WIP to indicate I am working on it; I will notify 
reviewers when it is complete. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yidawang commented on a change in pull request #4747: [ThreadPool] Solve ARM BIG.LITTLE heterogeneous multicores

2020-02-02 Thread GitBox
yidawang commented on a change in pull request #4747: [ThreadPool] Solve ARM 
BIG.LITTLE heterogeneous multicores
URL: https://github.com/apache/incubator-tvm/pull/4747#discussion_r373950272
 
 

 ##
 File path: src/runtime/threading_backend.cc
 ##
 @@ -133,34 +133,42 @@ class ThreadGroup::Impl {
 #endif
     }
     if (exclude_worker0) {  // master thread run task
-#if defined(__ANDROID__)
-      SetFullCpuAffinity();
-#else
-      // if we set TVM_BIND_MASTER_THREAD to be 1, we will bind master thread
-      // to core 0.
-      const char* bind_master_thread = getenv("TVM_BIND_MASTER_THREAD");
-      if (bind_master_thread && atoi(bind_master_thread) == 1) {
-        cpu_set_t cpuset;
-        CPU_ZERO(&cpuset);
-        if (reverse) {
-          CPU_SET(sorted_order_[sorted_order_.size() - 1], &cpuset);
-        } else {
-          CPU_SET(sorted_order_[0], &cpuset);
-        }
-        pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
-      }
-      pthread_atfork(nullptr, nullptr, ThreadGroup::Impl::SetFullCpuAffinity);
-#endif
+      // Master thread will have free migration on needed cores.
 
 Review comment:
   I would suggest adding one sentence to the comment: typically, the OS will 
schedule the master thread on core 0, which is otherwise idle while the worker 
threads are running.
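The master-thread affinity behavior under discussion can be sketched in Python (a Linux-only analogue of the C++ `pthread_setaffinity_np` call, using `os.sched_setaffinity`; `set_full_cpu_affinity` and `core_ids` are hypothetical names for illustration, not TVM's API):

```python
import os

def set_full_cpu_affinity(core_ids):
    """Pin the calling thread to the given cores (Linux-only sketch;
    core_ids plays the role of sorted_order_ in the diff above)."""
    os.sched_setaffinity(0, set(core_ids))  # pid 0 means the calling process

original = sorted(os.sched_getaffinity(0))
set_full_cpu_affinity(original[:1])   # bind to the first allowed core only
bound = os.sched_getaffinity(0)
set_full_cpu_affinity(original)       # restore free migration on all cores
```

Leaving the master thread with full affinity, as the new code does, lets the OS migrate it to whichever core is idle instead of pinning it to core 0.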




[GitHub] [incubator-tvm] yidawang commented on a change in pull request #4747: [ThreadPool] Solve ARM BIG.LITTLE heterogeneous multicores

2020-02-02 Thread GitBox
yidawang commented on a change in pull request #4747: [ThreadPool] Solve ARM 
BIG.LITTLE heterogeneous multicores
URL: https://github.com/apache/incubator-tvm/pull/4747#discussion_r373950412
 
 

 ##
 File path: src/runtime/threading_backend.cc
 ##
 @@ -133,34 +133,42 @@ class ThreadGroup::Impl {
 #endif
     }
     if (exclude_worker0) {  // master thread run task
-#if defined(__ANDROID__)
-      SetFullCpuAffinity();
-#else
-      // if we set TVM_BIND_MASTER_THREAD to be 1, we will bind master thread
-      // to core 0.
-      const char* bind_master_thread = getenv("TVM_BIND_MASTER_THREAD");
-      if (bind_master_thread && atoi(bind_master_thread) == 1) {
-        cpu_set_t cpuset;
-        CPU_ZERO(&cpuset);
-        if (reverse) {
-          CPU_SET(sorted_order_[sorted_order_.size() - 1], &cpuset);
-        } else {
-          CPU_SET(sorted_order_[0], &cpuset);
-        }
-        pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
-      }
-      pthread_atfork(nullptr, nullptr, ThreadGroup::Impl::SetFullCpuAffinity);
-#endif
+      // Master thread will have free migration on needed cores.
+      // See the comment inside SetFullCpuAffinity function to get more detail.
+      SetFullCpuAffinity(reverse);
     }
 #endif
   }
 
-  static void SetFullCpuAffinity() {
+  void SetFullCpuAffinity(bool reverse) {
 
 Review comment:
   This function sets the master thread to full CPU affinity; the function 
name should indicate that.




[GitHub] [incubator-tvm] Coderx7 edited a comment on issue #4802: Building tvm using llvm on windows 10 fails (ninja: build stopped: subcommand failed.)

2020-02-02 Thread GitBox
Coderx7 edited a comment on issue #4802: Building tvm using llvm on windows 10 
fails (ninja: build stopped: subcommand failed.)
URL: https://github.com/apache/incubator-tvm/issues/4802#issuecomment-581227116
 
 
   @tqchen Thanks a lot. I removed GCC from PATH and it detected clang/LLVM.
   However, it fails again with LLVM built from source as well.
   
   > CLANG_~1: error: unsupported option '-fPIC' for target 
'x86_64-pc-windows-msvc'
   
   Here is the log:
   ```
   PS D:\Codes\tvm_testbed\tvm_llvm\build> cmake .. -G Ninja
   -- The C compiler identification is Clang 9.0.1 with GNU-like command-line
   -- The CXX compiler identification is Clang 9.0.1 with GNU-like command-line
   -- Check for working C compiler: C:/Program Files/LLVM/bin/clang.exe
   -- Check for working C compiler: C:/Program Files/LLVM/bin/clang.exe -- works
   -- Detecting C compiler ABI info
   -- Detecting C compiler ABI info - done
   -- Detecting C compile features
   -- Detecting C compile features - done
   -- Check for working CXX compiler: C:/Program Files/LLVM/bin/clang++.exe
   -- Check for working CXX compiler: C:/Program Files/LLVM/bin/clang++.exe -- 
works
   -- Detecting CXX compiler ABI info
   -- Detecting CXX compiler ABI info - done
   -- Detecting CXX compile features
   -- Detecting CXX compile features - done
   Build in Debug mode
   -- Build with RPC support...
   -- Build with Graph runtime support...
   -- Build VTA runtime with target: sim
   -- Found OpenCL: C:/Windows/System32/OpenCL.dll
   -- Build with OpenCL support
   -- Use 
llvm-config=D:/Codes/tvm_testbed/tools/llvm-9.0.1.src.tar/llvm-9.0.1.src/Release/bin/llvm-config.exe
   -- 
D:\Codes\tvm_testbed\tools\llvm-9.0.1.src.tar\llvm-9.0.1.src\Release\include
   -- Found 
LLVM_INCLUDE_DIRS=D:\Codes\tvm_testbed\tools\llvm-9.0.1.src.tar\llvm-9.0.1.src\Release\include
   -- Found LLVM_DEFINITIONS= -D_CRT_SECURE_NO_DEPRECATE 
-D_CRT_SECURE_NO_WARNINGS -D_CRT_NONSTDC_NO_DEPRECATE 
-D_CRT_NONSTDC_NO_WARNINGS -D_SCL_SECURE_NO_DEPRECATE -D_SCL_SECURE_NO_WARNINGS 
-DUNICODE -D_UNICODE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS 
-D__STDC_LIMIT_MACROS
   -- Found TVM_LLVM_VERSION=90
   -- Build with LLVM
   -- Set TVM_LLVM_VERSION=90
   -- Build with contrib.sort
   -- Build with contrib.hybriddump
   -- Performing Test SUPPORT_CXX11
   -- Performing Test SUPPORT_CXX11 - Failed
   -- Build with c++11
   -- Build with thread support...
   -- Looking for pthread.h
   -- Looking for pthread.h - not found
   -- Found Threads: TRUE
   -- Configuring done
   -- Generating done
   -- Build files have been written to: D:/Codes/tvm_testbed/tvm_llvm/build
   PS D:\Codes\tvm_testbed\tvm_llvm\build> ninja
   [1/315] Building CXX object CMakeFiles/tvm.dir/src/relay/op/nn/nn.cc.obj
   FAILED: CMakeFiles/tvm.dir/src/relay/op/nn/nn.cc.obj
   C:\PROGRA~1\LLVM\bin\CLANG_~1.EXE  -DDMLC_USE_FOPEN64=0 -DNDEBUG 
-DTVM_LLVM_VERSION=90 -DTVM_THREADPOOL_USE_OPENMP=0 -Dtvm_EXPORTS -I../include 
-I../3rdparty/dlpack/include -I../3rdparty/dmlc-core/include 
-I../3rdparty/rang/include -I../3rdparty/compiler-rt -I../3rdparty/picojson 
-I../vta/include -I"C:/Program Files 
(x86)/IntelSWTools/system_studio_2020/OpenCL/sdk/include" 
-ID:/Codes/tvm_testbed/tools/llvm-9.0.1.src.tar/llvm-9.0.1.src/Release/include 
-I../topi/include -std=c++11 -O0 -g -Wall -fPIC  -g -Xclang -gcodeview -O0 
-D_DEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrtd
-D_CRT_SECURE_NO_DEPRECATE  -D_CRT_SECURE_NO_WARNINGS  
-D_CRT_NONSTDC_NO_DEPRECATE  -D_CRT_NONSTDC_NO_WARNINGS  
-D_SCL_SECURE_NO_DEPRECATE  -D_SCL_SECURE_NO_WARNINGS  -DUNICODE  -D_UNICODE  
-D__STDC_CONSTANT_MACROS  -D__STDC_FORMAT_MACROS  -D__STDC_LIMIT_MACROS -MD -MT 
CMakeFiles/tvm.dir/src/relay/op/nn/nn.cc.obj -MF 
CMakeFiles\tvm.dir\src\relay\op\nn\nn.cc.obj.d -o 
CMakeFiles/tvm.dir/src/relay/op/nn/nn.cc.obj -c ../src/relay/op/nn/nn.cc
   CLANG_~1: error: unsupported option '-fPIC' for target 
'x86_64-pc-windows-msvc'
   [2/315] Building CXX object CMakeFiles/tvm.dir/src/relay/op/nn/pooling.cc.obj
   FAILED: CMakeFiles/tvm.dir/src/relay/op/nn/pooling.cc.obj
   C:\PROGRA~1\LLVM\bin\CLANG_~1.EXE  -DDMLC_USE_FOPEN64=0 -DNDEBUG 
-DTVM_LLVM_VERSION=90 -DTVM_THREADPOOL_USE_OPENMP=0 -Dtvm_EXPORTS -I../include 
-I../3rdparty/dlpack/include -I../3rdparty/dmlc-core/include 
-I../3rdparty/rang/include -I../3rdparty/compiler-rt -I../3rdparty/picojson 
-I../vta/include -I"C:/Program Files 
(x86)/IntelSWTools/system_studio_2020/OpenCL/sdk/include" 
-ID:/Codes/tvm_testbed/tools/llvm-9.0.1.src.tar/llvm-9.0.1.src/Release/include 
-I../topi/include -std=c++11 -O0 -g -Wall -fPIC  -g -Xclang -gcodeview -O0 
-D_DEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrtd
-D_CRT_SECURE_NO_DEPRECATE  -D_CRT_SECURE_NO_WARNINGS  
-D_CRT_NONSTDC_NO_DEPRECATE  -D_CRT_NONSTDC_NO_WARNINGS  
-D_SCL_SECURE_NO_DEPRECATE  -D_SCL_SECURE_NO_WARNINGS  -DUNICODE  -D_UNICODE  
-D__STDC_CONSTANT_MACROS  -D__STDC_FORMAT_MACROS  -D__STDC_LIMIT_MACROS -MD -MT 
CMakeFiles/tvm.
   ```

[GitHub] [incubator-tvm] tqchen opened a new pull request #4804: [LINT] Fix -Wextra

2020-02-02 Thread GitBox
tqchen opened a new pull request #4804: [LINT] Fix -Wextra
URL: https://github.com/apache/incubator-tvm/pull/4804
 
 
   




[GitHub] [incubator-tvm] Coderx7 edited a comment on issue #4802: Building tvm using llvm on windows 10 fails (ninja: build stopped: subcommand failed.)

2020-02-02 Thread GitBox
Coderx7 edited a comment on issue #4802: Building tvm using llvm on windows 10 
fails (ninja: build stopped: subcommand failed.)
URL: https://github.com/apache/incubator-tvm/issues/4802#issuecomment-581227116
 
 
   @tqchen Thanks a lot. I removed GCC from PATH and it detected clang/LLVM.
However, it fails again with LLVM built from source as well.
   
   > CLANG_~1: error: unsupported option '-fPIC' for target 
'x86_64-pc-windows-msvc'

[GitHub] [incubator-tvm] FrozenGene commented on issue #4794: Change color channel from BGR to RGB for darknet preprocessing

2020-02-02 Thread GitBox
FrozenGene commented on issue #4794: Change color channel from BGR to RGB for 
darknet preprocessing
URL: https://github.com/apache/incubator-tvm/pull/4794#issuecomment-581227501
 
 
   Thanks @vizero1 @cchung100m. This is merged.




[incubator-tvm] branch master updated (00097b1 -> c21d1ee)

2020-02-02 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 00097b1  [QNN] Conv2D with dilation support. (#4796)
 add c21d1ee  Change color channel from BGR to RGB for darknet 
preprocessing (#4794)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/testing/darknet.py | 1 +
 1 file changed, 1 insertion(+)
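The change in `python/tvm/relay/testing/darknet.py` amounts to a channel-order flip. A minimal sketch of the idea (illustrative only, not the actual one-line change):

```python
import numpy as np

# Reversing the last axis of an HWC image converts OpenCV's BGR channel
# order to the RGB order that darknet preprocessing expects.
bgr = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)  # fake HWC image
rgb = bgr[:, :, ::-1]
```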



[GitHub] [incubator-tvm] FrozenGene merged pull request #4794: Change color channel from BGR to RGB for darknet preprocessing

2020-02-02 Thread GitBox
FrozenGene merged pull request #4794: Change color channel from BGR to RGB for 
darknet preprocessing
URL: https://github.com/apache/incubator-tvm/pull/4794
 
 
   




[GitHub] [incubator-tvm] Coderx7 commented on issue #4802: Building tvm using llvm on windows 10 fails (ninja: build stopped: subcommand failed.)

2020-02-02 Thread GitBox
Coderx7 commented on issue #4802: Building tvm using llvm on windows 10 fails 
(ninja: build stopped: subcommand failed.)
URL: https://github.com/apache/incubator-tvm/issues/4802#issuecomment-581227116
 
 
   @tqchen Thanks a lot. I removed GCC from PATH and it detected clang/LLVM; 
however, it fails again with LLVM built from source as well.
   
   > CLANG_~1: error: unsupported option '-fPIC' for target 
'x86_64-pc-windows-msvc'

[GitHub] [incubator-tvm] FrozenGene opened a new pull request #4803: [Frontend] Asymmetric padding of convolution support

2020-02-02 Thread GitBox
FrozenGene opened a new pull request #4803: [Frontend] Asymmetric padding of 
convolution support
URL: https://github.com/apache/incubator-tvm/pull/4803
 
 
   As we now support asymmetric padding, we can avoid the extra `pad` operator 
in the frontend. Our TF frontend already does this; this PR completes the 
support in the TFLite / CoreML / Keras frontends.
   
   Note: this does not conflict with PR 
https://github.com/apache/incubator-tvm/pull/4787, which can handle padding and 
legalize it in TOPI. For example, our MXNet frontend can still keep 2D padding 
when there is no asymmetric padding. However, for frontends with asymmetric 
padding such as TFLite / TF / Keras / CoreML, this PR removes the extra `pad` 
operator and can yield the better performance described in the RFC 
https://github.com/apache/incubator-tvm/issues/2682 
   
   @inadob @optima2005 @wyc-ruiker @anijain2305 @cchung100m Could you help 
review this? Thanks.
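For illustration, SAME-style padding is generally asymmetric, which is why 4-way padding support in conv2d removes the need for a separate `pad` operator. A sketch of the one-dimensional computation (`same_pad_1d` is a hypothetical helper, not TVM's actual `get_pad_tuple`):

```python
def same_pad_1d(input_size, kernel, stride):
    """TensorFlow-style SAME padding along one dimension: returns
    (pad_before, pad_after), which is asymmetric whenever the total
    padding is odd (the extra pixel goes to the bottom/right)."""
    if input_size % stride == 0:
        total = max(kernel - stride, 0)
    else:
        total = max(kernel - input_size % stride, 0)
    pad_before = total // 2
    return pad_before, total - pad_before

# With 4-way padding, conv2d can take (top, left, bottom, right) directly
# instead of a separate `pad` operator feeding a symmetric conv2d:
pad_top, pad_bottom = same_pad_1d(4, 3, 2)   # (0, 1): asymmetric
pad_left, pad_right = same_pad_1d(5, 3, 2)   # (1, 1): symmetric
```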
   




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4797: [AutoTVM] Minor bug fixes in AutoTVM for QNN graphs

2020-02-02 Thread GitBox
kevinthesun commented on a change in pull request #4797: [AutoTVM] Minor bug 
fixes in AutoTVM for QNN graphs
URL: https://github.com/apache/incubator-tvm/pull/4797#discussion_r373913039
 
 

 ##
 File path: python/tvm/autotvm/graph_tuner/utils/utils.py
 ##
 @@ -73,7 +73,7 @@ def is_boundary_node(node_entry, input_names):
     # Operators dependent on original layouts.
     _LAYOUT_FIXED_OP = ["batch_flatten", "transpose", "reshape",
                         "multibox_prior", "multibox_transform_loc", "where",
-                        "non_max_suppression", "strided_slice"]
 
 Review comment:
   Why do we want to change this?
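For context, the list in the diff above is used to mark operators whose semantics depend on the original layout. A sketch of how such a membership check is typically used (the `is_layout_fixed` helper is hypothetical; the list contents are from the pre-change code shown in the diff):

```python
# Operators dependent on original layouts: the graph tuner must not insert
# layout transforms around these nodes.
_LAYOUT_FIXED_OP = ["batch_flatten", "transpose", "reshape",
                    "multibox_prior", "multibox_transform_loc", "where",
                    "non_max_suppression", "strided_slice"]

def is_layout_fixed(op_name):
    """Return True if the operator must keep the original data layout."""
    return op_name in _LAYOUT_FIXED_OP
```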




[GitHub] [incubator-tvm] tqchen closed issue #4802: Building tvm using llvm on windows 10 fails (ninja: build stopped: subcommand failed.)

2020-02-02 Thread GitBox
tqchen closed issue #4802: Building tvm using llvm on windows 10 fails (ninja: 
build stopped: subcommand failed.)
URL: https://github.com/apache/incubator-tvm/issues/4802
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4802: Building tvm using llvm on windows 10 fails (ninja: build stopped: subcommand failed.)

2020-02-02 Thread GitBox
tqchen commented on issue #4802: Building tvm using llvm on windows 10 fails 
(ninja: build stopped: subcommand failed.)
URL: https://github.com/apache/incubator-tvm/issues/4802#issuecomment-581218382
 
 
   Please open a new post in the discuss forum https://discuss.tvm.ai/ . We do 
not support GCC; you will need to build LLVM from source on Windows.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4749: [AutoTVM] Support to pass additional compiler options

2020-02-02 Thread GitBox
FrozenGene commented on a change in pull request #4749: [AutoTVM] Support to 
pass additional compiler options
URL: https://github.com/apache/incubator-tvm/pull/4749#discussion_r373905750
 
 

 ##
 File path: python/tvm/module.py
 ##
 @@ -162,13 +162,20 @@ def export_library(self,
             f.write(_PackImportsToC(self, is_system_lib))
             files.append(path_cc)
 
-        if has_c_module:
-            options = []
-            if "options" in kwargs:
-                opts = kwargs["options"]
-                options = opts if isinstance(opts, (list, tuple)) else [opts]
-                opts = options + ["-I" + path for path in find_include_path()]
-                kwargs.update({'options': opts})
+        # Make sure we won't pass {'options': None} or {'options': [None, '-std=c++11', ...]}
+        # to compiler. We can not prevent users doing the following code:
+        #   f.export_library(path_dso, cc.create_shared, options=None)
+        #   f.export_library(path_dso, cc.create_shared, options=[None, '-std=c++11'])
+        if "options" in kwargs or has_c_module:
+            opts = kwargs["options"] if "options" in kwargs else None
+            options = opts if isinstance(opts, (list, tuple)) else [opts]
+            options = [opt for opt in options if opt]
+            if has_c_module:
+                options = options + ["-I" + path for path in find_include_path()]
+            if not options:
+                del kwargs["options"]
+            else:
+                kwargs.update({'options': options})
 
 Review comment:
   Yes. Your way avoids checking `has_c_module` twice.
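The logic in the diff above can be factored into a standalone sketch (`sanitize_options` is a hypothetical helper name; in TVM the code is inline in `Module.export_library`):

```python
def sanitize_options(kwargs, has_c_module, include_paths):
    """Drop None/empty compiler options and append include paths when a
    C module is present, mirroring the filtering in the diff above."""
    if "options" not in kwargs and not has_c_module:
        return kwargs
    opts = kwargs.get("options")
    options = opts if isinstance(opts, (list, tuple)) else [opts]
    options = [opt for opt in options if opt]  # filter None / empty entries
    if has_c_module:
        options = options + ["-I" + path for path in include_paths]
    if not options:
        kwargs.pop("options", None)  # never pass {'options': []} on
    else:
        kwargs["options"] = options
    return kwargs
```

For example, `options=None` or `options=[None, '-std=c++11']` from a user is reduced to either no `options` key at all or a clean list.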




[GitHub] [incubator-tvm] FrozenGene commented on issue #4796: [QNN] Conv2D with dilation support.

2020-02-02 Thread GitBox
FrozenGene commented on issue #4796: [QNN] Conv2D with dilation support.
URL: https://github.com/apache/incubator-tvm/pull/4796#issuecomment-581216587
 
 
   Thanks @anijain2305 @jackwish. This is merged.




[GitHub] [incubator-tvm] FrozenGene merged pull request #4796: [QNN] Conv2D with dilation support.

2020-02-02 Thread GitBox
FrozenGene merged pull request #4796: [QNN] Conv2D with dilation support.
URL: https://github.com/apache/incubator-tvm/pull/4796
 
 
   




[incubator-tvm] branch master updated (9963cf3 -> 00097b1)

2020-02-02 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9963cf3  [QNN] Doc fix on convolution and dequantize (#4799)
 add 00097b1  [QNN] Conv2D with dilation support. (#4796)

No new revisions were added by this update.

Summary of changes:
 src/relay/qnn/op/convolution.cc  | 18 +++---
 tests/python/relay/test_op_qnn_conv2d.py | 25 -
 2 files changed, 35 insertions(+), 8 deletions(-)



[GitHub] [incubator-tvm] FrozenGene commented on issue #4747: [ThreadPool] Solve ARM BIG.LITTLE heterogeneous multicores

2020-02-02 Thread GitBox
FrozenGene commented on issue #4747: [ThreadPool] Solve ARM BIG.LITTLE 
heterogeneous multicores
URL: https://github.com/apache/incubator-tvm/pull/4747#issuecomment-581216205
 
 
   @yidawang Would you like to review it again? Thanks.




[GitHub] [incubator-tvm] vinx13 merged pull request #4799: [QNN] Doc fix on convolution and dequantize

2020-02-02 Thread GitBox
vinx13 merged pull request #4799: [QNN] Doc fix on convolution and dequantize
URL: https://github.com/apache/incubator-tvm/pull/4799
 
 
   




[incubator-tvm] branch master updated (396095a -> 9963cf3)

2020-02-02 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 396095a  fix #4670: add bias for fc layer (#4801)
 add 9963cf3  [QNN] Doc fix on convolution and dequantize (#4799)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/qnn/op/qnn.py | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)



[GitHub] [incubator-tvm] shoubhik commented on issue #4764: [CI] ci-gpu update blockers

2020-02-02 Thread GitBox
shoubhik commented on issue #4764: [CI] ci-gpu update blockers 
URL: https://github.com/apache/incubator-tvm/issues/4764#issuecomment-581189877
 
 
   > given that the mkl part poses accuracy problem, i feel it might be a bad 
idea to rely on it for testing QNN(see also comment about intel dependency). 
would be great if we can explore generic alternatives that can test QNN. For 
the parser part, I think we can start by directly checking alpha equivalence of 
the graph as well as potentially the comparison to a simulated FP32 version.
   
   + @anijain2305 
   @tqchen most of the code for MXNet QNN has been tested against MKLDNN. Some 
of the formulas for quantization, dequantization, and convolution used by the 
reference MXNet implementation (fake quantization) are not fully and thoroughly 
tested. Also, some of the optimizations available in MKLDNN are not implemented 
in the stock implementation. So my suggestion to unblock this PR would be to 
use the simple feed-forward network test case, where we test only the graph 
generated by QNN. Once we have the fix from MKLDNN as well, we can add proper 
test cases with MKLDNN at that time.
   @tqchen and @icemelon9 what do you think?




[GitHub] [incubator-tvm] masahi commented on issue #4756: [Docker] Update torch version to 1.4

2020-02-02 Thread GitBox
masahi commented on issue #4756: [Docker] Update torch version to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-581182912
 
 
   @jwfromm have you tried pytorch 1.4 + onnx 1.6? So far I have no problem 
with this combo. I want to update onnx CI as well if possible.




[GitHub] [incubator-tvm] tqchen closed issue #4670: [Relay] Bias missing in final layer of Mobilenet?

2020-02-02 Thread GitBox
tqchen closed issue #4670: [Relay] Bias missing in final layer of Mobilenet?
URL: https://github.com/apache/incubator-tvm/issues/4670
 
 
   




[GitHub] [incubator-tvm] tqchen merged pull request #4801: fix #4670: add bias for fc layer

2020-02-02 Thread GitBox
tqchen merged pull request #4801: fix #4670: add bias for fc layer
URL: https://github.com/apache/incubator-tvm/pull/4801
 
 
   




[incubator-tvm] branch master updated: fix #4670: add bias for fc layer (#4801)

2020-02-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 396095a  fix #4670: add bias for fc layer (#4801)
396095a is described below

commit 396095a3e8ad3d15bdc9c52b938d370d4b5ebbf5
Author: kshitij12345 
AuthorDate: Mon Feb 3 00:27:12 2020 +0530

fix #4670: add bias for fc layer (#4801)
---
 python/tvm/relay/testing/mobilenet.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/python/tvm/relay/testing/mobilenet.py 
b/python/tvm/relay/testing/mobilenet.py
index f76b0c2..1b3ce03 100644
--- a/python/tvm/relay/testing/mobilenet.py
+++ b/python/tvm/relay/testing/mobilenet.py
@@ -120,7 +120,9 @@ def mobile_net(num_classes=1000, data_shape=(1, 3, 224, 
224),
 pool = relay.nn.global_avg_pool2d(data=body, layout=layout)
 flatten = relay.nn.batch_flatten(data=pool)
 weight = relay.var('fc_weight')
+bias = relay.var('fc_bias')
 fc = relay.nn.dense(data=flatten, weight=weight, units=num_classes)
+fc = relay.nn.bias_add(fc, bias)
 softmax = relay.nn.softmax(data=fc)
 return relay.Function(relay.analysis.free_vars(softmax), softmax)
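
The two-line diff above inserts a per-unit bias between the dense (fully 
connected) layer and the softmax. As a rough illustration of what that 
computes, here is a plain-Python sketch (not Relay's actual implementation; 
the shapes and values are made up for the example):

```python
import math

def dense(x, weight):
    # x: [batch, in_dim], weight: [units, in_dim] -> [batch, units]
    # matches relay.nn.dense, which computes x @ weight^T
    return [[sum(xi * wi for xi, wi in zip(row, w)) for w in weight]
            for row in x]

def bias_add(y, bias):
    # add one bias value per output unit, as relay.nn.bias_add does here
    return [[v + b for v, b in zip(row, bias)] for row in y]

def softmax(row):
    # numerically stable softmax over one row
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

x = [[1.0, 2.0]]                 # one input of size 2
w = [[0.5, -0.5], [1.0, 1.0]]    # two output units
b = [0.1, -0.1]                  # the bias the fix adds

fc = bias_add(dense(x, w), b)    # [[-0.4, 2.9]]
probs = softmax(fc[0])
```

Without the `bias_add` step, the `fc_bias` parameter would simply be absent 
from the network, which is what issue #4670 reported for the final layer.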
 



[GitHub] [incubator-tvm] tqchen commented on issue #4801: fix #4670: add bias for fc layer

2020-02-02 Thread GitBox
tqchen commented on issue #4801: fix #4670: add bias for fc layer
URL: https://github.com/apache/incubator-tvm/pull/4801#issuecomment-581165443
 
 
   Thanks @kshitij12345 




[GitHub] [incubator-tvm] Coderx7 opened a new issue #4802: Building tvm using llvm on windows 10 fails (ninja: build stopped: subcommand failed.)

2020-02-02 Thread GitBox
Coderx7 opened a new issue #4802: Building tvm using llvm on windows 10 fails 
(ninja: build stopped: subcommand failed.)
URL: https://github.com/apache/incubator-tvm/issues/4802
 
 
   Hi, and thank you for this great project. 
   Last night I tried installing `tvm` using `Visual Studio 16 2019` without a 
hitch and everything went smoothly. But since it seems tvm needs LLVM to work 
properly (at least in my case, as pointed out 
[here](https://discuss.tvm.ai/t/tvmerror-check-failed-bf-nullptr-target-llvm-is-not-enabled/5561)),
 I decided to build `tvm` with `LLVM ON`.   
   I followed the instructions at 
[docs.tvm.ai/install](https://docs.tvm.ai/install/from_source.html), built 
LLVM from source, and then tried building tvm with Ninja. 
   However, this is where it fails: running Ninja stops with the errors below 
(building with VS2019 fails as well): 
   
   ```bash
   D:\Codes\tvm_testbed\tvm_llvm\build>cmake .. -G Ninja
   -- The C compiler identification is GNU 5.1.0
   -- The CXX compiler identification is GNU 5.1.0
   -- Check for working C compiler: C:/TDM-GCC-64/bin/gcc.exe
   -- Check for working C compiler: C:/TDM-GCC-64/bin/gcc.exe -- works
   -- Detecting C compiler ABI info
   -- Detecting C compiler ABI info - done
   -- Detecting C compile features
   -- Detecting C compile features - done
   -- Check for working CXX compiler: C:/TDM-GCC-64/bin/c++.exe
   -- Check for working CXX compiler: C:/TDM-GCC-64/bin/c++.exe -- works
   -- Detecting CXX compiler ABI info
   -- Detecting CXX compiler ABI info - done
   -- Detecting CXX compile features
   -- Detecting CXX compile features - done
   -- Build with RPC support...
   -- Build with Graph runtime support...
   -- Build VTA runtime with target: sim
   -- Found OpenCL: C:/Windows/System32/OpenCL.dll (found version "2.1")
   -- Build with OpenCL support
   -- Use 
llvm-config=D:/Codes/tvm_testbed/tools/llvm-9.0.1.src.tar/llvm-9.0.1.src/Release/bin/llvm-config.exe
   -- 
D:\Codes\tvm_testbed\tools\llvm-9.0.1.src.tar\llvm-9.0.1.src\Release\include
   -- Found 
LLVM_INCLUDE_DIRS=D:\Codes\tvm_testbed\tools\llvm-9.0.1.src.tar\llvm-9.0.1.src\Release\include
   -- Found LLVM_DEFINITIONS= -D_CRT_SECURE_NO_DEPRECATE 
-D_CRT_SECURE_NO_WARNINGS -D_CRT_NONSTDC_NO_DEPRECATE 
-D_CRT_NONSTDC_NO_WARNINGS -D_SCL_SECURE_NO_DEPRECATE -D_SCL_SECURE_NO_WARNINGS 
-DUNICODE -D_UNICODE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS 
-D__STDC_LIMIT_MACROS
   -- Found TVM_LLVM_VERSION=90
   -- Build with LLVM
   -- Set TVM_LLVM_VERSION=90
   -- Build with contrib.sort
   -- Build with contrib.hybriddump
   -- Performing Test SUPPORT_CXX11
   -- Performing Test SUPPORT_CXX11 - Success
   -- Build with c++11
   -- Build with thread support...
   -- Looking for pthread.h
   -- Looking for pthread.h - found
   -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
   -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
   -- Found Threads: TRUE
   -- Configuring done
   -- Generating done
   -- Build files have been written to: D:/Codes/tvm_testbed/tvm_llvm/build
   
   D:\Codes\tvm_testbed\tvm_llvm\build>Ninja
   [1/315] Building CXX object CMakeFiles/vta_fsim.dir/vta/src/device_api.cc.obj
   ../vta/src/device_api.cc:1:0: warning: -fPIC ignored for target (all code is 
position independent)
/*
^
   [2/315] Building CXX object CMakeFiles/vta_fsim.dir/vta/src/runtime.cc.obj
   ../vta/src/runtime.cc:1:0: warning: -fPIC ignored for target (all code is 
position independent)
/*
^
   ../vta/src/runtime.cc: In function 'void* VTABufferAlloc(size_t)':
   ../vta/src/runtime.cc:1324:7: warning: 'void* VTABufferAlloc(size_t)' 
redeclared without dllimport attribute: previous dllimport ignored 
[-Wattributes]
void* VTABufferAlloc(size_t size) {
  ^
   ../vta/src/runtime.cc: In function 'void VTABufferFree(void*)':
   ../vta/src/runtime.cc:1328:6: warning: 'void VTABufferFree(void*)' 
redeclared without dllimport attribute: previous dllimport ignored 
[-Wattributes]
void VTABufferFree(void* buffer) {
 ^
   ../vta/src/runtime.cc: In function 'void VTABufferCopy(const void*, size_t, 
void*, size_t, size_t, int)':
   ../vta/src/runtime.cc:1332:6: warning: 'void VTABufferCopy(const void*, 
size_t, void*, size_t, size_t, int)' redeclared without dllimport attribute: 
previous dllimport ignored [-Wattributes]
void VTABufferCopy(const void* from,
 ^
   ../vta/src/runtime.cc: In function 'void* VTATLSCommandHandle()':
   ../vta/src/runtime.cc:1365:18: warning: 'void* VTATLSCommandHandle()' 
redeclared without dllimport attribute: previous dllimport ignored 
[-Wattributes]
VTACommandHandle VTATLSCommandHandle() {
 ^
   ../vta/src/runtime.cc: In function 'void VTARuntimeShutdown()':
   ../vta/src/runtime.cc:1369:6: warning: 'void VTARuntimeShutdown()' 
redeclared without dllimport attribute: previous dllimport ignored 
[-Wattributes]
void VTARuntimeShutdown() {
 ^
   ../vta/src/runtime.cc: In f

[GitHub] [incubator-tvm] kshitij12345 opened a new pull request #4801: fix #4670: add bias for fc layer

2020-02-02 Thread GitBox
kshitij12345 opened a new pull request #4801: fix #4670: add bias for fc layer
URL: https://github.com/apache/incubator-tvm/pull/4801
 
 
   Fixes #4670 
   
   @tqchen @zhiics

