jroesch commented on a change in pull request #6162:
URL: https://github.com/apache/incubator-tvm/pull/6162#discussion_r462735811



##########
File path: tests/python/relay/test_ir_parser2.py
##########
@@ -1,891 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
       This was a duplicated file that was modified to work on the new parser. 
I deleted the original parser's tests and replaced them with the version that 
has been ported to the new parser. 

##########
File path: tests/python/relay/test_ir_parser.py
##########
@@ -868,48 +883,60 @@ def test_extern_adt_defn():
     extern_def = relay.TypeData(extern_var, [typ_var], [])
     mod[extern_var] = extern_def
 
-    assert_parses_as(
+    assert_parse_module_as(
         """
         extern type T[A]
         """,
         mod
     )
+
+@pytest.mark.skip("not yet tested on parser 2.0")
 def test_import_grad():
     mod = tvm.IRModule()
     mod.import_from_std("gradient.rly")
 
+# hierarchy id, i.e. parse nn.conv2d
+# do with multiple levels
+#
+# call attributes not correctly parsing
+# convert error from attribute construction to real error message
+# lexing issue with projection of graph variables
+
+# def test_hierarchical_identifiers():
+#     assert False
+
+def test_resnet():
+    mod, params = relay.testing.resnet.get_workload()
+    text = str(mod.astext())
+    parsed_mod = parse_module(text)
+    tvm.ir.assert_structural_equal(mod, parsed_mod)
+
+def inline_params(mod, params):
+    main_fn = mod["main"]
+    str_to_var = {}
+    for param in main_fn.params:
+        str_to_var[param.name_hint] = param
+
+    bind_map = {}
+    for param in params:
+        bind_map[str_to_var[param]] = relay.const(params[param])
+
+    body = relay.bind(main_fn.body, bind_map)
+    main_fn = relay.Function(relay.analysis.free_vars(body), body)
+    mod["main"] = main_fn
+    return mod
+
+def test_resnet_inlined_params():
+    mod, params = relay.testing.resnet.get_workload()
+    mod = inline_params(mod, params)
+    text = str(mod.astext())
+    parsed_mod = parse_module(text)
+    tvm.ir.assert_structural_equal(mod, parsed_mod)
+
 if __name__ == "__main__":
-    test_graph()

Review comment:
       See my above comment to Marisa, but in my last PR I left the existing 
parser in place with all tests in `tests/python/relay/test_ir_parser.py` and 
the new parser in `tests/python/relay/test_ir_parser2.py`. In this PR I removed 
the original parser completely and renamed the 2.0 tests. 
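The `inline_params` helper in the diff above relies on `relay.bind` substituting free variables with constants. The idea can be sketched on a toy expression tree (the `Var`/`Const`/`Add` classes and `bind`/`free_vars` helpers below are hypothetical stand-ins, not Relay's actual IR):

```python
class Var:
    def __init__(self, name): self.name = name

class Const:
    def __init__(self, value): self.value = value

class Add:
    def __init__(self, lhs, rhs): self.lhs, self.rhs = lhs, rhs

def bind(expr, bind_map):
    """Replace every Var whose name appears in bind_map with its bound constant."""
    if isinstance(expr, Var):
        return bind_map.get(expr.name, expr)
    if isinstance(expr, Add):
        return Add(bind(expr.lhs, bind_map), bind(expr.rhs, bind_map))
    return expr

def free_vars(expr):
    """Collect names of variables still unbound in the expression."""
    if isinstance(expr, Var):
        return {expr.name}
    if isinstance(expr, Add):
        return free_vars(expr.lhs) | free_vars(expr.rhs)
    return set()

body = Add(Var("x"), Var("w"))        # x + w
body = bind(body, {"w": Const(3.0)})  # inline parameter w as a constant
print(free_vars(body))                # → {'x'}
```

After binding, recomputing the free variables (as `relay.analysis.free_vars` does in the diff) yields only the true inputs, so the rebuilt function no longer takes the inlined parameters.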

##########
File path: include/tvm/parser/source_map.h
##########
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_PARSER_SOURCE_MAP_H_
+#define TVM_PARSER_SOURCE_MAP_H_
+/*!
+ * \file source_map.h
+ * \brief A map from source names to source code.
+ */
+#include <tvm/runtime/packed_func.h>
+#include <tvm/runtime/registry.h>
+#include <tvm/ir/span.h>
+
+#include <fstream>
+#include <string>
+
+namespace tvm {
+namespace parser {
+
+
+/*! \brief A program source in any language.
+ *
+ * Could represent the source from an ML framework or the internal
+ * source of a TVM program.
+ */
+struct Source {
+  /*! \brief The raw source. */
+  std::string source;
+  /*! \brief A mapping of line breaks into the raw source. */
+  std::vector<std::pair<int, int>> line_map;
+
+  /*! \brief An empty source. */
+  Source() : source(), line_map() {}
+
+  /*! \brief Construct a source from a string. */
+  TVM_DLL explicit Source(const std::string& source);
+
+  TVM_DLL Source(const Source& source) : source(source.source), line_map(source.line_map) {}
+
+  /*! \brief Generate an error message at a specific line and column with the
+   * annotated message.
+   *
+   * The error is written directly to the `out` std::ostream.
+   *
+   * \param out The output ostream.
+   * \param line The line at which to report a diagnostic.
+   * \param column The column at which to report a diagnostic.
+   * \param msg The message to attach.
+   */
+  TVM_DLL void ReportAt(std::ostream& out, int line, int column, const std::string& msg) const;

Review comment:
       I plan on doing this in follow-up work. I have a tracking issue here: 
https://github.com/apache/incubator-tvm/issues/6153. My goal was to first 
rewrite the parser, land the initial machinery, and replace the existing parser 
with near feature parity (I believe 2.0 is better than the existing one), then 
improve it as I flow the new diagnostics through the entire compiler pipeline. 
I added a bullet to the tracking list about defining a custom diagnostic 
renderer, which will control how the messages are generated. 
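The `Source::line_map` and `ReportAt` described in the header above can be sketched in Python (`build_line_map` and `report_at` below are hypothetical illustrations of the idea, not TVM's actual implementation):

```python
def build_line_map(source):
    """Record a (start_offset, length) pair for each line of the raw source."""
    line_map = []
    start = 0
    for line in source.split("\n"):
        line_map.append((start, len(line)))
        start += len(line) + 1  # +1 skips the newline character
    return line_map

def report_at(source, line_map, line, column, msg):
    """Render an annotated diagnostic; line and column are 1-based."""
    start, length = line_map[line - 1]
    text = source[start:start + length]
    caret = " " * (column - 1) + "^"  # point at the offending column
    return "\n".join([f"error at {line}:{column}: {msg}", text, caret])

src = "def f():\n  retrun 1\n"
lm = build_line_map(src)
print(report_at(src, lm, 2, 3, "unknown identifier `retrun`"))
```

Precomputing the line map once per `Source` makes each diagnostic lookup a constant-time index rather than a rescan of the whole program text.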

##########
File path: python/tvm/parser/__init__.py
##########
@@ -24,4 +24,5 @@ def parse_expr(source):
     return _ffi_api.ParseExpr("string", source)
 
 def fromtext(source, source_name="from_string"):
+    # TODO(@tqchen): currently we have to invoke `str` which dramatically reduces performance.

Review comment:
       The C++ code was expecting a std::string; I changed it to tvm::String, 
which fixes the overload but still incurs a copy. Copying can be very expensive 
here: a model that includes its parameters can be hundreds of megabytes. It's 
slightly faster without the conversion, but would be even faster if we didn't 
need to copy at all. 

##########
File path: src/parser/diagnostic.h
##########
@@ -138,8 +62,73 @@ struct Diagnostic {
   /*! \brief The diagnostic message. */
   std::string message;
 
+  /*! \brief A diagnostic for a single character token. */
   Diagnostic(int line, int column, const std::string& message)
-      : level(DiagnosticLevel::Error), span(SourceName(), line, column), message(message) {}
+      : level(DiagnosticLevel::Error), span(SourceName(), line, column, line, column + 1), message(message) {}
+
+  Diagnostic(DiagnosticLevel level, Span span, const std::string& message)
+    : level(level), span(span), message(message) {}
+};
+
+/*!
+ * \brief A wrapper around std::stringstream to build a diagnostic.
+ *
+ * \code
+ *
+ * void ReportError(const Diagnostic& diag);
+ *
+ * void Test(int number) {
+ *   // Use the diagnostic builder to construct an error.
+ *   ReportError(DiagnosticBuilder() << "This is an error number=" << number);
+ * }
+ *
+ * \endcode
+ */
+struct DiagnosticBuilder {
+ public:
+  /*! \brief The level. */
+  DiagnosticLevel level;
+
+  /*! \brief The source name. */
+  SourceName source_name;
+
+  /*! \brief The span of the diagnostic. */
+  Span span;
+
+  /*! \brief The line number. */
+  int line;

Review comment:
       I am partway through refactoring: originally this struct didn't have 
the Span, and now that it does we can delete the line/column fields. 
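The stream-style `DiagnosticBuilder` interface documented above can be approximated in Python by overloading `__lshift__` (a hypothetical sketch of the pattern, not TVM's FFI-backed builder):

```python
class DiagnosticBuilder:
    """Accumulate message fragments with a C++-style `<<` operator."""

    def __init__(self, level="error", line=0, column=0):
        self.level, self.line, self.column = level, line, column
        self.parts = []

    def __lshift__(self, fragment):
        # Stringify and append the fragment, returning self so `<<` chains.
        self.parts.append(str(fragment))
        return self

    def message(self):
        return "".join(self.parts)

number = 42
diag = DiagnosticBuilder("error", 3, 7) << "This is an error number=" << number
print(f"{diag.level} at {diag.line}:{diag.column}: {diag.message()}")
```

Returning `self` from the shift operator is what lets arbitrary fragments be chained before the finished diagnostic is handed to a reporter.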




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

