jcf94 commented on a change in pull request #6686: URL: https://github.com/apache/incubator-tvm/pull/6686#discussion_r505237035
########## File path: src/auto_scheduler/compute_dag.cc ##########

@@ -970,8 +1005,21 @@ void ComputeDAG::RewriteLayout(const Array<Step>& transform_steps) {
     }  // end for placeholder
   }    // end for stage
   p_dag->access_analyzer = AccessAnalyzer(p_dag->tensors);
-  p_dag->ops = p_dag->access_analyzer->ops_topo_order;
+
+  Array<te::Operation> out_ops;
+  for (const auto& op : p_dag->access_analyzer->ops_topo_order) {
+    if (p_dag->access_analyzer.IsOutput(op)) {
+      out_ops.push_back(op);
+    }
+  }
+
+  p_dag->ops.clear();
+  te::Schedule sch = te::create_schedule(out_ops);
+  for (auto stage : sch->stages) {
+    p_dag->ops.push_back(stage->op);
+  }
   p_dag->flop_ct = FlopEstimator().EstimateFlop(p_dag->ops);
+  p_dag->init_state = State(p_dag->ops);

Review comment:
We can delete Line 987 since it's added here. Anyway, this does not matter much: I'm working on some updates to layout_write and have also modified some code in this part, so I'll refine the code after this PR is merged. :)

########## File path: src/te/schedule/schedule_dataflow_rewrite.cc ##########

@@ -138,6 +138,15 @@ Tensor Schedule::cache_read(const Tensor& tensor, const std::string& scope,
   }
   os << "." << scope;
+  // when a schedule has multiple cache_read on the same tensor,
+  // we make sure their op names are unique. e.g., w.shared, w.shared.d, w.shared.d.d
+  for (auto pair : (*this)->stage_map) {
+    auto stage = pair.second;
+    if (stage->op->name == os.str()) {
+      os << ".d";

Review comment:
Can we add a global map here and name these ops "w.shared.0", "w.shared.1", ...?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org