Re: [PR] [Schedule] Loop-Partition Scheduling Primitive [tvm]

2024-01-22 Thread via GitHub


rutkoor commented on code in PR #16431:
URL: https://github.com/apache/tvm/pull/16431#discussion_r1462780765


##
src/tir/schedule/concrete_schedule.cc:
##
@@ -500,6 +500,143 @@ Array<LoopRV> ConcreteScheduleNode::Split(const LoopRV& loop_rv,
   return CreateRV(results);
 }
 
+Array<LoopRV> ConcreteScheduleNode::LoopPartition(const LoopRV& loop_rv,
+                                                  const Array<Optional<ExprRV>>& factor_rvs,
+                                                  bool preserve_unit_iters) {
+  class SymbolicShapeError : public ScheduleError {
+   public:
+    explicit SymbolicShapeError(IRModule mod, For loop) : mod_(mod), loop_(std::move(loop)) {}
+
+    String FastErrorString() const final {
+      return "ScheduleError: The min and extent values of the loop are required to be known at "
+             "compile time. However, dynamic shape has been detected.";
+    }
+
+    String DetailRenderTemplate() const final {
+      return "Detected dynamic shape in either min or extent of a loop {0}";
+    }
+
+    IRModule mod() const final { return mod_; }
+    Array<ObjectRef> LocationsOfInterest() const final { return {loop_}; }
+
+    IRModule mod_;
+    For loop_;
+  };
+
+  class NotSingleInferFactorError : public ScheduleError {
+   public:
+    explicit NotSingleInferFactorError(IRModule mod) : mod_(mod) {}
+
+    String FastErrorString() const final {
+      return "ScheduleError: only one factor can be specified as -1 or none";
+    }
+
+    String DetailRenderTemplate() const final {
+      return "Only one factor can be specified as -1 or none";
+    }
+
+    IRModule mod() const final { return mod_; }
+    Array<ObjectRef> LocationsOfInterest() const final { return {}; }
+
+    IRModule mod_;
+  };
+
+  class WrongFactorSumError : public ScheduleError {
+   public:
+    explicit WrongFactorSumError(IRModule mod, For loop) : mod_(mod), loop_(std::move(loop)) {}
+
+    String FastErrorString() const final {
+      return "ScheduleError: The sum of factors is larger than or equal to the extent of "
+             "loop";
+    }
+
+    String DetailRenderTemplate() const final {
+      return "The sum of factors is not larger than or equal to the extent of loop {0}";
+    }
+
+    IRModule mod() const final { return mod_; }
+    Array<ObjectRef> LocationsOfInterest() const final { return {loop_}; }
+
+    IRModule mod_;
+    For loop_;
+  };
+
+  class NonPositiveFactorError : public ScheduleError {

Review Comment:
   Added changes to reuse the same error class for Split and Loop-Partition.
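
   For illustration, a minimal sketch of what such reuse could look like (the naming, fields, and messages below are illustrative, not necessarily the PR's actual implementation): a single factor-validation error parameterized by the primitive's name, so Split and LoopPartition can raise the same class.

#include <sstream>

// Illustrative sketch only: one error class shared by Split and LoopPartition,
// built on the same ScheduleError interface shown in the diff above.
class NonPositiveFactorError : public ScheduleError {
 public:
  explicit NonPositiveFactorError(IRModule mod, int64_t factor, String primitive)
      : mod_(std::move(mod)), factor_(factor), primitive_(std::move(primitive)) {}

  String FastErrorString() const final {
    return "ScheduleError: The factors are required to be positive";
  }

  String DetailRenderTemplate() const final {
    // Render the offending factor and the primitive that rejected it.
    std::ostringstream os;
    os << "The factor " << factor_ << " given to " << primitive_ << " is not positive";
    return os.str();
  }

  IRModule mod() const final { return mod_; }
  Array<ObjectRef> LocationsOfInterest() const final { return {}; }

  IRModule mod_;
  int64_t factor_;
  String primitive_;  // hypothetical: e.g. "split" or "loop_partition"
};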



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Schedule] Loop-Partition Scheduling Primitive [tvm]

2024-01-22 Thread via GitHub


rutkoor commented on code in PR #16431:
URL: https://github.com/apache/tvm/pull/16431#discussion_r1462780106


##
src/tir/schedule/primitive/get_block_loop.cc:
##
@@ -238,7 +238,7 @@ struct GetOutputBlocksTraits : public UnpackedInstTraits<GetOutputBlocksTraits> {
 
   static String UnpackedAsPython(Array<String> outputs, String block_rv) {
     PythonAPICall py("get_output_blocks");
-    py.Input("block", block_rv);
+    py.Input("scope_block", block_rv);

Review Comment:
   Removed.






Re: [PR] [Vulkan] Some fixes of Vulkan codegen [tvm]

2024-01-22 Thread via GitHub


junrushao commented on PR #16405:
URL: https://github.com/apache/tvm/pull/16405#issuecomment-1905314210

   I think it's part of https://github.com/apache/tvm/pull/16414, which has been merged.





(tvm) branch nightly updated (e4b1d684b4 -> 8621517d3d)

2024-01-22 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from e4b1d684b4 Merge remote-tracking branch 'upstream/main'
 add ee5c994a59 [Disco][3rdparty] Add latency optimized all reduce kernels.
 add 20efa23e37 Revert "[Disco][3rdparty] Add latency optimized all reduce 
kernels."
 add 8621517d3d [skip ci] Post unity transition

No new revisions were added by this update.

Summary of changes:
 .asf.yaml | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)



Re: [PR] [Unity][MSC][M4.1] Add plugin && plugin_builder, enable build and test in different frameworks [tvm]

2024-01-22 Thread via GitHub


Hzfengsy merged PR #16397:
URL: https://github.com/apache/tvm/pull/16397





Re: [PR] [Vulkan] Some fixes of Vulkan codegen [tvm]

2024-01-22 Thread via GitHub


Hzfengsy commented on PR #16405:
URL: https://github.com/apache/tvm/pull/16405#issuecomment-1905278555

   Please rebase :)





Re: [PR] [Schedule] Loop-Partition Scheduling Primitive [tvm]

2024-01-22 Thread via GitHub


Hzfengsy commented on code in PR #16431:
URL: https://github.com/apache/tvm/pull/16431#discussion_r1462709310


##
src/tir/schedule/concrete_schedule.cc:
##
@@ -500,6 +500,143 @@ Array<LoopRV> ConcreteScheduleNode::Split(const LoopRV& loop_rv,
   return CreateRV(results);
 }
 
+Array<LoopRV> ConcreteScheduleNode::LoopPartition(const LoopRV& loop_rv,
+                                                  const Array<Optional<ExprRV>>& factor_rvs,
+                                                  bool preserve_unit_iters) {
+  class SymbolicShapeError : public ScheduleError {
+   public:
+    explicit SymbolicShapeError(IRModule mod, For loop) : mod_(mod), loop_(std::move(loop)) {}
+
+    String FastErrorString() const final {
+      return "ScheduleError: The min and extent values of the loop are required to be known at "
+             "compile time. However, dynamic shape has been detected.";
+    }
+
+    String DetailRenderTemplate() const final {
+      return "Detected dynamic shape in either min or extent of a loop {0}";
+    }
+
+    IRModule mod() const final { return mod_; }
+    Array<ObjectRef> LocationsOfInterest() const final { return {loop_}; }
+
+    IRModule mod_;
+    For loop_;
+  };
+
+  class NotSingleInferFactorError : public ScheduleError {
+   public:
+    explicit NotSingleInferFactorError(IRModule mod) : mod_(mod) {}
+
+    String FastErrorString() const final {
+      return "ScheduleError: only one factor can be specified as -1 or none";
+    }
+
+    String DetailRenderTemplate() const final {
+      return "Only one factor can be specified as -1 or none";
+    }
+
+    IRModule mod() const final { return mod_; }
+    Array<ObjectRef> LocationsOfInterest() const final { return {}; }
+
+    IRModule mod_;
+  };
+
+  class WrongFactorSumError : public ScheduleError {
+   public:
+    explicit WrongFactorSumError(IRModule mod, For loop) : mod_(mod), loop_(std::move(loop)) {}
+
+    String FastErrorString() const final {
+      return "ScheduleError: The sum of factors is larger than or equal to the extent of "
+             "loop";
+    }
+
+    String DetailRenderTemplate() const final {
+      return "The sum of factors is not larger than or equal to the extent of loop {0}";
+    }
+
+    IRModule mod() const final { return mod_; }
+    Array<ObjectRef> LocationsOfInterest() const final { return {loop_}; }
+
+    IRModule mod_;
+    For loop_;
+  };
+
+  class NonPositiveFactorError : public ScheduleError {

Review Comment:
   Would be great to reuse the same error class as Split. 
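
   For context, a minimal C++ usage sketch based on the LoopPartition signature in the diff above (the module contents and names are hypothetical): partition a loop into consecutive pieces of extent 2 and 64, leaving one factor to be inferred.

#include <tvm/tir/schedule/schedule.h>

using namespace tvm;
using namespace tvm::tir;

// Assumes the module bound to `sch` holds a single PrimFunc with a block "B"
// under one loop whose min/extent are static (cf. SymbolicShapeError above).
Array<LoopRV> PartitionOuterLoop(Schedule sch) {
  BlockRV block = sch->GetBlock("B");
  Array<LoopRV> loops = sch->GetLoops(block);
  // NullOpt marks the factor to infer from the remaining extent; at most one
  // factor may be left unspecified (cf. NotSingleInferFactorError above).
  return sch->LoopPartition(loops[0], {Integer(2), Integer(64), NullOpt},
                            /*preserve_unit_iters=*/true);
}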



##
src/tir/schedule/primitive/get_block_loop.cc:
##
@@ -238,7 +238,7 @@ struct GetOutputBlocksTraits : public UnpackedInstTraits<GetOutputBlocksTraits> {
 
   static String UnpackedAsPython(Array<String> outputs, String block_rv) {
     PythonAPICall py("get_output_blocks");
-    py.Input("block", block_rv);
+    py.Input("scope_block", block_rv);

Review Comment:
   I'm not sure what the purpose of this change is. Maybe a typo?






Re: [PR] [BugTIR] fix thread_sync occurs in letstmt [tvm]

2024-01-22 Thread via GitHub


Hzfengsy commented on PR #16454:
URL: https://github.com/apache/tvm/pull/16454#issuecomment-1905259889

   cc @vinx13 @spectrometerHBH 





Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


slyubomirsky commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462672163


##
src/relax/transform/lambda_lift.cc:
##
@@ -336,176 +266,235 @@ class LambdaLifter : public ExprMutator {
       return it->second;
     }();
 
-    auto global = GlobalVar(lift_func_name);
-    Array<Var> free_vars = FreeVars(func);
     Array<Var> captured_vars;
-
-    Array<Var> typed_captured_vars;
-    bool recursive = false;
-    for (const auto& var : free_vars) {
-      if (!recur_vars_.empty() && var == recur_vars_.back()) {
-        recursive = true;
+    bool is_recursive = false;
+    bool is_closure = false;
+    for (const auto& var : FreeVars(func)) {
+      if (var.same_as(current_lambda_var_)) {
+        is_recursive = true;
       } else {
+        is_closure = true;
         captured_vars.push_back(var);
       }
     }
 
+    Array<Var> typed_captured_vars;
     Map<Var, Expr> rebinding_map;
     for (auto free_var : captured_vars) {
      Var var = Var(free_var->name_hint(), GetStructInfo(free_var), free_var->span);
       typed_captured_vars.push_back(var);
       rebinding_map.Set(free_var, var);
     }
 
-    // recursive call
-    if (recursive) {
-      if (!captured_vars.empty()) {
-        Array<Expr> fvs;
-        for (auto fv : captured_vars) {
-          fvs.push_back(fv);
-        }
-        // it is required by block_blocker, will be updated later
-        UpdateStructInfo(global, GetStructInfo(recur_vars_.back()));
-        lambda_map_.emplace(recur_vars_.back(), Call(global, fvs));
-      } else {
-        if (recur_vars_.size() > 0) {
-          lambda_map_.emplace(recur_vars_.back(), global);
-        }
-      }
+    tvm::Array<Var> lifted_func_params =
+        func_node->params.Map([this](Var var) { return VisitVarDef(var); });
+    for (const auto& var : typed_captured_vars) {
+      lifted_func_params.push_back(var);
     }
 
-    tvm::Array<Var> params;
-    bool all_params_unchanged = true;
-    for (Var param : func_node->params) {
-      Var new_param = this->VisitVarDef(param);
-      params.push_back(new_param);
-      all_params_unchanged &= param.same_as(new_param);
+    auto gvar_lifted_func = GlobalVar(lift_func_name);
+    {
+      auto func_sinfo = Downcast<FuncStructInfo>(func_node->struct_info_);
+      if (is_closure) {
+        func_sinfo = FuncStructInfo(lifted_func_params.Map(GetStructInfo), func_sinfo->ret,
+                                    func_sinfo->purity);
+      }
+      UpdateStructInfo(gvar_lifted_func, func_sinfo);
     }
 
-    Expr body = this->VisitWithNewScope(func_node->body);
-    Expr visited_func;
+    Expr body = func_node->body;
 
-    if (all_params_unchanged && body.same_as(func_node->body)) {
-      visited_func = GetRef(func_node);
-    } else if (const auto& body_sinfo = MatchStructInfo(body)) {
-      visited_func =
-          Function(params, body, body_sinfo.value(), func_node->is_pure, func_node->attrs);
-    } else {
-      visited_func =
-          Function(params, body, func_node->ret_struct_info, func_node->is_pure, func_node->attrs);
+    // recursive call
+    if (is_recursive && is_closure) {
+      // it is required by block_blocker, will be updated later
+      nested_closure_map_.emplace(
+          current_lambda_var_.value(),
+          Call(gvar_lifted_func, captured_vars.Map([](Var var) -> Expr { return var; })));
Review Comment:
   Might have to be careful to ensure type safety can't get broken that way, as that has tended to lead to soundness issues in languages. (I believe that type of thing is a source of unsoundness in TypeScript, IIRC.)
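
   To spell out the hazard with a standalone sketch (plain C++ with std::vector, not TVM's Array): a mutable covariant view lets a non-Var leak into a Var container, which is exactly the classic soundness hole; a copying conversion avoids it.

#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Expr { virtual ~Expr() = default; };
struct Var : Expr { std::string name = "x"; };
struct Call : Expr {};

int main() {
  std::vector<std::shared_ptr<Var>> vars = {std::make_shared<Var>()};

  // C++ rejects the aliasing conversion outright, which is the safe default:
  //   std::vector<std::shared_ptr<Expr>>& exprs = vars;  // does not compile
  // If it were allowed (as with covariant mutable arrays), then
  //   exprs.push_back(std::make_shared<Call>());
  // would plant a Call inside `vars`, breaking every user of vars[i]->name.

  // A copying conversion is safe: writes to the copy cannot flow back.
  std::vector<std::shared_ptr<Expr>> exprs(vars.begin(), vars.end());
  exprs.push_back(std::make_shared<Call>());
  std::cout << vars.size() << " vs " << exprs.size() << "\n";  // prints "1 vs 2"
}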






Re: [PR] [Unity][Draft][WIP] Relax language specification [tvm]

2024-01-22 Thread via GitHub


slyubomirsky commented on PR #14148:
URL: https://github.com/apache/tvm/pull/14148#issuecomment-1905208488

   Closing the PR so discussion can move to the RFC instead.





Re: [PR] [Unity][Draft][WIP] Relax language specification [tvm]

2024-01-22 Thread via GitHub


slyubomirsky closed pull request #14148: [Unity][Draft][WIP] Relax language 
specification
URL: https://github.com/apache/tvm/pull/14148





[PR] [RFC] Relax Language Specification [tvm-rfcs]

2024-01-22 Thread via GitHub


slyubomirsky opened a new pull request, #106:
URL: https://github.com/apache/tvm-rfcs/pull/106

   [Rendered view.](https://github.com/slyubomirsky/tvm-rfcs/blob/relax-spec/rfcs/0106-relax-spec.md)
   
   Now that Unity has been merged into TVM's main branch, I have written an RFC to make my [unofficial Relax specification](https://github.com/apache/tvm/pull/14148) an official one, akin to the [TIR specification RFC](https://github.com/apache/tvm-rfcs/pull/101). Since Relax is a much newer and less heavily used language than TIR, there are fewer unresolved questions in this RFC compared to that for the TIR specification. I welcome your review both on the [specification draft itself](https://github.com/slyubomirsky/tvm-rfcs/blob/relax-spec/rfcs/assets/0106/spec.md) and on the procedures proposed in the RFC.
   
   Many thanks to those who have reviewed past versions of the draft Relax 
specification, including @YuchenJin, @psrivas2, @sunggg, @junrushao, @denise-k, 
and @yongwww.





[PR] [BugTIR] fix thread_sync occurs in letstmt [tvm]

2024-01-22 Thread via GitHub


JackWeiw opened a new pull request, #16454:
URL: https://github.com/apache/tvm/pull/16454

   See the original discussion:
   [LayerNorm error in thread_storage_sync when reading x into shared memory](https://discuss.tvm.apache.org/t/layernorm-error-in-thread-storage-sync-when-read-x-into-shared-memory/16269)
   
   I tried to read x into shared memory to accelerate LayerNorm ([script here](https://gist.github.com/JackWeiw/f873daaff32212b0b19cf91fda463007)),
   but an error occurs in the thread_storage_sync pass. I found it is caused by an error in lowering the LetStmt ([lowered script](https://gist.github.com/JackWeiw/6737f4faa1b486b389721cf7f3f4ad4d)).
   





Re: [PR] [BugTIR] fix thread_sync occurs in letstmt [tvm]

2024-01-22 Thread via GitHub


JackWeiw closed pull request #16447: [BugTIR] fix thread_sync occurs in letstmt
URL: https://github.com/apache/tvm/pull/16447





(tvm) branch main updated: [skip ci] Post unity transition

2024-01-22 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 8621517d3d [skip ci] Post unity transition
8621517d3d is described below

commit 8621517d3d90d81ffdfd394f419e333c42406719
Author: tqchen 
AuthorDate: Mon Jan 22 18:08:43 2024 -0500

[skip ci] Post unity transition

This PR turns the settings back after post unity transition.
---
 .asf.yaml | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/.asf.yaml b/.asf.yaml
index 9d2dbfb240..7aa29bf955 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -53,4 +53,22 @@ github:
 
   # See https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features#Git.asf.yamlfeatures-Branchprotection
   protected_branches:
-    main: {}
+    main:
+      required_status_checks:
+        contexts:
+          - unity/pr-head
+          - arm/pr-head
+          - cortexm/pr-head
+          - cpu/pr-head
+          - docker/pr-head
+          - gpu/pr-head
+          - hexagon/pr-head
+          - i386/pr-head
+          - lint/pr-head
+          - minimal/pr-head
+          - riscv/pr-head
+          - wasm/pr-head
+          - cross-isa-minimal/pr-head
+
+      required_pull_request_reviews:
+        required_approving_review_count: 1



Re: [PR] [RFC] Relax Upstreaming [tvm-rfcs]

2024-01-22 Thread via GitHub


YuchenJin commented on PR #89:
URL: https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1904969432

   > It's worth noting that with the merging of Unity into TVM's main branch, 
Relax has already been _de facto_ upstreamed.
   
   🥳 





Re: [PR] [RFC] Relax Upstreaming [tvm-rfcs]

2024-01-22 Thread via GitHub


tqchen commented on PR #89:
URL: https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1904964309

   indeed, check out https://github.com/apache/tvm/issues/16446





Re: [PR] [RFC] Relax Upstreaming [tvm-rfcs]

2024-01-22 Thread via GitHub


tqchen closed pull request #89: [RFC] Relax Upstreaming
URL: https://github.com/apache/tvm-rfcs/pull/89





Re: [PR] [RFC] Relax Upstreaming [tvm-rfcs]

2024-01-22 Thread via GitHub


slyubomirsky commented on PR #89:
URL: https://github.com/apache/tvm-rfcs/pull/89#issuecomment-1904942456

   It's worth noting that with the merging of Unity into TVM's main branch, 
Relax has already been _de facto_ upstreamed.





Re: [PR] [Unity][Draft][WIP] Relax language specification [tvm]

2024-01-22 Thread via GitHub


slyubomirsky commented on PR #14148:
URL: https://github.com/apache/tvm/pull/14148#issuecomment-1904924009

   Now that Unity is in mainline, I will work to make this an RFC instead, so see the upcoming RFC for future updates.





(tvm) branch dependabot/pip/apps/microtvm/pillow-10.2.0 created (now 364a27e665)

2024-01-22 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch dependabot/pip/apps/microtvm/pillow-10.2.0
in repository https://gitbox.apache.org/repos/asf/tvm.git


  at 364a27e665 Bump pillow from 10.0.1 to 10.2.0 in /apps/microtvm

No new revisions were added by this update.



[PR] Bump pillow from 10.0.1 to 10.2.0 in /apps/microtvm [tvm]

2024-01-22 Thread via GitHub


dependabot[bot] opened a new pull request, #16453:
URL: https://github.com/apache/tvm/pull/16453

   Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.1 to 10.2.0.
   
   Release notes (sourced from pillow's GitHub releases):
   https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html

(tvm) branch dependabot/pip/apps/microtvm/cmsisnn/pillow-10.2.0 created (now a9a6a6cff3)

2024-01-22 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch 
dependabot/pip/apps/microtvm/cmsisnn/pillow-10.2.0
in repository https://gitbox.apache.org/repos/asf/tvm.git


  at a9a6a6cff3 Bump pillow from 10.0.1 to 10.2.0 in /apps/microtvm/cmsisnn

No new revisions were added by this update.



[PR] Bump pillow from 10.0.1 to 10.2.0 in /apps/microtvm/ethosu [tvm]

2024-01-22 Thread via GitHub


dependabot[bot] opened a new pull request, #16451:
URL: https://github.com/apache/tvm/pull/16451

   Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.1 to 10.2.0.
   
   Release notes (sourced from pillow's GitHub releases):
   https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html

[PR] Bump pillow from 10.0.1 to 10.2.0 in /apps/microtvm/cmsisnn [tvm]

2024-01-22 Thread via GitHub


dependabot[bot] opened a new pull request, #16452:
URL: https://github.com/apache/tvm/pull/16452

   Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.1 to 10.2.0.
   
   Release notes (sourced from pillow's GitHub releases):
   https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html

(tvm) branch dependabot/pip/apps/microtvm/ethosu/pillow-10.2.0 created (now 9c72e81844)

2024-01-22 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch 
dependabot/pip/apps/microtvm/ethosu/pillow-10.2.0
in repository https://gitbox.apache.org/repos/asf/tvm.git


  at 9c72e81844 Bump pillow from 10.0.1 to 10.2.0 in /apps/microtvm/ethosu

No new revisions were added by this update.



Re: [PR] [Transform] Improve symbolic variable handling in FuseOps [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on PR #16450:
URL: https://github.com/apache/tvm/pull/16450#issuecomment-1904772530

   I could see having a post-processing pass to update the signature, maybe as 
an extension of `RemoveUnusedParameters`.  There would still need to be an 
update to `FuseOps` to have the fused functions marked as private, since the 
post-processing step would only be allowed to update the signature of internal 
functions.
   
   Though, could you expand on what you mean by intermediate expressions?  In 
either case, whether implemented in `FuseOps` or in a post-processing pass, I 
think intermediate expressions would be handled correctly.  If an expression 
`n*4` can be inferred from the tensor shapes, but `n+42` also appears in the 
fused function, then there would still be a shape expr used to expose `n` to 
the fused function.





(tvm) branch main updated (ee5c994a59 -> 20efa23e37)

2024-01-22 Thread csullivan
This is an automated email from the ASF dual-hosted git repository.

csullivan pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from ee5c994a59 [Disco][3rdparty] Add latency optimized all reduce kernels.
 add 20efa23e37 Revert "[Disco][3rdparty] Add latency optimized all reduce 
kernels."

No new revisions were added by this update.

Summary of changes:
 .gitmodules|  3 ---
 3rdparty/trt-llm-allreduce |  1 -
 CMakeLists.txt |  3 ---
 cmake/modules/CUDA.cmake   |  4 ----
 src/runtime/disco/nccl/nccl.cc | 12 ------------
 5 files changed, 23 deletions(-)
 delete mode 160000 3rdparty/trt-llm-allreduce



(tvm) branch main updated: [Disco][3rdparty] Add latency optimized all reduce kernels.

2024-01-22 Thread csullivan
This is an automated email from the ASF dual-hosted git repository.

csullivan pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new ee5c994a59 [Disco][3rdparty] Add latency optimized all reduce kernels.
ee5c994a59 is described below

commit ee5c994a591f58755e69d51bfaabe075df826af4
Author: Chris Sullivan 
AuthorDate: Mon Jan 22 14:44:15 2024 +

[Disco][3rdparty] Add latency optimized all reduce kernels.
---
 .gitmodules|  3 +++
 3rdparty/trt-llm-allreduce |  1 +
 CMakeLists.txt |  3 +++
 cmake/modules/CUDA.cmake   |  4 ++++
 src/runtime/disco/nccl/nccl.cc | 12 ++++++++++++
 5 files changed, 23 insertions(+)

diff --git a/.gitmodules b/.gitmodules
index b5102d9a9b..fd9363b144 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -31,3 +31,6 @@
 [submodule "3rdparty/flashinfer"]
path = 3rdparty/flashinfer
url = https://github.com/flashinfer-ai/flashinfer.git
+[submodule "3rdparty/trt-llm-allreduce"]
+   path = 3rdparty/trt-llm-allreduce
+   url = git@github.com:csullivan/trt-llm-allreduce.git
diff --git a/3rdparty/trt-llm-allreduce b/3rdparty/trt-llm-allreduce
new file mode 160000
index 0000000000..84d707ae96
--- /dev/null
+++ b/3rdparty/trt-llm-allreduce
@@ -0,0 +1 @@
+Subproject commit 84d707ae96651fcbb872edda9b5c7e2897f81bf5
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 058f477dbd..4a2ccae919 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -908,6 +908,9 @@ if(USE_CUDA AND USE_NCCL)
   find_library(LIBRT rt)
   target_link_libraries(tvm PRIVATE nccl ${LIBRT})
   target_link_libraries(tvm_runtime PRIVATE nccl ${LIBRT})
+  install(TARGETS trtllm_allreduce EXPORT ${PROJECT_NAME}Targets DESTINATION lib${LIB_SUFFIX})
+  target_link_libraries(tvm PRIVATE -Wl,--no-as-needed trtllm_allreduce)
+  target_link_libraries(tvm_runtime PRIVATE -Wl,--no-as-needed trtllm_allreduce)
 endif()
 
 if(USE_ROCM AND USE_RCCL)
diff --git a/cmake/modules/CUDA.cmake b/cmake/modules/CUDA.cmake
index 84f466f591..e2c55d0c68 100644
--- a/cmake/modules/CUDA.cmake
+++ b/cmake/modules/CUDA.cmake
@@ -47,6 +47,10 @@ if(USE_CUDA)
     set(CMAKE_CUDA_ARCHITECTURES native)
   endif()
 
+  if(USE_CUDA AND USE_NCCL)
+    add_subdirectory(${PROJECT_SOURCE_DIR}/3rdparty/trt-llm-allreduce)
+  endif()
+
   if(USE_CUDNN)
     message(STATUS "Build with cuDNN support")
     include_directories(SYSTEM ${CUDA_CUDNN_INCLUDE_DIRS})
diff --git a/src/runtime/disco/nccl/nccl.cc b/src/runtime/disco/nccl/nccl.cc
index 61c307c673..4be8133229 100644
--- a/src/runtime/disco/nccl/nccl.cc
+++ b/src/runtime/disco/nccl/nccl.cc
@@ -24,6 +24,7 @@
 #include 
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -38,6 +39,7 @@
 #if TVM_NCCL_RCCL_SWITCH == 0
 #include 
 
+#include "../../../../3rdparty/trt-llm-allreduce/include/cuda_allreduce.h"
 #include "../../cuda/cuda_common.h"
 #else
 #include 
@@ -140,6 +142,7 @@ struct CCLThreadLocalContext {
   int device_id;
   deviceStream_t default_stream = nullptr;
   ncclComm_t comm;
+  std::unique_ptr custom_allreduce;
 
   void Clear() {
     NCCL_CALL(ncclCommDestroy(comm));
@@ -190,6 +193,8 @@ void InitCCLPerWorker(IntTuple device_ids, std::string unique_id_bytes) {
   worker->ccl = TVM_DISCO_CCL_NAME;
   ctx->worker = worker;
   ctx->device_id = device_id;
+  ctx->custom_allreduce =
+      std::make_unique(worker->num_workers, worker->worker_id, ctx->comm);
   // Initialize the communicator
   ncclUniqueId id;
   std::memcpy(id.internal, unique_id_bytes.data(), NCCL_UNIQUE_ID_BYTES);
@@ -201,6 +206,13 @@ void AllReduce(NDArray send, ReduceKind reduce_kind, NDArray recv) {
   ShapeTuple shape = send.Shape();
   int64_t numel = shape->Product();
   deviceStream_t stream = ctx->GetDefaultStream();
+  // TODO(csullivan) make this work
+  // 1. pass type in
+  // 2. src and dest args
+  // 3. some strategy selection outside, if (!enqueu) do nccl?
+  // 3. reduce kind
+  // 4. pass stream in to custom api
+  // ctx->custom_allreduce->enqueue(send->data, numel);
   NCCL_CALL(ncclAllReduce(send->data, recv->data, numel,
                           /*datatype=*/AsNCCLDataType(DataType(send->dtype)),
                           /*op=*/AsNCCLRedOp(reduce_kind), ctx->comm, stream));
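
The TODO list above outlines the intended wiring. A hypothetical sketch of that strategy selection follows; the enqueue() signature, its boolean "handled" return, and the dest argument are assumptions drawn from the TODO items, not the actual 3rdparty API.

// Hypothetical sketch: try the latency-optimized custom all-reduce first and
// fall back to NCCL when the custom kernel declines the request.
void AllReduceWithFallback(NDArray send, ReduceKind reduce_kind, NDArray recv) {
  CCLThreadLocalContext* ctx = CCLThreadLocalContext::Get();
  int64_t numel = send.Shape()->Product();
  deviceStream_t stream = ctx->GetDefaultStream();
  // Assumed contract: enqueue() returns whether it handled the reduction.
  bool handled = ctx->custom_allreduce &&
                 ctx->custom_allreduce->enqueue(send->data, recv->data, numel, stream);
  if (!handled) {
    NCCL_CALL(ncclAllReduce(send->data, recv->data, numel,
                            /*datatype=*/AsNCCLDataType(DataType(send->dtype)),
                            /*op=*/AsNCCLRedOp(reduce_kind), ctx->comm, stream));
  }
}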



Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462355365


##
src/relax/transform/lambda_lift.cc:
##
@@ -336,176 +266,235 @@ class LambdaLifter : public ExprMutator {
       return it->second;
     }();
 
-    auto global = GlobalVar(lift_func_name);
-    Array<Var> free_vars = FreeVars(func);
     Array<Var> captured_vars;
-
-    Array<Var> typed_captured_vars;
-    bool recursive = false;
-    for (const auto& var : free_vars) {
-      if (!recur_vars_.empty() && var == recur_vars_.back()) {
-        recursive = true;
+    bool is_recursive = false;
+    bool is_closure = false;
+    for (const auto& var : FreeVars(func)) {
+      if (var.same_as(current_lambda_var_)) {
+        is_recursive = true;
       } else {
+        is_closure = true;
         captured_vars.push_back(var);
       }
     }
 
+    Array<Var> typed_captured_vars;
     Map<Var, Expr> rebinding_map;
     for (auto free_var : captured_vars) {
      Var var = Var(free_var->name_hint(), GetStructInfo(free_var), free_var->span);
       typed_captured_vars.push_back(var);
       rebinding_map.Set(free_var, var);
     }
 
-    // recursive call
-    if (recursive) {
-      if (!captured_vars.empty()) {
-        Array<Expr> fvs;
-        for (auto fv : captured_vars) {
-          fvs.push_back(fv);
-        }
-        // it is required by block_blocker, will be updated later
-        UpdateStructInfo(global, GetStructInfo(recur_vars_.back()));
-        lambda_map_.emplace(recur_vars_.back(), Call(global, fvs));
-      } else {
-        if (recur_vars_.size() > 0) {
-          lambda_map_.emplace(recur_vars_.back(), global);
-        }
-      }
+    tvm::Array<Var> lifted_func_params =
+        func_node->params.Map([this](Var var) { return VisitVarDef(var); });
+    for (const auto& var : typed_captured_vars) {
+      lifted_func_params.push_back(var);
     }
 
-    tvm::Array<Var> params;
-    bool all_params_unchanged = true;
-    for (Var param : func_node->params) {
-      Var new_param = this->VisitVarDef(param);
-      params.push_back(new_param);
-      all_params_unchanged &= param.same_as(new_param);
+    auto gvar_lifted_func = GlobalVar(lift_func_name);
+    {
+      auto func_sinfo = Downcast<FuncStructInfo>(func_node->struct_info_);
+      if (is_closure) {
+        func_sinfo = FuncStructInfo(lifted_func_params.Map(GetStructInfo), func_sinfo->ret,
+                                    func_sinfo->purity);
+      }
+      UpdateStructInfo(gvar_lifted_func, func_sinfo);
     }
 
-    Expr body = this->VisitWithNewScope(func_node->body);
-    Expr visited_func;
+    Expr body = func_node->body;
 
-    if (all_params_unchanged && body.same_as(func_node->body)) {
-      visited_func = GetRef(func_node);
-    } else if (const auto& body_sinfo = MatchStructInfo(body)) {
-      visited_func =
-          Function(params, body, body_sinfo.value(), func_node->is_pure, func_node->attrs);
-    } else {
-      visited_func =
-          Function(params, body, func_node->ret_struct_info, func_node->is_pure, func_node->attrs);
+    // recursive call
+    if (is_recursive && is_closure) {
+      // it is required by block_blocker, will be updated later
+      nested_closure_map_.emplace(
+          current_lambda_var_.value(),
+          Call(gvar_lifted_func, captured_vars.Map([](Var var) -> Expr { return var; })));
     }
-    auto new_func = Downcast<Function>(visited_func);
 
-    Function lifted_func;
-    bool is_closure = IsClosure(captured_vars);
     if (!is_closure) {
-      lifted_func = Function(
-          /*params=*/new_func->params,
-          /*body=*/new_func->body,
-          /*ret_struct_info=*/new_func->ret_struct_info,
-          /*is_pure=*/new_func->is_pure,
-          /*attrs=*/new_func->attrs,
-          /*span=*/new_func->span);
-    } else {
-      // Flatten the Closure
-      std::vector<Var> closure_params;
-      closure_params.reserve(func->params.size() + typed_captured_vars.size());
-      for (size_t i = 0; i < func->params.size(); ++i) {
-        closure_params.emplace_back(func->params[i]);
-      }
-      for (size_t i = 0; i < typed_captured_vars.size(); ++i) {
-        closure_params.emplace_back(typed_captured_vars[i]);
-      }
+      rebind_map_.emplace(current_lambda_var_.value(), gvar_lifted_func);
+    }
 
-      lifted_func = Function(/*params=*/closure_params,
-                             /*body=*/Bind(new_func->body, rebinding_map),
-                             /*ret_struct_info=*/new_func->ret_struct_info,
-                             /*is_pure=*/new_func->is_pure,
-                             /*attrs=*/new_func->attrs,
-                             /*span=*/func->span);
+    body = this->VisitWithNewScope(body, lifted_func_params);
+    StructInfo ret_struct_info = GetStructInfo(body);
+    body = Bind(body, rebinding_map);
 
-      for (Var param : closure_params) {
-        CHECK(param->checked_type_.defined())
-            << "relax.Function requires params to contain checked_type_";

Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462333764


##
src/relax/transform/lambda_lift.cc:
##
@@ -336,176 +266,235 @@ class LambdaLifter : public ExprMutator {
       return it->second;
     }();
 
-    auto global = GlobalVar(lift_func_name);
-    Array<Var> free_vars = FreeVars(func);
     Array<Var> captured_vars;
-
-    Array<Var> typed_captured_vars;
-    bool recursive = false;
-    for (const auto& var : free_vars) {
-      if (!recur_vars_.empty() && var == recur_vars_.back()) {
-        recursive = true;
+    bool is_recursive = false;
+    bool is_closure = false;
+    for (const auto& var : FreeVars(func)) {
+      if (var.same_as(current_lambda_var_)) {
+        is_recursive = true;
       } else {
+        is_closure = true;
         captured_vars.push_back(var);
       }
     }
 
+    Array<Var> typed_captured_vars;
     Map<Var, Expr> rebinding_map;
     for (auto free_var : captured_vars) {
      Var var = Var(free_var->name_hint(), GetStructInfo(free_var), free_var->span);
       typed_captured_vars.push_back(var);
       rebinding_map.Set(free_var, var);
     }
 
-    // recursive call
-    if (recursive) {
-      if (!captured_vars.empty()) {
-        Array<Expr> fvs;
-        for (auto fv : captured_vars) {
-          fvs.push_back(fv);
-        }
-        // it is required by block_blocker, will be updated later
-        UpdateStructInfo(global, GetStructInfo(recur_vars_.back()));
-        lambda_map_.emplace(recur_vars_.back(), Call(global, fvs));
-      } else {
-        if (recur_vars_.size() > 0) {
-          lambda_map_.emplace(recur_vars_.back(), global);
-        }
-      }
+    tvm::Array<Var> lifted_func_params =
+        func_node->params.Map([this](Var var) { return VisitVarDef(var); });
+    for (const auto& var : typed_captured_vars) {
+      lifted_func_params.push_back(var);
     }
 
-    tvm::Array<Var> params;
-    bool all_params_unchanged = true;
-    for (Var param : func_node->params) {
-      Var new_param = this->VisitVarDef(param);
-      params.push_back(new_param);
-      all_params_unchanged &= param.same_as(new_param);
+    auto gvar_lifted_func = GlobalVar(lift_func_name);
+    {
+      auto func_sinfo = Downcast<FuncStructInfo>(func_node->struct_info_);
+      if (is_closure) {
+        func_sinfo = FuncStructInfo(lifted_func_params.Map(GetStructInfo), func_sinfo->ret,
+                                    func_sinfo->purity);
+      }
+      UpdateStructInfo(gvar_lifted_func, func_sinfo);
     }
 
-    Expr body = this->VisitWithNewScope(func_node->body);
-    Expr visited_func;
+    Expr body = func_node->body;
 
-    if (all_params_unchanged && body.same_as(func_node->body)) {
-      visited_func = GetRef(func_node);
-    } else if (const auto& body_sinfo = MatchStructInfo(body)) {
-      visited_func =
-          Function(params, body, body_sinfo.value(), func_node->is_pure, func_node->attrs);
-    } else {
-      visited_func =
-          Function(params, body, func_node->ret_struct_info, func_node->is_pure, func_node->attrs);
+    // recursive call
+    if (is_recursive && is_closure) {
+      // it is required by block_blocker, will be updated later
+      nested_closure_map_.emplace(
+          current_lambda_var_.value(),
+          Call(gvar_lifted_func, captured_vars.Map([](Var var) -> Expr { return var; })));

Review Comment:
   Yup, that's correct.  I'm wondering if there should be an implicit conversion from `Array<Var>` to `Array<Expr>`, to avoid needing this type of conversion.






Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462330502



Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462323573


##
src/relax/transform/lambda_lift.cc:
##
@@ -336,176 +266,235 @@ class LambdaLifter : public ExprMutator {
       return it->second;
     }();

-    auto global = GlobalVar(lift_func_name);
-    Array<Var> free_vars = FreeVars(func);
     Array<Var> captured_vars;
-
-    Array<Var> typed_captured_vars;
-    bool recursive = false;
-    for (const auto& var : free_vars) {
-      if (!recur_vars_.empty() && var == recur_vars_.back()) {
-        recursive = true;
+    bool is_recursive = false;
+    bool is_closure = false;
+    for (const auto& var : FreeVars(func)) {
+      if (var.same_as(current_lambda_var_)) {
+        is_recursive = true;
       } else {
+        is_closure = true;
         captured_vars.push_back(var);
       }
     }

+    Array<Var> typed_captured_vars;
     Map<Var, Expr> rebinding_map;
     for (auto free_var : captured_vars) {
       Var var = Var(free_var->name_hint(), GetStructInfo(free_var), free_var->span);
       typed_captured_vars.push_back(var);
       rebinding_map.Set(free_var, var);
     }

-    // recursive call
-    if (recursive) {
-      if (!captured_vars.empty()) {
-        Array<Expr> fvs;
-        for (auto fv : captured_vars) {
-          fvs.push_back(fv);
-        }
-        // it is required by block_blocker, will be updated later
-        UpdateStructInfo(global, GetStructInfo(recur_vars_.back()));
-        lambda_map_.emplace(recur_vars_.back(), Call(global, fvs));
-      } else {
-        if (recur_vars_.size() > 0) {
-          lambda_map_.emplace(recur_vars_.back(), global);
-        }
-      }
+    tvm::Array<Var> lifted_func_params =
+        func_node->params.Map([this](Var var) { return VisitVarDef(var); });
+    for (const auto& var : typed_captured_vars) {
+      lifted_func_params.push_back(var);
     }

-    tvm::Array<Var> params;
-    bool all_params_unchanged = true;
-    for (Var param : func_node->params) {
-      Var new_param = this->VisitVarDef(param);
-      params.push_back(new_param);
-      all_params_unchanged &= param.same_as(new_param);
+    auto gvar_lifted_func = GlobalVar(lift_func_name);
+    {
+      auto func_sinfo = Downcast<FuncStructInfo>(func_node->struct_info_);
+      if (is_closure) {
+        func_sinfo = FuncStructInfo(lifted_func_params.Map(GetStructInfo), func_sinfo->ret,
+                                    func_sinfo->purity);
+      }
+      UpdateStructInfo(gvar_lifted_func, func_sinfo);
     }

-    Expr body = this->VisitWithNewScope(func_node->body);
-    Expr visited_func;
+    Expr body = func_node->body;

-    if (all_params_unchanged && body.same_as(func_node->body)) {
-      visited_func = GetRef<Expr>(func_node);
-    } else if (const auto& body_sinfo = MatchStructInfo(body)) {
-      visited_func =
-          Function(params, body, body_sinfo.value(), func_node->is_pure, func_node->attrs);
-    } else {
-      visited_func =
-          Function(params, body, func_node->ret_struct_info, func_node->is_pure, func_node->attrs);
+    // recursive call
+    if (is_recursive && is_closure) {
+      // it is required by block_blocker, will be updated later
+      nested_closure_map_.emplace(
+          current_lambda_var_.value(),
+          Call(gvar_lifted_func, captured_vars.Map([](Var var) -> Expr { return var; })));
     }
-    auto new_func = Downcast<Function>(visited_func);

-    Function lifted_func;
-    bool is_closure = IsClosure(captured_vars);
     if (!is_closure) {
-      lifted_func = Function(
-          /*params=*/new_func->params,
-          /*body=*/new_func->body,
-          /*ret_struct_info=*/new_func->ret_struct_info,
-          /*is_pure=*/new_func->is_pure,
-          /*attrs=*/new_func->attrs,
-          /*span=*/new_func->span);
-    } else {
-      // Flatten the Closure
-      std::vector<Var> closure_params;
-      closure_params.reserve(func->params.size() + typed_captured_vars.size());
-      for (size_t i = 0; i < func->params.size(); ++i) {
-        closure_params.emplace_back(func->params[i]);
-      }
-      for (size_t i = 0; i < typed_captured_vars.size(); ++i) {
-        closure_params.emplace_back(typed_captured_vars[i]);
-      }
+      rebind_map_.emplace(current_lambda_var_.value(), gvar_lifted_func);
+    }

-      lifted_func = Function(/*params=*/closure_params,
-                             /*body=*/Bind(new_func->body, rebinding_map),
-                             /*ret_struct_info=*/new_func->ret_struct_info,
-                             /*is_pure=*/new_func->is_pure,
-                             /*attrs=*/new_func->attrs,
-                             /*span=*/func->span);
+    body = this->VisitWithNewScope(body, lifted_func_params);
+    StructInfo ret_struct_info = GetStructInfo(body);
+    body = Bind(body, rebinding_map);

-      for (Var param : closure_params) {
-        CHECK(param->checked_type_.defined())
-            << "relax.Function requires params to contain checked_type_";
-

Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462322217



Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462314825


##
src/relax/transform/lambda_lift.cc:
##
@@ -236,95 +236,25 @@ class LambdaLifter : public ExprMutator {
 
   using ExprMutator::VisitExpr_;
 
-  void VisitBinding_(const VarBindingNode* binding) final {
-    bool is_lambda = binding->value->IsInstance<FunctionNode>();
-    if (is_lambda) {
-      recur_vars_.push_back(binding->var);
+  void VisitBinding_(const VarBindingNode* binding, const FunctionNode* func_node) final {
+    auto cache = current_lambda_var_;
+    current_lambda_var_ = binding->var;
+
+    // ExprMutator::VisitBinding_(binding, func_node);

Review Comment:
   Thank you for the catch. The comment was leftover test code and has now been removed.
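
   To make the new control flow concrete, here is a minimal sketch in plain
   Python (not the TVM API; the class and method names are illustrative) of the
   save-and-restore pattern used by the `VisitBinding_` overload above, which
   keeps `current_lambda_var_` pointing at the innermost lambda binding even
   when lambdas nest:

       # Illustrative only: mirrors the cache/restore of `current_lambda_var_`.
       class LambdaVisitor:
           def __init__(self):
               self.current_lambda_var = None  # innermost lambda binding, if any

           def visit_binding(self, var, func):
               cache = self.current_lambda_var      # save the enclosing context
               self.current_lambda_var = var        # this binding is now innermost
               try:
                   self.visit_function(func)        # may recurse into nested lambdas
               finally:
                   self.current_lambda_var = cache  # restore on the way out

           def visit_function(self, func):
               # Stand-in for the mutator's traversal of the function body.
               for inner_var, inner_func in getattr(func, "nested_bindings", []):
                   self.visit_binding(inner_var, inner_func)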






Re: [PR] [Unity][Transform] Handle symbolic variables in LambdaLift [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on code in PR #16411:
URL: https://github.com/apache/tvm/pull/16411#discussion_r1462312974


##
src/relax/transform/lambda_lift.cc:
##
@@ -336,176 +266,235 @@ class LambdaLifter : public ExprMutator {
       return it->second;
     }();

-    auto global = GlobalVar(lift_func_name);
-    Array<Var> free_vars = FreeVars(func);
     Array<Var> captured_vars;
-
-    Array<Var> typed_captured_vars;
-    bool recursive = false;
-    for (const auto& var : free_vars) {
-      if (!recur_vars_.empty() && var == recur_vars_.back()) {
-        recursive = true;
+    bool is_recursive = false;
+    bool is_closure = false;
+    for (const auto& var : FreeVars(func)) {
+      if (var.same_as(current_lambda_var_)) {
+        is_recursive = true;
       } else {
+        is_closure = true;
         captured_vars.push_back(var);
       }
     }

+    Array<Var> typed_captured_vars;
     Map<Var, Expr> rebinding_map;
     for (auto free_var : captured_vars) {
       Var var = Var(free_var->name_hint(), GetStructInfo(free_var), free_var->span);
       typed_captured_vars.push_back(var);
       rebinding_map.Set(free_var, var);
     }

-    // recursive call
-    if (recursive) {
-      if (!captured_vars.empty()) {
-        Array<Expr> fvs;
-        for (auto fv : captured_vars) {
-          fvs.push_back(fv);
-        }
-        // it is required by block_blocker, will be updated later
-        UpdateStructInfo(global, GetStructInfo(recur_vars_.back()));
-        lambda_map_.emplace(recur_vars_.back(), Call(global, fvs));
-      } else {
-        if (recur_vars_.size() > 0) {
-          lambda_map_.emplace(recur_vars_.back(), global);
-        }
-      }
+    tvm::Array<Var> lifted_func_params =
+        func_node->params.Map([this](Var var) { return VisitVarDef(var); });
+    for (const auto& var : typed_captured_vars) {
+      lifted_func_params.push_back(var);
     }

-    tvm::Array<Var> params;
-    bool all_params_unchanged = true;
-    for (Var param : func_node->params) {
-      Var new_param = this->VisitVarDef(param);
-      params.push_back(new_param);
-      all_params_unchanged &= param.same_as(new_param);
+    auto gvar_lifted_func = GlobalVar(lift_func_name);
+    {
+      auto func_sinfo = Downcast<FuncStructInfo>(func_node->struct_info_);
+      if (is_closure) {
+        func_sinfo = FuncStructInfo(lifted_func_params.Map(GetStructInfo), func_sinfo->ret,
+                                    func_sinfo->purity);
+      }
+      UpdateStructInfo(gvar_lifted_func, func_sinfo);
     }

-    Expr body = this->VisitWithNewScope(func_node->body);
-    Expr visited_func;
+    Expr body = func_node->body;

-    if (all_params_unchanged && body.same_as(func_node->body)) {
-      visited_func = GetRef<Expr>(func_node);
-    } else if (const auto& body_sinfo = MatchStructInfo(body)) {
-      visited_func =
-          Function(params, body, body_sinfo.value(), func_node->is_pure, func_node->attrs);
-    } else {
-      visited_func =
-          Function(params, body, func_node->ret_struct_info, func_node->is_pure, func_node->attrs);
+    // recursive call
+    if (is_recursive && is_closure) {
+      // it is required by block_blocker, will be updated later

Review Comment:
   Thank you for the catch.  The typo was present in the original, and the 
comment is also out of date after this change.  I've removed it, and added an 
appropriate comment.
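
   For readers skimming the diff, here is a minimal sketch in plain Python (not
   the TVM API; names are illustrative) of what the lifting does: a captured
   variable becomes an explicit trailing parameter of the lifted top-level
   function, and a recursive reference is rewritten to call the lifted function
   with the captured value re-supplied, which is the rewrite that
   `nested_closure_map_` records above:

       # Before lifting: `inner` is a recursive closure capturing `scale`.
       def outer(scale, n):
           def inner(i):
               return 0 if i <= 0 else scale + inner(i - 1)
           return inner(n)

       # After lifting: the closure becomes a top-level function, the captured
       # variable is appended to the parameter list, and the recursive call
       # passes it along explicitly.
       def lifted_inner(i, scale):
           return 0 if i <= 0 else scale + lifted_inner(i - 1, scale)

       def outer_lifted(scale, n):
           return lifted_inner(n, scale)

       assert outer(2, 3) == outer_lifted(2, 3)  # both evaluate to 6

   Non-closures take the simpler path in the diff: the lambda variable is
   rebound directly to the lifted `GlobalVar` through `rebind_map_`.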






Re: [PR] [Unity][TVMScript] Optionally hide StructInfo that can be inferred [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on PR #16356:
URL: https://github.com/apache/tvm/pull/16356#issuecomment-1904640644

   Updated to target the `main` branch, and to include unit tests for round-tripping opaque functions, both with and without displaying the inferable struct info.
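
   As a rough illustration, a round-trip test of this kind prints a module to
   TVMScript, re-parses it, and asserts structural equality. The helper below
   is a hypothetical sketch, not the PR's actual test code:

       import tvm
       from tvm.script import from_source

       def check_roundtrip(mod: tvm.IRModule) -> None:
           # Print to TVMScript and re-parse; the result should be structurally
           # equal to the original, whether or not inferable struct info was
           # displayed in the printed text.
           reparsed = from_source(mod.script())
           tvm.ir.assert_structural_equal(mod, reparsed)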





Re: [PR] [Unity][Transform] Implement relax.transform.ReorderTakeAfterMatmul [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on PR #16315:
URL: https://github.com/apache/tvm/pull/16315#issuecomment-1904519267

   Rebased onto `main`, now that the main branch includes `unity`. No other changes were made; this is just to avoid a stale CI run before merge.





Re: [PR] [Unity][Transform] Implement relax.transform.ExpandMatmulOfSum [tvm]

2024-01-22 Thread via GitHub


Lunderberg commented on PR #16313:
URL: https://github.com/apache/tvm/pull/16313#issuecomment-1904516242

   Rebased onto `main`, now that the main branch includes `unity`. No other changes were made; this is just to avoid a stale CI run before merge.





[PR] [Transform] Improve symbolic variable handling in FuseOps [tvm]

2024-01-22 Thread via GitHub


Lunderberg opened a new pull request, #16450:
URL: https://github.com/apache/tvm/pull/16450

   Prior to this commit, `FuseOps` and `FuseOpsByPattern` exposed a symbolic variable to the fused function, via an additional `ShapeTuple` argument, whenever the variable was used within the fused function but was not inferable from the other parameter shapes. While this prevents undefined symbolic variables, it can cause issues for downstream use of `CodegenJSON`, which requires all arguments to be tensors or tuples of tensors.
   
   Frequently, every use of a non-inferable symbolic shape occurs within a larger symbolic expression whose value can be inferred. For example, for a function that takes `arg: R.Tensor([N+1])` and returns `R.add(arg, R.const(1))`, `N` itself cannot be inferred from the argument's shape. However, every occurrence of `N` appears as part of the expression `N+1`, and the value of `N+1` can be inferred. Therefore, if we replace `N+1` with a fresh variable `M`, the additional `ShapeTuple` argument isn't required.
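
   As a small sketch of the inference rule (plain Python, not the TVM API;
   dimensions are modeled as ints or strings for illustration): a bare symbolic
   variable can be bound from the observed runtime extent, while a compound
   expression such as `N + 1` only exposes its own value:

       def bind_symbolic_dims(param_shape, runtime_shape):
           bindings = {}
           for dim, actual in zip(param_shape, runtime_shape):
               # A bare variable (an identifier) is directly bindable; a
               # compound expression such as "N + 1" does not define "N".
               if isinstance(dim, str) and dim.isidentifier():
                   bindings.setdefault(dim, actual)
           return bindings

       assert bind_symbolic_dims(["N + 1"], [17]) == {}     # "N" is not recoverable
       assert bind_symbolic_dims(["M"], [17]) == {"M": 17}  # a fresh var is inferable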
   
   In addition, prior to this commit, the `CompositeFunctionAnnotator` visited 
the body of functions without the parameters being considered in-scope.  As a 
result, `EraseToWellDefined` would remove known shapes from the function body's 
`StructInfo`.
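
   A companion sketch, under the same illustrative conventions, of the scoping
   point: erasing struct info whose symbolic variables are out of scope turns
   known shapes into unknowns, which is why the parameters must be treated as
   in-scope while visiting the function body:

       def erase_to_well_defined(shape, vars_in_scope):
           # Keep a dimension only if it is a constant or its symbolic
           # variable is defined in scope; otherwise erase it to "unknown".
           return [d if isinstance(d, int) or d in vars_in_scope else None
                   for d in shape]

       assert erase_to_well_defined([16, "n"], set()) == [16, None]  # "n" erased
       assert erase_to_well_defined([16, "n"], {"n"}) == [16, "n"]   # preserved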





[PR] [microTVM][APPS][RISC-V] KWS model example for ESP32-C3 board [tvm]

2024-01-22 Thread via GitHub


Aleksei-grovety opened a new pull request, #16449:
URL: https://github.com/apache/tvm/pull/16449

   This example demonstrates the process of building and executing a Keyword 
Spotting (KWS) model using TVM and ESP-IDF Python tools specifically for the 
Seeed Studio XIAO ESP32-C3 board with an INMP441 microphone.
   
   cc @Mousius, @gromero, @leandron, @mehrdadh





[PR] [BugTIR] fix thread_sync occurs in letstmt [tvm]

2024-01-22 Thread via GitHub


JackWeiw opened a new pull request, #16447:
URL: https://github.com/apache/tvm/pull/16447

   [LayerNorm Error in thread_storage_sync when read x into shared 
memory](https://discuss.tvm.apache.org/t/layernorm-error-in-thread-storage-sync-when-read-x-into-shared-memory/16269)
   
   I tried reading `x` into shared memory to accelerate LayerNorm ([script here](https://gist.github.com/JackWeiw/f873daaff32212b0b19cf91fda463007)), but an error occurs in the `thread_storage_sync` pass. I found that it is caused by how the `LetStmt` is lowered ([lowered script](https://gist.github.com/JackWeiw/6737f4faa1b486b389721cf7f3f4ad4d)).
   
   

