BenkangPeng opened a new pull request, #18375:
URL: https://github.com/apache/tvm/pull/18375
## Summary
This PR fixes a bug introduced during the TVM FFI refactoring that made the
`BaseComputeOp.axis`, `BaseComputeOp.reduce_axis`, and `ScanOp.scan_axis`
attributes inaccessible from Python.
## Problem
After the TVM FFI library was split out into `3rdparty/tvm-ffi`, the Python
property wrappers in `python/tvm/te/tensor.py` stopped working: they rely on
`__getattr__`, which no longer exists in the new FFI system.
### Error when accessing these attributes:
```python
import tvm
from tvm import te
A = te.placeholder((128, 128), name='A')
rk = te.reduce_axis((0, 128), name='k')
C = te.compute((128, 128), lambda i, j: te.sum(A[i, rk], axis=rk), name='C')
# This throws: AttributeError: 'ComputeOp' object has no attribute '__getattr__'
axis = C.op.axis
```
### Root Cause
1. In C++ (`include/tvm/te/operation.h` lines 147-149),
`BaseComputeOpNode`'s attributes are properly
registered via FFI reflection:
```cpp
refl::ObjectDef<BaseComputeOpNode>()
    .def_ro("axis", &BaseComputeOpNode::axis)
    .def_ro("reduce_axis", &BaseComputeOpNode::reduce_axis);
```
2. The new FFI system (`3rdparty/tvm-ffi/python/tvm_ffi/registry.py`)
automatically
creates Python properties for these C++ attributes.
3. However, in `python/tvm/te/tensor.py`, the old `@property` wrappers using
`self.__getattr__("axis")` **block** the FFI auto-generated properties
from being added:
```python
# This prevents FFI from adding the auto-generated property
@property
def axis(self):
    return self.__getattr__("axis")  # __getattr__ doesn't exist anymore!
```
4. The FFI registry skips adding properties if they already exist on the class
(`3rdparty/tvm-ffi/python/tvm_ffi/registry.py`, lines 272-275):
```python
def _add_class_attrs(type_cls: type, type_info: TypeInfo) -> type:
    for field in type_info.fields:
        name = field.name
        if not hasattr(type_cls, name):  # skip already defined attributes
            setattr(type_cls, name, field.as_property(type_cls))
```
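The interaction between the stale wrapper (point 3) and the `hasattr` check (point 4) can be reproduced in isolation. The following is a minimal sketch; `Registry`, `Broken`, and `Fixed` are illustrative names, not TVM's actual classes:

```python
# Minimal sketch of the bug: a stale class-level property makes
# hasattr() return True, so the registry never installs the real one.

class Registry:
    @staticmethod
    def add_attr(cls, name, prop):
        # Mirrors _add_class_attrs: skip names already defined on the class.
        if not hasattr(cls, name):
            setattr(cls, name, prop)

class Broken:
    @property
    def axis(self):
        # Relies on a __getattr__ hook that no longer exists.
        return self.__getattr__("axis")

class Fixed:
    pass  # no wrapper, so the registry's property gets installed

ffi_prop = property(lambda self: "ffi-provided axis")
Registry.add_attr(Broken, "axis", ffi_prop)  # skipped: hasattr is True
Registry.add_attr(Fixed, "axis", ffi_prop)   # installed

print(Fixed().axis)  # ffi-provided axis
try:
    Broken().axis
except AttributeError as err:
    print("Broken:", err)  # ... has no attribute '__getattr__'
```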
## Solution
Remove the broken Python property wrappers in `BaseComputeOp` and `ScanOp`,
allowing
the FFI system to automatically expose these C++ attributes as intended.
The FFI-generated properties provide the same functionality without
requiring manual
wrapper code.
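For illustration, a hedged sketch of what the cleaned-up classes in `python/tvm/te/tensor.py` look like after the change (base classes, FFI registration decorators, and unrelated methods are omitted):

```python
class BaseComputeOp:
    """Base class for compute ops.

    Note: ``axis`` and ``reduce_axis`` are auto-exposed as properties by
    the FFI reflection system; no Python wrappers are defined here.
    """

class ScanOp:
    """Scan operation.

    Note: ``scan_axis`` is likewise auto-exposed by FFI reflection.
    """
```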
## Changes
- `python/tvm/te/tensor.py`:
- Removed the `@property` decorators for `axis` and `reduce_axis` in `BaseComputeOp`.
- Removed the `@property` decorator for `scan_axis` in `ScanOp`.
- Added docstring comments explaining that these attributes are auto-exposed by FFI.
## Testing
Verified that the attributes are now accessible:
```python
import tvm
from tvm import te
n, k, m = 128, 128, 128
A = te.placeholder((n, k), name='A')
B = te.placeholder((k, m), name='B')
rk = te.reduce_axis((0, k), name='k')
C = te.compute((n, m), lambda i, j: te.sum(A[i, rk] * B[rk, j], axis=rk), name='C')

# These now work correctly
print(C.op.axis)
# [T.iter_var(i, T.Range(0, 128), "DataPar", ""), T.iter_var(j, T.Range(0, 128), "DataPar", "")]
print(C.op.reduce_axis)
# [T.iter_var(k, T.Range(0, 128), "CommReduce", "")]
```