gemini-code-assist[bot] commented on code in PR #308:
URL: https://github.com/apache/tvm-ffi/pull/308#discussion_r2586328643
##########
python/tvm_ffi/cython/tvm_ffi_python_helpers.h:
##########
@@ -394,20 +396,56 @@ class TVMFFIPyCallManager {
return -1;
}
}
+
+ /*!
+ * \brief Set a py_arg to out using the __tvm_ffi_value__ protocol.
+ * \param setter_factory The factory function to create the setter
+ * \param ctx The call context
+ * \param py_arg The python argument to be set
+ * \param out The output argument
+ * \return 0 on success, -1 on failure
+ */
+ TVM_FFI_INLINE int SetArgumentFFIValueProtocol(TVMFFIPyArgSetterFactory setter_factory,
+                                                TVMFFIPyCallContext* ctx, PyObject* py_arg,
+                                                TVMFFIAny* out) {
+ // RAII guard to ensure counter is decremented even if exceptions occur.
+ class NestedFFIValueCounterGuard {
+ public:
+ explicit NestedFFIValueCounterGuard(int* count) : count_(count) { ++(*count_); }
+ ~NestedFFIValueCounterGuard() { --(*count_); }
+
+ private:
+ int* count_;
+ };
+ NestedFFIValueCounterGuard guard(&(ctx->nested_tvm_ffi_value_count));
+
+ int ret_code = SetArgument(setter_factory, ctx, py_arg, out);
+ // retain the static object in the return value only if it is the outermost nested level
+ // NOTE: we still keep the invariant that each ArgSetter can have at most one temporary FFI
+ // object, so it ensures that we won't overflow the temporary FFI object stack.
+ // The counter mechanism is applied per TVMFFIPyCallContext,
+ // so nested calls into tuple conversion will be handled in another call context.
+ if (ret_code == 0 && ctx->nested_tvm_ffi_value_count == 1 &&
+ out->type_index >= kTVMFFIStaticObjectBegin) {
+ if (TVMFFIObjectIncRef(out->v_ptr) != 0) {
+ PyErr_SetString(PyExc_RuntimeError, "Failed to retain temp object");
+ return -1;
+ }
+ // TVMFFIPyPushTempFFIObject(ctx, out->v_ptr);
+ // push the temporary FFI object to the call context
+ ctx->temp_ffi_objects[ctx->num_temp_ffi_objects++] = out->v_ptr;
+ return 0;
+ }
+ return ret_code;
Review Comment:

There's a potential double-free bug here. The logic unconditionally retains and pushes the object to `temp_ffi_objects`, assuming that `SetArgument` doesn't manage temporary objects. However, many setters called by `SetArgument` (e.g., for lists, tuples, or other newly created objects) already push the temporary object handle for cleanup.

This leads to the same object being pushed to `temp_ffi_objects` twice but only `IncRef`'d once here, resulting in a double `DecRef` at the end of the call and likely a crash.

To fix this, we should check whether `SetArgument` has already pushed a temporary object, by comparing `num_temp_ffi_objects` before and after the call to `SetArgument`. The object should only be retained and pushed here if a setter hasn't already done so.

Additionally, it's better to use the `TVMFFIPyPushTempFFIObject` inline function instead of manually inlining its implementation, for better code clarity and maintainability.
```cpp
int num_temp_before = ctx->num_temp_ffi_objects;
int ret_code = SetArgument(setter_factory, ctx, py_arg, out);
if (ret_code != 0) {
  return ret_code;
}
bool pushed_by_setter = (ctx->num_temp_ffi_objects > num_temp_before);
// retain the static object in the return value only if it is the outermost nested level
// and if the setter hasn't already managed a temporary object.
if (!pushed_by_setter && ctx->nested_tvm_ffi_value_count == 1 &&
    out->type_index >= kTVMFFIStaticObjectBegin) {
  if (TVMFFIObjectIncRef(out->v_ptr) != 0) {
    PyErr_SetString(PyExc_RuntimeError, "Failed to retain temp object");
    return -1;
  }
  TVMFFIPyPushTempFFIObject(ctx, out->v_ptr);
}
return ret_code;
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]