zeroshade commented on code in PR #36489:
URL: https://github.com/apache/arrow/pull/36489#discussion_r1268207882


##########
cpp/src/arrow/buffer.h:
##########
@@ -57,18 +58,31 @@ class ARROW_EXPORT Buffer {
   ///
   /// \note The passed memory must be kept alive through some other means
   Buffer(const uint8_t* data, int64_t size)
-      : is_mutable_(false), is_cpu_(true), data_(data), size_(size), capacity_(size) {
+      : is_mutable_(false),
+        is_cpu_(true),
+        data_(data),
+        size_(size),
+        capacity_(size),
+        device_type_(DeviceAllocationType::kCPU) {
     SetMemoryManager(default_cpu_memory_manager());
   }
 
   Buffer(const uint8_t* data, int64_t size, std::shared_ptr<MemoryManager> mm,
-         std::shared_ptr<Buffer> parent = NULLPTR)
+         std::shared_ptr<Buffer> parent = NULLPTR,
+         std::optional<DeviceAllocationType> device_type = std::nullopt)
       : is_mutable_(false),
         data_(data),
         size_(size),
         capacity_(size),
         parent_(std::move(parent)) {
+    // will set device_type from the memory manager
     SetMemoryManager(std::move(mm));
+    // if a device type is specified, use that instead. for example:
+    // CUDA_HOST. The CudaMemoryManager will set device_type_ to CUDA,
+    // but you can specify CUDA_HOST as the device type to override it.

Review Comment:
   The reason to override the device type set by the memory manager is when you 
want to use the same memory manager to manage both kinds of allocations. In this 
case we use the same CudaMemoryManager for both device memory and pinned host 
memory, but buffers which are allocated with `cudaHostAlloc` should have the type 
CUDA_HOST. This lets us avoid needing separate memory managers for `CUDA` vs 
`CUDA_HOST`.
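
   As a rough sketch (not code from this PR): below is how the override could be 
used for a pinned-host allocation. The helper name, the `cuda_mm` manager, and 
the enum spelling `DeviceAllocationType::kCUDA_HOST` are assumptions for 
illustration, not the PR's final API.

```cpp
#include <memory>

#include "arrow/buffer.h"

// Sketch only: `pinned_data` is assumed to point at memory allocated with
// cudaHostAlloc, and `cuda_mm` is assumed to be the shared CudaMemoryManager
// that also manages regular device buffers.
std::shared_ptr<arrow::Buffer> MakePinnedHostBuffer(
    const uint8_t* pinned_data, int64_t size,
    std::shared_ptr<arrow::MemoryManager> cuda_mm) {
  // Without the last argument the buffer would report the device type chosen
  // by the memory manager (CUDA); passing CUDA_HOST overrides it so the same
  // manager can serve both device and pinned-host allocations.
  return std::make_shared<arrow::Buffer>(
      pinned_data, size, std::move(cuda_mm), /*parent=*/nullptr,
      arrow::DeviceAllocationType::kCUDA_HOST);  // assumed enum value name
}
```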


