romainfrancois commented on pull request #9615:
URL: https://github.com/apache/arrow/pull/9615#issuecomment-802741367


   The issue with doing R things in parallel is that you can't, really: the R API is not thread safe. Maybe we can have an R-specific mutex:
   
   ```cpp
   #include <mutex>
   
   std::mutex& get_r_mutex() {
     static std::mutex m;
     return m;
   }
   ```
   
   that we lock whenever we do need to call into the R API, including when making a `cpp11::doubles`, for example. Then use it in a wrapper class like:
   
   ```cpp
   template <class Vector>
   class synchronized {
   public:
     // constructing the cpp11 vector calls into the R API,
     // so take the R mutex first
     explicit synchronized(SEXP x) {
       std::lock_guard<std::mutex> lock(get_r_mutex());
       data_ = new Vector(x);
     }
   
     // copying would double-delete data_, so forbid it
     synchronized(const synchronized&) = delete;
     synchronized& operator=(const synchronized&) = delete;
   
     Vector& data() {
       return *data_;
     }
   
     // deleting the vector also touches the R API: lock again
     ~synchronized() {
       std::lock_guard<std::mutex> lock(get_r_mutex());
       delete data_;
     }
   
   private:
     Vector* data_;
   };
   ```
   
   so that we can have something like this: 
   
   ```cpp
   #include <atomic>
   #include <chrono>
   #include <thread>
   
   // [[arrow::export]]
   int parallel_test(int n) {
     auto tasks =
         arrow::internal::TaskGroup::MakeThreaded(arrow::internal::GetCpuThreadPool());
     SEXP x = PROTECT(Rf_allocVector(REALSXP, 100));
   
     std::atomic<int> count(0);
     for (int i = 0; i < n; i++) {
       tasks->Append([x, &count] {
         // construction locks the R mutex, so this is safe off the main thread
         synchronized<cpp11::doubles> dx(x);
   
         int nx = static_cast<int>(dx.data().size());
         std::this_thread::sleep_for(std::chrono::milliseconds(100));
         count += nx;
   
         return arrow::Status::OK();
       });
     }
   
     auto status = tasks->Finish();
     UNPROTECT(1);
     if (!status.ok()) cpp11::stop(status.ToString());
     return count;
   }
   ```
   
   Of course this only makes sure that the `synchronized<cpp11::doubles>` is safe on construction and destruction; calls to its other methods would also need to lock/unlock.
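   
   For those, a small helper along these lines might work. A minimal sketch, assuming the `get_r_mutex()` above; `with_r_lock` is a hypothetical name, not something in this PR:
   
   ```cpp
   // hypothetical helper (not in this PR): run a callable that touches the
   // R API while holding the shared R mutex, forwarding its result
   template <class F>
   auto with_r_lock(F&& f) -> decltype(f()) {
     std::lock_guard<std::mutex> lock(get_r_mutex());
     return f();
   }
   ```
   
   A worker could then read from the wrapped vector with something like `double first = with_r_lock([&] { return dx.data()[0]; });`.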
   

