paleolimbot commented on code in PR #12817:
URL: https://github.com/apache/arrow/pull/12817#discussion_r853367957


##########
r/R/record-batch-reader.R:
##########
@@ -176,3 +176,37 @@ RecordBatchFileReader$create <- function(file) {
   assert_is(file, "InputStream")
   ipc___RecordBatchFileReader__Open(file)
 }
+
+#' Convert an object to an Arrow RecordBatchReader
+#'
+#' @param x An object to convert to a [RecordBatchReader]
+#' @param ... Passed to S3 methods
+#'
+#' @return A [RecordBatchReader]
+#' @export
+#'
+#' @examplesIf arrow_available() && arrow_with_dataset()
+#' reader <- as_record_batch_reader(data.frame(col1 = 1, col2 = "two"))
+#' reader$read_next_batch()
+#'
+as_record_batch_reader <- function(x, ...) {
+  UseMethod("as_record_batch_reader")
+}
+
+#' @rdname as_record_batch_reader
+#' @export
+as_record_batch_reader.RecordBatchReader <- function(x, ...) {
+  x
+}
+
+#' @rdname as_record_batch_reader
+#' @export
+as_record_batch_reader.default <- function(x, ...) {
+  Scanner$create(x)$ToRecordBatchReader()
+}

Review Comment:
   I added the Table -> RecordBatchReader method you mentioned in C++ and used 
that for RecordBatch, data.frame, Table, and `arrow_dplyr_query`, which 
simplified things considerably! I kept `Scanner$create(x)$ToRecordBatchReader()` 
for Dataset for now, since reading a whole dataset into a Table seems like a 
worse piece of placeholder code. I think there's an open ticket for improving 
RecordBatchReader support for `arrow_dplyr_query`; I'll either make a new 
ticket or add a note there to make sure the `as_record_batch_reader()` methods 
are updated when that ticket is fixed. A rough sketch of the dispatch I have 
in mind is below.
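
   For reference, here is roughly what those class-specific methods could look 
like. This is a sketch only: the binding name `RecordBatchReader__from_Table()` 
is a placeholder for the new C++ Table -> RecordBatchReader method, since the 
actual binding name isn't shown in this diff.

   ```r
   #' @rdname as_record_batch_reader
   #' @export
   as_record_batch_reader.Table <- function(x, ...) {
     # Wrap an in-memory Table directly via the new C++ method
     # (binding name assumed here, not confirmed by this diff)
     RecordBatchReader__from_Table(x)
   }

   #' @rdname as_record_batch_reader
   #' @export
   as_record_batch_reader.RecordBatch <- function(x, ...) {
     # Promote a single batch to a one-batch Table, then reuse the Table method
     as_record_batch_reader(Table$create(x), ...)
   }

   #' @rdname as_record_batch_reader
   #' @export
   as_record_batch_reader.data.frame <- function(x, ...) {
     # Convert to a Table first so the Table method does the work
     as_record_batch_reader(Table$create(x), ...)
   }

   #' @rdname as_record_batch_reader
   #' @export
   as_record_batch_reader.Dataset <- function(x, ...) {
     # Kept on the Scanner path for now: scanning streams batches instead of
     # materializing the whole dataset as a Table
     Scanner$create(x)$ToRecordBatchReader()
   }
   ```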


