paleolimbot commented on code in PR #12817:
URL: https://github.com/apache/arrow/pull/12817#discussion_r852234012


##########
r/R/record-batch-reader.R:
##########
@@ -176,3 +176,37 @@ RecordBatchFileReader$create <- function(file) {
   assert_is(file, "InputStream")
   ipc___RecordBatchFileReader__Open(file)
 }
+
+#' Convert an object to an Arrow RecordBatchReader
+#'
+#' @param x An object to convert to a [RecordBatchReader]
+#' @param ... Passed to S3 methods
+#'
+#' @return A [RecordBatchReader]
+#' @export
+#'
+#' @examplesIf arrow_available() && arrow_with_dataset()
+#' reader <- as_record_batch_reader(data.frame(col1 = 1, col2 = "two"))
+#' reader$read_next_batch()
+#'
+as_record_batch_reader <- function(x, ...) {
+  UseMethod("as_record_batch_reader")
+}
+
+#' @rdname as_record_batch_reader
+#' @export
+as_record_batch_reader.RecordBatchReader <- function(x, ...) {
+  x
+}
+
+#' @rdname as_record_batch_reader
+#' @export
+as_record_batch_reader.default <- function(x, ...) {
+  Scanner$create(x)$ToRecordBatchReader()

Review Comment:
   That's a good point, and it definitely shouldn't be in the default method. Is 
there a better bit of code I should use to handle all the ArrowTabulars 
(data.frame, Dataset, Table, RecordBatch, etc.)? I don't think we have to 
provide any guarantees about the result or keep the options stable across 
versions (if a user or developer does want to provide options for the size or 
ordering of batches, they can create a `RecordBatchReader` themselves).
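
For illustration, here is a minimal sketch of one way to cover the classes mentioned above without relying on a catch-all default method. The method names (`as_record_batch_reader.ArrowTabular`, `.Dataset`, `.data.frame`) are illustrative and not necessarily the PR's final shape; each one simply reuses the `Scanner$create(x)$ToRecordBatchReader()` conversion already shown in the diff, but only for types where that conversion is expected to apply.

```r
# Illustrative sketch only: explicit methods instead of a broad default.
# Table and RecordBatch both inherit from ArrowTabular, so a single S3
# method covers the two of them.

#' @rdname as_record_batch_reader
#' @export
as_record_batch_reader.ArrowTabular <- function(x, ...) {
  Scanner$create(x)$ToRecordBatchReader()
}

#' @rdname as_record_batch_reader
#' @export
as_record_batch_reader.Dataset <- function(x, ...) {
  Scanner$create(x)$ToRecordBatchReader()
}

#' @rdname as_record_batch_reader
#' @export
as_record_batch_reader.data.frame <- function(x, ...) {
  # Convert to a Table first, then dispatch to the ArrowTabular method
  as_record_batch_reader(Table$create(x), ...)
}
```

With no default method registered, an unsupported class falls through to the usual "no applicable method" error from `UseMethod()` rather than being handed to `Scanner$create()`, which is the concern raised in the comment above.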


