lidavidm commented on code in PR #14382:
URL: https://github.com/apache/arrow/pull/14382#discussion_r993805820


##########
docs/source/java/dataset.rst:
##########
@@ -32,31 +32,50 @@ is not designed only for querying files but can be extended to serve all
 possible data sources such as from inter-process communication or from other
 network locations, etc.
 
+.. contents::
+
 Getting Started
 ===============
 
+Currently supported file formats are:
+
+- Apache Arrow (`.arrow`)
+- Apache ORC (`.orc`)
+- Apache Parquet (`.parquet`)
+- Comma-Separated Values (`.csv`)
+
Below shows a simplest example of using Dataset to query a Parquet file in Java:
 
 .. code-block:: Java
 
     // read data from file /opt/example.parquet
     String uri = "file:/opt/example.parquet";
-    BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
-    DatasetFactory factory = new FileSystemDatasetFactory(allocator,
-        NativeMemoryPool.getDefault(), FileFormat.PARQUET, uri);
-    Dataset dataset = factory.finish();
-    Scanner scanner = dataset.newScan(new ScanOptions(100)));
-    List<ArrowRecordBatch> batches = StreamSupport.stream(
-        scanner.scan().spliterator(), false)
-            .flatMap(t -> stream(t.execute()))
-            .collect(Collectors.toList());
-
-    // do something with read record batches, for example:
-    analyzeArrowData(batches);
-
-    // finished the analysis of the data, close all resources:
-    AutoCloseables.close(batches);
-    AutoCloseables.close(factory, dataset, scanner);
+    ScanOptions options = new ScanOptions(/*batchSize*/ 5);

Review Comment:
   For the first example, can we use realistic-ish parameters? A batch size of 5 is far too small.
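
   For instance, a sketch with a more realistic value (32768, which this PR itself uses in the Development Guidelines example below):

   ```java
   // sketch only: batch sizes in the tens of thousands of rows are far
   // more typical for real workloads than 5
   ScanOptions options = new ScanOptions(/*batchSize*/ 32768);
   ```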



##########
docs/source/java/dataset.rst:
##########
@@ -32,31 +32,50 @@ is not designed only for querying files but can be extended to serve all
 possible data sources such as from inter-process communication or from other
 network locations, etc.
 
+.. contents::
+
 Getting Started
 ===============
 
+Currently supported file formats are:
+
+- Apache Arrow (`.arrow`)

Review Comment:
   reST uses double backticks for inline literals: ``.arrow``, not `.arrow`.



##########
docs/source/java/dataset.rst:
##########
@@ -65,6 +84,9 @@ Below shows a simplest example of using Dataset to query a Parquet file in Java:
     aware container ``VectorSchemaRoot`` by which user could be able to access
     decoded data conveniently in Java.
 
+    The ``ScanOptions`` `batchSize` argument takes effect only if it is set to a value

Review Comment:
   reST uses double backticks here too: ``batchSize``, not `batchSize`.



##########
docs/source/java/dataset.rst:
##########
@@ -228,3 +250,25 @@ native objects after using. For example:
     AutoCloseables.close(factory, dataset, scanner);
 
 If user forgets to close them then native object leakage might be caused.
+
+Development Guidelines
+======================
+
+* Related to the note about ScanOptions batchSize argument: Let's try to read a Parquet file with gzip compression and 3 row groups:
+
+    .. code-block::
+
+       # Let configure ScanOptions as:
+       ScanOptions options = new ScanOptions(/*batchSize*/ 32768);
+
+       $ parquet-tools meta data4_3rg_gzip.parquet
+       file schema: schema
+       age:         OPTIONAL INT64 R:0 D:1
+       name:        OPTIONAL BINARY L:STRING R:0 D:1
+       row group 1: RC:4 TS:182 OFFSET:4
+       row group 2: RC:4 TS:190 OFFSET:420
+       row group 3: RC:3 TS:179 OFFSET:838
+
+    In this case, we are configuring ScanOptions batchSize argument equals to

Review Comment:
   This is a pretty confusing way to think about it. The batch size parameter controls the _maximum_ batch size only. If the underlying file has smaller batches, those will not be consolidated.
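
   To make "maximum" concrete with the 3-row-group file above, a sketch (the file location is assumed; imports from `org.apache.arrow.dataset.*`, `org.apache.arrow.memory`, and `org.apache.arrow.vector.ipc` are omitted, and the block would live in a method that throws `Exception`):

   ```java
   ScanOptions options = new ScanOptions(/*batchSize*/ 32768);
   try (BufferAllocator allocator = new RootAllocator();
        DatasetFactory factory = new FileSystemDatasetFactory(allocator,
            NativeMemoryPool.getDefault(), FileFormat.PARQUET,
            "file:/opt/data4_3rg_gzip.parquet");  // assumed path
        Dataset dataset = factory.finish();
        Scanner scanner = dataset.newScan(options);
        ArrowReader reader = scanner.scanBatches()) {
       while (reader.loadNextBatch()) {
           // prints 4, 4, 3 for this file: one batch per row group, even
           // though batchSize permits up to 32768 rows; smaller batches
           // are returned as-is, never merged
           System.out.println(reader.getVectorSchemaRoot().getRowCount());
       }
   }
   ```

   Framing it as "`batchSize` is an upper bound; smaller batches pass through unchanged" would be clearer than tying the explanation to row-group arithmetic.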



##########
docs/source/java/dataset.rst:
##########
@@ -32,31 +32,50 @@ is not designed only for querying files but can be extended to serve all
 possible data sources such as from inter-process communication or from other
 network locations, etc.
 
+.. contents::
+
 Getting Started
 ===============
 
+Currently supported file formats are:
+
+- Apache Arrow (`.arrow`)
+- Apache ORC (`.orc`)
+- Apache Parquet (`.parquet`)
+- Comma-Separated Values (`.csv`)
+
Below shows a simplest example of using Dataset to query a Parquet file in Java:
 
 .. code-block:: Java
 
     // read data from file /opt/example.parquet
     String uri = "file:/opt/example.parquet";
-    BufferAllocator allocator = new RootAllocator(Long.MAX_VALUE);
-    DatasetFactory factory = new FileSystemDatasetFactory(allocator,
-        NativeMemoryPool.getDefault(), FileFormat.PARQUET, uri);
-    Dataset dataset = factory.finish();
-    Scanner scanner = dataset.newScan(new ScanOptions(100)));
-    List<ArrowRecordBatch> batches = StreamSupport.stream(
-        scanner.scan().spliterator(), false)
-            .flatMap(t -> stream(t.execute()))
-            .collect(Collectors.toList());
-
-    // do something with read record batches, for example:
-    analyzeArrowData(batches);
-
-    // finished the analysis of the data, close all resources:
-    AutoCloseables.close(batches);
-    AutoCloseables.close(factory, dataset, scanner);
+    ScanOptions options = new ScanOptions(/*batchSize*/ 5);
+    try (
+        BufferAllocator allocator = new RootAllocator();

Review Comment:
   The allocator should be within a try-with-resources block (ideally everything should be).
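
   i.e., a sketch of the shape I mean, with `uri` and `options` as in the surrounding example:

   ```java
   try (BufferAllocator allocator = new RootAllocator();
        DatasetFactory factory = new FileSystemDatasetFactory(allocator,
            NativeMemoryPool.getDefault(), FileFormat.PARQUET, uri);
        Dataset dataset = factory.finish();
        Scanner scanner = dataset.newScan(options)) {
       // work with the scanner here; all four resources are closed
       // automatically, in reverse declaration order, when the block exits
   }
   ```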


