westonpace commented on a change in pull request #11008:
URL: https://github.com/apache/arrow/pull/11008#discussion_r709679007



##########
File path: python/pyarrow/dataset.py
##########
@@ -678,17 +678,32 @@ def dataset(source, schema=None, format=None, filesystem=None,
         )
 
 
-def _ensure_write_partitioning(scheme):
-    if scheme is None:
-        scheme = partitioning(pa.schema([]))
-    if not isinstance(scheme, Partitioning):
-        # TODO support passing field names, and get types from schema
-        raise ValueError("partitioning needs to be actual Partitioning object")
-    return scheme
+def _ensure_write_partitioning(part, schema, flavor):
+    if isinstance(part, Partitioning) and flavor:
+        raise ValueError(
+            "Providing a partitioning_flavor with "
+            "a Partitioning object is not supported"
+        )
+    elif isinstance(part, (tuple, list)):
+        # Names of fields were provided instead of a partitioning object.
+        # Create a partitioning factory with those field names.
+        part = partitioning(
+            schema=pa.schema([schema.field(f) for f in part]),
+            flavor=flavor
+        )
+    elif part is None:
+        part = partitioning(pa.schema([]), flavor=flavor)
+
+    if not isinstance(part, Partitioning):
+        raise ValueError(
+            "partitioning must be a Partitioning object with a schema"

Review comment:
```suggestion
            "partitioning must be a Partitioning object"
```
   What does `with a schema` mean?
   
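For reference, the dispatch under discussion can be sketched in plain Python with a hypothetical stand-in `Partitioning` class (this is an illustration of the validation flow, not pyarrow itself):

```python
# Stand-in sketch of the _ensure_write_partitioning dispatch.
# `Partitioning` here is a hypothetical placeholder, not pyarrow's class.
class Partitioning:
    pass


def ensure_write_partitioning(part, flavor=None):
    if isinstance(part, Partitioning) and flavor:
        raise ValueError(
            "Providing a partitioning_flavor with "
            "a Partitioning object is not supported")
    if isinstance(part, (tuple, list)):
        # In the real code a partitioning is built from the named
        # fields and the schema; a placeholder stands in here.
        part = Partitioning()
    elif part is None:
        # No partitioning requested: fall back to an empty scheme.
        part = Partitioning()
    if not isinstance(part, Partitioning):
        raise ValueError("partitioning must be a Partitioning object")
    return part
```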

##########
File path: python/pyarrow/_dataset.pyx
##########
@@ -1998,6 +1998,10 @@ cdef class PartitioningFactory(_Weakrefable):
     cdef inline shared_ptr[CPartitioningFactory] unwrap(self):
         return self.wrapped
 
+    @property
+    def type_name(self):
+        return frombytes(self.factory.type_name())
+

Review comment:
       Does this provide any real value? Note that the C++ implementation will currently return `"schema"` for `DirectoryPartitioning`.
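To make the concern concrete, here are hypothetical stand-ins (not pyarrow's real classes) showing the mismatch a user would see if the factory's `type_name` is surfaced as-is:

```python
# Hypothetical stand-ins illustrating the naming mismatch noted above;
# these are not pyarrow's actual factory classes.
class DirectoryPartitioningFactory:
    def type_name(self):
        # mirrors the current C++ behavior described in the comment:
        # the name does not match the DirectoryPartitioning class name
        return "schema"


class HivePartitioningFactory:
    def type_name(self):
        return "hive"
```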

##########
File path: python/pyarrow/dataset.py
##########
@@ -714,9 +729,12 @@ def write_dataset(data, base_dir, basename_template=None, format=None,
         and `format` is not specified, it defaults to the same format as the
         specified FileSystemDataset. When writing a Table or RecordBatch, this
         keyword is required.
-    partitioning : Partitioning, optional
+    partitioning : Partitioning or list[str], optional
         The partitioning scheme specified with the ``partitioning()``
-        function.
+        function or as a list of field names.
+    partitioning_flavor : str, optional

Review comment:
       Might be good to mention the default behavior? It's kind of hidden in the code, but I think it's inheriting the default from `pyarrow.dataset.partitioning`.
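For illustration, the difference the flavor default makes on generated partition paths can be sketched as follows (a simplified stand-in, not pyarrow's implementation; the formatting rules are assumed from the usual directory vs. hive conventions):

```python
def format_partition_path(fields, flavor=None):
    """Sketch of how partition directories are named per flavor.

    flavor=None corresponds to directory-style partitioning
    (values only); flavor="hive" produces key=value segments.
    """
    if flavor == "hive":
        return "/".join(f"{k}={v}" for k, v in fields.items())
    return "/".join(str(v) for v in fields.values())
```

For example, `format_partition_path({"year": 2021, "month": 12})` yields `"2021/12"`, while passing `flavor="hive"` yields `"year=2021/month=12"`.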

##########
File path: python/pyarrow/dataset.py
##########
@@ -714,9 +729,12 @@ def write_dataset(data, base_dir, basename_template=None, format=None,
         and `format` is not specified, it defaults to the same format as the
         specified FileSystemDataset. When writing a Table or RecordBatch, this
         keyword is required.
-    partitioning : Partitioning, optional
+    partitioning : Partitioning or list[str], optional
         The partitioning scheme specified with the ``partitioning()``
-        function.
+        function or as a list of field names.

Review comment:
```suggestion
        function or a list of field names.
```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@arrow.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

