jorisvandenbossche commented on code in PR #43818:
URL: https://github.com/apache/arrow/pull/43818#discussion_r1732710234
##########
python/pyarrow/parquet/core.py:
##########
@@ -2081,11 +2081,11 @@ def file_visitor(written_file):
the entire directory will be deleted. This allows you to overwrite
old partitions completely.
**kwargs : dict,
- Used as additional kwargs for :func:`pyarrow.dataset.write_dataset`
+ Used as additional kwargs for :py:func:`pyarrow.dataset.write_dataset`
Review Comment:
Is this addition of `:py:` needed?
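For reference, Sphinx's `primary_domain` setting defaults to `"py"`, so `:func:` and `:py:func:` resolve to the same target unless a project overrides that default. A minimal conf.py sketch (illustrative only, an assumption rather than pyarrow's actual config):

```python
# conf.py sketch -- illustrative, NOT pyarrow's verified configuration.
# Because Sphinx's primary_domain already defaults to "py", writing
# :func:`pyarrow.dataset.write_dataset` is equivalent to
# :py:func:`pyarrow.dataset.write_dataset`; the :py: prefix only matters
# if the default domain is changed to something else.
primary_domain = "py"  # this is the Sphinx default
```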
##########
python/pyarrow/parquet/core.py:
##########
@@ -1163,13 +1163,13 @@ def _get_pandas_index_columns(keyvalues):
buffer_size : int, default 0
If positive, perform read buffering when deserializing individual
column chunks. Otherwise IO calls are unbuffered.
-partitioning : pyarrow.dataset.Partitioning or str or list of str, \
+partitioning : :py:class:`pyarrow.dataset.Partitioning` or str or list of str, \
default "hive"
The partitioning scheme for a partitioned dataset. The default of "hive"
assumes directory names with key=value pairs like "/year=2009/month=11".
In addition, a scheme like "/2009/11" is also supported, in which case
you need to specify the field names or a full schema. See the
- ``pyarrow.dataset.partitioning()`` function for more details."""
+ :py:func:`pyarrow.dataset.partitioning()` function for more details."""
Review Comment:
```suggestion
:py:func:`pyarrow.dataset.partitioning` function for more details."""
```
Otherwise the linking will not work, I think
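To make the point concrete, here is a minimal sketch with a hypothetical `read_table` stub (not pyarrow's actual function): the role target should be the dotted name alone, because a trailing `()` becomes part of the target and the cross-reference fails to resolve. Sphinx adds the parentheses itself when rendering `:func:` links.

```python
def read_table(source, partitioning="hive"):
    """Read a Parquet dataset (illustrative stub, not the real API).

    Parameters
    ----------
    partitioning : pyarrow.dataset.Partitioning or str, default "hive"
        The partitioning scheme for a partitioned dataset. See the
        :func:`pyarrow.dataset.partitioning` function for more details.
    """
    # Stub only: the docstring above is the point of this example.
    raise NotImplementedError
```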
##########
python/pyarrow/parquet/core.py:
##########
@@ -1163,13 +1163,13 @@ def _get_pandas_index_columns(keyvalues):
buffer_size : int, default 0
If positive, perform read buffering when deserializing individual
column chunks. Otherwise IO calls are unbuffered.
-partitioning : pyarrow.dataset.Partitioning or str or list of str, \
+partitioning : :py:class:`pyarrow.dataset.Partitioning` or str or list of str, \
Review Comment:
I thought this shouldn't be needed, because Sphinx was set up to do this
automatically?
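One way a Sphinx setup can link parameter types without explicit roles in every docstring is numpydoc's type cross-referencing. A conf.py sketch (an assumption about the tooling, not verified against pyarrow's actual conf.py):

```python
# conf.py sketch -- illustrative only, NOT pyarrow's verified configuration.
# With numpydoc's type cross-referencing enabled, a plain type line such as
#   partitioning : pyarrow.dataset.Partitioning or str or list of str
# is turned into cross-references automatically, so no explicit
# :py:class: role is needed in the docstring itself.
extensions = ["numpydoc"]
numpydoc_xref_param_type = True
# Words that look like types but should not be cross-referenced:
numpydoc_xref_ignore = {"optional", "default", "of", "or"}
```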
##########
python/pyarrow/parquet/core.py:
##########
@@ -2030,16 +2030,16 @@ def write_to_dataset(table, root_path,
partition_cols=None,
Deprecated and has no effect from PyArrow version 15.0.0.
schema : Schema, optional
The schema of the dataset.
- partitioning : Partitioning or list[str], optional
+ partitioning : :py:class:`pyarrow.dataset.Partitioning` or list[str], optional
The partitioning scheme specified with the
- ``pyarrow.dataset.partitioning()`` function or a list of field names.
+ :py:func:`pyarrow.dataset.partitioning()` function or a list of field names.
Review Comment:
```suggestion
:py:func:`pyarrow.dataset.partitioning` function or a list of field names.
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]