jqin61 commented on code in PR #41:
URL: https://github.com/apache/iceberg-python/pull/41#discussion_r1427302782


##########
pyiceberg/io/pyarrow.py:
##########
@@ -1565,13 +1564,54 @@ def fill_parquet_file_metadata(
         del upper_bounds[field_id]
         del null_value_counts[field_id]
 
-    df.file_format = FileFormat.PARQUET
     df.record_count = parquet_metadata.num_rows
-    df.file_size_in_bytes = file_size
     df.column_sizes = column_sizes
     df.value_counts = value_counts
     df.null_value_counts = null_value_counts
     df.nan_value_counts = nan_value_counts
     df.lower_bounds = lower_bounds
     df.upper_bounds = upper_bounds
     df.split_offsets = split_offsets
+
+
+def write_file(table: Table, tasks: Iterator[WriteTask]) -> Iterator[DataFile]:
+    task = next(tasks)
+
+    try:
+        _ = next(tasks)
+        # If there are more tasks, raise an exception
+        raise ValueError("Only unpartitioned writes are supported: https://github.com/apache/iceberg-python/issues/208")
+    except StopIteration:
+        pass
+
+    df = task.df
+
+    file_path = f'{table.location()}/data/{_generate_datafile_filename("parquet")}'
+    file_schema = schema_to_pyarrow(table.schema())

Review Comment:
   Hi Fokko! I am working with @syun64 to test the impending write feature. During testing, we realized that the field ids are not being set in the written Parquet file.
   To illustrate this, I put together [a diff against your working branch](https://github.com/adrianqin/iceberg-python/compare/1398a2fb01341087a1334482db84a193843a2362..edad329b003d6d6bb71318d42a31697a9d06b67d).
   
   With the current behavior, the field_ids are not written correctly, and the Parquet schema looks like:
   ```
   <pyarrow._parquet.ParquetSchema object at 0x11c40c880>
   required group field_id=-1 schema {
     optional binary field_id=-1 id (String);
     optional int32 field_id=-1 date (Date);
   }
   ```
   After using a different metadata key for the field id in the Arrow schema, the written Parquet schema looks like:
   ```
   <pyarrow._parquet.ParquetSchema object at 0x11c40c880>
   required group field_id=-1 schema {
     optional binary field_id=1 id (String);
     optional int32 field_id=2 date (Date);
   }
   ```
   
   This appears to be a quirk of pyarrow.parquet.ParquetWriter: the field_ids must be defined in the metadata of the pyarrow schema under a specific key, **"PARQUET:field_id"**, rather than "field_id".
   Do you think we should use a different pyarrow schema when we write the pyiceberg file?
   [Prefix with 'PARQUET:'](https://github.com/adrianqin/iceberg-python/compare/1398a2fb01341087a1334482db84a193843a2362..edad329b003d6d6bb71318d42a31697a9d06b67d#diff-8d5e63f2a87ead8cebe2fd8ac5dcf2198d229f01e16bb9e06e21f7277c328abdR163)
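   For reference, here is a minimal standalone sketch (outside pyiceberg, with made-up column names) of the workaround described above: attaching the ids under the `PARQUET:field_id` metadata key so that pyarrow's Parquet writer carries them through.
   ```python
   import tempfile
   import pyarrow as pa
   import pyarrow.parquet as pq

   # Attach field ids under the metadata key that pyarrow's Parquet
   # writer honors: "PARQUET:field_id". A plain "field_id" key is
   # ignored, leaving field_id=-1 in the written file.
   schema = pa.schema([
       pa.field("id", pa.string(), metadata={"PARQUET:field_id": "1"}),
       pa.field("date", pa.date32(), metadata={"PARQUET:field_id": "2"}),
   ])
   table = pa.table({"id": ["a", "b"], "date": [None, None]}, schema=schema)

   path = tempfile.mktemp(suffix=".parquet")
   pq.write_table(table, path)

   # Inspecting the written file shows field_id=1 and field_id=2
   # instead of field_id=-1.
   print(pq.ParquetFile(path).schema)
   ```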



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

