Fokko commented on code in PR #41:
URL: https://github.com/apache/iceberg-python/pull/41#discussion_r1429902142
##########
pyiceberg/io/pyarrow.py:
##########
@@ -1565,13 +1564,54 @@ def fill_parquet_file_metadata(
         del upper_bounds[field_id]
         del null_value_counts[field_id]
 
-    df.file_format = FileFormat.PARQUET
     df.record_count = parquet_metadata.num_rows
-    df.file_size_in_bytes = file_size
     df.column_sizes = column_sizes
     df.value_counts = value_counts
     df.null_value_counts = null_value_counts
     df.nan_value_counts = nan_value_counts
     df.lower_bounds = lower_bounds
     df.upper_bounds = upper_bounds
     df.split_offsets = split_offsets
+
+
+def write_file(table: Table, tasks: Iterator[WriteTask]) -> Iterator[DataFile]:
+    task = next(tasks)
+
+    try:
+        _ = next(tasks)
+        # If there are more tasks, raise an exception
+        raise ValueError("Only unpartitioned writes are supported: https://github.com/apache/iceberg-python/issues/208")
+    except StopIteration:
+        pass
+
+    df = task.df
+
+    file_path = f'{table.location()}/data/{_generate_datafile_filename("parquet")}'

Review Comment:
   Keep in mind that we write a single file in the first iteration. I've moved the uuid and task-id to the `WriteTask`
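[Editor's note] To make the comment concrete, here is a minimal sketch of what carrying the uuid and task-id on the `WriteTask` could look like, so the data-file name is derived from the task itself rather than from a module-level helper such as `_generate_datafile_filename`. The field names `write_uuid` and `task_id` and the method `generate_data_file_filename` are assumptions for illustration, not the PR's exact code.

```python
# Sketch only: field and method names are assumptions, not the exact pyiceberg API.
import uuid
from dataclasses import dataclass


@dataclass(frozen=True)
class WriteTask:
    write_uuid: uuid.UUID  # one uuid shared by all tasks of a single write
    task_id: int           # position of this task within the write

    def generate_data_file_filename(self, extension: str) -> str:
        # Deterministic name: the uuid ties the file to the write operation,
        # the task-id keeps files from different tasks of that write distinct.
        return f"00000-{self.task_id}-{self.write_uuid}.{extension}"


# Example: the single task of an unpartitioned write (only one file is
# produced in this first iteration, as the comment notes).
task = WriteTask(write_uuid=uuid.uuid4(), task_id=0)
file_path = f"s3://bucket/warehouse/tbl/data/{task.generate_data_file_filename('parquet')}"
print(file_path)
```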