cfrancois7 opened a new issue, #1100:
URL: https://github.com/apache/iceberg-python/issues/1100
### Question
While trying to partition my table I ran into an error.
I've dug through the documentation, Stack Overflow, and Medium looking for an
answer.
I even tried ChatGPT, but without success :D
I'm using a local SQLite catalog and a MinIO server to build a proof of concept.
Here is the code to reproduce the issue:
```python
from pyiceberg.catalog.sql import SqlCatalog
from pyiceberg.partitioning import PartitionSpec, PartitionField
from pyiceberg.transforms import DayTransform
import pyarrow as pa

warehouse_path = "local_s3"

catalog = SqlCatalog(
    "pieuvre",
    **{
        "uri": f"sqlite:///{warehouse_path}/catalog.db",
        "warehouse": "http://localhost:9001",
        "s3.endpoint": "http://localhost:9001",
        "s3.access-key-id": "minio_user",
        "s3.secret-access-key": "minio1234",
    },
)

catalog.create_namespace_if_not_exists('my_namespace')

ts_schema = pa.schema([
    pa.field('timestamp', pa.timestamp('s'), nullable=False),  # timestamp with seconds precision
    pa.field('campaign_id', pa.uint8(), nullable=False),
    pa.field('temperature', pa.float32()),
    pa.field('pressure', pa.float32()),
    pa.field('humidity', pa.int32()),
    pa.field('led_0', pa.bool_())
])

# Define the partition spec for campaign_id
ts_partition_spec = PartitionSpec(
    PartitionField(
        field_id=2,
        source_id=2,
        transform=DayTransform(),
        name="campaign_id"
    )
)

time_series_table = catalog.create_table(
    'my_namespace.time_series',
    schema=ts_schema,
    partition_spec=ts_partition_spec  # <= raises error !!
)
```
My goal is to partition the table by campaign_id.
Is that possible? If so, how?
How should I interpret the [API documentation](https://py.iceberg.apache.org/api/#create-a-table)?
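
For what it's worth, below is the variant I would have expected to work from my reading of the docs. This is only a sketch and I may be misunderstanding the API: it assumes IdentityTransform is the right transform for partitioning on a plain integer column, that source_id=2 matches the field ID assigned to campaign_id when the PyArrow schema is converted, and that field_id=1000 follows the usual Iceberg convention for partition field IDs.

```python
from pyiceberg.partitioning import PartitionSpec, PartitionField
from pyiceberg.transforms import IdentityTransform

# Partition by the raw campaign_id value instead of a day bucket.
# source_id=2 assumes campaign_id is the second field in the schema above;
# field_id=1000 assumes the Iceberg convention of starting partition field IDs at 1000.
id_partition_spec = PartitionSpec(
    PartitionField(
        field_id=1000,
        source_id=2,
        transform=IdentityTransform(),
        name="campaign_id"
    )
)

time_series_table = catalog.create_table(
    'my_namespace.time_series',
    schema=ts_schema,
    partition_spec=id_partition_spec,
)
```

If day-based partitioning is what DayTransform is for, my understanding is that it would instead need to point at the timestamp column (source_id=1), but I haven't verified that either.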