Re: create iceberg on minio s3 got "The AWS Access Key Id you provided does not exist in our records."

2021-08-13 Thread Daniel Weeks
So, if I recall correctly, the Hive server does need access to check and create paths for table locations. There may be an option to disable this behavior, but otherwise the filesystem implementation probably needs to be available to the Hive metastore. -Dan On Fri, Aug 13, 2021, 4:48 PM Lian Jiang wro
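
A minimal sketch of what that could look like for a MinIO-backed metastore, assuming the metastore picks up filesystem settings from hive-site.xml or core-site.xml; the jar versions, paths, endpoint, and credentials below are placeholders, not taken from the thread:

  # make the S3A filesystem implementation visible to the Hive metastore
  cp hadoop-aws-3.2.0.jar aws-java-sdk-bundle-1.11.375.jar $HIVE_HOME/lib/
  # hive-site.xml / core-site.xml properties the metastore would need to resolve s3a:// table locations
  #   fs.s3a.impl               org.apache.hadoop.fs.s3a.S3AFileSystem
  #   fs.s3a.endpoint           http://minio:9000
  #   fs.s3a.access.key         minio
  #   fs.s3a.secret.key         minio123
  #   fs.s3a.path.style.access  true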

Re: [CWS] Re: Subject: [VOTE] Release Apache Iceberg 0.12.0 RC3

2021-08-13 Thread Ryan Blue
+1 (binding) - Checked signatures, checksums, and RAT - Ran build and test. There were only failures in org.apache.iceberg.mr.hive.TestHiveIcebergStorageHandlerWithEngine, which I think I hit last time. I'll do more checking over the weekend, but right now it looks good! On Fri, Aug 13,
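
For reference, a rough sketch of the kind of verification these vote mails describe, assuming the RC source tarball name below (artifact names are illustrative, not taken from the thread; license/RAT checks are typically wired into the project's Gradle build):

  # verify signature and checksum of the source tarball
  gpg --verify apache-iceberg-0.12.0.tar.gz.asc apache-iceberg-0.12.0.tar.gz
  shasum -a 512 -c apache-iceberg-0.12.0.tar.gz.sha512
  # build and run tests
  tar xzf apache-iceberg-0.12.0.tar.gz && cd apache-iceberg-0.12.0
  ./gradlew build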

Re: create iceberg on minio s3 got "The AWS Access Key Id you provided does not exist in our records."

2021-08-13 Thread Lian Jiang
Thanks Daniel. After modifying the script to:
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
ICEBERG_VERSION=0.11.1
DEPENDENCIES="org.apache.iceberg:iceberg-spark3-runtime:$ICEBERG_VERSION,org.apache.iceberg:iceberg-hive-runtime:$ICEBERG_VERSION,
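
One way to confirm that these exported credentials are the ones MinIO actually accepts, independent of Spark and Iceberg, is to hit the endpoint directly with the AWS CLI (the endpoint URL and bucket name below are placeholders):

  export AWS_ACCESS_KEY_ID=minio
  export AWS_SECRET_ACCESS_KEY=minio123
  export AWS_REGION=us-east-1
  # list the warehouse bucket through the MinIO endpoint
  aws --endpoint-url http://minio:9000 s3 ls s3://my-warehouse-bucket/

If this returns the same "Access Key Id ... does not exist" error, the problem is in the credentials or endpoint rather than in the Spark/Iceberg configuration.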

Re: create iceberg on minio s3 got "The AWS Access Key Id you provided does not exist in our records."

2021-08-13 Thread Daniel Weeks
Hey Lian, At a cursory glance, it appears that you might be mixing two different FileIO implementations, which may be why you are not getting the expected result. When you set: --conf spark.sql.catalog.hive_test.io-impl=org.apache.iceberg.aws.s3.S3FileIO you're actually switching over to the nati
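
To illustrate the distinction, a hedged sketch of the two configuration paths (the catalog name hive_test comes from the thread; the endpoint and bucket are placeholders, and the s3.* catalog properties assume an Iceberg version whose S3FileIO exposes them):

  # Option A: native S3FileIO -- credentials come from the AWS SDK chain (env vars, profiles, ...)
  --conf spark.sql.catalog.hive_test.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
  --conf spark.sql.catalog.hive_test.warehouse=s3://my-warehouse-bucket/warehouse \
  --conf spark.sql.catalog.hive_test.s3.endpoint=http://minio:9000 \
  --conf spark.sql.catalog.hive_test.s3.path-style-access=true
  # Option B: default HadoopFileIO over Hadoop S3A -- credentials come from fs.s3a.* settings
  --conf spark.hadoop.fs.s3a.endpoint=http://minio:9000 \
  --conf spark.hadoop.fs.s3a.access.key=minio \
  --conf spark.hadoop.fs.s3a.secret.key=minio123 \
  --conf spark.hadoop.fs.s3a.path.style.access=true

Note that with option B the warehouse and table paths would normally use the s3a:// scheme rather than s3://.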

Re: Subject: [VOTE] Release Apache Iceberg 0.12.0 RC3

2021-08-13 Thread Carl Steinbach
+1 (binding) * Checked signatures of all artifacts. * Ran build and test to completion without failures. * Verified that RAT checks pass and that dates have the correct year. - Carl On Wed, Aug 11, 2021 at 12:59 AM John Zhuge wrote: > +1 (non-binding) > > - Checked signature, checksum, and lic

create iceberg on minio s3 got "The AWS Access Key Id you provided does not exist in our records."

2021-08-13 Thread Lian Jiang
Hi, I'm trying to create an Iceberg table on MinIO S3 and Hive. *This is how I launch spark-shell:*
# add Iceberg dependency
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
ICEBERG_VERSION=0.11.1
DEPENDENCIES="org.apache.iceberg:iceberg-spark3-runtime
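
For context, a minimal sketch of a spark-shell launch against a Hive-metastore-backed Iceberg catalog on MinIO (the metastore URI and bucket are placeholders, since the original command in this mail is truncated above; the MinIO endpoint and credential settings for the chosen FileIO are discussed in the replies above):

  export AWS_REGION=us-east-1
  export AWS_ACCESS_KEY_ID=minio
  export AWS_SECRET_ACCESS_KEY=minio123
  ICEBERG_VERSION=0.11.1
  spark-shell --packages org.apache.iceberg:iceberg-spark3-runtime:$ICEBERG_VERSION \
    --conf spark.sql.catalog.hive_test=org.apache.iceberg.spark.SparkCatalog \
    --conf spark.sql.catalog.hive_test.type=hive \
    --conf spark.sql.catalog.hive_test.uri=thrift://hive-metastore:9083 \
    --conf spark.sql.catalog.hive_test.warehouse=s3a://my-warehouse-bucket/warehouse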

Re: Iceberg python library sync

2021-08-13 Thread Ryan Blue
Thanks, Jun! On Fri, Aug 13, 2021 at 2:29 PM Jun H. wrote: > Thanks everyone. I will set up the sync meeting to kick off the discussion > at 9 AM (UTC-7, PDT) on 08/18/2021 (coming Wednesday). I will create and > share a meeting agenda and notes doc soon. > > Best regards, > > Jun > > > > > On T

Re: Iceberg python library sync

2021-08-13 Thread Jun H.
Thanks everyone. I will set up the sync meeting to kick off the discussion at 9 AM (UTC-7, PDT) on 08/18/2021 (coming Wednesday). I will create and share a meeting agenda and notes doc soon. Best regards, Jun On Thu, Aug 12, 2021 at 1:49 PM Szehon Ho wrote: > +1, would love to listen in as

Re: Iceberg disaster recovery and relative path sync-up

2021-08-13 Thread Anjali Norwood
Perfect, thank you, Yufei. Regards, Anjali On Thu, Aug 12, 2021 at 9:58 PM Yufei Gu wrote: > Hi Anjali, > > Inline... > On Thu, Aug 12, 2021 at 5:31 PM Anjali Norwood > wrote: > >> Thanks for the summary, Yufei. >> Sorry if this was already discussed; I missed the meeting yesterday. >> Is there

Identify watermark in the iceberg table properties

2021-08-13 Thread 1
Hi all, I need to embed an Iceberg table, treated as a real-time table, into our workflow. That is to say, Flink writes data into the Iceberg table in real time, and I need something to indicate data completeness on the ingestion path so that downstream batch consumer jobs can be trigge
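
One pattern that could fit this, sketched from spark-shell with a hypothetical property name (watermark-ts is not a built-in Iceberg property, and the table identifier is illustrative): have the ingestion job stamp a watermark into the table properties after each commit, and have downstream batch jobs poll it before triggering.

  scala> // writer side: record how far the ingested data is complete (property name is illustrative)
  scala> spark.sql("ALTER TABLE hive_test.db.events SET TBLPROPERTIES ('watermark-ts' = '2021-08-13T12:00:00Z')")
  scala> // reader side: downstream batch jobs poll the property to decide whether their window is complete
  scala> spark.sql("SHOW TBLPROPERTIES hive_test.db.events").show(false)

On the Flink side the same property could be written through the Iceberg Java API, e.g. table.updateProperties().set(...).commit(), assuming the writer knows when a window is complete.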