Hi Umesh,
Can you add the following verbose configs to capture class-loading
information when you run the Spark application?
You can set the following in the Spark config (for the defaults, see
$SPARK_HOME/conf/spark-defaults.conf):
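The exact config lines from the original message are not included here. A typical way to capture class-loading information (an assumption on my part, not necessarily the author's exact settings) is to pass the JVM's `-verbose:class` flag to both the driver and the executors via spark-defaults.conf:

```properties
# Hypothetical example: make the JVM log every class it loads and the
# jar it was loaded from, on both driver and executors.
spark.driver.extraJavaOptions    -verbose:class
spark.executor.extraJavaOptions  -verbose:class
```

The class-loading lines then appear in the driver log and the executor stderr logs, which helps spot conflicting copies of the same class on the classpath.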
Dear podling,
This email was sent by an automated system on behalf of the Apache
Incubator PMC. It is an initial reminder to give you plenty of time to
prepare your quarterly board report.
The board meeting is scheduled for Wed, 17 April 2019, 10:30 am PDT.
The report for your podling will form
Hi,
Thanks for the report. Logistics: please subscribe to the dev ML so you
are able to send responses.
http://hudi.apache.org/community.html
It seems we are unable to convert the JSON into a GenericRecord using the
supplied schema.
You can write a small program to first check whether that JSON
deserializes against the schema on its own.
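As a rough stand-in for such a check (a sketch only, not Hudi's actual code path; the real conversion goes through Avro's GenericDatumReader, which this deliberately avoids so it needs no dependencies), one can at least verify that the JSON parses and contains every top-level field the schema declares:

```python
import json

# Hypothetical Avro schema for illustration; substitute the schema you
# actually supplied to the delta streamer.
SCHEMA = json.loads("""
{
  "type": "record",
  "name": "Trip",
  "fields": [
    {"name": "rider", "type": "string"},
    {"name": "fare",  "type": "double"}
  ]
}
""")

def missing_fields(record_json, schema):
    """Return the schema's top-level field names absent from the JSON record.

    This only checks field presence, not Avro type compatibility.
    """
    record = json.loads(record_json)
    return [f["name"] for f in schema["fields"] if f["name"] not in record]

print(missing_fields('{"rider": "u1", "fare": 4.2}', SCHEMA))  # []
print(missing_fields('{"rider": "u1"}', SCHEMA))               # ['fare']
```

If fields are missing or the record does not parse, the GenericRecord conversion in the streamer will fail in the same way.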
Great thoughts. Let's chat more on the HIP.
>> I am thinking something like a min/max on the row key for each file.
There could be cases where a monotonically increasing ID generation
service is used when there are new entities
BloomIndex already does this today. In addition to Bloom filters, it
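To illustrate the min/max idea under discussion (a simplified sketch, not BloomIndex's actual implementation): keep the min and max row key per file, and only probe the files whose key range could contain the incoming key. Bloom filters then further prune those candidates.

```python
def candidate_files(key, file_ranges):
    """Range-prune files by per-file (min_key, max_key) metadata.

    file_ranges maps file name -> (min_key, max_key). A file can contain
    `key` only if min_key <= key <= max_key; only those survivors would
    then be checked against their Bloom filters.
    """
    return [f for f, (lo, hi) in file_ranges.items() if lo <= key <= hi]

# Hypothetical file names and key ranges for illustration.
ranges = {
    "f1.parquet": ("id_000", "id_499"),
    "f2.parquet": ("id_500", "id_999"),
}
print(candidate_files("id_250", ranges))  # ['f1.parquet']
```

With monotonically increasing keys the ranges barely overlap, so this pruning alone eliminates most files before any Bloom filter lookup.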
Hi Balaji,
I tried it; it still gives the same error. I don't have any other hoodie
library except the spark bundle. I am using the Databricks Spark cloud.
Do you think the Databricks cloud has some other hoodie dependencies?
Regards,
Umesh
On Thu, Mar 28, 2019 at 9:43 AM Umesh Kacha wrote:
> Hi Balaji thanks no I
I published a JSON file to Kafka and ran the Hudi delta streamer as a
Spark job with Kafka as the main data source. Since I am using Kafka
version 1.1, I had to make changes to the Kafka offset generation class
and also to the JSON Kafka source class, because Hudi is using a deprecated class such as