vinothchandar commented on issue #1731:
URL: https://github.com/apache/hudi/issues/1731#issuecomment-644344181


   That's for the Delta folks to answer :) .. If you are rewriting parquet 
files or generating a new parquet file on each write, there is nothing 
fundamentally different any other system can do here.. All the databases and 
data warehouses you are comparing against have long-running servers with some 
metadata/data loaded into memory, which is what helps with such fast updates.. 
   
   Livy is a long-running server which already has a Spark application 
running, unlike issuing spark-submit every time.. ofc if you use Livy or 
Zeppelin, that startup overhead goes away. 
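   To make the Livy point concrete, here is a minimal sketch of the two REST 
calls involved: one to create a long-running Spark session, and one to submit 
statements against it, so each write skips the JVM/app startup cost of a fresh 
spark-submit. The endpoint URL, session options, and the Hudi write snippet 
below are illustrative assumptions, not a prescribed setup.

```python
import json

# Hypothetical Livy endpoint; adjust to your deployment.
LIVY_URL = "http://localhost:8998"

# Payload for POST /sessions -- creates one long-lived Spark session
# that stays up across many writes.
create_session = {"kind": "pyspark", "executorMemory": "2g"}

# Payload for POST /sessions/{id}/statements -- runs a Hudi upsert
# inside the already-running session (table path is illustrative).
run_upsert = {
    "code": (
        "df.write.format('hudi')"
        ".option('hoodie.datasource.write.operation', 'upsert')"
        ".mode('append')"
        ".save('/tmp/hudi/my_table')"
    )
}

# With e.g. `requests`, you would POST these as JSON:
#   requests.post(f"{LIVY_URL}/sessions", json=create_session)
#   requests.post(f"{LIVY_URL}/sessions/0/statements", json=run_upsert)
print(json.dumps(create_session))
print(json.dumps(run_upsert))
```

   The key design point is that the session (and its executors, cached 
metadata, etc.) outlives any single write, which is exactly the property the 
long-running database servers above rely on.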
   
   

