I have two suggestions for the Presto CarbonData connector:

1) When a CarbonData table is created through spark-shell, spark-shell stores the
table metadata in the Hive metastore. Currently, however, the Presto carbon
connector loads the metadata from the store files and then caches it.
This causes a problem: if I drop a table and recreate one with the same name,
a Presto query still sees the old metadata, i.e. the metadata is inconsistent.
The only workaround is to restart the Presto cluster. To avoid this, I would
like the connector to load the CarbonData table metadata from the Hive
metastore instead. A minimal spark-shell sketch of the drop/recreate sequence
is shown below.
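Here is a rough spark-shell sketch of the sequence that exposes the stale
cache. The table and column names are made up for illustration, and it assumes
the usual CarbonData DDL with STORED BY 'carbondata' in a carbon-enabled
spark-shell session (where the 'spark' session value is already provided):

// Run inside a carbon-enabled spark-shell; 'spark' is the session the shell provides.
// Table/column names are hypothetical; only the drop/recreate pattern matters.

// 1. Create the table; its metadata goes into the Hive metastore.
spark.sql("CREATE TABLE sales (id INT, amount DOUBLE) STORED BY 'carbondata'")

// 2. Query the table once from Presto: the connector reads the metadata
//    from the store files and caches it, bypassing the metastore.

// 3. Drop the table and recreate it with a different schema.
spark.sql("DROP TABLE sales")
spark.sql("CREATE TABLE sales (id INT, amount DOUBLE, region STRING) STORED BY 'carbondata'")

// 4. Query from Presto again: it still serves the cached (old) metadata,
//    and only a Presto cluster restart clears it. Reading the metadata from
//    the Hive metastore would avoid this inconsistency.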
2) In the SegmentTaskIndexStore class, the segmentProperties field caches the
table's segment details. If I drop a table, segmentProperties keeps holding
that table's segment details until Presto is restarted, which can cause a
memory leak. I suggest not caching the segment details there; the lruCache in
SegmentTaskIndexStore does not show this problem.
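To illustrate the concern, here is a hypothetical Scala sketch (not the actual
SegmentTaskIndexStore code) contrasting an unbounded per-table cache, which
keeps holding segment details after a DROP TABLE until the process restarts,
with a size-bounded LRU cache that eventually evicts them:

import java.util.{LinkedHashMap => JLinkedHashMap}
import java.util.Map.Entry
import scala.collection.mutable

// Hypothetical stand-in for the per-segment detail held by the connector.
case class SegmentDetail(tableName: String, segmentId: String, columnCardinality: Array[Int])

// Pattern 1: an unbounded map, comparable to 'segmentProperties'.
// Nothing removes entries when a table is dropped, so the details stay
// reachable until the Presto JVM restarts -- the memory-leak risk.
val segmentProperties = mutable.HashMap[String, SegmentDetail]()

// Pattern 2: a size-bounded LRU cache, comparable in spirit to 'lruCache'.
// Old entries (including those of dropped tables) are evicted once the
// capacity is exceeded, so memory use stays bounded.
class LruCache[K, V](capacity: Int)
    extends JLinkedHashMap[K, V](capacity, 0.75f, /* accessOrder = */ true) {
  override def removeEldestEntry(eldest: Entry[K, V]): Boolean = size() > capacity
}

val lruCache = new LruCache[String, SegmentDetail](1000)

Either bounding the cache or explicitly invalidating entries on DROP TABLE
would keep memory in check; not caching the segment details at all, as
suggested above, avoids the problem entirely.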


