hi, all:
    I have a Hive table STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'. Many of its files are very
small, so when I read it with Spark, thousands of tasks are launched. How can I
limit the number of tasks?
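For context, one commonly suggested approach (a sketch, not a verified fix for this exact setup): when Spark reads the table through its native ORC reader, it packs small files into partitions according to `spark.sql.files.maxPartitionBytes` and `spark.sql.files.openCostInBytes`, so raising those values reduces the task count. This path applies only when `spark.sql.hive.convertMetastoreOrc` is enabled; the byte values below are illustrative, not recommendations.

```shell
# Sketch: make Spark pack many small ORC files into each read task.
# Only takes effect when the Hive ORC table is read via Spark's native
# ORC reader (convertMetastoreOrc=true); tune the byte sizes to taste.
spark-shell \
  --conf spark.sql.hive.convertMetastoreOrc=true \
  --conf spark.sql.files.maxPartitionBytes=268435456 \
  --conf spark.sql.files.openCostInBytes=8388608
```

Alternatively, `df.coalesce(n)` after reading caps the parallelism of downstream stages (though not of the scan itself), and compacting the table in Hive (for ORC, `ALTER TABLE ... CONCATENATE`) addresses the small-files problem at the source.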

2019-11-12


lk_spark 
