[ https://issues.apache.org/jira/browse/SPARK-18107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15612047#comment-15612047 ]
J.P Feng commented on SPARK-18107:
----------------------------------

Here are the execution logs of Hive 1.2.1, [Insert into]:

0: jdbc:hive2://master.mydata.com:23250> insert into table login4game partition(pt='mix_en',dt='2016-10-21') select distinct account_name,role_id,server,'1476979200' as recdate, 'mix' as platform, 'mix' as pid, 'mix' as dev from tbllog_login where pt='mix_en' and dt='2016-10-21';
INFO : Number of reduce tasks not specified. Estimated from input data size: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO :   set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO :   set mapreduce.job.reduces=<number>
INFO : number of splits:3
INFO : Submitting tokens for job: job_1472611548204_72608
INFO : The url to track the job: http://master.mydata.com:9378/proxy/application_1472611548204_72608/
INFO : Starting Job = job_1472611548204_72608, Tracking URL = http://master.mydata.com:9378/proxy/application_1472611548204_72608/
INFO : Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1472611548204_72608
INFO : Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
INFO : 2016-10-27 21:51:37,717 Stage-1 map = 0%, reduce = 0%
INFO : 2016-10-27 21:51:46,455 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 3.17 sec
INFO : 2016-10-27 21:51:48,576 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 16.16 sec
INFO : 2016-10-27 21:51:56,945 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 22.7 sec
INFO : MapReduce Total cumulative CPU time: 22 seconds 700 msec
INFO : Ended Job = job_1472611548204_72608
INFO : Loading data to table my_log.login4game partition (pt=mix_en, dt=2016-10-21) from hdfs://master.mydata.com:45660/data/warehouse/staging/.hive-staging_hive_2016-10-27_21-51-26_264_2085348807080462789-1/-ext-10000
INFO : Partition my_log.login4game{pt=mix_en, dt=2016-10-21} stats: [numFiles=2, numRows=132276, totalSize=971551, rawDataSize=82804776]
No rows affected (32.183 seconds)
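For the Spark side of the comparison, here is a minimal spark-shell sketch (mine, not from the original report) that wall-clocks the same statement. The table, columns, and partition values are copied from the query above; the session setup and the timed helper are my own assumptions, not Spark APIs.

import org.apache.spark.sql.SparkSession

// Hive support must be enabled so the statement hits the same Hive table
// the beeline session above used. In spark-shell a `spark` session already
// exists, so getOrCreate() just returns it.
val spark = SparkSession.builder()
  .enableHiveSupport()
  .getOrCreate()

// Hypothetical helper (not a Spark API): wall-clock an arbitrary block.
def timed[T](label: String)(body: => T): T = {
  val t0 = System.nanoTime()
  val result = body
  println(f"$label: ${(System.nanoTime() - t0) / 1e9}%.1f s")
  result
}

val select =
  """select distinct account_name, role_id, server,
    |       '1476979200' as recdate, 'mix' as platform,
    |       'mix' as pid, 'mix' as dev
    |from tbllog_login where pt='mix_en' and dt='2016-10-21'""".stripMargin

// INSERT is a command, so spark.sql() runs it eagerly.
timed("insert into") {
  spark.sql(s"insert into table login4game partition(pt='mix_en',dt='2016-10-21') $select")
}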
> Insert overwrite statement runs much slower in spark-sql than it does in
> hive-client
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-18107
>                 URL: https://issues.apache.org/jira/browse/SPARK-18107
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>        Environment: spark 2.0.0
>                     hive 2.0.1
>            Reporter: J.P Feng
>
> I find that an insert overwrite statement run in spark-sql or spark-shell takes
> much more time than it does in hive-client (I start it in
> apache-hive-2.0.1-bin/bin/hive): spark costs about ten minutes, but
> hive-client costs less than 20 seconds.
> These are the steps I took.
> The test sql is:
> insert overwrite table login4game partition(pt='mix_en',dt='2016-10-21')
> select distinct account_name,role_id,server,'1476979200' as recdate, 'mix' as
> platform, 'mix' as pid, 'mix' as dev from tbllog_login where pt='mix_en' and
> dt='2016-10-21';
> There are 257128 rows of data in tbllog_login with
> partition(pt='mix_en',dt='2016-10-21').
> ps:
> I'm sure it must be the "insert overwrite" that costs a lot of time in spark;
> maybe when doing the overwrite it needs to spend a lot of time on IO or on
> something else (the sketch below shows one way to check this).
> I also compared the execution time of the insert overwrite statement and the
> insert into statement:
> 1. insert overwrite statement vs. insert into statement in spark:
>    the insert overwrite statement costs about 10 minutes,
>    the insert into statement costs about 30 seconds.
> 2. insert into statement in spark vs. insert into statement in hive-client:
>    spark costs about 30 seconds,
>    hive-client costs about 20 seconds;
>    the difference is small enough to ignore.
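One way to check the IO suspicion above (again my sketch, not from the report): paste this into the same spark-shell session as the sketch after the Hive logs, so it reuses `spark`, `timed`, and `select`, and time the bare query and the full overwrite separately. If the select finishes in seconds while the overwrite still takes minutes, the time is going into writing and committing the partition rather than into the query itself.

// A plain SELECT is lazy in spark.sql(), so force it with collect();
// this runs the scan and the distinct but writes nothing to the table.
timed("select only") {
  spark.sql(s"select count(*) from ($select) t").collect()
}

// The full overwrite of the same partition, which the report says takes
// about 10 minutes versus about 30 seconds for insert into.
timed("insert overwrite") {
  spark.sql(s"insert overwrite table login4game partition(pt='mix_en',dt='2016-10-21') $select")
}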