Re: [I] [SUPPORT]hudi insert is too slow [hudi]

2023-11-20 Thread via GitHub


zyclove commented on issue #10131:
URL: https://github.com/apache/hudi/issues/10131#issuecomment-1818731721

   @ad1happy2go 
   Can bulk_insert mode avoid generating small files in the first place, i.e. directly write ~128 MB result files and merge the rest later?
   If hoodie.clustering is turned on, can small files be merged automatically after the bulk_insert completes?
   Or must I start a follow-up job like the one below to do the merge?
   ```
   hoodie.clustering.inline=true
   
   # or run the standalone clustering job; the flags below are illustrative,
   # vary by Hudi version, and <...> values are placeholders
   spark-submit \
   --master yarn \
   --class org.apache.hudi.utilities.HoodieClusteringJob \
   hdfs://nameservice1/utility_jars/hudi-utilities-bundle_2.12-0.10.0.jar \
   --mode scheduleAndExecute \
   --base-path <table-base-path> \
   --table-name <table-name>
   ```
   ---
   If bulk_insert mode is not used, can the "Building workload profile: smart_datapoint_report_rw_clear_rt" stage be optimized in Hudi 1.0? This stage is simply too time-consuming.
   
![image](https://github.com/apache/hudi/assets/15028279/8e7cf46b-1691-4c58-817b-11dac0d950aa)
   




Re: [I] [SUPPORT]hudi insert is too slow [hudi]

2023-11-20 Thread via GitHub


ad1happy2go commented on issue #10131:
URL: https://github.com/apache/hudi/issues/10131#issuecomment-1818465534

   @zyclove Bulk_insert mode doesn't merge small files during ingestion, so you have to run clustering after bulk_insert to optimize file sizes.
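
   A minimal sketch of enabling inline clustering for this, assuming the standard clustering configs (the values here are illustrative, not tuned for your table):
   ```
   -- cluster small files after every few commits
   set hoodie.clustering.inline=true;
   set hoodie.clustering.inline.max.commits=4;
   -- files below this size are candidates for clustering (~300 MB)
   set hoodie.clustering.plan.strategy.small.file.limit=314572800;
   -- target size for rewritten files (~1 GB)
   set hoodie.clustering.plan.strategy.target.file.max.bytes=1073741824;
   ```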





Re: [I] [SUPPORT]hudi insert is too slow [hudi]

2023-11-19 Thread via GitHub


zyclove commented on issue #10131:
URL: https://github.com/apache/hudi/issues/10131#issuecomment-1818173326

   @ad1happy2go 
   
   The table bucket count is 128:
   hoodie.bucket.index.num.buckets=128
   
   When I use bulk_insert mode, why do the result tables generate so many small files (sometimes hundreds of thousands), instead of following the configured 128 buckets? This has a huge impact on performance and cost. How should I optimize this?
   
![image](https://github.com/apache/hudi/assets/15028279/248b8fdc-fa44-4329-bfae-89a90ec702a0)
   
   ```
   set hoodie.metadata.table=false;
   set hoodie.sql.insert.mode=non-strict;
   set hoodie.sql.bulk.insert.enable=true;
   set hoodie.populate.meta.fields=false;
   set hoodie.parquet.compression.codec=snappy;
   
   set hoodie.bloom.index.prune.by.ranges=false;
   set hoodie.file.listing.parallelism=200;
   set hoodie.cleaner.parallelism=200;
   set hoodie.insert.shuffle.parallelism=200;
   set hoodie.upsert.shuffle.parallelism=200;
   set hoodie.delete.shuffle.parallelism=200;
   set hoodie.bulkinsert.shuffle.parallelism=200;
   ``` 
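
   One thing that might reduce the file count is sorting by partition path during bulk_insert, so that each write task touches only a few partitions. A sketch (hoodie.bulkinsert.sort.mode is an existing knob, but its default and its effect here are unverified for 0.14.0):
   ```
   -- globally sort the input by partition path before writing,
   -- so each task writes into at most a handful of partitions
   set hoodie.bulkinsert.sort.mode=GLOBAL_SORT;
   ```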
   
   But when I don't turn on bulk_insert mode, files are basically generated according to the number of buckets; however, performance is relatively slow, and OOM sometimes occurs.
   
![image](https://github.com/apache/hudi/assets/15028279/5fa08f31-2e4e-446f-9817-9b18b46bfdf3)
   
   (The configuration above is removed in this case.)





Re: [I] [SUPPORT]hudi insert is too slow [hudi]

2023-11-17 Thread via GitHub


ad1happy2go commented on issue #10131:
URL: https://github.com/apache/hudi/issues/10131#issuecomment-1816242294

   @zyclove The parallelism configs don't take effect here because the number of tasks is derived from the input DataFrame itself.
   Can you paste the complete DAG for this job?
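
   In that case the write-task count can be steered by repartitioning the input itself, e.g. with a REPARTITION hint in the select. A toy sketch (the table names are placeholders, 800 is an arbitrary target):
   ```
   -- the hint fixes the number of input partitions, which the
   -- bulk_insert write then inherits as its task count
   insert into target_tbl
   select /*+ REPARTITION(800) */ id, val
   from source_tbl;
   ```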
   





[I] [SUPPORT]hudi insert is too slow [hudi]

2023-11-16 Thread via GitHub


zyclove opened a new issue, #10131:
URL: https://github.com/apache/hudi/issues/10131

   
   
   **Describe the problem you faced**
   
   Spark SQL bulk insert is too slow; how can I tune its performance?
   Following https://hudi.apache.org/docs/performance, I changed many configs, but it still does not perform well.
   
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. spark-sql: set the Hudi config
   ```
   set hoodie.write.lock.zookeeper.lock_key=bi_ods_real.smart_datapoint_report_rw_clear_rt;
   set spark.sql.hive.filesourcePartitionFileCacheSize=524288000;
   set hoodie.metadata.table=false;
   set hoodie.sql.insert.mode=non-strict;
   set hoodie.sql.bulk.insert.enable=true;
   set hoodie.populate.meta.fields=false;
   set hoodie.parquet.compression.codec=snappy;
   
   set hoodie.bloom.index.prune.by.ranges=false;
   set hoodie.file.listing.parallelism=800;
   set hoodie.cleaner.parallelism=800;
   set hoodie.insert.shuffle.parallelism=800;
   set hoodie.upsert.shuffle.parallelism=800;
   set hoodie.delete.shuffle.parallelism=800;
   set hoodie.memory.compaction.max.size=4294967296;
   set hoodie.memory.merge.max.size=107374182400;
   ```
   2. sql
   ```
   insert into bi_dw_real.dwd_smart_datapoint_report_rw_clear_rt
   select
     /*+ coalesce(${partitions}) */
     md5(concat(coalesce(data_id,''),coalesce(dev_id,''),coalesce(gw_id,''),coalesce(product_id,''),coalesce(uid,''),coalesce(dp_code,''),coalesce(dp_id,''),if(dp_mode in ('ro','rw','wr'),dp_mode,'un'),coalesce(dp_name,''),coalesce(dp_time,''),coalesce(dp_type,''),coalesce(dp_value,''),coalesce(ct,''))) as id,
     _hoodie_record_key as uuid,
     data_id,dev_id,gw_id,product_id,uid,
     dp_code,dp_id,if(dp_mode in ('ro','rw','wr'),dp_mode,'un') as dp_mode,dp_name,dp_time,dp_type,dp_value,
     ct as gmt_modified,
     case
       when length(ct)=10 then date_format(from_unixtime(ct),'yyyyMMddHH')
       when length(ct)=13 then date_format(from_unixtime(ct/1000),'yyyyMMddHH')
       else '1970010100' end as dt
   from
   hudi_table_changes('bi_ods_real.ods_log_smart_datapoint_report_batch_rt', 'latest_state', '${taskBeginTime}', '${next30minuteTime}')
   lateral view dataPointExplode(split(value,'\001')[0]) dps as ct, data_id, dev_id, gw_id, product_id, uid, dp_code, dp_id, gmtModified, dp_mode, dp_name, dp_time, dp_type, dp_value
   where _hoodie_commit_time > ${taskBeginTime} and _hoodie_commit_time <= ${next30minuteTime};
   ```
   3. result table info
   ```
   tblproperties (
     type = 'mor',
     primaryKey = 'id',
     preCombineField = 'gmt_modified',
     hoodie.combine.before.upsert='false',
     hoodie.bucket.index.num.buckets=128,
     hoodie.compact.inline='false',
     hoodie.common.spillable.diskmap.type='ROCKS_DB',
     hoodie.datasource.write.partitionpath.field='dt,dp_mode',
     hoodie.compaction.payload.class='org.apache.hudi.common.model.PartialUpdateAvroPayload'
   )
   ```
   
   **Environment Description**
   
   * Hudi version: 0.14.0
   
   * Spark version: 3.2.1
   
   * Hive version: 3.2.1
   
   * Hadoop version: 3.2.2
   
   * Storage (HDFS/S3/GCS..): s3
   
   * Running on Docker? (yes/no): no
   
   
   **Additional context**
   
   
![image](https://github.com/apache/hudi/assets/15028279/96c73e5a-d1f2-4db0-b583-37605ca754d0)
   
   
![image](https://github.com/apache/hudi/assets/15028279/1ce817b5-b372-4c09-a14c-f5ebf83f32fb)
   
   How can I change the parallelism? I ran spark-sql --conf spark.default.parallelism=800, but it does not work.
   
   The following config in the sql file does not work as expected either:
   
   ```
   set hoodie.file.listing.parallelism=800;
   set hoodie.cleaner.parallelism=800;
   set hoodie.insert.shuffle.parallelism=800;
   set hoodie.upsert.shuffle.parallelism=800;
   set hoodie.delete.shuffle.parallelism=800;
   ``` 
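
   Perhaps these only take effect when the write actually shuffles. A sketch of knobs that might matter instead (my unverified guesses):
   ```
   -- Spark SQL shuffle stages are sized by this, not spark.default.parallelism
   set spark.sql.shuffle.partitions=800;
   -- likely only applies when bulk_insert itself shuffles
   -- (e.g. under a GLOBAL_SORT sort mode)
   set hoodie.bulkinsert.shuffle.parallelism=800;
   ```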
   
   
   The following issues are not about bulk insert: #8189 #2620.
   Please take a look and give me some optimization suggestions.
   

