[ 
https://issues.apache.org/jira/browse/PHOENIX-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830604#comment-15830604
 ] 

Josh Elser commented on PHOENIX-3601:
-------------------------------------

bq. Using Josh Elser's take on the very cool 
https://github.com/joshelser/phoenix-performance toolset, I generated about 
114M rows of TPC-DS data on a 5 RegionServer setup. I used a load-factor of 5, 
which created a 256-way split table we'll refer to as SALES. I also created a 
new table, pre-salted with 5 buckets we'll call SALES2 and UPSERT SELECTed the 
data over. Both tables had major compaction and UPDATE STATISTICS run on them 
as well.
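(For readers following along: the pre-salted setup described in the quote could be sketched roughly as below. Table name and columns are illustrative placeholders, not the actual TPC-DS benchmark schema; only SALT_BUCKETS and the UPSERT SELECT copy pattern come from the quote itself.)

```sql
-- Hypothetical sketch of the SALES2 setup: a table pre-salted into
-- 5 buckets, then populated by copying from the original table.
CREATE TABLE SALES2 (
    ORDER_ID BIGINT NOT NULL PRIMARY KEY,
    AMOUNT   DECIMAL
) SALT_BUCKETS = 5;

UPSERT INTO SALES2 SELECT * FROM SALES;

-- Refresh the guideposts after the bulk copy.
UPDATE STATISTICS SALES2;
```

Major compaction would be triggered separately, e.g. via the HBase shell.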

Sick. I'm glad you found it useful :). 30-40% decrease in execution time is 
awesome!

I don't have a good handle on Spark myself to do my own testing, but maybe 
this is a good reason for me to find the time to mess around with it...

> PhoenixRDD doesn't expose the preferred node locations to Spark
> ---------------------------------------------------------------
>
>                 Key: PHOENIX-3601
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3601
>             Project: Phoenix
>          Issue Type: Improvement
>    Affects Versions: 4.8.0
>            Reporter: Josh Mahonin
>            Assignee: Josh Mahonin
>         Attachments: PHOENIX-3601.patch
>
>
> Follow-up to PHOENIX-3600, in order to let Spark know the preferred node 
> locations to assign partitions to, we need to update PhoenixRDD to retrieve 
> the underlying node location information from the splits.
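> (The mechanism here is Spark's `getPreferredLocations` hook on `RDD`. A
> minimal sketch, under the assumption that the host list per split is
> already available — in PhoenixRDD it would come from the underlying
> input splits; the class and field names below are illustrative, not
> the actual patch:)
>
> ```scala
> import org.apache.spark.{Partition, SparkContext, TaskContext}
> import org.apache.spark.rdd.RDD
>
> // Illustrative partition type carrying only its index.
> case class SimplePartition(index: Int) extends Partition
>
> // Hypothetical RDD whose splits know which hosts hold their data.
> // Each entry of `splits` is (preferred hosts, rows for that split).
> class LocationAwareRDD(
>     sc: SparkContext,
>     splits: Seq[(Seq[String], Seq[Int])]) extends RDD[Int](sc, Nil) {
>
>   override protected def getPartitions: Array[Partition] =
>     splits.indices.map(i => SimplePartition(i): Partition).toArray
>
>   // The hook this issue is about: returning a non-empty host list
>   // lets Spark's scheduler place the partition on (or near) those
>   // nodes instead of assigning it arbitrarily.
>   override protected def getPreferredLocations(split: Partition): Seq[String] =
>     splits(split.index)._1
>
>   override def compute(split: Partition, context: TaskContext): Iterator[Int] =
>     splits(split.index)._2.iterator
> }
> ```
>
> Spark treats these locations as preferences, not constraints, so a
> busy RegionServer does not block scheduling; locality just improves
> when capacity allows.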



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
