metesynnada commented on issue #7113:
URL: https://github.com/apache/arrow-datafusion/issues/7113#issuecomment-1655137896

   > Something I was also considering before as an optimization, which would also fix this, is to end the build phase by copying the offsets over to a new array (a `ListArray` with u64 values) that has the offsets pre-calculated for each index.
   > 
   > This will save some work during the probe phase (if we have multiple values per unique key on the probe side), as we can copy the indices directly from the list array instead of traversing the chained list.
   > 
   > e.g. for a build structure with 2 unique keys having 6 and 4 values respectively, the array will look like this:
   > 
   > So the structure that looks like this (with 1 subtracted from the indices to make it clearer)
   > 
   > ```
   > map:
   >  - hash: x, first: 8
   >  - hash: y, first: 9
   > 
   > next: [END,END,1,0,3,2,5,6,7,4]
   > ```
   > 
   > can be converted to a list array with the following contents (by doing this for each index in 0..N):
   > 
   > ```
   > offsets: [0, 6, 10]
   > values: [1,2,5,6,7,8,0,3,4,9]
   > ```
   > 
   > The benefit (besides the values being in order) is that instead of traversing the chain repeatedly in reverse order, we can just copy the slice `1,2,5,6,7,8` or `0,3,4,9` into the target array :)
   
   I think this is quite elegant. Does this conversion process occur on every single probe?
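
   To make sure I follow, here is a rough sketch of how I read the proposed conversion. The `BuildSide` struct, the `Option<usize>`-based chain, and the function names below are placeholders I made up for illustration, not the actual join hash map in DataFusion, and I am assuming the conversion runs once at the end of the build phase:

   ```rust
   use std::collections::HashMap;

   /// Hypothetical, simplified build-side structure: `map` stores, per hash,
   /// the index of the last row inserted with that key ("first" in the example),
   /// and `next[i]` points to the previously inserted row with the same key,
   /// or `None` at the end of the chain.
   struct BuildSide {
       map: HashMap<u64, usize>,
       next: Vec<Option<usize>>,
   }

   /// Convert the chained lists into list-array style (offsets, values) buffers.
   fn to_offsets_and_values(build: &BuildSide) -> (Vec<u64>, Vec<u64>) {
       let mut offsets: Vec<u64> = Vec::with_capacity(build.map.len() + 1);
       let mut values: Vec<u64> = Vec::new();
       offsets.push(0);

       for &first in build.map.values() {
           // Walk the chain backwards from the last inserted row ...
           let mut chain = Vec::new();
           let mut cur = Some(first);
           while let Some(i) = cur {
               chain.push(i as u64);
               cur = build.next[i];
           }
           // ... and reverse it so the row indices end up in insertion order.
           chain.reverse();
           values.extend_from_slice(&chain);
           offsets.push(values.len() as u64);
       }
       (offsets, values)
   }

   fn main() {
       // The example from the comment: key x -> rows 1,2,5,6,7,8 and key y -> rows 0,3,4,9.
       let build = BuildSide {
           map: HashMap::from([(0 /* hash x */, 8), (1 /* hash y */, 9)]),
           next: vec![
               None,    // 0 -> END
               None,    // 1 -> END
               Some(1), // 2 -> 1
               Some(0), // 3 -> 0
               Some(3), // 4 -> 3
               Some(2), // 5 -> 2
               Some(5), // 6 -> 5
               Some(6), // 7 -> 6
               Some(7), // 8 -> 7
               Some(4), // 9 -> 4
           ],
       };
       let (offsets, values) = to_offsets_and_values(&build);
       // With x visited first this prints offsets [0, 6, 10] and
       // values [1, 2, 5, 6, 7, 8, 0, 3, 4, 9]; HashMap iteration order may vary.
       println!("offsets: {:?}", offsets);
       println!("values:  {:?}", values);
       // During probing, the matches for list i would then just be the slice
       // &values[offsets[i] as usize .. offsets[i + 1] as usize].
   }
   ```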

