[ https://issues.apache.org/jira/browse/BEAM-5519?focusedWorklogId=296532&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296532 ]
ASF GitHub Bot logged work on BEAM-5519:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/Aug/19 19:56
            Start Date: 16/Aug/19 19:56
    Worklog Time Spent: 10m
      Work Description: kyle-winkelman commented on issue #6511: [BEAM-5519] Remove call to groupByKey in Spark Streaming.
URL: https://github.com/apache/beam/pull/6511#issuecomment-522132527

   One other change we could consider now that we no longer need to preserve the partitioner, just to clean things up and eliminate a possible source of confusion:
   ```
   @@ -58,14 +58,9 @@ public class GroupCombineFunctions {
        JavaPairRDD<ByteArray, Iterable<byte[]>> groupedRDD =
            (partitioner != null) ? pairRDD.groupByKey(partitioner) : pairRDD.groupByKey();
   -    // using mapPartitions allows to preserve the partitioner
   -    // and avoid unnecessary shuffle downstream.
        return groupedRDD
   -        .mapPartitionsToPair(
   -            TranslationUtils.pairFunctionToPairFlatMapFunction(
   -                CoderHelpers.fromByteFunctionIterable(keyCoder, wvCoder)),
   -            true)
   -        .mapPartitions(TranslationUtils.fromPairFlatMapFunction(), true);
   +        .mapToPair(CoderHelpers.fromByteFunctionIterable(keyCoder, wvCoder))
   +        .map(new TranslationUtils.FromPairFunction<>());
      }
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

    Worklog Id:     (was: 296532)
    Time Spent: 6h 10m  (was: 6h)

> Spark Streaming Duplicated Encoding/Decoding Effort
> ---------------------------------------------------
>
>                 Key: BEAM-5519
>                 URL: https://issues.apache.org/jira/browse/BEAM-5519
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Kyle Winkelman
>            Assignee: Kyle Winkelman
>            Priority: Major
>              Labels: spark, spark-streaming
>             Fix For: 2.16.0
>
>          Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> When using the SparkRunner in streaming mode, there is a call to groupByKey
> followed by a call to updateStateByKey. BEAM-1815 fixed an issue where this
> used to cause 2 shuffles, but it still causes 2 encode/decode cycles.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
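Editor's note: the diff in the comment above replaces a per-partition pair flat-map (with the partitioner-preserving flag) by a plain element-wise `mapToPair`/`map`. As a minimal, self-contained sketch of what that element-wise decode step does, here is a plain-Java analogue using UTF-8 strings in place of Beam coders; `decodeEntry` is a hypothetical stand-in for `CoderHelpers.fromByteFunctionIterable`, and `Map.Entry` stands in for a Spark pair. This is an illustration of the shape of the change, not Beam's actual implementation.

```java
import java.nio.charset.StandardCharsets;
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ElementWiseDecodeSketch {

    // Hypothetical stand-in for CoderHelpers.fromByteFunctionIterable:
    // decode one grouped entry, turning the byte[] key and each byte[]
    // value back into typed objects (plain UTF-8 strings here).
    static Map.Entry<String, List<String>> decodeEntry(
            Map.Entry<byte[], List<byte[]>> grouped) {
        String key = new String(grouped.getKey(), StandardCharsets.UTF_8);
        List<String> values = grouped.getValue().stream()
                .map(v -> new String(v, StandardCharsets.UTF_8))
                .collect(Collectors.toList());
        return new SimpleEntry<>(key, values);
    }

    public static void main(String[] args) {
        Map.Entry<byte[], List<byte[]>> grouped = new SimpleEntry<>(
                "k".getBytes(StandardCharsets.UTF_8),
                List.of("a".getBytes(StandardCharsets.UTF_8),
                        "b".getBytes(StandardCharsets.UTF_8)));

        // Analogue of groupedRDD.mapToPair(decode): the function is applied
        // per element; no "preservesPartitioning" flag is involved once
        // partitioner preservation is no longer needed.
        Map.Entry<String, List<String>> decoded = decodeEntry(grouped);
        System.out.println(decoded.getKey() + "=" + decoded.getValue());
        // prints k=[a, b]
    }
}
```

The point of the simplification in the diff is the same: once downstream code no longer relies on the grouped RDD's partitioner, the decode can be expressed as two ordinary per-element transformations instead of partition-level ones carrying a `preservesPartitioning = true` flag.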