Hi, I have a special requirement: after doing a lot of filtering, updating, etc. on a DataFrame, I need to process the data in a single partition at the very end. Currently I force everything into one partition with coalesce(1), but it is painfully slow; the job hangs for hours, sometimes 5-6 hours, and I don't know how to fix it. I came across toLocalIterator. Would it be helpful in my case? If so, please share an example, or otherwise suggest an idea for how to solve this problem of processing the data in only one partition. Please guide.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;

import scala.collection.JavaConversions;

// Force everything into a single partition, then update each row in place.
JavaRDD<Row> maskedRDD = sourceRdd.coalesce(1, true)
        .mapPartitionsWithIndex(new Function2<Integer, Iterator<Row>, Iterator<Row>>() {
            @Override
            public Iterator<Row> call(Integer index, Iterator<Row> rowIterator) throws Exception {
                List<Row> rowList = new ArrayList<>();
                while (rowIterator.hasNext()) {
                    Row row = rowIterator.next();
                    // updateRowsMethod is my own per-row transformation.
                    List rowAsList = updateRowsMethod(JavaConversions.seqAsJavaList(row.toSeq()));
                    Row updatedRow = RowFactory.create(rowAsList.toArray());
                    rowList.add(updatedRow);
                }
                return rowList.iterator();
            }
        }, false);
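
For reference, this is a rough, untested sketch of what I was imagining with toLocalIterator: stream the partitions to the driver one at a time instead of collapsing them with coalesce(1), and apply the same updateRowsMethod there. I am not sure this is the right approach; "jsc" here is assumed to be my JavaSparkContext.

// Untested sketch: pull rows to the driver one partition at a time,
// run the per-row update locally, then (if still needed) re-distribute
// the result as an RDD. updateRowsMethod is my helper from the snippet above.
List<Row> updatedRows = new ArrayList<>();
Iterator<Row> rows = sourceRdd.toLocalIterator();
while (rows.hasNext()) {
    Row row = rows.next();
    List rowAsList = updateRowsMethod(JavaConversions.seqAsJavaList(row.toSeq()));
    updatedRows.add(RowFactory.create(rowAsList.toArray()));
}
// jsc is assumed to be the JavaSparkContext used to create sourceRdd.
JavaRDD<Row> maskedRDD = jsc.parallelize(updatedRows);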


