Hi All,
I already found a solution to this problem. Please ignore my question...
Thanks
Best regards,
Henry
From: MA33 YTHung1
Sent: Friday, February 6, 2015 4:34 PM
To: user@spark.apache.org
Subject: how to process a file in a Spark standalone cluster without distributed
storage (i.e. HDFS/EC2)?
Hi All,
sc.textFile will not work because the file is not distributed to the other workers.
So I tried to read the file first with FileUtils.readLines and then call
sc.parallelize, but readLines fails with an OOM error (the file is large).
Is there a way to split a local file and upload the partitions to each worker
as RDD memory?
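One workaround (a sketch only, not an official Spark API for this): instead of loading the whole file with FileUtils.readLines, read it lazily in fixed-size chunks of lines so the driver never holds the entire file in memory. Each chunk could then be fed to sc.parallelize and the resulting RDDs combined with union. The function name chunkedLines and the chunk size are my own illustration, not from the original thread:

```scala
import scala.io.Source

// Read a large local file as a lazy iterator of chunks of `chunkSize`
// lines each. Because Source.getLines() is lazy and grouped() consumes
// it incrementally, the whole file is never held in memory at once.
def chunkedLines(path: String, chunkSize: Int): Iterator[Seq[String]] =
  Source.fromFile(path).getLines().grouped(chunkSize)

// Hypothetical use with a SparkContext `sc` (assumption, untested here):
//   val rdd = chunkedLines("/local/big.txt", 100000)
//     .map(chunk => sc.parallelize(chunk))
//     .reduce(_ union _)
// Note the driver still streams the entire file over the network, so a
// shared filesystem or object store remains the cleaner solution.
```

The caveat stands: sc.parallelize ships data from the driver, so this only sidesteps the OOM, not the single-machine bottleneck.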
Best regards,
Henry
The privileged confidential information contained in this email is intended for
use only by the addressees as indicated by the original sender of this email.
If you are not the addressee indicated in this email or are not responsible for
delivery of the email to such a person, please kindly reply to the sender
indicating this fact and delete all copies of it from your computer and network
server immediately. Your cooperation is highly appreciated. It is advised that
any unauthorized use of confidential information of Winbond is strictly
prohibited; and any information in this email irrelevant to the official
business of Winbond shall be deemed as neither given nor endorsed by Winbond.