Re: How to read multiple HDFS directories

2021-05-05 Thread Kapil Garg
Hi Lalwani,
But I need to augment every record of a directory with data specific to that
directory. Once I have read the data, there is no link back to the directory
left in the data that I can use to attach the additional information.


-- 
Regards
Kapil Garg


Re: How to read multiple HDFS directories

2021-05-05 Thread Lalwani, Jayesh
You don’t have to union multiple RDDs.  You can read files from multiple 
directories in a single read call. Spark will manage partitioning of the data 
across directories.
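
For illustration, here is a minimal Scala sketch of such a single read call
(the paths are hypothetical; input_file_name is the built-in Spark SQL
function that records each row's source path, which also restores the link
back to the directory that the reply above asks about):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.input_file_name

    val spark = SparkSession.builder.appName("multi-dir-read").getOrCreate()

    // N directories passed to one read call; Spark plans the partitions
    // across all of them.
    val dirs = Seq("hdfs:///data/dir1", "hdfs:///data/dir2", "hdfs:///data/dir3")
    val df = spark.read.text(dirs: _*)
      // Tag each record with its source file path; the directory can be
      // derived from it and joined against directory-specific data.
      .withColumn("source_file", input_file_name())

The same idea works at the RDD level: sc.textFile accepts a comma-separated
list of paths in a single call.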



Re: How to read multiple HDFS directories

2021-05-05 Thread Kapil Garg
Hi Mich,
The number of directories can be 1000+; doing 1000+ reduceByKey and union
operations might be too costly.


Re: How to read multiple HDFS directories

2021-05-05 Thread Mich Talebzadeh
This is my take (a rough sketch follows below):

   1. read the current snapshot (provide empty if it doesn't exist yet)
   2. loop over the N directories:
      1. read unprocessed new data from HDFS
      2. union them and do a `reduceByKey` operation
      3. output a new version of the snapshot

HTH
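
For illustration, a rough Scala sketch of this loop (the key extraction,
merge function, and paths are hypothetical placeholders, not from the
thread):

    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    def refreshSnapshot(sc: SparkContext, dirs: Seq[String]): Unit = {
      // 1. Read the current snapshot; start empty if it doesn't exist yet.
      var snapshot: RDD[(String, String)] = sc.emptyRDD[(String, String)]

      // 2. Loop over the N directories.
      for (dir <- dirs) {
        val newData = sc.textFile(dir)
          .map(line => (line.split(",")(0), line)) // key = first field (illustrative)
        // Union with the snapshot, then reduce by key so duplicate keys
        // collapse; the merge function here is illustrative.
        snapshot = snapshot.union(newData).reduceByKey((a, b) => b)
      }

      // 3. Output a new version of the snapshot.
      snapshot.saveAsTextFile("hdfs:///data/snapshot_v2")
    }

Note that with 1000+ directories this builds a long lineage, which is the
cost concern raised in the reply above; periodic checkpointing or batching
the unions would mitigate that.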


Re: How to read multiple HDFS directories

2021-05-05 Thread Kapil Garg
Sorry, but I didn't get the question. It is possible that one record is
present in multiple directories; that's why we do a reduceByKey after the
union step.
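
A minimal illustration of that step (Scala; it assumes unionedRdd: RDD[String]
is the result of the union, and that the first comma-separated field serves
as the record key, both of which are hypothetical):

    // One record may arrive from several directories; keying by an id
    // field and reducing keeps a single copy per key.
    val deduped = unionedRdd
      .map(record => (record.split(",")(0), record)) // key = first field
      .reduceByKey((a, b) => a)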


Re: How to read multiple HDFS directories

2021-05-05 Thread Mich Talebzadeh
When you are doing the union on these RDDs (each RDD has a one-to-one
correspondence with an HDFS directory), do you have a common key across all of them?



Re: How to read multiple HDFS directories

2021-05-05 Thread Kapil Garg
Hi Mich,
I went through the thread, and it doesn't relate to the problem statement I
shared above.

In my problem statement, there is a simple ETL job that doesn't use any
external library (such as pandas).
This is the flow:

hdfsDirs := List();   // contains N directories

rddList := List();
for each directory in hdfsDirs:
    rdd = spark.read(directory)
    rdd = rdd.map(...)   // augment the data with additional directory-related data
    rddList.add(rdd)

finalRdd = spark.union(rddList);   // number of tasks = N*K; files are
                                   // distributed unevenly on executors here

finalRdd.partitionBy(hashPartitioner);   // here tasks take uneven time

Is it possible to make the union step read the directories evenly across the
executors? That way, each executor would hold roughly the same amount of data.
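
For reference, a runnable Scala version of this flow (the paths and the
per-directory augmentation are hypothetical placeholders):

    import org.apache.spark.{HashPartitioner, SparkContext}
    import org.apache.spark.rdd.RDD

    def buildFinalRdd(sc: SparkContext, hdfsDirs: Seq[String]): RDD[(String, String)] = {
      // Read each directory and tag every record with directory-specific data.
      val rddList = hdfsDirs.map { dir =>
        val dirInfo = dir.split("/").last // stand-in for the real augmentation
        sc.textFile(dir).map(record => (record, dirInfo))
      }

      // Union of N RDDs: roughly N*K read tasks in total.
      val unioned = sc.union(rddList)

      // partitionBy (note the spelling) requires a key-value RDD; the
      // shuffle redistributes the data into evenly sized hash partitions
      // regardless of which executors performed the reads.
      unioned.partitionBy(new HashPartitioner(200))
    }

The read tasks themselves are still scheduled wherever executors are free,
but after the partitionBy shuffle the placement of the input files no longer
determines where the data lives.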




Re: How to read multiple HDFS directories

2021-05-05 Thread Mich Talebzadeh
Hi,

Have a look at the thread called "Tasks are skewed to one executor" and see
if it helps; we can take it from there.

HTH





How to read multiple HDFS directories

2021-05-05 Thread Kapil Garg
Hi,
I am facing issues while reading multiple HDFS directories. Please read the
problem statement and current approach below.

*Problem Statement*
There are N HDFS directories, each containing K files. We want to read data
from all directories such that, when we read data from directory D, we map
all of its data and augment it with additional information specific to that
directory.

*Current Approach*
In the current approach, we iterate over the directories, read each one into
an RDD, map the RDD, and put it into a list.
After all N directories have been read, we have a list of N RDDs.
We then call Spark's union on the list to merge them together.

This approach causes data skew because one directory is 12 GB while the
other RDDs are less than 1 GB each. So when the large RDD's turn comes,
Spark submits its tasks to whichever executors are available, leaving the
RDD concentrated on a few executors instead of spread across all of them.

Is there a way to avoid this data skew? I couldn't find any RDD API or Spark
config that would spread the data-reading tasks evenly across all executors.
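
One possible mitigation, sketched here in Scala under the assumption that a
full shuffle is acceptable, is to redistribute the data right after the
union instead of trying to control task placement at read time (the paths
and partition count are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("skew-sketch"))
    val dirs = Seq("hdfs:///data/dir1", "hdfs:///data/dir2")

    // Read everything in one union, then force an even redistribution;
    // repartition performs a full shuffle into equally sized partitions.
    val finalRdd = sc.union(dirs.map(sc.textFile(_)))
    val balanced = finalRdd.repartition(400)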

-- 
Regards
Kapil Garg
