Re: How to read multiple HDFS directories
Hi Lalwani,

But I need to augment the directory-specific data onto every record of that directory. Once I have read the data, there is no link back to the source directory in the data which I can use to attach the additional information.

On Wed, May 5, 2021 at 10:41 PM Lalwani, Jayesh wrote:
> You don't have to union multiple RDDs. You can read files from multiple
> directories in a single read call. Spark will manage partitioning of the
> data across directories.
--
Regards
Kapil Garg
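One way to recover the missing link back to the directory without N separate reads is Spark's `input_file_name()` function, which tags each record with the file it was read from; the directory can then be parsed out of that path and joined against the per-directory metadata. The sketch below is an illustration, not the original job: the paths, the `dir_` naming convention, and the column names are assumptions.

```python
import re


def dir_from_path(path: str) -> str:
    """Pull the directory component (assumed to look like 'dir_X') out of a
    full HDFS file path, e.g. 'hdfs:///data/dir_A/part-00000' -> 'dir_A'."""
    m = re.search(r"/(dir_[^/]+)/", path)
    return m.group(1) if m else ""


def read_with_source_dir(spark, base="hdfs:///data/dir_*"):
    """Read all directories in one call and tag each record with its source
    directory, recovered via input_file_name()."""
    # Imports are local so this sketch stays importable without a Spark install.
    from pyspark.sql.functions import input_file_name, udf
    from pyspark.sql.types import StringType

    df = spark.read.text(base)  # one read over every directory
    dir_udf = udf(dir_from_path, StringType())
    # Once 'src_dir' exists, the directory-specific metadata can be attached
    # with a broadcast join against a small (src_dir -> metadata) DataFrame.
    return df.withColumn("src_dir", dir_udf(input_file_name()))
```

The per-directory metadata then becomes an ordinary join key, so the augmentation no longer requires building one RDD per directory.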
Re: How to read multiple HDFS directories
Hi Mich,

The number of directories can be 1000+; performing 1000+ reduceByKey and union operations could be costly.

On Wed, May 5, 2021 at 10:22 PM Mich Talebzadeh wrote:
> This is my take:
>
> 1. Read the current snapshot (provide empty if it doesn't exist yet).
> 2. Loop over the N directories:
>    1. Read unprocessed new data from HDFS.
>    2. Union them and do a `reduceByKey` operation.
> 3. Output a new version of the snapshot.
>
> HTH
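The union-plus-reduceByKey merge suggested above can be illustrated without a cluster. The pure-Python stand-in below (names hypothetical) shows the semantics of folding new batches of (key, value) pairs into a keyed snapshot; on the cluster, the final reduce step is what `RDD.reduceByKey` performs.

```python
from collections import defaultdict
from functools import reduce


def union_reduce(snapshot, new_batches, merge):
    """Union each new batch of (key, value) pairs into the snapshot, then
    combine values that share a key with `merge` -- the same result that
    union + reduceByKey would produce on an RDD."""
    grouped = defaultdict(list)
    for key, value in snapshot:
        grouped[key].append(value)
    for batch in new_batches:
        for key, value in batch:
            grouped[key].append(value)
    # Reduce each key's values pairwise, like reduceByKey does per partition.
    return [(key, reduce(merge, values)) for key, values in grouped.items()]
```

On the cost concern: chaining 1000+ separate reduceByKey calls builds a long lineage, whereas a single union of all batches followed by one reduceByKey usually shuffles the data only once, so the latter is typically the cheaper shape of this loop.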
Re: How to read multiple HDFS directories
Sorry, but I didn't get the question. It is possible that one record is present in multiple directories; that's why we do a reduceByKey after the union step.

On Wed, May 5, 2021 at 9:20 PM Mich Talebzadeh wrote:
> When you are doing a union on these RDDs (each RDD has a one-to-one
> correspondence with an HDFS directory), do you have a common key across all?
Re: How to read multiple HDFS directories
Hi Mich,

I went through the thread and it doesn't relate to the problem statement I shared above.

In my problem statement, there is a simple ETL job which doesn't use any external library (such as pandas). This is the flow:

    hdfsDirs := List()   // contains N directories

    rddList := List()
    for each directory in hdfsDirs:
        rdd = spark.read(directory)
        rdd.map()        // augment the data with additional directory-related data
        rddList.add(rdd)

    finalRdd = spark.union(rddList)       // number of tasks = N*K; files are
                                          // distributed unevenly across executors here

    finalRdd.partitionBy(hashPartitioner) // here tasks take uneven time

Is it possible to make the union step read the directories evenly on each executor? That way, each executor will have roughly the same amount of data.

On Wed, May 5, 2021 at 8:35 PM Mich Talebzadeh wrote:
> Hi,
>
> Have a look at this thread called
>
> Tasks are skewed to one executor
>
> and see if it helps and we can take it from there.
>
> HTH
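On the question of making the union step spread data evenly: one possible shape (a sketch with hypothetical names, not the original job) is to collapse the per-directory loop into a single read — `sc.textFile` accepts a comma-separated list of paths — and then repartition explicitly so the large directory's blocks are shuffled across all executors before the keyed stage.

```python
def comma_paths(dirs):
    """sc.textFile accepts a comma-separated list of paths, so N directories
    can go into one read call instead of N reads plus a union."""
    return ",".join(dirs)


def read_balanced(sc, dirs, num_partitions):
    """Single read over all directories, then an explicit shuffle so one
    oversized (e.g. 12 GB) directory's blocks are spread across all
    executors before the expensive partitionBy stage."""
    rdd = sc.textFile(comma_paths(dirs))
    return rdd.repartition(num_partitions)  # e.g. a few x sc.defaultParallelism
```

The trade-off is that the per-directory augmentation can no longer be applied per-RDD; the source directory has to be recovered per record instead (for example from the file name).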
How to read multiple HDFS directories
Hi,

I am facing issues while reading multiple HDFS directories. Please read the problem statement and current approach below.

*Problem Statement*
There are N HDFS directories, each having K files. We want to read data from all directories such that when we read data from directory D, we map all the data and augment it with additional information specific to that directory.

*Current Approach*
In the current approach, we iterate over the directories, read each one into an RDD, map the RDD, and then put the RDD into a list. After all N directories have been read, we have a list of N RDDs. We call Spark's union on the list to merge them together.

This approach causes data skew because one directory is 12 GB whereas the other RDDs are less than 1 GB each. So when the large RDD's turn comes, Spark submits its tasks on the available executors, causing that RDD to be present on a few executors instead of being spread across all of them.

Is there a way to avoid this data skew? I couldn't find any RDD API or Spark config which could enforce spreading the data-reading tasks evenly across all executors.

--
Regards
Kapil Garg

*This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error, please notify the system manager. This message contains confidential information and is intended only for the individual named. If you are not the named addressee, you should not disseminate, distribute or copy this email. Please notify the sender immediately by email if you have received this email by mistake and delete this email from your system. If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.*

*Any views or opinions presented in this email are solely those of the author and do not necessarily represent those of the organization.
Any information on shares, debentures or similar instruments, recommended product pricing, valuations and the like are for information purposes only. It is not meant to be an instruction or recommendation, as the case may be, to buy or to sell securities, products, services nor an offer to buy or sell securities, products or services unless specifically stated to be so on behalf of the Flipkart group. Employees of the Flipkart group of companies are expressly required not to make defamatory statements and not to infringe or authorise any infringement of copyright or any other legal right by email communications. Any such communication is contrary to organizational policy and outside the scope of the employment of the individual concerned. The organization will not accept any liability in respect of such communication, and the employee responsible will be personally liable for any damages or other liability arising.*

*Our organization accepts no liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided, unless that information is subsequently confirmed in writing. If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.*
Re: Single executor processing all tasks in spark structured streaming kafka
Hi Sachit,

What do you mean by "Spark is running only 1 executor with 1 task"? Did you submit the Spark application with multiple executors, but only one is being used while the rest are idle?

If that's the case, it might happen due to the spark.locality.wait setting, which defaults to 3s. Spark will wait up to 3s for tasks to finish on an executor before scheduling the next batch on other executors. This happens because of Spark's preference for cached Kafka consumers.

Regarding one task doing all the processing: please check whether your Kafka topic has only one partition. Spark draws its parallelism from the number of partitions in the Kafka topic. Once you have loaded the data from the partitions, you can choose to repartition the batch so it is processed by multiple tasks.

On Mon, Mar 8, 2021 at 10:57 PM Sachit Murarka wrote:
> Hi All,
>
> I am using Spark 3.0.1 Structured Streaming with PySpark.
>
> The problem is Spark is running only 1 executor with 1 task. Following is
> a summary of what I am doing.
>
> Can anyone help on why my executor count is only 1?
>
>     def process_events(event):
>         fetch_actual_data()
>         # many more steps
>
>     def fetch_actual_data():
>         # applying operation on actual data
>
>     df = spark.readStream.format("kafka") \
>         .option("kafka.bootstrap.servers", KAFKA_URL) \
>         .option("subscribe", KAFKA_TOPICS) \
>         .option("startingOffsets", START_OFFSET) \
>         .load() \
>         .selectExpr("CAST(value AS STRING)")
>
>     query = df.writeStream.foreach(process_events) \
>         .option("checkpointLocation", "/opt/checkpoint") \
>         .trigger(processingTime="30 seconds") \
>         .start()
>
> Kind Regards,
> Sachit Murarka

--
Regards
Kapil Garg
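For the single-partition-topic case, a hedged sketch of the two knobs mentioned above (the broker, topic, and offset values are placeholders, not Sachit's actual settings): the Kafka source's `minPartitions` option (available since Spark 2.4) asks Spark to split the Kafka partitions into more read tasks, and `spark.locality.wait=0s` stops the scheduler from holding tasks back for the executor that owns the cached consumer.

```python
# Placeholder connection values; substitute your own.
KAFKA_OPTS = {
    "kafka.bootstrap.servers": "broker:9092",
    "subscribe": "events",
    "startingOffsets": "latest",
    # Ask Spark to split the Kafka partitions into at least this many tasks.
    "minPartitions": "8",
}


def build_stream(spark):
    """Build the streaming DataFrame with the options above applied."""
    reader = spark.readStream.format("kafka")
    for key, value in KAFKA_OPTS.items():
        reader = reader.option(key, value)
    return reader.load().selectExpr("CAST(value AS STRING)")


# Submit-time counterpart, so tasks are not delayed waiting for the
# executor that holds the cached Kafka consumer:
#   spark-submit --conf spark.locality.wait=0s ...
```

Alternatively, a `df.repartition(n)` before the heavy per-record work achieves a similar spread once the batch is loaded.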
Re: Spark Version 3.0.1 Gui Display Query
Hi Ranju,

The screenshots and logs you shared are from the Spark driver and executors. I meant for you to check the web page logs in the Chrome console. There might be error logs indicating why the UI is unable to fetch the information.

I have faced a similar problem when accessing the Spark UI via a proxy: the proxy had trouble resolving the backend URL, and data was not visible in the Executors tab.

Just check the Chrome console logs once, and if you find any error logs, do share them here for others to look at.

On Fri, Mar 5, 2021 at 9:35 AM Ranju Jain wrote:
> Hi Attila,
>
> Ok, I understood. I will switch on event logs.
>
> Regards
> Ranju
>
> -----Original Message-----
> From: Attila Zsolt Piros
> Sent: Thursday, March 4, 2021 11:38 PM
> To: user@spark.apache.org
> Subject: RE: Spark Version 3.0.1 Gui Display Query
>
> Hi Ranju!
>
> I meant the event log would be very helpful for analyzing the problem at
> your side.
>
> The three logs together (driver, executors, event) are best from the same
> run, of course.
>
> I know you want to check the Executors tab while the job is running, and
> for this you do not need the event log. But the event log is still useful
> for finding out what happened.
>
> Regards,
> Attila

--
Regards
Kapil Garg
Re: Spark Version 3.0.1 Gui Display Query
Okay. Please share the console output and network logs of the Executors tab from the browser.

On Tue, Mar 2, 2021 at 11:56 AM Ranju Jain wrote:
> Hi Kapil,
>
> I am not able to see the executor info throughout the application lifetime.
>
> Attaching screenshots:
> 1. Jobs tab during application start
> 2. Executors tab during the application lifetime
>
> I need to tune my application, and this executor info would be a great
> help for tuning the parameters. But currently it is shown blank.
>
> Regards
> Ranju
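When the Executors tab renders blank, the same data can be pulled directly from the driver's monitoring REST API as a cross-check; if the JSON comes back populated, the problem is in the page (proxy, console errors), not in Spark. The base URL and application id below are placeholders.

```python
import json
from urllib.request import urlopen

# Placeholder driver UI address; the REST API lives on the same port as the UI.
SPARK_UI = "http://localhost:4040"


def executors_url(base, app_id):
    """Build the monitoring REST endpoint that backs the Executors tab."""
    return f"{base}/api/v1/applications/{app_id}/executors"


def fetch_executors(base, app_id):
    """Fetch the executor summaries as parsed JSON."""
    with urlopen(executors_url(base, app_id)) as resp:
        return json.load(resp)
```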
Re: Spark Version 3.0.1 Gui Display Query
Hi Ranju,

Is it happening just after you submit the Spark application, or are you not able to see executor info throughout the application lifetime? You won't be able to see any info there until executors have been added to the application and tasks have been submitted.

On Tue, Mar 2, 2021 at 11:04 AM Ranju Jain wrote:
> Hi,
>
> I started using Spark version 3.0.1 recently and noticed that the
> Executors tab on the Spark GUI appears blank.
>
> Please suggest what could be the reason for this type of display?
>
> Regards
> Ranju

--
Regards
Kapil Garg