Re: A proposal for creating a Knowledge Sharing Hub for Apache Spark Community

2024-03-19 Thread Mich Talebzadeh
One option that comes to mind, given the cyclic nature of these types of
proposals in these two forums, is to use Databricks's existing Knowledge
Sharing Hub as well.

The majority of topics will be of interest to their audience as well. In
addition, they seem to invite everyone to contribute. Unless there is an
overriding concern why we should not take this approach, I can ask the
Databricks community managers whether they can entertain this idea. They
seem to have a well-defined structure for hosting topics.

Let me know your thoughts.

Thanks

Mich Talebzadeh,
Dad | Technologist | Solutions Architect | Engineer
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* The information provided is correct to the best of my
knowledge but of course cannot be guaranteed. It is essential to note
that, as with any advice, "one test result is worth one-thousand expert
opinions" (Wernher von Braun).


On Tue, 19 Mar 2024 at 08:25, Joris Billen wrote:

> +1
>
>
> On 18 Mar 2024, at 21:53, Mich Talebzadeh 
> wrote:
>
> Well, as long as it works.
>
> Please all check this link from Databricks and let us know your thoughts.
> Will something similar work for us? Of course Databricks have much deeper
> pockets than our ASF community. Will it require moderation on our side to
> block spam and nutcases?
>
> Knowledge Sharing Hub - Databricks
> 
>
>
> Mich Talebzadeh,
> Dad | Technologist | Solutions Architect | Engineer
> London
> United Kingdom
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* The information provided is correct to the best of my
> knowledge but of course cannot be guaranteed. It is essential to note
> that, as with any advice, "one test result is worth one-thousand expert
> opinions" (Wernher von Braun).
>
>
> On Mon, 18 Mar 2024 at 20:31, Bjørn Jørgensen 
> wrote:
>
>> Something like this: Spark community · GitHub
>>
>>
>> man. 18. mars 2024 kl. 17:26 skrev Parsian, Mahmoud
>> :
>>
>>> Good idea. Will be useful
>>>
>>> +1
>>>
>>> *From: *ashok34...@yahoo.com.INVALID 
>>> *Date: *Monday, March 18, 2024 at 6:36 AM
>>> *To: *user @spark , Spark dev list <
>>> d...@spark.apache.org>, Mich Talebzadeh 
>>> *Cc: *Matei Zaharia 
>>> *Subject: *Re: A proposal for creating a Knowledge Sharing Hub for
>>> Apache Spark Community
>>>
>>> External message, be mindful when clicking links or attachments
>>>
>>> Good idea. Will be useful
>>>
>>> +1
>>>
>>> On Monday, 18 March 2024 at 11:00:40 GMT, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
>>> Some of you may be aware that Databricks community Home | Databricks
>>> have just launched a knowledge sharing hub. I thought it would be a
>>> good idea for the Apache Spark user group to have the same, especially
>>> for repeat questions on Spark core, Spark SQL, Spark Structured
>>> Streaming, Spark MLlib and so forth.
>>>
>>> Apache Spark user and dev groups have been around for a good while.
>>> They are serving their purpose. We went through creating a Slack
>>> community that managed to create more heat than light. This is
>>> what the Databricks community came up with, and I quote:
>>>
>>> "Knowledge Sharing Hub
>>> Dive into a collaborative space where members like YOU can exchange
>>> knowledge, tips, and best practices. Join the conversation today and
>>> unlock a wealth of collective wisdom to enhance your experience and
>>> drive success."
>>>
>>> I don't know the logistics of setting it up, but I am sure that should
>>> not be that difficult. If anyone is supportive of this proposal, let
>>> the usual +1, 0, -1 decide.
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>> Dad | Technologist | Solutions Architect | Engineer
>>> London
>>> United Kingdom
>>>
>>> view my Linkedin profile
>>>
>>> https://en.everybodywiki.com/Mich_Talebzadeh

Spark-UI stages and other tabs not accessible in standalone mode when reverse-proxy is enabled

2024-03-19 Thread sharad mishra
Hi Team,
We're encountering an issue with the Spark UI.
I've documented the details here:
https://issues.apache.org/jira/browse/SPARK-47232
When reverse proxy is enabled in the master and worker configOptions, we
are not able to access the different tabs available in the Spark UI,
e.g. Stages, Environment, Storage, etc.

We're deploying Spark through the Bitnami Helm chart:
https://github.com/bitnami/charts/tree/main/bitnami/spark

Name and Version

bitnami/spark - 6.0.0

What steps will reproduce the bug?

Kubernetes Version: 1.25
Spark: 3.4.2
Helm chart: 6.0.0

Steps to reproduce:
After installing the chart, the Spark cluster (master and worker) UI is
available at:

https://spark.staging.abc.com/

We are able to access a running application by clicking on the
applicationID under the Running Applications link.

We can access the Spark UI by clicking Application Detail UI, which takes
us to the Jobs tab.


The URL looks like:
https://spark.staging.abc.com/proxy/app-20240208103209-0030/stages/

When we click any of the tabs in the Spark UI, e.g. Stages or Environment,
it takes us back to the Spark cluster UI page. We noticed that the
endpoint changes to

https://spark.staging.abc.com/stages/

instead of

https://spark.staging.abc.com/proxy/app-20240208103209-0030/stages/



Are you using any custom parameters or values?

Configurations set in values.yaml
```
master:
  configOptions:
    -Dspark.ui.reverseProxy=true
    -Dspark.ui.reverseProxyUrl=https://spark.staging.abc.com

worker:
  configOptions:
    -Dspark.ui.reverseProxy=true
    -Dspark.ui.reverseProxyUrl=https://spark.staging.abc.com

service:
  type: ClusterIP
  ports:
    http: 8080
    https: 443
    cluster: 7077

ingress:
  enabled: true
  pathType: ImplementationSpecific
  apiVersion: ""
  hostname: spark.staging.abc.com
  ingressClassName: "staging"
  path: /
```
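For reference, the two reverse-proxy options above map directly to plain
Spark configuration properties. Outside the Helm chart they would
typically be set in spark-defaults.conf (a sketch only, assuming the same
hostname as above):

```
spark.ui.reverseProxy     true
spark.ui.reverseProxyUrl  https://spark.staging.abc.com
```

With spark.ui.reverseProxy enabled, the master acts as a reverse proxy
for the worker and application UIs; spark.ui.reverseProxyUrl is the
externally visible URL the proxied links should be rewritten against.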



What is the expected behavior?

Expected behaviour is that when I click on the Stages tab, instead of
taking me to
https://spark.staging.abc.com/stages/
it should take me to the following URL:
https://spark.staging.abc.com/proxy/app-20240208103209-0030/stages/

What do you see instead?

The current behaviour is that it takes me to
https://spark.staging.abc.com/stages/ , which shows the Spark cluster UI
with master and worker details.

Would appreciate any help on this. Thanks.

Best,
Sharad


Re: A proposal for creating a Knowledge Sharing Hub for Apache Spark Community

2024-03-19 Thread Joris Billen
+1




Re: pyspark - Where are Dataframes created from Python objects stored?

2024-03-19 Thread Varun Shah
Hi @Mich Talebzadeh, community,

Where can I find such insights on the Spark architecture?

I found a few sites below which cover the internals:
1. https://github.com/JerryLead/SparkInternals
2. https://books.japila.pl/apache-spark-internals/overview/
3. https://stackoverflow.com/questions/30691385/how-spark-works-internally

Most of them are quite old; hoping the basic internals have not changed,
where can we find more information? Asking in case you or someone from
the community has more articles / videos / document links to share.

Appreciate your help.


Regards,
Varun Shah



On Fri, Mar 15, 2024, 03:10 Mich Talebzadeh 
wrote:

> Hi,
>
> When you create a DataFrame from Python objects using
> spark.createDataFrame, here is what happens:
>
>
> *Initial Local Creation:*
> The DataFrame is initially created in the memory of the driver node. The
> data is not yet distributed to executors at this point.
>
> *The role of lazy Evaluation:*
>
> Spark applies lazy evaluation, *meaning transformations are not executed
> immediately*. It constructs a logical plan describing the operations,
> but data movement does not occur yet.
>
> *Action Trigger:*
>
> When you initiate an action (things like show(), collect(), etc), Spark
> triggers the execution.
>
>
>
> *When partitioning and distribution come in:*
>
> Spark partitions the DataFrame into logical chunks for parallel
> processing. It divides the data based on a partitioning scheme (the
> default is hash partitioning). Each partition is sent to different
> executor nodes for distributed execution. This stage involves data
> transfer across the cluster, but it is not the expensive shuffle you
> have heard of. Shuffles happen with repartitioning or certain join
> operations.
>
> *Storage on Executors:*
>
> Executors receive their assigned partitions and store them in their
> memory. If memory is limited, Spark spills partitions to disk; look at
> the Stages tab in the UI (port 4040).
>
>
> *In summary:*
> No Data Transfer During Creation: --> Data transfer occurs only when an
> action is triggered.
> Distributed Processing: --> DataFrames are distributed for parallel
> execution, not stored entirely on the driver node.
> Lazy Evaluation Optimization: --> Delaying data transfer until necessary
> enhances performance.
> Shuffle vs. Partitioning: --> Data movement during partitioning is not
> considered a shuffle in Spark terminology.
> Shuffles involve more complex data rearrangement.
>
> *Considerations: *
> Large DataFrames: For very large DataFrames
>
>- manage memory carefully to avoid out-of-memory errors. Consider
>options like:
>- Increasing executor memory
>- Using partitioning strategies to optimize memory usage
>- Employing techniques like checkpointing to persistent storage (hard
>disks) or caching for memory efficiency
>- You can get additional info from Spark UI default port 4040 tabs
>like SQL and executors
>- Spark uses Catalyst optimiser for efficient execution plans.
>df.explain("extended") shows both logical and physical plans
>
> HTH
>
> Mich Talebzadeh,
> Dad | Technologist | Solutions Architect | Engineer
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* The information provided is correct to the best of my
> knowledge but of course cannot be guaranteed. It is essential to note
> that, as with any advice, "one test result is worth one-thousand expert
> opinions" (Wernher von Braun).
>
>
> On Thu, 14 Mar 2024 at 19:46, Sreyan Chakravarty wrote:
>
>> I am trying to understand Spark Architecture.
>>
>> For DataFrames that are created from Python objects, i.e. that are
>> *created in memory, where are they stored?*
>>
>> Take following example:
>>
>> from pyspark.sql import Row
>> import datetime
>> courses = [
>> {
>> 'course_id': 1,
>> 'course_title': 'Mastering Python',
>> 'course_published_dt': datetime.date(2021, 1, 14),
>> 'is_active': True,
>> 'last_updated_ts': datetime.datetime(2021, 2, 18, 16, 57, 25)
>> }
>>
>> ]
>>
>>
>> courses_df = spark.createDataFrame([Row(**course) for course in courses])
>>
>>
>> Where is the dataframe stored when I invoke the call:
>>
>> courses_df = spark.createDataFrame([Row(**course) for course in courses])
>>
>> Does it:
>>
>> 1. Send the data to a random executor?
>>    - Does this mean this counts as a shuffle?
>>
>> 2. Or does it stay on the driver node?
>>    - That does not make sense when the dataframe grows large.
>>
>> --
>> Regards,
>> Sreyan Chakravarty
>>
>
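Mich's point above that rows are divided by a partitioning scheme (hash
partitioning by default) can be illustrated in plain Python. This is an
illustrative sketch only, not Spark's actual partitioner (Spark's
HashPartitioner lives in Scala); the helper `hash_partition` and the toy
rows are made up for the example:

```python
def hash_partition(rows, key_fn, num_partitions):
    """Assign each row to a partition by hashing its key, as hash
    partitioning does: partition index = hash(key) % num_partitions."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        idx = hash(key_fn(row)) % num_partitions
        partitions[idx].append(row)
    return partitions

# Toy rows, shaped like the courses example in the quoted question.
courses = [
    {"course_id": 1, "course_title": "Mastering Python"},
    {"course_id": 2, "course_title": "Spark SQL"},
    {"course_id": 3, "course_title": "Structured Streaming"},
]

parts = hash_partition(courses, key_fn=lambda r: r["course_id"], num_partitions=2)
total = sum(len(p) for p in parts)
print(total)  # 3: every row lands in exactly one partition
```

The same key always hashes to the same partition, which is why plain
partitioning is cheap and deterministic, while a shuffle (repartitioning,
joins) must move rows between partitions across the cluster.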


Re: A proposal for creating a Knowledge Sharing Hub for Apache Spark Community

2024-03-19 Thread Varun Shah
+1. Great initiative.

QQ: Stack Overflow has a similar feature called "Collectives", but I am
not sure of the expense of creating one for Apache Spark. With SO being
widely used (at least before ChatGPT became the norm for searching
questions), it already has a lot of questions asked and answered by the
community over a period of time. Hence, if possible, we could leverage it
as the starting point for building a community before creating a
completely new website from scratch. Any thoughts on this?

Regards,
Varun Shah


On Mon, Mar 18, 2024, 16:29 Mich Talebzadeh 
wrote:

> Some of you may be aware that Databricks community Home | Databricks
> have just launched a knowledge sharing hub. I thought it would be a
> good idea for the Apache Spark user group to have the same, especially
> for repeat questions on Spark core, Spark SQL, Spark Structured
> Streaming, Spark MLlib and so forth.
>
> Apache Spark user and dev groups have been around for a good while.
> They are serving their purpose. We went through creating a Slack
> community that managed to create more heat than light. This is
> what the Databricks community came up with, and I quote:
>
> "Knowledge Sharing Hub
> Dive into a collaborative space where members like YOU can exchange
> knowledge, tips, and best practices. Join the conversation today and
> unlock a wealth of collective wisdom to enhance your experience and
> drive success."
>
> I don't know the logistics of setting it up, but I am sure that should
> not be that difficult. If anyone is supportive of this proposal, let
> the usual +1, 0, -1 decide.
>
> HTH
>
> Mich Talebzadeh,
> Dad | Technologist | Solutions Architect | Engineer
> London
> United Kingdom
>
>
>view my Linkedin profile
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> Disclaimer: The information provided is correct to the best of my
> knowledge but of course cannot be guaranteed. It is essential to note
> that, as with any advice, "one test result is worth one-thousand
> expert opinions" (Wernher von Braun).
>
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>