Re: Re: [VOTE] Release Apache Spark 3.4.0 (RC4)

2023-03-10 Thread Xinrong Meng
Thank you @beliefer.

On Sat, Mar 11, 2023 at 9:54 AM beliefer wrote:

> There is a bug fix.
>
> https://issues.apache.org/jira/browse/SPARK-42740
>
>
>
> On 2023-03-10 20:48:30, "Xinrong Meng" wrote:
>
> https://issues.apache.org/jira/browse/SPARK-42745 can be a new release
> blocker, thanks @Peter Toth for reporting that.


Re: Re: [VOTE] Release Apache Spark 3.4.0 (RC4)

2023-03-10 Thread beliefer
There is a bug fix.

https://issues.apache.org/jira/browse/SPARK-42740

On 2023-03-10 20:48:30, "Xinrong Meng" wrote:

https://issues.apache.org/jira/browse/SPARK-42745 can be a new release blocker, 
thanks @Peter Toth for reporting that.




Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Mich Talebzadeh
Agreed, this needs to be enhanced!


HTH

On Fri, 10 Mar 2023 at 19:15, Ismail Yenigul wrote:

> Hi Mich,
>
> The issue here is that there is no parameter to set the executor pod
> memory request value. Currently we have only one parameter,
> spark.executor.memory, and it sets both the pod resource limits and
> requests.

Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Ismail Yenigul
Hi Mich,

The issue here is that there is no parameter to set the executor pod
memory request value. Currently we have only one parameter,
spark.executor.memory, and it sets both the pod resource limits and
requests.

On Fri, 10 Mar 2023 at 22:04, Mich Talebzadeh wrote:

> Yes, both EKS and GKE (Google) are on 3.1.2, so I am not sure those
> parameters will work :(

Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Ismail Yenigul
It means my requested feature is not available in the latest branch
either ;)


On Fri, 10 Mar 2023 at 21:05, Bjørn Jørgensen wrote:

> Strange to see that you are using Spark 3.1.2, which is EOL, while you
> are reading source files from 3.4.0-SNAPSHOT.


Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Mich Talebzadeh
I forgot to ask which k8s cluster you are using, assuming some cloud vendor.





Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Bjørn Jørgensen
Strange to see that you are using Spark 3.1.2, which is EOL, while you are
reading source files from 3.4.0-SNAPSHOT.



Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Ismail Yenigul
And if you look at the code:

https://github.com/apache/spark/blob/e64262f417bf381bdc664dfd1cbcfaa5aa7221fe/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala#L194

.editOrNewResources()
  .addToRequests("memory", executorMemoryQuantity)
  .addToLimits("memory", executorMemoryQuantity)
  .addToRequests("cpu", executorCpuQuantity)
  .addToLimits(executorResourceQuantities.asJava)
.endResources()

addToRequests and addToLimits for memory are given the same value.
Maybe it is by design, but can I set custom values for them if I use a
pod template?





Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Ismail Yenigul
Hi,

We are using Spark version 3.1.2.

spark.executor.memory is set. But the problem is not setting
spark.executor.memory; the problem is that whatever value I set for
spark.executor.memory, the Spark executor pod has the same value for
resources.limits.memory and resources.requests.memory. I want to be able
to set different values for them.




On Fri, 10 Mar 2023 at 20:44, Mich Talebzadeh wrote:

> What are those currently set to in spark-submit, and which Spark version
> on k8s?
>
>   --conf spark.driver.memory=2000m \
>   --conf spark.executor.memory=2000m \
>
> HTH


Re: spark executor pod has same memory value for request and limit

2023-03-10 Thread Mich Talebzadeh
What are those currently set to in spark-submit, and which Spark version
on k8s?

  --conf spark.driver.memory=2000m \
  --conf spark.executor.memory=2000m \

HTH





spark executor pod has same memory value for request and limit

2023-03-10 Thread Ismail Yenigul
Hi,

There are CPU parameters for the Spark executor on k8s,
spark.kubernetes.executor.limit.cores and
spark.kubernetes.executor.request.cores, but there is no parameter to
set a memory request different from the memory limit (such as a
spark.kubernetes.executor.request.memory). For that reason,
spark.executor.memory is assigned to both requests.memory and
limits.memory, like the following:

Limits:
  memory:  5734Mi
Requests:
  cpu:     4
  memory:  5734Mi

Is there any special reason not to have a
spark.kubernetes.executor.request.memory parameter? And can I use the
spark.kubernetes.executor.podTemplateFile parameter to set a smaller
memory request than the memory limit in the pod template file?

Limits:
  memory:  5734Mi
Requests:
  cpu:     4
  memory:  1024Mi

Thanks
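
As an aside, a quick way to confirm what an executor pod actually
received is standard kubectl (the pod name below is a placeholder):

  kubectl get pod <executor-pod-name> \
    -o jsonpath='{.spec.containers[0].resources}'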


Unsubscribe

2023-03-10 Thread Gary Liu
unsubscribe

-- 
Gary Liu


Re: [VOTE] Release Apache Spark 3.4.0 (RC4)

2023-03-10 Thread Xinrong Meng
https://issues.apache.org/jira/browse/SPARK-42745 can be a new release
blocker, thanks @Peter Toth for reporting that.



Re: [VOTE] Release Apache Spark 3.4.0 (RC3)

2023-03-10 Thread Xinrong Meng
Hi Peter,

Thank you for raising that! Unfortunately, v3.4.0-rc4 has been cut already.

On Fri, Mar 10, 2023 at 8:15 PM Peter Toth wrote:

> Hi Xinrong,
>
> I've opened a PR to fix a regression from 3.3 to 3.4:
> https://github.com/apache/spark/pull/40364
> Please wait with the RC4 cut if possible.
>
> Thanks,
> Peter
>
> Xinrong Meng wrote (on Fri, 10 Mar 2023 at 0:07):
>
>> Thank you Hyukjin! :)
>>
>> I would prefer to cut v3.4.0-rc4 now if there are no objections.
>>
>> On Fri, Mar 10, 2023 at 7:01 AM Hyukjin Kwon  wrote:
>>
>>> BTW doing another RC isn't a very big deal (compared to what I did
>>> before :-) ) since it's not a canonical release yet.
>>>
>>> On Fri, Mar 10, 2023 at 7:58 AM Hyukjin Kwon 
>>> wrote:
>>>
 I guess directly tagging is fine too.
 I don't mind cutting the RC4 right away either if that's what you
 prefer.

 On Fri, Mar 10, 2023 at 7:06 AM Xinrong Meng 
 wrote:

> Hi All,
>
> Thank you all for catching that. Unfortunately, the release script
> failed to push the release tag v3.4.0-rc3 to branch-3.4. Sorry about the
> issue.
>
> Shall we cut v3.4.0-rc4 immediately or wait until March 14th?
>
> On Fri, Mar 10, 2023 at 5:34 AM Sean Owen  wrote:
>
>> If the issue were just tags, then you can simply delete the tag and
>> re-tag the right commit. That doesn't change a commit log.
>> But is the issue that the relevant commits aren't in branch-3.4? Like
>> I don't see the usual release commits in
>> https://github.com/apache/spark/commits/branch-3.4
>> Yeah OK that needs a re-do.
>>
>> We can still test this release.
>> It works for me, except that I still get the weird
>> infinite-compile-loop issue that doesn't seem to be related to Spark. The
>> Spark Connect parts seem to work.
>>
>> On Thu, Mar 9, 2023 at 3:25 PM Dongjoon Hyun 
>> wrote:
>>
>>> No, we cannot with the commit log as-is, because it's already screwed
>>> up, as Emil wrote.
>>> Did you check the branch-3.2 commit log, Sean?
>>>
>>> Dongjoon.
>>>
>>>
>>> On Thu, Mar 9, 2023 at 11:42 AM Sean Owen  wrote:
>>>
 We can just push the tags onto the branches as needed right? No
 need to roll a new release

 On Thu, Mar 9, 2023, 1:36 PM Dongjoon Hyun 
 wrote:

> Yes, I also confirmed that the v3.4.0-rc3 tag is invalid.
>
> I guess we need RC4.
>
> Dongjoon.
>
> On Thu, Mar 9, 2023 at 7:13 AM Emil Ejbyfeldt
>  wrote:
>
>> It might be caused by the v3.4.0-rc3 tag not being part of the 3.4
>> branch, branch-3.4:
>>
>> $ git log --pretty='format:%d %h' --graph origin/branch-3.4 v3.4.0-rc3 | head -n 10
>> *  (HEAD, origin/branch-3.4) e38e619946
>> *  f3e69a1fe2
>> *  74cf1a32b0
>> *  0191a5bde0
>> *  afced91348
>> | *  (tag: v3.4.0-rc3) b9be9ce15a
>> |/
>> *  006e838ede
>> *  fc29b07a31
>> *  8655dfe66d
>>
>>
>> Best,
>> Emil
>>
>> On 09/03/2023 15:50, yangjie01 wrote:
>> > Hi, all
>> >
>> > I can't git check out the tag of v3.4.0-rc3. At the same time, there is
>> > the following information on the GitHub page.
>> >
>> > Does anyone else have the same problem?
>> >
>> > Yang Jie
>> >
>> > *From:* Xinrong Meng
>> > *Date:* Thursday, March 9, 2023, 20:05
>> > *To:* dev
>> > *Subject:* [VOTE] Release Apache Spark 3.4.0 (RC3)
>> >
>> > Please vote on releasing the following candidate (RC3) as Apache Spark
>> > version 3.4.0.
>> >
>> > The vote is open until 11:59pm Pacific time *March 14th* and passes if
>> > a majority of +1 PMC votes are cast, with a minimum of 3 +1 votes.
>> >
>> > [ ] +1 Release this package as Apache Spark 3.4.0
>> > [ ] -1 Do not release this package because ...
>> >
>> > To learn more about Apache Spark, please see http://spark.apache.org/
>> >
>> > The tag to be voted on is *v3.4.0-rc3* (commit
>> > b9be9ce15a82b18cca080ee365d308c0820a29a9):
>> > https://github.com/apache/spark/tree/v3.4.0-rc3
>> >
>> > The release files, including signatures, digests, etc. can be found at:
>> > https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc3-bin/

[VOTE] Release Apache Spark 3.4.0 (RC4)

2023-03-10 Thread Xinrong Meng
Please vote on releasing the following candidate (RC4) as Apache Spark
version 3.4.0.

The vote is open until 11:59pm Pacific time *March 15th* and passes if a
majority of +1 PMC votes are cast, with a minimum of 3 +1 votes.

[ ] +1 Release this package as Apache Spark 3.4.0
[ ] -1 Do not release this package because ...

To learn more about Apache Spark, please see http://spark.apache.org/

The tag to be voted on is *v3.4.0-rc4* (commit
4000d6884ce973eb420e871c8d333431490be763):
https://github.com/apache/spark/tree/v3.4.0-rc4

The release files, including signatures, digests, etc. can be found at:
https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc4-bin/

Signatures used for Spark RCs can be found in this file:
https://dist.apache.org/repos/dist/dev/spark/KEYS
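
For example, verifying a download might look like the following sketch
(the artifact name is one of several under the -bin/ directory; adjust
to whichever file you download):

  curl -O https://dist.apache.org/repos/dist/dev/spark/KEYS
  gpg --import KEYS
  curl -O https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc4-bin/spark-3.4.0-bin-hadoop3.tgz
  curl -O https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc4-bin/spark-3.4.0-bin-hadoop3.tgz.asc
  gpg --verify spark-3.4.0-bin-hadoop3.tgz.asc spark-3.4.0-bin-hadoop3.tgz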

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1438

The documentation corresponding to this release can be found at:
https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc4-docs/

The list of bug fixes going into 3.4.0 can be found at the following URL:
https://issues.apache.org/jira/projects/SPARK/versions/12351465

This release was built using the release script of the tag v3.4.0-rc4.


FAQ

=
How can I help test this release?
=
If you are a Spark user, you can help us test this release by taking
an existing Spark workload and running on this release candidate, then
reporting any regressions.

If you're working in PySpark, you can set up a virtual env, install the
current RC, and see if anything important breaks. In Java/Scala, you can
add the staging repository to your project's resolvers and test with the
RC (make sure to clean up the artifact cache before/after so you don't
end up building with an out-of-date RC going forward).
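
For instance, a PySpark smoke test could look like this sketch (the
tarball name is an assumption; use whatever pyspark-*.tar.gz sits in the
-bin/ directory above):

  python -m venv spark-rc4 && source spark-rc4/bin/activate
  pip install https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc4-bin/pyspark-3.4.0.tar.gz

and an sbt build can point at the staging repository above along these
lines:

  resolvers += "Spark 3.4.0 RC4 staging" at "https://repository.apache.org/content/repositories/orgapachespark-1438/"
  libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.4.0"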

===
What should happen to JIRA tickets still targeting 3.4.0?
===
The current list of open tickets targeted at 3.4.0 can be found at:
https://issues.apache.org/jira/projects/SPARK and search for "Target
Version/s" = 3.4.0

Committers should look at those and triage. Extremely important bug
fixes, documentation, and API tweaks that impact compatibility should
be worked on immediately. Everything else should be retargeted to an
appropriate release.

==
But my bug isn't fixed?
==
In order to make timely releases, we will typically not hold the
release unless the bug in question is a regression from the previous
release. That being said, if there is something which is a regression
that has not been correctly targeted please ping me or a committer to
help target the issue.

Thanks,
Xinrong Meng


Re: [VOTE] Release Apache Spark 3.4.0 (RC3)

2023-03-10 Thread Peter Toth
Hi Xinrong,

I've opened a PR to fix a regression from 3.3 to 3.4:
https://github.com/apache/spark/pull/40364
Please wait with the RC4 cut if possible.

Thanks,
Peter

Xinrong Meng wrote (on Fri, 10 Mar 2023 at 0:07):

> Thank you Hyukjin! :)
>
> I would prefer to cut v3.4.0-rc4 now if there are no objections.

Re: [VOTE] Release Apache Spark 3.4.0 (RC3)

2023-03-10 Thread Mridul Muralidharan
Other than the tag issue, the sigs/artifacts/build/etc. worked for me,
so the next RC candidate looks promising!

Regards,
Mridul
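
For anyone re-checking the tag issue from the RC3 thread, one standard
git way (not from this thread) to confirm an RC tag is reachable from
its release branch:

  git fetch origin branch-3.4 --tags
  # exit status 0 means the tagged commit is an ancestor of the branch
  git merge-base --is-ancestor v3.4.0-rc4 origin/branch-3.4 && echo "tag is on branch-3.4"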


On Thu, Mar 9, 2023 at 5:07 PM Xinrong Meng wrote:

> Thank you Hyukjin! :)
>
> I would prefer to cut v3.4.0-rc4 now if there are no objections.