the order in which Maven executes the test cases in
> the `connect` module.
>
>
>
> I have submitted a backport PR
> <https://github.com/apache/spark/pull/45141> to branch-3.5, and if
> necessary, we can merge it to fix this test issue.
>
>
>
> Jie Yang
>
>
>
Is anyone seeing this Spark Connect test failure? Then again, I have some
weird issue with this env that always fails 1 or 2 tests that nobody else
can replicate.
- Test observe *** FAILED ***
== FAIL: Plans do not match ===
!CollectMetrics my_metric, [min(id#0) AS min_val#0, max(id#0) AS
I'm not aware of much usage. but that doesn't mean a lot.
FWIW, in the past month or so, the Kinesis docs page got about 700 views,
compared to about 1400 for Kafka
Agreed, that looks wrong. From the code, it seems that "timezone" is only
used for testing, though apparently no test caught this. I'll submit a PR
to patch it in any event: https://github.com/apache/spark/pull/44619
On Mon, Jan 8, 2024 at 1:33 AM Janda Martin wrote:
> I think that
>
It already does. I think that's not the same idea?
On Mon, Dec 4, 2023, 8:12 PM Almog Tavor wrote:
> I think Spark should start shading its problematic deps similar to how
> it’s done in Flink
>
> On Mon, 4 Dec 2023 at 2:57 Sean Owen wrote:
>
>> I am not sure we can con
I am not sure we can control that - the Scala _x.y suffix has particular
meaning in the Scala ecosystem for artifacts and thus the naming of .jar
files. And we need to work with the Scala ecosystem.
What can't handle these files, Spring Boot? Does it somehow assume the .jar
file name relates to
I think we already updated this in Spark 4. However, for now you would also
have to include a JAR with the jakarta.* classes instead.
You are welcome to try Spark 4 now by building from master, but it's far
from release.
On Thu, Oct 5, 2023 at 11:53 AM Ahmed Albalawi
wrote:
> Hello team,
>
> We
I think the announcement mentioned there were some issues with pypi and the
upload size this time. I am sure it's intended to be there when possible.
On Wed, Sep 20, 2023, 3:00 PM Kezhi Xiong wrote:
> Hi,
>
> Are there any plans to upload PySpark 3.5.0 to PyPI (
>
Pyspark follows SQL databases here. stddev is stddev_samp, and sample
standard deviation is the calculation with the Bessel correction, n-1 in
the denominator. stddev_pop is simply standard deviation, with n in the
denominator.
On Tue, Sep 19, 2023 at 7:13 AM Helene Bøe
wrote:
> Hi!
>
>
>
> I
I have seen this, and not sure if it's just the ASF mailer being weird, or
more likely, because emails are moderated and we inadvertently moderate
them out of order
On Mon, Sep 18, 2023 at 10:59 AM Mich Talebzadeh
wrote:
> Hi,
>
> I use gmail to receive spark user group emails.
>
> On
I think it's the same, and always has been - yes you don't have a
guaranteed ordering unless an operation produces a specific ordering. Could
be the result of order by, yes; I believe you would be guaranteed that
reading input files results in data in the order they appear in the file,
etc. 1:1
Yes, should work fine, just set up according to the docs. There needs to be
network connectivity between whatever the driver node is and these 4 nodes.
On Thu, Sep 14, 2023 at 11:57 PM Ilango wrote:
>
> Hi all,
>
> We have 4 HPC nodes and installed spark individually in all nodes.
>
> Spark is
ame issue.
>
>
> org.elasticsearch
> elasticsearch-spark-30_${scala.compat.version}
> 7.12.1
>
>
>
> On Fri, Sep 8, 2023 at 4:41 AM Sean Owen wrote:
>
>> By marking it provided, you are not including this dependency with your
>> app. If it is also
By marking it provided, you are not including this dependency with your
app. If it is also not somehow already provided by your spark cluster (this
is what it means), then yeah this is not anywhere on the class path at
runtime. Remove the provided scope.
On Thu, Sep 7, 2023, 4:09 PM Dipayan Dev
f some other dependency.
>
>
>
> *From:* Sean Owen
> *Sent:* Thursday, August 31, 2023 5:10 PM
> *To:* Agrawal, Sanket
> *Cc:* user@spark.apache.org
> *Subject:* [EXT] Re: Okio Vulnerability in Spark 3.4.1
>
>
>
> Does the vulnerability affect Spark?
>
Does the vulnerability affect Spark?
In any event, have you tried updating Okio in the Spark build? I don't
believe you could just replace the JAR, as other libraries probably rely on
it and compiled against the current version.
On Thu, Aug 31, 2023 at 6:02 AM Agrawal, Sanket
wrote:
> Hi All,
>
I think you're talking past Hyukjin here.
I think the response is: none of that is managed by Pyspark now, and this
proposal does not change that. Your current interpreter and environment is
used to execute the stored procedure, which is just Python code. It's on
you to bring an environment that
to verify?
>
>
>
> Thanks,
>
> Jie Yang
>
>
>
> *From:* Dipayan Dev
> *Date:* Wednesday, August 30, 2023 17:01
> *To:* Sean Owen
> *Cc:* Yuanjian Li , Spark dev list <
> dev@spark.apache.org>
> *Subject:* Re: [VOTE] Release Apache Spark 3.5.0 (RC3)
>
>
It looks good except that I'm getting errors running the Spark Connect
tests at the end (Java 17, Scala 2.13). It looks like I missed something
necessary to build; is anyone getting this?
[ERROR] [Error]
Looks like Spark 3.4.1 (my version) uses Scala 2.12
> How do I specify the scala version?
>
> On Mon, Aug 21, 2023 at 4:47 PM Sean Owen wrote:
>
>> That's a mismatch in the version of scala that your library uses vs spark
>> uses.
>>
>> On Mon, Aug 21, 2023, 6:
That's a mismatch in the version of scala that your library uses vs spark
uses.
On Mon, Aug 21, 2023, 6:46 PM Kal Stevens wrote:
> I am having a hard time figuring out what I am doing wrong here.
> I am not sure if I have an incompatible version of something installed or
> something else.
> I
+1 this looks better to me. Works with Scala 2.13 / Java 17 for me.
On Sat, Aug 19, 2023 at 3:23 AM Yuanjian Li wrote:
> Please vote on releasing the following candidate(RC2) as Apache Spark
> version 3.5.0.
>
> The vote is open until 11:59pm Pacific time Aug 23rd and passes if a
> majority +1
Yeah, we generally don't respond to "look at the output of my static
analyzer".
Some of these are already addressed in a later version.
Some don't affect Spark.
Some are possibly an issue but hard to change without breaking lots of
things - they are really issues with upstream dependencies.
But
There shouldn't be any difference here. In fact, I get the results you list
for 'spark' from Databricks. It's possible the difference is a bug fix
along the way that is in the Spark version you are using locally but not in
the DBR you are using. But, yeah, it seems to work as you say.
If you're
While we're noodling on the topic, what else might be worth removing in
Spark 4?
For example, looks like we're finally hitting problems supporting Java 8
through 21 all at once, related to Scala 2.13.x updates. It would be
reasonable to require Java 11, or even 17, as a baseline for the
Aug 5, 2023 at 5:42 PM Sean Owen wrote:
> I'm still testing other combinations, but it looks like tests fail on Java
> 17 after building with Java 8, which should be a normal supported
> configuration.
> This is described at https://github.com/apache/spark/pull/41943 and looks
> l
I'm still testing other combinations, but it looks like tests fail on Java
17 after building with Java 8, which should be a normal supported
configuration.
This is described at https://github.com/apache/spark/pull/41943 and looks
like it is resolved by moving back to Scala 2.13.8 for now.
Unless
pp4 has one row, I'm guessing - containing an array of 10 images. You want
10 rows of 1 image each.
But, just don't do this. Pass the bytes of the image as an array,
along with width/height/channels, and reshape it on use. It's just easier.
That is how the Spark image representation works anyway
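A plain-Python sketch of the suggested representation: carry the pixel data as a flat sequence plus width/height/channels, and rebuild the nested shape only where it is consumed. The helper names here are made up for illustration.

```python
def flatten_image(pixels):
    """pixels: nested list [height][width][channels] -> (flat, h, w, c)."""
    h, w, c = len(pixels), len(pixels[0]), len(pixels[0][0])
    flat = [v for row in pixels for px in row for v in px]
    return flat, h, w, c

def reshape_image(flat, h, w, c):
    """Rebuild the [h][w][c] nested structure from the flat sequence."""
    it = iter(flat)
    return [[[next(it) for _ in range(c)] for _ in range(w)] for _ in range(h)]

img = [[[1, 2, 3], [4, 5, 6]]]          # a 1x2 image with 3 channels
flat, h, w, c = flatten_image(img)
print(flat)                              # [1, 2, 3, 4, 5, 6]
print(reshape_image(flat, h, w, c) == img)  # True
```

Storing one image per row as (flat bytes, h, w, c) keeps the DataFrame schema simple, which is essentially what Spark's built-in image schema does as well.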
to the ASF Source Header and Copyright Notice Policy[1], code
>>> directly submitted to ASF should include the Apache license header
>>> without any additional copyright notice.
>>>
>>>
>>> Kent Yao
>>>
>>> [1]
>>> https://u
+1 I think that porting the package 'as is' into Spark is probably
worthwhile.
That's relatively easy; the code is already pretty battle-tested and not
that big and even originally came from Spark code, so is more or less
similar already.
One thing it never got was DSv2 support, which means XML
There is no such method in Spark. I think that's some EMR-specific
modification.
On Wed, Jul 26, 2023 at 11:06 PM second_co...@yahoo.com.INVALID
wrote:
> I ran the following code
>
> spark.sparkContext.list_packages()
>
> on spark 3.4.1 and i get below error
>
> An error was encountered:
>
There aren't "LTS" releases, though you might expect the last 3.x release
will see maintenance releases longer. See end of
https://spark.apache.org/versioning-policy.html
On Wed, Jul 26, 2023 at 3:56 AM Manu Zhang wrote:
> Will Apache Spark 3.5 be a LTS version?
>
> Thanks,
> Manu
>
> On Mon,
When contributing to an ASF project, it's governed by the terms of the ASF
ICLA: https://www.apache.org/licenses/icla.pdf or CCLA:
https://www.apache.org/licenses/cla-corporate.pdf
I don't believe ASF projects ever retain an original author copyright
statement, but rather source files have a
No, a pandas on Spark DF is distributed.
On Tue, Jun 20, 2023, 1:45 PM Mich Talebzadeh
wrote:
> Thanks but if you create a Spark DF from Pandas DF that Spark DF is not
> distributed and remains on the driver. I recall a while back we had this
> conversation. I don't think anything has changed.
It is indeed not part of SparkSession. See the link you cite. It is part of
the pyspark pandas API
On Tue, Jun 20, 2023, 5:42 AM John Paul Jayme
wrote:
> Good day,
>
>
>
> I have a task to read excel files in databricks but I cannot seem to
> proceed. I am referencing the API documents -
On Fri, Jun 16, 2023 at 3:58 PM Dongjoon Hyun
wrote:
> I started the thread about already publicly visible version issues
> according to the ASF PMC communication guideline. It's not confidential,
> personal, or security-related stuff. Are you insisting this is confidential?
>
Discussion about a
As we noted in the last thread, this discussion should have been on private@
to begin with, but, the ship has sailed.
You are suggesting that non-PMC members vote on whether the PMC has to do
something? No, that's not how anything works here.
It's certainly the PMC that decides what to put in the
What does a vote on dev@ mean? Did you mean this for the PMC list?
Dongjoon - this offers no rationale about "why". The more relevant thread
begins here:
https://lists.apache.org/thread/k7gr65wt0fwtldc7hp7bd0vkg1k93rrb but it
likewise never got to connecting a specific observation to policy.
You sure it is not just that it's displaying in your local TZ? Check the
actual value as a long for example. That is likely the same time.
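The suggested check can be illustrated with the stdlib datetime module: the same instant renders differently per time zone, but its underlying epoch value is identical (a plain-Python sketch, not Spark code):

```python
from datetime import datetime, timezone, timedelta

# One instant, rendered in UTC and in UTC+05:30.
utc = datetime(2023, 6, 8, 12, 0, tzinfo=timezone.utc)
ist = utc.astimezone(timezone(timedelta(hours=5, minutes=30)))

print(utc.isoformat())  # 2023-06-08T12:00:00+00:00
print(ist.isoformat())  # 2023-06-08T17:30:00+05:30

# The display strings differ, but the epoch value is the same --
# the "check the actual value as a long" test suggested above.
print(utc.timestamp() == ist.timestamp())  # True
```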
On Thu, Jun 8, 2023, 5:50 PM karan alang wrote:
> ref :
>
in Spark 4, just
> thought I'd bring this issue to your attention.
>
> Best Regards, Martin
> --
> *From:* Jungtaek Lim
> *Sent:* Wednesday, June 7, 2023 23:19
> *To:* Sean Owen
> *Cc:* Dongjoon Hyun ; Holden Karau <
> hol...@pigscanfly.ca&
2:42:19 yangjie01 wrote:
>> > +1 on dropping Java 8 in Spark 4.0, and I even hope Spark 4.0 can only
>> support Java 17 and the upcoming Java 21.
>> >
>> > From: Denny Lee
>> > Date: Wednesday, June 7, 2023 07:10
>> > To: Sean Owen
>> > Cc: Dav
Hi Dongjoon, I think this conversation is not advancing anymore. I
personally consider the matter closed unless you can find other support or
respond with more specifics. While this perhaps should be on private@, I
think it's not wrong as an instructive discussion on dev@.
I don't believe you've
(With consent, shall we move this to the PMC list?)
No, I don't think that's what this policy says.
First, could you please be more specific here? why do you think a certain
release is at odds with this?
Because so far you've mentioned, I think, not taking a Scala maintenance
release update.
I haven't followed this discussion closely, but I think we could/should
drop Java 8 in Spark 4.0, which is up next after 3.5?
On Tue, Jun 6, 2023 at 2:44 PM David Li wrote:
> Hello Spark developers,
>
> I'm from the Apache Arrow project. We've discussed Java version support
> [1], and
I think the issue is whether a distribution of Spark is so materially
different from OSS that it causes problems for the larger community of
users. There's a legitimate question of whether such a thing can be called
"Apache Spark + changes", as describing it that way becomes meaningfully
On Mon, Jun 5, 2023 at 12:01 PM Dongjoon Hyun
wrote:
> 1. For the naming, yes, but the company should use different version
> numbers instead of the exact "3.4.0". As I shared the screenshot in my
> previous email, the company exposes "Apache Spark 3.4.0" exactly because
> they build their
1/ Regarding naming - I believe releasing "Apache Foo X.Y + patches" is
acceptable, if it is substantially Apache Foo X.Y. This is common practice
for downstream vendors. It's fair nominative use. The principle here is
consumer confusion. Is anyone substantially misled? Here I don't think so.
I
It does seem risky; there are still likely libs out there that don't cross
compile for 2.13. I would make it the default at 4.0, myself.
On Mon, May 29, 2023 at 7:16 PM Hyukjin Kwon wrote:
> While I support going forward with a higher version, actually using Scala
> 2.13 by default is a big
Per docs, it is Java 8. It's possible Java 11 partly works with 2.x but not
supported. But then again 2.x is not supported either.
On Mon, May 29, 2023, 6:43 AM Poorna Murali wrote:
> We are currently using JDK 11 and spark 2.4.5.1 is working fine with that.
> So, we wanted to check the maximum
Are you looking for
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala
On Thu, May 25, 2023 at 6:54 AM Max
wrote:
> Good day, I'm working on an implementation of Joint Probability Trees
> (JPT) using the Spark framework. For this
nds
>
> the code is at below
> https://gist.github.com/cometta/240bbc549155e22f80f6ba670c9a2e32
>
> Do you have an example of tensorflow+big dataset that I can test?
>
>
>
>
>
>
>
> On Saturday, April 29, 2023 at 08:44:04 PM GMT+8, Sean Owen <
> sro...@gmai
You don't want to use CPUs with Tensorflow.
If it's not scaling, you may have a problem that is far too small to
distribute.
On Sat, Apr 29, 2023 at 7:30 AM second_co...@yahoo.com.INVALID
wrote:
> Anyone successfully run native tensorflow on Spark ? i tested example at
>
We just removed it now, yes.
On Thu, Apr 20, 2023 at 9:08 AM Emil Ejbyfeldt
wrote:
> Hi,
>
> I think this is expected as it was dropped from the release process in
> https://issues.apache.org/jira/browse/SPARK-40651
>
> Also I don't see a Hadoop2.7 option when selecting Spark 3.4.0 on
>
+1 from me
On Sun, Apr 9, 2023 at 7:19 PM Dongjoon Hyun wrote:
> I'll start with my +1.
>
> I verified the checksum, signatures of the artifacts, and documentations.
> Also, ran the tests with YARN and K8s modules.
>
> Dongjoon.
>
> On 2023/04/09 23:46:10 Dongjoon Hyun wrote:
> > Please vote on
+1 from me, same result as last time.
On Fri, Apr 7, 2023 at 6:30 PM Xinrong Meng
wrote:
> Please vote on releasing the following candidate(RC7) as Apache Spark
> version 3.4.0.
>
> The vote is open until 11:59pm Pacific time *April 12th* and passes if a
> majority +1 PMC votes are cast, with a
That won't work, you can't use Spark within Spark like that.
If it were exact matches, the best solution would be to load both datasets
and join on telephone number.
For this case, I think your best bet is a UDF that contains the telephone
numbers as a list and decides whether a given number
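A hypothetical sketch of what such a UDF body could do, in plain Python. The prefix data and function names are invented for illustration; in PySpark this predicate would be wrapped with pyspark.sql.functions.udf and applied per row.

```python
# Hypothetical reference data: known number prefixes to match against.
REFERENCE_PREFIXES = ("1555", "44207")

def matches_reference(number: str) -> bool:
    """True if the digits of `number` start with any known prefix."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return digits.startswith(REFERENCE_PREFIXES)

print(matches_reference("+1 (555) 010-1234"))  # True
print(matches_reference("+49 30 1234567"))     # False
```

Keeping the lookup data inside the function (or broadcasting it) avoids nesting one Spark job inside another, which is the part that won't work.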
+1 same result from me as last time.
On Thu, Mar 30, 2023 at 3:21 AM Xinrong Meng
wrote:
> Please vote on releasing the following candidate(RC5) as Apache Spark
> version 3.4.0.
>
> The vote is open until 11:59pm Pacific time *April 4th* and passes if a
> majority +1 PMC votes are cast, with a
From the docs:
* Note that this is not the "normalized" PageRank and as a consequence
pages that have no
* inlinks will have a PageRank of alpha. In particular, the pageranks may
have some values
* greater than 1.
On Tue, Mar 28, 2023 at 9:11 AM lee wrote:
> When I calculate pagerank using
What do you mean by asynchronously here?
On Sun, Mar 26, 2023, 10:22 AM Emmanouil Kritharakis <
kritharakismano...@gmail.com> wrote:
> Hello again,
>
> Do we have any news for the above question?
> I would really appreciate it.
>
> Thank you,
>
>
It is telling you that the UI can't bind to any port. I presume that's
because of container restrictions?
If you don't want the UI at all, just set spark.ui.enabled to false
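For reference, the setting can go in spark-defaults.conf or be passed as --conf at submit time; a minimal config sketch:

```properties
# spark-defaults.conf (or pass as --conf to spark-submit):
# disable the Spark UI entirely so no port binding is attempted
spark.ui.enabled  false
```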
On Sat, Mar 25, 2023 at 8:28 AM Lorenzo Ferrando <
lorenzo.ferra...@edu.unige.it> wrote:
> Dear Spark team,
>
> I am
Yes, more specifically: you can't request executors in SparkConf like that
once the app starts. You set this when you launch it against a Spark
cluster, in spark-submit or otherwise.
On Tue, Mar 21, 2023 at 4:23 AM Mich Talebzadeh
wrote:
> Hi Emmanouil,
>
> This means that your job is running on
All else equal, it is better to have the same resources in fewer executors.
More tasks are local to other tasks, which helps performance, and there is
more possibility of 'borrowing' extra memory and CPU within a task.
On Thu, Mar 16, 2023, 2:14 PM Nikhil Goyal wrote:
> Hi folks,
> I am trying to understand what
Pickle won't work. But the others should. I think you are specifying an
invalid path in both cases but hard to say without more detail
On Wed, Mar 15, 2023, 9:13 AM Mnisi, Caleb
wrote:
> Good Day
>
>
>
> I am having trouble saving a spark.ml Pipeline model to a pickle file,
> when running
That's incorrect, it's spark.default.parallelism, but as the name suggests,
that is merely a default. You control partitioning directly with
.repartition()
On Tue, Mar 14, 2023 at 11:37 AM Mich Talebzadeh
wrote:
> Check this link
>
>
>
Are you just looking for DataFrame.repartition()?
On Tue, Mar 14, 2023 at 10:57 AM Emmanouil Kritharakis <
kritharakismano...@gmail.com> wrote:
> Hello,
>
> I hope this email finds you well!
>
> I have a simple dataflow in which I read from a kafka topic, perform a map
> transformation and then
You want Antlr 3 and Spark is on 4? No, I don't think Spark would downgrade.
You can shade your app's dependencies maybe.
On Tue, Mar 14, 2023 at 8:21 AM Sahu, Karuna
wrote:
> Hi Team
>
>
>
> We are upgrading a legacy application using Spring boot , Spark and
> Hibernate. While upgrading
not in the AS-IS commit log status because it's screwed already
> as Emil wrote.
> Did you check the branch-3.2 commit log, Sean?
>
> Dongjoon.
>
>
> On Thu, Mar 9, 2023 at 11:42 AM Sean Owen wrote:
>
>> We can just push the tags onto the branches as needed right? No need to
>>
Put the file on HDFS, if you have a Hadoop cluster?
On Thu, Mar 9, 2023 at 3:02 PM sam smith wrote:
> Hello,
>
> I use Yarn client mode to submit my driver program to Hadoop, the dataset
> I load is from the local file system, when i invoke load("file://path")
> Spark complains about the csv
We can just push the tags onto the branches as needed right? No need to
roll a new release
On Thu, Mar 9, 2023, 1:36 PM Dongjoon Hyun wrote:
> Yes, I also confirmed that the v3.4.0-rc3 tag is invalid.
>
> I guess we need RC4.
>
> Dongjoon.
>
> On Thu, Mar 9, 2023 at 7:13 AM Emil Ejbyfeldt
>
I need to install Apple Developer Tools?
> ----- Original message -----
> From: Sean Owen
> To: ckgppl_...@sina.cn
> Cc: user
> Subject: Re: Build SPARK from source with SBT failed
> Date: March 7, 2023, 20:58
>
> This says you don't have the java compiler installed. Did you install the
> Apple
It's hard to evaluate without knowing what you're doing. Generally, using a
built-in function will be fastest. pandas UDFs can be faster than normal
UDFs if you can take advantage of processing multiple rows at once.
On Tue, Mar 7, 2023 at 6:47 AM neha garde wrote:
> Hello All,
>
> I need help
This says you don't have the java compiler installed. Did you install the
Apple Developer Tools package?
On Tue, Mar 7, 2023 at 1:42 AM wrote:
> Hello,
>
> I have tried to build SPARK source codes with SBT in my local dev
> environment (MacOS 13.2.1). But it reported following error:
> [error]
> On Sat, 4 Mar 2023 at 20:13, Sean Owen wrote:
>
>> It's the sam
It's the same batch ID already, no?
Or why not simply put the logic of both in one function? or write one
function that calls both?
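A plain-Python sketch of the "one function that calls both" shape. The sink functions are hypothetical stand-ins; in Structured Streaming this is the shape a foreachBatch handler would take, receiving each micro-batch once and writing it to both destinations.

```python
# Two hypothetical sinks, modeled as lists for illustration.
sink_a, sink_b = [], []

def write_to_a(rows, batch_id):
    sink_a.append((batch_id, list(rows)))

def write_to_b(rows, batch_id):
    sink_b.append((batch_id, list(rows)))

def process_batch(rows, batch_id):
    """Single entry point that applies both writes to the same batch."""
    write_to_a(rows, batch_id)
    write_to_b(rows, batch_id)

process_batch(["r1", "r2"], batch_id=0)
print(sink_a == sink_b)  # True: both sinks saw the same batch once
```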
On Sat, Mar 4, 2023 at 2:07 PM Mich Talebzadeh
wrote:
>
> This is probably pretty straightforward but somehow it does not look
> that way
>
>
>
> On Spark
path get set up differently when running via
> SBT vs. Maven?
>
> On Thu, Mar 2, 2023 at 5:37 PM Sean Owen wrote:
>
>> Thanks, that's good to know. The workaround (deleting the thriftserver
>> target dir) works for me. Who knows?
>>
>> But I'm als
/sbt/issues/6183>.
>
> One thing that I did find to help was to
> delete sql/hive-thriftserver/target between building Spark and running the
> tests. This helps in my builds where the issue only occurs during the
> testing phase and not during the initial build phase, but of cours
Has anyone seen this behavior -- I've never seen it before. The Hive
thriftserver module for me just goes into an infinite loop when running
tests:
...
[INFO] done compiling
[INFO] compiling 22 Scala sources and 24 Java sources to
Right, it contains ALv2 licensed code attributed to two authors - some is
from Guava, some is from Apache Spark contributors.
I thought this is how we should handle this. It's not feasible to go line
by line and say what came from where.
On Wed, Mar 1, 2023 at 1:33 AM Dongjoon Hyun
wrote:
> May
", line 62, in main
>>> distances = joined.withColumn("distance", max(col("start") -
>>> col("position"), col("position") - col("end"), 0))
>>> File
>>> "/mnt/yarn/usercache/hadoop/appcache/application_1677167576690
That error sounds like it's from pandas not spark. Are you sure it's this
line?
On Thu, Feb 23, 2023, 12:57 PM Oliver Ruebenacker <
oliv...@broadinstitute.org> wrote:
>
> Hello,
>
> I'm trying to calculate the distance between a gene (with start and end)
> and a variant (with position),
FWIW I agree with this.
On Wed, Feb 22, 2023 at 2:59 PM Allan Folting wrote:
> Hi all,
>
> I would like to propose that we show Python code examples first in the
> Spark documentation where we have multiple programming language examples.
> An example is on the Quick Start page:
>
wait for next releases more easily.
>
> In addition, I want to add the first RC1 date requirement because RC1
> always did a great job for us.
>
> I guess `branch-cut + 1M (no later than 1month)` could be the reasonable
> deadline.
>
> Thanks,
> Dongjoon.
>
>
> O
I'm fine with shifting to a stricter cadence-based schedule. Sometimes,
it'll mean some significant change misses a release rather than delays it.
If people are OK with that discipline, sure.
A hard 6-month cycle would mean the minor releases are more frequent and
have less change in them. That's
Agree, just, if it's such a tiny change, and it actually fixes the issue,
maybe worth getting that into 3.3.x. I don't feel strongly.
On Mon, Feb 13, 2023 at 11:19 AM L. C. Hsieh wrote:
> If it is not supported in Spark 3.3.x, it looks like an improvement at
> Spark 3.4.
> For such cases we
? When I use the latest
>>> Python 3.11, I can reproduce similar test failures (43 tests of sql module
>>> fail), but when I use python 3.10, they will succeed
>>>
>>>
>>>
>>> YangJie
>>>
>>>
>>>
>>> *From:*
a single partition, which has the
>> same downside as collect, so this is as bad as using collect.
>>
>> Cheers,
>> Enrico
>>
>>
>> Am 12.02.23 um 18:05 schrieb sam smith:
>>
>> @Enrico Minack Thanks for "unpivot" but I am
>
rsion 3.3.0 (you are taking it way too far as usual :) )
> @Sean Owen Pls then show me how it can be improved by
> code.
>
> Also, why such an approach (using withColumn() ) doesn't work:
>
> for (String columnName : df.columns()) {
> df= df.withColumn(columnName,
> df.sele
+1 The tests and all results were the same as ever for me (Java 11, Scala
2.13, Ubuntu 22.04)
I also didn't see that issue ... maybe it's somehow locale-related? That
could still be a bug.
On Sat, Feb 11, 2023 at 8:49 PM L. C. Hsieh wrote:
> Thank you for testing it.
>
> I was going to run it again
>
>
>
>
> On Fri, 10 Feb 2023 at 21:59, sam smith
> wrote:
>
>> I am not sure i understand well " Just need to do the cols one at a
>> time". Plus I think Apostolos is right, this needs a dataframe approach not
>> a list approach.
>>
>>
That gives you all distinct tuples of those col values. You need to select
the distinct values of each col one at a time. Sure just collect() the
result as you do here.
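The difference can be illustrated in plain Python: distinct whole rows (tuples) versus the distinct values of each column taken one at a time. In PySpark the per-column version would be roughly df.select(c).distinct().collect() for each column c.

```python
rows = [("a", 1), ("a", 2), ("b", 1)]

# Whole-row distinct: unique (col1, col2) tuples.
distinct_tuples = sorted(set(rows))

# Per-column distinct: the unique values of each column, one at a time.
per_column = [sorted({r[i] for r in rows}) for i in range(len(rows[0]))]

print(distinct_tuples)  # [('a', 1), ('a', 2), ('b', 1)]
print(per_column)       # [['a', 'b'], [1, 2]]
```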
On Fri, Feb 10, 2023, 3:34 PM sam smith wrote:
> I want to get the distinct values of each column in a List (is it good
>
OpenJDK 64-Bit Server VM Homebrew (build 11.0.17+0, mixed mode)
>
>
> > OS
>
> Ventura 13.1 (22C65)
>
>
> Best,
>
>
> Adam Chhina
>
> On Jan 18, 2023, at 6:50 PM, Sean Owen wrote:
>
> Release _branches_ are tested as commits arrive to the branch, ye
I can help offline. Send me your preferred JIRA user name.
On Thu, Jan 19, 2023 at 7:12 AM Wei Yan wrote:
> When I tried to sign up through this site:
> https://issues.apache.org/jira/secure/Signup!default.jspa
> I got an error message:"Sorry, you can't sign up to this Jira site at the
> moment
java_server
> self.socket.connect((self.java_address, self.java_port))
> ConnectionRefusedError: [Errno 61] Connection refused
>
> ------
> Ran 7 tests in 12.950s
>
> FAILED (errors=7)
> sys:1: ResourceWarning: unclosed f
b spark-321 v3.2.1
>
> with
> git clone --branch branch-3.2 https://github.com/apache/spark.git
> This will give you branch 3.2 as today, what I suppose you call upstream
>
> https://github.com/apache/spark/commits/branch-3.2
> and right now all tests in github action are passed
Never seen those, but it's probably a difference in pandas, numpy versions.
You can see the current CICD test results in GitHub Actions. But, you want
to use release versions, not an RC. 3.2.1 is not the latest version, and
it's possible the tests were actually failing in the RC.
On Wed, Jan 18,
I think you want array_contains:
https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.array_contains.html
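A driver-side, plain-Python sketch of what that filter computes; in PySpark it would be roughly df.filter(array_contains(col("tags"), "spark")). The column names here are invented for illustration.

```python
# Rows whose array column contains a probe value -- the membership
# test that array_contains performs per row.
rows = [
    {"id": 1, "tags": ["spark", "sql"]},
    {"id": 2, "tags": ["flink"]},
]

matches = [r["id"] for r in rows if "spark" in r["tags"]]
print(matches)  # [1]
```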
On Tue, Jan 17, 2023 at 4:18 PM Oliver Ruebenacker <
oliv...@broadinstitute.org> wrote:
>
> Hello,
>
> I have data originally stored as
One is a normal Pyspark DataFrame, the other is a pandas work-alike wrapper
on a Pyspark DataFrame. They're the same thing with different APIs.
Neither has a 'storage format'.
spark-excel might be fine, and it's used with Spark DataFrames. Because it
emulates pandas's read_excel API, the Pyspark
Right, nothing wrong with a for loop here. Seems like just the right thing.
On Fri, Jan 6, 2023, 3:20 PM Joris Billen
wrote:
> Hello Community,
> I am working in pyspark with sparksql and have a very similar very complex
> list of dataframes that Ill have to execute several times for all the
>