Hello to all users, contributors and committers!
The Travel Assistance Committee (TAC) is pleased to announce that
travel assistance applications for Community over Code EU 2024 are now
open!
We will be supporting Community over Code EU in Bratislava, Slovakia,
June 3rd to 5th, 2024.
TAC exists
I have installed PySpark using pip.
I'm getting an error while running the following code:

from pyspark import SparkContext

sc = SparkContext()
a = sc.parallelize([1, 2, 3, 4])
print(f"a_take: {a.take(2)}")
py4j.protocol.Py4JJavaError: An error occurred while calling
Hi all,
Currently the user-facing Catalog API doesn't support backing up or
restoring metadata. Our customers are asking for this functionality. Here
is a usage example:
1. Read all metadata of one Spark cluster.
2. Save it into a Parquet file on DFS.
3. Read the Parquet file and restore all metadata in
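To make the intended flow concrete, here is a minimal sketch. Steps 1 and 2
use the existing public Catalog API; step 3 is the missing piece this
proposal asks for, so it stays a placeholder. The database name and backup
path below are made up for illustration.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("catalog-backup").getOrCreate()

// 1. Read the table metadata of one database (repeat per database to
//    cover a whole cluster).
val meta = spark.catalog.listTables("default")

// 2. Save it into a Parquet file on DFS.
meta.write.mode("overwrite").parquet("/backup/catalog_metadata")

// 3. Read the Parquet file back. Actually restoring from these rows is
//    what the proposed API would add; today it would mean replaying
//    CREATE TABLE statements by hand.
val backup = spark.read.parquet("/backup/catalog_metadata")
backup.show(truncate = false)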
Hello Everyone,
Someone asked this question on JIRA, and since it was a question I asked
him to check Stack Overflow. Personally I don't have an answer to this
question, so if anyone has an idea please feel free to update the
issue. I have marked it resolved for the time being but thought
It sounds like with the slight wording change we're in agreement, so I'll
bounce this off an editor friend to fix my grammar/spelling before I put it
up for a vote.
+1 thanks Holden.
+1
Tom
Thanks Holden, this version looks good to me.
+1
Regards,
Mridul
Sure, that sounds good to me. +1
Hi Holden,
Thanks for leading this discussion; I'm in favor in general. I have one
specific question -- these two sections seem to contradict each other
slightly:
> If there is a -1 from a non-committer, multiple committers or the PMC
> should be consulted before moving forward.
>
> If the
Hi Spark Developers,
There has been a rather active discussion regarding the specific vetoes
that occurred during Spark 3. From that I believe we are now mostly in
agreement that it would be best to clarify our rules around code vetoes &
merging in general. Personally I believe this change is
Hi devs,
Question: how do I convert the Hive output format to the Spark SQL
datasource format?
Spark version: 2.3.0
Scenario: many small files are generated on HDFS (Hive) by Spark SQL
applications when dynamic partitioning is enabled or when
spark.sql.shuffle.partitions > 200 is set. So I am
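One common way to attack the small-files side of this (a sketch only; the
table names and the partition column "dt" are made up): write through the
native datasource path and repartition by the partition column first, so
each dynamic partition gets a few large files instead of one file per
shuffle task.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

val df = spark.table("staging_events")   // hypothetical Hive source table

df.repartition(col("dt"))                // collapse files per partition value
  .write
  .mode("overwrite")
  .format("parquet")                     // datasource format, not Hive SerDe
  .partitionBy("dt")
  .saveAsTable("events_parquet")         // hypothetical target table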
Folks,
I opened a PR a while ago that adds the possibility to merge a custom data
type into a native data type. This is something new because of the
introduction of Delta.
For some background, I have a DataSet with fields of type
XMLGregorianCalendarType. I don't
Hi,
Is it possible for an executor (or slave) to know when a job actually ends?
I'm running Spark on a cluster (with YARN), and my workers create some
temporary files that I would like to clean up once the job ends. Is there a
way for a worker to detect that a job has finished? I tried doing it
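Executors cannot observe job completion directly, but per-task cleanup often
covers this case. A minimal sketch, assuming that cleaning up when each task
finishes is close enough to "job end" for these temp files:

import org.apache.spark.{SparkContext, TaskContext}
import java.io.File

val sc = SparkContext.getOrCreate()

sc.parallelize(1 to 100, 4).foreachPartition { iter =>
  val tmp = File.createTempFile("worker-scratch", ".tmp")
  // Runs on the executor as soon as this task completes.
  TaskContext.get().addTaskCompletionListener { _ =>
    tmp.delete()
  }
  iter.foreach { _ => () /* write intermediate data to tmp here */ }
}

On the driver side, a SparkListener's onJobEnd callback is the usual hook
for job completion, but that fires on the driver, not on the workers.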
Hi All,
Please find below the sample code:

val df = spark.read.parquet()
df.registerTempTable("df")
val zip = df.select("zip_code").distinct().as[String].rdd

def comp(zipcode: String): Unit = {
  val zipval = "SELECT * FROM df WHERE zip_code='$zipvalrepl'"
    .replace("$zipvalrepl", zipcode)
  val data =
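If the goal is one output per zip code, a hedged alternative to looping SQL
queries from the driver is a single distributed write that splits the data
by column value (reusing df from the snippet above; the output path is made
up):

df.write.partitionBy("zip_code").parquet("/tmp/output_by_zip")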
Hi All,
I am running some Spark Scala code in Zeppelin on CDH 5.5.1 (Spark version
1.5.0). I customized the Spark interpreter to use
org.apache.spark.serializer.KryoSerializer as spark.serializer, and in the
dependencies I added Kryo 3.0.3 as follows:
com.esotericsoftware:kryo:3.0.3
When I
I would like to contribute to Spark. I am working on SPARK-15429. Please
give me permission to contribute.
Yes, but with the take() approach we bring the data to the driver, and it
is no longer distributed.
Also, take() only accepts a count as its argument, which means that every
time we would be transferring redundant elements.
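To illustrate the point, a minimal sketch: take() materializes its result
as a plain local collection on the driver, while the RDD itself stays
distributed.

import org.apache.spark.SparkContext

val sc = SparkContext.getOrCreate()
val rdd = sc.parallelize(1 to 1000000)

// The result is a local Array on the driver, not an RDD.
val firstTwo: Array[Int] = rdd.take(2)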
Regards,
Sandeep Giri,
+1 347 781 4573 (US)
+91-953-899-8962 (IN)