[no subject]

2024-02-03 Thread Gavin McDonald
Hello to all users, contributors and Committers! The Travel Assistance Committee (TAC) is pleased to announce that travel assistance applications for Community over Code EU 2024 are now open! We will be supporting Community over Code EU, Bratislava, Slovakia, June 3rd - 5th, 2024. TAC exists

[no subject]

2023-08-07 Thread Bode, Meikel
unsubscribe

[no subject]

2023-06-13 Thread Amanda Liu

[no subject]

2023-03-26 Thread Tanay Banerjee
unsubscribe

[no subject]

2023-03-21 Thread Tanay Banerjee
Unsubscribe

[no subject]

2023-03-06 Thread ansel boero
unsubscribe

subject

2022-11-24 Thread huldar chen
subject

[no subject]

2022-11-02 Thread yogita bhardwaj
I want to unsubscribe.

[no subject]

2022-09-20 Thread yogita bhardwaj
I have installed pyspark using pip. I'm getting an error while running the following code. from pyspark import SparkContext sc=SparkContext() a=sc.parallelize([1,2,3,4]) print(f"a_take:{a.take(2)}") py4j.protocol.Py4JJavaError: An error occurred while calling
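A hypothetical pre-flight check (the function name and message are mine, not from the thread): a `Py4JJavaError` raised at `SparkContext()` construction very often means Py4J could not launch or reach a JVM, so confirming that Java is visible to the shell is a cheap first diagnostic.

```python
import os
import shutil

# Hedged sketch: SparkContext() failing with Py4JJavaError frequently
# traces back to a missing or incompatible JVM. This only checks that a
# Java executable is discoverable; it does not validate the version.
def jvm_visible() -> bool:
    return bool(os.environ.get("JAVA_HOME") or shutil.which("java"))

if not jvm_visible():
    print("No JVM found: install Java and/or set JAVA_HOME before running pyspark")
```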

[no subject]

2021-05-03 Thread Tianchen Zhang
Hi all, Currently the user-facing Catalog API doesn't support backup/restore metadata. Our customers are asking for such functionalities. Here is a usage example: 1. Read all metadata of one Spark cluster 2. Save them into a Parquet file on DFS 3. Read the Parquet file and restore all metadata in

[no subject]

2021-03-26 Thread Domingo Mihovilovic

[no subject]

2021-03-10 Thread rahul c
Unsubscribe

[no subject]

2021-03-09 Thread Anton Solod
Unsubscribe

[no subject]

2021-01-20 Thread iriv kang
Unsubscribe

[no subject]

2021-01-09 Thread Christos Ziakas
Unsubscribe

[no subject]

2021-01-08 Thread Bhavya Jain
Unsubscribe

[no subject]

2021-01-08 Thread Chris Brown
Unsubscribe

[no subject]

2021-01-08 Thread Christos Ziakas
Unsubscribe

[no subject]

2021-01-07 Thread iriv kang
Unsubscribe

[no subject]

2021-01-07 Thread rahul c
Unsubscribe

[no subject]

2020-12-08 Thread Владимир Курятков
unsubscribe

[no subject]

2020-12-08 Thread rahul c
Unsubscribe

[no subject]

2020-12-02 Thread rahul c
Unsubscribe

[no subject]

2020-08-04 Thread Rohit Mishra
Hello Everyone, Someone asked this question on JIRA, and since it was a question I requested him to check Stack Overflow. Personally I don't have an answer to this question, so in case anyone has an idea please feel free to update the issue. I have marked it resolved for the time being but thought

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-25 Thread Holden Karau
It sounds like with the slight wording change we’re in agreement so I’ll bounce this by an editor friend to fix my grammar/spelling before I put it up for a vote. On Sat, Jul 25, 2020 at 9:23 PM Hyukjin Kwon wrote: > +1 thanks Holden. > > On Fri, 24 Jul 2020, 22:34 Tom Graves, > wrote: > >> +1

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-25 Thread Hyukjin Kwon
+1 thanks Holden. On Fri, 24 Jul 2020, 22:34 Tom Graves, wrote: > +1 > > Tom > > On Tuesday, July 21, 2020, 03:35:18 PM CDT, Holden Karau < > hol...@pigscanfly.ca> wrote: > > > Hi Spark Developers, > > There has been a rather active discussion regarding the specific vetoes > that occurred during

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-24 Thread Tom Graves
+1 Tom On Tuesday, July 21, 2020, 03:35:18 PM CDT, Holden Karau wrote: Hi Spark Developers, There has been a rather active discussion regarding the specific vetoes that occurred during Spark 3. From that I believe we are now mostly in agreement that it would be best to clarify our

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-23 Thread Mridul Muralidharan
Thanks Holden, this version looks good to me. +1 Regards, Mridul On Thu, Jul 23, 2020 at 3:56 PM Imran Rashid wrote: > Sure, that sounds good to me. +1 > > On Wed, Jul 22, 2020 at 1:50 PM Holden Karau wrote: > >> >> >> On Wed, Jul 22, 2020 at 7:39 AM Imran Rashid < iras...@apache.org > >>

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-23 Thread Imran Rashid
Sure, that sounds good to me. +1 On Wed, Jul 22, 2020 at 1:50 PM Holden Karau wrote: > > > On Wed, Jul 22, 2020 at 7:39 AM Imran Rashid < iras...@apache.org > wrote: > >> Hi Holden, >> >> thanks for leading this discussion, I'm in favor in general. I have one >> specific question -- these two

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-22 Thread Holden Karau
On Wed, Jul 22, 2020 at 7:39 AM Imran Rashid < iras...@apache.org > wrote: > Hi Holden, > > thanks for leading this discussion, I'm in favor in general. I have one > specific question -- these two sections seem to contradict each other > slightly: > > > If there is a -1 from a non-committer,

Re: [DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-22 Thread Imran Rashid
Hi Holden, thanks for leading this discussion, I'm in favor in general. I have one specific question -- these two sections seem to contradict each other slightly: > If there is a -1 from a non-committer, multiple committers or the PMC should be consulted before moving forward. > >If the

[DISCUSS] Amend the committer guidelines on the subject of -1s & how we expect PR discussion to be treated.

2020-07-21 Thread Holden Karau
Hi Spark Developers, There has been a rather active discussion regarding the specific vetoes that occurred during Spark 3. From that I believe we are now mostly in agreement that it would be best to clarify our rules around code vetoes & merging in general. Personally I believe this change is

[no subject]

2020-07-02 Thread vtygoss
Hi devs, question: how to convert Hive output format to Spark SQL datasource format? Spark version: 2.3.0. Scene: there are many small files on HDFS (Hive) generated by Spark SQL applications when dynamic partitioning is enabled or spark.sql.shuffle.partitions > 200. So I am
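One generic mitigation for the small-files problem described above (a sketch under my own assumptions — `target`, `source`, and `dt` are placeholder names and the partition count is illustrative, not taken from the thread) is to cluster rows by the dynamic-partition column before the write, so each partition is produced by only a few tasks:

```
-- spark-sql, Spark 2.3-era syntax (hypothetical table/column names)
SET spark.sql.shuffle.partitions=32;
INSERT OVERWRITE TABLE target PARTITION (dt)
SELECT * FROM source DISTRIBUTE BY dt;
```

DISTRIBUTE BY forces a shuffle keyed on the partition column, which trades one extra exchange for far fewer output files per dynamic partition.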

[no subject]

2020-02-02 Thread Stepan Tuchin
Unsubscribe -- Stepan Tuchin, Automation Quality Engineer Grid Dynamics Vavilova, 38/114, Saratov Dir: +7 (902) 047-55-55

[no subject]

2020-01-14 Thread @Sanjiv Singh
Regards Sanjiv Singh Mob : +1 571-599-5236

[no subject]

2019-12-20 Thread Driesprong, Fokko
Folks, I opened a PR a while ago to add the possibility of merging a custom data type into a native data type. This is something new because of the introduction of Delta. For some background, I have a DataSet that has fields of the type XMLGregorianCalendarType. I don't

[no subject]

2019-04-02 Thread Uzi Hadad
unsubscribe

[no subject]

2019-04-02 Thread Daniel Sierra
unsubscribe

[no subject]

2019-03-06 Thread Dongxu Wang

[no subject]

2019-01-03 Thread marco rocchi
Unsubscribe me, please. Thank you so much

[no subject]

2018-06-23 Thread Anbazhagan Muthuramalingam
Unsubscribe Regards M Anbazhagan IT Analyst

[no subject]

2017-07-28 Thread Hao Chen
-- Hao

[no subject]

2017-01-19 Thread Keith Chapman
Hi, is it possible for an executor (or slave) to know when an actual job ends? I'm running Spark on a cluster (with YARN) and my workers create some temporary files that I would like to clean up once the job ends. Is there a way for the worker to detect that a job has finished? I tried doing it

[no subject]

2016-12-20 Thread satyajit vegesna
Hi All, PFB sample code , val df = spark.read.parquet() df.registerTempTable("df") val zip = df.select("zip_code").distinct().as[String].rdd def comp(zipcode:String):Unit={ val zipval = "SELECT * FROM df WHERE zip_code='$zipvalrepl'".replace("$zipvalrepl", zipcode) val data =
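A small, hedged sketch of the string-templating part of the snippet above (pure Python, no Spark; the function name is mine): `str.format` avoids the sentinel-token-plus-`replace` pattern, though any string interpolated into SQL should be validated first.

```python
# Hypothetical sketch (not from the thread): build the per-zip-code SQL
# string directly instead of replacing a "$zipvalrepl" sentinel token.
def zip_query(zipcode: str) -> str:
    # Naive interpolation into SQL is injection-prone, so reject anything
    # that is not purely numeric before formatting it into the query.
    if not zipcode.isdigit():
        raise ValueError(f"unexpected zip code: {zipcode!r}")
    return "SELECT * FROM df WHERE zip_code='{}'".format(zipcode)

print(zip_query("94105"))  # SELECT * FROM df WHERE zip_code='94105'
```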

[no subject]

2016-11-24 Thread Rostyslav Sotnychenko

[no subject]

2016-10-10 Thread Fei Hu
Hi All, I am running some Spark Scala code on Zeppelin on CDH 5.5.1 (Spark version 1.5.0). I customized the Spark interpreter to use org.apache.spark.serializer.KryoSerializer as spark.serializer. And in the dependencies I added Kryo-3.0.3 as follows: com.esotericsoftware:kryo:3.0.3 When I
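For reference, the Kryo setting itself is usually a single line of standard Spark configuration (a sketch; in Zeppelin it goes in the Spark interpreter's properties rather than a file, and the second key is optional, shown only as an example):

```
# spark-defaults.conf (or the Zeppelin Spark interpreter properties)
spark.serializer                 org.apache.spark.serializer.KryoSerializer
# Optional: require explicit class registration to catch unregistered types
spark.kryo.registrationRequired  false
```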

[no subject]

2016-07-26 Thread thibaut
unsubscribe - To unsubscribe e-mail: dev-unsubscr...@spark.apache.org

[no subject]

2016-05-22 Thread ????
I would like to contribute to Spark. I am working on SPARK-15429. Please give me permission to contribute.

[no subject]

2015-12-01 Thread Alexander Pivovarov

[no subject]

2015-11-26 Thread Dmitry Tolpeko

[no subject]

2015-08-05 Thread Sandeep Giri
Yes, but in the take() approach we will be bringing the data to the driver, and it is no longer distributed. Also, take() only accepts a count as its argument, which means that every time we would be transferring redundant elements. Regards, Sandeep Giri, +1 347 781 4573 (US) +91-953-899-8962 (IN)