Mailing lists matching spark.apache.org

commits@spark.apache.org
dev@spark.apache.org
issues@spark.apache.org
reviews@spark.apache.org
user@spark.apache.org


Unsubscribe

2024-01-02 Thread Atlas - Samir Souidi
Unsubscribe

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: [PR] Miland db/miland legacy error class [spark]

2024-03-07 Thread via GitHub


HyukjinKwon commented on PR #45423:
URL: https://github.com/apache/spark/pull/45423#issuecomment-1984835207

   See also https://spark.apache.org/contributing.html


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



Unsubscribe

2024-04-08 Thread bruce COTTMAN



-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Invalid link for Spark 1.0.0 in Official Web Site

2014-07-04 Thread Kousuke Saruta
Hi,

I found an invalid link on http://spark.apache.org/downloads.html .
The release notes link for Spark 1.0.0 points to
http://spark.apache.org/releases/spark-release-1.0.0.html but this link is
broken.
I think it is a mistake for
http://spark.apache.org/releases/spark-release-1-0-0.html .

Thanks,
Kousuke




Re: compiling spark source code

2014-09-12 Thread qihong
Follow the instructions here:
http://spark.apache.org/docs/latest/building-with-maven.html



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/compiling-spark-source-code-tp13980p14144.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.




Re:

2014-10-22 Thread Ted Yu
See first section of http://spark.apache.org/community

On Wed, Oct 22, 2014 at 7:42 AM, Margusja mar...@roo.ee wrote:

 unsubscribe





Re: unsubscribe

2014-11-03 Thread Akhil Das
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org

Thanks
Best Regards

On Mon, Nov 3, 2014 at 5:53 PM, Karthikeyan Arcot Kuppusamy 
karthikeyan...@zanec.com wrote:

 hi





Re: unsubscribe

2014-11-18 Thread Corey Nolet
Abdul,

Please send an email to user-unsubscr...@spark.apache.org

On Tue, Nov 18, 2014 at 2:05 PM, Abdul Hakeem alhak...@gmail.com wrote:









Re: Spark SQL with a sorted file

2014-12-22 Thread Jerry Raj

Michael,
Thanks. Is this still turned off in the released 1.2? Is it possible to 
turn it on just to get an idea of how much of a difference it makes?


-Jerry

On 05/12/14 12:40 am, Michael Armbrust wrote:

I'll add that some of our data formats will actually infer this sort of
useful information automatically.  Both Parquet and cached in-memory
tables keep statistics on the min/max value for each column.  When you
have predicates over these sorted columns, partitions will be eliminated
if they can't possibly match the predicate given the statistics.

For Parquet this is new in Spark 1.2 and it is turned off by default
(due to bugs we are working with the Parquet library team to fix).
Hopefully soon it will be on by default.
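The min/max pruning described above can be sketched in a few lines of plain Python (a toy model, not Spark's implementation; the column name `x` and the partition layout are invented for illustration):

```python
# Toy model of statistics-based partition elimination (not Spark code).
# Each partition carries (min, max) statistics for a hypothetical column "x".
partitions = [
    {"stats": (0, 9),   "rows": [1, 5, 9]},
    {"stats": (10, 19), "rows": [10, 14, 19]},
    {"stats": (20, 29), "rows": [21, 25]},
]

def scan_where_x_greater_than(threshold):
    """Read only partitions whose max value could satisfy x > threshold."""
    matched = []
    for part in partitions:
        _, hi = part["stats"]
        if hi <= threshold:
            continue  # no row here can match: the partition is never read
        matched.extend(r for r in part["rows"] if r > threshold)
    return matched

print(scan_where_x_greater_than(15))  # the first partition is skipped entirely
```

In a real Parquet scan the statistics come from the file metadata; the sketch only shows why a predicate over a sorted column lets whole partitions be skipped.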

On Wed, Dec 3, 2014 at 8:44 PM, Cheng, Hao hao.ch...@intel.com wrote:

You can try to write your own Relation with filter push down or use
the ParquetRelation2 for workaround.

(https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala)

Cheng Hao

-Original Message-
From: Jerry Raj [mailto:jerry@gmail.com]
Sent: Thursday, December 4, 2014 11:34 AM
To: user@spark.apache.org
Subject: Spark SQL with a sorted file

Hi,
If I create a SchemaRDD from a file that I know is sorted on a
certain field, is it possible to somehow pass that information on to
Spark SQL so that SQL queries referencing that field are optimized?

Thanks
-Jerry








Berlin Apache Spark Meetup

2015-02-17 Thread Ralph Bergmann | the4thFloor.eu
Hi,


there is a small Spark Meetup group in Berlin, Germany :-)
http://www.meetup.com/Berlin-Apache-Spark-Meetup/

Please add this group to the Meetups list at
https://spark.apache.org/community.html


Ralph




textFile() ordering and header rows

2015-02-22 Thread Michael Malak
Since RDDs are generally unordered, isn't something like textFile().first() not
guaranteed to return the first row of the file (for example, when looking for a
header row)? If so, doesn't that make the example at
http://spark.apache.org/docs/1.2.1/quick-start.html#basics misleading?
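One common workaround is to drop the header from the first partition with `mapPartitionsWithIndex`; below is a plain-Python model of that pattern (the sample data and the `drop_header` helper are invented for illustration; in Spark the call would look like `rdd.mapPartitionsWithIndex(drop_header)`):

```python
# Plain-Python model (not Spark code) of dropping a header row from
# partition 0 only, mapPartitionsWithIndex-style.
partitions = [
    ["name,age", "alice,30", "bob,25"],  # partition 0 begins with the header
    ["carol,41", "dave,19"],
]

def drop_header(index, iterator):
    it = iter(iterator)
    if index == 0:
        next(it, None)  # skip the header line in the first partition
    return it

rows = [row for i, part in enumerate(partitions) for row in drop_header(i, part)]
print(rows)
```

Note that this still assumes partition 0 holds the physical start of the file, which is exactly the guarantee being questioned here.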




Re: Mllib kmeans #iteration

2015-04-03 Thread amoners
Have you referred to the official k-means documentation at
https://spark.apache.org/docs/1.1.1/mllib-clustering.html ?




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Mllib-kmeans-iteration-tp22353p22365.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.




saveasorcfile on partitioned orc

2015-05-20 Thread patcharee

Hi,

I followed the information at
https://www.mail-archive.com/reviews@spark.apache.org/msg141113.html to
save an ORC file with Spark 1.2.1.


I can save data to a new ORC file. I wonder how to save data to an
existing, partitioned ORC file? Any suggestions?


BR,
Patcharee




Spark History Server pointing to S3

2015-06-16 Thread Gianluca Privitera
The Spark website states in the View After the Fact section
(https://spark.apache.org/docs/latest/monitoring.html) that you can point the
start-history-server.sh script at a log directory in order to view the Web UI
using the logs as the data source.

Is it possible to point that script at S3? Maybe from an EC2 instance?

Thanks,

Gianluca



Re: subscribe

2015-08-22 Thread Brandon White
https://www.youtube.com/watch?v=umDr0mPuyQc

On Sat, Aug 22, 2015 at 8:01 AM, Ted Yu yuzhih...@gmail.com wrote:

 See http://spark.apache.org/community.html

 Cheers

 On Sat, Aug 22, 2015 at 2:51 AM, Lars Hermes 
 li...@hermes-it-consulting.de wrote:

 subscribe






Re: subscribe

2015-08-22 Thread Ted Yu
See http://spark.apache.org/community.html

Cheers

On Sat, Aug 22, 2015 at 2:51 AM, Lars Hermes li...@hermes-it-consulting.de
wrote:

 subscribe





Python Kafka support?

2015-11-10 Thread Darren Govoni

Hi,
I read on this page
http://spark.apache.org/docs/latest/streaming-kafka-integration.html
about Python support for "receiverless" Kafka integration (Approach 2),
but it says it's incomplete as of version 1.4.


Has this been updated in version 1.5.1?

Darren




Re: JMX with Spark

2015-11-05 Thread Romi Kuntsman
Have you read this?
https://spark.apache.org/docs/latest/monitoring.html

*Romi Kuntsman*, *Big Data Engineer*
http://www.totango.com

On Thu, Nov 5, 2015 at 2:08 PM, Yogesh Vyas <informy...@gmail.com> wrote:

> Hi,
> How we can use JMX and JConsole to monitor our Spark applications?
>
>


Re: unsubscribe

2015-09-23 Thread Richard Hillegas

Hi Ntale,

To unsubscribe from the user list, please send a message to
user-unsubscr...@spark.apache.org as described here:
http://spark.apache.org/community.html#mailing-lists.

Thanks,
-Rick

Ntale Lukama <ntaleluk...@gmail.com> wrote on 09/23/2015 04:34:48 AM:

> From: Ntale Lukama <ntaleluk...@gmail.com>
> To: user <user@spark.apache.org>
> Date: 09/23/2015 04:35 AM
> Subject: unsubscribe

Re: I want to unsubscribe

2016-04-05 Thread Jakob Odersky
to unsubscribe, send an email to user-unsubscr...@spark.apache.org

On Tue, Apr 5, 2016 at 4:50 PM, Ranjana Rajendran
<ranjana.rajend...@gmail.com> wrote:
> I get to see the threads in the public mailing list. I don't want so many
> messages in my inbox. I want to unsubscribe.




Spark ML Interaction

2016-03-08 Thread amarouni
Hi,

Did anyone here manage to write an example of the following ML feature
transformer
http://spark.apache.org/docs/latest/api/java/org/apache/spark/ml/feature/Interaction.html
?
It's not documented on the official Spark ML features pages but it can
be found in the package API javadocs.

Thanks,




[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-04 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69509768
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite 
with ExpressionEvalHelper {
 checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 
0)
 checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 
0)
   }
+
+  test("ParseUrl") {
+def checkParseUrl(expected: String, urlStr: String, partToExtract: 
String): Unit = {
+  checkEvaluation(
+ParseUrl(Seq(Literal.create(urlStr, StringType),
+  Literal.create(partToExtract, StringType))), expected)
+}
+def checkParseUrlWithKey(
+expected: String, urlStr: String,
+partToExtract: String, key: String): Unit = {
+  checkEvaluation(
+ParseUrl(Seq(Literal.create(urlStr, StringType), 
Literal.create(partToExtract, StringType),
+  Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
--- End diff --

what will happen if there is no userinfo in the url?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-05 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69532511
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite 
with ExpressionEvalHelper {
 checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 
0)
 checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 
0)
   }
+
+  test("ParseUrl") {
+def checkParseUrl(expected: String, urlStr: String, partToExtract: 
String): Unit = {
+  checkEvaluation(
+ParseUrl(Seq(Literal.create(urlStr, StringType),
+  Literal.create(partToExtract, StringType))), expected)
+}
+def checkParseUrlWithKey(
+expected: String, urlStr: String,
+partToExtract: String, key: String): Unit = {
+  checkEvaluation(
+ParseUrl(Seq(Literal.create(urlStr, StringType), 
Literal.create(partToExtract, StringType),
+  Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
--- End diff --

Then the result is `null`. I'll add a test case for this.
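For intuition only, Python's standard `urllib.parse` shows the same behavior when a URL has no userinfo component (a rough analogy, not the implementation under review; the `userinfo` helper and the example addresses are invented):

```python
from urllib.parse import urlsplit

def userinfo(url):
    """Return the userinfo component of a URL, or None if it is absent."""
    netloc = urlsplit(url).netloc
    # Everything before the last '@' in the authority is the userinfo part.
    return netloc.rsplit("@", 1)[0] if "@" in netloc else None

print(userinfo("http://someone@spark.apache.org/path?query=1"))  # someone
print(userinfo("http://spark.apache.org/path?query=1"))          # None
```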






[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69266296
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,43 @@ class StringExpressionsSuite extends SparkFunSuite 
with ExpressionEvalHelper {
 checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 
0)
 checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 
0)
   }
+
+  test("ParseUrl") {
+def checkParseUrl(expected: String, urlStr: String, partToExtract: 
String): Unit = {
+  checkEvaluation(
+ParseUrl(Literal.create(urlStr, StringType), 
Literal.create(partToExtract, StringType)),
+expected)
+}
+def checkParseUrlWithKey(expected: String, urlStr: String,
+  partToExtract: String, key: String): Unit = {
+  checkEvaluation(
+ParseUrl(Literal.create(urlStr, StringType), 
Literal.create(partToExtract, StringType),
+ Literal.create(key, StringType)), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("jian", "http://j...@spark.apache.org/path?query=1", "USERINFO")
--- End diff --

If you don't mind, could you replace `jian` with `userinfo`? :)






[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69276506
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,43 @@ class StringExpressionsSuite extends SparkFunSuite 
with ExpressionEvalHelper {
 checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 
0)
 checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 
0)
   }
+
+  test("ParseUrl") {
+def checkParseUrl(expected: String, urlStr: String, partToExtract: 
String): Unit = {
+  checkEvaluation(
+ParseUrl(Literal.create(urlStr, StringType), 
Literal.create(partToExtract, StringType)),
+expected)
+}
+def checkParseUrlWithKey(expected: String, urlStr: String,
+  partToExtract: String, key: String): Unit = {
+  checkEvaluation(
+ParseUrl(Literal.create(urlStr, StringType), 
Literal.create(partToExtract, StringType),
+ Literal.create(key, StringType)), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("jian", "http://j...@spark.apache.org/path?query=1", "USERINFO")
--- End diff --

OK






Re: Already subscribed to user@spark.apache.org

2016-11-06 Thread Maitray Thaker
On Mon, Nov 7, 2016 at 1:26 PM, <user-h...@spark.apache.org> wrote:

> Hi! This is the ezmlm program. I'm managing the
> user@spark.apache.org mailing list.
>
> Acknowledgment: The address
>
>maitraytha...@gmail.com
>
> was already on the user mailing list when I received
> your request, and remains a subscriber.
>
>
> --- Administrative commands for the user list ---
>
> I can handle administrative requests automatically. Please
> do not send them to the list address! Instead, send
> your message to the correct command address:
>
> To subscribe to the list, send a message to:
><user-subscr...@spark.apache.org>
>
> To remove your address from the list, send a message to:
><user-unsubscr...@spark.apache.org>
>
> Send mail to the following for info and FAQ for this list:
><user-i...@spark.apache.org>
><user-...@spark.apache.org>
>
> Similar addresses exist for the digest list:
><user-digest-subscr...@spark.apache.org>
><user-digest-unsubscr...@spark.apache.org>
>
> To get messages 123 through 145 (a maximum of 100 per request), mail:
><user-get.123_...@spark.apache.org>
>
> To get an index with subject and author for messages 123-456 , mail:
><user-index.123_...@spark.apache.org>
>
> They are always returned as sets of 100, max 2000 per request,
> so you'll actually get 100-499.
>
> To receive all messages with the same subject as message 12345,
> send a short message to:
><user-thread.12...@spark.apache.org>
>
> The messages should contain one line or word of text to avoid being
> treated as sp@m, but I will ignore their content.
> Only the ADDRESS you send to is important.
>
> You can start a subscription for an alternate address,
> for example "john@host.domain", just add a hyphen and your
> address (with '=' instead of '@') after the command word:
> 

Re: Unsubscribe

2016-12-08 Thread Nicholas Chammas
To unsubscribe e-mail: user-unsubscr...@spark.apache.org

This is explained here: http://spark.apache.org/community.html#mailing-lists

On Thu, Dec 8, 2016 at 12:54 AM Roger Holenweger <ro...@lotadata.com> wrote:

>
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Unsubscribe

2016-12-08 Thread Nicholas Chammas
To unsubscribe e-mail: user-unsubscr...@spark.apache.org

This is explained here: http://spark.apache.org/community.html#mailing-lists

On Thu, Dec 8, 2016 at 12:12 AM Ajit Jaokar <ajit.jao...@futuretext.com>
wrote:

>
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: unsubscribe

2016-12-27 Thread Minikek
Once you are in, there is no way out… :-)

> On Dec 27, 2016, at 7:37 PM, Kyle Kelley <rgb...@gmail.com> wrote:
> 
> You are now in position 238 for unsubscription. If you wish for your
> subscription to occur immediately, please email
> dev-unsubscr...@spark.apache.org
> 
> Best wishes.
> 
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
> 







[GitHub] spark issue #19263: Optionally add block updates to log

2017-09-18 Thread jerryshao
Github user jerryshao commented on the issue:

https://github.com/apache/spark/pull/19263
  
@michaelmior would you please follow the instruction 
(https://spark.apache.org/contributing.html) to update PR title and create a 
corresponding JIRA, thanks!





[GitHub] spark issue #19268: Incorrect Metric reported in MetricsReporter.scala

2017-09-18 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/19268
  
There's no way to make the change without a PR, so no, leave it.
http://spark.apache.org/contributing.html





[GitHub] spark issue #19283: Update quickstart python dataset example

2017-09-19 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/19283
  
Have a look at http://spark.apache.org/contributing.html -- maybe prefix 
the title with `[MINOR][DOCS]` for completeness?

Are there other instances of this same issue in the Pyspark docs?





[GitHub] spark issue #19489: The declared package "org.apache.hive.service.cli.thrift...

2017-10-13 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/19489
  
This should be closed no matter what. Please start at the web site. 
http://spark.apache.org/community.html





[GitHub] spark issue #19145: add logic to test whether the complete container has bee...

2017-09-06 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/19145
  
Could you fix the title to be a form, `[SPARK-][COMPONENT] Title`, as 
described in http://spark.apache.org/contributing.html?





[GitHub] spark issue #19126: [SPARK-21915][ML][PySpark]Model 1 and Model 2 ParamMaps ...

2017-09-06 Thread marktab
Github user marktab commented on the issue:

https://github.com/apache/spark/pull/19126
  
@srowen since I am new to this review process, should I be seeing the 
change at http://spark.apache.org/docs/latest/ml-pipeline.html ?





[GitHub] spark issue #19347: Branch 2.2 sparkmlib's output of many algorithms is not ...

2017-09-25 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/19347
  
@ithjz, If you'd like to ask a question, please ask this to the mailing 
list (see https://spark.apache.org/community.html). 

Could you close this please?





[GitHub] spark issue #19321: [SPARK-22100] [SQL] Make percentile_approx support numer...

2017-09-23 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/19321
  
Could you document the change in the output type of `percentile_approx ` in 
the following section?


https://spark.apache.org/docs/latest/sql-programming-guide.html#migration-guide





[GitHub] spark issue #19335: mapPartitions Api

2017-09-24 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/19335
  
@listenLearning, If you'd like to ask a question, please ask this to the 
mailing list (see https://spark.apache.org/community.html). 





[GitHub] spark issue #18833: [SPARK-21625][SQL] sqrt(negative number) should be null.

2017-10-23 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/18833
  
Can we document this difference in 
https://spark.apache.org/docs/latest/sql-programming-guide.html#compatibility-with-apache-hive?





[GitHub] spark issue #20027: Branch 2.2

2017-12-20 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/20027
  
Hey @Maple-Wang, could you close this and file an issue via JIRA please 
(see http://spark.apache.org/contributing.html)?





[GitHub] spark issue #19515: [SPARK-22287][MESOS] SPARK_DAEMON_MEMORY not honored by ...

2017-11-09 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/19515
  
@pmackles perhaps you could email this to d...@spark.apache.org to get some 
visibility to this and hopefully someone else on the mesos side can review?






[GitHub] spark issue #895: [SPARK-1940] Enabling rolling of executor logs, and automa...

2017-12-04 Thread wbowditch
Github user wbowditch commented on the issue:

https://github.com/apache/spark/pull/895
  
Can these configuration additions be added to Spark Documentation 
(https://spark.apache.org/docs/latest/configuration.html) ?





[GitHub] spark issue #21264: Branch 2.2

2018-05-07 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21264
  
@yotingting, mind closing this and opening an issue in JIRA, or asking on the 
mailing list, please? I think you can get a better answer there. Please check 
out https://spark.apache.org/community.html too.





[GitHub] spark issue #21162: shaded guava is not used anywhere, seems guava is not sh...

2018-05-13 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/21162
  
CC @vanzin but it's more complex than that as far as I know. It is still 
shaded. You need to read https://spark.apache.org/contributing.html





[GitHub] spark issue #21419: Branch 2.2

2018-05-24 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21419
  
@gentlewangyu, please close this and read 
https://spark.apache.org/contributing.html. Questions should go to the mailing 
list and issues should be filed in JIRA.





[GitHub] spark issue #21496: docs: fix typo

2018-06-12 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/21496
  
Because it's obviously a test problem elsewhere, I'll merge this.
@tomsaleeba please see https://spark.apache.org/contributing.html for the 
future





[GitHub] spark issue #21092: [SPARK-23984][K8S] Initial Python Bindings for PySpark o...

2018-06-14 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21092
  
@lucashu1 please send your question to stackoverflow or 
u...@spark.apache.org!





[GitHub] spark issue #21438: Improve SQLAppStatusListener.aggregateMetrics() too show

2018-05-26 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21438
  
please see http://spark.apache.org/contributing.html on "Pull Request"

also fix the PR title to start with `[SPARK-24398]`





[GitHub] spark issue #21597: [SPARK-24603] Fix findTightestCommonType reference in co...

2018-06-20 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21597
  
@Fokko, thanks for bearing with it. (see also 
https://spark.apache.org/contributing.html).





[GitHub] spark issue #21669: [SPARK-23257][K8S][WIP] Kerberos Support for Spark on K8...

2018-06-29 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21669
  
btw, have you sent out this + doc to d...@spark.apache.org?





[GitHub] spark issue #21695: Maintining an order

2018-07-02 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/21695
  
As noted above, please see http://spark.apache.org/contributing.html





[GitHub] spark issue #21207: SPARK-24136: Fix MemoryStreamDataReader.next to skip sle...

2018-05-03 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21207
  
@arunmahadevan, not a big deal but mind if I ask to fix the PR title to 
`[SPARK-24136][SS] blabla`? It's actually encouraged in the guide - 
https://spark.apache.org/contributing.html 


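The title convention being requested can be checked mechanically. A minimal sketch (the regex and `title_ok` helper are hypothetical, not part of Spark's tooling; the pattern follows the contributing guide's `[SPARK-XXXX][COMPONENT] Description` convention):

```python
import re

# Title convention from the contributing guide:
# "[SPARK-<jira-number>][<COMPONENT>] Description",
# e.g. "[SPARK-24136][SS] Fix MemoryStreamDataReader.next to skip sleeping".
TITLE_RE = re.compile(r"^\[SPARK-\d+\](\[[A-Z0-9]+\])*\s+\S")

def title_ok(title: str) -> bool:
    """Return True if a PR title follows the [SPARK-XXXX][COMPONENT] pattern."""
    return bool(TITLE_RE.match(title))

title_ok("[SPARK-24136][SS] Fix MemoryStreamDataReader.next to skip sleeping")  # True
title_ok("SPARK-24136: Fix MemoryStreamDataReader.next to skip sleeping")       # False
```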



[GitHub] spark issue #20188: [SPARK-22993][ML] Clarify HasCheckpointInterval param do...

2018-01-09 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/20188
  
Actually, in R the setCheckpointDir method is not attached to the SparkContext; 
I’d leave it as “not set” or “not set in the session”

https://spark.apache.org/docs/latest/api/R/setCheckpointDir.html







[GitHub] spark issue #20023: [SPARK-22036][SQL] Decimal multiplication with high prec...

2018-01-16 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/20023
  
Since this introduces a behavior change, please update the [migration guide 
of Spark 
SQL](https://spark.apache.org/docs/latest/sql-programming-guide.html#migration-guide)





[GitHub] spark issue #20212: Update rdd-programming-guide.md

2018-01-12 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/20212
  
OK consider that and http://spark.apache.org/contributing.html for the 
future. I'll just merge this.





[GitHub] spark issue #20372: Improved block merging logic for partitions

2018-01-26 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/20372
  
please see https://spark.apache.org/contributing.html
open a JIRA and update this PR?





[GitHub] spark issue #19431: [SPARK-18580] [DStreams] [external/kafka-0-10][external/...

2018-02-07 Thread akonopko
Github user akonopko commented on the issue:

https://github.com/apache/spark/pull/19431
  
@gaborgsomogyi 
`spark.streaming.backpressure.initialRate` is already documented here: 
https://spark.apache.org/docs/latest/configuration.html
but was mistakenly not applied to direct Kafka streams.


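To make the relationship between the two rate limits concrete, here is a rough model of the intent only. The config keys are the real Spark ones, but `first_batch_limit` is a hypothetical helper, not code from the Kafka direct stream:

```python
# The config keys are real Spark settings; the helper below is illustrative.
conf = {
    "spark.streaming.backpressure.enabled": "true",
    # Bound on the receiving rate for the *first* batch, before the
    # backpressure estimator has any processing-time feedback:
    "spark.streaming.backpressure.initialRate": "1000",   # records/sec, global
    "spark.streaming.kafka.maxRatePerPartition": "500",   # records/sec, per partition
}

def first_batch_limit(conf, num_partitions, batch_seconds):
    """Sketch of the intent: the first batch is capped by whichever bound is
    tighter, the global initial rate or the summed per-partition maximum."""
    initial = int(conf["spark.streaming.backpressure.initialRate"])
    per_partition = int(conf["spark.streaming.kafka.maxRatePerPartition"])
    effective_rate = min(initial, per_partition * num_partitions)  # records/sec
    return effective_rate * batch_seconds                          # records/batch

first_batch_limit(conf, num_partitions=4, batch_seconds=2)  # 2000 records
```

With four partitions the summed per-partition cap (2000 records/sec) is looser than the global initial rate (1000 records/sec), so the initial rate wins.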



[GitHub] spark issue #21884: k8s: explicitly expose ports on driver container

2018-07-27 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21884
  
can you update the format of the title and description as described here
"Pull Request" in https://spark.apache.org/contributing.html





[GitHub] spark issue #21893: Support selecting from partitioned tabels with partition...

2018-07-27 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21893
  
Mind filing a JIRA please? Please see 
http://spark.apache.org/contributing.html 





[GitHub] spark issue #21893: Support selecting from partitioned tabels with partition...

2018-07-27 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21893
  
Please review http://spark.apache.org/contributing.html before opening a 
pull request.





[GitHub] spark issue #21921: [SPARK-24971][SQL] remove SupportsDeprecatedScanRow

2018-08-01 Thread rdblue
Github user rdblue commented on the issue:

https://github.com/apache/spark/pull/21921
  
@cloud-fan, I thought it was a requirement to have a committer +1 before 
merging. Or is this [list of 
committers](https://spark.apache.org/committers.html) out of date?





[GitHub] spark issue #21988: [SPARK-25003][PYSPARK][BRANCH-2.2] Use SessionExtensions...

2018-08-07 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21988
  
we always open against master and backport if agreed upon.
this is documented here https://spark.apache.org/contributing.html






[GitHub] spark issue #22116: Update configuration.md

2018-08-15 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/22116
  
@KraFusion, mind double-checking whether there are other instances of the same 
issue, and fixing the PR title to reflect the change? It's also worth reading 
https://spark.apache.org/contributing.html even for a minor change.





[GitHub] spark issue #21812: SPARK UI K8S : this parameter's illustration(spark.kuber...

2018-07-18 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21812
  
@hehuiyuan, please ask a question via a mailing list. See also 
https://spark.apache.org/community.html





[GitHub] spark issue #21767: SPARK-24804 There are duplicate words in the test title ...

2018-07-18 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/21767
  
yeah, please avoid PRs that are this trivial, it's just not worth the 
overhead. But I merged it this time.
Also please read https://spark.apache.org/contributing.html





[GitHub] spark issue #21828: Update regression.py

2018-07-22 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21828
  
@woodthom2, if you have some plans to update this PR quite soon, please see 
https://spark.apache.org/contributing.html and proceed. Otherwise, I would 
suggest to leave this closed so that active PRs are left for reviewing.





[GitHub] spark issue #21755: Doc fix: The Imputer is an Estimator

2018-07-15 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/21755
  
I'm sure the failure is spurious, so merged to master.
PS see https://spark.apache.org/contributing.html





[GitHub] spark issue #20370: Changing JDBC relation to better process quotes

2018-01-23 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/20370
  
@conorbmurphy Could you create a JIRA and follow [the 
instructions](https://spark.apache.org/contributing.html) to make a 
contribution? 





[GitHub] spark issue #20790: AccumulatorV2 subclass isZero scaladoc fix

2018-03-10 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/20790
  
Wait .. I just found you opened a JIRA - SPARK-23642. Please link it by 
`[SPARK-23642][DOCS] ...`. see https://spark.apache.org/contributing.html


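The scaladoc being fixed above concerns the `AccumulatorV2` contract, where `isZero` reports whether the accumulator is in its zero state and `copyAndReset` must return a fresh accumulator for which `isZero` is true. A plain-Python sketch of that contract (illustrative only; names are pythonized and this is not Spark's API):

```python
class ListAccumulator:
    """Plain-Python sketch of the AccumulatorV2 contract (illustrative).

    The contract: is_zero reports whether the accumulator is in its zero
    state, and copy_and_reset must return a new accumulator whose is_zero
    is True.
    """

    def __init__(self):
        self._values = []

    def is_zero(self):
        # Zero state: nothing accumulated yet.
        return len(self._values) == 0

    def add(self, v):
        self._values.append(v)

    def merge(self, other):
        # Combine partial results from another task's accumulator.
        self._values.extend(other._values)

    def copy_and_reset(self):
        # Must satisfy: copy_and_reset().is_zero() == True
        return ListAccumulator()

    def value(self):
        return list(self._values)
```

For example, after `acc.add(1)`, `acc.is_zero()` is False, while `acc.copy_and_reset().is_zero()` remains True.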



[GitHub] spark issue #20897: [MINOR][DOC] Fix a few markdown typos

2018-04-01 Thread Lemonjing
Github user Lemonjing commented on the issue:

https://github.com/apache/spark/pull/20897
  
see http://spark.apache.org/docs/latest/ml-features.html#elementwiseproduct


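For reference, the `ElementwiseProduct` transformer linked above scales each input vector component-wise by a fixed weight vector (a Hadamard product). A plain-Python sketch of the computation, not the MLlib API:

```python
def elementwise_product(scaling_vec, input_vec):
    """Component-wise (Hadamard) product, as ElementwiseProduct computes it."""
    if len(scaling_vec) != len(input_vec):
        raise ValueError("vectors must have the same length")
    return [w * x for w, x in zip(scaling_vec, input_vec)]

# Zero entries in the scaling vector zero out the corresponding components.
elementwise_product([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # [0.0, 2.0, 6.0]
```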



[GitHub] spark issue #20669: [SPARK-22839][K8S] Remove the use of init-container for ...

2018-03-19 Thread foxish
Github user foxish commented on the issue:

https://github.com/apache/spark/pull/20669
  
There's a section explaining it at the bottom of 
https://spark.apache.org/committers.html





[GitHub] spark issue #22891: SPARK-25881

2018-10-30 Thread kiszk
Github user kiszk commented on the issue:

https://github.com/apache/spark/pull/22891
  
Thank you for your contribution.
Could you please write an appropriate title and description based on 
http://spark.apache.org/contributing.html ?





[GitHub] spark issue #22893: One part of Spark MLlib Kmean Logic Performance problem

2018-10-30 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/22893
  
Please fix the PR title as described in 
https://spark.apache.org/contributing.html and read it.





[GitHub] spark issue #22997: SPARK-25999: make-distribution.sh failure with --r and -...

2018-11-10 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/22997
  
btw, please see the page https://spark.apache.org/contributing.html and 
particularly "Pull Request" on the format.





[GitHub] spark issue #22596: Fix lint failure in 2.2

2018-09-30 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/22596
  
Can you link the JIRA https://issues.apache.org/jira/browse/SPARK-25576 ? 
Please see https://spark.apache.org/contributing.html





[GitHub] spark issue #23246: [SPARK-26292][CORE]Assert statement of currentPage may b...

2018-12-06 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/23246
  
From the description, it's not clear this is where the change belongs. Please 
review https://spark.apache.org/contributing.html This one should be closed.





[GitHub] spark issue #23107: small question in Spillable class

2018-11-21 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/23107
  
Please send questions to u...@spark.apache.org; this should be closed.





[GitHub] spark issue #23107: small question in Spillable class

2018-11-21 Thread dongjoon-hyun
Github user dongjoon-hyun commented on the issue:

https://github.com/apache/spark/pull/23107
  
Hi, @Charele .
Could you read http://spark.apache.org/community.html ?
You had better close this. :)





Re: unsubscribe

2019-06-12 Thread B2B Web ID
Hi, Sonu.
You can send an email to user-unsubscr...@spark.apache.org with the subject
"(send this email to unsubscribe)" to unsubscribe from this mailing
list[1].

Regards.

[1] https://spark.apache.org/community.html


2019-05-27 2:01 GMT+07.00, Sonu Jyotshna :
>
>



-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: unsubscribe

2019-04-30 Thread Arne Zachlod

please read this to unsubscribe: https://spark.apache.org/community.html

TL;DR: mail user-unsubscr...@spark.apache.org, not the list

On 4/30/19 6:38 AM, Amrit Jangid wrote:




-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



How to unsubscribe

2020-05-06 Thread Fred Liu
Hi guys


-

To unsubscribe e-mail: 
user-unsubscr...@spark.apache.org<mailto:user-unsubscr...@spark.apache.org>



From: Fred Liu 
Sent: Wednesday, May 6, 2020 10:10 AM
To: user@spark.apache.org
Subject: Unsubscribe






[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-06 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69851574
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,52 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal(urlStr), Literal(partToExtract))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String,
+        urlStr: String,
+        partToExtract: String,
+        key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal(urlStr), Literal(partToExtract), Literal(key))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
+      evaluate(ParseUrl(Seq(Literal("http://spark.apache.org/path?"),
+        Literal("QUERY"), Literal("???"))))
+    }
+
+    // arguments checking
+    assert(ParseUrl(Seq(Literal("1"))).checkInputDataTypes().isFailure)
+    assert(ParseUrl(Seq(Literal("1"), Literal("2"), Literal("3"), Literal("4")))
--- End diff --

As I declared ParseUrl with `ImplicitCastInputTypes`, I am not sure whether 
the cases with invalid-type parameters are necessary


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

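The behaviour those tests pin down can be approximated in plain Python with `urllib.parse`. This is an illustrative sketch only; Spark's actual `ParseUrl` expression differs in edge cases (for example, it returns null for every part of a malformed URL):

```python
from urllib.parse import urlparse, parse_qs

def parse_url(url, part, key=None):
    """Rough pure-Python analogue of Spark SQL's parse_url (illustrative)."""
    if url is None or part is None:
        return None
    try:
        u = urlparse(url)
    except ValueError:
        return None
    if part == "HOST":
        return u.hostname
    if part == "PATH":
        return u.path or None
    if part == "QUERY":
        if not u.query:
            return None
        if key is not None:
            # With a key, return that query parameter's first value.
            values = parse_qs(u.query).get(key)
            return values[0] if values else None
        return u.query
    if part == "REF":
        return u.fragment or None
    if part == "PROTOCOL":
        return u.scheme or None
    if part == "FILE":
        return u.path + ("?" + u.query if u.query else "") or None
    if part == "AUTHORITY":
        return u.netloc or None
    if part == "USERINFO":
        return u.username
    return None  # unknown part -> null, like Spark
```

For instance, `parse_url("http://spark.apache.org/path?query=1", "QUERY", "query")` yields `"1"`, matching the expected value in the test above.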



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-06 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69851130
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,52 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal(urlStr), Literal(partToExtract))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String,
+        urlStr: String,
+        partToExtract: String,
+        key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal(urlStr), Literal(partToExtract), Literal(key))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
+      evaluate(ParseUrl(Seq(Literal("http://spark.apache.org/path?"),
+        Literal("QUERY"), Literal("???"))))
+    }
+
+    // arguments checking
+    assert(ParseUrl(Seq(Literal("1"))).checkInputDataTypes().isFailure)
+    assert(ParseUrl(Seq(Literal("1"), Literal("2"), Literal("3"), Literal("4")))
--- End diff --

also add some cases with invalid-type parameters?





[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-06 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69853099
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,52 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal(urlStr), Literal(partToExtract))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String,
+        urlStr: String,
+        partToExtract: String,
+        key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal(urlStr), Literal(partToExtract), Literal(key))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
+      evaluate(ParseUrl(Seq(Literal("http://spark.apache.org/path?"),
+        Literal("QUERY"), Literal("???"))))
+    }
+
+    // arguments checking
+    assert(ParseUrl(Seq(Literal("1"))).checkInputDataTypes().isFailure)
+    assert(ParseUrl(Seq(Literal("1"), Literal("2"), Literal("3"), Literal("4")))
--- End diff --

ah right, no need to bother here





[spark-website] branch asf-site updated: Fix 2-4-6 web build

2020-06-10 Thread holden
This is an automated email from the ASF dual-hosted git repository.

holden pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 3d9740f  Fix 2-4-6 web build
3d9740f is described below

commit 3d9740f38beca3b8609b8650409edb93a70c1aec
Author: Holden Karau 
AuthorDate: Wed Jun 10 18:36:12 2020 -0700

Fix 2-4-6 web build

Fix the 2.4.6 web build, the jekyll serve wrote some localhost values in 
the sitemap we don't want and add the generated release files.

Author: Holden Karau 

Closes #266 from holdenk/spark-2-4-6-rebuild.
---
 site/mailing-lists.html|   2 +-
 site/{mailing-lists.html => news/spark-2-4-6.html} |  16 +-
 .../spark-release-2-4-6.html}  |  56 +++-
 site/sitemap.xml   | 370 ++---
 4 files changed, 248 insertions(+), 196 deletions(-)

diff --git a/site/mailing-lists.html b/site/mailing-lists.html
index 2f4a88f..f6686f9 100644
--- a/site/mailing-lists.html
+++ b/site/mailing-lists.html
@@ -12,7 +12,7 @@
 
   
 
-http://localhost:4000/community.html; />
+https://spark.apache.org/community.html; />
   
 
   
diff --git a/site/mailing-lists.html b/site/news/spark-2-4-6.html
similarity index 94%
copy from site/mailing-lists.html
copy to site/news/spark-2-4-6.html
index 2f4a88f..53d1399 100644
--- a/site/mailing-lists.html
+++ b/site/news/spark-2-4-6.html
@@ -6,14 +6,11 @@
   
 
   
- Mailing Lists | Apache Spark
+ Spark 2.4.6 released | Apache Spark
 
   
 
   
-
-http://localhost:4000/community.html; />
-  
 
   
 
@@ -203,7 +200,16 @@
   
 
   
-
+Spark 2.4.6 released
+
+
+We are happy to announce the availability of Spark 
2.4.6! Visit the release notes to read about the new features, or download the release today.
+
+
+
+
+Spark News Archive
+
 
   
 
diff --git a/site/mailing-lists.html b/site/releases/spark-release-2-4-6.html
similarity index 68%
copy from site/mailing-lists.html
copy to site/releases/spark-release-2-4-6.html
index 2f4a88f..299cf58 100644
--- a/site/mailing-lists.html
+++ b/site/releases/spark-release-2-4-6.html
@@ -6,14 +6,11 @@
   
 
   
- Mailing Lists | Apache Spark
+ Spark Release 2.4.6 | Apache Spark
 
   
 
   
-
-http://localhost:4000/community.html; />
-  
 
   
 
@@ -203,7 +200,56 @@
   
 
   
-
+Spark Release 2.4.6
+
+
+Spark 2.4.6 is a maintenance release containing stability, correctness, and 
security fixes. This release is based on the branch-2.4 maintenance branch of 
Spark. We strongly recommend all 2.4 users to upgrade to this stable 
release.
+
+Notable changes
+
+  https://issues.apache.org/jira/browse/SPARK-29419;>[SPARK-29419]: 
Seq.toDS / spark.createDataset(Seq) is not thread-safe
+  https://issues.apache.org/jira/browse/SPARK-31519;>[SPARK-31519]: 
Cast in having aggregate expressions returns the wrong result
+  https://issues.apache.org/jira/browse/SPARK-26293;>[SPARK-26293]: 
Cast exception when having python udf in subquery
+  https://issues.apache.org/jira/browse/SPARK-30826;>[SPARK-30826]: 
LIKE returns wrong result from external table using parquet
+  https://issues.apache.org/jira/browse/SPARK-30857;>[SPARK-30857]: 
Wrong truncations of timestamps before the epoch to hours and days
+  https://issues.apache.org/jira/browse/SPARK-31256;>[SPARK-31256]: 
Dropna doesnt work for struct columns
+  https://issues.apache.org/jira/browse/SPARK-31312;>[SPARK-31312]: 
Transforming Hive simple UDF (using JAR) expression may incur CNFE in later 
evaluation
+  https://issues.apache.org/jira/browse/SPARK-31420;>[SPARK-31420]: 
Infinite timeline redraw in job details page
+  https://issues.apache.org/jira/browse/SPARK-31485;>[SPARK-31485]: 
Barrier stage can hang if only partial tasks launched
+  https://issues.apache.org/jira/browse/SPARK-31500;>[SPARK-31500]: 
collect_set() of BinaryType returns duplicate elements
+  https://issues.apache.org/jira/browse/SPARK-31503;>[SPARK-31503]: fix 
the SQL string of the TRIM functions
+  https://issues.apache.org/jira/browse/SPARK-31663;>[SPARK-31663]: 
Grouping sets with having clause returns the wrong result
+  https://issues.apache.org/jira/browse/SPARK-26908;>[SPARK-26908]: Fix 
toMilis
+  https://issues.apache.org/jira/browse/SPARK-31563;>[SPARK-31563]: 
Failure of Inset.sql for UTF8String collection
+
+
+Dependency Changes
+
+While being a maintenance release we did still upgrade some dependencies in 
this release; they are:
+
+  netty-all to 4.1.47.Final (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20445;>[CVE-2019-20445])
+  Janino to 3.0.16 (SQL Generated code)
+  aws-java-sdk-sts to 1.11.655 (required for kinesis client upgrade)
+  snappy 1.1.7.5 (stability improvements & ppc64le performance)
+
+
+Known issues
+
+  htt

[spark-website] branch asf-site updated: Use ASF mail archives not defunct nabble links

2021-08-18 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new dc9faff  Use ASF mail archives not defunct nabble links
dc9faff is described below

commit dc9faff4a121070d58fc0f145d8f0a3521074fb3
Author: Sean Owen 
AuthorDate: Wed Aug 18 12:58:05 2021 -0500

Use ASF mail archives not defunct nabble links

Nabble archive links appear to not work anymore. Use ASF pony mail links 
instead for archives.

Author: Sean Owen 

Closes #355 from srowen/Nabble.
---
 community.md| 16 
 faq.md  |  2 +-
 site/community.html | 16 
 site/faq.html   |  2 +-
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/community.md b/community.md
index e8f2cf7..ebc438a 100644
--- a/community.md
+++ b/community.md
@@ -24,8 +24,8 @@ Some quick tips when using StackOverflow:
   - Search StackOverflow's 
   https://stackoverflow.com/questions/tagged/apache-spark;>`apache-spark`
 tag to see if 
   your question has already been answered
-  - Search the nabble archive for
-  http://apache-spark-user-list.1001560.n3.nabble.com/;>u...@spark.apache.org
 
+  - Search the ASF archive for
+  https://lists.apache.org/list.html?u...@spark.apache.org;>u...@spark.apache.org
 
 - Please follow the StackOverflow https://stackoverflow.com/help/how-to-ask;>code of conduct  
 - Always use the `apache-spark` tag when asking questions
 - Please also use a secondary tag to specify components so subject matter 
experts can more easily find them.
@@ -42,16 +42,16 @@ project, and scenarios, it is recommended you use the 
u...@spark.apache.org mail
 
 
   
-http://apache-spark-user-list.1001560.n3.nabble.com;>u...@spark.apache.org
 is for usage questions, help, and announcements.
+https://lists.apache.org/list.html?u...@spark.apache.org;>u...@spark.apache.org
 is for usage questions, help, and announcements.
 mailto:user-subscr...@spark.apache.org?subject=(send%20this%20email%20to%20subscribe)">(subscribe)
     mailto:user-unsubscr...@spark.apache.org?subject=(send%20this%20email%20to%20unsubscribe)">(unsubscribe)
-http://apache-spark-user-list.1001560.n3.nabble.com;>(archives)
+https://lists.apache.org/list.html?u...@spark.apache.org;>(archives)
   
   
-http://apache-spark-developers-list.1001551.n3.nabble.com;>d...@spark.apache.org
 is for people who want to contribute code to Spark.
+    https://lists.apache.org/list.html?d...@spark.apache.org;>d...@spark.apache.org
 is for people who want to contribute code to Spark.
 mailto:dev-subscr...@spark.apache.org?subject=(send%20this%20email%20to%20subscribe)">(subscribe)
 mailto:dev-unsubscr...@spark.apache.org?subject=(send%20this%20email%20to%20unsubscribe)">(unsubscribe)
-http://apache-spark-developers-list.1001551.n3.nabble.com;>(archives)
+https://lists.apache.org/list.html?d...@spark.apache.org;>(archives)
   
 
 
@@ -60,8 +60,8 @@ Some quick tips when using email:
 - Prior to asking submitting questions, please:
   - Search StackOverflow at https://stackoverflow.com/questions/tagged/apache-spark;>`apache-spark`
 
   to see if your question has already been answered
-  - Search the nabble archive for
-  http://apache-spark-user-list.1001560.n3.nabble.com/;>u...@spark.apache.org
 
+  - Search the ASF archive for
+  https://lists.apache.org/list.html?u...@spark.apache.org;>u...@spark.apache.org
 
 - Tagging the subject line of your email will help you get a faster response, 
e.g. 
 `[Spark SQL]: Does Spark SQL support LEFT SEMI JOIN?`
 - Tags may help identify a topic by:
diff --git a/faq.md b/faq.md
index af57f26..0275c18 100644
--- a/faq.md
+++ b/faq.md
@@ -71,4 +71,4 @@ Please also refer to our
 
 Where can I get more help?
 
-Please post on StackOverflow's https://stackoverflow.com/questions/tagged/apache-spark;>apache-spark
 tag or http://apache-spark-user-list.1001560.n3.nabble.com;>Spark 
Users mailing list.  For more information, please refer to https://spark.apache.org/community.html#have-questions;>Have 
Questions?.  We'll be glad to help!
+Please post on StackOverflow's https://stackoverflow.com/questions/tagged/apache-spark;>apache-spark
 tag or https://lists.apache.org/list.html?u...@spark.apache.org;>Spark Users 
mailing list.  For more information, please refer to https://spark.apache.org/community.html#have-questions;>Have 
Questions?.  We'll be glad to help!
diff --git a/site/community.html b/site/community.html
index b779d37..f4e1fcf 100644
--- a/site/community.html
+++ b/site/community.html
@@ -219,8 +219,8 @@ as it is an active forum for Spark users questions 
and answers.
   Search StackOverflows 
 https://stackoverflow.com/questions/tagged/apache-spark;

[jira] [Updated] (SPARK-40322) Fix all dead links

2022-09-05 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-40322:
-
Description: 
 

[https://www.deadlinkchecker.com/website-dead-link-checker.asp]

 

 
||Status||URL||Source link text||
|-1 Not found: The server name or address could not be 
resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
 Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
|404 Not 
Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
|-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
University|https://spark.apache.org/powered-by.html]|
|404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
Networks|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://www.nubetech.co/]|[Nube 
Technologies|https://spark.apache.org/powered-by.html]|
|-1 Timeout|[http://ooyala.com/]|[Ooyala, 
Inc.|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
 for Fast Queries|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
America|https://spark.apache.org/powered-by.html]|
|-1 
Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
Camp 2 [302 from 
http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
from 
http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
|-500 Internal Server 
Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
 Cookbook [301 from 
https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
|-500 Internal Server 
Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
 Spark Graph Processing [301 from 
https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
|500 Internal Server 
Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
|500 Internal Server 
Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
|500 Internal Server 
Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
|500 Internal Server 
Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
 Summit Europe|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
|-1 Not found: The server name or address could not be 
resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing with 
Spark|https://spark.apache.org/news/]|
|-1 Not found: The server name or address could not be 
resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring Spark's 
logs|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strata.oreilly.com/2012/08/seven-reasons-why-i-like-spark.html]|[Spark|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strata.oreilly.com/2012/11/shark-real-time-queries-and-analytics-for-big-data.html]|[Shark|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strata.oreilly.com/2012/10/spark-0-6-improves-performance-and-accessibility.html]|[Spark
 0.6 release|https://spark.apache.org/news/]|
|404 Not 
Found|[http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/]|[DataInformed|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strataconf.com/strata2013/public/schedule/detail/27438]|[introduction
 to Spark, Shark and BDAS|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strataconf.com/strata2013/public/schedule/detail/27440]|[hands-on
 exercise session

Difference among batchDuration, windowDuration, slideDuration

2014-07-16 Thread hsy...@gmail.com
When I'm reading the API of spark streaming, I'm confused by the 3
different durations

StreamingContext(conf: SparkConf
http://spark.apache.org/docs/latest/api/scala/org/apache/spark/SparkConf.html
, batchDuration: Duration
http://spark.apache.org/docs/latest/api/scala/org/apache/spark/streaming/Duration.html
)

DStream window(windowDuration: Duration
http://spark.apache.org/docs/latest/api/scala/org/apache/spark/streaming/Duration.html
, slideDuration: Duration
http://spark.apache.org/docs/latest/api/scala/org/apache/spark/streaming/Duration.html
): DStream
http://spark.apache.org/docs/latest/api/scala/org/apache/spark/streaming/dstream/DStream.html
[T]


Can anyone please explain these 3 different durations


Best,
Siyuan
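For readers who hit the same confusion: the batch duration chops the stream into RDDs, a window covers the last `windowDuration` worth of batches, and a new windowed RDD is emitted every `slideDuration`. Both window parameters must be multiples of the batch duration. The relationship can be sketched in plain Python (illustrative only — this is not Spark code, and the helper name `window_batches` is made up for the sketch):

```python
def window_batches(num_batches, batch_dur, window_dur, slide_dur):
    """Return, for each window emission, the indices of the batches it covers.

    batch_dur  -- batchDuration: the interval at which the stream is cut into batches
    window_dur -- windowDuration: how much recent data each window spans
    slide_dur  -- slideDuration: how often a new windowed result is produced
    """
    # Spark requires both window parameters to be multiples of the batch interval.
    assert window_dur % batch_dur == 0 and slide_dur % batch_dur == 0
    per_window = window_dur // batch_dur   # batches covered by one window
    per_slide = slide_dur // batch_dur     # batches between two emissions
    windows = []
    for end in range(per_window, num_batches + 1, per_slide):
        windows.append(list(range(end - per_window, end)))
    return windows

# batchDuration=2s, windowDuration=6s, slideDuration=4s:
# each window spans 3 batches, and a new one is emitted every 2 batches,
# so consecutive windows overlap by one batch.
print(window_batches(num_batches=8, batch_dur=2, window_dur=6, slide_dur=4))
# → [[0, 1, 2], [2, 3, 4], [4, 5, 6]]
```

With `slideDuration` equal to `windowDuration` the windows tile the stream with no overlap; smaller slide values give overlapping (sliding) windows.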


Re: groupBy gives non deterministic results

2014-09-10 Thread Ye Xianjin
Great. And you should ask questions on the user@spark.apache.org mailing list.  I 
believe many people don't subscribe to the incubator mailing list now.

-- 
Ye Xianjin
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Wednesday, September 10, 2014 at 6:03 PM, redocpot wrote:

 Hi, 
 
 I am using spark 1.0.0. The bug is fixed by 1.0.1.
 
 Hao
 
 
 
 --
 View this message in context: 
 http://apache-spark-user-list.1001560.n3.nabble.com/groupBy-gives-non-deterministic-results-tp13698p13864.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com 
 (http://Nabble.com).
 
 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org 
 (mailto:user-unsubscr...@spark.apache.org)
 For additional commands, e-mail: user-h...@spark.apache.org 
 (mailto:user-h...@spark.apache.org)
 
 




RE: Sort based shuffle not working properly?

2015-02-03 Thread Mohammed Guller
Nitin,
Suing Spark is not going to help. Perhaps you should sue someone else :-) Just 
kidding!

Mohammed


-Original Message-
From: nitinkak001 [mailto:nitinkak...@gmail.com] 
Sent: Tuesday, February 3, 2015 1:57 PM
To: user@spark.apache.org
Subject: Re: Sort based shuffle not working properly?

Just to add, I am suing Spark 1.1.0



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Sort-based-shuffle-not-working-properly-tp21487p21488.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional 
commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: small error in the docs?

2015-01-15 Thread Sean Owen
Yes that's a typo. The API docs and source code are correct though.
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions

That and your IDE should show the correct signature. You can open a PR
to fix the typo in
https://spark.apache.org/docs/latest/programming-guide.html

On Thu, Jan 15, 2015 at 4:43 PM, kirillfish k.rybac...@datacentric.ru wrote:
 cogroup() function seems to return (K, (IterableV, IterableW)), rather
 than (K, IterableV, IterableW), as it is pointed out in the docs (at
 least for version 1.1.0):

 https://spark.apache.org/docs/1.1.0/programming-guide.html

 This simple discrepancy cost me half a day of debugging and frustration.
 Kirill

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
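For anyone else tripped up by the same typo: the corrected return type is `(K, (Iterable[V], Iterable[W]))` — a pair whose second element is itself a tuple of the two grouped value collections. A plain-Python sketch of `cogroup`'s semantics (illustrative only, not Spark's implementation) makes the nesting visible:

```python
from collections import defaultdict

def cogroup(left, right):
    """Sketch of RDD.cogroup semantics: for each key present in either
    dataset, produce (values_from_left, values_from_right). The values are
    grouped inside a nested 2-tuple, not flattened into a 3-tuple."""
    lv, rv = defaultdict(list), defaultdict(list)
    for k, v in left:
        lv[k].append(v)
    for k, w in right:
        rv[k].append(w)
    # Keys missing from one side still appear, paired with an empty group.
    return {k: (lv[k], rv[k]) for k in set(lv) | set(rv)}

pairs = cogroup([("a", 1), ("a", 2), ("b", 3)], [("a", "x"), ("c", "y")])
print(pairs["a"])  # → ([1, 2], ['x']) -- a 2-tuple of groups, not a 3-tuple
```

Destructuring accordingly — `for (k, (vs, ws)) in ...` rather than `for (k, vs, ws) in ...` — is what the corrected docs imply.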



RE: spark 1.2 compatibility

2015-01-16 Thread Judy Nash
Yes. It's compatible with HDP 2.1 

-Original Message-
From: bhavyateja [mailto:bhavyateja.potin...@gmail.com] 
Sent: Friday, January 16, 2015 3:17 PM
To: user@spark.apache.org
Subject: spark 1.2 compatibility

Is Spark 1.2 compatible with HDP 2.1?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark-1-2-compatibility-tp21197.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional 
commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: missing explanation of cache in the documentation of cluster overview

2015-03-09 Thread Sean Owen
It's explained at
https://spark.apache.org/docs/latest/programming-guide.html and it's
configuration at
https://spark.apache.org/docs/latest/configuration.html  Have a read
over all the docs first.

On Mon, Mar 9, 2015 at 9:24 AM, Hui WANG hedonp...@gmail.com wrote:
 Hello Guys,

 I'm reading the documentation of cluster mode overview on
 https://spark.apache.org/docs/latest/cluster-overview.html.

 In the diagram, cache is shown beside the executor, but no explanation is
 given for it.

 Can someone please explain it and improve this page?

 --
 Hui WANG
 Tel : +33 (0) 6 71 33 45 39
 Blog : http://www.hui-wang.info

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: Running Spark jobs via oozie

2015-03-04 Thread Felix C
We have gotten it to work...

--- Original Message ---

From: nitinkak001 nitinkak...@gmail.com
Sent: March 3, 2015 7:46 AM
To: user@spark.apache.org
Subject: Re: Running Spark jobs via oozie

I am also starting to work on this one. Did you get any solution to this
issue?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Running-Spark-jobs-via-oozie-tp5187p21896.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Announcing Spark 1.3.1 and 1.2.2

2015-04-17 Thread Patrick Wendell
Hi All,

I'm happy to announce the Spark 1.3.1 and 1.2.2 maintenance releases.
We recommend all users on the 1.3 and 1.2 Spark branches upgrade to
these releases, which contain several important bug fixes.

Download Spark 1.3.1 or 1.2.2:
http://spark.apache.org/downloads.html

Release notes:
1.3.1: http://spark.apache.org/releases/spark-release-1-3-1.html
1.2.2:  http://spark.apache.org/releases/spark-release-1-2-2.html

Comprehensive list of fixes:
1.3.1: http://s.apache.org/spark-1.3.1
1.2.2: http://s.apache.org/spark-1.2.2

Thanks to everyone who worked on these releases!

- Patrick

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Announcing Spark 1.3.1 and 1.2.2

2015-04-17 Thread Patrick Wendell
Hi All,

I'm happy to announce the Spark 1.3.1 and 1.2.2 maintenance releases.
We recommend all users on the 1.3 and 1.2 Spark branches upgrade to
these releases, which contain several important bug fixes.

Download Spark 1.3.1 or 1.2.2:
http://spark.apache.org/downloads.html

Release notes:
1.3.1: http://spark.apache.org/releases/spark-release-1-3-1.html
1.2.2:  http://spark.apache.org/releases/spark-release-1-2-2.html

Comprehensive list of fixes:
1.3.1: http://s.apache.org/spark-1.3.1
1.2.2: http://s.apache.org/spark-1.2.2

Thanks to everyone who worked on these releases!

- Patrick

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



broken link on Spark Programming Guide

2015-04-07 Thread jonathangreenleaf
in the current Programming Guide:
https://spark.apache.org/docs/1.3.0/programming-guide.html#actions

under Actions, the Python link goes to:
https://spark.apache.org/docs/1.3.0/api/python/pyspark.rdd.RDD-class.html
which is 404

which I think should be:
https://spark.apache.org/docs/1.3.0/api/python/index.html#org.apache.spark.rdd.RDD

Thanks - Jonathan



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/broken-link-on-Spark-Programming-Guide-tp22414.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



RE: coalesce on dataFrame

2015-07-01 Thread Ewan Leith
It's in spark 1.4.0, or should be at least:

https://issues.apache.org/jira/browse/SPARK-6972

Ewan

-Original Message-
From: Hafiz Mujadid [mailto:hafizmujadi...@gmail.com] 
Sent: 01 July 2015 08:23
To: user@spark.apache.org
Subject: coalesce on dataFrame

How can we use coalesce(1, true) on dataFrame?


Thanks



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/coalesce-on-dataFrame-tp23564.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional 
commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
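As background to the thread above: `coalesce(n)` reduces the number of partitions by merging whole existing partitions rather than redistributing individual elements, which is why it can avoid a shuffle (the optional second argument on the RDD version forces a shuffle when set to true). A plain-Python sketch of the no-shuffle semantics (illustrative only — the partition-grouping strategy here is round-robin for simplicity; Spark's actual grouping is different and locality-aware):

```python
def coalesce(partitions, n):
    """Sketch of coalesce(n) semantics without a shuffle: merge existing
    partitions down to at most n output partitions. Each output partition
    is a concatenation of whole input partitions -- no element moves
    independently of its partition."""
    n = min(n, len(partitions))        # coalesce never increases partition count
    out = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        out[i % n].extend(part)        # assign whole partitions round-robin
    return out

parts = [[1, 2], [3], [4, 5], [6]]
print(coalesce(parts, 2))  # → [[1, 2, 4, 5], [3, 6]]
```

Coalescing to a single partition, as in the question, simply concatenates everything into one partition — which is why `coalesce(1)` on a large dataset funnels all data through one task.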


