Mailing lists matching spark.apache.org

commits@spark.apache.org
dev@spark.apache.org
issues@spark.apache.org
reviews@spark.apache.org
user@spark.apache.org


Updated Spark logo

2016-06-10 Thread Matei Zaharia
Hi all, FYI, we've recently updated the Spark logo at https://spark.apache.org/ 
to say "Apache Spark" instead of just "Spark". Many ASF projects have been 
doing this recently to make it clearer that they are associated with the ASF, 
and indeed the ASF's branding guidelines generally require that projects be 
referred to as "Apache X" in various settings, especially in related commercial 
or open source products (https://www.apache.org/foundation/marks/). If you have 
any kind of site or product that uses the Spark logo, it would be great to 
update to this full one.

There are EPS versions of the logo available at 
https://spark.apache.org/images/spark-logo.eps and 
https://spark.apache.org/images/spark-logo-reverse.eps; before using these also 
check https://www.apache.org/foundation/marks/.

Matei
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Spark Website

2016-07-13 Thread Maurin Lenglart
Same here

From: Benjamin Kim <bbuil...@gmail.com>
Date: Wednesday, July 13, 2016 at 11:47 AM
To: manish ranjan <cse1.man...@gmail.com>
Cc: user <user@spark.apache.org>
Subject: Re: Spark Website

It takes me to the directories instead of the webpage.

On Jul 13, 2016, at 11:45 AM, manish ranjan <cse1.man...@gmail.com> wrote:

working for me. What do you mean 'as supposed to'?

~Manish


On Wed, Jul 13, 2016 at 11:45 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
Has anyone noticed that the spark.apache.org is not working as supposed to?


-----
To unsubscribe e-mail: user-unsubscr...@spark.apache.org




Re: Spark Website

2016-07-13 Thread Pradeep Gollakota
Worked for me if I go to https://spark.apache.org/site/ but not
https://spark.apache.org

On Wed, Jul 13, 2016 at 11:48 AM, Maurin Lenglart <mau...@cuberonlabs.com>
wrote:

> Same here
>
>
>
> *From: *Benjamin Kim <bbuil...@gmail.com>
> *Date: *Wednesday, July 13, 2016 at 11:47 AM
> *To: *manish ranjan <cse1.man...@gmail.com>
> *Cc: *user <user@spark.apache.org>
> *Subject: *Re: Spark Website
>
>
>
> It takes me to the directories instead of the webpage.
>
>
>
> On Jul 13, 2016, at 11:45 AM, manish ranjan <cse1.man...@gmail.com> wrote:
>
>
>
> working for me. What do you mean 'as supposed to'?
>
>
> ~Manish
>
>
>
> On Wed, Jul 13, 2016 at 11:45 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
> Has anyone noticed that the spark.apache.org is not working as supposed
> to?
>
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
>
>
>


[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69385849
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://userinfo@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

Yes, definitely I can do that. In fact I have finished it.
But before I commit, let us think it through first.
In the `checkAnalysis` method for `LogicalPlan`, the only method that will be 
called for `Expression` is `checkInputDataTypes`:

https://github.com/apache/spark/blob/d1e8108854deba3de8e2d87eb4389d11fb17ee57/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala#L64

This means we can only implement this validation in `checkInputDataTypes` of 
`ParseUrl`. In that case Spark will give an AnalysisException like this:
> org.apache.spark.sql.AnalysisException: cannot resolve 
'parse_url("http://spark.apache.org/path?", "QUERY", "???")' due to data type 
mismatch: wrong key "???"; line 1 pos 0

But obviously this should not be a data type mismatch, and the message may 
confuse users. The different messages for a **Literal** `key` and a **not 
Literal** `key` may confuse them as well.
On the other hand, if we do not validate the **Literal** `key`, the `Executor` 
will get an exception at the first row, which seems acceptable.
So, weighing both sides, I think we should not do the Literal `key` validation.
What do you think?
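
For context, this is how the new function is exercised from SQL (a sketch 
mirroring the test cases above; it assumes a spark-shell session where `spark` 
is the SparkSession):

// Two-argument form: extract a part of the URL.
spark.sql("SELECT parse_url('http://spark.apache.org/path?query=1', 'HOST')").show()
// -> spark.apache.org

// Three-argument form: extract a single key from the query string.
spark.sql("SELECT parse_url('http://spark.apache.org/path?query=1', 'QUERY', 'query')").show()
// -> 1; an unknown key such as 'quer' yields NULL, as in the tests above.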


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[jira] [Created] (SPARK-19546) Every mail to u...@spark.apache.org is blocked

2017-02-10 Thread Shivam Sharma (JIRA)
Shivam Sharma created SPARK-19546:
-

 Summary: Every mail to u...@spark.apache.org is blocked
 Key: SPARK-19546
 URL: https://issues.apache.org/jira/browse/SPARK-19546
 Project: Spark
  Issue Type: IT Help
  Components: Project Infra
Affects Versions: 2.1.0
Reporter: Shivam Sharma
Priority: Minor


Each time I send mail to u...@spark.apache.org I get an email from 
yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-19546) Every mail to u...@spark.apache.org is getting blocked

2017-02-10 Thread Shivam Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivam Sharma updated SPARK-19546:
--
Priority: Major  (was: Minor)

> Every mail to u...@spark.apache.org is getting blocked
> --
>
> Key: SPARK-19546
> URL: https://issues.apache.org/jira/browse/SPARK-19546
> Project: Spark
>  Issue Type: IT Help
>  Components: Project Infra
>Affects Versions: 2.1.0
>Reporter: Shivam Sharma
>
> Each time I send mail to u...@spark.apache.org I get an email from 
> yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-19034) Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2

2016-12-29 Thread Sanjay Dasgupta (JIRA)
Sanjay Dasgupta created SPARK-19034:
---

 Summary: Download packages on 'spark.apache.org/downloads.html' 
contain release 2.0.2
 Key: SPARK-19034
 URL: https://issues.apache.org/jira/browse/SPARK-19034
 Project: Spark
  Issue Type: Bug
  Components: Build
Affects Versions: 2.1.0
 Environment: All
Reporter: Sanjay Dasgupta


Download packages on 'https://spark.apache.org/downloads.html' have the right 
name (spark-2.1.0-bin-...) but contain the release 2.0.2 software.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-19034) Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2

2016-12-30 Thread Dongjoon Hyun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15787411#comment-15787411
 ] 

Dongjoon Hyun commented on SPARK-19034:
---

+1

> Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2
> 
>
> Key: SPARK-19034
> URL: https://issues.apache.org/jira/browse/SPARK-19034
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0
> Environment: All
>Reporter: Sanjay Dasgupta
>  Labels: distribution, download
>
> Download packages on 'https://spark.apache.org/downloads.html' have the right 
> name ( spark-2.1.0-bin-...) but contain the release 2.0.2 software



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-19034) Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2

2016-12-30 Thread Sanjay Dasgupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15787426#comment-15787426
 ] 

Sanjay Dasgupta commented on SPARK-19034:
-

Yes, the SPARK_HOME was the issue.

Apologies for the confusion.

> Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2
> 
>
> Key: SPARK-19034
> URL: https://issues.apache.org/jira/browse/SPARK-19034
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0
> Environment: All
>Reporter: Sanjay Dasgupta
>  Labels: distribution, download
>
> Download packages on 'https://spark.apache.org/downloads.html' have the right 
> name ( spark-2.1.0-bin-...) but contain the release 2.0.2 software



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-20456) Document major aggregation functions for pyspark

2017-04-25 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982507#comment-15982507
 ] 

Hyukjin Kwon edited comment on SPARK-20456 at 4/25/17 7:37 AM:
---

{quote}
Document `sql.functions.py`:
1. Document the common aggregate functions (`min`, `max`, `mean`, `count`, 
`collect_set`, `collect_list`, `stddev`, `variance`)
{quote}

I think we have documentations for ...

min - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.min
max - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.max
mean - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.mean
count - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.count
collect_set - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.collect_set
collect_list - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.collect_list
stddev - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.stddev
variance - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.variance

in 
https://github.com/apache/spark/blob/3fbf0a5f9297f438bc92db11f106d4a0ae568613/python/pyspark/sql/functions.py

{quote}
2. Rename columns in datetime examples.
{quote}

Could you give some pointers?

{quote}
5. Document `lit`
{quote}

lit - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.lit

It seems documented.


was (Author: hyukjin.kwon):

> Document `sql.functions.py`:
1. Document the common aggregate functions (`min`, `max`, `mean`, `count`, 
`collect_set`, `collect_list`, `stddev`, `variance`)

I think we have documentations for ...

min - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.min
max - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.max
mean - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.mean
count - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.count
collect_set - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.collect_set
collect_list - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.collect_list
stddev - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.stddev
variance - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.variance

in 
https://github.com/apache/spark/blob/3fbf0a5f9297f438bc92db11f106d4a0ae568613/python/pyspark/sql/functions.py

> 2. Rename columns in datetime examples.

Could you give some pointers?

> 5. Document `lit`

lit - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.lit

It seems documented.

> Document major aggregation functions for pyspark
> 
>
> Key: SPARK-20456
> URL: https://issues.apache.org/jira/browse/SPARK-20456
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.1.0
>Reporter: Michael Patterson
>
> Document `sql.functions.py`:
> 1. Document the common aggregate functions (`min`, `max`, `mean`, `count`, 
> `collect_set`, `collect_list`, `stddev`, `variance`)
> 2. Rename columns in datetime examples.
> 3. Add examples for `unix_timestamp` and `from_unixtime`
> 4. Add note to all trigonometry functions that units are radians.
> 5. Document `lit`
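
For reference, all of these aggregates are also exposed in the Scala DataFrame 
API (a sketch; the DataFrame `df` and column name are made up):

import org.apache.spark.sql.functions._
// One pass computing every aggregate named in the ticket above.
df.agg(min("x"), max("x"), mean("x"), count("x"),
  collect_set("x"), collect_list("x"), stddev("x"), variance("x")).show()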



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)
Artur Sukhenko created SPARK-21593:
--

 Summary: Fix broken configuration page
 Key: SPARK-21593
 URL: https://issues.apache.org/jira/browse/SPARK-21593
 Project: Spark
  Issue Type: Bug
  Components: Documentation
Affects Versions: 2.2.0
 Environment: Chrome/Firefox
Reporter: Artur Sukhenko
Priority: Minor


Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: Welcoming Tejas Patil as a Spark committer

2017-10-03 Thread Dilip Biswal
Congratulations, Tejas!
 
-- Dilip
 
 
----- Original message -----
From: Suresh Thalamati <suresh.thalam...@gmail.com>
To: "dev@spark.apache.org" <dev@spark.apache.org>
Cc:
Subject: Re: Welcoming Tejas Patil as a Spark committer
Date: Tue, Oct 3, 2017 12:01 PM

Congratulations, Tejas!

-suresh

> On Sep 29, 2017, at 12:58 PM, Matei Zaharia <matei.zaha...@gmail.com> wrote:
>
> Hi all,
>
> The Spark PMC recently added Tejas Patil as a committer on the
> project. Tejas has been contributing across several areas of Spark for
> a while, focusing especially on scalability issues and SQL. Please
> join me in welcoming Tejas!
>
> Matei
>
> -----
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
 


-----
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-01-15 Thread Reynold Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326619#comment-16326619
 ] 

Reynold Xin commented on SPARK-23083:
-

Here's the website repo: [https://github.com/apache/spark-website]

 

> Adding Kubernetes as an option to https://spark.apache.org/
> ---
>
> Key: SPARK-23083
> URL: https://issues.apache.org/jira/browse/SPARK-23083
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Anirudh Ramanathan
>Priority: Minor
>
> [https://spark.apache.org/] can now include a reference to, and the k8s logo.
> I think this is not tied to the docs.
> cc/ [~rxin] [~sameer]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-01-15 Thread Anirudh Ramanathan (JIRA)
Anirudh Ramanathan created SPARK-23083:
--

 Summary: Adding Kubernetes as an option to 
https://spark.apache.org/
 Key: SPARK-23083
 URL: https://issues.apache.org/jira/browse/SPARK-23083
 Project: Spark
  Issue Type: Sub-task
  Components: Kubernetes
Affects Versions: 2.3.0
Reporter: Anirudh Ramanathan


[https://spark.apache.org/] can now include a reference to, and the k8s logo.

I think this is not tied to the docs.

cc/ [~rxin] [~sameer]

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-01-15 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326635#comment-16326635
 ] 

Sean Owen commented on SPARK-23083:
---

Yes that's fine. If it only makes sense after the 2.3 release then we'll wait 
to merge the PR.

> Adding Kubernetes as an option to https://spark.apache.org/
> ---
>
> Key: SPARK-23083
> URL: https://issues.apache.org/jira/browse/SPARK-23083
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Anirudh Ramanathan
>Priority: Minor
>
> [https://spark.apache.org/] can now include a reference to, and the k8s logo.
> I think this is not tied to the docs.
> cc/ [~rxin] [~sameer]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-01-15 Thread Anirudh Ramanathan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-23083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326711#comment-16326711
 ] 

Anirudh Ramanathan commented on SPARK-23083:


opened https://github.com/apache/spark-website/pull/87

> Adding Kubernetes as an option to https://spark.apache.org/
> ---
>
> Key: SPARK-23083
> URL: https://issues.apache.org/jira/browse/SPARK-23083
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Anirudh Ramanathan
>Priority: Minor
>
> [https://spark.apache.org/] can now include a reference to, and the k8s logo.
> I think this is not tied to the docs.
> cc/ [~rxin] [~sameer]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[GitHub] spark pull request #22370: don't link to deprecated function

2018-09-10 Thread MichaelChirico
Github user MichaelChirico commented on a diff in the pull request:

https://github.com/apache/spark/pull/22370#discussion_r216222094
  
--- Diff: R/pkg/R/catalog.R ---
@@ -69,7 +69,6 @@ createExternalTable <- function(x, ...) {
 #' @param ... additional named parameters as options for the data source.
 #' @return A SparkDataFrame.
 #' @rdname createTable
-#' @seealso \link{createExternalTable}
--- End diff --

I don't see it here (nothing pointing to 
`sparkR.init`/`sparkRHive.init`/`sparkRSQL.init`):

https://spark.apache.org/docs/latest/api/R/sparkR.session.html

or here (nothing pointing to `dropTempTable`):

https://spark.apache.org/docs/latest/api/R/dropTempView.html

But I do see it here (points to `registerTempTable`):

https://spark.apache.org/docs/latest/api/R/createOrReplaceTempView.html


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[jira] [Resolved] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-02-28 Thread Anirudh Ramanathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anirudh Ramanathan resolved SPARK-23083.

Resolution: Fixed

This has been merged, closing.

> Adding Kubernetes as an option to https://spark.apache.org/
> ---
>
> Key: SPARK-23083
> URL: https://issues.apache.org/jira/browse/SPARK-23083
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Anirudh Ramanathan
>Priority: Minor
>
> [https://spark.apache.org/] can now include a reference to, and the k8s logo.
> I think this is not tied to the docs.
> cc/ [~rxin] [~sameer]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-03-13 Thread Anirudh Ramanathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anirudh Ramanathan reassigned SPARK-23083:
--

Assignee: Anirudh Ramanathan

> Adding Kubernetes as an option to https://spark.apache.org/
> ---
>
> Key: SPARK-23083
> URL: https://issues.apache.org/jira/browse/SPARK-23083
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Anirudh Ramanathan
>Assignee: Anirudh Ramanathan
>    Priority: Minor
>
> [https://spark.apache.org/] can now include a reference to, and the k8s logo.
> I think this is not tied to the docs.
> cc/ [~rxin] [~sameer]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: Sample date_trunc error for webpage (https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc )

2019-07-07 Thread Sean Owen
binggan1989, I don't see any problem in that snippet. What are you
referring to?

On Sun, Jul 7, 2019, 2:22 PM Chris Lambertus  wrote:

> Spark,
>
> We received this message. I have not ACKd it.
>
> -Chris
> INFRA
>
>
> Begin forwarded message:
>
> *From: *"binggan1989" 
> *Subject: **Sample date_trunc error for webpage
> (https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc
> <https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc> )*
> *Date: *July 5, 2019 at 2:54:54 AM PDT
> *To: *"webmaster" 
> *Reply-To: *"binggan1989" 
>
>
>
> I found an example of the function usage given on the website is incorrect
> and needs to be fixed.
>
> https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc
>
>
>
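
For reference, the documented signature is date_trunc(fmt, ts). A quick way to 
check the published example is a Spark 2.3+ spark-shell (a sketch; the report 
does not say which part of the snippet it considers wrong):

spark.sql("SELECT date_trunc('YEAR', '2015-03-05T09:32:05.359')").show(false)
// expected output: 2015-01-01 00:00:00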

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org

[GitHub] [spark] sarutak commented on pull request #34720: [SPARK-37469][WebUI] unified shuffle read block time to shuffle read fetch wait time in StagePage

2021-11-25 Thread GitBox


sarutak commented on pull request #34720:
URL: https://github.com/apache/spark/pull/34720#issuecomment-979728617


   @toujours33 
   nit, but could you update the 
[screenshot](http://spark.apache.org/docs/latest/img/AllStagesPageDetail6.png) 
in the [doc](spark.apache.org/docs/latest/web-ui.html) ?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] toujours33 commented on pull request #34720: [SPARK-37469][WebUI] unified shuffle read block time to shuffle read fetch wait time in StagePage

2021-11-25 Thread GitBox


toujours33 commented on pull request #34720:
URL: https://github.com/apache/spark/pull/34720#issuecomment-979739800


   > @toujours33 nit, but could you update the 
[screenshot](http://spark.apache.org/docs/latest/img/AllStagesPageDetail6.png) 
in the [doc](spark.apache.org/docs/latest/web-ui.html) ?
   
   ok


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark-website] srowen commented on a diff in pull request #400: [SPARK-39512] Document docker image release steps

2022-06-23 Thread GitBox


srowen commented on code in PR #400:
URL: https://github.com/apache/spark-website/pull/400#discussion_r904998278


##
site/sitemap.xml:
##
@@ -941,27 +941,27 @@
   weekly
 
 
-  https://spark.apache.org/graphx/
+  https://spark.apache.org/news/

Review Comment:
   @holdenk just checking if you saw this - do we want to revert or is this the 
right order now?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] gengliangwang commented on a diff in pull request #400: [SPARK-39512] Document docker image release steps

2022-06-24 Thread GitBox


gengliangwang commented on code in PR #400:
URL: https://github.com/apache/spark-website/pull/400#discussion_r906444239


##
site/sitemap.xml:
##
@@ -941,27 +941,27 @@
   weekly
 
 
-  https://spark.apache.org/graphx/
+  https://spark.apache.org/news/

Review Comment:
   +1 @srowen. The changes to this file seem unnecessary.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] srowen commented on a diff in pull request #400: [SPARK-39512] Document docker image release steps

2022-06-18 Thread GitBox


srowen commented on code in PR #400:
URL: https://github.com/apache/spark-website/pull/400#discussion_r901012063


##
site/sitemap.xml:
##
@@ -941,27 +941,27 @@
   weekly
 
 
-  https://spark.apache.org/graphx/
+  https://spark.apache.org/news/

Review Comment:
   I don't know which ordering is correct, but maybe revert this change?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark] sunchao commented on pull request #38352: [SPARK-40801][BUILD][3.2] Upgrade `Apache commons-text` to 1.10

2022-11-15 Thread GitBox


sunchao commented on PR #38352:
URL: https://github.com/apache/spark/pull/38352#issuecomment-1316416703

   @bsikander again, pls check d...@spark.apache.org - it's being voted on.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] bjornjorgensen commented on pull request #38352: [SPARK-40801][BUILD][3.2] Upgrade `Apache commons-text` to 1.10

2022-11-08 Thread GitBox


bjornjorgensen commented on PR #38352:
URL: https://github.com/apache/spark/pull/38352#issuecomment-1307663321

   @fryz It will be posted at d...@spark.apache.org and u...@spark.apache.org


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] bjornjorgensen commented on pull request #40171: [Tests] Refactor TPCH schema to separate file similar to TPCDS for code reuse

2023-02-25 Thread via GitHub


bjornjorgensen commented on PR #40171:
URL: https://github.com/apache/spark/pull/40171#issuecomment-1445168808

   priv...@spark.apache.org


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] dongjoon-hyun commented on pull request #36374: [SPARK-39006][K8S] Show a directional error message for executor PVC dynamic allocation failure

2023-06-23 Thread via GitHub


dongjoon-hyun commented on PR #36374:
URL: https://github.com/apache/spark/pull/36374#issuecomment-1605232031

   Thank you for confirming. Apache Spark 3.4.1 is released officially.
   - https://lists.apache.org/list.html?d...@spark.apache.org
   - https://spark.apache.org/downloads.html


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[jira] [Commented] (SPARK-20456) Document major aggregation functions for pyspark

2017-04-25 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982507#comment-15982507
 ] 

Hyukjin Kwon commented on SPARK-20456:
--


> Document `sql.functions.py`:
1. Document the common aggregate functions (`min`, `max`, `mean`, `count`, 
`collect_set`, `collect_list`, `stddev`, `variance`)

I think we have documentations for ...

min - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.min
max - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.max
mean - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.mean
count - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.count
collect_set - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.collect_set
collect_list - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.collect_list
stddev - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.stddev
variance - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.variance

in 
https://github.com/apache/spark/blob/3fbf0a5f9297f438bc92db11f106d4a0ae568613/python/pyspark/sql/functions.py

> 2. Rename columns in datetime examples.

Could you give some pointers?

> 5. Document `lit`

lit - 
https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.lit

It seems documented.

> Document major aggregation functions for pyspark
> 
>
> Key: SPARK-20456
> URL: https://issues.apache.org/jira/browse/SPARK-20456
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.1.0
>Reporter: Michael Patterson
>
> Document `sql.functions.py`:
> 1. Document the common aggregate functions (`min`, `max`, `mean`, `count`, 
> `collect_set`, `collect_list`, `stddev`, `variance`)
> 2. Rename columns in datetime examples.
> 3. Add examples for `unix_timestamp` and `from_unixtime`
> 4. Add note to all trigonometry functions that units are radians.
> 5. Document `lit`



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artur Sukhenko updated SPARK-21593:
---
Description: 
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]


  was:
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]

!doc_latest.jpg|thumbnail!


> Fix broken configuration page
> -
>
> Key: SPARK-21593
> URL: https://issues.apache.org/jira/browse/SPARK-21593
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.2.0
> Environment: Chrome/Firefox
>Reporter: Artur Sukhenko
>Priority: Minor
> Attachments: doc_211.png, doc_latest.png
>
>
> Latest configuration page for Spark 2.2.0 has broken menu list and named 
> anchors.
> Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
> with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]
> Or try this link [Configuration # Dynamic 
> Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artur Sukhenko updated SPARK-21593:
---
Description: 
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
!dyn_latest.jpg|thumbnail!
!dyn_211.jpg|thumbnail!

!doc_latest.jpg|thumbnail!
!doc_211.jpg|thumbnail!


  was:
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]



> Fix broken configuration page
> -
>
> Key: SPARK-21593
> URL: https://issues.apache.org/jira/browse/SPARK-21593
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.2.0
> Environment: Chrome/Firefox
>Reporter: Artur Sukhenko
>Priority: Minor
> Attachments: doc_211.jpg, doc_latest.jpg, dyn_211.jpg, dyn_latest.jpg
>
>
> Latest configuration page for Spark 2.2.0 has broken menu list and named 
> anchors.
> Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
> with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]
> Or try this link [Configuration # Dynamic 
> Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
> !dyn_latest.jpg|thumbnail!
> !dyn_211.jpg|thumbnail!
> !doc_latest.jpg|thumbnail!
> !doc_211.jpg|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artur Sukhenko updated SPARK-21593:
---
Description: 
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
!dyn_latest.jpg!



!dyn_211.jpg!



  was:
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
!dyn_latest.jpg|thumbnail!
!dyn_211.jpg|thumbnail!

!doc_latest.jpg|thumbnail!
!doc_211.jpg|thumbnail!



> Fix broken configuration page
> -
>
> Key: SPARK-21593
> URL: https://issues.apache.org/jira/browse/SPARK-21593
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.2.0
> Environment: Chrome/Firefox
>Reporter: Artur Sukhenko
>Priority: Minor
> Attachments: doc_211.jpg, doc_latest.jpg, dyn_211.jpg, dyn_latest.jpg
>
>
> Latest configuration page for Spark 2.2.0 has broken menu list and named 
> anchors.
> Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
> with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]
> Or try this link [Configuration # Dynamic 
> Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
> !dyn_latest.jpg!
> 
> !dyn_211.jpg!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artur Sukhenko updated SPARK-21593:
---
Description: 
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]

!doc_latest.jpg|thumbnail!

  was:
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]


> Fix broken configuration page
> -
>
> Key: SPARK-21593
> URL: https://issues.apache.org/jira/browse/SPARK-21593
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.2.0
> Environment: Chrome/Firefox
>Reporter: Artur Sukhenko
>Priority: Minor
> Attachments: doc_211.png, doc_latest.png
>
>
> Latest configuration page for Spark 2.2.0 has broken menu list and named 
> anchors.
> Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
> with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]
> Or try this link [Configuration # Dynamic 
> Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
> !doc_latest.jpg|thumbnail!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artur Sukhenko updated SPARK-21593:
---
Description: 
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation]
 with should open Dynamic Allocation part of the page, but doesn't.
!dyn_latest.jpg!



!dyn_211.jpg!



  was:
Latest configuration page for Spark 2.2.0 has broken menu list and named 
anchors.
Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]

Or try this link [Configuration # Dynamic 
Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation]
!dyn_latest.jpg!



!dyn_211.jpg!




> Fix broken configuration page
> -
>
> Key: SPARK-21593
> URL: https://issues.apache.org/jira/browse/SPARK-21593
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 2.2.0
> Environment: Chrome/Firefox
>Reporter: Artur Sukhenko
>Priority: Minor
> Attachments: doc_211.jpg, doc_latest.jpg, dyn_211.jpg, dyn_latest.jpg
>
>
> Latest configuration page for Spark 2.2.0 has broken menu list and named 
> anchors.
> Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] 
> with [Latest docs |https://spark.apache.org/docs/latest/configuration.html]
> Or try this link [Configuration # Dynamic 
> Allocation|https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation]
>  with should open Dynamic Allocation part of the page, but doesn't.
> !dyn_latest.jpg!
> 
> !dyn_211.jpg!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: [PR] [SPARK-45935][PYTHON][DOCS] Fix RST files link substitutions error [spark]

2023-11-15 Thread via GitHub


dongjoon-hyun commented on code in PR #43815:
URL: https://github.com/apache/spark/pull/43815#discussion_r1394963625


##
python/docs/source/conf.py:
##
@@ -102,9 +102,9 @@
 .. |examples| replace:: Examples
 .. _examples: https://github.com/apache/spark/tree/{0}/examples/src/main/python
 .. |downloading| replace:: Downloading
-.. _downloading: https://spark.apache.org/docs/{1}/building-spark.html
+.. _downloading: https://spark.apache.org/docs/{1}/#downloading
 .. |building_spark| replace:: Building Spark
-.. _building_spark: https://spark.apache.org/docs/{1}/#downloading
+.. _building_spark: https://spark.apache.org/docs/{1}/building-spark.html

Review Comment:
   If this happens in Apache Spark 3.5.0, could you add `3.5.0` to the affected 
version, @panbingkun ?
   
   https://github.com/apache/spark/assets/9700541/d3966d02-8572-4a96-b56a-a4bf729e65f9


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



Re: [sql]enable spark sql cli support spark sql

2014-08-14 Thread Cheng Lian
In the long run, as Michael suggested in his Spark Summit 14 talk, we’d like to 
implement SQL-92, maybe with the help of Optiq.

On Aug 15, 2014, at 1:13 PM, Cheng, Hao hao.ch...@intel.com wrote:

 Actually the SQL Parser (another SQL dialect in SparkSQL) is quite weak, and 
 only support some basic queries, not sure what's the plan for its enhancement.
 
 -Original Message-
 From: scwf [mailto:wangf...@huawei.com] 
 Sent: Friday, August 15, 2014 11:22 AM
 To: dev@spark.apache.org
 Subject: [sql]enable spark sql cli support spark sql
 
 hi all,
   now spark sql cli only support spark hql, i think we can enable this cli to 
 support spark sql, do you think it's necessary?
 
 -- 
 
 Best Regards
 Fei Wang
 
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org For additional 
 commands, e-mail: dev-h...@spark.apache.org
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Hamburg Apache Spark Meetup

2015-02-18 Thread Johan Beisser
If you could also add the Hamburg Apache Spark Meetup, I'd appreciate it.

http://www.meetup.com/Hamburg-Apache-Spark-Meetup/

On Tue, Feb 17, 2015 at 5:08 PM, Matei Zaharia matei.zaha...@gmail.com wrote:
 Thanks! I've added you.

 Matei

 On Feb 17, 2015, at 4:06 PM, Ralph Bergmann | the4thFloor.eu 
 ra...@the4thfloor.eu wrote:

 Hi,


 there is a small Spark Meetup group in Berlin, Germany :-)
 http://www.meetup.com/Berlin-Apache-Spark-Meetup/

Please add this group to the Meetups list at
 https://spark.apache.org/community.html


 Ralph

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org



 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: insert Hive table with RDD

2015-03-04 Thread patcharee

Hi,

I guess that the toDF() API is in Spark 1.3, which requires a build from the 
source code?


Patcharee

On 03. mars 2015 13:42, Cheng, Hao wrote:

Using the SchemaRDD / DataFrame API via HiveContext

Assume you're using the latest code, something probably like:

val hc = new HiveContext(sc)
import hc.implicits._
existedRdd.toDF().insertInto("hivetable")

or

existedRdd.toDF().registerTempTable("mydata")
hc.sql("insert into hivetable as select xxx from mydata")
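
For reference, a self-contained version of the first approach (a sketch against 
the Spark 1.3-era HiveContext API; the record type and table name are made up):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object InsertIntoHiveDemo {
  case class Record(id: Int, value: String)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("insert-demo"))
    val hc = new HiveContext(sc)
    import hc.implicits._

    val existedRdd = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))
    // toDF() comes from hc.implicits._; insertInto appends to an existing Hive table.
    existedRdd.toDF().insertInto("hivetable")
    sc.stop()
  }
}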



-Original Message-
From: patcharee [mailto:patcharee.thong...@uni.no]
Sent: Tuesday, March 3, 2015 7:09 PM
To: user@spark.apache.org
Subject: insert Hive table with RDD

Hi,

How can I insert an existing hive table with an RDD containing my data?
Any examples?

Best,
Patcharee

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional 
commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: broken link on Spark Programming Guide

2015-04-07 Thread Sean Owen
I fixed this a while ago in master. It should go out with the next
release and next push of the site.

On Tue, Apr 7, 2015 at 4:32 PM, jonathangreenleaf
jonathangreenl...@gmail.com wrote:
 in the current Programming Guide:
 https://spark.apache.org/docs/1.3.0/programming-guide.html#actions

 under Actions, the Python link goes to:
 https://spark.apache.org/docs/1.3.0/api/python/pyspark.rdd.RDD-class.html
 which is 404

 which I think should be:
 https://spark.apache.org/docs/1.3.0/api/python/index.html#org.apache.spark.rdd.RDD

 Thanks - Jonathan



 --
 View this message in context: 
 http://apache-spark-user-list.1001560.n3.nabble.com/broken-link-on-Spark-Programming-Guide-tp22414.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[jira] [Updated] (SPARK-10700) Spark R Documentation not available

2015-09-18 Thread Dev Lakhani (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dev Lakhani updated SPARK-10700:

Description: 
Documentation

https://spark.apache.org/docs/latest/api/R/glm.html, referred to in

 https://spark.apache.org/docs/latest/sparkr.html, is not available.

I searched this JIRA site for sparkr.html SparkR Documentation and do not think 
anyone else has raised this.

  was:
Documentation https://spark.apache.org/docs/latest/sparkr.html  is not 
available.

I searched this JIRA site for sparkr.html SparkR Documentation and do not think 
any one else has raised this.


> Spark R Documentation not available
> ---
>
> Key: SPARK-10700
> URL: https://issues.apache.org/jira/browse/SPARK-10700
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.5.0
>Reporter: Dev Lakhani
>Priority: Minor
>
> Documentation
> https://spark.apache.org/docs/latest/api/R/glm.html, referred to in
>  https://spark.apache.org/docs/latest/sparkr.html, is not available.
> I searched this JIRA site for sparkr.html SparkR Documentation and do not 
> think anyone else has raised this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: Tracking / estimating job progress

2016-05-13 Thread Dood

On 5/13/2016 10:16 AM, Anthony May wrote:

http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker

Might be useful


How do you use it? You cannot instantiate the class - is the constructor 
private? Thanks!
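
The tracker is not constructed directly; an instance is obtained from a running 
SparkContext as sc.statusTracker. A rough progress estimate can then be computed 
from task counts (a sketch; the job-group name is illustrative and assumes the 
work was tagged with sc.setJobGroup first):

// Fraction of tasks completed across all stages of the jobs in a group.
def progressFor(sc: org.apache.spark.SparkContext, group: String): Double = {
  val tracker = sc.statusTracker
  val stages = tracker.getJobIdsForGroup(group)
    .flatMap(tracker.getJobInfo(_))      // Option[SparkJobInfo] per job id
    .flatMap(_.stageIds())               // stage ids of each job
    .flatMap(tracker.getStageInfo(_))    // Option[SparkStageInfo] per stage
  val total = stages.map(_.numTasks).sum
  val done = stages.map(_.numCompletedTasks).sum
  if (total == 0) 0.0 else done.toDouble / total
}

// Tag the job first, e.g. sc.setJobGroup("rest-job-42", "job launched via REST"),
// then poll progressFor(sc, "rest-job-42") from another thread.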




On Fri, 13 May 2016 at 11:11 Ted Yu <yuzhih...@gmail.com> wrote:


Have you looked
at core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala
?

Cheers

On Fri, May 13, 2016 at 10:05 AM, Dood@ODDO <oddodao...@gmail.com> wrote:

I provide a RESTful API interface from scalatra for launching
Spark jobs - part of the functionality is tracking these jobs.
What API is available to track the progress of a particular
spark application? How about estimating where in the total job
progress the job is?

Thanks!

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org





-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[jira] [Commented] (SPARK-37873) SQL Syntax links are broken

2022-01-13 Thread Alex Ott (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475993#comment-17475993
 ] 

Alex Ott commented on SPARK-37873:
--

If you click on any:
 * [DDL Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-ddl.html]
 * [DML Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-dml.html]
 * [Data Retrieval 
Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-qry.html]
 * [Auxiliary 
Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-aux.html]

it will show file not found (see image)

!Screenshot 2022-01-14 at 08.07.24.png!

> SQL Syntax links are broken
> ---
>
> Key: SPARK-37873
> URL: https://issues.apache.org/jira/browse/SPARK-37873
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.2.0
>Reporter: Alex Ott
>Priority: Major
> Attachments: Screenshot 2022-01-14 at 08.07.24.png
>
>
> SQL Syntax links at [https://spark.apache.org/docs/latest/sql-ref.html] are 
> broken



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[GitHub] [spark] yaooqinn opened a new pull request #36053: [SPARK-38778][INFRA][BUILD] Replace http with https for project url in pom

2022-04-03 Thread GitBox


yaooqinn opened a new pull request #36053:
URL: https://github.com/apache/spark/pull/36053


   
   
   
   
   ### What changes were proposed in this pull request?
   
   
   Change http://spark.apache.org/ to https://spark.apache.org/ in the 
project URL of all pom files.
   ### Why are the changes needed?
   
   
   Fix the home page shown on Maven Central: 
https://mvnrepository.com/artifact/org.apache.spark/spark-sql_2.13/3.2.1
   
   
    From

   License | Apache 2.0
   -- | --
   Categories | Hadoop Query Engines
   HomePage | http://spark.apache.org/
   Date | (Jan 26, 2022)

    To

   License | Apache 2.0
   -- | --
   Categories | Hadoop Query Engines
   HomePage | https://spark.apache.org/
   Date | (Jan 26, 2022)
   ### Does this PR introduce _any_ user-facing change?
   
   no
   
   
   ### How was this patch tested?
   
   
   Pass GHA.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



Re: groupBy gives non deterministic results

2014-09-10 Thread Ye Xianjin
Well, that's weird. I don't see this thread in my mailbox as sent to the user 
list. Maybe it's because I also subscribe to the incubator mailing list? I do 
see mails sent to the incubator list, and no one replies there; I thought that 
was because people no longer subscribe to the incubator list.

-- 
Ye Xianjin
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Thursday, September 11, 2014 at 12:12 AM, Davies Liu wrote:

 I think the mails to spark.incubator.apache.org 
 (http://spark.incubator.apache.org) will be forwarded to
 spark.apache.org (http://spark.apache.org).
 
 Here is the header of the first mail:
 
 from: redocpot julien19890...@gmail.com (mailto:julien19890...@gmail.com)
 to: u...@spark.incubator.apache.org (mailto:u...@spark.incubator.apache.org)
 date: Mon, Sep 8, 2014 at 7:29 AM
 subject: groupBy gives non deterministic results
 mailing list: user.spark.apache.org (http://user.spark.apache.org)
 mailed-by: spark.apache.org (http://spark.apache.org)
 
 I only subscribe to spark.apache.org (http://spark.apache.org), and I do see 
 all the mails from him.
 
 On Wed, Sep 10, 2014 at 6:29 AM, Ye Xianjin advance...@gmail.com 
 (mailto:advance...@gmail.com) wrote:
  | Do the two mailing lists share messages ?
  I don't think so. I didn't receive this message from the user list. I am
  not at Databricks, so I can't answer your other questions. Maybe Davies Liu
  dav...@databricks.com (mailto:dav...@databricks.com) can answer you?
  
  --
  Ye Xianjin
  Sent with Sparrow
  
  On Wednesday, September 10, 2014 at 9:05 PM, redocpot wrote:
  
  Hi, Xianjin
  
  I checked user@spark.apache.org (mailto:user@spark.apache.org), and found 
  my post there:
  http://mail-archives.apache.org/mod_mbox/spark-user/201409.mbox/browser
  
  I am using nabble to send this mail, which indicates that the mail will be
  sent from my email address to the u...@spark.incubator.apache.org 
  (mailto:u...@spark.incubator.apache.org) mailing
  list.
  
  Do the two mailing lists share messages ?
  
  Do we have a nabble interface for user@spark.apache.org 
  (mailto:user@spark.apache.org) mail list ?
  
  Thank you.
  
  
  
  
  --
  View this message in context:
  http://apache-spark-user-list.1001560.n3.nabble.com/groupBy-gives-non-deterministic-results-tp13698p13876.html
  Sent from the Apache Spark User List mailing list archive at Nabble.com 
  (http://Nabble.com).
  
  -
  To unsubscribe, e-mail: user-unsubscr...@spark.apache.org 
  (mailto:user-unsubscr...@spark.apache.org)
  For additional commands, e-mail: user-h...@spark.apache.org 
  (mailto:user-h...@spark.apache.org)
  
 
 
 




[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69384848
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

I'll investigate this.
The behavior should differ depending on whether `key` is a `Literal`.

> hive> select parse_url("http://spark/path?", "QUERY", "???");
FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '"???"': 
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute method 
public java.lang.String 
org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String)
  on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@6682e6a5 of class 
org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments 
{http://spark/path?:java.lang.String, QUERY:java.lang.String, 
???:java.lang.String} of size 3
>
> hive> select parse_url("http://spark/path?", "QUERY", name) from test;
OK
Failed with exception 
java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: Unable to 
execute method public java.lang.String 
org.apache.hadoop.hive.ql.udf.UDFParseUrl.evaluate(java.lang.String,java.lang.String,java.lang.String)
  on object org.apache.hadoop.hive.ql.udf.UDFParseUrl@2035d65b of class 
org.apache.hadoop.hive.ql.udf.UDFParseUrl with arguments 
{http://spark/path?:java.lang.String, QUERY:java.lang.String, 
???:java.lang.String} of size 3
Time taken: 0.039 seconds
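
For reference, a minimal sketch of the same behaviour through the SQL API, 
assuming a local SparkSession on a build that includes this patch; the expected 
values are the ones the tests above assert:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("parse-url").getOrCreate()
// Expected results taken from the checkParseUrl cases in the diff above.
spark.sql("SELECT parse_url('http://spark.apache.org/path?query=1', 'HOST')").show()           // spark.apache.org
spark.sql("SELECT parse_url('http://spark.apache.org/path?query=1', 'QUERY')").show()          // query=1
spark.sql("SELECT parse_url('http://spark.apache.org/path?query=1', 'QUERY', 'query')").show() // 1
```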


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69385614
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

Thank you for the nice investigation. Yes, Hive's validation seems too limited.
I think you can do better than Hive by supporting validation of a **Literal** 
`key`.
What do you think about that?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69383544
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

In Hive's case, it's also a `SemanticException`, not a raw 
`PatternSyntaxException`.
You may need to investigate Hive's `SemanticException` logic.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69383504
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
 ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

Hi, @janplus.
I thought about this a little more. Currently, this exception happens on the 
`Executor` side, which is not desirable. IMO, we had better surface it as an 
`AnalysisException`.
Could you add some simple validation logic for `key`?
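
A minimal, standalone sketch of such a check (the helper name and message are 
hypothetical, not the merged patch): compile the key-derived pattern once up 
front, so a bad key fails analysis instead of throwing on an executor:

```scala
// Hypothetical helper, not the merged patch. It assumes the expression builds
// a pattern of the shape "(&|^)<key>=([^&]*)" from its key argument.
import java.util.regex.{Pattern, PatternSyntaxException}

def validateQueryKey(key: String): Either[String, Pattern] =
  try {
    Right(Pattern.compile("(&|^)" + key + "=([^&]*)"))
  } catch {
    case e: PatternSyntaxException =>
      Left(s"invalid key '$key': ${e.getMessage}")
  }

// validateQueryKey("???") yields a Left, which an analysis-time check could
// report as a failure (and hence an AnalysisException) for a Literal key.
```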


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[spark-website] branch asf-site updated: Hotfix site links in sitemap.xml

2018-12-20 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 63d  Hotfix site links in sitemap.xml
63d is described below

commit 63db35119aba73cb28ababaa72ea609bb835
Author: Sean Owen 
AuthorDate: Thu Dec 20 03:46:01 2018 -0600

Hotfix site links in sitemap.xml
---
 site/sitemap.xml | 328 +++
 1 file changed, 164 insertions(+), 164 deletions(-)

diff --git a/site/sitemap.xml b/site/sitemap.xml
index c7f17db..80a4845 100644
--- a/site/sitemap.xml
+++ b/site/sitemap.xml
@@ -143,658 +143,658 @@
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-4-0.html
+  https://spark.apache.org/releases/spark-release-2-4-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-4-0-released.html
+  https://spark.apache.org/news/spark-2-4-0-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-3-2.html
+  https://spark.apache.org/releases/spark-release-2-3-2.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-3-2-released.html
+  https://spark.apache.org/news/spark-2-3-2-released.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-oct-2018-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-oct-2018-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-2-2.html
+  https://spark.apache.org/releases/spark-release-2-2-2.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-2-2-released.html
+  https://spark.apache.org/news/spark-2-2-2-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-3.html
+  https://spark.apache.org/releases/spark-release-2-1-3.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-3-released.html
+  https://spark.apache.org/news/spark-2-1-3-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-3-1.html
+  https://spark.apache.org/releases/spark-release-2-3-1.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-3-1-released.html
+  https://spark.apache.org/news/spark-2-3-1-released.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-june-2018-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-june-2018-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-3-0.html
+  https://spark.apache.org/releases/spark-release-2-3-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-3-0-released.html
+  https://spark.apache.org/news/spark-2-3-0-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-2-1.html
+  https://spark.apache.org/releases/spark-release-2-2-1.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-2-1-released.html
+  https://spark.apache.org/news/spark-2-2-1-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-2.html
+  https://spark.apache.org/releases/spark-release-2-1-2.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-2-released.html
+  https://spark.apache.org/news/spark-2-1-2-released.html
   weekly
 
 
-  http://localhost:4000/news/spark-summit-eu-2017-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-eu-2017-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-2-0.html
+  https://spark.apache.org/releases/spark-release-2-2-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-2-0-released.html
+  https://spark.apache.org/news/spark-2-2-0-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-1.html
+  https://spark.apache.org/releases/spark-release-2-1-1.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-1-released.html
+  https://spark.apache.org/news/spark-2-1-1-released.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-june-2017-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-june-2017-agenda-posted.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-east-2017-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-east-2017-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-0.html
+  https://spark.apache.org/releases/spark-release-2-1-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-0-released.html
+  https://spark.apache.org/news/spark-2-1-0-released.html
   weekly
 
 
-  
http://localhost:4000/news/spark-wins-cloudsort-100tb-benchmark.html
+  
https://spark.apache.org/news/spark-wins-cloudsort-100tb-benchmark.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-0-2.html
+  https://spark.apache.org/releases/spark-release-2-0-2.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-0-2-released.html
+  https://spark.apache.org/news/spark-2-0-2-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-1-6-3.html
+  https

spark-website git commit: Update Spark 2.4 release window (and fix Spark URLs in sitemap)

2018-07-29 Thread srowen
Repository: spark-website
Updated Branches:
  refs/heads/asf-site d86cffd19 -> 50b4660ce


Update Spark 2.4 release window (and fix Spark URLs in sitemap)


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/50b4660c
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/50b4660c
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/50b4660c

Branch: refs/heads/asf-site
Commit: 50b4660ce81f04fe34b995f3e9f0a74e336f482c
Parents: d86cffd
Author: Sean Owen 
Authored: Sun Jul 29 09:16:53 2018 -0500
Committer: Sean Owen 
Committed: Sun Jul 29 09:16:53 2018 -0500

--
 site/mailing-lists.html |   2 +-
 site/sitemap.xml| 318 +++
 site/versioning-policy.html |   6 +-
 versioning-policy.md|   6 +-
 4 files changed, 166 insertions(+), 166 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark-website/blob/50b4660c/site/mailing-lists.html
--
diff --git a/site/mailing-lists.html b/site/mailing-lists.html
index d447046..f7ae56f 100644
--- a/site/mailing-lists.html
+++ b/site/mailing-lists.html
@@ -12,7 +12,7 @@
 
   
 
-http://localhost:4000/community.html; />
+https://spark.apache.org/community.html; />
   
 
   

http://git-wip-us.apache.org/repos/asf/spark-website/blob/50b4660c/site/sitemap.xml
--
diff --git a/site/sitemap.xml b/site/sitemap.xml
index dd69976..87ca6f6 100644
--- a/site/sitemap.xml
+++ b/site/sitemap.xml
@@ -139,641 +139,641 @@
 
 
 
-  
http://localhost:4000/news/spark-summit-oct-2018-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-oct-2018-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-2-2.html
+  https://spark.apache.org/releases/spark-release-2-2-2.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-2-2-released.html
+  https://spark.apache.org/news/spark-2-2-2-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-3.html
+  https://spark.apache.org/releases/spark-release-2-1-3.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-3-released.html
+  https://spark.apache.org/news/spark-2-1-3-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-3-1.html
+  https://spark.apache.org/releases/spark-release-2-3-1.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-3-1-released.html
+  https://spark.apache.org/news/spark-2-3-1-released.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-june-2018-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-june-2018-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-3-0.html
+  https://spark.apache.org/releases/spark-release-2-3-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-3-0-released.html
+  https://spark.apache.org/news/spark-2-3-0-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-2-1.html
+  https://spark.apache.org/releases/spark-release-2-2-1.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-2-1-released.html
+  https://spark.apache.org/news/spark-2-2-1-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-2.html
+  https://spark.apache.org/releases/spark-release-2-1-2.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-2-released.html
+  https://spark.apache.org/news/spark-2-1-2-released.html
   weekly
 
 
-  http://localhost:4000/news/spark-summit-eu-2017-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-eu-2017-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-2-0.html
+  https://spark.apache.org/releases/spark-release-2-2-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-2-0-released.html
+  https://spark.apache.org/news/spark-2-2-0-released.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-1.html
+  https://spark.apache.org/releases/spark-release-2-1-1.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-1-released.html
+  https://spark.apache.org/news/spark-2-1-1-released.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-june-2017-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-june-2017-agenda-posted.html
   weekly
 
 
-  
http://localhost:4000/news/spark-summit-east-2017-agenda-posted.html
+  
https://spark.apache.org/news/spark-summit-east-2017-agenda-posted.html
   weekly
 
 
-  http://localhost:4000/releases/spark-release-2-1-0.html
+  https://spark.apache.org/releases/spark-release-2-1-0.html
   weekly
 
 
-  http://localhost:4000/news/spark-2-1-0-released.html
+  https://spark.apache.org/news/spark-2-1-0-released.html

Re: Welcoming three new committers

2015-02-03 Thread Chao Chen

Congratulations guys, well done!

On 2015-02-04 at 9:26 AM, Nan Zhu wrote:

Congratulations!

--
Nan Zhu
http://codingcat.me


On Tuesday, February 3, 2015 at 8:08 PM, Xuefeng Wu wrote:


Congratulations! Well done.
  
Yours respectfully, Xuefeng Wu (吴雪峰)
  

On February 4, 2015, at 6:34 AM, Matei Zaharia matei.zaha...@gmail.com 
(mailto:matei.zaha...@gmail.com) wrote:
  
Hi all,
  
The PMC recently voted to add three new committers: Cheng Lian, Joseph Bradley and Sean Owen. All three have been major contributors to Spark in the past year: Cheng on Spark SQL, Joseph on MLlib, and Sean on ML and many pieces throughout Spark Core. Join me in welcoming them as committers!
  
Matei

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org 
(mailto:dev-unsubscr...@spark.apache.org)
For additional commands, e-mail: dev-h...@spark.apache.org 
(mailto:dev-h...@spark.apache.org)
  
  
  
-

To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org 
(mailto:dev-unsubscr...@spark.apache.org)
For additional commands, e-mail: dev-h...@spark.apache.org 
(mailto:dev-h...@spark.apache.org)
  
  






-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: Welcoming three new committers

2015-02-03 Thread Denny Lee
Awesome stuff - congratulations! :)

On Tue Feb 03 2015 at 5:34:06 PM Chao Chen crazy...@gmail.com wrote:

 Congratulations guys, well done!

  On 2015-02-04 at 9:26 AM, Nan Zhu wrote:
  Congratulations!
 
  --
  Nan Zhu
  http://codingcat.me
 
 
  On Tuesday, February 3, 2015 at 8:08 PM, Xuefeng Wu wrote:
 
  Congratulations! Well done.
 
  Yours respectfully, Xuefeng Wu (吴雪峰)
 
  On February 4, 2015, at 6:34 AM, Matei Zaharia matei.zaha...@gmail.com
 (mailto:matei.zaha...@gmail.com) wrote:
 
  Hi all,
 
  The PMC recently voted to add three new committers: Cheng Lian, Joseph
 Bradley and Sean Owen. All three have been major contributors to Spark in
 the past year: Cheng on Spark SQL, Joseph on MLlib, and Sean on ML and many
 pieces throughout Spark Core. Join me in welcoming them as committers!
 
  Matei
  -
  To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto:
 dev-unsubscr...@spark.apache.org)
  For additional commands, e-mail: dev-h...@spark.apache.org (mailto:
 dev-h...@spark.apache.org)
 
 
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto:
 dev-unsubscr...@spark.apache.org)
  For additional commands, e-mail: dev-h...@spark.apache.org (mailto:
 dev-h...@spark.apache.org)
 
 
 
 


 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org




Re: Welcoming three new committers

2015-02-03 Thread Debasish Das
Congratulations !

Keep helping the community :-)

On Tue, Feb 3, 2015 at 5:34 PM, Denny Lee denny.g@gmail.com wrote:

 Awesome stuff - congratulations! :)

 On Tue Feb 03 2015 at 5:34:06 PM Chao Chen crazy...@gmail.com wrote:

  Congratulations guys, well done!
 
   On 2015-02-04 at 9:26 AM, Nan Zhu wrote:
   Congratulations!
  
   --
   Nan Zhu
   http://codingcat.me
  
  
   On Tuesday, February 3, 2015 at 8:08 PM, Xuefeng Wu wrote:
  
   Congratulations! Well done.
  
   Yours respectfully, Xuefeng Wu (吴雪峰)
  
   On February 4, 2015, at 6:34 AM, Matei Zaharia matei.zaha...@gmail.com
  (mailto:matei.zaha...@gmail.com) wrote:
  
   Hi all,
  
   The PMC recently voted to add three new committers: Cheng Lian,
 Joseph
  Bradley and Sean Owen. All three have been major contributors to Spark in
  the past year: Cheng on Spark SQL, Joseph on MLlib, and Sean on ML and
 many
  pieces throughout Spark Core. Join me in welcoming them as committers!
  
   Matei
   -
   To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto:
  dev-unsubscr...@spark.apache.org)
   For additional commands, e-mail: dev-h...@spark.apache.org (mailto:
  dev-h...@spark.apache.org)
  
  
  
   -
   To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto:
  dev-unsubscr...@spark.apache.org)
   For additional commands, e-mail: dev-h...@spark.apache.org (mailto:
  dev-h...@spark.apache.org)
  
  
  
  
 
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
  For additional commands, e-mail: dev-h...@spark.apache.org
 
 



Re: Upgrade to Spark 1.2.1 using Guava

2015-02-27 Thread Sean Owen
This seems like a job for userClassPathFirst. Or could be. It's
definitely an issue of visibility between where the serializer is and
where the user class is.

At the top, Pat, you said that you didn't try this. Why not?
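
For a standalone app the same settings can go straight into the SparkConf; a 
minimal sketch, with an illustrative worker-local path and Guava version:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("guava-classpath-demo")
  // Prepend the jar to the executor classpath; the file must exist at this
  // path on every worker (path and version here are illustrative).
  .set("spark.executor.extraClassPath", "/opt/libs/guava-14.0.1.jar")
  // Alternative: let user classes win over Spark's copies. The property is
  // spark.files.userClassPathFirst on 1.2.x (spark.executor.userClassPathFirst
  // from 1.3 on).
  // .set("spark.files.userClassPathFirst", "true")
val sc = new SparkContext(conf)
```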

On Fri, Feb 27, 2015 at 10:11 PM, Pat Ferrel p...@occamsmachete.com wrote:
 I’ll try to find a Jira for it. I hope a fix is in 1.3


 On Feb 27, 2015, at 1:59 PM, Pat Ferrel p...@occamsmachete.com wrote:

 Thanks! that worked.

 On Feb 27, 2015, at 1:50 PM, Pat Ferrel p...@occamsmachete.com wrote:

 I don’t use spark-submit I have a standalone app.

 So I guess you want me to add that key/value to the conf in my code and make 
 sure it exists on workers.


 On Feb 27, 2015, at 1:47 PM, Marcelo Vanzin van...@cloudera.com wrote:

 On Fri, Feb 27, 2015 at 1:42 PM, Pat Ferrel p...@occamsmachete.com wrote:
 I changed in the spark master conf, which is also the only worker. I added a 
 path to the jar that has guava in it. Still can’t find the class.

 Sorry, I'm still confused about what config you're changing. I'm
 suggesting using:

 spark-submit --conf spark.executor.extraClassPath=/guava.jar blah


 --
 Marcelo

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org



 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org



 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org



 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[jira] [Updated] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation

2015-05-16 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-7671:
-
Assignee: Favio Vázquez

 Fix wrong URLs in MLlib Data Types Documentation
 

 Key: SPARK-7671
 URL: https://issues.apache.org/jira/browse/SPARK-7671
 Project: Spark
  Issue Type: Documentation
  Components: Documentation, MLlib
 Environment: Ubuntu 14.04. Apache Mesos in cluster mode with HDFS 
 from cloudera 2.6.0-cdh5.4.0.
Reporter: Favio Vázquez
Assignee: Favio Vázquez
Priority: Trivial
  Labels: Documentation,, Fix, MLlib,, URL
 Fix For: 1.4.0


 There is a mistake in the URL of Matrices in the MLlib Data Types 
 documentation (Local matrix scala section), the URL points to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices
  which is a mistake, since Matrices is an object that implements factory 
 methods for Matrix that does not have a companion class. The correct link 
 should point to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices$
 There is another mistake, in the Local Vector section in Scala, Java and 
 Python
 In the Scala section the URL of Vectors points to the trait Vector 
 (https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vector)
  and not to the factory methods implemented in Vectors. 
 The correct link should be: 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$
 In the Java section the URL of Vectors points to the Interface Vector 
 (https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vector.html)
  and not to the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vectors.html
 In the Python section the URL of Vectors points to the class Vector 
 (https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vector)
  and not the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vectors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation

2015-05-15 Thread Joseph K. Bradley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph K. Bradley updated SPARK-7671:
-
Component/s: MLlib

 Fix wrong URLs in MLlib Data Types Documentation
 

 Key: SPARK-7671
 URL: https://issues.apache.org/jira/browse/SPARK-7671
 Project: Spark
  Issue Type: Documentation
  Components: Documentation, MLlib
 Environment: Ubuntu 14.04. Apache Mesos in cluster mode with HDFS 
 from cloudera 2.6.0-cdh5.4.0.
Reporter: Favio Vázquez
Priority: Trivial
  Labels: Documentation,, Fix, MLlib,, URL

 There is a mistake in the URL of Matrices in the MLlib Data Types 
 documentation (Local matrix scala section), the URL points to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices
  which is a mistake, since Matrices is an object that implements factory 
 methods for Matrix that does not have a companion class. The correct link 
 should point to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices$
 There is another mistake, in the Local Vector section in Scala, Java and 
 Python
 In the Scala section the URL of Vectors points to the trait Vector 
 (https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vector)
  and not to the factory methods implemented in Vectors. 
 The correct link should be: 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$
 In the Java section the URL of Vectors points to the Interface Vector 
 (https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vector.html)
  and not to the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vectors.html
 In the Python section the URL of Vectors points to the class Vector 
 (https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vector)
  and not the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vectors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation

2015-05-15 Thread JIRA
Favio Vázquez created SPARK-7671:


 Summary: Fix wrong URLs in MLlib Data Types Documentation
 Key: SPARK-7671
 URL: https://issues.apache.org/jira/browse/SPARK-7671
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
 Environment: Ubuntu 14.04. Apache Mesos in cluster mode with HDFS from 
cloudera 2.6.0-cdh5.4.0.
Reporter: Favio Vázquez
Priority: Trivial


There is a mistake in the URL of Matrices in the MLlib Data Types documentation 
(Local matrix scala section), the URL points to 
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices
 which is a mistake, since Matrices is an object that implements factory 
methods for Matrix that does not have a companion class. The correct link 
should point to 
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices$

There is another mistake, in the Local Vector section in Scala, Java and Python

In the Scala section the URL of Vectors points to the trait Vector 
(https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vector)
 and not to the factory methods implemented in Vectors. 

The correct link should be: 
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$

In the Java section the URL of Vectors points to the Interface Vector 
(https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vector.html)
 and not to the Class Vectors

The correct link should be:
https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vectors.html

In the Python section the URL of Vectors points to the class Vector 
(https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vector)
 and not the Class Vectors

The correct link should be:
https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vectors
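
For context, a minimal sketch of why the factory objects (the $-suffixed 
scaladoc pages) are the right link targets: user code calls Vectors and 
Matrices, while Vector and Matrix are the types they return:

```scala
import org.apache.spark.mllib.linalg.{Matrices, Vectors}

val dense  = Vectors.dense(1.0, 0.0, 3.0)                     // factory method on the Vectors object
val sparse = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))  // size 3, nonzeros at indices 0 and 2
val mat    = Matrices.dense(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0)) // 2x3, column-major
```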




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation

2015-05-15 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-7671:
---

Assignee: (was: Apache Spark)

 Fix wrong URLs in MLlib Data Types Documentation
 

 Key: SPARK-7671
 URL: https://issues.apache.org/jira/browse/SPARK-7671
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
 Environment: Ubuntu 14.04. Apache Mesos in cluster mode with HDFS 
 from cloudera 2.6.0-cdh5.4.0.
Reporter: Favio Vázquez
Priority: Trivial
  Labels: Documentation,, Fix, MLlib,, URL

 There is a mistake in the URL of Matrices in the MLlib Data Types 
 documentation (Local matrix scala section), the URL points to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices
  which is a mistake, since Matrices is an object that implements factory 
 methods for Matrix that does not have a companion class. The correct link 
 should point to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices$
 There is another mistake, in the Local Vector section in Scala, Java and 
 Python
 In the Scala section the URL of Vectors points to the trait Vector 
 (https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vector)
  and not to the factory methods implemented in Vectors. 
 The correct link should be: 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$
 In the Java section the URL of Vectors points to the Interface Vector 
 (https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vector.html)
  and not to the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vectors.html
 In the Python section the URL of Vectors points to the class Vector 
 (https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vector)
  and not the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vectors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation

2015-05-15 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-7671:
---

Assignee: Apache Spark

 Fix wrong URLs in MLlib Data Types Documentation
 

 Key: SPARK-7671
 URL: https://issues.apache.org/jira/browse/SPARK-7671
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
 Environment: Ubuntu 14.04. Apache Mesos in cluster mode with HDFS 
 from cloudera 2.6.0-cdh5.4.0.
Reporter: Favio Vázquez
Assignee: Apache Spark
Priority: Trivial
  Labels: Documentation,, Fix, MLlib,, URL

 There is a mistake in the URL of Matrices in the MLlib Data Types 
 documentation (Local matrix scala section), the URL points to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices
  which is a mistake, since Matrices is an object that implements factory 
 methods for Matrix that does not have a companion class. The correct link 
 should point to 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices$
 There is another mistake, in the Local Vector section in Scala, Java and 
 Python
 In the Scala section the URL of Vectors points to the trait Vector 
 (https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vector)
  and not to the factory methods implemented in Vectors. 
 The correct link should be: 
 https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$
 In the Java section the URL of Vectors points to the Interface Vector 
 (https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vector.html)
  and not to the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/linalg/Vectors.html
 In the Python section the URL of Vectors points to the class Vector 
 (https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vector)
  and not the Class Vectors
 The correct link should be:
 https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.Vectors



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69348238
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
 ---
@@ -652,6 +656,129 @@ case class StringRPad(str: Expression, len: Expression, pad: Expression)
   override def prettyName: String = "rpad"
 }
 
+object ParseUrl {
+  private val HOST = UTF8String.fromString("HOST")
+  private val PATH = UTF8String.fromString("PATH")
+  private val QUERY = UTF8String.fromString("QUERY")
+  private val REF = UTF8String.fromString("REF")
+  private val PROTOCOL = UTF8String.fromString("PROTOCOL")
+  private val FILE = UTF8String.fromString("FILE")
+  private val AUTHORITY = UTF8String.fromString("AUTHORITY")
+  private val USERINFO = UTF8String.fromString("USERINFO")
+  private val REGEXPREFIX = "(&|^)"
+  private val REGEXSUBFIX = "=([^&]*)"
+}
+
+/**
+ * Extracts a part from a URL
+ */
+@ExpressionDescription(
+  usage = "_FUNC_(url, partToExtract[, key]) - extracts a part from a URL",
+  extended = """Parts: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO.
+    Key specifies which query to extract.
+    Examples:
+      > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'HOST')\n 'spark.apache.org'
+      > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'QUERY')\n 'query=1'
+      > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'QUERY', 'query')\n '1'""")
--- End diff --

Maybe,
```
  > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'HOST')
  'spark.apache.org'
  > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'QUERY')
  'query=1'
  > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'QUERY', 
'query')
  '1'""")
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-----
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[jira] [Created] (SPARK-22631) Consolidate all configuration properties into one page

2017-11-28 Thread Andreas Maier (JIRA)
Andreas Maier created SPARK-22631:
-

 Summary: Consolidate all configuration properties into one page
 Key: SPARK-22631
 URL: https://issues.apache.org/jira/browse/SPARK-22631
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 2.2.0
Reporter: Andreas Maier


The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
impression that all configuration properties of Spark are described on this 
page. Unfortunately, this is not true. The descriptions of important properties 
are spread throughout the documentation. The following pages list properties 
that are not described on the configuration page: 

https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts

As a reader of the documentation, I would like a single central webpage 
describing all Spark configuration properties. Alternatively, it would be nice 
to at least add links from the configuration page to the other pages of the 
documentation where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-22630) Consolidate all configuration properties into one page

2017-11-28 Thread Andreas Maier (JIRA)
Andreas Maier created SPARK-22630:
-

 Summary: Consolidate all configuration properties into one page
 Key: SPARK-22630
 URL: https://issues.apache.org/jira/browse/SPARK-22630
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 2.2.0
Reporter: Andreas Maier


The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
impression that all configuration properties of Spark are described on this 
page. Unfortunately, this is not true. The descriptions of important properties 
are spread throughout the documentation. The following pages list properties 
that are not described on the configuration page: 

https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts

As a reader of the documentation, I would like a single central webpage 
describing all Spark configuration properties. Alternatively, it would be nice 
to at least add links from the configuration page to the other pages of the 
documentation where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-22630) Consolidate all configuration properties into one page

2017-11-28 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268880#comment-16268880
 ] 

Sean Owen commented on SPARK-22630:
---

The flip-side is that you end up duplicating documentation. They're not only in 
the source code, but mentioned in module-specific pages, and now possibly again 
on a single configuration page.

It's at least useful to advertise that main page as a list of "important 
options" only and link to other relevant docs. It may be useful to repeat some 
important configs in the main doc that are obviously missing (which ones?). It 
may not be a goal to list every config in this main doc as well.

> Consolidate all configuration properties into one page
> --
>
> Key: SPARK-22630
> URL: https://issues.apache.org/jira/browse/SPARK-22630
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.2.0
>Reporter: Andreas Maier
>
> The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
> impression as if all configuration properties of Spark are described on this 
> page. Unfortunately this is not true. The description of important properties 
> is spread through the documentation. The following pages list properties, 
> which are not described on the configuration page: 
> https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
> https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
> https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
> https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
> https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
> https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
> https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts
> As a reader of the documentation I would like to have single central webpage 
> describing all Spark configuration properties. Alternatively it would be nice 
> to at least add links from the configuration page to the other pages of the 
> documentation, where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-22631) Consolidate all configuration properties into one page

2017-11-28 Thread Marco Gaido (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-22631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marco Gaido resolved SPARK-22631.
-
Resolution: Duplicate

> Consolidate all configuration properties into one page
> --
>
> Key: SPARK-22631
> URL: https://issues.apache.org/jira/browse/SPARK-22631
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.2.0
>Reporter: Andreas Maier
>
> The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
> impression as if all configuration properties of Spark are described on this 
> page. Unfortunately this is not true. The description of important properties 
> is spread through the documentation. The following pages list properties, 
> which are not described on the configuration page: 
> https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
> https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
> https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
> https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
> https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
> https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
> https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts
> As a reader of the documentation I would like to have single central webpage 
> describing all Spark configuration properties. Alternatively it would be nice 
> to at least add links from the configuration page to the other pages of the 
> documentation, where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-22630) Consolidate all configuration properties into one page

2017-11-29 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272033#comment-16272033
 ] 

Hyukjin Kwon commented on SPARK-22630:
--

+1 for ^.

> Consolidate all configuration properties into one page
> --
>
> Key: SPARK-22630
> URL: https://issues.apache.org/jira/browse/SPARK-22630
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.2.0
>Reporter: Andreas Maier
>
> The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
> impression that all configuration properties of Spark are described on this 
> page. Unfortunately, this is not true. The description of important properties 
> is spread throughout the documentation. The following pages list properties 
> that are not described on the configuration page: 
> https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
> https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
> https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
> https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
> https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
> https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
> https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts
> As a reader of the documentation, I would like to have a single central webpage 
> describing all Spark configuration properties. Alternatively, it would be nice 
> to at least add links from the configuration page to the other pages of the 
> documentation where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-22630) Consolidate all configuration properties into one page

2019-05-20 Thread Hyukjin Kwon (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-22630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-22630:
-
Labels: bulk-closed  (was: )

> Consolidate all configuration properties into one page
> --
>
> Key: SPARK-22630
> URL: https://issues.apache.org/jira/browse/SPARK-22630
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.2.0
>Reporter: Andreas Maier
>Priority: Major
>  Labels: bulk-closed
>
> The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
> impression that all configuration properties of Spark are described on this 
> page. Unfortunately, this is not true. The description of important properties 
> is spread throughout the documentation. The following pages list properties 
> that are not described on the configuration page: 
> https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
> https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
> https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
> https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
> https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
> https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
> https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts
> As a reader of the documentation, I would like to have a single central webpage 
> describing all Spark configuration properties. Alternatively, it would be nice 
> to at least add links from the configuration page to the other pages of the 
> documentation where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-22630) Consolidate all configuration properties into one page

2019-05-20 Thread Hyukjin Kwon (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-22630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-22630.
--
Resolution: Incomplete

> Consolidate all configuration properties into one page
> --
>
> Key: SPARK-22630
> URL: https://issues.apache.org/jira/browse/SPARK-22630
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.2.0
>Reporter: Andreas Maier
>Priority: Major
>  Labels: bulk-closed
>
> The page https://spark.apache.org/docs/2.2.0/configuration.html gives the 
> impression that all configuration properties of Spark are described on this 
> page. Unfortunately, this is not true. The description of important properties 
> is spread throughout the documentation. The following pages list properties 
> that are not described on the configuration page: 
> https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#performance-tuning
> https://spark.apache.org/docs/2.2.0/monitoring.html#spark-configuration-options
> https://spark.apache.org/docs/2.2.0/security.html#ssl-configuration
> https://spark.apache.org/docs/2.2.0/sparkr.html#starting-up-from-rstudio
> https://spark.apache.org/docs/2.2.0/running-on-yarn.html#spark-properties
> https://spark.apache.org/docs/2.2.0/running-on-mesos.html#configuration
> https://spark.apache.org/docs/2.2.0/spark-standalone.html#cluster-launch-scripts
> As a reader of the documentation, I would like to have a single central webpage 
> describing all Spark configuration properties. Alternatively, it would be nice 
> to at least add links from the configuration page to the other pages of the 
> documentation where configuration properties are described. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-45208) Kubernetes Configuration in Spark Community Website doesn't have horizontal scrollbar

2023-09-19 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-45208:
--
Description: 
I found a recent issue with the official Spark documentation on the website. 
Specifically, the right-hand side of the Kubernetes configuration lists is not 
visible, and the doc doesn't have a horizontal scrollbar.

 
- [https://spark.apache.org/docs/3.5.0/running-on-kubernetes.html#configuration]
- [https://spark.apache.org/docs/3.4.1/running-on-kubernetes.html#configuration]

Wide tables are broken in the same way.

- https://spark.apache.org/docs/latest/spark-standalone.html

  was:
I found a recent issue with the official Spark documentation on the website. 
Specifically, the right-hand side of the Kubernetes configuration lists is not 
visible, and the doc doesn't have a horizontal scrollbar.

 
- [https://spark.apache.org/docs/3.5.0/running-on-kubernetes.html#configuration]
- [https://spark.apache.org/docs/3.4.1/running-on-kubernetes.html#configuration]


> Kubernetes Configuration in Spark Community Website doesn't have horizontal 
> scrollbar
> -
>
> Key: SPARK-45208
> URL: https://issues.apache.org/jira/browse/SPARK-45208
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.5.0
>Reporter: Qian Sun
>Priority: Major
>
> I found a recent issue with the official Spark documentation on the website. 
> Specifically, the right-hand side of the Kubernetes configuration lists is 
> not visible, and the doc doesn't have a horizontal scrollbar.
>  
> - 
> [https://spark.apache.org/docs/3.5.0/running-on-kubernetes.html#configuration]
> - 
> [https://spark.apache.org/docs/3.4.1/running-on-kubernetes.html#configuration]
> Wide tables are broken in the same way.
> - https://spark.apache.org/docs/latest/spark-standalone.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-44820) Switch languages consistently across docs for all code snippets

2023-08-16 Thread Ruifeng Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-44820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17754892#comment-17754892
 ] 

Ruifeng Zheng edited comment on SPARK-44820 at 8/16/23 6:27 AM:


update:

this issue has existed since 3.1.1 (note that we don't have an official 3.1.0, 
since 3.1.0 was a mistake: https://spark.apache.org/news/index.html)

3.0.3 works well: 
https://spark.apache.org/docs/3.0.3/structured-streaming-programming-guide.html

3.1.1 was broken: 
https://spark.apache.org/docs/3.1.1/structured-streaming-programming-guide.html




was (Author: podongfeng):
update:

this issue emerged since 3.1.1 (nit we don't have an official 3.1.0, since 
3.1.0 was a mistake https://spark.apache.org/news/index.html)

3.0.3 works well: 
https://spark.apache.org/docs/3.0.3/structured-streaming-programming-guide.html

3.1.1 was broken: 
https://spark.apache.org/docs/3.1.1/structured-streaming-programming-guide.html



> Switch languages consistently across docs for all code snippets
> ---
>
> Key: SPARK-44820
> URL: https://issues.apache.org/jira/browse/SPARK-44820
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 3.4.1, 3.5.0
>Reporter: Allison Wang
>Priority: Major
>
> When a user chooses a different language for a code snippet, all code 
> snippets on that page should switch to the chosen language. This was the 
> behavior in, for example, the Spark 2.0 docs: 
> [https://spark.apache.org/docs/2.0.0/structured-streaming-programming-guide.html]
> But it is broken in later docs, for example the Spark 3.4.1 docs: 
> [https://spark.apache.org/docs/latest/quick-start.html]
> We should fix this regression and possibly add test cases to prevent future 
> ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-40322) Fix all dead links

2022-09-03 Thread Yuming Wang (Jira)
Yuming Wang created SPARK-40322:
---

 Summary: Fix all dead links
 Key: SPARK-40322
 URL: https://issues.apache.org/jira/browse/SPARK-40322
 Project: Spark
  Issue Type: Bug
  Components: Documentation
Affects Versions: 3.4.0
Reporter: Yuming Wang


 

https://www.deadlinkchecker.com/website-dead-link-checker.asp
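
Such a sweep is also easy to script. A minimal sketch in Scala (an assumption for 
illustration, not the tool actually used above) that prints one HTTP status per 
URL, with -1 for DNS failures and timeouts as in the table below; it requires 
Java 11+ for java.net.http:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.time.Duration
import scala.util.{Failure, Success, Try}

object DeadLinkCheck {
  // Follow redirect chains (301/302) the way a browser would.
  private val client = HttpClient.newBuilder()
    .followRedirects(HttpClient.Redirect.NORMAL)
    .connectTimeout(Duration.ofSeconds(10))
    .build()

  // One HEAD request per URL: the status code for reachable servers,
  // -1 plus the exception name for DNS failures and timeouts.
  def status(url: String): String = {
    val request = HttpRequest.newBuilder(URI.create(url))
      .method("HEAD", HttpRequest.BodyPublishers.noBody())
      .timeout(Duration.ofSeconds(15))
      .build()
    Try(client.send(request, HttpResponse.BodyHandlers.discarding())) match {
      case Success(resp) => resp.statusCode().toString
      case Failure(e)    => s"-1 ${e.getClass.getSimpleName}"
    }
  }

  def main(args: Array[String]): Unit =
    args.foreach(url => println(s"${status(url)}\t$url"))
}

Note that some servers reject HEAD requests, so a production checker would fall 
back to GET; the sketch keeps only the reporting shape of the results below.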

 

 
||Status||URL||Source link text||
|-1 Not found: The server name or address could not be 
resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
 Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
|404 Not 
Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
|-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
University|https://spark.apache.org/powered-by.html]|
|404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
Networks|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://www.nubetech.co/]|[Nube 
Technologies|https://spark.apache.org/powered-by.html]|
|-1 Timeout|[http://ooyala.com/]|[Ooyala, 
Inc.|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
 for Fast Queries|https://spark.apache.org/powered-by.html]|
|-1 Not found: The server name or address could not be 
resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
America|https://spark.apache.org/powered-by.html]|
|-1 
Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
Camp 2 [302 from 
http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
from 
http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
|404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
|500 Internal Server 
Error|[https://www.packtpub.com/product/spark-cookbook/9781783987061]|[Spark 
Cookbook [301 from 
https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]|
|500 Internal Server 
Error|[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]|[Apache
 Spark Graph Processing [301 from 
https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]|
|500 Internal Server 
Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
|500 Internal Server 
Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
|500 Internal Server 
Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
|500 Internal Server 
Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
 Summit Europe|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
|-1 Not found: The server name or address could not be 
resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing with 
Spark|https://spark.apache.org/news/]|
|-1 Not found: The server name or address could not be 
resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring Spark's 
logs|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strata.oreilly.com/2012/08/seven-reasons-why-i-like-spark.html]|[Spark|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strata.oreilly.com/2012/11/shark-real-time-queries-and-analytics-for-big-data.html]|[Shark|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strata.oreilly.com/2012/10/spark-0-6-improves-performance-and-accessibility.html]|[Spark
 0.6 release|https://spark.apache.org/news/]|
|404 Not 
Found|[http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/]|[DataInformed|https://spark.apache.org/news/]|
|-1 
Timeout|[http://strataconf.com/strata2013/public/schedule/detail/27438]|[introduction
 to Spark

[jira] [Updated] (SPARK-40322) Fix all dead links

2022-09-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-40322:
--
Fix Version/s: 3.3.2
   (was: 3.3.1)

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 3.3.2
>
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's logs|https://spark.apache.org/n

[jira] [Resolved] (SPARK-40322) Fix all dead links

2022-09-24 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-40322.
--
Fix Version/s: 3.3.1
   Resolution: Fixed

Issue resolved by pull request 37984
[https://github.com/apache/spark/pull/37984]

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 3.3.1
>
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifin

[jira] [Assigned] (SPARK-40322) Fix all dead links

2022-09-24 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-40322:


Assignee: Yuming Wang

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's logs|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strata.oreilly.com/2012/08

[jira] [Commented] (SPARK-40322) Fix all dead links

2022-09-24 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608955#comment-17608955
 ] 

Apache Spark commented on SPARK-40322:
--

User 'wangyum' has created a pull request for this issue:
https://github.com/apache/spark/pull/37984

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's logs|https://spark.apa

[jira] [Assigned] (SPARK-40322) Fix all dead links

2022-09-23 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-40322:


Assignee: Apache Spark

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Assignee: Apache Spark
>Priority: Major
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's logs|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strata.oreilly.com/2012/08

[jira] [Commented] (SPARK-40322) Fix all dead links

2022-09-23 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608617#comment-17608617
 ] 

Apache Spark commented on SPARK-40322:
--

User 'wangyum' has created a pull request for this issue:
https://github.com/apache/spark/pull/37980

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's logs|https://spark.apa

[jira] [Updated] (SPARK-40322) Fix all dead links

2022-10-16 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated SPARK-40322:

Fix Version/s: 3.4.0
   3.3.1
   (was: 3.3.2)

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 3.4.0, 3.3.1
>
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's

[jira] [Commented] (SPARK-40322) Fix all dead links

2022-09-05 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600367#comment-17600367
 ] 

Yang Jie commented on SPARK-40322:
--

The links related to `Spark Summit` have now been redirected to 
https://www.databricks.com/dataaisummit/. Is it better to keep the links, or to 
remove the links and only keep the text?

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
>
>  
> https://www.deadlinkchecker.com/website-dead-link-checker.asp
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://www.packtpub.com/product/spark-cookbook/9781783987061]|[Spark 
> Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]|[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/pos

[jira] [Commented] (SPARK-40322) Fix all dead links

2022-09-05 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600370#comment-17600370
 ] 

Yang Jie commented on SPARK-40322:
--

Many historical links on the news page are no longer accessible

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
>
>  
> https://www.deadlinkchecker.com/website-dead-link-checker.asp
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://www.packtpub.com/product/spark-cookbook/9781783987061]|[Spark 
> Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]|[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/logging-post/]|[Configuring 
> Spark's logs|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strata.oreilly.com

[jira] [Commented] (SPARK-40322) Fix all dead links

2022-09-05 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600384#comment-17600384
 ] 

Yang Jie commented on SPARK-40322:
--

[https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]
and
[https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]
are not dead links.

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
>
>  
> https://www.deadlinkchecker.com/website-dead-link-checker.asp
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://www.packtpub.com/product/spark-cookbook/9781783987061]|[Spark 
> Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]|[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://strataconf.com/strata2013]|[Strata|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/posts/spark-unit-test/]|[Unit testing 
> with Spark|https://spark.apache.org/news/]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blog.quantifind.com/pos

[jira] [Comment Edited] (SPARK-40322) Fix all dead links

2022-09-05 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-40322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17600384#comment-17600384
 ] 

Yang Jie edited comment on SPARK-40322 at 9/5/22 12:51 PM:
---

[https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook],
[https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]
and
[https://www.packtpub.com/big-data-and-business-intelligence/big-data-analytics]
are not dead links.


was (Author: luciferyang):
[https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]
and
[https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]
are not dead links

> Fix all dead links
> --
>
> Key: SPARK-40322
> URL: https://issues.apache.org/jira/browse/SPARK-40322
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
>
>  
> [https://www.deadlinkchecker.com/website-dead-link-checker.asp]
>  
>  
> ||Status||URL||Source link text||
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using
>  Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]|
> |404 Not 
> Found|[https://github.com/AyasdiOpenSource/df]|[DF|https://spark.apache.org/third-party-projects.html]|
> |-1 Timeout|[https://atp.io/]|[atp|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sehir.edu.tr/en/]|[Istanbul Sehir 
> University|https://spark.apache.org/powered-by.html]|
> |404 Not Found|[http://nsn.com/]|[Nokia Solutions and 
> Networks|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.nubetech.co/]|[Nube 
> Technologies|https://spark.apache.org/powered-by.html]|
> |-1 Timeout|[http://ooyala.com/]|[Ooyala, 
> Inc.|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://engineering.ooyala.com/blog/fast-spark-queries-memory-datasets]|[Spark
>  for Fast Queries|https://spark.apache.org/powered-by.html]|
> |-1 Not found: The server name or address could not be 
> resolved|[http://www.sisa.samsung.com/]|[Samsung Research 
> America|https://spark.apache.org/powered-by.html]|
> |-1 
> Timeout|[https://checker.apache.org/projs/spark.html]|[https://checker.apache.org/projs/spark.html|https://spark.apache.org/release-process.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|[AMP 
> Camp 2 [302 from 
> http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/agenda-2012/]|[AMP Camp 1 [302 
> from 
> http://ampcamp.berkeley.edu/agenda-2012/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/4/]|[AMP Camp 4 [302 from 
> http://ampcamp.berkeley.edu/4/]|https://spark.apache.org/documentation.html]|
> |404 Not Found|[https://ampcamp.berkeley.edu/3/]|[AMP Camp 3 [302 from 
> http://ampcamp.berkeley.edu/3/]|https://spark.apache.org/documentation.html]|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/spark-cookbook/9781783987061]-|-[Spark
>  Cookbook [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/spark-cookbook]|https://spark.apache.org/documentation.html]-|
> |-500 Internal Server 
> Error-|-[https://www.packtpub.com/product/apache-spark-graph-processing/9781784391805]-|-[Apache
>  Spark Graph Processing [301 from 
> https://www.packtpub.com/big-data-and-business-intelligence/apache-spark-graph-processing]|https://spark.apache.org/documentation.html]-|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/eu17/]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://prevalentdesignevents.com/sparksummit/ss17/?_ga=1.211902866.780052874.1433437196]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header]|[register|https://spark.apache.org/news/]|
> |500 Internal Server 
> Error|[https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/]|[Spark
>  Summit Europe|https://spark.apache.org/news/]|
> |-1 
> Timeout|[http://

Re: transforming a Map object to RDD

2014-09-01 Thread Matthew Farrellee

and in python,

 map = {'a': 1, 'b': 2, 'c': 3}
 rdd = sc.parallelize(map.items())
 rdd.collect()
[('a', 1), ('c', 3), ('b', 2)]

best,


matt

On 08/28/2014 07:01 PM, Sean Owen wrote:

val map = Map("foo" -> 1, "bar" -> 2, "baz" -> 3)
val rdd = sc.parallelize(map.toSeq)

rdd is an RDD[(String, Int)] and you can do what you like from there.
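
For example, a minimal end-to-end sketch (the local master and the output
path are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("MapToRdd").setMaster("local[*]"))
val map = Map("foo" -> 1, "bar" -> 2, "baz" -> 3)
// RDD[(String, Int)] built from the map's key/value pairs
val rdd = sc.parallelize(map.toSeq)
// each partition becomes one part-* file under the output directory
rdd.saveAsTextFile("/tmp/map-output")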

On Thu, Aug 28, 2014 at 11:56 PM, SK skrishna...@gmail.com wrote:

Hi,

How do I convert a Map object to an RDD so that I can use the
saveAsTextFile() operation to output the Map object?

thanks



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/transforming-a-Map-object-to-RDD-tp13071.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Cores on Master

2014-11-18 Thread Pat Ferrel
This seems to work only on a ‘worker’, not the master? So I’m back to having no
way to control cores on the master?
 
On Nov 18, 2014, at 3:24 PM, Pat Ferrel p...@occamsmachete.com wrote:

Looks like I can do this by not using start-all.sh but starting each worker
separately, passing in '--cores n'? No config/env way?

On Nov 18, 2014, at 3:14 PM, Pat Ferrel p...@occamsmachete.com wrote:

I see the default and max cores settings but these seem to control total cores 
per cluster.

My cobbled together home cluster needs the Master to not use all its cores or 
it may lock up (it does other things). Is there a way to control max cores used 
for a particular cluster machine in standalone mode?
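
One config/env sketch, assuming the stock standalone scripts (the value is
illustrative): cap the worker that runs alongside the master in
conf/spark-env.sh on that machine, since it is the worker, not the master
daemon, that claims cores for executors:

# conf/spark-env.sh on the machine that must keep cores free
SPARK_WORKER_CORES=4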
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



RE: spark 1.2 compatibility

2015-01-16 Thread Judy Nash
I should clarify on this: I personally have used HDP 2.1 + Spark 1.2 and have
not seen a problem.

However, officially HDP 2.1 + Spark 1.2 is not a supported scenario.

-Original Message-
From: Judy Nash 
Sent: Friday, January 16, 2015 5:35 PM
To: 'bhavyateja'; user@spark.apache.org
Subject: RE: spark 1.2 compatibility

Yes. It's compatible with HDP 2.1 

-Original Message-
From: bhavyateja [mailto:bhavyateja.potin...@gmail.com] 
Sent: Friday, January 16, 2015 3:17 PM
To: user@spark.apache.org
Subject: spark 1.2 compatibility

Is Spark 1.2 compatible with HDP 2.1?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark-1-2-compatibility-tp21197.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional 
commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: How to recovery application running records when I restart Spark master?

2015-01-12 Thread Chong Tang
Thank you, Cody! Actually, I have enabled this option, and I saved logs
into Hadoop file system. The problem is, how can I get the duration of an
application? The attached file is the log I copied from HDFS.

On Mon, Jan 12, 2015 at 4:36 PM, Cody Koeninger c...@koeninger.org wrote:

 http://spark.apache.org/docs/latest/monitoring.html

 http://spark.apache.org/docs/latest/configuration.html#spark-ui

 spark.eventLog.enabled



 On Mon, Jan 12, 2015 at 3:00 PM, ChongTang ct...@virginia.edu wrote:

 Is there anybody who can help me with this? Thank you very much!



 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/How-to-recovery-application-running-records-when-I-restart-Spark-master-tp21088p21108.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org





EVENT_LOG_1
Description: Binary data

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
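
For reference, the minimal event-log wiring those links describe is a sketch
like this (the HDFS directory is illustrative); an application's duration can
then be computed from the timestamps on the SparkListenerApplicationStart and
SparkListenerApplicationEnd events recorded in each log:

# conf/spark-defaults.conf
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs:///user/spark/applicationHistory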


Re: Spark Job History Server

2015-03-20 Thread Sean Owen
Uh, does that mean HDP shipped Marcelo's uncommitted patch from
SPARK-1537 anyway? Given the discussion there, that seems kinda
aggressive.

On Wed, Mar 18, 2015 at 8:49 AM, Marcelo Vanzin van...@cloudera.com wrote:
 Those classes are not part of standard Spark. You may want to contact
 Hortonworks directly if they're suggesting you use those.

 On Wed, Mar 18, 2015 at 3:30 AM, patcharee patcharee.thong...@uni.no wrote:
 Hi,

 I am using spark 1.3. I would like to use Spark Job History Server. I added
 the following line into conf/spark-defaults.conf

 spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
 spark.history.provider
 org.apache.spark.deploy.yarn.history.YarnHistoryProvider
 spark.yarn.historyServer.address  sandbox.hortonworks.com:19888

 But got: Exception in thread "main" java.lang.ClassNotFoundException:
 org.apache.spark.deploy.yarn.history.YarnHistoryProvider

 What class is really needed? How to fix it?

 Br,
 Patcharee

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org




 --
 Marcelo

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
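
For the stock (non-vendor) history server, a minimal sketch using the default
FsHistoryProvider (paths are illustrative):

# conf/spark-defaults.conf
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-history
spark.history.fs.logDirectory    hdfs:///spark-history

# then start it with
./sbin/start-history-server.sh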



Re: Upgrade to Spark 1.2.1 using Guava

2015-02-27 Thread Pat Ferrel
Thanks! that worked.

On Feb 27, 2015, at 1:50 PM, Pat Ferrel p...@occamsmachete.com wrote:

I don’t use spark-submit; I have a standalone app.

So I guess you want me to add that key/value to the conf in my code and make 
sure it exists on workers.


On Feb 27, 2015, at 1:47 PM, Marcelo Vanzin van...@cloudera.com wrote:

On Fri, Feb 27, 2015 at 1:42 PM, Pat Ferrel p...@occamsmachete.com wrote:
 I changed in the spark master conf, which is also the only worker. I added a 
 path to the jar that has guava in it. Still can’t find the class.

Sorry, I'm still confused about what config you're changing. I'm
suggesting using:

spark-submit --conf spark.executor.extraClassPath=/guava.jar blah


-- 
Marcelo

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
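
For a standalone app that builds its own SparkContext, the equivalent of the
spark-submit flag above is a sketch like this (the jar path is illustrative
and must exist at that location on each worker):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("MyApp")
  // same key that --conf sets on the command line
  .set("spark.executor.extraClassPath", "/guava.jar")
val sc = new SparkContext(conf)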



Re: spark mesos deployment : starting workers based on attributes

2015-04-03 Thread Ankur Chauhan

Hi,

Thanks! I'll add the JIRA. I'll also try to work on a patch this weekend
.

-- Ankur Chauhan

On 03/04/2015 13:23, Tim Chen wrote:
 Hi Ankur,
 
 There isn't a way to do that yet, but it's simple to add.
 
 Can you create a JIRA in Spark for this?
 
 Thanks!
 
 Tim
 
 On Fri, Apr 3, 2015 at 1:08 PM, Ankur Chauhan
 achau...@brightcove.com mailto:achau...@brightcove.com wrote:
 
 Hi,
 
 I am trying to figure out if there is a way to tell the mesos 
 scheduler in spark to isolate the workers to a set of mesos slaves 
 that have a given attribute such as `tachyon:true`.
 
 Anyone knows if that is possible or how I could achieve such a
 behavior.
 
 Thanks! -- Ankur Chauhan
 
 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org

-- Ankur Chauhan

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
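
As a follow-up sketch: later Spark releases (1.5 and up) expose exactly this
kind of attribute-based placement as a spark.mesos.constraints setting (the
attribute value here is illustrative):

spark-submit --conf spark.mesos.constraints="tachyon:true" ...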



Re: broken link on Spark Programming Guide

2015-04-07 Thread Ted Yu
For the last link, you might have meant:
https://spark.apache.org/docs/1.3.0/api/python/pyspark.html#pyspark.RDD

Cheers

On Tue, Apr 7, 2015 at 1:32 PM, jonathangreenleaf 
jonathangreenl...@gmail.com wrote:

 in the current Programming Guide:
 https://spark.apache.org/docs/1.3.0/programming-guide.html#actions

 under Actions, the Python link goes to:
 https://spark.apache.org/docs/1.3.0/api/python/pyspark.rdd.RDD-class.html
 which is 404

 which I think should be:

 https://spark.apache.org/docs/1.3.0/api/python/index.html#org.apache.spark.rdd.RDD

 Thanks - Jonathan



 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/broken-link-on-Spark-Programming-Guide-tp22414.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org




RE: How to share large resources like dictionaries while processing data with Spark ?

2015-06-04 Thread Huang, Roger
Is the dictionary read-only?
Did you look at 
http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables ?
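
If it is read-only, a minimal broadcast sketch (the dictionary contents and
input path are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("DictDemo"))
// load the large dictionary once, on the driver
val dict: Map[String, Int] = Map("foo" -> 1, "bar" -> 2)
// one read-only copy is shipped to each executor, not once per task
val bcDict = sc.broadcast(dict)
val hits = sc.textFile("hdfs:///input")
  .flatMap(_.split("\\s+"))
  .filter(w => bcDict.value.contains(w))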


-Original Message-
From: dgoldenberg [mailto:dgoldenberg...@gmail.com] 
Sent: Thursday, June 04, 2015 4:50 PM
To: user@spark.apache.org
Subject: How to share large resources like dictionaries while processing data 
with Spark ?

We have some pipelines defined where sometimes we need to load potentially 
large resources such as dictionaries.

What would be the best strategy for sharing such resources among the
transformations/actions within a consumer? Can they be shared somehow across
the RDDs?

I'm looking for a way to load such a resource once into the cluster memory and 
have it be available throughout the lifecycle of a consumer...

Thanks.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-share-large-resources-like-dictionaries-while-processing-data-with-Spark-tp23162.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional 
commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: [pyspark] Starting workers in a virtualenv

2015-05-22 Thread Karlson

That works, thank you!

On 2015-05-22 03:15, Davies Liu wrote:

Could you try specifying PYSPARK_PYTHON as the path to the python in
your virtualenv? For example:

PYSPARK_PYTHON=/path/to/env/bin/python bin/spark-submit xx.py

On Mon, Apr 20, 2015 at 12:51 AM, Karlson ksonsp...@siberie.de wrote:

Hi all,

I am running the Python process that communicates with Spark in a
virtualenv. Is there any way I can make sure that the Python processes
of the workers are also started in a virtualenv? Currently I am getting
ImportErrors when the worker tries to unpickle stuff that is not
installed system-wide. For now both the worker and the driver run on the
same machine in local mode.

Thanks in advance!

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
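
A slightly fuller sketch separating the two interpreters (both variable names
are standard; the paths are illustrative):

export PYSPARK_PYTHON=/path/to/env/bin/python         # python used by the workers
export PYSPARK_DRIVER_PYTHON=/path/to/env/bin/python  # python used by the driver
bin/spark-submit xx.py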



Re: Is there more information about spark shuffer-service

2015-07-21 Thread Ted Yu
To my knowledge, there is no HA for External Shuffle Service. 

Cheers



 On Jul 21, 2015, at 2:16 AM, JoneZhang joyoungzh...@gmail.com wrote:
 
 There is a saying "If the service is enabled, Spark executors will fetch
 shuffle files from the service instead of from each other." in the wiki:
 https://spark.apache.org/docs/1.3.0/job-scheduling.html#graceful-decommission-of-executors
 
 Is there more information about the shuffle service?
 For example, how is a shutdown of the service handled? Does any redundancy
 exist?
 
 Thanks!
 
 
 
 
 --
 View this message in context: 
 http://apache-spark-user-list.1001560.n3.nabble.com/Is-there-more-information-about-spark-shuffer-service-tp23925.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.
 
 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org
 

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
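
For reference, the basic wiring is a sketch like this (standalone mode; both
keys are the standard ones):

# conf/spark-defaults.conf
spark.shuffle.service.enabled    true
# commonly paired with dynamic allocation, which requires the service
spark.dynamicAllocation.enabled  true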



Re: Master build fails ?

2015-11-05 Thread Dilip Biswal
Hello,

I am getting the same build error about not being able to find 
com.google.common.hash.HashCodes.

Is there a solution to this ?

Regards,
Dilip Biswal
Tel: 408-463-4980
dbis...@us.ibm.com



From:   Jean-Baptiste Onofré <j...@nanthrax.net>
To: Ted Yu <yuzhih...@gmail.com>
Cc: "dev@spark.apache.org" <dev@spark.apache.org>
Date:   11/03/2015 07:20 AM
Subject:Re: Master build fails ?



Hi Ted,

thanks for the update. The build with sbt is in progress on my box.

Regards
JB

On 11/03/2015 03:31 PM, Ted Yu wrote:
> Interesting, Sbt builds were not all failing:
>
> https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-SBT/
>
> FYI
>
> On Tue, Nov 3, 2015 at 5:58 AM, Jean-Baptiste Onofré <j...@nanthrax.net
> <mailto:j...@nanthrax.net>> wrote:
>
> Hi Jacek,
>
> it works fine with mvn: the problem is with sbt.
>
> I suspect a different reactor order in sbt compare to mvn.
>
> Regards
> JB
>
> On 11/03/2015 02:44 PM, Jacek Laskowski wrote:
>
> Hi,
>
> Just built the sources using the following command and it worked
> fine.
>
> ➜  spark git:(master) ✗ ./build/mvn -Pyarn -Phadoop-2.6
> -Dhadoop.version=2.7.1 -Dscala-2.11 -Phive -Phive-thriftserver
> -DskipTests clean install
> ...
> [INFO]
> 
> [INFO] BUILD SUCCESS
> [INFO]
> 
> [INFO] Total time: 14:15 min
> [INFO] Finished at: 2015-11-03T14:40:40+01:00
> [INFO] Final Memory: 438M/1972M
> [INFO]
> 
>
> ➜  spark git:(master) ✗ java -version
> java version "1.8.0_66"
> Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>
> I'm on Mac OS.
>
> Pozdrawiam,
> Jacek
>
> --
> Jacek Laskowski | http://blog.japila.pl |
> http://blog.jaceklaskowski.pl
> Follow me at https://twitter.com/jaceklaskowski
> Upvote at http://stackoverflow.com/users/1305344/jacek-laskowski
>
>
> On Tue, Nov 3, 2015 at 1:37 PM, Jean-Baptiste Onofré
> <j...@nanthrax.net <mailto:j...@nanthrax.net>> wrote:
>
> Thanks for the update, I used mvn to build but without hive
> profile.
>
> Let me try with mvn with the same options as you and sbt 
also.
>
> I keep you posted.
>
> Regards
> JB
>
> On 11/03/2015 12:55 PM, Jeff Zhang wrote:
>
>
> I found it is due to SPARK-11073.
>
> Here's the command I used to build
>
> build/sbt clean compile -Pyarn -Phadoop-2.6 -Phive
> -Phive-thriftserver
> -Psparkr
>
> On Tue, Nov 3, 2015 at 7:52 PM, Jean-Baptiste Onofré
> <j...@nanthrax.net <mailto:j...@nanthrax.net>
> <mailto:j...@nanthrax.net <mailto:j...@nanthrax.net>>> 
wrote:
>
>   Hi Jeff,
>
>   it works for me (with skipping the tests).
>
>   Let me try again, just to be sure.
>
>   Regards
>   JB
>
>
>   On 11/03/2015 11:50 AM, Jeff Zhang wrote:
>
>   Looks like it's due to guava version
> conflicts, I see both guava
>   14.0.1
>   and 16.0.1 under lib_managed/bundles. Anyone
> meet this issue too ?
>
>   [error]
>
> 
/Users/jzhang/github/spark_apache/core/src/main/scala/org/apache/spark/SecurityManager.scala:26:
>   object HashCodes is not a member of package
> com.google.common.hash
>   [error] import 
com.google.common.hash.HashCodes
>   [error]^
>   [info] Resolving
> org.apache.commons#commons-math;2.2 ...
>   [error]
>
> 
/Users/jzhang/github/spark_apache/core/src/main/scala/org/apache/spark/SecurityManager.scala:384:
>   not found: value HashCodes
>   [error] val cookie =
> HashCodes.fromBytes(secret).toString()
>   [e
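
A hedged sketch of the API difference behind the error, assuming Guava 15+
wins on the classpath (Guava 16 removed the HashCodes helper that 14 had;
HashCode.fromBytes is its replacement):

import com.google.common.hash.HashCode

val secret: Array[Byte] = Array[Byte](1, 2, 3) // illustrative
// equivalent of the failing line in SecurityManager.scala
val cookie = HashCode.fromBytes(secret).toString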

Re: Spark Streaming - History UI

2015-12-02 Thread patcharee

I meant there is no streaming tab at all. It looks like I need version 1.6.

Patcharee

On 02. des. 2015 11:34, Steve Loughran wrote:

The history UI doesn't update itself for live apps (SPARK-7889), though I'm
working on it.

Are you trying to view a running streaming job?


On 2 Dec 2015, at 05:28, patcharee <patcharee.thong...@uni.no> wrote:

Hi,

On my history server UI, I cannot see the "streaming" tab for any streaming
jobs. I am using version 1.5.1. Any ideas?

Thanks,
Patcharee

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[GitHub] spark pull request: [SPARK-11835] Adds a sidebar menu to MLlib's d...

2015-11-19 Thread thunterdb
Github user thunterdb commented on the pull request:

https://github.com/apache/spark/pull/9826#issuecomment-158220169
  
@andrewor14 this is a different issue: the SIPs show the table of contents 
within one document, which we already have with the `{:toc}` directive. This PR 
adds the per-project organization (linking various markdown files together). 
This is not something we can infer automatically because we show different 
levels of nesting for each of the pages. Each project already has this 
description in their overview pages, and I am proposing we move them to the 
side bar:
 - http://spark.apache.org/docs/latest/streaming-programming-guide.html
 - http://spark.apache.org/docs/latest/sql-programming-guide.html
 - http://spark.apache.org/docs/latest/graphx-programming-guide.html
 - http://spark.apache.org/docs/latest/sparkr.html



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



Re: Spark 2.0.0 preview docs uploaded

2016-06-09 Thread Pete Robbins
It would be nice to have a "what's new in 2.0.0" equivalent to
https://spark.apache.org/releases/spark-release-1-6-0.html available or am
I just missing it?

On Wed, 8 Jun 2016 at 13:15 Sean Owen <so...@cloudera.com> wrote:

> OK, this is done:
>
> http://spark.apache.org/documentation.html
> http://spark.apache.org/docs/2.0.0-preview/
> http://spark.apache.org/docs/preview/
>
> On Tue, Jun 7, 2016 at 4:59 PM, Shivaram Venkataraman
> <shiva...@eecs.berkeley.edu> wrote:
> > As far as I know the process is just to copy docs/_site from the build
> > to the appropriate location in the SVN repo (i.e.
> > site/docs/2.0.0-preview).
> >
> > Thanks
> > Shivaram
> >
> > On Tue, Jun 7, 2016 at 8:14 AM, Sean Owen <so...@cloudera.com> wrote:
> >> As a stop-gap, I can edit that page to have a small section about
> >> preview releases and point to the nightly docs.
> >>
> >> Not sure who has the power to push 2.0.0-preview to site/docs, but, if
> >> that's done then we can symlink "preview" in that dir to it and be
> >> done, and update this section about preview docs accordingly.
> >>
>
> -----
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>
>
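
Shivaram's copy step, as a sketch (the checkout URL is an assumption; the
paths follow the "i.e." above):

svn checkout https://svn.apache.org/repos/asf/spark spark-site
cp -r docs/_site spark-site/site/docs/2.0.0-preview
svn add spark-site/site/docs/2.0.0-preview
svn commit -m "Add 2.0.0-preview docs" spark-site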


Re: Kafka connection logs in Spark

2016-05-26 Thread Mail.com
Hi Cody,

I used Hortonworks jars for Spark Streaming that enable getting messages
from Kafka with Kerberos.

Thanks,
Pradeep


> On May 26, 2016, at 11:04 AM, Cody Koeninger <c...@koeninger.org> wrote:
> 
> I wouldn't expect kerberos to work with anything earlier than the beta
> consumer for kafka 0.10
> 
>> On Wed, May 25, 2016 at 9:41 PM, Mail.com <pradeep.mi...@mail.com> wrote:
>> Hi All,
>> 
>> I am connecting Spark 1.6 streaming to Kafka 0.8.2 with Kerberos. I ran
>> Spark Streaming in debug mode, but do not see any log saying it connected to
>> Kafka or a topic, etc. How could I enable that?
>> 
>> My Spark Streaming job runs, but no messages are fetched from the RDD. Please
>> suggest.
>> 
>> Thanks,
>> Pradeep
>> 
>> ---------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
> 
> -----
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
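
One way to surface connection-level logging is a log4j sketch like this
(assuming the log4j 1.x conf/log4j.properties that Spark 1.x ships; the
logger names target the Kafka 0.8 client and Spark's Kafka integration
package):

log4j.logger.kafka=DEBUG
log4j.logger.org.apache.spark.streaming.kafka=DEBUG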



Re: I want to subscribe to mailing lists

2016-02-11 Thread Josh Elser
No, you need to send to the subscribe address as that community page 
instructs:


mailto:user-subscr...@spark.apache.org

and

mailto:dev-subscr...@spark.apache.org

Shyam Sarkar wrote:

Do I have an @apache.org e-mail address? I am getting the following error
when I send from the ssarkarayushnet...@gmail.com address:

mailer-dae...@apache.org

to me
Hi. This is the qmail-send program at apache.org.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

<u...@spark.apache.org>:
Must be sent from an @apache.org address or a subscriber address or an
address in LDAP.

Thanks.

On Thu, Feb 11, 2016 at 11:35 AM, Matthias J. Sax<mj...@apache.org>  wrote:


https://spark.apache.org/community.html

On 02/11/2016 08:34 PM, Shyam Sarkar wrote:

u...@spark.apache.org

d...@spark.apache.org






