Mailing lists matching spark.apache.org
commits@spark.apache.org
dev@spark.apache.org
issues@spark.apache.org
reviews@spark.apache.org
user@spark.apache.org
Updated Spark logo
Hi all, FYI, we've recently updated the Spark logo at https://spark.apache.org/ to say "Apache Spark" instead of just "Spark". Many ASF projects have been doing this recently to make it clearer that they are associated with the ASF, and indeed the ASF's branding guidelin
Re: Spark Website
Same here From: Benjamin Kim <bbuil...@gmail.com> Date: Wednesday, July 13, 2016 at 11:47 AM To: manish ranjan <cse1.man...@gmail.com> Cc: user <user@spark.apache.org> Subject: Re: Spark Website It takes me to the directories instead of the webpage. On Jul 13, 2016, at 11:45
Re: Spark Website
Worked for me if I go to https://spark.apache.org/site/ but not https://spark.apache.org On Wed, Jul 13, 2016 at 11:48 AM, Maurin Lenglart <mau...@cuberonlabs.com> wrote: > Same here > > > > *From: *Benjamin Kim <bbuil...@gmail.com> > *Date: *Wednesday, July 13, 2
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Literal.create(key, StringType))), expected) + } + +checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST") +checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH") +checkParseUrl("query=1",
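The test cases above exercise Spark SQL's `parse_url` expression. As an illustration of the semantics being tested, here is a minimal pure-Python sketch (using `urllib.parse`, not Spark itself) of the HOST/PATH/QUERY extraction:

```python
from urllib.parse import urlparse, parse_qs

def parse_url(url, part, key=None):
    """Toy re-implementation of the parts of Spark SQL's parse_url
    exercised by the tests above (HOST, PATH, QUERY only)."""
    parsed = urlparse(url)
    if part == "HOST":
        return parsed.hostname
    if part == "PATH":
        return parsed.path
    if part == "QUERY":
        if key is None:
            return parsed.query
        values = parse_qs(parsed.query).get(key)
        return values[0] if values else None
    return None

print(parse_url("http://spark.apache.org/path?query=1", "HOST"))   # spark.apache.org
print(parse_url("http://spark.apache.org/path?query=1", "PATH"))   # /path
```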
[jira] [Created] (SPARK-19546) Every mail to u...@spark.apache.org is blocked
Shivam Sharma created SPARK-19546: - Summary: Every mail to u...@spark.apache.org is blocked Key: SPARK-19546 URL: https://issues.apache.org/jira/browse/SPARK-19546 Project: Spark Issue Type
[jira] [Updated] (SPARK-19546) Every mail to u...@spark.apache.org is getting blocked
[ https://issues.apache.org/jira/browse/SPARK-19546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivam Sharma updated SPARK-19546: -- Priority: Major (was: Minor) > Every mail to u...@spark.apache.org is getting bloc
[jira] [Created] (SPARK-19034) Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2
Sanjay Dasgupta created SPARK-19034: --- Summary: Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2 Key: SPARK-19034 URL: https://issues.apache.org/jira/browse/SPARK-19034
[jira] [Commented] (SPARK-19034) Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2
[ https://issues.apache.org/jira/browse/SPARK-19034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15787411#comment-15787411 ] Dongjoon Hyun commented on SPARK-19034: --- +1 > Download packages on 'spark.apache.
[jira] [Commented] (SPARK-19034) Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2
for the confusion. > Download packages on 'spark.apache.org/downloads.html' contain release 2.0.2 > > > Key: SPARK-19034 > URL: https://issues.apache.org/jira/bro
[jira] [Comment Edited] (SPARK-20456) Document major aggregation functions for pyspark
} Document `sql.functions.py`: 1. Document the common aggregate functions (`min`, `max`, `mean`, `count`, `collect_set`, `collect_list`, `stddev`, `variance`) {quote} I think we have documentations for ... min - https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.min
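For reference, the aggregates listed in the ticket have plain-Python counterparts, sketched here with made-up sample data (Spark's `stddev` and `variance` are the sample variants, matching `statistics.stdev`/`statistics.variance`):

```python
import statistics

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # made-up sample data

aggregates = {
    "min": min(values),
    "max": max(values),
    "mean": statistics.mean(values),
    "count": len(values),
    "collect_set": sorted(set(values)),        # distinct values
    "collect_list": list(values),              # all values, in order
    "stddev": statistics.stdev(values),        # sample stddev, as in Spark
    "variance": statistics.variance(values),   # sample variance, as in Spark
}
print(aggregates["mean"])  # 5.0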
[jira] [Created] (SPARK-21593) Fix broken configuration page
Components: Documentation Affects Versions: 2.2.0 Environment: Chrome/Firefox Reporter: Artur Sukhenko Priority: Minor Latest configuration page for Spark 2.2.0 has broken menu list and named anchors. Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1
Re: Welcoming Tejas Patil as a Spark committer
Congratulations, Tejas! -- Dilip ----- Original message ----- From: Suresh Thalamati <suresh.thalam...@gmail.com> To: "dev@spark.apache.org" <dev@spark.apache.org> Cc: Subject: Re: Welcoming Tejas Patil as a Spark committer Date: Tue, Oct 3, 2017 12:01 PM Congratulations,
[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/
-website] > Adding Kubernetes as an option to https://spark.apache.org/ > --- > > Key: SPARK-23083 > URL: https://issues.apache.org/jira/browse/SPARK-23083 > Project: Spark >
[jira] [Created] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/
Anirudh Ramanathan created SPARK-23083: -- Summary: Adding Kubernetes as an option to https://spark.apache.org/ Key: SPARK-23083 URL: https://issues.apache.org/jira/browse/SPARK-23083 Project
[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/
then we'll wait to merge the PR. > Adding Kubernetes as an option to https://spark.apache.org/ > --- > > Key: SPARK-23083 > URL: https://issues.apache.org/jira/browse/SPARK-23083 >
[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/
/pull/87 > Adding Kubernetes as an option to https://spark.apache.org/ > --- > > Key: SPARK-23083 > URL: https://issues.apache.org/jira/browse/SPARK-23083 > Project: Spark >
[GitHub] spark pull request #22370: don't link to deprecated function
ons for the data source. #' @return A SparkDataFrame. #' @rdname createTable -#' @seealso \link{createExternalTable} --- End diff -- I don't see it here (nothing pointing to `sparkR.init`/`sparkRHive.init`/`sparkRSQL.init`): https://spark.apache.org/docs/latest/
[jira] [Resolved] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/
ion to https://spark.apache.org/ > --- > > Key: SPARK-23083 > URL: https://issues.apache.org/jira/browse/SPARK-23083 > Project: Spark > Issue Type: Sub-task >
[jira] [Assigned] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/
tps://spark.apache.org/ > --- > > Key: SPARK-23083 > URL: https://issues.apache.org/jira/browse/SPARK-23083 > Project: Spark > Issue Type: Sub-task > Components:
Re: Sample date_trunc error for webpage (https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc )
uot;binggan1989" > *Subject: **Sample date_trunc error for webpage > (https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc > <https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc> )* > *Date: *July 5, 2019 at 2:54:54 AM PDT > *To: *"webmaster" > *Rep
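The page in question documents `date_trunc`, which truncates a timestamp to a given unit. A minimal Python sketch of that behavior for three of the units Spark supports (YEAR/MONTH/DAY; not the full set):

```python
from datetime import datetime

def date_trunc(fmt, ts):
    """Sketch of Spark SQL date_trunc for three of its units."""
    if fmt == "YEAR":
        return ts.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)
    if fmt == "MONTH":
        return ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if fmt == "DAY":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    raise ValueError("unsupported unit: " + fmt)

print(date_trunc("MONTH", datetime(2019, 7, 5, 2, 54, 54)))  # 2019-07-01 00:00:00
```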
[GitHub] [spark] sarutak commented on pull request #34720: [SPARK-37469][WebUI] unified shuffle read block time to shuffle read fetch wait time in StagePage
sarutak commented on pull request #34720: URL: https://github.com/apache/spark/pull/34720#issuecomment-979728617 @toujours33 nit, but could you update the [screenshot](http://spark.apache.org/docs/latest/img/AllStagesPageDetail6.png) in the [doc](spark.apache.org/docs/latest/web
[GitHub] [spark] toujours33 commented on pull request #34720: [SPARK-37469][WebUI] unified shuffle read block time to shuffle read fetch wait time in StagePage
toujours33 commented on pull request #34720: URL: https://github.com/apache/spark/pull/34720#issuecomment-979739800 > @toujours33 nit, but could you update the [screenshot](http://spark.apache.org/docs/latest/img/AllStagesPageDetail6.png) in the [doc](spark.apache.org/docs/latest/
[GitHub] [spark-website] srowen commented on a diff in pull request #400: [SPARK-39512] Document docker image release steps
srowen commented on code in PR #400: URL: https://github.com/apache/spark-website/pull/400#discussion_r904998278 ## site/sitemap.xml: ## @@ -941,27 +941,27 @@ weekly - https://spark.apache.org/graphx/ + https://spark.apache.org/news/ Review Comment: @holdenk just
[GitHub] [spark-website] gengliangwang commented on a diff in pull request #400: [SPARK-39512] Document docker image release steps
gengliangwang commented on code in PR #400: URL: https://github.com/apache/spark-website/pull/400#discussion_r906444239 ## site/sitemap.xml: ## @@ -941,27 +941,27 @@ weekly - https://spark.apache.org/graphx/ + https://spark.apache.org/news/ Review Comment: +1
[GitHub] [spark-website] srowen commented on a diff in pull request #400: [SPARK-39512] Document docker image release steps
srowen commented on code in PR #400: URL: https://github.com/apache/spark-website/pull/400#discussion_r901012063 ## site/sitemap.xml: ## @@ -941,27 +941,27 @@ weekly - https://spark.apache.org/graphx/ + https://spark.apache.org/news/ Review Comment: I don't know
[GitHub] [spark] sunchao commented on pull request #38352: [SPARK-40801][BUILD][3.2] Upgrade `Apache commons-text` to 1.10
sunchao commented on PR #38352: URL: https://github.com/apache/spark/pull/38352#issuecomment-1316416703 @bsikander again, pls check [d...@spark.apache.org](mailto:d...@spark.apache.org) - it's being voted. -- This is an automated message from the Apache Git Service. To respond
[GitHub] [spark] bjornjorgensen commented on pull request #38352: [SPARK-40801][BUILD][3.2] Upgrade `Apache commons-text` to 1.10
bjornjorgensen commented on PR #38352: URL: https://github.com/apache/spark/pull/38352#issuecomment-1307663321 @fryz It will be posted at d...@spark.apache.org and u...@spark.apache.org -- This is an automated message from the Apache Git Service. To respond to the message, please log
[GitHub] [spark] bjornjorgensen commented on pull request #40171: [Tests] Refactor TPCH schema to separate file similar to TPCDS for code reuse
bjornjorgensen commented on PR #40171: URL: https://github.com/apache/spark/pull/40171#issuecomment-1445168808 [priv...@spark.apache.org](mailto:priv...@spark.apache.org) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub
[GitHub] [spark] dongjoon-hyun commented on pull request #36374: [SPARK-39006][K8S] Show a directional error message for executor PVC dynamic allocation failure
dongjoon-hyun commented on PR #36374: URL: https://github.com/apache/spark/pull/36374#issuecomment-1605232031 Thank you for confirming. Apache Spark 3.4.1 is released officially. - https://lists.apache.org/list.html?d...@spark.apache.org - https://spark.apache.org/downloads.html
[jira] [Commented] (SPARK-20456) Document major aggregation functions for pyspark
mon aggregate functions (`min`, `max`, `mean`, `count`, `collect_set`, `collect_list`, `stddev`, `variance`) I think we have documentations for ... min - https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.min max - https://spark.apache.org/docs/2.1.0/api/pyt
[jira] [Updated] (SPARK-21593) Fix broken configuration page
anchors. Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] with [Latest docs |https://spark.apache.org/docs/latest/configuration.html] Or try this link [Configuration # Dynamic Allocation|https://spark.apache.org/docs/2.1.1/configuration.html#dynamic-allocation
Re: [PR] [SPARK-45935][PYTHON][DOCS] Fix RST files link substitutions error [spark]
/python .. |downloading| replace:: Downloading -.. _downloading: https://spark.apache.org/docs/{1}/building-spark.html +.. _downloading: https://spark.apache.org/docs/{1}/#downloading .. |building_spark| replace:: Building Spark -.. _building_spark: https://spark.apache.org/docs/{1}/#downloading
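The diff above edits Sphinx/docutils substitution definitions whose `{1}` placeholder is later filled in with a version string. A small sketch of how such a template expands (the version string here is made up):

```python
# Hedged sketch: the RST lines above are a template; a version such as
# "3.5.0" (made up for this example) is substituted for the {1} placeholder.
template = (
    ".. |downloading| replace:: Downloading\n"
    ".. _downloading: https://spark.apache.org/docs/{1}/building-spark.html\n"
)
rendered = template.format(None, "3.5.0")
print(rendered)
```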
Re: [sql]enable spark sql cli support spark sql
basic queries, not sure what's the plan for its enhancement. -Original Message- From: scwf [mailto:wangf...@huawei.com] Sent: Friday, August 15, 2014 11:22 AM To: dev@spark.apache.org Subject: [sql]enable spark sql cli support spark sql hi all, now spark sql cli only support
Hamburg Apache Spark Meetup
| the4thFloor.eu ra...@the4thfloor.eu wrote: Hi, there is a small Spark Meetup group in Berlin, Germany :-) http://www.meetup.com/Berlin-Apache-Spark-Meetup/ Please add this group to the Meetups list at https://spark.apache.org/community.html Ralph
Re: insert Hive table with RDD
hc.implicits._ existedRdd.toDF().insertInto("hivetable") or existedRdd.toDF().registerTempTable("mydata") hc.sql("insert into hivetable as select xxx from mydata") -----Original Message----- From: patcharee [mailto:patcharee.thong...@uni.no] Sent: Tuesday, March 3, 2015 7:09 PM To: user@spark.apache.org
Re: broken link on Spark Programming Guide
I fixed this a while ago in master. It should go out with the next release and next push of the site. On Tue, Apr 7, 2015 at 4:32 PM, jonathangreenleaf jonathangreenl...@gmail.com wrote: in the current Programming Guide: https://spark.apache.org/docs/1.3.0/programming-guide.html#actions under
[jira] [Updated] (SPARK-10700) Spark R Documentation not available
[ https://issues.apache.org/jira/browse/SPARK-10700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dev Lakhani updated SPARK-10700: Description: Documentation https://spark.apache.org/docs/latest/api/R/glm.html refered
Re: Tracking / estimating job progress
On 5/13/2016 10:16 AM, Anthony May wrote: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker Might be useful How do you use it? You cannot instantiate the class - is the constructor private? Thanks! On Fri, 13 May 2016 at 11:11 Ted Yu <yuz
[jira] [Commented] (SPARK-37873) SQL Syntax links are broken
[ https://issues.apache.org/jira/browse/SPARK-37873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475993#comment-17475993 ] Alex Ott commented on SPARK-37873: -- If you click on any: * [DDL Statements|https://spark.apache.org
[GitHub] [spark] yaooqinn opened a new pull request #36053: [SPARK-38778][INFRA][BUILD] Replace http with https for project url in pom
yaooqinn opened a new pull request #36053: URL: https://github.com/apache/spark/pull/36053 ### What changes were proposed in this pull request? change http://spark.apache.org/ to https://spark.apache.org/ in the project URL of all pom files ### Why
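A sketch of the kind of mechanical rewrite the PR describes, applied to a pom `<url>` element (the XML fragment below is a made-up example, not taken from the PR):

```python
# Made-up pom fragment illustrating the http -> https project-URL rewrite.
pom_fragment = "<url>http://spark.apache.org/</url>"
fixed = pom_fragment.replace("http://spark.apache.org", "https://spark.apache.org")
print(fixed)  # <url>https://spark.apache.org/</url>
```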
Re: groupBy gives non deterministic results
with Sparrow (http://www.sparrowmailapp.com/?sig) On Thursday, September 11, 2014 at 12:12 AM, Davies Liu wrote: I think the mails to spark.incubator.apache.org (http://spark.incubator.apache.org) will be forwarded to spark.apache.org (http://spark.apache.org). Here is the header
[spark-website] branch asf-site updated: Hotfix site links in sitemap.xml
(+), 164 deletions(-) diff --git a/site/sitemap.xml b/site/sitemap.xml index c7f17db..80a4845 100644 --- a/site/sitemap.xml +++ b/site/sitemap.xml @@ -143,658 +143,658 @@ weekly - http://localhost:4000/releases/spark-release-2-4-0.html + https://spark.apache.org/releases/spark-release-2-4-0
spark-website git commit: Update Spark 2.4 release window (and fix Spark URLs in sitemap)
tml -- diff --git a/site/mailing-lists.html b/site/mailing-lists.html index d447046..f7ae56f 100644 --- a/site/mailing-lists.html +++ b/site/mailing-lists.html @@ -12,7 +12,7 @@ -http://localhost:4000/community.html" /> +https://spark.apache.org/communit
Re: Welcoming three new committers
Core. Join me in welcoming them as committers! Matei - To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto:dev-unsubscr...@spark.apache.org) For additional commands, e-mail: dev-h...@spark.apache.org (mailto:dev
Re: Welcoming three new committers
...@spark.apache.org (mailto: dev-unsubscr...@spark.apache.org) For additional commands, e-mail: dev-h...@spark.apache.org (mailto: dev-h...@spark.apache.org) - To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
Re: Welcoming three new committers
in welcoming them as committers! Matei - To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto: dev-unsubscr...@spark.apache.org) For additional commands, e-mail: dev-h...@spark.apache.org (mailto: dev-h
Re: Upgrade to Spark 1.2.1 using Guava
- To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands
[jira] [Updated] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
documentation (Local matrix scala section), the URL points to https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices which is a mistake, since Matrices is an object that implements factory methods for Matrix that does not have a companion class. The correct
[jira] [Updated] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
to https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices which is a mistake, since Matrices is an object that implements factory methods for Matrix that does not have a companion class. The correct link should point to https://spark.apache.org/docs
[jira] [Created] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
documentation (Local matrix scala section), the URL points to https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices which is a mistake, since Matrices is an object that implements factory methods for Matrix that does not have a companion class. The correct link
[jira] [Assigned] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
points to https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices which is a mistake, since Matrices is an object that implements factory methods for Matrix that does not have a companion class. The correct link should point to https://spark.apache.org
[jira] [Assigned] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
scala section), the URL points to https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices which is a mistake, since Matrices is an object that implements factory methods for Matrix that does not have a companion class. The correct link should point
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
scription( + usage = "_FUNC_(url, partToExtract[, key]) - extracts a part from a URL", + extended = """Parts: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO. +Key specifies which query to extract. + Examples: + > SELECT _FUN
[jira] [Created] (SPARK-22631) Consolidate all configuration properties into one page
Issue Type: Documentation Components: Documentation Affects Versions: 2.2.0 Reporter: Andreas Maier The page https://spark.apache.org/docs/2.2.0/configuration.html gives the impression as if all configuration properties of Spark are described on this page. Unfortunately
[jira] [Created] (SPARK-22630) Consolidate all configuration properties into one page
Issue Type: Documentation Components: Documentation Affects Versions: 2.2.0 Reporter: Andreas Maier The page https://spark.apache.org/docs/2.2.0/configuration.html gives the impression as if all configuration properties of Spark are described on this page. Unfortunately
[jira] [Commented] (SPARK-22630) Consolidate all configuration properties into one page
> URL: https://issues.apache.org/jira/browse/SPARK-22630 > Project: Spark > Issue Type: Documentation > Components: Documentation >Affects Versions: 2.2.0 >Reporter: Andreas Maier > > The page https://spark.
[jira] [Resolved] (SPARK-22631) Consolidate all configuration properties into one page
Affects Versions: 2.2.0 >Reporter: Andreas Maier > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on this > page. Unfortunately this is not true. The description of important propert
[jira] [Commented] (SPARK-22630) Consolidate all configuration properties into one page
on >Affects Versions: 2.2.0 >Reporter: Andreas Maier > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on this > page. Unfortunately this is not true. The descri
[jira] [Updated] (SPARK-22630) Consolidate all configuration properties into one page
Affects Versions: 2.2.0 >Reporter: Andreas Maier >Priority: Major > Labels: bulk-closed > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on
[jira] [Resolved] (SPARK-22630) Consolidate all configuration properties into one page
Affects Versions: 2.2.0 >Reporter: Andreas Maier >Priority: Major > Labels: bulk-closed > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on
[jira] [Updated] (SPARK-45208) Kubernetes Configuration in Spark Community Website doesn't have horizontal scrollbar
. Specifically, the Kubernetes configuration lists on the right-hand side are not visible and doc doesn't have a horizontal scrollbar. - [https://spark.apache.org/docs/3.5.0/running-on-kubernetes.html#configuration] - [https://spark.apache.org/docs/3.4.1/running-on-kubernetes.html#configuration
[jira] [Comment Edited] (SPARK-44820) Switch languages consistently across docs for all code snippets
: this issue emerged since 3.1.1 (note that we don't have an official 3.1.0, since 3.1.0 was a mistake https://spark.apache.org/news/index.html) 3.0.3 works well: https://spark.apache.org/docs/3.0.3/structured-streaming-programming-guide.html 3.1.1 was broken: https://spark.apache.org/docs/3.1.1
[jira] [Created] (SPARK-40322) Fix all dead links
-and-scrooge-spark]|[Using Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| |-1 Not found: The server name or address could not be resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]| |404 Not Found|[https://github.com/AyasdiOpenSource
[jira] [Updated] (SPARK-40322) Fix all dead links
ld not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb.org/]|[BlinkDB|http
[jira] [Resolved] (SPARK-40322) Fix all dead links
urce link text|| > |-1 Not found: The server name or address could not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name
[jira] [Assigned] (SPARK-40322) Fix all dead links
ing-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]| > |404 Not > Found|
[jira] [Commented] (SPARK-40322) Fix all dead links
ould not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb.org/]|[BlinkDB|https:
[jira] [Assigned] (SPARK-40322) Fix all dead links
ing-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]| > |404 Not > Found|
[jira] [Commented] (SPARK-40322) Fix all dead links
ould not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb.org/]|[BlinkDB|https:
[jira] [Updated] (SPARK-40322) Fix all dead links
ound: The server name or address could not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb
[jira] [Commented] (SPARK-40322) Fix all dead links
URL||Source link text|| > |-1 Not found: The server name or address could not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or
[jira] [Commented] (SPARK-40322) Fix all dead links
g.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The server name or address could not be > resolved|[http://blinkdb.org/]|[BlinkDB|https://spark.apache.org/third-party-projects.html]| >
[jira] [Commented] (SPARK-40322) Fix all dead links
; ||Status||URL||Source link text|| > |-1 Not found: The server name or address could not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| > |-1 Not found: The serv
[jira] [Comment Edited] (SPARK-40322) Fix all dead links
kchecker.com/website-dead-link-checker.asp] > > > ||Status||URL||Source link text|| > |-1 Not found: The server name or address could not be > resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using > Parquet and Scrooge with Spark|https://spark.apache.org/docu
Re: transforming a Map object to RDD
.1001560.n3.nabble.com/transforming-a-Map-object-to-RDD-tp13071.html Sent from the Apache Spark User List mailing list archive at Nabble.com. - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e
Re: Cores on Master
(it does other things). Is there a way to control max cores used for a particular cluster machine in standalone mode? - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h
RE: spark 1.2 compatibility
Should clarify on this. I personally have used HDP 2.1 + Spark 1.2 and have not seen a problem. However officially HDP 2.1 + Spark 1.2 is not a supported scenario. -Original Message- From: Judy Nash Sent: Friday, January 16, 2015 5:35 PM To: 'bhavyateja'; user@spark.apache.org
Re: How to recovery application running records when I restart Spark master?
://spark.apache.org/docs/latest/monitoring.html http://spark.apache.org/docs/latest/configuration.html#spark-ui spark.eventLog.enabled On Mon, Jan 12, 2015 at 3:00 PM, ChongTang ct...@virginia.edu wrote: Is there any body can help me with this? Thank you very much! -- View this message
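The two links above point at the monitoring and UI configuration pages; the setting mentioned, `spark.eventLog.enabled`, lets the history server reconstruct finished applications. A hedged `spark-defaults.conf` fragment (the log-directory path below is made up):

```
# Enable the event log so application records survive a master restart;
# the directory is an example, not a required location.
spark.eventLog.enabled   true
spark.eventLog.dir       hdfs:///var/log/spark-events
```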
Re: Spark Job History Server
: org.apache.spark.deploy.yarn.history.YarnHistoryProvider What class is really needed? How to fix it? Br, Patcharee - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org -- Marcelo
Re: Upgrade to Spark 1.2.1 using Guava
changing. I'm suggesting using: spark-submit --conf spark.executor.extraClassPath=/guava.jar blah -- Marcelo - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org
Re: spark mesos deployment : starting workers based on attributes
attribute such as `tachyon:true`. Anyone knows if that is possible or how I could achieve such a behavior. Thanks! -- Ankur Chauhan - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org mailto:user-unsubscr
Re: broken link on Spark Programming Guide
For the last link, you might have meant: https://spark.apache.org/docs/1.3.0/api/python/pyspark.html#pyspark.RDD Cheers On Tue, Apr 7, 2015 at 1:32 PM, jonathangreenleaf jonathangreenl...@gmail.com wrote: in the current Programming Guide: https://spark.apache.org/docs/1.3.0/programming
RE: How to share large resources like dictionaries while processing data with Spark ?
Is the dictionary read-only? Did you look at http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables ? -Original Message- From: dgoldenberg [mailto:dgoldenberg...@gmail.com] Sent: Thursday, June 04, 2015 4:50 PM To: user@spark.apache.org Subject: How to share
Re: [pyspark] Starting workers in a virtualenv
system-wide. For now both the worker and the driver run on the same machine in local mode. Thanks in advance! - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org
Re: Is there more information about spark shuffer-service
://spark.apache.org/docs/1.3.0/job-scheduling.html#graceful-decommission-of-executors https://spark.apache.org/docs/1.3.0/job-scheduling.html#graceful-decommission-of-executors Is there more information about shuffer-service. For example. How to deal with the service shut down, does any
Re: Master build fails ?
Cc: "dev@spark.apache.org" <dev@spark.apache.org> Date: 11/03/2015 07:20 AM Subject:Re: Master build fails ? Hi Ted, thanks for the update. The build with sbt is in progress on my box. Regards JB On 11/03/2015 03:31 PM, Ted Yu wrote: > Interesting, Sbt builds
Re: Spark Streaming - History UI
l: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@
[GitHub] spark pull request: [SPARK-11835] Adds a sidebar menu to MLlib's d...
: - http://spark.apache.org/docs/latest/streaming-programming-guide.html - http://spark.apache.org/docs/latest/sql-programming-guide.html - http://spark.apache.org/docs/latest/graphx-programming-guide.html - http://spark.apache.org/docs/latest/sparkr.html --- If your project
Re: Spark 2.0.0 preview docs uploaded
It would be nice to have a "what's new in 2.0.0" equivalent to https://spark.apache.org/releases/spark-release-1-6-0.html available or am I just missing it? On Wed, 8 Jun 2016 at 13:15 Sean Owen <so...@cloudera.com> wrote: > OK, this is done: > > http://spark.apache.org/
Re: Kafka connection logs in Spark
ying it connected to >> Kafka or topic etc. How could I enable that. >> >> My spark streaming job runs but no messages are fetched from the RDD. Please >> suggest. >> >> Thanks, >> Pradeep >> >> -----
Re: I want to subscribe to mailing lists
No, you need to send to the subscribe address as that community page instructs: mailto:user-subscr...@spark.apache.org and mailto:dev-subscr...@spark.apache.org Shyam Sarkar wrote: Do I have an @apache.org e-mail address? I am getting following error when I send from ssarkarayushnet