Mailing lists matching spark.apache.org
commits spark.apache.org
dev spark.apache.org
issues spark.apache.org
reviews spark.apache.org
user spark.apache.org
Unsubscribe
Unsubscribe - To unsubscribe e-mail: user-unsubscr...@spark.apache.org
Re: [PR] Miland db/miland legacy error class [spark]
HyukjinKwon commented on PR #45423: URL: https://github.com/apache/spark/pull/45423#issuecomment-1984835207 See also https://spark.apache.org/contributing.html -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL
Unsubscribe
- To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
Invalid link for Spark 1.0.0 in Official Web Site
Hi, I found there is an invalid link on http://spark.apache.org/downloads.html . The link for the release notes of Spark 1.0.0 points to http://spark.apache.org/releases/spark-release-1.0.0.html but this link is invalid. I think that is a mistake for http://spark.apache.org/releases/spark-release-1-0
Re: compiling spark source code
follow the instruction here: http://spark.apache.org/docs/latest/building-with-maven.html -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/compiling-spark-source-code-tp13980p14144.html Sent from the Apache Spark User List mailing list archive at Nabble.com
Re:
See first section of http://spark.apache.org/community On Wed, Oct 22, 2014 at 7:42 AM, Margusja mar...@roo.ee wrote: unsubscribe - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail
Re: unsubscribe
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org Thanks Best Regards On Mon, Nov 3, 2014 at 5:53 PM, Karthikeyan Arcot Kuppusamy karthikeyan...@zanec.com wrote: hi - To unsubscribe, e-mail: user-unsubscr
Re: unsubscribe
Abdul, Please send an email to user-unsubscr...@spark.apache.org On Tue, Nov 18, 2014 at 2:05 PM, Abdul Hakeem alhak...@gmail.com wrote: - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands
Re: Spark SQL with a sorted file
] Sent: Thursday, December 4, 2014 11:34 AM To: user@spark.apache.org mailto:user@spark.apache.org Subject: Spark SQL with a sorted file Hi, If I create a SchemaRDD from a file that I know is sorted on a certain field, is it possible to somehow pass that information
Berlin Apache Spark Meetup
Hi, there is a small Spark Meetup group in Berlin, Germany :-) http://www.meetup.com/Berlin-Apache-Spark-Meetup/ Please add this group to the Meetups list at https://spark.apache.org/community.html Ralph - To unsubscribe, e
textFile() ordering and header rows
Since RDDs are generally unordered, aren't things like textFile().first() not guaranteed to return the first row (such as looking for a header row)? If so, doesn't that make the example in http://spark.apache.org/docs/1.2.1/quick-start.html#basics misleading
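The pattern this thread is asking about is usually handled per-partition rather than by trusting first(). A plain-Python sketch of the idea (the partition layout and helper name here are hypothetical; in PySpark this logic would live inside rdd.mapPartitionsWithIndex):

```python
def skip_header(partitions):
    """Drop the first row of partition 0, mimicking the PySpark pattern
    rdd.mapPartitionsWithIndex(lambda i, it: islice(it, 1, None) if i == 0 else it).
    `partitions` is a hypothetical list-of-lists standing in for an RDD."""
    result = []
    for index, rows in enumerate(partitions):
        it = iter(rows)
        if index == 0:
            next(it, None)  # discard the header row of the first partition
        result.extend(it)
    return result

parts = [["name,age", "alice,30"], ["bob,25"]]
print(skip_header(parts))  # ['alice,30', 'bob,25']
```

This works because for a single text file the first element of partition 0 is the first line of the file, even though RDD operations in general make no ordering promises.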
Re: Mllib kmeans #iteration
Have you referred to the official documentation of kmeans at https://spark.apache.org/docs/1.1.1/mllib-clustering.html ? -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Mllib-kmeans-iteration-tp22353p22365.html Sent from the Apache Spark User List mailing list archive
saveasorcfile on partitioned orc
Hi, I followed the information on https://www.mail-archive.com/reviews@spark.apache.org/msg141113.html to save orc file with spark 1.2.1. I can save data to a new orc file. I wonder how to save data to an existing and partitioned orc file? Any suggestions? BR, Patcharee
Spark History Server pointing to S3
In the Spark website it's stated in the View After the Fact section (https://spark.apache.org/docs/latest/monitoring.html) that you can point the start-history-server.sh script to a directory in order to view the Web UI using the logs as data source. Is it possible to point that script to S3
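For reference, pointing the history server at S3 generally comes down to the log-directory setting; a sketch of the relevant configuration (the bucket name is illustrative, and the hadoop-aws/s3a jars must be on the classpath):

```properties
# spark-defaults.conf (bucket name is a placeholder)
spark.history.fs.logDirectory  s3a://my-bucket/spark-event-logs
spark.eventLog.enabled         true
spark.eventLog.dir             s3a://my-bucket/spark-event-logs
```

With this in place, `./sbin/start-history-server.sh` reads event logs directly from the s3a path instead of a local directory.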
Re: subscribe
https://www.youtube.com/watch?v=umDr0mPuyQc On Sat, Aug 22, 2015 at 8:01 AM, Ted Yu yuzhih...@gmail.com wrote: See http://spark.apache.org/community.html Cheers On Sat, Aug 22, 2015 at 2:51 AM, Lars Hermes li...@hermes-it-consulting.de wrote: subscribe
Re: subscribe
See http://spark.apache.org/community.html Cheers On Sat, Aug 22, 2015 at 2:51 AM, Lars Hermes li...@hermes-it-consulting.de wrote: subscribe - To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional
Python Kafka support?
Hi, I read on this page http://spark.apache.org/docs/latest/streaming-kafka-integration.html about python support for "receiverless" kafka integration (Approach 2), but it says it's incomplete as of version 1.4. Has this been updated in version 1.5?
Re: JMX with Spark
Have you read this? https://spark.apache.org/docs/latest/monitoring.html *Romi Kuntsman*, *Big Data Engineer* http://www.totango.com On Thu, Nov 5, 2015 at 2:08 PM, Yogesh Vyas <informy...@gmail.com> wrote: > Hi, > How we can use JMX and JConsole to monitor our Spark
Re: unsubscribe
Hi Ntale, To unsubscribe from the user list, please send a message to user-unsubscr...@spark.apache.org as described here: http://spark.apache.org/community.html#mailing-lists. Thanks, -Rick Ntale Lukama <ntaleluk...@gmail.com> wrote on 09/23/2015 04:34:48 AM: > From: Ntale Lukama
Re: I want to unsubscribe
to unsubscribe, send an email to user-unsubscr...@spark.apache.org On Tue, Apr 5, 2016 at 4:50 PM, Ranjana Rajendran <ranjana.rajend...@gmail.com> wrote: > I get to see the threads in the public mailing list. I don't want so many > messages in my inbox. I want to
Spark ML Interaction
Hi, Did anyone here manage to write an example of the following ML feature transformer http://spark.apache.org/docs/latest/api/java/org/apache/spark/ml/feature/Interaction.html ? It's not documented on the official Spark ML features pages but it can be found in the package API javadocs. Thanks
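Lacking an official example, the transformer's output can be illustrated in plain Python: Interaction emits the products of every combination of one value taken from each input column. The helper below is a hypothetical approximation of that behavior, not the Spark API itself:

```python
from itertools import product
from functools import reduce

def interaction(*vectors):
    """Sketch of what ml.feature.Interaction computes: the elementwise
    products of every combination of one value from each input vector
    (hypothetical helper, not the Spark ML transformer)."""
    return [reduce(lambda a, b: a * b, combo) for combo in product(*vectors)]

print(interaction([2.0, 3.0], [10.0]))      # [20.0, 30.0]
print(interaction([1.0, 2.0], [3.0, 4.0]))  # [3.0, 4.0, 6.0, 8.0]
```

In Spark the same crossing is applied row-wise to the vector columns named in setInputCols.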
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
Literal.create(key, StringType))), expected) + } + +checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST") +checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH") +checkParseUrl("query=1&q
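The cases exercised by these tests can be approximated with Python's standard urllib.parse, which may help readers follow what parse_url extracts (this mirrors, but does not reproduce, the Spark SQL implementation):

```python
from urllib.parse import urlparse, parse_qs

# Plain-Python approximation of the parse_url cases above.
url = "http://spark.apache.org/path?query=1"
parsed = urlparse(url)

print(parsed.netloc)                    # spark.apache.org  ~ parse_url(url, 'HOST')
print(parsed.path)                      # /path             ~ parse_url(url, 'PATH')
print(parsed.query)                     # query=1           ~ parse_url(url, 'QUERY')
print(parse_qs(parsed.query)["query"])  # ['1']             ~ parse_url(url, 'QUERY', 'query')
```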
Re: Already subscribed to user@spark.apache.org
On Mon, Nov 7, 2016 at 1:26 PM, <user-h...@spark.apache.org> wrote: > Hi! This is the ezmlm program. I'm managing the > user@spark.apache.org mailing list. > > Acknowledgment: The address > >maitraytha...@gmail.com > > was already on the user mailing list
Re: Unsubscribe
To unsubscribe e-mail: user-unsubscr...@spark.apache.org This is explained here: http://spark.apache.org/community.html#mailing-lists On Thu, Dec 8, 2016 at 12:54 AM Roger Holenweger <ro...@lotadata.com>
Re: Unsubscribe
To unsubscribe e-mail: user-unsubscr...@spark.apache.org This is explained here: http://spark.apache.org/community.html#mailing-lists On Thu, Dec 8, 2016 at 12:12 AM Ajit Jaokar <ajit.jao...@futuretext.com>
Re: unsubscribe
Once you are in, there is no way out… :-) > On Dec 27, 2016, at 7:37 PM, Kyle Kelley <rgb...@gmail.com> wrote: > > You are now in position 238 for unsubscription. If you wish for your > subscription to occur immediately, please email > dev-unsubscr...@spark.apache
[GitHub] spark issue #19263: Optionally add block updates to log
Github user jerryshao commented on the issue: https://github.com/apache/spark/pull/19263 @michaelmior would you please follow the instruction (https://spark.apache.org/contributing.html) to update PR title and create a corresponding JIRA, thanks
[GitHub] spark issue #19268: Incorrect Metric reported in MetricsReporter.scala
Github user srowen commented on the issue: https://github.com/apache/spark/pull/19268 No way to make the change without a PR, so no, leave it. http://spark.apache.org/contributing.html --- - To unsubscribe, e
[GitHub] spark issue #19283: Update quickstart python dataset example
Github user srowen commented on the issue: https://github.com/apache/spark/pull/19283 Have a look at http://spark.apache.org/contributing.html -- maybe prefix the title with `[MINOR][DOCS]` for completeness? Are there other instances of this same issue in the Pyspark docs
[GitHub] spark issue #19489: The declared package "org.apache.hive.service.cli.thrift...
Github user srowen commented on the issue: https://github.com/apache/spark/pull/19489 This should be closed no matter what. Please start at the web site. http://spark.apache.org/community.html --- - To unsubscribe
[GitHub] spark issue #19145: add logic to test whether the complete container has bee...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/19145 Could you fix the title to be a form, `[SPARK-][COMPONENT] Title`, as described in http://spark.apache.org/contributing.html
[GitHub] spark issue #19126: [SPARK-21915][ML][PySpark]Model 1 and Model 2 ParamMaps ...
Github user marktab commented on the issue: https://github.com/apache/spark/pull/19126 @srowen since I am new to this review process, should I be seeing the change at http://spark.apache.org/docs/latest/ml-pipeline.html
[GitHub] spark issue #19347: Branch 2.2 sparkmlib's output of many algorithms is not ...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/19347 @ithjz, If you'd like to ask a question, please ask this to the mailing list (see https://spark.apache.org/community.html). Could you close this please
[GitHub] spark issue #19321: [SPARK-22100] [SQL] Make percentile_approx support numer...
Github user gatorsmile commented on the issue: https://github.com/apache/spark/pull/19321 Could you document the change in the output type of `percentile_approx ` in the following section? https://spark.apache.org/docs/latest/sql-programming-guide.html#migration-guide
[GitHub] spark issue #19335: mapPartitions Api
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/19335 @listenLearning, If you'd like to ask a question, please ask this to the mailing list (see https://spark.apache.org/community.html
[GitHub] spark issue #18833: [SPARK-21625][SQL] sqrt(negative number) should be null.
Github user gatorsmile commented on the issue: https://github.com/apache/spark/pull/18833 Can we document this difference in https://spark.apache.org/docs/latest/sql-programming-guide.html#compatibility-with-apache-hive
[GitHub] spark issue #20027: Branch 2.2
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/20027 Hey @Maple-Wang, could you close this and file an issue via JIRA please (see http://spark.apache.org/contributing.html
[GitHub] spark issue #19515: [SPARK-22287][MESOS] SPARK_DAEMON_MEMORY not honored by ...
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/19515 @pmackles perhaps you could email this to d...@spark.apache.org to get some visibility to this and hopefully someone else on the mesos side can review
[GitHub] spark issue #895: [SPARK-1940] Enabling rolling of executor logs, and automa...
Github user wbowditch commented on the issue: https://github.com/apache/spark/pull/895 Can these configuration additions be added to Spark Documentation (https://spark.apache.org/docs/latest/configuration.html
[GitHub] spark issue #21264: Branch 2.2
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21264 @yotingting, mind closing this and opening an issue in JIRA or asking on the mailing list please? I think you can get a better answer there. Please check out https://spark.apache.org
[GitHub] spark issue #21162: shaded guava is not used anywhere, seems guava is not sh...
Github user srowen commented on the issue: https://github.com/apache/spark/pull/21162 CC @vanzin but it's more complex than that as far as I know. It is still shaded. You need to read https://spark.apache.org/contributing.html
[GitHub] spark issue #21419: Branch 2.2
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21419 @gentlewangyu, please close this and read https://spark.apache.org/contributing.html. Questions should go to mailing list and issues should be filed in JIRA
[GitHub] spark issue #21496: docs: fix typo
Github user srowen commented on the issue: https://github.com/apache/spark/pull/21496 Because it's obviously a test problem elsewhere, I'll merge this. @tomsaleeba please see https://spark.apache.org/contributing.html for the future
[GitHub] spark issue #21092: [SPARK-23984][K8S] Initial Python Bindings for PySpark o...
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/21092 @lucashu1 please send your question to stackoverflow or u...@spark.apache.org! --- - To unsubscribe, e-mail: reviews
[GitHub] spark issue #21438: Improve SQLAppStatusListener.aggregateMetrics() too show
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/21438 please see http://spark.apache.org/contributing.html on "Pull Request" also fix the PR title to start with `[S
[GitHub] spark issue #21597: [SPARK-24603] Fix findTightestCommonType reference in co...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21597 @Fokko, thanks for bearing with it. (see also https://spark.apache.org/contributing.html). --- - To unsubscribe, e-mail
[GitHub] spark issue #21669: [SPARK-23257][K8S][WIP] Kerberos Support for Spark on K8...
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/21669 btw, have you sent out this + doc to d...@spark.apache.org? --- - To unsubscribe, e-mail: reviews-unsubscr
[GitHub] spark issue #21695: Maintining an order
Github user srowen commented on the issue: https://github.com/apache/spark/pull/21695 As noted above, please see http://spark.apache.org/contributing.html --- - To unsubscribe, e-mail: reviews-unsubscr
[GitHub] spark issue #21207: SPARK-24136: Fix MemoryStreamDataReader.next to skip sle...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21207 @arunmahadevan, not a big deal but mind if I ask to fix the PR title to `[SPARK-24136][SS] blabla`? It's actually encouraged in the guide - https://spark.apache.org/contributing.html
[GitHub] spark issue #20188: [SPARK-22993][ML] Clarify HasCheckpointInterval param do...
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/20188 Actually in R the setCheckpointDir method is not attached to the SparkContext; I'd leave it as "not set" or "not set in the session" https://spark.apache.org/docs/latest/api/R
[GitHub] spark issue #20023: [SPARK-22036][SQL] Decimal multiplication with high prec...
Github user gatorsmile commented on the issue: https://github.com/apache/spark/pull/20023 Since this introduces a behavior change, please update the [migration guide of Spark SQL](https://spark.apache.org/docs/latest/sql-programming-guide.html#migration-guide
[GitHub] spark issue #20212: Update rdd-programming-guide.md
Github user srowen commented on the issue: https://github.com/apache/spark/pull/20212 OK consider that and http://spark.apache.org/contributing.html for the future. I'll just merge this. --- - To unsubscribe, e
[GitHub] spark issue #20372: Improved block merging logic for partitions
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/20372 please see https://spark.apache.org/contributing.html open a JIRA and update this PR? --- - To unsubscribe, e-mail
[GitHub] spark issue #19431: [SPARK-18580] [DStreams] [external/kafka-0-10][external/...
Github user akonopko commented on the issue: https://github.com/apache/spark/pull/19431 @gaborgsomogyi `spark.streaming.backpressure.initialRate` is already documented in here: https://spark.apache.org/docs/latest/configuration.html But was mistakenly not included
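For readers landing here, the setting under discussion sits alongside the main backpressure switch; an illustrative spark-defaults.conf fragment (the rate value is a placeholder):

```properties
# spark-defaults.conf (values are illustrative)
spark.streaming.backpressure.enabled        true
spark.streaming.backpressure.initialRate    1000
```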
[GitHub] spark issue #21884: k8s: explicitly expose ports on driver container
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/21884 can you update the format of the title and description as described here "Pull Request" in https://spark.apache.org/contrib
[GitHub] spark issue #21893: Support selecting from partitioned tabels with partition...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21893 Mind filing a JIRA please? Please see http://spark.apache.org/contributing.html --- - To unsubscribe, e-mail: reviews
[GitHub] spark issue #21893: Support selecting from partitioned tabels with partition...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21893 Please review http://spark.apache.org/contributing.html before opening a pull request. --- - To unsubscribe, e-mail
[GitHub] spark issue #21921: [SPARK-24971][SQL] remove SupportsDeprecatedScanRow
Github user rdblue commented on the issue: https://github.com/apache/spark/pull/21921 @cloud-fan, I thought it was a requirement to have a committer +1 before merging. Or is this [list of committers](https://spark.apache.org/committers.html) out of date
[GitHub] spark issue #21988: [SPARK-25003][PYSPARK][BRANCH-2.2] Use SessionExtensions...
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/21988 we always open against master and backport if agreed upon. this is documented here https://spark.apache.org/contributing.html
[GitHub] spark issue #22116: Update configuration.md
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/22116 @KraFusion, mind double checking if there are other instances of the same issue, and fixing the PR title to reflect the change? Also should be good to read https://spark.apache.org/contributing.html even though it's
[GitHub] spark issue #21812: SPARK UI K8S : this parameter's illustration(spark.kuber...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21812 @hehuiyuan, please ask a question via a mailing list. See also https://spark.apache.org/community.html --- - To unsubscribe
[GitHub] spark issue #21767: SPARK-24804 There are duplicate words in the test title ...
Github user srowen commented on the issue: https://github.com/apache/spark/pull/21767 yeah, please avoid PRs that are this trivial, it's just not worth the overhead. But I merged it this time. Also please read https://spark.apache.org/contributing.html
[GitHub] spark issue #21828: Update regression.py
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/21828 @woodthom2, if you have some plans to update this PR quite soon, please see https://spark.apache.org/contributing.html and proceed. Otherwise, I would suggest to leave this closed so
[GitHub] spark issue #21755: Doc fix: The Imputer is an Estimator
Github user srowen commented on the issue: https://github.com/apache/spark/pull/21755 I'm sure the failure is spurious, so merged to master. PS see https://spark.apache.org/contributing.html --- - To unsubscribe
[GitHub] spark issue #20370: Changing JDBC relation to better process quotes
Github user gatorsmile commented on the issue: https://github.com/apache/spark/pull/20370 @conorbmurphy Could you create a JIRA and follow [the instruction](https://spark.apache.org/contributing.html) to make a contribution
[GitHub] spark issue #20790: AccumulatorV2 subclass isZero scaladoc fix
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/20790 Wait .. I just found you opened a JIRA - SPARK-23642. Please link it by `[SPARK-23642][DOCS] ...`. see https://spark.apache.org/contributing.html
[GitHub] spark issue #20897: [MINOR][DOC] Fix a few markdown typos
Github user Lemonjing commented on the issue: https://github.com/apache/spark/pull/20897 see http://spark.apache.org/docs/latest/ml-features.html#elementwiseproduct --- - To unsubscribe, e-mail: reviews-unsubscr
[GitHub] spark issue #20669: [SPARK-22839][K8S] Remove the use of init-container for ...
Github user foxish commented on the issue: https://github.com/apache/spark/pull/20669 There's a section explaining it at the bottom of https://spark.apache.org/committers.html --- - To unsubscribe, e-mail: reviews
[GitHub] spark issue #22891: SPARK-25881
Github user kiszk commented on the issue: https://github.com/apache/spark/pull/22891 Thank you for your contribution. Could you please write appropriate title and descriptions based on http://spark.apache.org/contributing.html
[GitHub] spark issue #22893: One part of Spark MLlib Kmean Logic Performance problem
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/22893 Please fix the PR title as described in https://spark.apache.org/contributing.html and read it. --- - To unsubscribe, e
[GitHub] spark issue #22997: SPARK-25999: make-distribution.sh failure with --r and -...
Github user felixcheung commented on the issue: https://github.com/apache/spark/pull/22997 btw, please see the page https://spark.apache.org/contributing.html and particularly "Pull Request" on
[GitHub] spark issue #22596: Fix lint failure in 2.2
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/22596 Can you link the JIRA https://issues.apache.org/jira/browse/SPARK-25576 ? Please see https://spark.apache.org/contributing.html
[GitHub] spark issue #23246: [SPARK-26292][CORE]Assert statement of currentPage may b...
Github user srowen commented on the issue: https://github.com/apache/spark/pull/23246 It's not clear this is where it should be from the description. Please review https://spark.apache.org/contributing.html This one should be closed
[GitHub] spark issue #23107: small question in Spillable class
Github user srowen commented on the issue: https://github.com/apache/spark/pull/23107 Please send questions to u...@spark.apache.org; this should be closed. --- - To unsubscribe, e-mail: reviews-unsubscr
[GitHub] spark issue #23107: small question in Spillable class
Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/23107 Hi, @Charele . Could you read http://spark.apache.org/community.html ? You had better close
Re: unsubscribe
Hi, Sonu. You can send email to user-unsubscr...@spark.apache.org with subject "(send this email to unsubscribe)" to unsubscribe from this mailing list[1]. Regards. [1] https://spark.apache.org/community.html 2019-05-27 2:01 GMT+07.00, Sonu Jyotshna : > > -- -- Salam H
Re: unsubscribe
please read this to unsubscribe: https://spark.apache.org/community.html TL;DR: user-unsubscr...@spark.apache.org so no mail to the list On 4/30/19 6:38 AM, Amrit Jangid wrote: - To unsubscribe e-mail: user-unsubscr
How to unsubscribe
Hi guys - To unsubscribe e-mail: user-unsubscr...@spark.apache.org<mailto:user-unsubscr...@spark.apache.org> From: Fred Liu Sent: Wednesday, May 6, 2020 10:10 AM To: user@spark.apache.org Subject: Unsubscribe [Exte
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
urlStr: String, +partToExtract: String, +key: String): Unit = { + checkEvaluation( +ParseUrl(Seq(Literal(urlStr), Literal(partToExtract), Literal(key))), expected) + } + +checkParseUrl("spark.apache.org", "http
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
expected: String, +urlStr: String, +partToExtract: String, +key: String): Unit = { + checkEvaluation( +ParseUrl(Seq(Literal(urlStr), Literal(partToExtract), Literal(key))), expected) + } + +checkParseUrl("spark.apac
[spark-website] branch asf-site updated: Fix 2-4-6 web build
2,7 @@ -http://localhost:4000/community.html" /> +https://spark.apache.org/community.html" /> diff --git a/site/mailing-lists.html b/site/news/spark-2-4-6.html similarity index 94% copy from site/mailing-lists.html copy to site/news/spark-2-4-6.html index 2f4a88f..53d1399 100644 -
[spark-website] branch asf-site updated: Use ASF mail archives not defunct nabble links
has already been answered - - Search the nabble archive for - http://apache-spark-user-list.1001560.n3.nabble.com/">u...@spark.apache.org + - Search the ASF archive for + https://lists.apache.org/list.html?u...@spark.apache.org">u...@spark.apache.org - Please follow the StackOverf
[jira] [Updated] (SPARK-40322) Fix all dead links
||Source link text|| |-1 Not found: The server name or address could not be resolved|[http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark]|[Using Parquet and Scrooge with Spark|https://spark.apache.org/documentation.html]| |-1 Not found: The server name or address could not be resolved
Difference among batchDuration, windowDuration, slideDuration
When reading the Spark Streaming API, I'm confused by the three different durations: StreamingContext(conf: SparkConf http://spark.apache.org/docs/latest/api/scala/org/apache/spark/SparkConf.html , batchDuration: Duration http://spark.apache.org/docs/latest/api/scala/org/apache/spark/streaming
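The three durations are related by a multiple-of constraint: windowDuration and slideDuration must each be an integer multiple of the StreamingContext's batchDuration. A plain-Python sketch of that check (hypothetical helper, durations in seconds):

```python
def validate_window(batch_duration, window_duration, slide_duration):
    """Sketch of the constraint Spark Streaming enforces on windowed
    operations (hypothetical helper): windowDuration and slideDuration
    must each be an integer multiple of batchDuration."""
    for name, d in [("windowDuration", window_duration),
                    ("slideDuration", slide_duration)]:
        if d % batch_duration != 0:
            raise ValueError(
                f"{name} ({d}s) must be a multiple of batchDuration ({batch_duration}s)")

# ok: a 30s window sliding every 20s over 10s batches
validate_window(10, 30, 20)
```

So batchDuration is fixed when the context is created, while window and slide durations are chosen per windowed operation, subject to this constraint.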
Re: groupBy gives non deterministic results
Great. And you should ask questions on the user@spark.apache.org mailing list. I believe many people don't subscribe to the incubator mailing list now. -- Ye Xianjin Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Wednesday, September 10, 2014 at 6:03 PM, redocpot wrote: Hi, I am using
RE: Sort based shuffle not working properly?
Nitin, Suing Spark is not going to help. Perhaps you should sue someone else :-) Just kidding! Mohammed -Original Message- From: nitinkak001 [mailto:nitinkak...@gmail.com] Sent: Tuesday, February 3, 2015 1:57 PM To: user@spark.apache.org Subject: Re: Sort based shuffle not working
Re: small error in the docs?
Yes that's a typo. The API docs and source code are correct though. http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions That and your IDE should show the correct signature. You can open a PR to fix the typo in https://spark.apache.org/docs/latest
RE: spark 1.2 compatibility
Yes. It's compatible with HDP 2.1 -Original Message- From: bhavyateja [mailto:bhavyateja.potin...@gmail.com] Sent: Friday, January 16, 2015 3:17 PM To: user@spark.apache.org Subject: spark 1.2 compatibility Is spark 1.2 is compatibly with HDP 2.1 -- View this message in context
Re: missing explanation of cache in the documentation of cluster overview
It's explained at https://spark.apache.org/docs/latest/programming-guide.html and its configuration at https://spark.apache.org/docs/latest/configuration.html Have a read over all the docs first. On Mon, Mar 9, 2015 at 9:24 AM, Hui WANG hedonp...@gmail.com wrote: Hello Guys, I'm reading
Re: Running Spark jobs via oozie
We have gotten it to work... --- Original Message --- From: nitinkak001 nitinkak...@gmail.com Sent: March 3, 2015 7:46 AM To: user@spark.apache.org Subject: Re: Running Spark jobs via oozie I am also starting to work on this one. Did you get any solution to this issue? -- View this message
Announcing Spark 1.3.1 and 1.2.2
Hi All, I'm happy to announce the Spark 1.3.1 and 1.2.2 maintenance releases. We recommend all users on the 1.3 and 1.2 Spark branches upgrade to these releases, which contain several important bug fixes. Download Spark 1.3.1 or 1.2.2: http://spark.apache.org/downloads.html Release notes: 1.3.1
broken link on Spark Programming Guide
in the current Programming Guide: https://spark.apache.org/docs/1.3.0/programming-guide.html#actions under Actions, the Python link goes to: https://spark.apache.org/docs/1.3.0/api/python/pyspark.rdd.RDD-class.html which is 404 which I think should be: https://spark.apache.org/docs/1.3.0/api
RE: coalesce on dataFrame
It's in spark 1.4.0, or should be at least: https://issues.apache.org/jira/browse/SPARK-6972 Ewan -Original Message- From: Hafiz Mujadid [mailto:hafizmujadi...@gmail.com] Sent: 01 July 2015 08:23 To: user@spark.apache.org Subject: coalesce on dataFrame How can we use coalesce(1, true
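The distinguishing feature of coalesce is that it merges existing partitions down to a smaller count without a shuffle; a plain-Python sketch of that merging (hypothetical helper, not the DataFrame API):

```python
def coalesce(partitions, n):
    """Sketch of shuffle-free coalesce: existing partitions are merged
    into n buckets whole, without redistributing individual records
    (hypothetical helper, not the DataFrame/RDD API)."""
    buckets = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        buckets[i % n].extend(part)  # each input partition lands in one bucket
    return buckets

print(coalesce([[1, 2], [3], [4, 5], [6]], 2))  # [[1, 2, 4, 5], [3, 6]]
```

Passing shuffle=true (the second argument mentioned in the JIRA) instead triggers a full redistribution, which is what repartition does.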