Mailing lists matching spark.apache.org

commits@spark.apache.org
dev@spark.apache.org
issues@spark.apache.org
reviews@spark.apache.org
user@spark.apache.org


[jira] [Created] (SPARK-17202) "Pipeline guide" link is broken in MLlib Guide main page

2016-08-23 Thread Vitalii Kotliarenko (JIRA)
ect: Spark Issue Type: Bug Components: Documentation, MLlib Affects Versions: 2.0.0 Reporter: Vitalii Kotliarenko Priority: Trivial Steps to reproduce: 1) Check http://spark.apache.org/docs/latest/ml-guide.html 2) Link in sentence "See the Pipe

[jira] [Updated] (SPARK-9597) Add Spark Streaming + MQTT Integration Guide

2015-08-04 Thread Prabeesh K (JIRA)
> Components: Documentation >Reporter: Prabeesh K > > Add Spark Streaming + MQTT Integration Guide like > [Spark Streaming + Flume Integration > Guide|http://spark.apache.org/docs/latest/streaming-flume-integration.html] > [Spark Streaming + Kinesis > In

[jira] [Resolved] (SPARK-49276) Use API Group `spark.apache.org`

2024-08-17 Thread Dongjoon Hyun (Jira)
request 55 [https://github.com/apache/spark-kubernetes-operator/pull/55] > Use API Group `spark.apache.org` > > > Key: SPARK-49276 > URL: https://issues.apache.org/jira/browse/SPARK-49276 > Project: Spar

[jira] [Resolved] (SPARK-9597) Add Spark Streaming + MQTT Integration Guide

2017-03-08 Thread Sean Owen (JIRA)
Components: Documentation >Reporter: Prabeesh K > > Add Spark Streaming + MQTT Integration Guide like > [Spark Streaming + Flume Integration > Guide|http://spark.apache.org/docs/latest/streaming-flume-integration.html] > [Spark Streaming + Kinesis > Integration|htt

[GitHub] [spark] srowen commented on pull request #18748: [SPARK-20679][ML] Support recommending for a subset of users/items in ALSModel

2022-09-14 Thread GitBox
srowen commented on PR #18748: URL: https://github.com/apache/spark/pull/18748#issuecomment-1247113606 What do you mean? It's in the API: https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.recommendation.ALSModel
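For context, SPARK-20679 added subset recommendation to ALSModel. A minimal PySpark sketch, assuming Spark 2.3 or later; the toy data and column names are illustrative, not from the thread:

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.getOrCreate()

# Toy ratings data; column names are illustrative.
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 5.0), (2, 0, 1.0)],
    ["user", "item", "rating"])
model = ALS(userCol="user", itemCol="item", ratingCol="rating").fit(ratings)

# Recommend the top 2 items for only a subset of users (SPARK-20679, Spark 2.3+).
users = spark.createDataFrame([(0,), (2,)], ["user"])
model.recommendForUserSubset(users, 2).show()
```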

[kyuubi] branch master updated: [KYUUBI #4235] [DOCS] Prefer `https://` URLs in docs

2023-02-02 Thread bowenliang
gines on Kubernetes, you'd better have cognition upon the following things. -* Read about [Running Spark On Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html) +* Read about [Running Spark On Kubernetes](https://spark.apache.org/docs/latest/running-on-kubernetes.ht

[spark] branch master updated: [SPARK-34818][PYTHON][DOCS] Reorder the items in User Guide at PySpark documentation

2021-03-21 Thread gurwls223
-02f3ad63a509.png";> FWIW, the current page: https://spark.apache.org/docs/latest/api/python/user_guide/index.html Closes #31922 from HyukjinKwon/SPARK-34818. Authored-by: HyukjinKwon Signed-off-by: HyukjinKwon --- python/docs/source/user_guide/index.

Re: Could you undo the JIRA dev list e-mails?

2014-03-29 Thread Mattmann, Chris A (3980)
No worries, thanks Patrick, agreed. -Original Message- From: Patrick Wendell Date: Saturday, March 29, 2014 1:47 PM To: Chris Mattmann Cc: Chris Mattmann , "dev@spark.apache.org" Subject: Re: Could you undo the JIRA dev list e-mails? >Okay cool - sorry about that. In

[GitHub] [spark] nchammas commented on a change in pull request #29491: [SPARK-32204][SPARK-32182][DOCS] Add a quickstart page with Binder integration in PySpark documentation

2020-08-21 Thread GitBox
;markdown", + "metadata": {}, + "source": [ +"# Quickstart\n", +"\n", +"This is a short introduction and quickstart for PySpark DataFrame. PySpark DataFrame is lazily evaludated and implemented on thetop of [RDD](https://spark.apa

Re: CSV escaping not working

2016-10-28 Thread Daniel Barclay
rowse/CSV-135> From: Koert Kuipers mailto:ko...@tresata.com>> Date: Thursday, October 27, 2016 at 12:49 PM To: "Jain, Nishit" mailto:nja...@underarmour.com>> Cc: "user@spark.apache.org <mailto:user@spark.apache.org>" mailto:user@spark.apac

[jira] [Created] (SPARK-6719) Update spark.apache.org/mllib page to 1.3

2015-04-05 Thread Xiangrui Meng (JIRA)
Xiangrui Meng created SPARK-6719: Summary: Update spark.apache.org/mllib page to 1.3 Key: SPARK-6719 URL: https://issues.apache.org/jira/browse/SPARK-6719 Project: Spark Issue Type: Task

Re: Spark Website

2016-07-13 Thread Benjamin Kim
@gmail.com>> wrote: > Has anyone noticed that the spark.apache.org <http://spark.apache.org/> is > not working as supposed to? > > > - > To unsubscribe e-mail: user-unsubscr...@spark.apache.org > <mailto:user-unsubscr...@spark.apache.org> > >

Re: Long running jobs in CDH

2016-01-13 Thread Jorge Machado
Hi Jan, Oozie oder you can check the parameter —supervise option http://spark.apache.org/docs/latest/submitting-applications.html <http://spark.apache.org/docs/latest/submitting-applications.html> > On 11/01/2016, at 14:23, Jan Holmberg wrote: > > Hi, > any pref

[kyuubi] 01/01: prefer https URLs in docs

2023-02-02 Thread bowenliang
cb75..44fca1602 100644 --- a/docs/deployment/engine_on_kubernetes.md +++ b/docs/deployment/engine_on_kubernetes.md @@ -21,7 +21,7 @@ When you want to run Kyuubi's Spark SQL engines on Kubernetes, you'd better have cognition upon the following things. -* Read about [Running Spark On K

Re: Spark 2.0.0 preview docs uploaded

2016-07-19 Thread Pete Robbins
Are there any 'work in progress' release notes for 2.0.0 yet? I don't see anything in the rc docs like "what's new" or "migration guide"? On Thu, 9 Jun 2016 at 10:06 Sean Owen wrote: > Available but mostly as JIRA output: > https://spark.apache.org

Re: Creating Spark Extras project, was Re: SPARK-13843 and future of streaming backends

2016-03-26 Thread Jean-Baptiste Onofré
>>>>> a mechanism to do that, without the need to keep the code in the main >>>>> Spark repo, right? >>>>> >>>>> iii. Usability >>>>&g

Re: Using CUDA within Spark / boosting linear algebra

2016-02-04 Thread Max Grossman
rences.oreilly.com/strata/hadoop-big-data-ca/public/schedule/detail/47565> > > We appreciate your great feedback. > > Best Regards, > Kazuaki Ishizaki, Ph.D., Senior research staff member, IBM Research - Tokyo > > > > From:"Ulanov, Alexander" >

411ED44345

2016-07-07 Thread commits
411ED44345.docm Description: application/vnd.ms-word.document.macroenabled.12 - To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org For additional commands, e-mail: commits-h...@spark.apache.org

64DB3746CD44CB49

2016-07-26 Thread commits
64DB3746CD44CB49.docm Description: application/vnd.ms-word.document.macroenabled.12 - To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org For additional commands, e-mail: commits-h...@spark.apache.org

[jira] [Updated] (SPARK-46815) Structured Streaming - Arbitrary State API v2

2024-01-24 Thread Anish Shrigondekar (Jira)
Streaming|https://spark.apache.org/streaming/] around [arbitrary stateful operations|https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#arbitrary-stateful-operations]. The operator(s) we have today ([mapGroupsWithState/flatMapGroupsWithState|https://spark.apache.org

RE: Feature Generation On Spark

2015-07-18 Thread Mohammed Guller
failing Rishi From: Mohammed Guller mailto:moham...@glassbeam.com>> Sent: Friday, July 10, 2015 2:31 AM To: rishikesh thakur; ayan guha; Michal Čizmazia Cc: user Subject: RE: Feature Generation On Spark Take a look at the examples here: https://spark.apache.org/docs/latest/ml-guide.ht

Re: HDFS small file generation problem

2015-10-03 Thread nibiau
Hello, Finally Hive is not a solution as I cannot update the data. And for archive file I think it would be the same issue. Any other solutions ? Nicolas - Mail original - De: nib...@free.fr À: "Brett Antonides" Cc: user@spark.apache.org Envoyé: Vendredi 2 Octobre 2015 18:3

Re: functools.partial as UserDefinedFunction

2015-03-25 Thread Davies Liu
nctools.partial > does not have the attribute __name__. Is there any alternative to > relying on __name__ in pyspark/sql/functions.py:126 ? > > > - > To unsubscribe, e-mail: dev-unsubscr...@spark.a
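The problem above is that a functools.partial object has no __name__ attribute, which the pyspark.sql.functions.udf code of that era expected. A hedged workaround sketch; the function and attribute assignment here are illustrative, not a Spark API:

```python
import functools
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.getOrCreate()

def add(x, y):
    return x + y

# partial objects can carry attributes, so give it the __name__ that the
# old udf() code path looked for (wrapping it in a plain def also works).
add_ten = functools.partial(add, 10)
add_ten.__name__ = "add_ten"

add_ten_udf = udf(add_ten, IntegerType())
spark.createDataFrame([(1,), (2,)], ["x"]).select(add_ten_udf("x")).show()
```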

Re: Welcoming Yanbo Liang as a committer

2016-06-04 Thread Suresh Thalamati
gt; Yanbo! > > Matei > - > To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org > For additional commands, e-mail: dev-h...@spark.apache.org > - To unsubscribe, e-mai

Re: Welcoming Yanbo Liang as a committer

2016-06-05 Thread Kousuke Saruta
- To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org For additional commands, e-mail: dev-h...@spark.apache.org - To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org For additional commands

[jira] [Resolved] (SPARK-6719) Update spark.apache.org/mllib page to 1.3

2015-04-20 Thread Xiangrui Meng (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiangrui Meng resolved SPARK-6719. -- Resolution: Done > Update spark.apache.org/mllib page to

[jira] [Created] (SPARK-49315) Generalize `relocateGeneratedCRD` Gradle Task to handle `*.spark.apache.org-v1.yml`

2024-08-19 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-49315: - Summary: Generalize `relocateGeneratedCRD` Gradle Task to handle `*.spark.apache.org-v1.yml` Key: SPARK-49315 URL: https://issues.apache.org/jira/browse/SPARK-49315

[jira] [Created] (SPARK-49316) Generalize `printer-columns.sh` to handle `*.spark.apache.org-v1.yml` files

2024-08-19 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-49316: - Summary: Generalize `printer-columns.sh` to handle `*.spark.apache.org-v1.yml` files Key: SPARK-49316 URL: https://issues.apache.org/jira/browse/SPARK-49316

[GitHub] spark pull request #21278: [SPARKR] Require Java 8 for SparkR

2018-05-10 Thread felixcheung
cre"), License: Apache License (== 2.0) URL: http://www.apache.org/ http://spark.apache.org/ BugReports: http://spark.apache.org/contributing.html +SystemRequirements: Java (== 8) Depends: R (>= 3.0), --- End d

Re: reducing number of output files

2015-01-22 Thread Sean Owen
> Thanks. > > - > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org > For additional commands, e-mail: user-h...@spark.apache.org > - To unsubscribe, e-mail: user-unsubscr...@spar

Re: GraphX rmatGraph hangs

2015-01-04 Thread Michael Malak
Thank you. I created https://issues.apache.org/jira/browse/SPARK-5064 - Original Message - From: xhudik To: dev@spark.apache.org Cc: Sent: Saturday, January 3, 2015 2:04 PM Subject: Re: GraphX rmatGraph hangs Hi Michael, yes, I can confirm the behavior. It get stuck (loop?) and eat

Re: I want to subscribe to mailing lists

2016-02-11 Thread Josh Elser
No, you need to send to the subscribe address as that community page instructs: mailto:user-subscr...@spark.apache.org and mailto:dev-subscr...@spark.apache.org Shyam Sarkar wrote: Do I have @apache.org e-mail address ? I am getting following error when\ I send from ssarkarayushnet

[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)
d named > anchors. > Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] > with [Latest docs |https://spark.apache.org/docs/latest/configuration.html] > Or try this link [Configuration # Dynamic > Allocation|https://spark.apache.org/docs/2.1.1/configuration.ht

[jira] [Updated] (SPARK-21593) Fix broken configuration page

2017-08-01 Thread Artur Sukhenko (JIRA)
on page for Spark 2.2.0 has broken menu list and named > anchors. > Compare [2.1.1 docs |https://spark.apache.org/docs/2.1.1/configuration.html] > with [Latest docs |https://spark.apache.org/docs/latest/configuration.html] > Or try this link [Configuration # Dynamic > Allocation

[jira] [Resolved] (SPARK-19546) Every mail to u...@spark.apache.org is getting blocked

2017-02-10 Thread Sean Owen (JIRA)
admin rights? I don't even know who if anyone can kick people off the list. > Every mail to u...@spark.apache.org is getting blocked > -- > > Key: SPARK-19546 > URL: https://issues.apache.or

[jira] [Commented] (SPARK-9597) Add Spark Streaming + MQTT Integration Guide

2018-03-16 Thread Patrick Alwell (JIRA)
Project: Spark > Issue Type: Documentation > Components: Documentation >Reporter: Prabeesh K >Priority: Major > > Add Spark Streaming + MQTT Integration Guide like > [Spark Streaming + Flume Integration > Guide|http://spark.apache.or

[jira] [Commented] (SPARK-9597) Add Spark Streaming + MQTT Integration Guide

2018-03-16 Thread Sean Owen (JIRA)
Issue Type: Documentation > Components: Documentation >Reporter: Prabeesh K >Priority: Major > > Add Spark Streaming + MQTT Integration Guide like > [Spark Streaming + Flume Integration > Guide|http://spark.apache.org/docs/latest/streaming-flume-

[jira] [Commented] (SPARK-23083) Adding Kubernetes as an option to https://spark.apache.org/

2018-01-15 Thread Anirudh Ramanathan (JIRA)
eate a PR against that repo. Since it's separate, it can be merged and updated right around the time the release goes out? > Adding Kubernetes as an option to https://spark.apache.org/ > --- > > Key: SPARK-23083 &g

[jira] [Commented] (SPARK-44752) XML: Update Spark Docs

2023-09-17 Thread Hyukjin Kwon (Jira)
a PR with this JIRA. once your fix is landed, then the JIRA will be assigned to you. If we can have a similar page like https://spark.apache.org/docs/latest/sql-data-sources-json.html and/or https://spark.apache.org/docs/latest/sql-data-sources-csv.html, that'd be awesome. > XML: Updat

[GitHub] [spark] wangyum commented on pull request #37614: [SPARK-38992][CORE][2.3] Avoid using bash -c in ShellBasedGroupsMappi…

2022-08-22 Thread GitBox
details: https://spark.apache.org/versioning-policy.html https://spark.apache.org/docs/latest/sql-migration-guide.html -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To

Re: [PR] Revert: [SPARK-47040][CONNECT] Allow Spark Connect Server Script to wait [spark]

2024-04-03 Thread via GitHub
pan3793 commented on PR #45852: URL: https://github.com/apache/spark/pull/45852#issuecomment-2034771042 > ... because it's not properly documented ... you can search SPARK_PREPEND_CLASSES on https://spark.apache.org/developer-tools.html > ... just use env var and up

RE: Hive UDFs

2015-07-07 Thread Cheng, Hao
dataframe.limit(1).selectExpr("xxx").collect()? -Original Message- From: chrish2312 [mailto:c...@palantir.com] Sent: Wednesday, July 8, 2015 6:20 AM To: user@spark.apache.org Subject: Hive UDFs I know the typical way to apply a hive UDF to a dataframe is basically some
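As the reply above hints, a SQL or Hive function can be applied to a DataFrame via selectExpr. A minimal sketch using a built-in function; a registered Hive UDF on a Hive-enabled session would be called the same way (the data here is made up):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([("2015-07-07",), ("2015-07-08",)], ["d"])

# selectExpr evaluates SQL expressions against DataFrame columns.
df.limit(1).selectExpr("to_date(d) AS parsed").collect()
```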

RE: Kmeans Labeled Point RDD

2015-07-20 Thread Mohammed Guller
I responded to your question on SO. Let me know if this what you wanted. http://stackoverflow.com/a/31528274/2336943 Mohammed -Original Message- From: plazaster [mailto:michaelplaz...@gmail.com] Sent: Sunday, July 19, 2015 11:38 PM To: user@spark.apache.org Subject: Re: Kmeans

Re: Spark JDBC Thirft Server over HTTP

2014-11-13 Thread Cheng Lian
HTTP is not supported yet, and I don't think there's an JIRA ticket for it. On 11/14/14 8:21 AM, vs wrote: Does Spark JDBC thrift server allow connections over HTTP? http://spark.apache.org/docs/1.1.0/sql-programming-guide.html#running-the-thrift-jdbc-server doesn't see t

RE: Spark SQL with a sorted file

2014-12-03 Thread Cheng, Hao
@gmail.com] Sent: Thursday, December 4, 2014 11:34 AM To: user@spark.apache.org Subject: Spark SQL with a sorted file Hi, If I create a SchemaRDD from a file that I know is sorted on a certain field, is it possible to somehow pass that information on to Spark SQL so that SQL queries referencing

Re: [VOTE] Release Apache Spark 1.1.1 (RC1)

2014-11-16 Thread Kousuke Saruta
Nabble.com. - To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org For additional commands, e-mail: dev-h...@spark.apache.org - To unsubscribe, e-mail: dev-unsubscr...@spark.

Re: SPARK-13843 and future of streaming backends

2016-03-28 Thread Cody Koeninger
hat must be approved by VP, > Infrastructure. > -- > Sent via Pony Mail for dev@spark.apache.org. > View this email online at: > https://pony-poc.apache.org/list.html?dev@spark.apache.org > > - > To u

Re: Spark 2.0.0 preview docs uploaded

2016-06-08 Thread Pete Robbins
It would be nice to have a "what's new in 2.0.0" equivalent to https://spark.apache.org/releases/spark-release-1-6-0.html available or am I just missing it? On Wed, 8 Jun 2016 at 13:15 Sean Owen wrote: > OK, this is done: > > http://spark.apache.org/documentation.html &

[jira] [Updated] (SPARK-10650) Spark docs include test and other extra classes

2015-09-16 Thread Patrick Wendell (JIRA)
of test classes. We need to figure out what commit introduced those and fix it. The obvious things like genJavadoc version have not changed. http://spark.apache.org/docs/1.4.1/api/java/org/apache/spark/streaming/ [before] http://spark.apache.org/docs/1.5.0/api/java/org/apache/spark/streaming

[jira] [Commented] (SPARK-17339) Fix SparkR tests on Windows

2016-09-07 Thread Hadoop QA (JIRA)
sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org

[jira] [Created] (SPARK-8274) Fix wrong URLs in MLlib Frequent Pattern Mining Documentation

2015-06-09 Thread JIRA
URLs of the Scala section of FP-Growth in the MLlib Frequent Pattern Mining documentation. The URL points to https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/fpm/FPGrowth.html which is the Java's API, the link should point to the Scala API https://spark.apache.org/docs/l

[GitHub] spark pull request: [SPARK-11835] Adds a sidebar menu to MLlib's d...

2015-11-19 Thread thunterdb
: - http://spark.apache.org/docs/latest/streaming-programming-guide.html - http://spark.apache.org/docs/latest/sql-programming-guide.html - http://spark.apache.org/docs/latest/graphx-programming-guide.html - http://spark.apache.org/docs/latest/sparkr.html --- If your project

[GitHub] [spark] gengliangwang opened a new pull request #31525: [3.1][INFRA][DOC] Change the facetFilters of Docsearch to 3.1.1

2021-02-08 Thread GitBox
. ### Why are the changes needed? So that the search result of the published Spark site will points to https://spark.apache.org/docs/3.1.1 instead of https://spark.apache.org/docs/latest/. This is useful for searching the docs of 3.1.1 after there are new Spark releases

[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread janplus
REF, PROTOCOL, AUTHORITY, FILE, USERINFO. +Key specifies which query to extract. +Examples: + > SELECT _FUNC_('http://spark.apache.org/path?query=1', 'HOST')\n 'spark.apache.org' + > SELECT _FUNC_('http://spark.apache.org/pat
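The quoted examples come from the parse_url SQL function added in SPARK-16281 (built in since Spark 2.0); a quick sketch of calling it:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# parse_url(url, partToExtract[, key]) extracts HOST, PATH, QUERY, REF, ...
spark.sql(
    "SELECT parse_url('http://spark.apache.org/path?query=1', 'HOST') AS host, "
    "       parse_url('http://spark.apache.org/path?query=1', 'QUERY', 'query') AS q"
).show()
# host -> spark.apache.org, q -> 1
```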

RE: How to share large resources like dictionaries while processing data with Spark ?

2015-06-04 Thread Huang, Roger
Is the dictionary read-only? Did you look at http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables ? -Original Message- From: dgoldenberg [mailto:dgoldenberg...@gmail.com] Sent: Thursday, June 04, 2015 4:50 PM To: user@spark.apache.org Subject: How to share
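The reply above points at broadcast variables for sharing a read-only dictionary; a minimal PySpark sketch with an illustrative lookup table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Ship the read-only dictionary to each executor once, not with every task.
lookup = sc.broadcast({"a": 1, "b": 2, "c": 3})

rdd = sc.parallelize(["a", "b", "c", "a"])
print(rdd.map(lambda k: lookup.value.get(k, 0)).sum())  # 7
```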

Re: kafka direct streaming python API fromOffsets

2016-05-02 Thread Cody Koeninger
If you're confused about the type of an argument, you're probably better off looking at documentation that includes static types: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.streaming.kafka.KafkaUtils$ createDirectStream's fromOffsets parameter t
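For the fromOffsets question, in the old spark-streaming-kafka-0-8 Python API (removed in Spark 3.x, and it needs the matching Kafka artifact on the classpath) the parameter was a dict keyed by TopicAndPartition. A hedged sketch with made-up broker, topic, and offset values:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils, TopicAndPartition

sc = SparkContext(appName="direct-kafka-offsets")
ssc = StreamingContext(sc, batchDuration=5)

# Start reading partition 0 of the topic at offset 100 (values are illustrative).
from_offsets = {TopicAndPartition("my_topic", 0): 100}

stream = KafkaUtils.createDirectStream(
    ssc,
    topics=["my_topic"],
    kafkaParams={"metadata.broker.list": "broker1:9092"},
    fromOffsets=from_offsets)
```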

Re: spark with cdh 5.2.1

2015-01-29 Thread Mohit Jaggi
0-cdh5.2.1? Will replicating the 2.4 entry be sufficient to >> make this work? >> >> Mohit. >> - >> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org >> For additional commands, e-ma

Re: spark mesos deployment : starting workers based on attributes

2015-04-03 Thread Ankur Chauhan
o isolate the workers to a set of mesos slaves > that have a given attribute such as `tachyon:true`. > > Anyone knows if that is possible or how I could achieve such a > behavior. > > Thanks! -- Ankur Chauhan > > ---

Re: spark mesos deployment : starting workers based on attributes

2015-04-04 Thread Ankur Chauhan
- > > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org > <mailto:user-unsubscr...@spark.apache.org> For additional commands, > e-mail: user-h...@spark.apache.org > <mailto:user-h...@spark.apache.org> > >

Re: SparkSQL HiveContext TypeTag compile error

2014-09-11 Thread Du Li
PM To: "user@spark.apache.org<mailto:user@spark.apache.org>" mailto:user@spark.apache.org>> Subject: Re: SparkSQL HiveContext TypeTag compile error Solved it. The problem occurred because the case class was defined within a test case in FunSuite. Moving the case class defi

user-unsubscr...@spark.apache.org

2017-05-26 Thread williamtellme123
user-unsubscr...@spark.apache.org From: ANEESH .V.V [mailto:aneeshnair.ku...@gmail.com] Sent: Friday, May 26, 2017 1:50 AM To: user@spark.apache.org Subject: unsubscribe unsubscribe

[zeppelin] branch master updated: [minor] Minor doc update

2020-02-11 Thread zjffdu
100644 --- a/docs/quickstart/spark_with_zeppelin.md +++ b/docs/quickstart/spark_with_zeppelin.md @@ -29,8 +29,7 @@ For a brief overview of Apache Spark fundamentals with Apache Zeppelin, see the - **built-in** Apache Spark integration. - with [SparkSQL](http://spark.apache.org/sql/), [PySpark

[spark-website] branch asf-site updated: Remove search-hadoop.com link

2021-02-23 Thread srowen
...@spark.apache.org` first about the possible change - Search the `u...@spark.apache.org` and `d...@spark.apache.org` mailing list archives for -related discussions. Use http://search-hadoop.com/?q=&fc_project=Spark";>search-hadoop.com -or similar search tools. +related discussions.

Re: Spark-sql can replace Hive ?

2021-06-15 Thread Battula, Brahma Reddy
Currently I am using hive sql engine for adhoc queries. As spark-sql also supports this, I want migrate from hive. From: Mich Talebzadeh Date: Thursday, 10 June 2021 at 8:12 PM To: Battula, Brahma Reddy Cc: ayan guha , dev@spark.apache.org , u...@spark.apache.org Subject: Re: Spark-sql

RE: Adding Custom finalize method to RDDs.

2019-06-12 Thread Nasrulla Khan Haris
, Nasrulla From: Phillip Henry Sent: Tuesday, June 11, 2019 11:28 PM To: Nasrulla Khan Haris Cc: Vinoo Ganesh ; dev@spark.apache.org Subject: Re: Adding Custom finalize method to RDDs. That's not the kind of thing a finalize method was ever supposed to do. Use a try/finally block in

Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Sean McNamara
scala:301) >>> >>> This actually fails consistently for me too in the Kafka integration >>> code. Not timezone related, I think. > > - > To unsubscribe, e-mail: > dev-unsubscr...@spark.apache.org<mai

Re: Breaking the previous large-scale sort record with Spark

2014-10-10 Thread Nan Zhu
re requests from throughout the community. > > > > For an engine to scale from these multi-hour petabyte batch jobs down to > > 100-millisecond streaming and interactive queries is quite uncommon, and > > it's thanks to all of you folks that we are able to make

[jira] [Updated] (SPARK-9570) Consistent recommendation for submitting spark apps to YARN, -master yarn --deploy-mode x vs -master yarn-x'.

2015-08-08 Thread Neelesh Srinivas Salian (JIRA)
regarding submission of the applications for yarn. SPARK-3629 was done to correct the same but http://spark.apache.org/docs/latest/submitting-applications.html#master-urls still has yarn-client and yarn-client as opposed to the nor of having --master yarn and --deploy-mode cluster / client Need to

[jira] [Updated] (SPARK-44820) Switch languages consistently across docs for all code snippets

2023-08-15 Thread Allison Wang (Jira)
that page should switch to the chosen language. This was the behavior for, for example, Spark 2.0 doc: [https://spark.apache.org/docs/2.0.0/structured-streaming-programming-guide.html] But it was broken for later docs, for example the Spark 3.4.1 doc: [https://spark.apache.org/docs/latest/quick

[jira] [Updated] (SPARK-15660) Update RDD `variance/stdev` description and add popVariance/popStdev

2016-06-17 Thread Dongjoon Hyun (JIRA)
redefined as the sample variance/stdev instead of population ones. This PR updates the comments to prevent users from misunderstanding. This will update the following API docs. - http://spark.apache.org/docs/2.0.0-preview/api/scala/index.html#org.apache.spark.api.java.JavaDoubleRDD - http

Re: Spark-sql can replace Hive ?

2021-06-15 Thread Battula, Brahma Reddy
Currently I am using hive sql engine for adhoc queries. As spark-sql also supports this, I want migrate from hive. From: Mich Talebzadeh Date: Thursday, 10 June 2021 at 8:12 PM To: Battula, Brahma Reddy Cc: ayan guha , d...@spark.apache.org , user@spark.apache.org Subject: Re: Spark-sql

Re: Partitioning of Dataframes

2015-05-22 Thread Ted Yu
Ds you can control how the data is partitioned across nodes >>>>> with >>>>> partitionBy. There is no such method on Dataframes however. Can I >>>>> somehow >>>>> partition the underlying the RDD manually? I am currently using the >>>&

RE: SparkR Error in sparkR.init(master=“local”) in RStudio

2015-10-06 Thread Khandeshi, Ami
554719\AppData\Local\Temp\Rtmpw11KJ1\backend_port31b0afd4391' had status 127 -Original Message- From: Sun, Rui [mailto:rui@intel.com] Sent: Tuesday, October 06, 2015 9:39 AM To: akhandeshi; user@spark.apache.org Subject: RE: SparkR Error in sparkR.init(master=“local”) in RStu

Re: Event logging not working when worker machine terminated

2015-09-09 Thread David Rosenstrauch
round for this? BTW, I'm running Spark v1.3.0. Thanks, DR ----- To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org --

Re: spark on kubernetes

2016-05-22 Thread Gurvinder Singh
ter, so when a user initiate a request to access the information, >>>>master can proxy the request to corresponding endpoint. >>>> >>>>So I am wondering if someone has already done work in this direction >>>>then it would be great to know.

Re: Spark 1.5 Streaming and Kinesis

2015-10-19 Thread Jean-Baptiste Onofré
! In the meantime, can anyone confirm their ability to run the Kinesis-ASL example using Spark > 1.5.x ? Would be helpful to know if it works in some cases but not others. http://spark.apache.org/docs/1.5.1/streaming-kinesis-integration.html Thanks Phil On Thu, Oct 15, 2015 at 10:35 PM, J

Re: Spark 1.5 Streaming and Kinesis

2015-10-20 Thread Jean-Baptiste Onofré
-11193 to track progress, hope that is okay! In the meantime, can anyone confirm their ability to run the Kinesis-ASL example using Spark > 1.5.x ? Would be helpful to know if it works in some cases but not others. http://spark.apache.org/docs/1.5.1/streaming-kinesis-integration.html Thanks P

spark git commit: [SPARK-16281][SQL] Implement parse_url SQL function

2016-07-08 Thread rxin
quot; + private val REGEXSUBFIX = "=([^&]*)" +} + +/** + * Extracts a part from a URL + */ +@ExpressionDescription( + usage = "_FUNC_(url, partToExtract[, key]) - extracts a part from a URL", + extended = """Parts: HOST, PATH, QUERY, REF, PR

RE: How to specify default value for StructField?

2017-02-21 Thread Begar, Veena
Thanks Yan and Yong, Yes, from Spark, I can access ORC files loaded to Hive tables. Thanks. From: 颜发才(Yan Facai) [mailto:facai@gmail.com] Sent: Friday, February 17, 2017 6:59 PM To: Yong Zhang Cc: Begar, Veena ; smartzjp ; user@spark.apache.org Subject: Re: How to specify default value

RE: Question on saveAsTextFile with overwrite option

2014-12-24 Thread Shao, Saisai
Thanks Patrick for your detailed explanation. BR Jerry -Original Message- From: Patrick Wendell [mailto:pwend...@gmail.com] Sent: Thursday, December 25, 2014 3:43 PM To: Cheng, Hao Cc: Shao, Saisai; u...@spark.apache.org; dev@spark.apache.org Subject: Re: Question on saveAsTextFile with

Re: [VOTE] Release Apache Spark 1.2.1 (RC2)

2015-01-28 Thread Ye Xianjin
cast. > > > > [ ] +1 Release this package as Apache Spark 1.2.1 > > [ ] -1 Do not release this package because ... > > > > For a list of fixes in this release, see http://s.apache.org/Mpn. > > > > To learn more about Apache Spark, please

[jira] [Commented] (SPARK-18437) Inconsistent mark-down for `Note:` across Scala/Java/R/Python in API documentations

2016-11-14 Thread Sean Owen (JIRA)
it seems there are also, > - {{Note:}} > - {{NOTE:}} > - {{Note that}} > - {{@note}} > In case of R, it seems pretty consistent. {{@note}} only contains the > information about when the function came out such as {{locate since 1.5.0}} > without other information[

[jira] [Commented] (SPARK-18437) Inconsistent mark-down for `Note:` across Scala/Java/R/Python in API documentations

2016-11-14 Thread Hyukjin Kwon (JIRA)
an > the others[4][5][6]. > For R, it seems there are also, > - {{Note:}} > - {{NOTE:}} > - {{Note that}} > - {{@note}} > In case of R, it seems pretty consistent. {{@note}} only contains the > information about when the function came out such as {{locate since 1.5.0}} > wit

[jira] [Commented] (SPARK-18437) Inconsistent mark-down for `Note:` across Scala/Java/R/Python in API documentations

2016-11-14 Thread Hyukjin Kwon (JIRA)
ote:}} > - {{NOTE:}} > - {{Note that}} > - {{@note}} > In case of R, it seems pretty consistent. {{@note}} only contains the > information about when the function came out such as {{locate since 1.5.0}} > without other information[7]. So, I am not too sure for this. > It would

[jira] [Commented] (SPARK-18437) Inconsistent mark-down for `Note:`/`NOTE:`/`Note that` across Scala/Java/R/Python in API documentations

2016-11-15 Thread Aditya (JIRA)
an > the others[4][5][6]. > For R, it seems there are also, > - {{Note:}} > - {{NOTE:}} > - {{Note that}} > - {{@note}} > In case of R, it seems pretty consistent. {{@note}} only contains the > information about when the function came out such as {{locate since 1.5.0}} >

[jira] [Commented] (SPARK-18437) Inconsistent mark-down for `Note:`/`NOTE:`/`Note that` across Scala/Java/R/Python in API documentations

2016-11-16 Thread Aditya (JIRA)
since 1.5.0}} > without other information[7]. So, I am not too sure for this. > It would be nicer if they are consistent, at least for Scala/Python/Java. > [1] > http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext@hadoopFile[K,V,F<:org.apache.ha

[jira] [Resolved] (SPARK-18437) Inconsistent mark-down for `Note:`/`NOTE:`/`Note that` across Scala/Java/R/Python in API documentations

2016-11-22 Thread Sean Owen (JIRA)
seems there are also, > - {{Note:}} > - {{NOTE:}} > - {{Note that}} > - {{@note}} > In case of R, it seems pretty consistent. {{@note}} only contains the > information about when the function came out such as {{locate since 1.5.0}} > without other information[7]. So

RE: Does Spark 3.1.2/3.2 support log4j 2.17.1+, and how? your target release day for Spark3.3?

2022-01-19 Thread Bode, Meikel, NM-X-DS
Hi, New releases are announced via mailing lists user@spark.apache.org<mailto:user@spark.apache.org> & d...@spark.apache.org<mailto:d...@spark.apache.org>. Best, Meikel From: Theodore J Griesenbrock Sent: Mittwoch, 19. Januar 2022 18:50 To: sro...@gmail.com Cc:

RE: Is it possible to use SparkSQL JDBC ThriftServer without Hive

2016-01-15 Thread Mohammed Guller
tables. Mohammed -Original Message- From: Sambit Tripathy (RBEI/EDS1) [mailto:sambit.tripa...@in.bosch.com] Sent: Friday, January 15, 2016 11:30 AM To: Mohammed Guller; angela.whelan; user@spark.apache.org Subject: RE: Is it possible to use SparkSQL JDBC ThriftServer without Hive Hi

RE: SparkR Error in sparkR.init(master=“local”) in RStudio

2015-10-06 Thread Sun, Rui
Not sure "/C/DevTools/spark-1.5.1/bin/spark-submit.cmd" is a valid? From: Hossein [mailto:fal...@gmail.com] Sent: Wednesday, October 7, 2015 12:46 AM To: Khandeshi, Ami Cc: Sun, Rui; akhandeshi; user@spark.apache.org Subject: Re: SparkR Error in sparkR.init(master=“local”) in RStudio

RE: How to calculate percentile of a column of DataFrame?

2015-10-09 Thread Saif.A.Ellafi
to calculate percentile of a column of DataFrame? I found it in 1.3 documentation lit says something else not percent public static Column<https://spark.apache.org/docs/1.3.1/api/java/org/apache/spark/sql/Column.html> lit(Object literal) Creates a Column<https://spark.apache.org/do
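For the underlying percentile question, later releases expose this directly; a hedged sketch using approxQuantile and percentile_approx (both available in Spark 2.x), with made-up data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(float(i),) for i in range(1, 101)], ["value"])

# Approximate 25th/50th/90th percentiles of a DataFrame column.
print(df.approxQuantile("value", [0.25, 0.5, 0.9], 0.01))

# The same via the percentile_approx SQL aggregate.
df.selectExpr("percentile_approx(value, 0.5) AS median").show()
```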

Re: Small File to HDFS

2015-09-03 Thread Ndjido Ardo Bar
t 12:43 PM, < nib...@free.fr > wrote: > > > Hi, > I already store them in MongoDB in parralel for operational access and don't > want to add an other database in the loop > Is it the only solution ? > > Tks > Nicolas > > - Mail original -

Re: CSV escaping not working

2016-10-27 Thread Jain, Nishit
>> Date: Thursday, October 27, 2016 at 12:49 PM To: "Jain, Nishit" mailto:nja...@underarmour.com>> Cc: "user@spark.apache.org<mailto:user@spark.apache.org>" mailto:user@spark.apache.org>> Subject: Re: CSV escaping not working well my expectation would be

RE: Question on saveAsTextFile with overwrite option

2014-12-24 Thread Shao, Saisai
Thanks Patrick for your detailed explanation. BR Jerry -Original Message- From: Patrick Wendell [mailto:pwend...@gmail.com] Sent: Thursday, December 25, 2014 3:43 PM To: Cheng, Hao Cc: Shao, Saisai; user@spark.apache.org; d...@spark.apache.org Subject: Re: Question on saveAsTextFile

Re: pyspark not working for me...

2015-07-25 Thread IT CTO
Thanks for the help. Fixing the z.show() in pySpark will help a lot my users :-) Eran On Sat, Jul 25, 2015 at 10:25 PM wrote: > > I’ve tested this out and found these issues. Firstly, > > http:// > <http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlig

[spark] branch master updated: [R] update package description

2019-02-21 Thread gurwls223
/DESCRIPTION @@ -1,8 +1,8 @@ Package: SparkR Type: Package Version: 3.0.0 -Title: R Frontend for Apache Spark -Description: Provides an R Frontend for Apache Spark. +Title: R Front end for 'Apache Spark' +Description: Provides an R Front end for 'Apache Spark' <https://spark.apa

[spark] branch branch-2.4 updated: [R][BACKPORT-2.4] update package description

2019-02-21 Thread felixcheung
R/pkg/DESCRIPTION @@ -1,8 +1,8 @@ Package: SparkR Type: Package Version: 2.4.2 -Title: R Frontend for Apache Spark -Description: Provides an R Frontend for Apache Spark. +Title: R Front end for 'Apache Spark' +Description: Provides an R Front end for 'Apache Spark' <https://spar
