Mailing lists matching spark.apache.org
- commits@spark.apache.org
- dev@spark.apache.org
- issues@spark.apache.org
- reviews@spark.apache.org
- user@spark.apache.org
[jira] [Commented] (SPARK-20456) Document major aggregation functions for pyspark
Document the common aggregate functions (`min`, `max`, `mean`, `count`, `collect_set`, `collect_list`, `stddev`, `variance`). I think we have documentation for ... min - https://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.min max - https://spark.apache.org/doc
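As a rough illustration of what those aggregates compute, here is a plain-Python sketch (not the PySpark API; one point worth documenting is that Spark's `stddev` and `variance` are the *sample* versions, i.e. aliases of `stddev_samp`/`var_samp`):

```python
import statistics

values = [2.0, 4.0, 4.0, 6.0]

agg = {
    "min": min(values),
    "max": max(values),
    "mean": statistics.mean(values),
    "count": len(values),
    "collect_list": list(values),        # keeps duplicates and order
    "collect_set": sorted(set(values)),  # drops duplicates (order is undefined in Spark)
    # statistics.stdev/variance are the sample versions, matching Spark's defaults
    "stddev": statistics.stdev(values),
    "variance": statistics.variance(values),
}
print(agg)
```

For `[2.0, 4.0, 4.0, 6.0]` this gives a mean of 4.0, a sample variance of 8/3, and `collect_set` of three distinct values.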
Re: [PR] [SPARK-45935][PYTHON][DOCS] Fix RST files link substitutions error [spark]
/python .. |downloading| replace:: Downloading -.. _downloading: https://spark.apache.org/docs/{1}/building-spark.html +.. _downloading: https://spark.apache.org/docs/{1}/#downloading .. |building_spark| replace:: Building Spark -.. _building_spark: https://spark.apache.org/docs/{1}/#downloading
Re: Wrong version on the Spark documentation page
When I enter http://spark.apache.org/docs/latest/ into Chrome address bar, I saw 1.3.0 Cheers On Sun, Mar 15, 2015 at 11:12 AM, Patrick Wendell wrote: > Cheng - what if you hold shift+refresh? For me the /latest link > correctly points to 1.3.0 > > On Sun, Mar 15, 2015 at 10:40 AM
RE: Does Spark 3.1.2/3.2 support log4j 2.17.1+, and how? your target release day for Spark3.3?
Hello Juan Liu, The release process is well documented (see last step on announcement): https://spark.apache.org/release-process.html To (un)subscribe to the mailing lists see: https://spark.apache.org/community.html Best, Meikel Meikel Bode, MSc Senior Manager | Head of SAP Data Platforms
[jira] [Created] (SPARK-22631) Consolidate all configuration properties into one page
Issue Type: Documentation Components: Documentation Affects Versions: 2.2.0 Reporter: Andreas Maier The page https://spark.apache.org/docs/2.2.0/configuration.html gives the impression as if all configuration properties of Spark are described on this page. Unfortunately
[jira] [Created] (SPARK-22630) Consolidate all configuration properties into one page
Issue Type: Documentation Components: Documentation Affects Versions: 2.2.0 Reporter: Andreas Maier The page https://spark.apache.org/docs/2.2.0/configuration.html gives the impression as if all configuration properties of Spark are described on this page. Unfortunately
[jira] [Resolved] (SPARK-22631) Consolidate all configuration properties into one page
Affects Versions: 2.2.0 >Reporter: Andreas Maier > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on this > page. Unfortunately this is not true. The description of important propert
Re: Wrong version on the Spark documentation page
Cheng - what if you hold shift+refresh? For me the /latest link correctly points to 1.3.0 On Sun, Mar 15, 2015 at 10:40 AM, Cheng Lian wrote: > It's still marked as 1.2.1 here http://spark.apache.org/docs/latest/ > > But this page is updated (1.3.0) > http://spark.apach
[GitHub] spark issue #22517: Branch 2.3 how can i fix error use Pyspark
Github user wangyum commented on the issue: https://github.com/apache/spark/pull/22517 Do you mind closing this PR? Questions and help should be sent to `u...@spark.apache.org` ``` u...@spark.apache.org is for usage questions, help, and announcements. (subscribe) (unsubscribe
[spark] branch branch-3.1 updated: [MINOR][DOCS] Replace http to https when possible in PySpark documentation
There are many types of contribution, for example, helping other users, testing releases, reviewing changes, documentation contribution, bug reporting, JIRA maintenance, code changes, etc. -These are documented at `the general guidelines <http://spark.apache.org/contributing.html>`_. +The
Re: Spark SQL with a sorted file
t;] Sent: Thursday, December 4, 2014 11:34 AM To: user@spark.apache.org <mailto:user@spark.apache.org> Subject: Spark SQL with a sorted file Hi, If I create a SchemaRDD from a file that I know is sorted on a certain field, is it possible to somehow pass that information
[jira] [Updated] (SPARK-40103) Support read/write.csv() in SparkR
need to use df.read() to read the csv file. We need a more high-level api for it. Java: [DataFrameReader.csv()|https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/DataFrameReader.html] Scala: [DataFrameReader.csv()|https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql
[jira] [Updated] (SPARK-9597) Spark Streaming + MQTT Integration Guide
Guide|http://spark.apache.org/docs/latest/streaming-flume-integration.html] [Spark Streaming + Kinesis Integration|http://spark.apache.org/docs/latest/streaming-kinesis-integration.html] [Spark Streaming + Kafka Integration Guide|http://spark.apache.org/docs/latest/streaming-kafka-integration.html
[jira] [Commented] (SPARK-36209) https://spark.apache.org/docs/latest/sql-programming-guide.html contains invalid link to Python doc
created a pull request for this issue: https://github.com/apache/spark/pull/33420 > https://spark.apache.org/docs/latest/sql-programming-guide.html contains > invalid link to Python doc > ---
Re: Master build fails ?
HashCodes import was added. >> >> Regards, >> Dilip Biswal >> Tel: 408-463-4980 >> dbis...@us.ibm.com >> >> >> >> From: Ted Yu >> To: Dilip Biswal/Oakland/IBM@IBMUS >> Cc: Jean-Baptiste Onofré , "dev@spar
Re: Master build fails ?
>> >>> I am building on CentOS and on master branch. >>> >>> One other thing, i was able to build fine with the above command up until >>> recently. I think i have stared >>> to have problem after SPARK-11073 where the HashCodes import was added. >>
[jira] [Commented] (SPARK-22630) Consolidate all configuration properties into one page
Components: Documentation >Affects Versions: 2.2.0 >Reporter: Andreas Maier > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on this > page. Unfortunately this is not
[jira] [Updated] (SPARK-22630) Consolidate all configuration properties into one page
Affects Versions: 2.2.0 >Reporter: Andreas Maier >Priority: Major > Labels: bulk-closed > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on this
[jira] [Resolved] (SPARK-22630) Consolidate all configuration properties into one page
Affects Versions: 2.2.0 >Reporter: Andreas Maier >Priority: Major > Labels: bulk-closed > > The page https://spark.apache.org/docs/2.2.0/configuration.html gives the > impression as if all configuration properties of Spark are described on this
[1/3] spark-website git commit: spark summit 2018 agenda
xml -- diff --git a/site/sitemap.xml b/site/sitemap.xml index 9662385..60503c0 100644 --- a/site/sitemap.xml +++ b/site/sitemap.xml @@ -139,605 +139,609 @@ - https://spark.apache.org/releases/spark-release-2-3-0.html + http://localhost:4000/news/spark-summit-june-2018-agenda-posted.h
Hamburg Apache Spark Meetup
| the4thFloor.eu >> wrote: >> >> Hi, >> >> >> there is a small Spark Meetup group in Berlin, Germany :-) >> http://www.meetup.com/Berlin-Apache-Spark-Meetup/ >> >> Please add this group
[GitHub] [spark] srowen commented on a change in pull request #29733: [SPARK-32180][PYTHON][DOCS][FOLLOW-UP] Fix Python lint issue
rge/pyspark>`_ is avai Official Release Channel -Different flavors of PySpark is available in the `official release channel <https://spark.apache.org/downloads.html>`_. +Different flavors of PySpark is available in the `official release channel <https://s
Re: Master build fails ?
lip Biswal > Tel: 408-463-4980 > dbis...@us.ibm.com > > > > From:Ted Yu > To:Dilip Biswal/Oakland/IBM@IBMUS > Cc:Jean-Baptiste Onofré , "dev@spark.apache.org" > > Date:11/05/2015 10:46 AM > Subject:Re: Master build fail
[jira] [Resolved] (SPARK-36209) https://spark.apache.org/docs/latest/sql-programming-guide.html contains invalid link to Python doc
Resolution: Fixed Resolved by https://github.com/apache/spark/pull/33420 > https://spark.apache.org/docs/latest/sql-programming-guide.html contains > invalid link to Python doc > --- > >
Re: groupBy gives non deterministic results
.com/?sig) On Wednesday, September 10, 2014 at 9:05 PM, redocpot wrote: > Hi, Xianjin > > I checked user@spark.apache.org (mailto:user@spark.apache.org), and found my > post there: > http://mail-archives.apache.org/mod_mbox/spark-user/201409.mbox/browser > > I am using nabble to
[jira] [Comment Edited] (SPARK-44820) Switch languages consistently across docs for all code snippets
7 AM: update: this issue emerged since 3.1.1 (note that we don't have an official 3.1.0, since 3.1.0 was a mistake https://spark.apache.org/news/index.html) 3.0.3 works well: https://spark.apache.org/docs/3.0.3/structured-streaming-programming-guide.html 3.1.1 was broke
[jira] [Updated] (SPARK-45208) Kubernetes Configuration in Spark Community Website doesn't have horizontal scrollbar
. Specifically, the Kubernetes configuration lists on the right-hand side are not visible and doc doesn't have a horizontal scrollbar. - [https://spark.apache.org/docs/3.5.0/running-on-kubernetes.html#configuration] - [https://spark.apache.org/docs/3.4.1/running-on-kubernetes.html#configur
Re: Ranger-like Security on Spark
when reading and writing data to > >> HDFS? Is Kerberos my only option then? > >> > >> Kind regards, Daniel. > >> ----- > >> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org >
[jira] [Updated] (SPARK-18437) Inconsistent mark-down for `Note:` across Scala/Java/R/Python in API documentations
{{@note}} In case of R, it seems pretty consistent. {{@note}} only contains the information about when the function came out such as {{locate since 1.5.0}} without other information[7]. So, I am not too sure for this. It would be nicer if they are consistent, at least for Scala/Python/Java.
Re: Master build fails ?
thing, i was able to build fine with the above command up until >>>> recently. I think i have stared >>>> to have problem after SPARK-11073 where the HashCodes import was added. >>>> >>>> Regards, >>>> Dilip Biswal >>>> Tel: 408-
Re: Spark 2.0.0 preview docs uploaded
Available but mostly as JIRA output: https://spark.apache.org/news/spark-2.0.0-preview.html On Thu, Jun 9, 2016 at 7:33 AM, Pete Robbins wrote: > It would be nice to have a "what's new in 2.0.0" equivalent to > https://spark.apache.org/releases/spark-release-1-6-0.html ava
Re: insert Hive table with RDD
day, March 3, 2015 7:09 PM To: user@spark.apache.org Subject: insert Hive table with RDD Hi, How can I insert an existing hive table with an RDD containing my data? Any examples? Best, Patcharee - To unsubscribe, e-mail: user-unsubscr
Re: broken link on Spark Programming Guide
I fixed this a while ago in master. It should go out with the next release and next push of the site. On Tue, Apr 7, 2015 at 4:32 PM, jonathangreenleaf wrote: > in the current Programming Guide: > https://spark.apache.org/docs/1.3.0/programming-guide.html#actions > > under Actions
[jira] [Created] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
documentation (Local matrix scala section), the URL points to https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices which is a mistake, since Matrices is an object that implements factory methods for Matrix that does not have a companion class. The correct link
[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...
scription( + usage = "_FUNC_(url, partToExtract[, key]) - extracts a part from a URL", + extended = """Parts: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO. +Key specifies which query to extract. + Examples: + > SELECT _FUN
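The semantics of those `parse_url` parts can be sketched in plain Python with `urllib.parse`. This is only an approximation of the documented behavior (Spark's implementation is based on Java URI parsing, returns NULL for missing parts, and also supports FILE, USERINFO, and AUTHORITY, which are omitted here):

```python
from urllib.parse import urlsplit, parse_qs

def parse_url(url, part, key=None):
    """Rough stand-in for Spark SQL's parse_url(url, partToExtract[, key])."""
    s = urlsplit(url)
    if part == "QUERY" and key is not None:
        # With a key, return the value of that single query parameter
        vals = parse_qs(s.query).get(key)
        return vals[0] if vals else None
    return {
        "HOST": s.hostname,
        "PATH": s.path,
        "QUERY": s.query,
        "REF": s.fragment,
        "PROTOCOL": s.scheme,
    }.get(part)

url = "https://spark.apache.org/docs/latest/monitoring.html?lang=en#rest-api"
print(parse_url(url, "HOST"))           # spark.apache.org
print(parse_url(url, "QUERY", "lang"))  # en
print(parse_url(url, "REF"))            # rest-api
```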
Re: Upgrade to Spark 1.2.1 using Guava
celo > > ----- > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org > For additional commands, e-mail: user-h...@spark.apache.org > > > > -
[GitHub] [spark] yaooqinn opened a new pull request, #36633: [MINOR][ML][DOCS] Fix sql data types link in the ml-pipeline page
[Spark SQL datatype reference](https://spark.apache.org/docs/latest/sql-reference.html#data-types) - `https://spark.apache.org/docs/latest/sql-reference.html#data-types` is invalid and it shall be [Spark SQL datatype reference](https://spark.apache.org/docs/latest/sql-ref-datatypes.html) -
[GitHub] [spark] MichaelChirico commented on pull request #29007: [SPARK-XXXXX][SQL][DOCS] consistency in argument naming for time functions
personally assure I read [R](https://spark.apache.org/docs/latest/api/R/) for R reference, [SQL](https://spark.apache.org/docs/latest/api/sql/) for SQL reference, and [Python](https://spark.apache.org/docs/latest/api/python/) for python reference. I also see headings for [Java](https
Re: Tracking / estimating job progress
It looks like it might only be available via REST, http://spark.apache.org/docs/latest/monitoring.html#rest-api On Fri, 13 May 2016 at 11:24 Dood@ODDO wrote: > On 5/13/2016 10:16 AM, Anthony May wrote: > > > http://spark.apache.org/docs/latest/api/scal
Re: spark 1.2 compatibility
oblem. > > However officially HDP 2.1 + Spark 1.2 is not a supported scenario. > > -Original Message- > From: Judy Nash > Sent: Friday, January 16, 2015 5:35 PM > To: 'bhavyateja'; user@spark.apache.org > Subject: RE: spark 1.2 compatibility > > Y
Re: Unsubscribe
To unsubscribe from the dev list, please send a message to dev-unsubscr...@spark.apache.org as described here: http://spark.apache.org/community.html#mailing-lists. Thanks, -Rick Dulaj Viduranga wrote on 09/21/2015 10:15:58 AM: > From: Dulaj Viduranga > To: dev@spark.apache.org > Da
Re: UNSUBSCRIBE
Writing to the list user@spark.apache.org Subscription address user-subscr...@spark.apache.org Digest subscription address user-digest-subscr...@spark.apache.org Unsubscription addresses user-unsubscr...@spark.apache.org Getting help with the list user-h...@spark.apache.org Feeds: Atom 1.0 <ht
spark-website git commit: Patch references to docs/programming-guide.html to docs/rdd-programming-guide.html
of the Spark API is its [RDD API](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds). +to it. The building block of the Spark API is its [RDD API](https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds).
[jira] [Created] (SPARK-36209) https://spark.apache.org/docs/latest/sql-programming-guide.html contains invalid link to Python doc
Dominik Gehl created SPARK-36209: Summary: https://spark.apache.org/docs/latest/sql-programming-guide.html contains invalid link to Python doc Key: SPARK-36209 URL: https://issues.apache.org/jira/browse/SPARK
[jira] [Updated] (CALCITE-6278) Add REGEXP, REGEXP_LIKE function (enabled in Spark library)
. Add Function [REGEXP|https://spark.apache.org/docs/latest/api/sql/index.html#regexp], [REGEXP_LIKE|https://spark.apache.org/docs/latest/api/sql/index.html#regexp_like] Since this function has the same implementation as the Spark [RLIKE|https://spark.apache.org/docs/latest/api/sql/index.html#rlike
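One detail worth noting about that equivalence: Spark's `RLIKE`/`REGEXP` return true when the string *contains* a match for the pattern, not only when the whole string matches. In Python terms that is `re.search` rather than `re.fullmatch` (a sketch of the semantics only, not Calcite's or Spark's implementation):

```python
import re

def rlike(s, pattern):
    """Approximates Spark SQL's `s RLIKE pattern`: true if any substring matches."""
    return re.search(pattern, s) is not None

print(rlike("spark-3.5.0", r"\d+\.\d+"))  # True: the string contains "3.5"
print(rlike("spark", r"^\d+$"))           # False: anchors force a full match
```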
Re: Welcoming three new PMC members
> Congratulations > > On Tue, Aug 9, 2022, 11:40 AM Xiao Li <gatorsm...@gmail.com> wrote:
Re: Upgrade to Spark 1.2.1 using Guava
confused about what config you're changing. I'm > suggesting using: > > spark-submit --conf spark.executor.extraClassPath=/guava.jar blah > > > -- > Marcelo > > --
Re: Spark SQL with a sorted file
newParquet.scala) Cheng Hao -Original Message- From: Jerry Raj [mailto:jerry@gmail.com <mailto:jerry@gmail.com>] Sent: Thursday, December 4, 2014 11:34 AM To: user@spark.apache.org <mailto:user@spark.apache.org> Subject: Spark SQL with a sorted fil
Re: unsubscribe
Hi Sukesh, To unsubscribe from the dev list, please send a message to dev-unsubscr...@spark.apache.org. To unsubscribe from the user list, please send a message user-unsubscr...@spark.apache.org. Please see: http://spark.apache.org/community.html#mailing-lists. Thanks, -Rick sukesh kumar
[1/3] spark-website git commit: spark summit eu 2018
976 100644 --- a/site/sitemap.xml +++ b/site/sitemap.xml @@ -139,637 +139,641 @@ - https://spark.apache.org/releases/spark-release-2-2-2.html + http://localhost:4000/news/spark-summit-oct-2018-agenda-posted.html weekly - https://spark.apache.org/news/spark-2-2-2-released.html + h
Re: Master build fails ?
>>>> >>>>> build/sbt clean >>>>> build/sbt -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver >>>>> -Dhadoop.version=2.6.0 -DskipTests assembly >>>>> >>>>> I am building on CentOS and on master branch. >>>>> >>
Re: [sql]enable spark sql cli support spark sql
s, not sure what's the plan for its enhancement. > > -Original Message- > From: scwf [mailto:wangf...@huawei.com] > Sent: Friday, August 15, 2014 11:22 AM > To: dev@spark.apache.org > Subject: [sql]enable spark sql cli support spark sql > > hi all, > now s
[jira] [Commented] (SPARK-37873) SQL Syntax links are broken
* [DDL Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-ddl.html] * [DML Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-dml.html] * [Data Retrieval Statements|https://spark.apache.org/docs/latest/sql-ref-syntax-qry.html] * [Auxiliary Statements|https://spark.apache.org/docs/latest/sql
Re: Tracking / estimating job progress
On 5/13/2016 10:16 AM, Anthony May wrote: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker Might be useful How do you use it? You cannot instantiate the class - is the constructor private? Thanks! On Fri, 13 May 2016 at 11:11 Ted Yu
[1/4] spark-website git commit: Add Spark 2.1.1 release.
xml index ec79c82..bc93fb7 100644 --- a/site/sitemap.xml +++ b/site/sitemap.xml @@ -139,6 +139,14 @@ + http://spark.apache.org/releases/spark-release-2-1-1.html + weekly + + + http://spark.apache.org/news/spark-2-1-1-released.html + weekly + + http://spark.apache.org/news/spark-summit-j
(spark-website) branch asf-site updated: Remove duplicate files
--- a/site/docs/3.5.3/api/R/reference/shiftleft,Column,numeric-method.html +++ /dev/null @@ -1,8 +0,0 @@ - - -https://spark.apache.org/docs/3.5.3/api/R/reference/column_math_functions.html"; /> - -https://spark.apache.org/docs/3.5.3/api/R/reference/column_math_functi
RE: Feature Generation On Spark
Take a look at the examples here: https://spark.apache.org/docs/latest/ml-guide.html Mohammed From: rishikesh thakur [mailto:rishikeshtha...@hotmail.com] Sent: Saturday, July 4, 2015 10:49 PM To: ayan guha; Michal Čizmazia Cc: user Subject: RE: Feature Generation On Spark I have one document
[jira] [Created] (SPARK-48853) https://spark.apache.org/ returns 404
Ross Lawley created SPARK-48853: --- Summary: https://spark.apache.org/ returns 404 Key: SPARK-48853 URL: https://issues.apache.org/jira/browse/SPARK-48853 Project: Spark Issue Type: Bug
Re: functools.partial as UserDefinedFunction
y:126 ? - To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org (mailto:dev-unsubscr...@spark.apache.org) For additional commands, e-mail: dev-h...@spark.apache.org (mailto:dev-h...@spark.apach
[jira] [Created] (SPARK-37260) PYSPARK Arrow 3.2.0 docs link invalid
Components: Documentation Affects Versions: 3.2.0 Reporter: Thomas Graves [http://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html] links to: [https://spark.apache.org/docs/latest/api/python/user_guide/arrow_pandas.html] which links to: [https
Re: finding free ports for tests
to unreliable tests, > especially on Jenkins. > > Before I implement the logic myself -Is there a utility class/trait for > finding ports for tests? > > ----- > To unsubscribe, e-mail: > dev-unsubscr...@spar
[jira] [Updated] (SPARK-40103) Support read.csv in SparkR
need to use df.read() to read the csv file. We need a more high-level api for it. Java: [DataFrameReader.csv()|https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/DataFrameReader.html] Scala: [DataFrameReader.csv()|https://spark.apache.org/docs/latest/api/scala/org/apache/spark
Re: Is Spark 1.6 released?
Please refer to the following: https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets https://spark.apache.org/docs/latest/sql-programming-guide.html#creating-datasets https://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets Cheers On Mon, Jan 4, 2016 at
Re: mailing list subscription
Nabble is not related to the mailing list, so I think that's the problem. spark.apache.org -> Community -> Mailing Lists: http://spark.apache.org/community.html These instructions from the project website itself are correct. On Tue, Oct 20, 2015 at 5:22 PM, Jeff Sadowski wrote: >
Re: Upgrade to Spark 1.2.1 using Guava
Sorry, I'm still confused about what config you're changing. I'm suggesting using: spark-submit --conf spark.executor.extraClassPath=/guava.jar blah -- Marcelo - To unsubscribe, e-mail: user-unsubscr...@spark.apac
Re: Cores on Master
Master to not use all its cores or it may lock up (it does other things). Is there a way to control max cores used for a particular cluster machine in standalone mode? - To unsubscribe, e-mail: user-unsubscr...@spark.apac
Re: Memory-efficient successive calls to repartition()
ccessive-calls-to-repartition-tp24358.html Sent from the Apache Spark User List mailing list archive at Nabble.com. ----- To unsubscribe, e-mail: user-unsubscr...@spark.apache.or
[jira] [Commented] (SPARK-24530) pyspark.ml doesn't generate class docs correctly
om Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs seems to be > broken. > *2.1.x* > (O) > [https://spark.apac
[jira] [Commented] (SPARK-24530) pyspark.ml doesn't generate class docs correctly
ly. I attached the > screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs seems to be > brok
[jira] [Updated] (SPARK-24530) Sphinx doesn't render autodoc_docstring_signature correctly (using Python 2?)
t render class docs correctly. I attached the > screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs
[jira] [Commented] (SPARK-24530) Sphinx doesn't render autodoc_docstring_signature correctly (with Python 2?) and pyspark.ml docs are broken
ender class docs correctly. I attached the > screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs see
[jira] [Assigned] (SPARK-24530) Sphinx doesn't render autodoc_docstring_signature correctly (with Python 2?) and pyspark.ml docs are broken
ctly. I attached the > screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs seems to be > br
[jira] [Assigned] (SPARK-24530) Sphinx doesn't render autodoc_docstring_signature correctly (with Python 2?) and pyspark.ml docs are broken
; screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs seems to be > broke
[jira] [Assigned] (SPARK-24530) Sphinx doesn't render autodoc_docstring_signature correctly (with Python 2?) and pyspark.ml docs are broken
correctly. I attached the > screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs seems to be > br
[jira] [Updated] (SPARK-24530) Sphinx doesn't render autodoc_docstring_signature correctly (with Python 2?) and pyspark.ml docs are broken
correctly. I attached the > screenshot from Spark 2.3 docs and master docs generated on my local. Not > sure if this is because my local setup. > cc: [~dongjoon] Could you help verify? > > The followings are our released doc status. Some recent docs seems to be > br
Re: groupBy gives non deterministic results
I think the mails to spark.incubator.apache.org will be forwarded to spark.apache.org. Here is the header of the first mail: from: redocpot to: u...@spark.incubator.apache.org date: Mon, Sep 8, 2014 at 7:29 AM subject: groupBy gives non deterministic results mailing list: user.spark.apache.org
RE: Using CUDA within Spark / boosting linear algebra
: Wednesday, March 25, 2015 2:55 PM To: Ulanov, Alexander Cc: Sam Halliday; dev@spark.apache.org; Xiangrui Meng; Joseph Bradley; Evan R. Sparks; jfcanny Subject: Re: Using CUDA within Spark / boosting linear algebra Alexander, does using netlib imply that one cannot switch between CPU and GPU blas
[jira] [Created] (SPARK-49276) Use API Group `spark.apache.org`
Dongjoon Hyun created SPARK-49276: - Summary: Use API Group `spark.apache.org` Key: SPARK-49276 URL: https://issues.apache.org/jira/browse/SPARK-49276 Project: Spark Issue Type: Sub-task
Re: Disable logger in SparkR
You should be able to do that with log4j.properties http://spark.apache.org/docs/latest/configuration.html#configuring-logging Or programmatically https://spark.apache.org/docs/2.0.0/api/R/setLogLevel.html _ From: Yogesh Vyas mailto:informy...@gmail.com>> Sent:
Re: Berlin Apache Spark Meetup
roup to the Meetups list at > https://spark.apache.org/community.html > > > Ralph > > - > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org >
[1/2] spark-website git commit: Update text/wording to more "modern" Spark and more consistent.
+ b/site/sitemap.xml @@ -139,613 +139,613 @@ - https://spark.apache.org/news/spark-summit-june-2018-agenda-posted.html + http://localhost:4000/news/spark-summit-june-2018-agenda-posted.html weekly - https://spark.apache.org/releases/spark-release-2-3-0.html + http://localhost:4
[jira] [Commented] (SPARK-22630) Consolidate all configuration properties into one page
gt; Key: SPARK-22630 > URL: https://issues.apache.org/jira/browse/SPARK-22630 > Project: Spark > Issue Type: Documentation > Components: Documentation >Affects Versions: 2.2.0 > Reporter: Andreas
[jira] [Assigned] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
ta Types > documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices > which is a mistake, since Matrices is an object that implements factory > methods for Matrix that does not ha
[jira] [Commented] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
> Labels: Documentation,, Fix, MLlib,, URL > > There is a mistake in the URL of Matrices in the MLlib Data Types > documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg
[jira] [Assigned] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
atrices in the MLlib Data Types > documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices > which is a mistake, since Matrices is an object that implements factory > methods for M
[jira] [Updated] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
> documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices > which is a mistake, since Matrices is an object that implements factory > methods for Matrix that does not have a comp
[jira] [Resolved] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
Fix, MLlib,, URL > Fix For: 1.4.0 > > > There is a mistake in the URL of Matrices in the MLlib Data Types > documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrice
[jira] [Updated] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
There is a mistake in the URL of Matrices in the MLlib Data Types > documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.linalg.Matrices > which is a mistake, since Matrices is an object tha
[jira] [Commented] (SPARK-7671) Fix wrong URLs in MLlib Data Types Documentation
Labels: Documentation,, Fix, MLlib,, URL > Fix For: 1.4.0 > > > There is a mistake in the URL of Matrices in the MLlib Data Types > documentation (Local matrix scala section), the URL points to > https://spark.apache.org/docs/latest/api/scala/index.html#org
[jira] [Resolved] (SPARK-37260) PYSPARK Arrow 3.2.0 docs link invalid
s Graves > Priority: Major > > [http://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html] > links to: > [https://spark.apache.org/docs/latest/api/python/user_guide/arrow_pandas.html] > which links to: > [https://spark.apache.org/docs/latest/api/python/sql/a
Re: Spark Streaming - History UI
015, at 05:28, patcharee wrote: Hi, On my history server UI, I cannot see "streaming" tab for any streaming jobs? I am using version 1.5.1. Any ideas? Thanks, Patcharee - To unsubscribe, e-mail: user-unsubscr...@
Re: broken link on Spark Programming Guide
For the last link, you might have meant: https://spark.apache.org/docs/1.3.0/api/python/pyspark.html#pyspark.RDD Cheers On Tue, Apr 7, 2015 at 1:32 PM, jonathangreenleaf < jonathangreenl...@gmail.com> wrote: > in the current Programming Guide: > https://spark.apache.org/docs/1.3.0
Re: Could we expose log likelihood of EM algorithm in MLLIB?
https://issues.apache.org/jira/browse/SPARK-17825 Actually I had created a JIRA. Could you let me know your progress to avoid duplicated work. Thanks! From: didi <wangleikidd...@didichuxing.com> Date: Saturday, October 8, 2016, 12:21 AM To: Yanbo Liang <yblia...@gmail.com> Cc: "dev