[
https://issues.apache.org/jira/browse/SPARK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6942:
---
Component/s: Web UI
Umbrella: UI Visualizations for Core and Dataframes
Patrick Wendell created SPARK-6942:
--
Summary: Umbrella: UI Visualizations for Core and Dataframes
Key: SPARK-6942
URL: https://issues.apache.org/jira/browse/SPARK-6942
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-3468:
---
Issue Type: Sub-task (was: New Feature)
Parent: SPARK-6942
WebUI Timeline-View
Patrick Wendell created SPARK-6943:
--
Summary: Graphically show RDD's included in a stage
Key: SPARK-6943
URL: https://issues.apache.org/jira/browse/SPARK-6943
Project: Spark
Issue Type: Sub
[
https://issues.apache.org/jira/browse/SPARK-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-3468:
---
Summary: Provide timeline view in Job and Stage pages (was: WebUI
Timeline-View feature)
[
https://issues.apache.org/jira/browse/SPARK-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-3468:
---
Summary: Provide timeline view in Job and Stage UI pages (was: Provide
timeline view in Job and Stage pages)
[
https://issues.apache.org/jira/browse/SPARK-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6950:
---
Component/s: Web UI
Spark master UI believes some applications are in progress when
I'd like to close this vote to coincide with the 1.3.1 release,
however, it would be great to have more people test this release
first. I'll leave it open for a bit longer and see if others can give
a +1.
On Tue, Apr 14, 2015 at 9:55 PM, Patrick Wendell pwend...@gmail.com wrote:
+1 from me as well
+1 from myself as well
On Mon, Apr 13, 2015 at 8:35 PM, GuoQiang Li wi...@qq.com wrote:
+1 (non-binding)
-- Original --
From: Patrick Wendell;pwend...@gmail.com;
Date: Sat, Apr 11, 2015 02:05 PM
To: dev@spark.apache.org;
Subject
This vote passes with 10 +1 votes (5 binding) and no 0 or -1 votes.
+1:
Sean Owen*
Reynold Xin*
Krishna Sankar
Denny Lee
Mark Hamstra*
Sean McNamara*
Sree V
Marcelo Vanzin
GuoQiang Li
Patrick Wendell*
0:
-1:
I will work on packaging this release in the next 48 hours.
- Patrick
SPARK-4888,Spark EC2 doesn't mount local disks for i2.8xlarge
instances,,Open,1/27/15
SPARK-4879,Missing output partitions after job completes with
speculative execution,Josh Rosen,Open,3/5/15
SPARK-4568,Publish release candidates under $VERSION-RCX instead of
$VERSION,Patrick Wendell
[
https://issues.apache.org/jira/browse/SPARK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492888#comment-14492888
]
Patrick Wendell commented on SPARK-6703:
Hey [~ilganeli] - sure thing. I've pinged
[
https://issues.apache.org/jira/browse/SPARK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6703:
---
Assignee: Ilya Ganelin
Provide a way to discover existing SparkContext's
[
https://issues.apache.org/jira/browse/SPARK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6703:
---
Priority: Critical (was: Major)
Provide a way to discover existing SparkContext's
Hey Jonathan,
Are you referring to disk space used for storing persisted RDDs? For
that, Spark does not bound the amount of data persisted to disk. It's
a similar story to how Spark's shuffle disk output works (and also
Hadoop and other frameworks make this assumption as well for their
shuffle
[
https://issues.apache.org/jira/browse/SPARK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492898#comment-14492898
]
Patrick Wendell commented on SPARK-6703:
/cc [~velvia]
Provide a way to discover
[
https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493183#comment-14493183
]
Patrick Wendell edited comment on SPARK-6511 at 4/13/15 10:11 PM
[
https://issues.apache.org/jira/browse/SPARK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493179#comment-14493179
]
Patrick Wendell commented on SPARK-6703:
Yes, ideally we get it into 1.4 - though
[
https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493183#comment-14493183
]
Patrick Wendell commented on SPARK-6511:
Just as an example I tried to wire Spark
[
https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493274#comment-14493274
]
Patrick Wendell commented on SPARK-6511:
Can we just run HADOOP_HOME/bin/hadoop
[
https://issues.apache.org/jira/browse/SPARK-6889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493254#comment-14493254
]
Patrick Wendell commented on SPARK-6889:
Thanks for posting this Sean. Overall, I
[
https://issues.apache.org/jira/browse/SPARK-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6199:
---
Assignee: (was: Cheng Hao)
Support CTE
---
Key: SPARK-6199
[
https://issues.apache.org/jira/browse/SPARK-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6858:
---
Assignee: Liang-Chi Hsieh
Register Java HashMap for SparkSqlSerializer
[
https://issues.apache.org/jira/browse/SPARK-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-4760.
Resolution: Not A Problem
ANALYZE TABLE table COMPUTE STATISTICS noscan failed estimating
[
https://issues.apache.org/jira/browse/SPARK-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6611:
---
Assignee: Santiago M. Mola
Add support for INTEGER as synonym of INT to DDLParser
[
https://issues.apache.org/jira/browse/SPARK-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491863#comment-14491863
]
Patrick Wendell commented on SPARK-1529:
Hey Kannan,
We originally considered
[
https://issues.apache.org/jira/browse/SPARK-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell reopened SPARK-4760:
ANALYZE TABLE table COMPUTE STATISTICS noscan failed estimating table size
for tables
[
https://issues.apache.org/jira/browse/SPARK-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6179:
---
Assignee: Zhongshuai Pei
Support SHOW PRINCIPALS role_name
[
https://issues.apache.org/jira/browse/SPARK-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6199:
---
Assignee: Cheng Hao
Support CTE
---
Key: SPARK-6199
[
https://issues.apache.org/jira/browse/SPARK-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6863:
---
Assignee: Santiago M. Mola
Formatted list broken on Hive compatibility section of SQL
spark on yarn against hadoop 2.6.
Tom
On Wednesday, April 8, 2015 6:15 AM, Sean Owen so...@cloudera.com
wrote:
Still a +1 from me; same result (except that now of course the
UISeleniumSuite test does not fail)
On Wed, Apr 8, 2015 at 1:46 AM, Patrick Wendell pwend...@gmail.com
:30 Patrick Wendell pwend...@gmail.com wrote:
Hey Denny,
I believe the 2.4 bits are there. The 2.6 bits I had done specially
(we haven't merged that into our upstream build script). I'll do it
again now for RC2.
- Patrick
On Wed, Apr 8, 2015 at 1:53 PM, Timothy Chen tnac...@gmail.com wrote
[
https://issues.apache.org/jira/browse/SPARK-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6792.
Resolution: Not A Problem
Resolving per Josh's comment.
pySpark groupByKey returns rows
[
https://issues.apache.org/jira/browse/SPARK-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6785:
---
Component/s: SQL
DateUtils can not handle date before 1970/01/01 correctly
[
https://issues.apache.org/jira/browse/SPARK-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6778.
Resolution: Duplicate
SQL contexts in spark-shell and pyspark should both be called
[
https://issues.apache.org/jira/browse/SPARK-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14486595#comment-14486595
]
Patrick Wendell commented on SPARK-6399:
It would be good to document more clearly
[
https://issues.apache.org/jira/browse/SPARK-6784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6784:
---
Component/s: SQL
Clean up all the inbound/outbound conversions for DateType
[
https://issues.apache.org/jira/browse/SPARK-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6783:
---
Component/s: Project Infra
Add timing and test output for PR tests
Ran standalone and yarn tests on the hadoop-2.6 tarball, with and
without the external shuffle service in yarn mode.
On Sat, Apr 4, 2015 at 5:09 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.3.1!
The tag to be voted
to 1.3.x.
- Josh
Sent from my phone
On Apr 7, 2015, at 4:13 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey All,
Today SPARK-6737 came to my attention. This is a bug that causes a
memory leak for any long running program that repeatedly saves data
out to a Hadoop FileSystem
Please vote on releasing the following candidate as Apache Spark version 1.3.1!
The tag to be voted on is v1.3.1-rc2 (commit 7c4473a):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=7c4473aa5a7f5de0323394aaedeefbf9738e8eb5
The list of fixes present in this release can be found
[
https://issues.apache.org/jira/browse/SPARK-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6222:
---
Fix Version/s: 1.4.0
1.3.1
[STREAMING] All data may not be recovered from
SPARK-5098,Number of running tasks become negative after tasks
lost,,Open,1/14/15
SPARK-4925,Publish Spark SQL hive-thriftserver maven artifact,Patrick
Wendell,Reopened,3/23/15
SPARK-4922,Support dynamic allocation for coarse-grained
Mesos,,Open,3/31/15
SPARK-4888,Spark EC2
as possible.
On Mon, Apr 6, 2015 at 7:31 PM Patrick Wendell pwend...@gmail.com wrote:
What if you don't run zinc? I.e. just download maven and run that mvn
package. It might take longer, but I wonder if it will work.
On Mon, Apr 6, 2015 at 10:26 PM, mjhb sp...@mjhb.com wrote:
Similar attempt. Trying to build as clean as possible.
On Mon, Apr 6, 2015 at 10:26 PM, mjhb sp...@mjhb.com wrote:
Similar problem on 1.2 branch:
[ERROR] Failed to execute goal on project spark-core_2.11: Could not
The only thing that can persist outside of Spark is if there is still
a live Zinc process. We took care to make sure this was a generally
stateless mechanism.
Both the 1.2.X and 1.3.X releases are built with Scala 2.11 for
packaging purposes. And these have been built as recently as in the
last
Hmm.. Make sure you are building with the right flags. I think you need to
pass -Dscala-2.11 to maven. Take a look at the upstream docs - on my phone
now so can't easily access.
On Apr 7, 2015 1:01 AM, mjhb sp...@mjhb.com wrote:
I even deleted my local maven repository (.m2) but still stuck
[
https://issues.apache.org/jira/browse/SPARK-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6703:
---
Description:
Right now it is difficult to write a Spark application in a way that can be run
[
https://issues.apache.org/jira/browse/SPARK-6676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395932#comment-14395932
]
Patrick Wendell commented on SPARK-6676:
[~srowen] This is such a common source
Patrick Wendell created SPARK-6703:
--
Summary: Provide a way to discover existing SparkContext's
Key: SPARK-6703
URL: https://issues.apache.org/jira/browse/SPARK-6703
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-6627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6627.
Resolution: Fixed
Fix Version/s: 1.4.0
Clean up of shuffle code and interfaces
[
https://issues.apache.org/jira/browse/SPARK-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6659:
---
Component/s: SQL
Spark SQL 1.3 cannot read json file that only with a record
[
https://issues.apache.org/jira/browse/SPARK-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell closed SPARK-6659.
--
Resolution: Invalid
Per the comment, I think the issue is the JSON is not correctly formatted
Hey Marcelo,
Great question. Right now, some of the more active developers have an
account that allows them to log into this cluster to inspect logs (we
copy the logs from each run to a node on that cluster). The
infrastructure is maintained by the AMPLab.
I will put you in touch with someone
Patrick Wendell created SPARK-6627:
--
Summary: Clean up of shuffle code and interfaces
Key: SPARK-6627
URL: https://issues.apache.org/jira/browse/SPARK-6627
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383413#comment-14383413
]
Patrick Wendell commented on SPARK-6561:
FYI - I just removed Affects Version's
[
https://issues.apache.org/jira/browse/SPARK-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6561:
---
Affects Version/s: (was: 1.3.1)
(was: 1.3.0)
Add partition
If you invoke this, you will get at-least-once semantics on failure.
For instance, if a machine dies in the middle of executing the foreach
for a single partition, that will be re-executed on another machine.
It could even fully complete on one machine, but the machine dies
immediately before
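The at-least-once behavior described above can be sketched outside Spark in plain
Python (all names here are hypothetical, not Spark API): a partition's
side-effecting foreach may run more than once after a failure, so an idempotent
sink absorbs the retry without duplicating its effect.

```python
# Toy model of at-least-once execution of a side-effecting foreach.
# A partition may be re-run after a failure, so the sink can see its
# records more than once; an idempotent sink (a set here) absorbs that.

def run_partition(records, sink, fail_after_commit=False):
    """Apply the side effect for every record, then maybe 'die'."""
    for r in records:
        sink.add(r)  # idempotent: adding the same record twice is harmless
    if fail_after_commit:
        raise RuntimeError("machine died after finishing the partition")

partition = [1, 2, 3]
sink = set()

# First attempt finishes its work but the machine dies before reporting
# success, so the scheduler retries the partition on another machine.
try:
    run_partition(partition, sink, fail_after_commit=True)
except RuntimeError:
    pass
run_partition(partition, sink)  # retry: every record is applied again

print(sorted(sink))  # [1, 2, 3] -- each record ends up in the sink once
```

With a non-idempotent sink (say, appending to a list) the same retry would
produce duplicates, which is exactly the caveat of at-least-once semantics.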
[
https://issues.apache.org/jira/browse/SPARK-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6544:
---
Fix Version/s: 1.3.1
Problem with Avro and Kryo Serialization
The source code should match the Spark commit
4aaf48d46d13129f0f9bdafd771dd80fe568a7dc. Do you see any differences?
On Fri, Mar 27, 2015 at 11:28 AM, Manoj Samel manojsamelt...@gmail.com wrote:
While looking into an issue, I noticed that the source displayed on the GitHub
site does not match the
[
https://issues.apache.org/jira/browse/SPARK-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-4073.
Resolution: Won't Fix
I have never seen anyone else run into this, so closing
[
https://issues.apache.org/jira/browse/SPARK-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5025.
Resolution: Won't Fix
I'm closing this as won't fix. There are now a bunch of community
[
https://issues.apache.org/jira/browse/SPARK-1844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-1844.
Resolution: Won't Fix
Closing given the combination of (a) this is not that important
[
https://issues.apache.org/jira/browse/SPARK-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-2709:
---
Target Version/s: (was: 1.2.0)
Add a tool for certifying Spark API compatibility
[
https://issues.apache.org/jira/browse/SPARK-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-2709:
---
Priority: Critical (was: Major)
Add a tool for certifying Spark API compatibility
[
https://issues.apache.org/jira/browse/SPARK-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell reopened SPARK-2709:
This came up in some recent conversations. I actually don't think we ever
merged
[
https://issues.apache.org/jira/browse/SPARK-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6405.
Resolution: Fixed
Assignee: Matthew Cheah
Spark Kryo buffer should be forced
[
https://issues.apache.org/jira/browse/SPARK-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6549.
Resolution: Won't Fix
I think this is a won't fix due to compatibility issues. If I'm wrong
I think we have a version of mapPartitions that allows you to tell
Spark the partitioning is preserved:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/RDD.scala#L639
We could also add a map function that does the same. Or you can just write
your map using an
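Why a preserved-partitioning hint is safe only for key-preserving maps can be
shown with a toy hash-partitioning model (plain Python, not Spark's actual
partitioner; the names are illustrative):

```python
# Toy model of hash partitioning. A key-preserving map keeps each record
# in the partition the partitioner originally assigned, which is the
# promise a "preserves partitioning" hint makes to the scheduler.

NUM_PARTITIONS = 4

def partition_of(key):
    return hash(key) % NUM_PARTITIONS

data = [("a", 1), ("b", 2), ("c", 3)]

# Value-only map: keys survive, so every record stays in its partition
# and a later co-partitioned operation can skip the shuffle.
mapped = [(k, v * 10) for k, v in data]
assert all(partition_of(k) == partition_of(k2)
           for (k, _), (k2, _) in zip(data, mapped))

# Key-rewriting map: the old partition assignment may no longer match
# the new keys, so the preserved-partitioning hint must not be used here.
rekeyed = [(k.upper(), v) for k, v in data]
```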
[
https://issues.apache.org/jira/browse/SPARK-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6499:
---
Component/s: PySpark
pyspark: printSchema command on a dataframe hangs
[
https://issues.apache.org/jira/browse/SPARK-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6520:
---
Component/s: Spark Shell
Kryo serialization broken in the shell
[
https://issues.apache.org/jira/browse/SPARK-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14380432#comment-14380432
]
Patrick Wendell commented on SPARK-6481:
Hey All,
One issue here, (I think
and pullreq when I have some time.
On Wed, Mar 25, 2015 at 1:23 AM, Patrick Wendell pwend...@gmail.com wrote:
I see - if you look, in the saving functions we have the option for
the user to pass an arbitrary Configuration.
https://github.com/apache/spark/blob/master/core/src/main/scala/org
Great - that's even easier. Maybe we could have a simple example in the doc.
On Wed, Mar 25, 2015 at 7:06 PM, Sandy Ryza sandy.r...@cloudera.com wrote:
Regarding Patrick's question, you can just do new Configuration(oldConf)
to get a cloned Configuration object and add any new properties to it.
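The clone-then-override pattern Sandy describes (Hadoop's `new
Configuration(oldConf)` copy constructor) can be mimicked with a plain mapping;
a minimal sketch, not Hadoop's actual class:

```python
# Sketch of clone-then-override: copy the shared base config, then set
# job-specific properties on the copy so the original is never mutated.

base_conf = {
    "fs.defaultFS": "hdfs://nn:8020",
    "io.file.buffer.size": "65536",
}

job_conf = dict(base_conf)  # analogue of new Configuration(oldConf)
job_conf["mapreduce.output.compress"] = "true"  # per-job override

print(base_conf.get("mapreduce.output.compress"))  # None: base untouched
```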
Hey Jim,
Thanks for reporting this. Can you give a small end-to-end code
example that reproduces it? If so, we can definitely fix it.
- Patrick
On Tue, Mar 24, 2015 at 4:55 PM, Jim Carroll jimfcarr...@gmail.com wrote:
I have code that works under 1.2.1 but when I upgraded to 1.3.0 it fails to
My philosophy has been basically what you suggested, Sean. One thing
you didn't mention though is if a bug fix seems complicated, I will
think very hard before back-porting it. This is because fixes can
introduce their own new bugs, in some cases worse than the original
issue. It's really bad to
Yeah - to Nick's point, I think the way to do this is to pass in a
custom conf when you create a Hadoop RDD (that's AFAIK why the conf
field is there). Is there anything you can't do with that feature?
On Tue, Mar 24, 2015 at 11:50 AM, Nick Pentreath
nick.pentre...@gmail.com wrote:
Imran, on
Hey All,
For a while we've published binary packages with different Hadoop
client's pre-bundled. We currently have three interfaces to a Hadoop
cluster (a) the HDFS client (b) the YARN client (c) the Hive client.
Because (a) and (b) are supposed to be backwards compatible
interfaces. My working
[
https://issues.apache.org/jira/browse/SPARK-2331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376495#comment-14376495
]
Patrick Wendell commented on SPARK-2331:
By the way - [~rxin] recently pointed out
[
https://issues.apache.org/jira/browse/SPARK-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell reopened SPARK-6122:
I reverted this because it looks like it was responsible for some testing
failures due
If the official solution from the Scala community is to use Java
enums, then it seems strange they aren't generated in scaladoc? Maybe
we can just fix that w/ Typesafe's help and then we can use them.
On Mon, Mar 23, 2015 at 1:46 PM, Sean Owen so...@cloudera.com wrote:
Yeah the fully realized #4,
Hey Yiannis,
If you just perform a count on each (name, date) pair... can it succeed?
If so, can you do a count and then order by to find the largest one?
I'm wondering if there is a single pathologically large group here that is
somehow causing OOM.
Also, to be clear, you are getting GC limit
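The debugging step suggested above amounts to computing group sizes instead of
materializing the groups; over a toy in-memory dataset (plain Python, not Spark
SQL, and the row values are made up) it looks like:

```python
from collections import Counter

# Count rows per (name, date) key and surface the biggest group --
# a pathologically skewed key shows up here long before a full
# groupBy would run out of memory materializing it.

rows = [
    ("alice", "2015-03-23"),
    ("bob",   "2015-03-23"),
    ("alice", "2015-03-23"),
    ("alice", "2015-03-24"),
]

sizes = Counter(rows)  # count per (name, date) pair
biggest_key, biggest_n = sizes.most_common(1)[0]  # "order by count desc limit 1"

print(biggest_key, biggest_n)  # ('alice', '2015-03-23') 2
```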
[
https://issues.apache.org/jira/browse/SPARK-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6449:
---
Component/s: (was: Spark Core)
YARN
Driver OOM results in reported
[
https://issues.apache.org/jira/browse/SPARK-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6456:
---
Component/s: (was: Spark Core)
Spark Sql throwing exception on large partitioned data
[
https://issues.apache.org/jira/browse/SPARK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-2858.
Resolution: Invalid
This is really old and I don't think it is still an issue. I'm just
[
https://issues.apache.org/jira/browse/SPARK-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375229#comment-14375229
]
Patrick Wendell commented on SPARK-5863:
This seems worth potentially fixing
[
https://issues.apache.org/jira/browse/SPARK-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5863:
---
Target Version/s: 1.3.1, 1.4.0 (was: 1.4.0)
Improve performance of convertToScala codepath
[
https://issues.apache.org/jira/browse/SPARK-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4227:
---
Priority: Critical (was: Major)
Document external shuffle service
[
https://issues.apache.org/jira/browse/SPARK-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4227:
---
Target Version/s: 1.3.1, 1.4.0 (was: 1.3.0, 1.4.0)
Document external shuffle service
[
https://issues.apache.org/jira/browse/SPARK-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5863:
---
Target Version/s: 1.4.0 (was: 1.3.1, 1.4.0)
Improve performance of convertToScala codepath
[
https://issues.apache.org/jira/browse/SPARK-6012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6012:
---
Target Version/s: 1.4.0 (was: 1.3.1, 1.4.0)
Deadlock when asking for partitions from
[
https://issues.apache.org/jira/browse/SPARK-6012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6012:
---
Target Version/s: 1.3.1, 1.4.0 (was: 1.4.0)
Deadlock when asking for partitions from
[
https://issues.apache.org/jira/browse/SPARK-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375230#comment-14375230
]
Patrick Wendell commented on SPARK-5863:
Ah actually - I see [~marmbrus
[
https://issues.apache.org/jira/browse/SPARK-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4925:
---
Fix Version/s: (was: 1.2.1)
(was: 1.3.0)
Publish Spark SQL hive
[
https://issues.apache.org/jira/browse/SPARK-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4925:
---
Priority: Critical (was: Major)
Publish Spark SQL hive-thriftserver maven artifact
[
https://issues.apache.org/jira/browse/SPARK-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4925:
---
Affects Version/s: (was: 1.2.0)
1.3.0
1.2.1
[
https://issues.apache.org/jira/browse/SPARK-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4123:
---
Summary: Show dependency changes in pull requests (was: Show new
dependencies added in pull
[
https://issues.apache.org/jira/browse/SPARK-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell reopened SPARK-4925:
Thanks for bringing this up. Actually - realized this wasn't fixed by some of
the other work
[
https://issues.apache.org/jira/browse/SPARK-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4925:
---
Target Version/s: 1.3.1
Publish Spark SQL hive-thriftserver maven artifact