Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13818
> I have a few questions.
>
> Is it a regression from 1.6? Looks like not?
I don't know about 1.6. I know it's a regression from 1.5.
> Is it a corre
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13818
@zsxwing I was able to do the following without error:
git clone g...@github.com:apache/spark.git spark-master
cd spark-master
./dev/change-scala-version.sh 2.10
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/14031
Thank you.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13818
I believe I've addressed @liancheng's style issues in my new unit test,
along with the same in the two tests from which it was copy-pasta'd (boy scout
rule). Hopefully I didn't cock it up
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/14031#discussion_r69479805
--- Diff: project/SparkBuild.scala ---
@@ -723,8 +723,8 @@ object Unidoc {
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/h
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/14031#discussion_r69385673
--- Diff: project/SparkBuild.scala ---
@@ -723,8 +723,8 @@ object Unidoc {
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/h
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/14031#discussion_r69382212
--- Diff: project/SparkBuild.scala ---
@@ -723,8 +723,8 @@ object Unidoc {
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/h
GitHub user mallman opened a pull request:
https://github.com/apache/spark/pull/14031
[SPARK-16353][BUILD][DOC] Missing javadoc options for java unidoc
## What changes were proposed in this pull request?
The javadoc options for the java unidoc generation are ignored when
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r69233431
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -265,9 +265,12 @@ private[hive] class HiveMetastoreCatalog
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13818
You are very welcome. Thank you for taking time to review it!
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r69231754
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -298,6 +298,7 @@ case class InsertIntoHiveTable
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r69230833
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -265,9 +265,12 @@ private[hive] class HiveMetastoreCatalog
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r69159655
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -200,7 +201,6 @@ private[hive] class HiveMetastoreCatalog
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r69107723
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -191,6 +191,7 @@ private[hive] class HiveMetastoreCatalog
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r69106546
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -200,7 +201,6 @@ private[hive] class HiveMetastoreCatalog
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r68064203
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/parquetSuites.scala ---
@@ -425,6 +425,28 @@ class ParquetMetastoreSuite extends
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13818
@hvanhovell I'm mentioning you here because you commented on my previous PR
for this Jira issue. In response to your original question, yes, I have added a
unit test for this patch.
GitHub user mallman opened a pull request:
https://github.com/apache/spark/pull/13818
[SPARK-15968][SQL] Nonempty partitioned metastore tables are not cached
(Please note this is a revision of PR #13686, which has been closed in
favor of this PR.)
## What changes were
Github user mallman closed the pull request at:
https://github.com/apache/spark/pull/13686
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13686
I'm going to close this PR and open a new one when I've fixed the test
failures. My bad.
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13686
Aaaak! Some unit tests are failing on my build. Sorry, I will re-examine
and submit a new commit. Ugh.
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13686
Actually, let me think about this some more...
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/13686
@hvanhovell Sounds like a good idea, but I don't know how to unit test this
without opening up some of this caching api to at least the `private[hive]`
access level. Would that be acceptable? I'm
GitHub user mallman opened a pull request:
https://github.com/apache/spark/pull/13686
[SPARK-15968][SQL] HiveMetastoreCatalog does not correctly validate
## What changes were proposed in this pull request?
The `getCached` method of `HiveMetastoreCatalog` computes
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/10700#issuecomment-174599806
Thanks, @srowen.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/10700#issuecomment-173490509
Here are my current thoughts. Josh says this functionality is going to be
removed in Spark 2.0. The bug this PR is designed to address manifests itself
in Spark 1.5
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/10700#issuecomment-172564021
Sorry guys. I bungled the ordering of the `stop()` calls. That's what I get
for doing a manual patch from a manual diff from another branch-1.5...
:disappointed
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/10700#issuecomment-172440507
Hi Josh,
Good questions. I may have submitted this PR incorrectly. Perhaps you can
guide me in the right direction.
I submitted this PR for merging
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/10700#issuecomment-172443532
I should also state that my original motivation in submitting this patch
was to address the confusing log messages
Application ... is still in progress
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/10700#issuecomment-170659870
Changed Jira ref from SPARK-6950 to SPARK-12755. SPARK-6950 is an older,
defunct ticket. Oops.
GitHub user mallman opened a pull request:
https://github.com/apache/spark/pull/10700
[SPARK-6950][CORE] Stop the event logger before the DAG scheduler
[SPARK-6950][CORE] Stop the event logger before the DAG scheduler to avoid
a race condition where the standalone master attempts
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/7345#issuecomment-165532682
To add my two cents, I think that to call this change "cosmetic" is
strictly true but underrates its value. In our case we have additional
monitori
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/7345#issuecomment-165559632
@andrewor14 We've put this in production. Everything looks good. Hostnames
show up in the UI as expected. No broken links.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/7639#issuecomment-124568478
Thanks for the fix @srowen. It was my oversight to assume it was safe to
remove these scripts in the first place.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-122957103
@srowen I've pushed a new commit to replace usage of
`dev/change-version-to-*.sh` scripts with `dev/change-scala-version.sh
version`. I also modified the latter so
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-122946797
@srowen I'm working on this now.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-122112493
@srowen Sorry, I've been swamped. I think I can get this done by Saturday
if you want to wait.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-120988236
@srowen @ScrapCodes Let me know if you'd like me to take on those
additional tasks. Cheers.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-120059239
I've pushed a commit to implement the second strategy. I've tested this
script successfully on OS X Yosemite and Ubuntu 14.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-119708081
Thanks for the tip, @srowen. That works.
I now have a version following approach (2) which I've verified works on OS
X with its built-in sed. I'll test on a GNU
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-119266285
I'll work on a revision following along the lines of what @ScrapCodes did
and push it to this PR. Incidentally, I was going to suggest we use `mktemp` to
create
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-119333018
I've run into a roadblock. This syntax:
sed -e '0,/<scala\.binary\.version>2.10/s//<scala.binary.version>2.11/' pom.xml
doesn't work with my Mac's sed
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-119375569
The original code replaces only the first instance of
`<scala.binary.version>2.10` in the file, which is the desired behavior. The
code you presented replaces all of them
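(To illustrate the distinction with made-up input, not the real pom.xml: a plain `s///` replaces the first match on *every* line, while the GNU-only `0,/re/` address limits the substitution to the first match in the whole file:)

```shell
# GNU sed only: the 0,/re/ address restricts s/// to the first match
# in the entire file. Illustrative two-line input:
printf '2.10\n2.10\n' > /tmp/versions.txt

sed '0,/2\.10/s//2.11/' /tmp/versions.txt   # first occurrence only
sed 's/2\.10/2.11/' /tmp/versions.txt       # first match on each line
```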
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-118999888
I've spent some more time googling around this problem. Unsurprisingly,
there's plenty of discussion/frustration around finding a cross-platform
solution. There doesn't
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-118743436
@srowen I just returned from my vacation abroad and am catching up. Sorry
for the wait. I'll take a look at this tomorrow. Cheers.
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-113548582
Sure thing. FYI, I'm leaving for Iceland tomorrow (Saturday), and I'll be
away for two weeks. I will probably be incommunicado during this time. If you
need something
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-113206728
Indeed the build does generate the scaladoc in the right location, but the
`docs/_plugin/copy_api_dirs.rb` script is currently hardcoded to always look
for the api docs
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/6832#discussion_r32703376
--- Diff: dev/change-scala-version.sh ---
@@ -0,0 +1,63 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/6832#discussion_r32649538
--- Diff: dev/change-scala-version.sh ---
@@ -0,0 +1,63 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/6832#discussion_r32650306
--- Diff: dev/change-scala-version.sh ---
@@ -0,0 +1,63 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-112477102
@srowen Will create a Jira ticket.
@ScrapCodes This is what I get with (presumably BSD) sed on OS X:
```
[msa@Michaels-MacBook-Pro spark-1.4]$ ./dev
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-112478712
@srowen Should I create one Jira ticket for this or multiple?
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-112608540
@srowen I created the Jira ticket which shows the problem with the current
version changing scripts.
GitHub user mallman opened a pull request:
https://github.com/apache/spark/pull/6832
Scala version switching build enhancements
These commits address a few minor issues in the Scala cross-version support
in the build:
1. Correct two missing `${scala.binary.version}` pom
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/3703#issuecomment-83691288
I'm confused. Why was this PR abruptly closed? Was there another active PR
for window functions?
Github user mallman commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-73081758
FWIW I'd like to add my two cents. The main piece of functionality the
installation at my company would benefit from is independent user sessions. I'm
not familiar