Github user jameszhouyi commented on the issue:
https://github.com/apache/spark/pull/15011
Thanks a lot @gatorsmile @cloud-fan @adrian-wang !
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not
Github user jameszhouyi commented on the issue:
https://github.com/apache/spark/pull/14169
Hi Spark guys,
Could you please help review this PR so it can be merged into Spark 2.0.0? Thanks
in advance!
Best Regards,
Yi
---
Github user jameszhouyi commented on the issue:
https://github.com/apache/spark/pull/14169
Hi,
Cool! All of my cases related to the transformation script PASSED after
applying this PR. Could the Spark folks please review this code and merge the PR?
Thanks a lot!
Best
Github user jameszhouyi commented on the issue:
https://github.com/apache/spark/pull/13542
Hi,
Could you please help review this PR so it can be merged into 2.0.0? This
breakage has blocked our testing. Thanks!
---
Github user jameszhouyi commented on the issue:
https://github.com/apache/spark/pull/13542
Hi Spark community!
Could you please help review this PR so it can be merged into 2.0.0? This bug
has broken our real-world cases. Thanks in advance!
---
Github user jameszhouyi commented on the issue:
https://github.com/apache/spark/pull/13542
Hi @chenghao-intel
I have tested this PR and it works for my case.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/9589#issuecomment-161855688
Thanks @marmbrus for your response. This is a regression bug (not present in
1.5.x), so hopefully it can be fixed in 1.6.0.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/9589#issuecomment-161215023
This is a critical bug. We strongly hope it can be fixed and merged in 1.6.0.
Thanks!
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/9589#issuecomment-159505641
Hi @adrian-wang ,
For SPARK-11972, the case passes now after applying the patch. Thanks!
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/9055#issuecomment-153918005
Thank you @gatorsmile for your suggestion.
I think this feature ("IN" subquery) is necessary for the Spark SQL engine as a
SQL-on-Hadoop solution.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/9055#issuecomment-152941016
Hi @yhuai ,
This missing feature ("IN" subquery) in Spark SQL has blocked our real-world
case. Could you please help review this PR? Strongly hopeful
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8652#issuecomment-142917749
Hi @yhuai, I see this PR has been ready for some time. Could you help review
it? Hopefully it can be fixed in 1.5.1.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8476#issuecomment-140675429
I saw the issue marked as 'Target Version 1.5.1'; hopefully it can be
merged in 1.5.1.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8652#issuecomment-138741073
With the optimized patch, we can see "CartesianProduct" optimized to
"BroadcastNestedLoopJoin" in the physical plan for the cross join. The benchmar
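For readers unfamiliar with the plans mentioned above: a cross join produces the Cartesian product of its inputs, which is why Spark's unoptimized plan is called "CartesianProduct". A minimal plain-Python sketch of that semantics (illustrative only; not the Spark SQL API or the benchmark discussed here):

```python
from itertools import product

# Cross join (Cartesian product) semantics: every left row is paired
# with every right row, yielding len(left) * len(right) output rows.
left = [1, 2, 3]
right = ["a", "b"]

crossed = list(product(left, right))
print(len(crossed))  # 6
print(crossed[0])    # (1, 'a')
```

The optimization in the PR changes how Spark executes this product (broadcasting the smaller side), not the result set itself.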
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8652#issuecomment-138538607
Hi @chenghao-intel ,
After applying the patch, the case passes, and we also get better cross-join
performance than before the optimization.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8476#issuecomment-13695
This is a real-world case using Spark SQL, and hopefully it can be
fixed/merged in Spark 1.5.0. Thanks in advance!
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8476#issuecomment-136362940
Applying this PR's patch on top of Spark master (commit
8d2ab75d3b71b632f2394f2453af32f417cb45e5), the previously broken cases pass now.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/8331#issuecomment-133317182
This is a blocker issue, and hopefully it will be fixed in Spark 1.5.0. Thanks!
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-127498023
Hi @steveloughran ,
Have you come across the errors below when building Spark? Please correct me
if anything is missing or wrong. (Build command: mvn -Pyarn -Phadoop-2.4
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/6638#issuecomment-125098294
Applying this PR on top of commit 'c025c3d0a1fdfbc45b64db9c871176b40b4a7b9b',
the case related to script transform passes now.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/6638#issuecomment-119795927
Thanks!
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/6638#issuecomment-119771895
Hi,
I saw 'Merged build finished. Test FAILed.' Is there a newer version of the
fix?
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/5688#issuecomment-104464124
@viirya, please see the query details below, which use a script transform:
ADD FILE ${env:BIG_BENCH_QUERIES_DIR}/Resources/bigbenchqueriesmr.jar
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/5688#issuecomment-103314083
Hi,
I tried applying this patch to run a SQL query with the 'Using' script
operation and found the error below being thrown:
15/04/27 09:
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/4035#issuecomment-70643624
Sorry, this was opened accidentally; please kindly close it. Thanks!
---
GitHub user jameszhouyi opened a pull request:
https://github.com/apache/spark/pull/4035
Merge pull request #1 from apache/master
[SPARK-2140] Updating heap memory calculation for YARN stable and alpha.
You can merge this pull request into a Git repository by running:
$ git
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/2646#issuecomment-58307284
Hi @davies ,
The error has been fixed via 'yum install time'. Thanks.
---
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/2646#issuecomment-58205767
Hi @davies @JoshRosen
Found the errors below after adding 'time' in run-tests:
Running PySpark tests. Output is in python/unit-tests.log.
Te
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/2444#issuecomment-57071944
Hi @pwendell ,
After this commit, spark-perf complains 'not found slaves' when running
./bin/run..., so we have to rename slaves.template to slave
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/2359#issuecomment-55473168
Hi @andrewor14, I just pulled the latest branch-1.1 and ran again, but I
still run into the issue described below; please kindly review the details (I
ran it on
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/2359#issuecomment-55369150
Hi @andrewor14,
I also tested this on apache/spark master, and there are no errors, matching
your result. I am not sure what causes this inconsistent behavior on master v.s
Github user jameszhouyi commented on the pull request:
https://github.com/apache/spark/pull/2359#issuecomment-55368800
The issue was found in apache/spark branch-1.1.
---
GitHub user jameszhouyi reopened a pull request:
https://github.com/apache/spark/pull/2359
SPARK-3480 - Throws out Not a valid command 'yarn-alpha/scalastyle' in
dev/scalastyle for sbt build tool during 'Running Scala style checks'
Symptom:
Run ./dev/run-tests
Github user jameszhouyi closed the pull request at:
https://github.com/apache/spark/pull/2359
---
GitHub user jameszhouyi opened a pull request:
https://github.com/apache/spark/pull/2359
SPARK-3480 - Throws out Not a valid command 'yarn-alpha/scalastyle' in
dev/scalastyle for sbt build tool during 'Running Scala style checks'
Symptom:
Run ./dev/run-tests
Github user jameszhouyi closed the pull request at:
https://github.com/apache/spark/pull/2353
---
GitHub user jameszhouyi reopened a pull request:
https://github.com/apache/spark/pull/2353
[SPARK-3480] Throws out Not a valid command 'yarn-alpha/scalastyle' in
dev/scalastyle for sbt build tool during 'Running Scala style checks'
Symptom:
Run ./dev/run-tests
Github user jameszhouyi closed the pull request at:
https://github.com/apache/spark/pull/2353
---
GitHub user jameszhouyi reopened a pull request:
https://github.com/apache/spark/pull/2353
[SPARK-3480] Throws out Not a valid command 'yarn-alpha/scalastyle' in
dev/scalastyle for sbt build tool during 'Running Scala style checks'
Symptom:
Run ./dev/run-tests
Github user jameszhouyi closed the pull request at:
https://github.com/apache/spark/pull/2353
---
GitHub user jameszhouyi opened a pull request:
https://github.com/apache/spark/pull/2353
Branch 1.1
Symptom:
Run ./dev/run-tests and dump outputs as following:
SBT_MAVEN_PROFILES_ARGS="-Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0
-Pkinesis-asl"
[Warn] Java 8