Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/12877
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216713635
Thanks, merging into master/2.0
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216697634
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216697630
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216697465
**[Test build #57679 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57679/consoleFull)** for PR 12877 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216692547
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216692550
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216692365
**[Test build #57675 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57675/consoleFull)** for PR 12877 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216687286
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216687288
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216686843
**[Test build #57666 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57666/consoleFull)** for PR 12877 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216681554
**[Test build #57679 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57679/consoleFull)** for PR 12877 at commit
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216678197
yup needs to be transient, will fix
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216677161
I think it's OK for it to be lazy; I just wanted to understand why. It should be transient, though, since `sparkSession` is also transient.
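The `@transient` + `lazy` combination discussed here can be sketched outside Spark (hypothetical names, not the actual Dataset code): a value derived from a transient field should itself be transient, so it is recomputed after deserialization rather than serialized along with the object.

```scala
import java.io.Serializable

// Sketch with made-up names: `session` stands in for the transient
// `sparkSession`, `ctx` for the derived `sqlContext`.
class Holder(@transient val session: String) extends Serializable {
  // @transient so the derived value is never serialized; lazy so it is
  // recomputed on first access after deserialization.
  @transient lazy val ctx: String = s"ctx-for-$session"
}
```

In Spark the session is restored on the receiving side; the point is only that a non-transient derived field would drag extra state into the serialized object.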
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216676070
**[Test build #57675 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57675/consoleFull)** for PR 12877 at commit
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216675245
if a SparkSession sits inside a Dataset does that mean _wrapped is always
already initialized (because you cannot have a Dataset without a
SparkContext)? if
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216670925
i made it a lazy val since SparkSession.wrapped is effectively lazy too:
protected[sql] def wrapped: SQLContext = {
if (_wrapped == null) {
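The null-check memoization quoted above is behaviorally close to a `lazy val`. A minimal standalone sketch (generic types, not Spark's actual fields):

```scala
class Session {
  private var _wrapped: String = _

  // Memoized def, in the style of the SparkSession.wrapped snippet quoted
  // above: initialized on first access, cached afterwards.
  def wrapped: String = {
    if (_wrapped == null) _wrapped = "sql-context"
    _wrapped
  }

  // A lazy val gives the same initialize-on-first-use behavior, with the
  // added benefit that a val (unlike a def) is a stable identifier.
  lazy val wrappedLazy: String = "sql-context"
}
```

A `lazy val` also adds thread-safe initialization, which the bare null check does not.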
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216670423
oh, since sparkSession is just a normal val i guess it can also be
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216668953
Looks good otherwise.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12877#discussion_r61958695
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -211,7 +211,7 @@ class Dataset[T] private[sql](
private implicit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12877#discussion_r61958574
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -211,7 +211,7 @@ class Dataset[T] private[sql](
private implicit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12877#discussion_r61958468
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -211,7 +211,7 @@ class Dataset[T] private[sql](
private implicit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216659629
**[Test build #57666 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57666/consoleFull)** for PR 12877 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/12877#issuecomment-216659182
cc @andrewor14
GitHub user koertkuipers opened a pull request:
https://github.com/apache/spark/pull/12877
[SPARK-15097][SQL] make Dataset.sqlContext a stable identifier for imports
## What changes were proposed in this pull request?
Make Dataset.sqlContext a lazy val so that it's a stable identifier
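The "stable identifier" requirement can be sketched without Spark (hypothetical classes; in Scala, only `val` members, not `def`s, may appear in an import path):

```scala
// Hypothetical stand-ins for SQLContext and Dataset.
class Context { object implicits { implicit val tag: Int = 1 } }

class ViaDef { def context: Context = new Context }      // def: NOT a stable identifier
class ViaVal { lazy val context: Context = new Context } // lazy val: stable

// val d = new ViaDef; import d.context.implicits._  // error: stable identifier required
// val v = new ViaVal; import v.context.implicits._  // compiles
```

This mirrors the change in the PR: switching `Dataset.sqlContext` from a `def` to a `lazy val` lets callers write `import ds.sqlContext.implicits._` directly.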