[
https://issues.apache.org/jira/browse/SPARK-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339690#comment-14339690
]
Jeff Zhang commented on SPARK-1015:
---
[~sowen] I may not have time for this recently.
[
https://issues.apache.org/jira/browse/SPARK-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14589699#comment-14589699
]
Jeff Zhang commented on SPARK-6653:
---
Although this is already committed, would it be
[
https://issues.apache.org/jira/browse/SPARK-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694619#comment-14694619
]
Jeff Zhang commented on SPARK-4311:
---
[~rafa.alfaro] I couldn't reproduce this issue.
[
https://issues.apache.org/jira/browse/SPARK-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694608#comment-14694608
]
Jeff Zhang commented on SPARK-2971:
---
Looks like it has been resolved.
[
https://issues.apache.org/jira/browse/SPARK-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14694608#comment-14694608
]
Jeff Zhang edited comment on SPARK-2971 at 8/13/15 3:05 AM:
[
https://issues.apache.org/jira/browse/SPARK-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702777#comment-14702777
]
Jeff Zhang commented on SPARK-9195:
---
Find more issues on RDD/Storage UI
* Column Size in
[
https://issues.apache.org/jira/browse/SPARK-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702777#comment-14702777
]
Jeff Zhang edited comment on SPARK-9195 at 8/19/15 9:52 AM:
[
https://issues.apache.org/jira/browse/SPARK-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647313#comment-14647313
]
Jeff Zhang commented on SPARK-8167:
---
[~mcheah] What's the status of this ticket ? I
Jeff Zhang created SPARK-11279:
--
Summary: Add DataFrame#toDF in PySpark
Key: SPARK-11279
URL: https://issues.apache.org/jira/browse/SPARK-11279
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975906#comment-14975906
]
Jeff Zhang commented on SPARK-11342:
Yes, it would be ideal to allow setting any available profile for
[
https://issues.apache.org/jira/browse/SPARK-11342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975888#comment-14975888
]
Jeff Zhang commented on SPARK-11342:
[~sowen] Isn't it also for local testing ?
Jeff Zhang created SPARK-11342:
--
Summary: Allow to set hadoop profile when running dev/run_tests
Key: SPARK-11342
URL: https://issues.apache.org/jira/browse/SPARK-11342
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang reopened SPARK-11102:
Reopen it
> Uninformative exception when specifing non-exist input for JSON data source
>
[
https://issues.apache.org/jira/browse/SPARK-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-10388:
---
Attachment: SPARK-10388PublicDataSetLoaderInterface.pdf
> Public dataset loader interface
>
[
https://issues.apache.org/jira/browse/SPARK-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998759#comment-14998759
]
Jeff Zhang commented on SPARK-10388:
[~mengxr] I talked with [~rams] offline, and would love to
Jeff Zhang created SPARK-11622:
--
Summary: Make LibSVMRelation extends HadoopFsRelation
Key: SPARK-11622
URL: https://issues.apache.org/jira/browse/SPARK-11622
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11622:
---
Summary: Make LibSVMRelation extends HadoopFsRelation and Add
LibSVMOutputWriter (was: Make
Jeff Zhang created SPARK-11691:
--
Summary: Allow to specify compression codec in HadoopFsRelation
when saving
Key: SPARK-11691
URL: https://issues.apache.org/jira/browse/SPARK-11691
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001801#comment-15001801
]
Jeff Zhang commented on SPARK-11691:
Will create a PR soon.
> Allow to specify compression codec in
[
https://issues.apache.org/jira/browse/SPARK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005208#comment-15005208
]
Jeff Zhang commented on SPARK-11725:
Thanks [~hvanhovell] Should we prevent using primitives in UDF
[
https://issues.apache.org/jira/browse/SPARK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005399#comment-15005399
]
Jeff Zhang commented on SPARK-11725:
I am on master
> Let UDF to handle null value
>
Jeff Zhang created SPARK-11747:
--
Summary: Can not specify input path in python logistic_regression
example under ml
Key: SPARK-11747
URL: https://issues.apache.org/jira/browse/SPARK-11747
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11747:
---
Component/s: Examples
> Can not specify input path in python logistic_regression example under ml
>
[
https://issues.apache.org/jira/browse/SPARK-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11747:
---
Description:
Not sure why it is hard coded; it would be nice to allow the user to specify the
input path
[
https://issues.apache.org/jira/browse/SPARK-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997831#comment-14997831
]
Jeff Zhang commented on SPARK-11368:
Looks like an issue in QueryPlan optimization step, will work on
[
https://issues.apache.org/jira/browse/SPARK-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997711#comment-14997711
]
Jeff Zhang commented on SPARK-6517:
---
Yes, I'd love to do the follow up work. will create jira for Python
[
https://issues.apache.org/jira/browse/SPARK-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997711#comment-14997711
]
Jeff Zhang edited comment on SPARK-6517 at 11/10/15 12:09 AM:
--
Yes, I'd love
[
https://issues.apache.org/jira/browse/SPARK-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997728#comment-14997728
]
Jeff Zhang commented on SPARK-6517:
---
Oh, got it, thanks for letting me know.
> Bisecting k-means
[
https://issues.apache.org/jira/browse/SPARK-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991314#comment-14991314
]
Jeff Zhang commented on SPARK-11145:
I ran it on the master, seems it has been resolved.
> Cannot
[
https://issues.apache.org/jira/browse/SPARK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11102:
---
Summary: Unreadable exception when specifing non-exist input for JSON data
source (was: Not
[
https://issues.apache.org/jira/browse/SPARK-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963029#comment-14963029
]
Jeff Zhang commented on SPARK-10861:
[~JihongMA] what's your progress on this ?
> Univariate
[
https://issues.apache.org/jira/browse/SPARK-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958397#comment-14958397
]
Jeff Zhang commented on SPARK-11125:
Will create pull request soon.
> Unreadable exception when
Jeff Zhang created SPARK-11125:
--
Summary: Unreadable exception when running spark-sql without
building with -Phive-thriftserver and SPARK_PREPEND_CLASSES is set
Key: SPARK-11125
URL:
[
https://issues.apache.org/jira/browse/SPARK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11099:
---
Description:
spark.driver.extraClassPath doesn't take effect in the latest code, and find
the root
Jeff Zhang created SPARK-11099:
--
Summary: Default conf property file is not loaded
Key: SPARK-11099
URL: https://issues.apache.org/jira/browse/SPARK-11099
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11099:
---
Component/s: Spark Submit
> Default conf property file is not loaded
>
[
https://issues.apache.org/jira/browse/SPARK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956399#comment-14956399
]
Jeff Zhang commented on SPARK-11099:
Will create a pull request soon
> Default conf property file is
[
https://issues.apache.org/jira/browse/SPARK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11099:
---
Component/s: Spark Shell
> Default conf property file is not loaded
>
[
https://issues.apache.org/jira/browse/SPARK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11102:
---
Issue Type: Improvement (was: Bug)
> Not readable exception when specifing non-exist input for JSON
Jeff Zhang created SPARK-11102:
--
Summary: Not readable exception when specifing non-exist input for
JSON data source
Key: SPARK-11102
URL: https://issues.apache.org/jira/browse/SPARK-11102
Project:
[
https://issues.apache.org/jira/browse/SPARK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11102:
---
Priority: Minor (was: Major)
> Not readable exception when specifing non-exist input for JSON data
[
https://issues.apache.org/jira/browse/SPARK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956491#comment-14956491
]
Jeff Zhang commented on SPARK-11102:
Will create a pull request soon
> Not readable exception when
[
https://issues.apache.org/jira/browse/SPARK-11099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11099:
---
Affects Version/s: 1.5.1
> Default conf property file is not loaded
>
[
https://issues.apache.org/jira/browse/SPARK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11102:
---
Summary: Uninformative exception when specifing non-exist input for JSON
data source (was:
[
https://issues.apache.org/jira/browse/SPARK-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964821#comment-14964821
]
Jeff Zhang commented on SPARK-11205:
Will create PR soon.
> Delegate to scala DataFrame API rather
Jeff Zhang created SPARK-11204:
--
Summary: Delegate to scala DataFrame API rather than print in
python
Key: SPARK-11204
URL: https://issues.apache.org/jira/browse/SPARK-11204
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11205:
---
Description:
When I use DataFrame#explain(), I found the output is a little different from
scala
Jeff Zhang created SPARK-11205:
--
Summary: Delegate to scala DataFrame API rather than print in
python
Key: SPARK-11205
URL: https://issues.apache.org/jira/browse/SPARK-11205
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964586#comment-14964586
]
Jeff Zhang commented on SPARK-2654:
---
[~davies] I think currently spark-core also doesn't have logging
[
https://issues.apache.org/jira/browse/SPARK-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964550#comment-14964550
]
Jeff Zhang commented on SPARK-6517:
---
Is the work still going on ? If not, I'd like to help continue the
[
https://issues.apache.org/jira/browse/SPARK-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964656#comment-14964656
]
Jeff Zhang commented on SPARK-9299:
---
Link with SPARK-6761 as they can share the same algorithm
>
[
https://issues.apache.org/jira/browse/SPARK-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11205:
---
Summary: Match the output of DataFrame#explain() in both scala api and
python (was: Delegate to
Jeff Zhang created SPARK-11226:
--
Summary: Empty line in json file should be skipped
Key: SPARK-11226
URL: https://issues.apache.org/jira/browse/SPARK-11226
Project: Spark
Issue Type:
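SPARK-11226 above asks that a blank line in a JSON data file be skipped rather than fail parsing. A minimal Spark-free Python sketch of that line-filtering rule (the function name is illustrative, not Spark's implementation):

```python
import json

def parse_json_lines(text):
    """Parse newline-delimited JSON, skipping empty lines.

    Mirrors the behavior SPARK-11226 asks for: a blank line in the
    input should produce no record instead of a parse error.
    """
    records = []
    for line in text.splitlines():
        if not line.strip():  # skip empty / whitespace-only lines
            continue
        records.append(json.loads(line))
    return records
```

For example, `parse_json_lines('{"a": 1}\n\n{"a": 2}\n')` yields both records instead of raising on the blank middle line.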
[
https://issues.apache.org/jira/browse/SPARK-11002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang resolved SPARK-11002.
Resolution: Duplicate
Duplicate of SPARK-10915
> pyspark doesn't support UDAF
>
[
https://issues.apache.org/jira/browse/SPARK-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11798:
---
Description:
I notice the comments in https://github.com/apache/spark/pull/9575 said that
Jeff Zhang created SPARK-11798:
--
Summary: Datanucleus jars is missing under lib_managed/jars
Key: SPARK-11798
URL: https://issues.apache.org/jira/browse/SPARK-11798
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15010140#comment-15010140
]
Jeff Zhang commented on SPARK-11804:
It is a bug in PySpark, working on it.
> Exception raise when
[
https://issues.apache.org/jira/browse/SPARK-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11804:
---
Priority: Minor (was: Major)
> Exception raise when using Jdbc predicates option in PySpark
>
Jeff Zhang created SPARK-11804:
--
Summary: Exception raise when using Jdbc predicates option in
PySpark
Key: SPARK-11804
URL: https://issues.apache.org/jira/browse/SPARK-11804
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11804:
---
Priority: Major (was: Minor)
> Exception raise when using Jdbc predicates option in PySpark
>
Jeff Zhang created SPARK-11725:
--
Summary: Let UDF to handle null value
Key: SPARK-11725
URL: https://issues.apache.org/jira/browse/SPARK-11725
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003836#comment-15003836
]
Jeff Zhang commented on SPARK-11725:
And I found that PySpark will allow the UDF to handle null
[
https://issues.apache.org/jira/browse/SPARK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003950#comment-15003950
]
Jeff Zhang commented on SPARK-11725:
bq. So there is no way to express null; in these case scala will
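The SPARK-11725 discussion above concerns UDFs that crash when handed a null argument. A minimal Spark-free sketch of the null-guard wrapper pattern under discussion (the `null_safe` name is hypothetical, not a Spark API):

```python
def null_safe(f, default=None):
    """Wrap a function so any None argument short-circuits to
    `default` instead of reaching the function body, which would
    otherwise raise on None."""
    def wrapper(*args):
        if any(a is None for a in args):
            return default
        return f(*args)
    return wrapper

# A UDF body that would crash on None input:
length = null_safe(lambda s: len(s))
```

Here `length("abc")` returns 3 while `length(None)` returns None rather than raising `TypeError`.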
[
https://issues.apache.org/jira/browse/SPARK-11775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15008364#comment-15008364
]
Jeff Zhang commented on SPARK-11775:
Working on it.
> Allow PySpark to register Java UDF
>
Jeff Zhang created SPARK-11775:
--
Summary: Allow PySpark to register Java UDF
Key: SPARK-11775
URL: https://issues.apache.org/jira/browse/SPARK-11775
Project: Spark
Issue Type: New Feature
[
https://issues.apache.org/jira/browse/SPARK-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15008360#comment-15008360
]
Jeff Zhang commented on SPARK-5185:
---
I think pyspark --jars do put classes to driver class path. But the
[
https://issues.apache.org/jira/browse/SPARK-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-10481:
---
Description:
It happens when SPARK_PREPEND_CLASSES is set and run spark on yarn.
If
Jeff Zhang created SPARK-10481:
--
Summary: SPARK_PREPEND_CLASSES make spark-yarn related jar could
not be found
Key: SPARK-10481
URL: https://issues.apache.org/jira/browse/SPARK-10481
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14734134#comment-14734134
]
Jeff Zhang commented on SPARK-10481:
Working on it (Try to throw a more readable exception)
>
[
https://issues.apache.org/jira/browse/SPARK-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-10481:
---
Description:
It happens when SPARK_PREPEND_CLASSES is set and run spark on yarn.
If
Jeff Zhang created SPARK-10526:
--
Summary: Display cores/memory on ExecutorsTab
Key: SPARK-10526
URL: https://issues.apache.org/jira/browse/SPARK-10526
Project: Spark
Issue Type: Improvement
Jeff Zhang created SPARK-10530:
--
Summary: Kill other task attempts when one taskattempt belonging
the same task is succeeded in speculation
Key: SPARK-10530
URL: https://issues.apache.org/jira/browse/SPARK-10530
[
https://issues.apache.org/jira/browse/SPARK-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738187#comment-14738187
]
Jeff Zhang commented on SPARK-9790:
---
Pretty useful feature IMO, any progress on it ?
> [YARN] Expose in
Jeff Zhang created SPARK-10531:
--
Summary: AppId is set as AppName in status rest api
Key: SPARK-10531
URL: https://issues.apache.org/jira/browse/SPARK-10531
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang closed SPARK-12092.
--
Resolution: Won't Fix
> StringIndexer failing with Unseen label exception on test data
>
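SPARK-12092 above reports `StringIndexer` failing with an unseen-label exception on test data. A Spark-free sketch of why a fit-time vocabulary lookup raises on labels first seen at transform time (the class and error text are illustrative, not Spark's):

```python
class ToyStringIndexer:
    """Maps each distinct label to an integer index in order of first
    appearance, loosely like ml.feature.StringIndexer; raises on any
    label not seen at fit time."""

    def fit(self, labels):
        self.index = {}
        for label in labels:
            if label not in self.index:
                self.index[label] = len(self.index)
        return self

    def transform(self, labels):
        try:
            return [self.index[label] for label in labels]
        except KeyError as e:
            raise ValueError(f"Unseen label: {e.args[0]}")
```

Fitting on `["a", "b", "a"]` and transforming `["c"]` raises, because "c" was never assigned an index during fit.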
[
https://issues.apache.org/jira/browse/SPARK-11940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037391#comment-15037391
]
Jeff Zhang commented on SPARK-11940:
Looks like there's even no scala api under ml for LDA. Will
[
https://issues.apache.org/jira/browse/SPARK-11940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037413#comment-15037413
]
Jeff Zhang commented on SPARK-11940:
Thanks [~yanboliang] didn't rebase my repository :)
> Python
[
https://issues.apache.org/jira/browse/SPARK-11940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-11940:
---
Comment: was deleted
(was: Thanks [~yanboliang] didn't rebase my repository :))
> Python API for
[
https://issues.apache.org/jira/browse/SPARK-11940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037414#comment-15037414
]
Jeff Zhang commented on SPARK-11940:
Thanks [~yanboliang] didn't rebase my repository :)
> Python
Jeff Zhang created SPARK-12119:
--
Summary: Support compression in PySpark
Key: SPARK-12119
URL: https://issues.apache.org/jira/browse/SPARK-12119
Project: Spark
Issue Type: Sub-task
Jeff Zhang created SPARK-12120:
--
Summary: Improve exception message when failing to initialize
HiveContext in PySpark
Key: SPARK-12120
URL: https://issues.apache.org/jira/browse/SPARK-12120
Project:
[
https://issues.apache.org/jira/browse/SPARK-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-12120:
---
Description:
I get the following exception message when failing to initialize HiveContext.
This is
Jeff Zhang created SPARK-12166:
--
Summary: Unset hadoop related environment in testing
Key: SPARK-12166
URL: https://issues.apache.org/jira/browse/SPARK-12166
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-12166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-12166:
---
Priority: Minor (was: Major)
> Unset hadoop related environment in testing
>
Jeff Zhang created SPARK-12086:
--
Summary: Support multiple input paths for LibSVMRelation
Key: SPARK-12086
URL: https://issues.apache.org/jira/browse/SPARK-12086
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035433#comment-15035433
]
Jeff Zhang commented on SPARK-12092:
Looks like it has been resolved in SPARK-8764
> StringIndexer
[
https://issues.apache.org/jira/browse/SPARK-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15035333#comment-15035333
]
Jeff Zhang commented on SPARK-12045:
[~cloud_fan] I saw you are on the history of DateTimeUtils, so
[
https://issues.apache.org/jira/browse/SPARK-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046056#comment-15046056
]
Jeff Zhang commented on SPARK-12045:
bq. Our general policy for exceptions is that we return null for
[
https://issues.apache.org/jira/browse/SPARK-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049974#comment-15049974
]
Jeff Zhang commented on SPARK-4591:
---
Should this be closed ? Seems many algorithms have been ported, and
[
https://issues.apache.org/jira/browse/SPARK-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061320#comment-15061320
]
Jeff Zhang commented on SPARK-12180:
I simulated your sample code and it works for me. But I am on
[
https://issues.apache.org/jira/browse/SPARK-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061835#comment-15061835
]
Jeff Zhang commented on SPARK-12384:
Correct, the memory is controlled by cluster manager, set
[
https://issues.apache.org/jira/browse/SPARK-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055924#comment-15055924
]
Jeff Zhang commented on SPARK-4497:
---
Can not reproduce it. [~yanakad] Is this still an issue for you ?
Jeff Zhang created SPARK-12334:
--
Summary: Support read from multiple input paths for orc file in
DataFrameReader.orc
Key: SPARK-12334
URL: https://issues.apache.org/jira/browse/SPARK-12334
Project:
[
https://issues.apache.org/jira/browse/SPARK-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-12334:
---
Component/s: PySpark
> Support read from multiple input paths for orc file in DataFrameReader.orc
>
[
https://issues.apache.org/jira/browse/SPARK-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeff Zhang updated SPARK-12334:
---
Affects Version/s: 1.6.0
Target Version/s: 1.6.1
> Support read from multiple input paths for
[
https://issues.apache.org/jira/browse/SPARK-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055429#comment-15055429
]
Jeff Zhang commented on SPARK-12180:
Could you paste your code ? It works fine for me to join 2
[
https://issues.apache.org/jira/browse/SPARK-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062069#comment-15062069
]
Jeff Zhang commented on SPARK-12384:
OK, got it. You mean the driver side.
> Allow -Xms to be set
[
https://issues.apache.org/jira/browse/SPARK-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063736#comment-15063736
]
Jeff Zhang commented on SPARK-12420:
+1, this is a very commonly used data format. Not sure why it is not
[
https://issues.apache.org/jira/browse/SPARK-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063736#comment-15063736
]
Jeff Zhang edited comment on SPARK-12420 at 12/18/15 9:15 AM:
--
+1, this is
Jeff Zhang created SPARK-12318:
--
Summary: Save mode in SparkR should be error by default
Key: SPARK-12318
URL: https://issues.apache.org/jira/browse/SPARK-12318
Project: Spark
Issue Type: Bug