[
https://issues.apache.org/jira/browse/SPARK-17961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-17961:
---
Component/s: SQL
SparkR
> Add storageLevel to Dataset for SparkR
>
[
https://issues.apache.org/jira/browse/SPARK-17961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-17961:
---
Issue Type: Improvement (was: Bug)
> Add storageLevel to Dataset for SparkR
>
[
https://issues.apache.org/jira/browse/SPARK-17961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15580131#comment-15580131
]
Weichen Xu commented on SPARK-17961:
I am working on it and will create a PR soon.
> Add storageLevel
Weichen Xu created SPARK-17961:
--
Summary: Add storageLevel to Dataset for SparkR
Key: SPARK-17961
URL: https://issues.apache.org/jira/browse/SPARK-17961
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564097#comment-15564097
]
Weichen Xu edited comment on SPARK-17139 at 10/11/16 1:25 AM:
--
I'm working
[
https://issues.apache.org/jira/browse/SPARK-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564097#comment-15564097
]
Weichen Xu commented on SPARK-17139:
I'm working hard on it and will create a PR this week, thanks!
[
https://issues.apache.org/jira/browse/SPARK-17540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-17540:
---
Description:
SparkR cannot handle array serde when array length == 0
when length = 0
R side set the
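The zero-length case is exactly what a length-prefixed serde must round-trip. A minimal Python sketch of that kind of wire format (a toy illustration, not SparkR's actual serde code):

```python
import struct

def write_double_array(arr):
    # toy length-prefixed format: 4-byte big-endian count, then doubles
    out = struct.pack(">i", len(arr))
    for x in arr:
        out += struct.pack(">d", x)
    return out

def read_double_array(buf):
    (n,) = struct.unpack_from(">i", buf, 0)
    # n == 0 must yield an empty array, not an error
    return [struct.unpack_from(">d", buf, 4 + 8 * i)[0] for i in range(n)]
```

An empty array round-trips as just the 4-byte zero count; the bug class described here is a reader or writer that mishandles that degenerate case.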
[
https://issues.apache.org/jira/browse/SPARK-17540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu closed SPARK-17540.
--
Resolution: Won't Fix
> SparkR array serde cannot work correctly when array length == 0
>
[
https://issues.apache.org/jira/browse/SPARK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-17745:
---
Component/s: PySpark
> Update Python API for NB to support weighted instances
>
[
https://issues.apache.org/jira/browse/SPARK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535750#comment-15535750
]
Weichen Xu commented on SPARK-17745:
I will work on it and create a PR ASAP, thanks!
> Update Python
[
https://issues.apache.org/jira/browse/SPARK-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15492931#comment-15492931
]
Weichen Xu commented on SPARK-17281:
because currently, AFTSurvivalRegression uses `treeAggregate`
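`treeAggregate` merges partition results in intermediate rounds instead of pulling everything to the driver at once, and the proposed `depth` parameter bounds how many rounds are used. A hedged toy model (Spark's real implementation is `RDD.treeAggregate`; the names and grouping rule below are illustrative only):

```python
import math
from functools import reduce

def tree_aggregate(partition_values, comb, depth=2):
    # combine per-partition results in depth-1 grouped rounds,
    # then do a final reduce over the survivors
    vals = list(partition_values)
    for _ in range(depth - 1):
        if len(vals) <= 2:
            break
        scale = max(2, math.ceil(len(vals) ** (1.0 / depth)))
        vals = [reduce(comb, vals[i:i + scale])
                for i in range(0, len(vals), scale)]
    return reduce(comb, vals)
```

A larger depth means smaller merge fan-in per round, which is why exposing it helps when there are many partitions.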
Weichen Xu created SPARK-17540:
--
Summary: SparkR array serde cannot work correctly when array
length == 0
Key: SPARK-17540
URL: https://issues.apache.org/jira/browse/SPARK-17540
Project: Spark
Weichen Xu created SPARK-17507:
--
Summary: check weight vector size in ANN
Key: SPARK-17507
URL: https://issues.apache.org/jira/browse/SPARK-17507
Project: Spark
Issue Type: Improvement
Weichen Xu created SPARK-17499:
--
Summary: make the default params in SparkR spark.mlp consistent
with MultilayerPerceptronClassifier
Key: SPARK-17499
URL: https://issues.apache.org/jira/browse/SPARK-17499
Weichen Xu created SPARK-17390:
--
Summary: optimize MultivariateOnlineSummarizer by making the
summarized target configurable
Key: SPARK-17390
URL: https://issues.apache.org/jira/browse/SPARK-17390
Weichen Xu created SPARK-17362:
--
Summary: fix MultivariateOnlineSummarizer.numNonZeros
Key: SPARK-17362
URL: https://issues.apache.org/jira/browse/SPARK-17362
Project: Spark
Issue Type: Bug
Weichen Xu created SPARK-17363:
--
Summary: fix MultivariateOnlineSummarizer.numNonZeros
Key: SPARK-17363
URL: https://issues.apache.org/jira/browse/SPARK-17363
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-17175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455963#comment-15455963
]
Weichen Xu commented on SPARK-17175:
I will work on it, thanks!
> Add an expert formula to
[
https://issues.apache.org/jira/browse/SPARK-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452401#comment-15452401
]
Weichen Xu commented on SPARK-17050:
because the KMeans algo is being optimized by another task, I close
[
https://issues.apache.org/jira/browse/SPARK-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15442542#comment-15442542
]
Weichen Xu commented on SPARK-17139:
Because the LOR & MLOR interfaces need to be unified, I will create
Weichen Xu created SPARK-17281:
--
Summary: Add treeAggregateDepth parameter for AFTSurvivalRegression
Key: SPARK-17281
URL: https://issues.apache.org/jira/browse/SPARK-17281
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-17169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436402#comment-15436402
]
Weichen Xu commented on SPARK-17169:
I will work on it and create a PR soon!
> To use scala macros to
[
https://issues.apache.org/jira/browse/SPARK-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434073#comment-15434073
]
Weichen Xu commented on SPARK-17201:
Yeah, you are right...
I searched for some evidence of this, such as
[
https://issues.apache.org/jira/browse/SPARK-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427519#comment-15427519
]
Weichen Xu edited comment on SPARK-17139 at 8/19/16 3:05 AM:
-
I will work on
[
https://issues.apache.org/jira/browse/SPARK-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427518#comment-15427518
]
Weichen Xu edited comment on SPARK-17138 at 8/19/16 3:06 AM:
-
I will work on
[
https://issues.apache.org/jira/browse/SPARK-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427519#comment-15427519
]
Weichen Xu commented on SPARK-17139:
I will work on it and create a PR soon, thanks.
> Add model
[
https://issues.apache.org/jira/browse/SPARK-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427518#comment-15427518
]
Weichen Xu commented on SPARK-17138:
I will work on it and create a PR soon, thanks.
> Python API for
[
https://issues.apache.org/jira/browse/SPARK-16934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16934:
---
Description: Update LogisticCostAggregator serialization code to make it
consistent with
[
https://issues.apache.org/jira/browse/SPARK-16934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16934:
---
Summary: Update LogisticCostAggregator serialization code to make it
consistent with
Weichen Xu created SPARK-17050:
--
Summary: Improve initKMeansParallel with treeAggregate
Key: SPARK-17050
URL: https://issues.apache.org/jira/browse/SPARK-17050
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-17046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-17046:
---
Description:
currently, we can use:
dataframe.select()
which selects nothing.
it is illegal and
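The proposed guard is a simple argument check. A hedged Python sketch of the idea (the function name and error message are illustrative; the real change would live in the Dataset/DataFrame `select` implementation):

```python
def select(*cols):
    # reject an empty projection list up front instead of
    # silently producing a zero-column result
    if not cols:
        raise ValueError("select() requires at least one column expression")
    return list(cols)  # stand-in for building the projected DataFrame
```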
Weichen Xu created SPARK-17046:
--
Summary: prevent users from calling dataframe.select with an empty param list
Key: SPARK-17046
URL: https://issues.apache.org/jira/browse/SPARK-17046
Project: Spark
Issue
Weichen Xu created SPARK-16934:
--
Summary: Improve LogisticCostFun to avoid redundant serialization
Key: SPARK-16934
URL: https://issues.apache.org/jira/browse/SPARK-16934
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409409#comment-15409409
]
Weichen Xu commented on SPARK-16915:
I know the reason; it is not a bug.
it will serialize the C1 object, so
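The Scala symptom (a closure referencing a class member captures the whole enclosing object, which is not serializable) can be illustrated with plain Python pickling; `C1`, `add_sum`, and the lock member are illustrative stand-ins, not Spark code:

```python
import functools
import pickle
import threading

def add_sum(data, x):
    # top-level function: picklable on its own
    return x + sum(data)

class C1:
    def __init__(self, data):
        self.data = data
        self.lock = threading.Lock()  # unserializable member, like a SparkContext

    def task(self, x):
        # referencing self.data means serializing this method
        # drags the whole object (lock included) along with it
        return x + sum(self.data)

c = C1([1, 2, 3])

try:
    pickle.dumps(c.task)  # fails: serializing c.task serializes c, lock and all
    captured_whole_object = True
except TypeError:
    captured_whole_object = False

# fix: bind only the needed field to a standalone function first
f = pickle.loads(pickle.dumps(functools.partial(add_sum, c.data)))
```

This mirrors the usual Spark fix: assign the broadcast variable (or the needed field) to a local `val` before using it inside a closure.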
[
https://issues.apache.org/jira/browse/SPARK-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu closed SPARK-16915.
--
Resolution: Not A Bug
> broadcast var causes a Task not serializable exception when broadcast var is a
>
[
https://issues.apache.org/jira/browse/SPARK-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16915:
---
Description:
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
class C1(val
Weichen Xu created SPARK-16915:
--
Summary: broadcast var causes a Task not serializable exception when
the broadcast var is a class member
Key: SPARK-16915
URL: https://issues.apache.org/jira/browse/SPARK-16915
[
https://issues.apache.org/jira/browse/SPARK-16880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16880:
---
Component/s: MLlib
ML
> Improve ANN training, add training data persist if needed
>
Weichen Xu created SPARK-16880:
--
Summary: Improve ANN training, add training data persist if needed
Key: SPARK-16880
URL: https://issues.apache.org/jira/browse/SPARK-16880
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-16835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16835:
---
Affects Version/s: 2.1.0
2.0.1
> LinearRegression LogisticRegression
Weichen Xu created SPARK-16835:
--
Summary: LinearRegression, LogisticRegression, and AFTSurvivalRegression
should unpersist input training data when an exception is thrown
Key: SPARK-16835
URL:
[
https://issues.apache.org/jira/browse/SPARK-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16696:
---
Issue Type: Improvement (was: Bug)
> unused broadcast variables should call destroy instead of
[
https://issues.apache.org/jira/browse/SPARK-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16697:
---
Description:
In mllib.clustering.LDAOptimizer
the submitMiniBatch method,
the stats: RDD do not
Weichen Xu created SPARK-16697:
--
Summary: redundant RDD computation in LDAOptimizer
Key: SPARK-16697
URL: https://issues.apache.org/jira/browse/SPARK-16697
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16696:
---
Description:
Unused broadcast variables should call destroy() instead of unpersist() so that
the
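The distinction can be sketched with a toy lifecycle model (this is not Spark's implementation; only the method names follow `Broadcast.unpersist`/`Broadcast.destroy`):

```python
class Broadcast:
    def __init__(self, value):
        self._value = value
        self._cached_on_executors = True
        self._destroyed = False

    def unpersist(self):
        # drops executor-side copies only; the driver still holds the
        # value and can rebroadcast it on the next use
        self._cached_on_executors = False

    def destroy(self):
        # releases all state, driver side included; the variable
        # can never be used again
        self._cached_on_executors = False
        self._destroyed = True
        self._value = None

    @property
    def value(self):
        if self._destroyed:
            raise RuntimeError("attempted to use a destroyed broadcast variable")
        return self._value
```

For a broadcast variable that will never be used again, destroy() also frees the driver-side copy, which is the point of the issue.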
Weichen Xu created SPARK-16696:
--
Summary: unused broadcast variables should call destroy instead of
unpersist
Key: SPARK-16696
URL: https://issues.apache.org/jira/browse/SPARK-16696
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16662:
---
Issue Type: Bug (was: Improvement)
> The HiveContext deprecation warning in Python is always shown even
Weichen Xu created SPARK-16662:
--
Summary: The HiveContext deprecation warning in Python is always shown
even if HiveContext is not used
Key: SPARK-16662
URL: https://issues.apache.org/jira/browse/SPARK-16662
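The fix direction implied by the summary is to emit the deprecation warning at use time rather than at import time. A hedged Python sketch (an illustrative class, not the actual pyspark module):

```python
import warnings

class HiveContext:
    def __init__(self, sc=None):
        # warn only when a HiveContext is actually constructed,
        # not when the module is merely imported
        warnings.warn(
            "HiveContext is deprecated; use SparkSession with Hive support instead",
            DeprecationWarning,
        )
        self._sc = sc
```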
[
https://issues.apache.org/jira/browse/SPARK-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16653:
---
Component/s: Optimizer
ML
> Make convergence tolerance param in ANN default value
Weichen Xu created SPARK-16653:
--
Summary: Make the convergence tolerance param default value in ANN
consistent with other algorithms using LBFGS
Key: SPARK-16653
URL: https://issues.apache.org/jira/browse/SPARK-16653
[
https://issues.apache.org/jira/browse/SPARK-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu closed SPARK-16638.
--
Resolution: Not A Problem
> The L2 regularization of LinearRegression seems wrong when standardization
[
https://issues.apache.org/jira/browse/SPARK-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385306#comment-15385306
]
Weichen Xu commented on SPARK-16638:
Seems I'm wrong; the intention of the author may be to use w[i] /
[
https://issues.apache.org/jira/browse/SPARK-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16638:
---
Description:
The original L2 is
0.5 * effectiveL2regParam * sigma( wi^2 )
(wi is the coefficients we
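The formula quoted in the description, 0.5 * effectiveL2regParam * sigma(wi^2), is easy to check numerically; a minimal sketch:

```python
def l2_penalty(coefficients, effective_l2_reg):
    # 0.5 * lambda * sum_i(w_i^2), the standard L2 term from the description
    return 0.5 * effective_l2_reg * sum(w * w for w in coefficients)
```

With w = [1.0, 2.0] and lambda = 0.1 this gives 0.5 * 0.1 * 5 = 0.25; the issue's question is whether this term should be rescaled when standardization is false.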
Weichen Xu created SPARK-16638:
--
Summary: The L2 regularization of LinearRegression seems wrong
when standardization is false
Key: SPARK-16638
URL: https://issues.apache.org/jira/browse/SPARK-16638
Weichen Xu created SPARK-16600:
--
Summary: fix LaTeX formula syntax errors in MLlib
Key: SPARK-16600
URL: https://issues.apache.org/jira/browse/SPARK-16600
Project: Spark
Issue Type: Improvement
Weichen Xu created SPARK-16568:
--
Summary: update SQL programming guide refreshTable API
Key: SPARK-16568
URL: https://issues.apache.org/jira/browse/SPARK-16568
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-16561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16561:
---
Description:
In the `MultivariateOnlineSummarizer` min/max methods,
the check "nnz(i) < weightSum" is used,
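The numerical hazard is that both nnz(i) and weightSum are floating-point accumulations, so an exact `<` comparison between them can fire spuriously. A minimal Python illustration (toy numbers, not the summarizer's code):

```python
# weighted nonzero count, accumulated one 0.1-weight sample at a time
nnz = 0.0
for _ in range(10):
    nnz += 0.1

# the same total weight, computed in a single multiplication
weight_sum = 0.1 * 10

# mathematically nnz == weight_sum, but in floating point they differ,
# so a strict `nnz < weight_sum` check wrongly concludes a zero entry exists
print(nnz, weight_sum)
```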
Weichen Xu created SPARK-16561:
--
Summary: Potential numerical problem in
MultivariateOnlineSummarizer min/max
Key: SPARK-16561
URL: https://issues.apache.org/jira/browse/SPARK-16561
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-16546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16546:
---
Component/s: SQL
PySpark
> DataFrame.drop supports multiple columns in the Spark API, and
Weichen Xu created SPARK-16546:
--
Summary: DataFrame.drop supports multiple columns in the Spark API, and
the Python API should also support it.
Key: SPARK-16546
URL: https://issues.apache.org/jira/browse/SPARK-16546
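The requested change is an API-shape one: accept varargs in the Python `drop`, as the Scala API already does. A hedged stand-in sketch (a minimal class, not pyspark's DataFrame):

```python
class MiniFrame:
    def __init__(self, columns):
        self.columns = list(columns)

    def drop(self, *cols):
        # varargs: drop any number of columns in one call,
        # mirroring Scala's DataFrame.drop(col1, col2, ...)
        dropped = set(cols)
        return MiniFrame([c for c in self.columns if c not in dropped])
```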
[
https://issues.apache.org/jira/browse/SPARK-16500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16500:
---
Component/s: Optimizer
> Add LBFGS training non-convergence warning for all ML algorithms
>
[
https://issues.apache.org/jira/browse/SPARK-16500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373062#comment-15373062
]
Weichen Xu commented on SPARK-16500:
OK. I'll keep it in mind in future tasks. Thanks!
> Add LBFGS
Weichen Xu created SPARK-16500:
--
Summary: Add LBFGS training non-convergence warning for all ML
algorithms
Key: SPARK-16500
URL: https://issues.apache.org/jira/browse/SPARK-16500
Project: Spark
Weichen Xu created SPARK-16499:
--
Summary: Improve applyInPlace function for matrix in ANN code
Key: SPARK-16499
URL: https://issues.apache.org/jira/browse/SPARK-16499
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-16470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16470:
---
Description:
In `ml.regression.LinearRegression`, the Breeze `LBFGS` and `OWLQN`
optimizers are used to do
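The issue asks the training loop to report when the optimizer stops on the iteration cap rather than on tolerance. A hedged toy in Python (plain gradient descent standing in for Breeze's LBFGS/OWLQN; names are illustrative):

```python
import warnings

def minimize(grad, x0, lr=0.1, tol=1e-6, max_iter=100):
    # returns (solution, converged); warns when max_iter is exhausted,
    # which is the check the issue proposes surfacing to users
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return x, True
        x -= lr * g
    warnings.warn("optimizer hit max_iter without reaching tolerance")
    return x, False
```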
Weichen Xu created SPARK-16470:
--
Summary: ml.regression.LinearRegression training does not check
whether the result actually reaches convergence
Key: SPARK-16470
URL:
[
https://issues.apache.org/jira/browse/SPARK-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362429#comment-15362429
]
Weichen Xu edited comment on SPARK-16377 at 7/5/16 12:49 PM:
-
And I test on
[
https://issues.apache.org/jira/browse/SPARK-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362429#comment-15362429
]
Weichen Xu commented on SPARK-16377:
And I tested on the master version and also encountered the following
[
https://issues.apache.org/jira/browse/SPARK-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362426#comment-15362426
]
Weichen Xu commented on SPARK-16377:
This exception still exists in the master code.
ERROR
[
https://issues.apache.org/jira/browse/SPARK-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362419#comment-15362419
]
Weichen Xu commented on SPARK-16377:
hi,
the exception:
java.lang.ArrayIndexOutOfBoundsException
at
[
https://issues.apache.org/jira/browse/SPARK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-16345:
---
Description:
Currently, all example snippets in the graphx programming guide are hard-coded,
which
Weichen Xu created SPARK-16345:
--
Summary: Extract graphx programming guide example snippets from
source files instead of hard code them
Key: SPARK-16345
URL: https://issues.apache.org/jira/browse/SPARK-16345
[
https://issues.apache.org/jira/browse/SPARK-15874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326127#comment-15326127
]
Weichen Xu commented on SPARK-15874:
Hmm... I got it.
But there is another problem: if I want to
[
https://issues.apache.org/jira/browse/SPARK-15874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326114#comment-15326114
]
Weichen Xu commented on SPARK-15874:
The HBase connector is implemented in Hive, and Spark-SQL can use
[
https://issues.apache.org/jira/browse/SPARK-15874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-15874:
---
Description:
Currently, Spark-SQL uses `org.apache.hadoop.hive.hbase.HBaseStorageHandler`
for HBase
[
https://issues.apache.org/jira/browse/SPARK-15874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-15874:
---
Summary: HBase rowkey optimization support for Hbase-Storage-handler (was:
HBase rowkey
[
https://issues.apache.org/jira/browse/SPARK-15874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15324385#comment-15324385
]
Weichen Xu commented on SPARK-15874:
[~rxin] What do you think about it?
> HBase rowkey optimization
Weichen Xu created SPARK-15874:
--
Summary: HBase rowkey optimization support for Hbase-handler
Key: SPARK-15874
URL: https://issues.apache.org/jira/browse/SPARK-15874
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15322082#comment-15322082
]
Weichen Xu commented on SPARK-15086:
OK. [~srowen] What do you think about it?
> Update Java API
[
https://issues.apache.org/jira/browse/SPARK-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15322074#comment-15322074
]
Weichen Xu commented on SPARK-15086:
If we do so, should we only rename the Java API of this type, or also rename the Scala
[
https://issues.apache.org/jira/browse/SPARK-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321899#comment-15321899
]
Weichen Xu commented on SPARK-15837:
I'll work on it and create a PR soon!
> PySpark ML Word2Vec
[
https://issues.apache.org/jira/browse/SPARK-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320428#comment-15320428
]
Weichen Xu commented on SPARK-15086:
So, if considering Java API compatibility with old versions, the
[
https://issues.apache.org/jira/browse/SPARK-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320351#comment-15320351
]
Weichen Xu commented on SPARK-15086:
I think the Java API should be the same as the Scala API if
[
https://issues.apache.org/jira/browse/SPARK-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-15820:
---
Summary: Add Catalog.refreshTable into python API (was: Add spark-SQL
Catalog.refreshTable into
[
https://issues.apache.org/jira/browse/SPARK-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-15820:
---
External issue ID: (was: SPARK-15367)
> Add spark-SQL Catalog.refreshTable into python api
>
[
https://issues.apache.org/jira/browse/SPARK-15820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu updated SPARK-15820:
---
Description:
The Catalog.refreshTable API is missing from the Python interface for
Spark-SQL; add it.
Weichen Xu created SPARK-15820:
--
Summary: Add spark-SQL Catalog.refreshTable into python api
Key: SPARK-15820
URL: https://issues.apache.org/jira/browse/SPARK-15820
Project: Spark
Issue Type:
Weichen Xu created SPARK-15805:
--
Summary: update the whole sql programming guide
Key: SPARK-15805
URL: https://issues.apache.org/jira/browse/SPARK-15805
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-15212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weichen Xu closed SPARK-15212.
--
Resolution: Won't Fix
> CSV file reader, when reading a file with a first-line schema, does not filter blanks in
Weichen Xu created SPARK-15702:
--
Summary: Update document programming-guide accumulator section
Key: SPARK-15702
URL: https://issues.apache.org/jira/browse/SPARK-15702
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-15670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309379#comment-15309379
]
Weichen Xu commented on SPARK-15670:
OK, I'll follow SPARK-15086 jira, thanks!
> Add deprecate