[
https://issues.apache.org/jira/browse/SPARK-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810768#comment-15810768
]
lichenglin commented on SPARK-19131:
Yes
> Support "alter table drop partition [if e
lichenglin created SPARK-19131:
--
Summary: Support "alter table drop partition [if exists]"
Key: SPARK-19131
URL: https://issues.apache.org/jira/browse/SPARK-19131
Project: Spark
Issue Type: New
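The syntax requested here mirrors Hive's `ALTER TABLE ... DROP [IF EXISTS] PARTITION (...)`, where IF EXISTS turns a drop of a missing partition into a no-op instead of an error. A minimal sketch of building such a statement (the helper name is illustrative, not Spark API):

```python
def drop_partition_sql(table, spec, if_exists=True):
    """Build a Hive-style DROP PARTITION statement; IF EXISTS makes the
    drop a no-op rather than an error when the partition is absent."""
    cond = " IF EXISTS" if if_exists else ""
    parts = ", ".join(f"{k}='{v}'" for k, v in spec.items())
    return f"ALTER TABLE {table} DROP{cond} PARTITION ({parts})"
```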
lichenglin created SPARK-19129:
--
Summary: alter table table_name drop partition with an empty string
will drop the whole table
Key: SPARK-19129
URL: https://issues.apache.org/jira/browse/SPARK-19129
Proje
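The hazard reported in SPARK-19129 (an empty partition value degenerating into dropping everything) can be guarded against before any SQL is issued. A minimal client-side sketch; `validate_partition_spec` is a hypothetical helper, not part of Spark:

```python
def validate_partition_spec(spec):
    """Reject partition specs with empty values so a DROP PARTITION
    statement cannot silently match every partition in the table."""
    if not spec:
        raise ValueError("empty partition spec")
    for key, value in spec.items():
        if value is None or str(value).strip() == "":
            raise ValueError(f"empty value for partition key {key!r}")
    return spec
```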
lichenglin created SPARK-19075:
--
Summary: Please make MinMaxScaler work with a Number type field
Key: SPARK-19075
URL: https://issues.apache.org/jira/browse/SPARK-19075
Project: Spark
Issue Typ
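Spark ML's MinMaxScaler operates on a Vector column; the request is to accept a plain numeric column as well. The underlying per-feature transform, sketched in plain Python (not Spark API):

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Rescale a numeric column to [lo, hi], as MinMaxScaler does per feature."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:  # constant column: map everything to the midpoint
        return [(lo + hi) / 2.0] * len(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
```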
[
https://issues.apache.org/jira/browse/SPARK-18893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753304#comment-15753304
]
lichenglin commented on SPARK-18893:
Spark 2.0 has disabled "alter table".
[https://i
[
https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753146#comment-15753146
]
lichenglin edited comment on SPARK-14130 at 12/16/16 2:00 AM:
-
[
https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-14130:
---
Comment: was deleted
(was: "TOK_ALTERTABLE_ADDCOLS" is a very important command for data warehouse.
[
https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-14130:
---
Comment: was deleted
(was: "TOK_ALTERTABLE_ADDCOLS" is a very important command for data warehouse.
[
https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753147#comment-15753147
]
lichenglin commented on SPARK-14130:
"TOK_ALTERTABLE_ADDCOLS" is a very important com
[
https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753148#comment-15753148
]
lichenglin commented on SPARK-14130:
"TOK_ALTERTABLE_ADDCOLS" is a very important com
[
https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15753146#comment-15753146
]
lichenglin commented on SPARK-14130:
"TOK_ALTERTABLE_ADDCOLS" is a very important com
[
https://issues.apache.org/jira/browse/SPARK-18441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669125#comment-15669125
]
lichenglin commented on SPARK-18441:
Thanks, it works now
> Add Smote in spark mlib
[
https://issues.apache.org/jira/browse/SPARK-18441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669050#comment-15669050
]
lichenglin commented on SPARK-18441:
Thanks for your reply.
May I ask what version of
lichenglin created SPARK-18441:
--
Summary: Add Smote in spark mlib and ml
Key: SPARK-18441
URL: https://issues.apache.org/jira/browse/SPARK-18441
Project: Spark
Issue Type: Wish
Compone
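SMOTE, the technique this wish asks for, generates synthetic minority-class samples by interpolating between a minority point and one of its k nearest neighbours. A minimal sketch under that definition (this is not part of Spark MLlib; names are illustrative):

```python
import random

def smote_sample(minority, k=2, n_new=1, seed=0):
    """Create n_new synthetic points: pick a minority point, pick one of
    its k nearest neighbours, and interpolate a random fraction between."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # fraction of the way from base to neighbour
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic
```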
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15662759#comment-15662759
]
lichenglin commented on SPARK-18413:
I'm sorry, my network is too bad to download dep
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658921#comment-15658921
]
lichenglin commented on SPARK-18413:
Sorry, I can't.
I'm a rookie and have a really te
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658920#comment-15658920
]
lichenglin commented on SPARK-18413:
Sorry, I can't.
I'm a rookie and have a really te
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-18413:
---
Comment: was deleted
(was: Sorry,I can't.
I'm a rookie and have a really terrible network...
)
> Add
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658562#comment-15658562
]
lichenglin edited comment on SPARK-18413 at 11/11/16 11:57 PM:
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658562#comment-15658562
]
lichenglin commented on SPARK-18413:
{code}
CREATE or replace TEMPORARY VIEW resultvi
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658490#comment-15658490
]
lichenglin commented on SPARK-18413:
I'm using Spark SQL, and how to call repartition
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin reopened SPARK-18413:
> Add a property to control the number of partitions when saving a JDBC RDD
> ---
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-18413:
---
Description:
{code}
CREATE or replace TEMPORARY VIEW resultview
USING org.apache.spark.sql.jdbc
OPTIO
[
https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin closed SPARK-18413.
--
Resolution: Invalid
> Add a property to control the number of partitions when saving a JDBC RDD
> ---
lichenglin created SPARK-18413:
--
Summary: Add a property to control the number of partitions when
saving a JDBC RDD
Key: SPARK-18413
URL: https://issues.apache.org/jira/browse/SPARK-18413
Project: Spark
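The request is a numPartitions-style option for JDBC writes: each Spark partition opens its own database connection, so coalescing before the write bounds the number of concurrent connections. The batching arithmetic, sketched in plain Python (not Spark API; the function name is illustrative):

```python
def coalesce_rows(rows, num_partitions):
    """Split rows into at most num_partitions contiguous batches, mimicking
    coalesce(n) before a JDBC write so at most n connections are opened."""
    if num_partitions < 1:
        raise ValueError("num_partitions must be >= 1")
    n = min(num_partitions, len(rows)) or 1
    size, extra = divmod(len(rows), n)
    batches, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < extra else 0)  # spread the remainder
        batches.append(rows[start:end])
        start = end
    return batches
```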
[
https://issues.apache.org/jira/browse/SPARK-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15580857#comment-15580857
]
lichenglin commented on SPARK-17898:
I have found a way to declare the username a
[
https://issues.apache.org/jira/browse/SPARK-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15573929#comment-15573929
]
lichenglin edited comment on SPARK-17898 at 10/14/16 2:41 AM:
-
[
https://issues.apache.org/jira/browse/SPARK-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15573929#comment-15573929
]
lichenglin commented on SPARK-17898:
I know it.
But how to build these dependencies
lichenglin created SPARK-17898:
--
Summary: --repositories needs username and password
Key: SPARK-17898
URL: https://issues.apache.org/jira/browse/SPARK-17898
Project: Spark
Issue Type: Wish
[
https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-16517:
---
Summary: can't add columns on the table created by spark's writer (was:
can't add columns on the parqu
[
https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-16517:
---
Summary: can't add columns on the parquet table (was: can't add columns
on the table which column m
[
https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-16517:
---
Description:
{code}
setName("abc");
HiveContext hive = getHiveContext();
DataFrame d = hive.createDat
[
https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-16517:
---
Description:
{code}
setName("abc");
HiveContext hive = getHiveContext();
DataFrame d = hive.createDat
lichenglin created SPARK-16517:
--
Summary: can't add columns on the table whose column metadata is
serialized
Key: SPARK-16517
URL: https://issues.apache.org/jira/browse/SPARK-16517
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362180#comment-15362180
]
lichenglin commented on SPARK-16361:
I think a cube with just about 10 fields is fami
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-16361:
---
Comment: was deleted
(was: I have set master url in java application.
here is a copy from spark mast
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361869#comment-15361869
]
lichenglin commented on SPARK-16361:
I have set master url in java application.
here
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361870#comment-15361870
]
lichenglin commented on SPARK-16361:
I have set master url in java application.
here
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361117#comment-15361117
]
lichenglin commented on SPARK-16361:
GenerateUnsafeProjection: Code generated in 4.01
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361112#comment-15361112
]
lichenglin commented on SPARK-16361:
Here is my whole setting
{code}
spark.local.dir
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361073#comment-15361073
]
lichenglin commented on SPARK-16361:
The data's size is 1 million.
I'm sure that 40 GB
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361039#comment-15361039
]
lichenglin edited comment on SPARK-16361 at 7/4/16 9:03 AM:
"
[
https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361039#comment-15361039
]
lichenglin commented on SPARK-16361:
"A long time" means the gctime/Duration of each
lichenglin created SPARK-16361:
--
Summary: It takes a long time for GC when building a cube with many
fields
Key: SPARK-16361
URL: https://issues.apache.org/jira/browse/SPARK-16361
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-15900:
---
Summary: please add a map param on MQTTUtils.createStream for setting
MqttConnectOptions (was: plea
[
https://issues.apache.org/jira/browse/SPARK-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-15900:
---
Summary: please add a map param on MQTTUtils.createStreamfor setting
MqttConnectOptions (was: pleas
lichenglin created SPARK-15900:
--
Summary: please add a map param on MQTTUtils.create for setting
MqttConnectOptions
Key: SPARK-15900
URL: https://issues.apache.org/jira/browse/SPARK-15900
Project: Spark
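The wish here is an extra map parameter on MQTTUtils.createStream whose entries get applied to the underlying MqttConnectOptions. A sketch of that key-to-setting mapping in plain Python; the class and option names are stand-ins, not the actual Paho or Spark API:

```python
class ConnectOptions:
    """Stand-in for MqttConnectOptions: a plain settings holder."""
    def __init__(self):
        self.username = None
        self.password = None
        self.clean_session = True
        self.keep_alive_interval = 60

def apply_option_map(options, conf):
    """Copy recognised keys from a user-supplied map onto the options
    object, rejecting unknown keys instead of ignoring them silently."""
    for key, value in conf.items():
        if not hasattr(options, key):
            raise KeyError(f"unknown MQTT option: {key}")
        setattr(options, key, value)
    return options
```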
lichenglin created SPARK-15497:
--
Summary: DecisionTreeClassificationModel can't be saved within a
Pipeline because it does not implement Writable
Key: SPARK-15497
URL: https://issues.apache.org/jira/browse/SPARK-15497
[
https://issues.apache.org/jira/browse/SPARK-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297540#comment-15297540
]
lichenglin commented on SPARK-15044:
This exception is caused by “the HiveContext cac
[
https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin closed SPARK-15478.
--
Resolution: Not A Problem
> LogisticRegressionModel coefficients() returns an empty vector
> --
[
https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296332#comment-15296332
]
lichenglin commented on SPARK-15478:
Sorry, I made a mistake with the wrong example data
[
https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-15478:
---
Description:
I'm not sure this is a bug.
I'm running the sample code like this
{code}
public s
[
https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-15478:
---
Summary: LogisticRegressionModel's Coefficients always return an empty
vector (was: LogisticRegres
[
https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-15478:
---
Description:
I don't know if this is a bug.
I'm running the sample code like this
{code}
publi
lichenglin created SPARK-15478:
--
Summary: LogisticRegressionModel's Coefficients are always an empty
vector
Key: SPARK-15478
URL: https://issues.apache.org/jira/browse/SPARK-15478
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-14886:
---
Description:
{code}
@Since("1.2.0")
def ndcgAt(k: Int): Double = {
require(k > 0, "ranking pos
[
https://issues.apache.org/jira/browse/SPARK-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-14886:
---
Description:
@Since("1.2.0")
def ndcgAt(k: Int): Double = {
require(k > 0, "ranking position k
lichenglin created SPARK-14886:
--
Summary: RankingMetrics.ndcgAt throws
java.lang.ArrayIndexOutOfBoundsException
Key: SPARK-14886
URL: https://issues.apache.org/jira/browse/SPARK-14886
Project: Spark
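RankingMetrics.ndcgAt walks the first k predicted items, and the report is an ArrayIndexOutOfBoundsException when k exceeds the length of the prediction array. A guarded re-implementation sketch in plain Python, using the binary-relevance NDCG formula; this is illustrative, not the actual MLlib code:

```python
import math

def ndcg_at(predicted, relevant, k):
    """Binary-relevance NDCG@k, clamping k to len(predicted) to avoid
    the out-of-bounds access described above."""
    if k <= 0:
        raise ValueError("ranking position k must be positive")
    rel = set(relevant)
    n = min(k, len(predicted))  # the guard: never index past the array
    dcg = sum(1.0 / math.log2(i + 2) for i in range(n) if predicted[i] in rel)
    ideal_n = min(k, len(rel))
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_n))
    return dcg / idcg if idcg > 0 else 0.0
```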
lichenglin created SPARK-13999:
--
Summary: Run 'group by' before building cube
Key: SPARK-13999
URL: https://issues.apache.org/jira/browse/SPARK-13999
Project: Spark
Issue Type: Improvement
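The idea in SPARK-13999 is that pre-aggregating with GROUP BY on the cube dimensions shrinks the input before CUBE expands each row into all 2^d grouping sets. A plain-Python sketch of that two-phase plan (illustrative only, not Spark's optimizer):

```python
from itertools import combinations
from collections import defaultdict

def cube_counts(rows, dims):
    """GROUP BY the cube dimensions first, then expand each pre-aggregated
    row into all 2^d grouping sets -- the proposed optimisation."""
    pre = defaultdict(int)                      # phase 1: GROUP BY dims
    for row in rows:
        pre[tuple(row[d] for d in dims)] += 1
    out = defaultdict(int)                      # phase 2: CUBE over fewer rows
    for key, cnt in pre.items():
        for r in range(len(dims) + 1):
            for keep in combinations(range(len(dims)), r):
                masked = tuple(key[i] if i in keep else None
                               for i in range(len(dims)))
                out[masked] += cnt
    return dict(out)
```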
[
https://issues.apache.org/jira/browse/SPARK-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin closed SPARK-13907.
--
> Improve the cube with the Fast Cubing in Apache Kylin
> -
[
https://issues.apache.org/jira/browse/SPARK-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-13907:
---
Description:
I tried to build a cube on a 100 million data set.
When I set 9 fields to build the cube
[
https://issues.apache.org/jira/browse/SPARK-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-13907:
---
Description:
I tried to build a cube on a 100 million data set.
When I set 9 fields to build the cube
lichenglin created SPARK-13907:
--
Summary: Improve the cube with the Fast Cubing in Apache Kylin
Key: SPARK-13907
URL: https://issues.apache.org/jira/browse/SPARK-13907
Project: Spark
Issue T
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157801#comment-15157801
]
lichenglin commented on SPARK-13433:
I know the property 'spark.driver.cores'
What I
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157802#comment-15157802
]
lichenglin commented on SPARK-13433:
I know the property 'spark.driver.cores'
What I
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157121#comment-15157121
]
lichenglin commented on SPARK-13433:
What I mean is
We should set a limit on the tot
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157118#comment-15157118
]
lichenglin commented on SPARK-13433:
But when? When something else frees up resources?
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15157050#comment-15157050
]
lichenglin commented on SPARK-13433:
It's something like a deadlock.
The driver uses all co
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-13433:
---
Description:
I have a 16-core cluster.
A running driver uses at least 1 core, maybe more.
When I sub
[
https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
lichenglin updated SPARK-13433:
---
Description:
I have a 16-core cluster.
When I submit a lot of job to the standalone server in clust
lichenglin created SPARK-13433:
--
Summary: The standalone master should limit the count of running drivers
Key: SPARK-13433
URL: https://issues.apache.org/jira/browse/SPARK-13433
Project: Spark
Issue
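The scenario above is that the standalone master launches every submitted driver as soon as cores are free, so many drivers can starve the executors they would need. A toy admission-control sketch of the proposed limit (not the actual Master code; names are illustrative):

```python
class DriverQueue:
    """Admit at most max_running drivers; the rest wait in FIFO order."""
    def __init__(self, max_running):
        self.max_running = max_running
        self.running, self.waiting = [], []

    def submit(self, driver_id):
        if len(self.running) < self.max_running:
            self.running.append(driver_id)
        else:
            self.waiting.append(driver_id)  # held back, cores left for executors

    def finish(self, driver_id):
        self.running.remove(driver_id)
        if self.waiting:
            self.running.append(self.waiting.pop(0))
```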
[
https://issues.apache.org/jira/browse/SPARK-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121065#comment-15121065
]
lichenglin commented on SPARK-12963:
I think the driver's IP should be stationary lik
[
https://issues.apache.org/jira/browse/SPARK-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121053#comment-15121053
]
lichenglin commented on SPARK-12963:
I think the driver's IP should be stationary lik
lichenglin created SPARK-12963:
--
Summary: In cluster mode, spark_local_ip will cause driver
exception: Service 'Driver' failed after 16 retries!
Key: SPARK-12963
URL: https://issues.apache.org/jira/browse/SPARK-12963