[jira] [Commented] (SPARK-19131) Support "alter table drop partition [if exists]"

2017-01-08 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810768#comment-15810768 ] lichenglin commented on SPARK-19131: Yes > Support "alter table drop partition [if exists]" >

[jira] [Created] (SPARK-19131) Support "alter table drop partition [if exists]"

2017-01-08 Thread lichenglin (JIRA)
lichenglin created SPARK-19131: -- Summary: Support "alter table drop partition [if exists]" Key: SPARK-19131 URL: https://issues.apache.org/jira/browse/SPARK-19131 Project: Spark Issue Type: New
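The syntax requested here mirrors Hive DDL, which already allows an optional IF EXISTS clause on partition drops. A sketch of the requested form (table and partition names are illustrative):

```sql
-- Requested Spark SQL support, modeled on Hive DDL
-- (table and partition names are illustrative).
ALTER TABLE logs DROP IF EXISTS PARTITION (day = '2017-01-01');
-- Without IF EXISTS, dropping a missing partition raises an error
-- instead of being a no-op.
```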

[jira] [Created] (SPARK-19129) alter table table_name drop partition with an empty string will drop the whole table

2017-01-08 Thread lichenglin (JIRA)
lichenglin created SPARK-19129: -- Summary: alter table table_name drop partition with an empty string will drop the whole table Key: SPARK-19129 URL: https://issues.apache.org/jira/browse/SPARK-19129
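The hazard described in the report can be sketched as follows (table and partition column are illustrative):

```sql
-- Per the report, an empty partition value was treated as a match-all,
-- so this statement dropped every partition of the table.
ALTER TABLE logs DROP PARTITION (day = '');
```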

[jira] [Created] (SPARK-19075) Please make MinMaxScaler work with a Number type field

2017-01-03 Thread lichenglin (JIRA)
lichenglin created SPARK-19075: -- Summary: Please make MinMaxScaler work with a Number type field Key: SPARK-19075 URL: https://issues.apache.org/jira/browse/SPARK-19075 Project: Spark Issue
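MinMaxScaler in Spark ML operates on a Vector column, so a plain numeric field must first be wrapped (for example with VectorAssembler), which is the usual workaround for the limitation described here. The transform itself is ordinary min-max rescaling; a minimal pure-Python sketch of the same math:

```python
def min_max_scale(xs, lo=0.0, hi=1.0):
    """Rescale values to [lo, hi], as MinMaxScaler does per feature."""
    x_min, x_max = min(xs), max(xs)
    if x_max == x_min:
        # Spark maps a constant feature to the midpoint of the range.
        return [(lo + hi) / 2.0 for _ in xs]
    return [lo + (x - x_min) * (hi - lo) / (x_max - x_min) for x in xs]

print(min_max_scale([10.0, 20.0, 30.0]))  # [0.0, 0.5, 1.0]
```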

[jira] [Commented] (SPARK-18893) Does not support "alter table .. add columns .."

2016-12-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753304#comment-15753304 ] lichenglin commented on SPARK-18893: Spark 2.0 has disabled "alter table".

[jira] [Comment Edited] (SPARK-14130) [Table related commands] Alter column

2016-12-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753146#comment-15753146 ] lichenglin edited comment on SPARK-14130 at 12/16/16 2:00 AM: --

[jira] [Issue Comment Deleted] (SPARK-14130) [Table related commands] Alter column

2016-12-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-14130: --- Comment: was deleted (was: "TOK_ALTERTABLE_ADDCOLS" is a very important command for data warehouse.

[jira] [Commented] (SPARK-14130) [Table related commands] Alter column

2016-12-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753147#comment-15753147 ] lichenglin commented on SPARK-14130: "TOK_ALTERTABLE_ADDCOLS" is a very important command for data

[jira] [Commented] (SPARK-14130) [Table related commands] Alter column

2016-12-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753148#comment-15753148 ] lichenglin commented on SPARK-14130: "TOK_ALTERTABLE_ADDCOLS" is a very important command for data

[jira] [Commented] (SPARK-14130) [Table related commands] Alter column

2016-12-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753146#comment-15753146 ] lichenglin commented on SPARK-14130: "TOK_ALTERTABLE_ADDCOLS" is a very important command for data

[jira] [Commented] (SPARK-18441) Add SMOTE in spark mllib and ml

2016-11-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669125#comment-15669125 ] lichenglin commented on SPARK-18441: Thanks, it works now > Add SMOTE in spark mllib and ml >

[jira] [Commented] (SPARK-18441) Add SMOTE in spark mllib and ml

2016-11-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669050#comment-15669050 ] lichenglin commented on SPARK-18441: Thanks for your reply. May I ask what version of Spark this

[jira] [Created] (SPARK-18441) Add SMOTE in spark mllib and ml

2016-11-14 Thread lichenglin (JIRA)
lichenglin created SPARK-18441: -- Summary: Add SMOTE in spark mllib and ml Key: SPARK-18441 URL: https://issues.apache.org/jira/browse/SPARK-18441 Project: Spark Issue Type: Wish
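SMOTE oversamples the minority class by interpolating between a sample and one of its k nearest minority-class neighbours. A minimal pure-Python sketch of the idea (not the requested MLlib/ML API, which does not exist):

```python
import random

def smote(minority, n_new, k=2, seed=42):
    """Create n_new synthetic minority samples by interpolating each
    chosen point toward one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p != base),
                            key=lambda p: sq_dist(base, p))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + t * (n - b) for b, n in zip(base, nb)))
    return synthetic

print(smote([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], n_new=2))
```

Each synthetic point lies on the segment between a real minority sample and one of its neighbours, so it stays inside the minority region.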

[jira] [Commented] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-13 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15662759#comment-15662759 ] lichenglin commented on SPARK-18413: I'm sorry, my network is too bad to download dependencies from

[jira] [Commented] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658921#comment-15658921 ] lichenglin commented on SPARK-18413: Sorry, I can't. I'm a rookie and have a really terrible

[jira] [Commented] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658920#comment-15658920 ] lichenglin commented on SPARK-18413: Sorry, I can't. I'm a rookie and have a really terrible

[jira] [Issue Comment Deleted] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-18413: --- Comment: was deleted (was: Sorry, I can't. I'm a rookie and have a really terrible network... ) >

[jira] [Comment Edited] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658562#comment-15658562 ] lichenglin edited comment on SPARK-18413 at 11/11/16 11:57 PM: --- {code}

[jira] [Commented] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658562#comment-15658562 ] lichenglin commented on SPARK-18413: {code} CREATE or replace TEMPORARY VIEW resultview USING

[jira] [Commented] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15658490#comment-15658490 ] lichenglin commented on SPARK-18413: I'm using Spark SQL, and how do I call repartition with SQL? >
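At the time, pure Spark SQL had no direct repartition call; the closest SQL-only lever was DISTRIBUTE BY combined with spark.sql.shuffle.partitions (later Spark versions added REPARTITION hints). A sketch, reusing the resultview name from the issue (the id column is illustrative):

```sql
-- Force a shuffle whose output has spark.sql.shuffle.partitions parts.
SET spark.sql.shuffle.partitions = 8;
SELECT * FROM resultview DISTRIBUTE BY id;
```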

[jira] [Reopened] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin reopened SPARK-18413: > Add a property to control the number of partitions when save a jdbc rdd >

[jira] [Updated] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-18413: --- Description: {code} CREATE or replace TEMPORARY VIEW resultview USING org.apache.spark.sql.jdbc

[jira] [Closed] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin closed SPARK-18413. -- Resolution: Invalid > Add a property to control the number of partitions when save a jdbc rdd >

[jira] [Created] (SPARK-18413) Add a property to control the number of partitions when save a jdbc rdd

2016-11-11 Thread lichenglin (JIRA)
lichenglin created SPARK-18413: -- Summary: Add a property to control the number of partitions when save a jdbc rdd Key: SPARK-18413 URL: https://issues.apache.org/jira/browse/SPARK-18413 Project: Spark
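For reads, Spark's JDBC source already exposed partitioning options; what this issue asks for is equivalent control on the write side. A read-side sketch using the CREATE TEMPORARY VIEW form quoted in the issue (connection URL, table, and column names are illustrative):

```sql
CREATE OR REPLACE TEMPORARY VIEW resultview
USING org.apache.spark.sql.jdbc
OPTIONS (
  url 'jdbc:postgresql://host/db',  -- illustrative connection URL
  dbtable 'result',
  partitionColumn 'id',             -- numeric column to split on
  lowerBound '1',
  upperBound '1000000',
  numPartitions '8'                 -- read parallelism
);
```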

[jira] [Commented] (SPARK-17898) --repositories needs username and password

2016-10-16 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15580857#comment-15580857 ] lichenglin commented on SPARK-17898: I have found a way to declare the username and password:
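One approach, possibly the one meant here, is embedding the credentials in the repository URL passed to --repositories, which Spark's Ivy-based resolution accepts; host, path, and credentials below are illustrative:

```shell
# Credentials embedded in the repository URL (all values illustrative).
spark-submit \
  --repositories "https://user:secret@repo.example.com/repository/releases" \
  --packages com.example:mylib:1.0.0 \
  app.jar
```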

[jira] [Comment Edited] (SPARK-17898) --repositories needs username and password

2016-10-13 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573929#comment-15573929 ] lichenglin edited comment on SPARK-17898 at 10/14/16 2:41 AM: -- I know it.

[jira] [Commented] (SPARK-17898) --repositories needs username and password

2016-10-13 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15573929#comment-15573929 ] lichenglin commented on SPARK-17898: I know it. But how do I build these dependencies into my jar?

[jira] [Created] (SPARK-17898) --repositories needs username and password

2016-10-12 Thread lichenglin (JIRA)
lichenglin created SPARK-17898: -- Summary: --repositories needs username and password Key: SPARK-17898 URL: https://issues.apache.org/jira/browse/SPARK-17898 Project: Spark Issue Type: Wish

[jira] [Updated] (SPARK-16517) can't add columns on the table created by Spark's writer

2016-07-13 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-16517: --- Summary: can't add columns on the table created by Spark's writer (was: can't add columns on the

[jira] [Updated] (SPARK-16517) can't add columns on the parquet table

2016-07-13 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-16517: --- Summary: can't add columns on the parquet table (was: can't add columns on the table whose column

[jira] [Updated] (SPARK-16517) can't add columns on the table whose column metadata is serialized

2016-07-12 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-16517: --- Description: {code} setName("abc"); HiveContext hive = getHiveContext(); DataFrame d =

[jira] [Updated] (SPARK-16517) can't add columns on the table whose column metadata is serialized

2016-07-12 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-16517: --- Description: {code} setName("abc"); HiveContext hive = getHiveContext(); DataFrame d =

[jira] [Created] (SPARK-16517) can't add columns on the table whose column metadata is serialized

2016-07-12 Thread lichenglin (JIRA)
lichenglin created SPARK-16517: -- Summary: can't add columns on the table whose column metadata is serialized Key: SPARK-16517 URL: https://issues.apache.org/jira/browse/SPARK-16517 Project: Spark

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-05 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15362180#comment-15362180 ] lichenglin commented on SPARK-16361: I think a cube with just about 10 fields is common in OLAP

[jira] [Issue Comment Deleted] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-16361: --- Comment: was deleted (was: I have set master url in java application. here is a copy from spark

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361869#comment-15361869 ] lichenglin commented on SPARK-16361: I have set master url in java application. here is a copy from

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361870#comment-15361870 ] lichenglin commented on SPARK-16361: I have set master url in java application. here is a copy from

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361117#comment-15361117 ] lichenglin commented on SPARK-16361: GenerateUnsafeProjection: Code generated in 4.012162 ms The

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361112#comment-15361112 ] lichenglin commented on SPARK-16361: Here is my whole setting {code} spark.local.dir

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361073#comment-15361073 ] lichenglin commented on SPARK-16361: The data's size is 1 million. I'm sure that 40 GB memory is

[jira] [Comment Edited] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361039#comment-15361039 ] lichenglin edited comment on SPARK-16361 at 7/4/16 9:03 AM: "A long time"

[jira] [Commented] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361039#comment-15361039 ] lichenglin commented on SPARK-16361: "A long time" means the GC time / duration of each task. You can

[jira] [Created] (SPARK-16361) It takes a long time for gc when building cube with many fields

2016-07-04 Thread lichenglin (JIRA)
lichenglin created SPARK-16361: -- Summary: It takes a long time for gc when building cube with many fields Key: SPARK-16361 URL: https://issues.apache.org/jira/browse/SPARK-16361 Project: Spark

[jira] [Updated] (SPARK-15900) please add a map param on MQTTUtils.createStream for setting MqttConnectOptions

2016-06-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-15900: --- Summary: please add a map param on MQTTUtils.createStream for setting MqttConnectOptions (was:

[jira] [Updated] (SPARK-15900) please add a map param on MQTTUtils.createStreamfor setting MqttConnectOptions

2016-06-11 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-15900: --- Summary: please add a map param on MQTTUtils.createStreamfor setting MqttConnectOptions (was:

[jira] [Created] (SPARK-15900) please add a map param on MQTTUtils.create for setting MqttConnectOptions

2016-06-11 Thread lichenglin (JIRA)
lichenglin created SPARK-15900: -- Summary: please add a map param on MQTTUtils.create for setting MqttConnectOptions Key: SPARK-15900 URL: https://issues.apache.org/jira/browse/SPARK-15900 Project:

[jira] [Created] (SPARK-15497) DecisionTreeClassificationModel can't be saved within a Pipeline caused by not implementing Writable

2016-05-23 Thread lichenglin (JIRA)
lichenglin created SPARK-15497: -- Summary: DecisionTreeClassificationModel can't be saved within a Pipeline caused by not implementing Writable Key: SPARK-15497 URL: https://issues.apache.org/jira/browse/SPARK-15497

[jira] [Commented] (SPARK-15044) spark-sql will throw "input path does not exist" exception if it handles a partition which exists in hive table, but the path is removed manually

2016-05-23 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297540#comment-15297540 ] lichenglin commented on SPARK-15044: This exception is caused by the HiveContext caching the metadata

[jira] [Closed] (SPARK-15478) LogisticRegressionModel coefficients() returns an empty vector

2016-05-23 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin closed SPARK-15478. -- Resolution: Not A Problem > LogisticRegressionModel coefficients() returns an empty vector >

[jira] [Commented] (SPARK-15478) LogisticRegressionModel coefficients() returns an empty vector

2016-05-23 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296332#comment-15296332 ] lichenglin commented on SPARK-15478: Sorry, I made a mistake with the wrong example data >

[jira] [Updated] (SPARK-15478) LogisticRegressionModel's coefficients always return an empty vector

2016-05-23 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-15478: --- Description: I'm not sure this is a bug. I'm running the sample code like this {code} public

[jira] [Updated] (SPARK-15478) LogisticRegressionModel's coefficients always return an empty vector

2016-05-23 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-15478: --- Summary: LogisticRegressionModel's coefficients always return an empty vector (was:

[jira] [Updated] (SPARK-15478) LogisticRegressionModel's coefficients are always an empty vector

2016-05-23 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-15478: --- Description: I don't know if this is a bug. I'm running the sample code like this {code}

[jira] [Created] (SPARK-15478) LogisticRegressionModel's coefficients are always an empty vector

2016-05-23 Thread lichenglin (JIRA)
lichenglin created SPARK-15478: -- Summary: LogisticRegressionModel's coefficients are always an empty vector Key: SPARK-15478 URL: https://issues.apache.org/jira/browse/SPARK-15478 Project: Spark

[jira] [Updated] (SPARK-14886) RankingMetrics.ndcgAt throws java.lang.ArrayIndexOutOfBoundsException

2016-04-25 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-14886: --- Description: {code} @Since("1.2.0") def ndcgAt(k: Int): Double = { require(k > 0, "ranking
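The out-of-bounds access came from iterating up to a bound derived from k without clamping each array access. A pure-Python NDCG@k sketch of the corrected logic (not the MLlib source itself):

```python
import math

def ndcg_at(pred, labels, k):
    """NDCG@k with both array accesses clamped, so k larger than
    either the prediction list or the label set is safe."""
    lab_set = set(labels)
    if not lab_set or k <= 0:
        return 0.0
    n = min(max(len(pred), len(lab_set)), k)
    dcg = ideal = 0.0
    for i in range(n):
        gain = 1.0 / math.log(i + 2, 2)
        if i < len(pred) and pred[i] in lab_set:  # clamp prediction index
            dcg += gain
        if i < len(lab_set):                      # clamp ideal index
            ideal += gain
    return dcg / ideal

print(ndcg_at([1, 2, 3], [1, 2, 3], k=10))  # 1.0, no IndexError
```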

[jira] [Updated] (SPARK-14886) RankingMetrics.ndcgAt throws java.lang.ArrayIndexOutOfBoundsException

2016-04-24 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-14886: --- Description: @Since("1.2.0") def ndcgAt(k: Int): Double = { require(k > 0, "ranking position

[jira] [Created] (SPARK-14886) RankingMetrics.ndcgAt throws java.lang.ArrayIndexOutOfBoundsException

2016-04-24 Thread lichenglin (JIRA)
lichenglin created SPARK-14886: -- Summary: RankingMetrics.ndcgAt throws java.lang.ArrayIndexOutOfBoundsException Key: SPARK-14886 URL: https://issues.apache.org/jira/browse/SPARK-14886 Project: Spark

[jira] [Created] (SPARK-13999) Run 'group by' before building cube

2016-03-19 Thread lichenglin (JIRA)
lichenglin created SPARK-13999: -- Summary: Run 'group by' before building cube Key: SPARK-13999 URL: https://issues.apache.org/jira/browse/SPARK-13999 Project: Spark Issue Type: Improvement

[jira] [Closed] (SPARK-13907) Improve the cube with the Fast Cubing algorithm from Apache Kylin

2016-03-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin closed SPARK-13907. -- > Improve the cube with the Fast Cubing algorithm from Apache Kylin >

[jira] [Updated] (SPARK-13907) Improve the cube with the Fast Cubing algorithm from Apache Kylin

2016-03-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-13907: --- Description: I tried to build a cube on a 100 million data set. When I set 9 fields to build the

[jira] [Updated] (SPARK-13907) Improve the cube with the Fast Cubing algorithm from Apache Kylin

2016-03-15 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-13907: --- Description: I tried to build a cube on a 100 million data set. When I set 9 fields to build the

[jira] [Created] (SPARK-13907) Improve the cube with the Fast Cubing algorithm from Apache Kylin

2016-03-15 Thread lichenglin (JIRA)
lichenglin created SPARK-13907: -- Summary: Improve the cube with the Fast Cubing algorithm from Apache Kylin Key: SPARK-13907 URL: https://issues.apache.org/jira/browse/SPARK-13907 Project: Spark Issue

[jira] [Commented] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157801#comment-15157801 ] lichenglin commented on SPARK-13433: I know the property 'spark.driver.cores'. What I want to limit

[jira] [Commented] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157802#comment-15157802 ] lichenglin commented on SPARK-13433: I know the property 'spark.driver.cores'. What I want to limit

[jira] [Commented] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157121#comment-15157121 ] lichenglin commented on SPARK-13433: What I mean is we should set a limit on the total cores for

[jira] [Commented] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157118#comment-15157118 ] lichenglin commented on SPARK-13433: But when? When something else frees up resources? All the cores

[jira] [Commented] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15157050#comment-15157050 ] lichenglin commented on SPARK-13433: It's something like a deadlock: drivers use all cores > application

[jira] [Updated] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-13433: --- Description: I have a 16-core cluster. A running driver uses at least 1 core, maybe more. When I

[jira] [Updated] (SPARK-13433) The standalone server should limit the count of cores and memory for running Drivers

2016-02-22 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lichenglin updated SPARK-13433: --- Description: I have a 16-core cluster. When I submit a lot of jobs to the standalone server in

[jira] [Created] (SPARK-13433) The standalone server should limit the count of running drivers

2016-02-22 Thread lichenglin (JIRA)
lichenglin created SPARK-13433: -- Summary: The standalone server should limit the count of running drivers Key: SPARK-13433 URL: https://issues.apache.org/jira/browse/SPARK-13433 Project: Spark Issue
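In standalone mode there are per-application knobs, but no cluster-wide cap on the aggregate cores held by concurrently running drivers, which is what this issue requests. A sketch of the per-job settings that at least bound a single cluster-mode submission (values illustrative):

```shell
# Per-job knobs that bound one driver/app on a standalone cluster
# (values illustrative); there is no cluster-wide cap on the number
# of concurrent drivers, which is what SPARK-13433 asks for.
spark-submit \
  --deploy-mode cluster \
  --conf spark.driver.cores=1 \
  --conf spark.driver.memory=1g \
  --conf spark.cores.max=4 \
  app.jar
```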

[jira] [Commented] (SPARK-12963) In cluster mode, SPARK_LOCAL_IP will cause a driver exception: Service 'Driver' failed after 16 retries!

2016-01-28 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121065#comment-15121065 ] lichenglin commented on SPARK-12963: I think the driver's IP should be fixed, like 'localhost'

[jira] [Commented] (SPARK-12963) In cluster mode, SPARK_LOCAL_IP will cause a driver exception: Service 'Driver' failed after 16 retries!

2016-01-28 Thread lichenglin (JIRA)
[ https://issues.apache.org/jira/browse/SPARK-12963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121053#comment-15121053 ] lichenglin commented on SPARK-12963: I think the driver's IP should be fixed, like 'localhost'

[jira] [Created] (SPARK-12963) In cluster mode, SPARK_LOCAL_IP will cause a driver exception: Service 'Driver' failed after 16 retries!

2016-01-21 Thread lichenglin (JIRA)
lichenglin created SPARK-12963: -- Summary: In cluster mode, SPARK_LOCAL_IP will cause a driver exception: Service 'Driver' failed after 16 retries! Key: SPARK-12963 URL: https://issues.apache.org/jira/browse/SPARK-12963