[jira] [Updated] (SPARK-6546) A wrong, but this will make spark compile failed!!

2015-03-25 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6546:
--
Summary: A wrong, but this will make spark compile failed!!   (was: A 
little spell wrong, but this will make spark compile failed!! )

 A wrong, but this will make spark compile failed!! 
 ---

 Key: SPARK-6546
 URL: https://issues.apache.org/jira/browse/SPARK-6546
 Project: Spark
  Issue Type: Bug
  Components: Build
Reporter: DoingDone9

 Wrong code: val tmpDir = Files.createTempDir()
 It should be Utils.createTempDir(), not Files.createTempDir().
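The fix, presumably, is to call Spark's internal Utils.createTempDir() helper instead of Guava's Files.createTempDir(). Since Utils is Spark-internal and not available on a bare classpath, a self-contained JDK stand-in shows the intended behavior:

```scala
import java.nio.file.Files

object TempDirSketch {
  def main(args: Array[String]): Unit = {
    // Stand-in for Utils.createTempDir(): create a unique temp directory.
    val tmpDir = Files.createTempDirectory("spark-").toFile
    println(tmpDir.isDirectory) // true
    tmpDir.delete()
  }
}
```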



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-6546) A wrong use, but this will make spark compile failed!!

2015-03-25 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6546:
--
Summary: A wrong use, but this will make spark compile failed!!   (was: A 
wrong, but this will make spark compile failed!! )

 A wrong use, but this will make spark compile failed!! 
 ---

 Key: SPARK-6546
 URL: https://issues.apache.org/jira/browse/SPARK-6546
 Project: Spark
  Issue Type: Bug
  Components: Build
Reporter: DoingDone9

 Wrong code: val tmpDir = Files.createTempDir()
 It should be Utils.createTempDir(), not Files.createTempDir().






[jira] [Created] (SPARK-6546) A little spell wrong, but this will make spark compile failed!!

2015-03-25 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6546:
-

 Summary: A little spell wrong, but this will make spark compile 
failed!! 
 Key: SPARK-6546
 URL: https://issues.apache.org/jira/browse/SPARK-6546
 Project: Spark
  Issue Type: Bug
  Components: Build
Reporter: DoingDone9









[jira] [Updated] (SPARK-6546) A little spell wrong, but this will make spark compile failed!!

2015-03-25 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6546:
--
Description: 
Wrong code: val tmpDir = Files.createTempDir()
It should be Utils.createTempDir(), not Files.createTempDir().

 A little spell wrong, but this will make spark compile failed!! 
 

 Key: SPARK-6546
 URL: https://issues.apache.org/jira/browse/SPARK-6546
 Project: Spark
  Issue Type: Bug
  Components: Build
Reporter: DoingDone9

 Wrong code: val tmpDir = Files.createTempDir()
 It should be Utils.createTempDir(), not Files.createTempDir().






[jira] [Updated] (SPARK-6546) Using the wrong code that will make spark compile failed!!

2015-03-25 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6546:
--
Summary: Using the wrong code that will make spark compile failed!!   (was: 
A wrong use, but this will make spark compile failed!! )

 Using the wrong code that will make spark compile failed!! 
 ---

 Key: SPARK-6546
 URL: https://issues.apache.org/jira/browse/SPARK-6546
 Project: Spark
  Issue Type: Bug
  Components: Build
Reporter: DoingDone9

 Wrong code: val tmpDir = Files.createTempDir()
 It should be Utils.createTempDir(), not Files.createTempDir().






[jira] [Updated] (SPARK-6493) Support numeric(a,b) in the sqlContext

2015-03-24 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6493:
--
Summary: Support numeric(a,b) in the sqlContext  (was: Support numeric(a,b) 
in the parser)

 Support numeric(a,b) in the sqlContext
 --

 Key: SPARK-6493
 URL: https://issues.apache.org/jira/browse/SPARK-6493
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.3.0
Reporter: DoingDone9
Priority: Minor

 Support SQL like this:
 select cast(20.12 as numeric(4,2)) from src limit1;






[jira] [Updated] (SPARK-6493) Support numeric(a,b) in the sqlContext

2015-03-24 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6493:
--
Description: 
Support SQL like this:

select cast(20.12 as numeric(4,2)) from src limit 1;

  was:
support sql like that :

select cast(20.12 as numeric(4,2)) from src limit1;


 Support numeric(a,b) in the sqlContext
 --

 Key: SPARK-6493
 URL: https://issues.apache.org/jira/browse/SPARK-6493
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.3.0
Reporter: DoingDone9
Priority: Minor

 Support SQL like this:
 select cast(20.12 as numeric(4,2)) from src limit 1;
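For reference, numeric(4,2) means at most 4 significant digits with 2 of them after the decimal point. Assuming Spark maps numeric(a,b) onto its decimal type, the cast above should behave like a scaled BigDecimal; this plain-JVM illustration (not Spark API) shows the expected value:

```scala
import java.math.{BigDecimal => JBigDecimal, RoundingMode}

object NumericCastSketch {
  def main(args: Array[String]): Unit = {
    // numeric(4,2): 4 total digits, 2 after the decimal point.
    val v = new JBigDecimal("20.12").setScale(2, RoundingMode.HALF_UP)
    println(v)           // 20.12
    println(v.precision) // 4
    println(v.scale)     // 2
  }
}
```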






[jira] [Created] (SPARK-6493) Support numeric(a,b) in the parser

2015-03-24 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6493:
-

 Summary: Support numeric(a,b) in the parser
 Key: SPARK-6493
 URL: https://issues.apache.org/jira/browse/SPARK-6493
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.3.0
Reporter: DoingDone9
Priority: Minor









[jira] [Updated] (SPARK-6493) Support numeric(a,b) in the parser

2015-03-24 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6493:
--
Description: 
Support SQL like this:

select cast(20.12 as numeric(4,2)) from src limit1;

 Support numeric(a,b) in the parser
 --

 Key: SPARK-6493
 URL: https://issues.apache.org/jira/browse/SPARK-6493
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.3.0
Reporter: DoingDone9
Priority: Minor

 Support SQL like this:
 select cast(20.12 as numeric(4,2)) from src limit1;






[jira] [Updated] (SPARK-6409) It is not necessary that avoid old inteface of hive because this will make some UDAF can not work.

2015-03-22 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6409:
--
Summary: It is not necessary that avoid old inteface of hive because this 
will make some UDAF can not work.  (was: It is not necessary that avoid old 
inteface of hive that will make some UDAF can not work.)

 It is not necessary that avoid old inteface of hive because this will make 
 some UDAF can not work.
 --

 Key: SPARK-6409
 URL: https://issues.apache.org/jira/browse/SPARK-6409
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9
  Labels: starter

 I ran SQL like this:
 {code}
 CREATE TEMPORARY FUNCTION test_avg AS 
 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage'; 
   
 SELECT 
 test_avg(1), 
 test_avg(substr(value,5)) 
 FROM src; 
 {code}
 Then I get an exception:
 {code}
 15/03/19 09:36:45 ERROR CliDriver: org.apache.spark.SparkException: Job 
 aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent 
 failure: Lost task 0.3 in stage 2.0 (TID 6, HPC-3): 
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AverageAggregationBuffer
  cannot be cast to 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator$AbstractAggregationBuffer
  
 at 
 org.apache.spark.sql.hive.HiveUdafFunction.<init>(hiveUdfs.scala:369) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:214) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:188) 
 {code}
 I find that GenericUDAFAverage uses the deprecated interface AggregationBuffer, 
 which has been replaced by AbstractAggregationBuffer. Spark avoids the old 
 interface AggregationBuffer, so GenericUDAFAverage cannot work. I think this 
 restriction is not necessary.
 The code in Spark:
 {code}
   // Cast required to avoid type inference selecting a deprecated Hive API.
   private val buffer =
 
 function.getNewAggregationBuffer.asInstanceOf[GenericUDAFEvaluator.AbstractAggregationBuffer]
 {code}
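The ClassCastException above is the generic JVM failure of casting an instance to a type it does not extend. A minimal sketch with stand-in names (these are not the real Hive classes) reproduces the mechanism: a buffer written against the old interface cannot be cast to the new abstract class.

```scala
object CastFailureSketch {
  // Stand-ins (hypothetical) for the two Hive buffer types: the deprecated
  // interface, and the newer abstract class that replaced it.
  trait AggregationBuffer
  abstract class AbstractAggregationBuffer extends AggregationBuffer

  // A UDAF buffer implemented against the old interface only:
  class AverageAggregationBuffer extends AggregationBuffer

  def main(args: Array[String]): Unit = {
    val buf: AggregationBuffer = new AverageAggregationBuffer
    // Mirrors Spark's unconditional asInstanceOf cast, which is what fails:
    try {
      buf.asInstanceOf[AbstractAggregationBuffer]
      println("cast ok")
    } catch {
      case _: ClassCastException => println("ClassCastException")
    }
  }
}
```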






[jira] [Updated] (SPARK-6409) It is not necessary that avoid old inteface of hive that will make some UDAF can not work.

2015-03-22 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6409:
--
Summary: It is not necessary that avoid old inteface of hive that will make 
some UDAF can not work.  (was: It is not necessary that avoid old inteface of 
hive that will make some UDAF can work.)

 It is not necessary that avoid old inteface of hive that will make some UDAF 
 can not work.
 --

 Key: SPARK-6409
 URL: https://issues.apache.org/jira/browse/SPARK-6409
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9
  Labels: starter

 I ran SQL like this:
 {code}
 CREATE TEMPORARY FUNCTION test_avg AS 
 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage'; 
   
 SELECT 
 test_avg(1), 
 test_avg(substr(value,5)) 
 FROM src; 
 {code}
 Then I get an exception:
 {code}
 15/03/19 09:36:45 ERROR CliDriver: org.apache.spark.SparkException: Job 
 aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent 
 failure: Lost task 0.3 in stage 2.0 (TID 6, HPC-3): 
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AverageAggregationBuffer
  cannot be cast to 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator$AbstractAggregationBuffer
  
 at 
 org.apache.spark.sql.hive.HiveUdafFunction.<init>(hiveUdfs.scala:369) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:214) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:188) 
 {code}
 I find that GenericUDAFAverage uses the deprecated interface AggregationBuffer, 
 which has been replaced by AbstractAggregationBuffer. Spark avoids the old 
 interface AggregationBuffer, so GenericUDAFAverage cannot work. I think this 
 restriction is not necessary.
 The code in Spark:
 {code}
   // Cast required to avoid type inference selecting a deprecated Hive API.
   private val buffer =
 
 function.getNewAggregationBuffer.asInstanceOf[GenericUDAFEvaluator.AbstractAggregationBuffer]
 {code}






[jira] [Updated] (SPARK-6409) It is not necessary that avoid old inteface of hive, because this will make some UDAF can not work.

2015-03-22 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6409:
--
Summary: It is not necessary that avoid old inteface of hive, because this 
will make some UDAF can not work.  (was: It is not necessary that avoid old 
inteface of hive because this will make some UDAF can not work.)

 It is not necessary that avoid old inteface of hive, because this will make 
 some UDAF can not work.
 ---

 Key: SPARK-6409
 URL: https://issues.apache.org/jira/browse/SPARK-6409
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9
  Labels: starter

 I ran SQL like this:
 {code}
 CREATE TEMPORARY FUNCTION test_avg AS 
 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage'; 
   
 SELECT 
 test_avg(1), 
 test_avg(substr(value,5)) 
 FROM src; 
 {code}
 Then I get an exception:
 {code}
 15/03/19 09:36:45 ERROR CliDriver: org.apache.spark.SparkException: Job 
 aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent 
 failure: Lost task 0.3 in stage 2.0 (TID 6, HPC-3): 
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AverageAggregationBuffer
  cannot be cast to 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator$AbstractAggregationBuffer
  
 at 
 org.apache.spark.sql.hive.HiveUdafFunction.<init>(hiveUdfs.scala:369) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:214) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:188) 
 {code}
 I find that GenericUDAFAverage uses the deprecated interface AggregationBuffer, 
 which has been replaced by AbstractAggregationBuffer. Spark avoids the old 
 interface AggregationBuffer, so GenericUDAFAverage cannot work. I think this 
 restriction is not necessary.
 The code in Spark:
 {code}
   // Cast required to avoid type inference selecting a deprecated Hive API.
   private val buffer =
 
 function.getNewAggregationBuffer.asInstanceOf[GenericUDAFEvaluator.AbstractAggregationBuffer]
 {code}






[jira] [Comment Edited] (SPARK-2926) Add MR-style (merge-sort) SortShuffleReader for sort-based shuffle

2015-03-22 Thread DoingDone9 (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375323#comment-14375323
 ] 

DoingDone9 edited comment on SPARK-2926 at 3/23/15 3:07 AM:


Hi, I tested sortByKey with spark-perf (https://github.com/databricks/spark-perf), 
and got results like these:

spark 1.3:
{time:452.453},{time:457.929},{time:452.295}

with your PR:
{time:471.215},{time:460.59},{time:463.795}

Could you tell me what I did incorrectly? Thank you.


was (Author: doingdone9):
hi, i test sortByKey with spark-perf(https://github.com/databricks/spark-perf), 
but i have a result like that :

spark1.3 : 
{time:452.453},{time:457.929},{time:452.295}

with your pr
{time:471.215},{time:460.59},{time:463.795}

could you tell me something taht i did incorretly. Thank you.

 Add MR-style (merge-sort) SortShuffleReader for sort-based shuffle
 --

 Key: SPARK-2926
 URL: https://issues.apache.org/jira/browse/SPARK-2926
 Project: Spark
  Issue Type: Improvement
  Components: Shuffle
Affects Versions: 1.1.0
Reporter: Saisai Shao
Assignee: Saisai Shao
 Attachments: SortBasedShuffleRead.pdf, Spark Shuffle Test 
 Report(contd).pdf, Spark Shuffle Test Report.pdf


 Currently Spark has already integrated sort-based shuffle write, which 
 greatly improves the IO performance and reduces the memory consumption when 
 the reducer number is very large. But the reducer side still adopts the 
 hash-based shuffle reader implementation, which neglects the ordering 
 attributes of map output data in some situations.
 Here we propose an MR-style, sort-merge-like shuffle reader for sort-based 
 shuffle, to further improve its performance.
 Work-in-progress code and a performance test report will be posted later, 
 once some unit test bugs are fixed.
 Any comments would be greatly appreciated. 
 Thanks a lot.






[jira] [Commented] (SPARK-2926) Add MR-style (merge-sort) SortShuffleReader for sort-based shuffle

2015-03-22 Thread DoingDone9 (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14375323#comment-14375323
 ] 

DoingDone9 commented on SPARK-2926:
---

Hi, I tested sortByKey with spark-perf (https://github.com/databricks/spark-perf), 
and got results like these:

spark 1.3:
{time:452.453},{time:457.929},{time:452.295}

with your PR:
{time:471.215},{time:460.59},{time:463.795}

Could you tell me what I did incorrectly? Thank you.
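For a quick comparison, the means of the three runs can be computed directly from the numbers quoted above (plain arithmetic, nothing Spark-specific):

```scala
object PerfMeans {
  def main(args: Array[String]): Unit = {
    val spark13 = Seq(452.453, 457.929, 452.295) // baseline timings (s)
    val withPr  = Seq(471.215, 460.59, 463.795)  // timings with the PR (s)
    def mean(xs: Seq[Double]): Double = xs.sum / xs.size
    println(f"spark 1.3 mean: ${mean(spark13)}%.2f") // 454.23
    println(f"with PR mean:   ${mean(withPr)}%.2f")  // 465.20
    // i.e. roughly a 2-3% slowdown in this particular run
  }
}
```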

 Add MR-style (merge-sort) SortShuffleReader for sort-based shuffle
 --

 Key: SPARK-2926
 URL: https://issues.apache.org/jira/browse/SPARK-2926
 Project: Spark
  Issue Type: Improvement
  Components: Shuffle
Affects Versions: 1.1.0
Reporter: Saisai Shao
Assignee: Saisai Shao
 Attachments: SortBasedShuffleRead.pdf, Spark Shuffle Test 
 Report(contd).pdf, Spark Shuffle Test Report.pdf


 Currently Spark has already integrated sort-based shuffle write, which 
 greatly improves the IO performance and reduces the memory consumption when 
 the reducer number is very large. But the reducer side still adopts the 
 hash-based shuffle reader implementation, which neglects the ordering 
 attributes of map output data in some situations.
 Here we propose an MR-style, sort-merge-like shuffle reader for sort-based 
 shuffle, to further improve its performance.
 Work-in-progress code and a performance test report will be posted later, 
 once some unit test bugs are fixed.
 Any comments would be greatly appreciated. 
 Thanks a lot.






[jira] [Updated] (SPARK-6409) It is not necessary that avoid old inteface of hive that will make some UDAF can work.

2015-03-19 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6409:
--
Description: 
I ran SQL like this:
CREATE TEMPORARY FUNCTION test_avg AS 
'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage'; 
  
SELECT 
test_avg(1), 
test_avg(substr(value,5)) 
FROM src; 

Then I get an exception:
15/03/19 09:36:45 ERROR CliDriver: org.apache.spark.SparkException: Job aborted 
due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: 
Lost task 0.3 in stage 2.0 (TID 6, HPC-3): java.lang.ClassCastException: 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AverageAggregationBuffer
 cannot be cast to 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator$AbstractAggregationBuffer
 
at 
org.apache.spark.sql.hive.HiveUdafFunction.<init>(hiveUdfs.scala:369) 
at 
org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:214) 
at 
org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:188) 




 It is not necessary that avoid old inteface of hive that will make some UDAF 
 can work.
 --

 Key: SPARK-6409
 URL: https://issues.apache.org/jira/browse/SPARK-6409
 Project: Spark
  Issue Type: Question
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9

 I ran SQL like this:
 CREATE TEMPORARY FUNCTION test_avg AS 
 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage'; 
   
 SELECT 
 test_avg(1), 
 test_avg(substr(value,5)) 
 FROM src; 
 Then I get an exception:
 15/03/19 09:36:45 ERROR CliDriver: org.apache.spark.SparkException: Job 
 aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent 
 failure: Lost task 0.3 in stage 2.0 (TID 6, HPC-3): 
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AverageAggregationBuffer
  cannot be cast to 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator$AbstractAggregationBuffer
  
 at 
 org.apache.spark.sql.hive.HiveUdafFunction.<init>(hiveUdfs.scala:369) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:214) 
 at 
 org.apache.spark.sql.hive.HiveGenericUdaf.newInstance(hiveUdfs.scala:188) 






[jira] [Updated] (SPARK-6409) It is not necessary that avoid old inteface of hive that will make some UDAF can work.

2015-03-19 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6409:
--
Summary: It is not necessary that avoid old inteface of hive that will make 
some UDAF can work.  (was: Is it necessary that avoid old inteface of hive that 
will make some UDAF can work.)

 It is not necessary that avoid old inteface of hive that will make some UDAF 
 can work.
 --

 Key: SPARK-6409
 URL: https://issues.apache.org/jira/browse/SPARK-6409
 Project: Spark
  Issue Type: Question
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9








[jira] [Created] (SPARK-6409) Is it necessary that avoid old inteface of hive that will make some UDAF can work.

2015-03-19 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6409:
-

 Summary: Is it necessary that avoid old inteface of hive that will 
make some UDAF can work.
 Key: SPARK-6409
 URL: https://issues.apache.org/jira/browse/SPARK-6409
 Project: Spark
  Issue Type: Question
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9









[jira] [Created] (SPARK-6300) sc.addFile(path) does not support the relative path.

2015-03-12 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6300:
-

 Summary: sc.addFile(path) does not support the relative path.
 Key: SPARK-6300
 URL: https://issues.apache.org/jira/browse/SPARK-6300
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.1
Reporter: DoingDone9


When I run a command like sc.addFile("../test.txt"), it does not work and throws 
an exception:

java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path 
in absolute URI: file:../test.txt
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172) 

...
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
file:../test.txt
at java.net.URI.checkPath(URI.java:1804)
at java.net.URI.<init>(URI.java:752)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
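A possible workaround, assuming the caller controls the path, is to resolve the relative path to an absolute one before handing it to sc.addFile. The resolution itself is plain JDK code; the sc.addFile call is shown only as a comment, since it needs a live SparkContext:

```scala
import java.nio.file.Paths

object ResolvePathSketch {
  def main(args: Array[String]): Unit = {
    // Resolving to an absolute, normalized path sidesteps
    // "Relative path in absolute URI: file:../test.txt".
    val absolute = Paths.get("../test.txt").toAbsolutePath.normalize
    println(absolute.isAbsolute) // true
    // sc.addFile(absolute.toString)  // sc: a live SparkContext
  }
}
```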






[jira] [Closed] (SPARK-6272) Sort these tokens in alphabetic order to avoid further duplicate in HiveQl

2015-03-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 closed SPARK-6272.
-
Resolution: Duplicate

 Sort these tokens in alphabetic order to avoid further duplicate in HiveQl
 --

 Key: SPARK-6272
 URL: https://issues.apache.org/jira/browse/SPARK-6272
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor








[jira] [Created] (SPARK-6272) Sort these tokens in alphabetic order to avoid further duplicate in HiveQl

2015-03-10 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6272:
-

 Summary: Sort these tokens in alphabetic order to avoid further 
duplicate in HiveQl
 Key: SPARK-6272
 URL: https://issues.apache.org/jira/browse/SPARK-6272
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor









[jira] [Created] (SPARK-6271) Sort these tokens in alphabetic order to avoid further duplicate in HiveQl

2015-03-10 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6271:
-

 Summary: Sort these tokens in alphabetic order to avoid further 
duplicate in HiveQl
 Key: SPARK-6271
 URL: https://issues.apache.org/jira/browse/SPARK-6271
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor









[jira] [Created] (SPARK-6243) The Operation of match not include all possible scenarios.

2015-03-10 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6243:
-

 Summary: The Operation of match not include all possible scenarios.
 Key: SPARK-6243
 URL: https://issues.apache.org/jira/browse/SPARK-6243
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: DoingDone9









[jira] [Updated] (SPARK-6243) The Operation of match not include all possible scenarios.

2015-03-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6243:
--
Description: 
It does not consider the case where order.dataType does not match NativeType.

val comparison = order.dataType match {
  case n: NativeType if order.direction == Ascending =>
    n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
  case n: NativeType if order.direction == Descending =>
    n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
}

  was:
It did not conside that order.dataType does not match NativeType.

val comparison = order.dataType match {
  case n: NativeType if order.direction == Ascending =
n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
  case n: NativeType if order.direction == Descending =
n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
 }


 The Operation of match not include all possible scenarios.
 --

 Key: SPARK-6243
 URL: https://issues.apache.org/jira/browse/SPARK-6243
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: DoingDone9

 It does not consider the case where order.dataType does not match NativeType.
 val comparison = order.dataType match {
   case n: NativeType if order.direction == Ascending =>
     n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
   case n: NativeType if order.direction == Descending =>
     n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
 }
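A minimal, self-contained sketch of the hazard (the type names here are stand-ins, not Spark's actual classes): a match with only guarded NativeType cases throws scala.MatchError at runtime for any other data type, so a default case is needed.

```scala
object MatchExhaustiveness {
  // Hypothetical stand-ins for Spark's data-type hierarchy:
  sealed trait DataType
  case object NativeType extends DataType
  case object BinaryType extends DataType // some non-native type

  // Mirrors the reported pattern: only NativeType cases, no default.
  def compare(dt: DataType): Int = dt match {
    case NativeType => 0
    // missing: case other => ...  -- anything else hits scala.MatchError
  }

  def main(args: Array[String]): Unit = {
    try { compare(BinaryType); println("matched") }
    catch { case _: MatchError => println("MatchError") }
  }
}
```

The usual guard is a final `case other => sys.error(s"Type $other does not support ordered operations")`, which turns the silent fall-through into a descriptive failure.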






[jira] [Updated] (SPARK-6243) The Operation of match not include all possible scenarios.

2015-03-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6243:
--
Description: 
It does not consider the case where order.dataType does not match NativeType.

val comparison = order.dataType match {
  case n: NativeType if order.direction == Ascending =>
    n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
  case n: NativeType if order.direction == Descending =>
    n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
}

 The Operation of match not include all possible scenarios.
 --

 Key: SPARK-6243
 URL: https://issues.apache.org/jira/browse/SPARK-6243
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: DoingDone9

 It does not consider the case where order.dataType does not match NativeType.
 val comparison = order.dataType match {
   case n: NativeType if order.direction == Ascending =>
     n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
   case n: NativeType if order.direction == Descending =>
     n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
 }






[jira] [Updated] (SPARK-6243) The Operation of match did not conside that order.dataType does not match NativeType

2015-03-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6243:
--
Summary: The Operation of match did not conside that order.dataType does 
not match NativeType  (was: The Operation of match not include all possible 
scenarios.)

 The Operation of match did not conside that order.dataType does not match 
 NativeType
 

 Key: SPARK-6243
 URL: https://issues.apache.org/jira/browse/SPARK-6243
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: DoingDone9

 It does not consider the case where order.dataType does not match NativeType.
 val comparison = order.dataType match {
   case n: NativeType if order.direction == Ascending =>
     n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
   case n: NativeType if order.direction == Descending =>
     n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
 }






[jira] [Updated] (SPARK-6243) The Operation of match did not conside the scenarios that order.dataType does not match NativeType

2015-03-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6243:
--
Summary: The Operation of match did not conside the scenarios that 
order.dataType does not match NativeType  (was: The Operation of match did not 
conside that order.dataType does not match NativeType)

 The Operation of match did not conside the scenarios that order.dataType does 
 not match NativeType
 --

 Key: SPARK-6243
 URL: https://issues.apache.org/jira/browse/SPARK-6243
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: DoingDone9

 It does not consider the case where order.dataType does not match NativeType.
 val comparison = order.dataType match {
   case n: NativeType if order.direction == Ascending =>
     n.ordering.asInstanceOf[Ordering[Any]].compare(left, right)
   case n: NativeType if order.direction == Descending =>
     n.ordering.asInstanceOf[Ordering[Any]].reverse.compare(left, right)
 }






[jira] [Created] (SPARK-6185) Delete repeated TOKEN. TOK_CREATEFUNCTION already exists at Line 84;

2015-03-05 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6185:
-

 Summary: Delete repeated TOKEN. TOK_CREATEFUNCTION already exists at 
Line 84;
 Key: SPARK-6185
 URL: https://issues.apache.org/jira/browse/SPARK-6185
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9









[jira] [Updated] (SPARK-6185) Delete repeated TOKEN. TOK_CREATEFUNCTION already exists at Line 84;

2015-03-05 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6185:
--
Description: 
TOK_CREATEFUNCTION already exists at Line 84:

Line 84:  TOK_CREATEFUNCTION,
Line 85:  TOK_DROPFUNCTION,

Line 106: TOK_CREATEFUNCTION,

   Delete repeated TOKEN. TOK_CREATEFUNCTION already exists at Line 84;
 -

 Key: SPARK-6185
 URL: https://issues.apache.org/jira/browse/SPARK-6185
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9

 TOK_CREATEFUNCTION already exists at Line 84:
 Line 84:  TOK_CREATEFUNCTION,
 Line 85:  TOK_DROPFUNCTION,
 Line 106: TOK_CREATEFUNCTION,






[jira] [Closed] (SPARK-6181) Support SHOW COMPACTIONS;

2015-03-05 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 closed SPARK-6181.
-
Resolution: Invalid

 Spark SQL does not support transactions.

 Support  SHOW COMPACTIONS;
 

 Key: SPARK-6181
 URL: https://issues.apache.org/jira/browse/SPARK-6181
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9

 SHOW COMPACTIONS returns a list of all tables and partitions currently being 
 compacted or scheduled for compaction when Hive transactions are being used.






[jira] [Created] (SPARK-6198) Support select current_database()

2015-03-05 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6198:
-

 Summary: Support select current_database()
 Key: SPARK-6198
 URL: https://issues.apache.org/jira/browse/SPARK-6198
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9









[jira] [Updated] (SPARK-6198) Support select current_database()

2015-03-05 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6198:
--
Description: 
The evaluate method in UDFCurrentDB has changed; it now just throws an 
exception, but hiveUdfs calls this method and fails.

@Override
public Object evaluate(DeferredObject[] arguments) throws HiveException {
  throw new IllegalStateException("never");
}

 Support select current_database()
 ---

 Key: SPARK-6198
 URL: https://issues.apache.org/jira/browse/SPARK-6198
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9

 The evaluate method in UDFCurrentDB has changed; it now just throws an 
 exception, but hiveUdfs calls this method and fails.
 @Override
 public Object evaluate(DeferredObject[] arguments) throws HiveException {
   throw new IllegalStateException("never");
 }






[jira] [Created] (SPARK-6179) Support SHOW PRINCIPALS role_name;

2015-03-04 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6179:
-

 Summary: Support SHOW PRINCIPALS role_name;
 Key: SPARK-6179
 URL: https://issues.apache.org/jira/browse/SPARK-6179
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9









[jira] [Updated] (SPARK-6179) Support SHOW PRINCIPALS role_name;

2015-03-04 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6179:
--
Description: 
SHOW PRINCIPALS role_name;

Lists all roles and users who belong to this role.
Only the admin role has privilege for this.

 Support SHOW PRINCIPALS role_name;
 

 Key: SPARK-6179
 URL: https://issues.apache.org/jira/browse/SPARK-6179
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9

 SHOW PRINCIPALS role_name;
 Lists all roles and users who belong to this role.
 Only the admin role has privilege for this.






[jira] [Created] (SPARK-6181) Support SHOW COMPACTIONS;

2015-03-04 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-6181:
-

 Summary: Support  SHOW COMPACTIONS;
 Key: SPARK-6181
 URL: https://issues.apache.org/jira/browse/SPARK-6181
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9









[jira] [Updated] (SPARK-6181) Support SHOW COMPACTIONS;

2015-03-04 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-6181:
--
Description: 
SHOW COMPACTIONS returns a list of all tables and partitions currently being 
compacted or scheduled for compaction when Hive transactions are being used.



 Support  SHOW COMPACTIONS;
 

 Key: SPARK-6181
 URL: https://issues.apache.org/jira/browse/SPARK-6181
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.1
Reporter: DoingDone9

 SHOW COMPACTIONS returns a list of all tables and partitions currently being 
 compacted or scheduled for compaction when Hive transactions are being used.






[jira] [Updated] (SPARK-5648) suppot alter view/table tableName unset tblproperties(k)

2015-02-06 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5648:
--
Description: 
Make HiveContext support unset tblproperties,
like:
alter view viewName unset tblproperties(k)
alter table tableName unset tblproperties(k)






  was:
Make HiveContext support unset tblproperties,
like:







 suppot alter view/table tableName unset tblproperties(k) 
 -

 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9

 Make HiveContext support unset tblproperties,
 like:
 alter view viewName unset tblproperties(k)
 alter table tableName unset tblproperties(k)






[jira] [Updated] (SPARK-5648) suppot alter ... unset tblproperties(key)

2015-02-06 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5648:
--
Summary: suppot alter ... unset tblproperties(key)   (was: suppot 
alter ... unset tblproperties(k) )

 suppot alter ... unset tblproperties(key) 
 --

 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9

 Make HiveContext support unset tblproperties,
 like:
 alter view viewName unset tblproperties(k)
 alter table tableName unset tblproperties(k)






[jira] [Updated] (SPARK-5648) suppot alter view/table tableName unset tblproperties(k)

2015-02-06 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5648:
--
Description: 
Make HiveContext support unset tblproperties,
like:






 suppot alter view/table tableName unset tblproperties(k) 
 -

 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9

 Make HiveContext support unset tblproperties,
 like:






[jira] [Updated] (SPARK-5648) suppot alter ... unset tblproperties(k)

2015-02-06 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5648:
--
Summary: suppot alter ... unset tblproperties(k)   (was: suppot alter 
view/table tableName unset tblproperties(k) )

 suppot alter ... unset tblproperties(k) 
 

 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9

 Make HiveContext support unset tblproperties,
 like:
 alter view viewName unset tblproperties(k)
 alter table tableName unset tblproperties(k)






[jira] [Updated] (SPARK-5648) support alter ... unset tblproperties(key)

2015-02-06 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5648:
--
Summary: support alter ... unset tblproperties(key)   (was: suppot 
alter ... unset tblproperties(key) )

 support alter ... unset tblproperties(key) 
 ---

 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9

 Make HiveContext support unset tblproperties(key),
 like:
 alter view viewName unset tblproperties(k)
 alter table tableName unset tblproperties(k)






[jira] [Created] (SPARK-5648) suppot alter view/table tableName unset tblproperties(k)

2015-02-06 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-5648:
-

 Summary: suppot alter view/table tableName unset 
tblproperties(k) 
 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9









[jira] [Updated] (SPARK-5648) suppot alter ... unset tblproperties(key)

2015-02-06 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5648:
--
Description: 
Make HiveContext support unset tblproperties(key),
like:
alter view viewName unset tblproperties(k)
alter table tableName unset tblproperties(k)






  was:
Make HiveContext support unset tblproperties,
like:
alter view viewName unset tblproperties(k)
alter table tableName unset tblproperties(k)







 suppot alter ... unset tblproperties(key) 
 --

 Key: SPARK-5648
 URL: https://issues.apache.org/jira/browse/SPARK-5648
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.2.0
Reporter: DoingDone9

 Make HiveContext support unset tblproperties(key),
 like:
 alter view viewName unset tblproperties(k)
 alter table tableName unset tblproperties(k)






[jira] [Updated] (SPARK-5129) make SqlContext support select date + XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Description: 
Example :
create table test (date: Date)

2014-01-01
2014-01-02
2014-01-03

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13




  was:
Example :
create table test (date: Date, name: String)

2014-01-01   a
2014-01-02   b
2014-01-03   c

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13





 make SqlContext support select date + XX DAYS from table  
 

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :
 create table test (date: Date)
 2014-01-01
 2014-01-02
 2014-01-03
 when running select date + 10 DAYS from test, I want to get
 2014-01-11 
 2014-01-12
 2014-01-13






[jira] [Updated] (SPARK-5129) make SqlContext support select date + XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Description: 
Example :
create table test (date: Date, name: String)

2014-01-01   a
2014-01-02   b
2014-01-03   c

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13




  was:
Example :
create table test (date: Date, name: String)
date        name
2014-01-01  a
2014-01-02  b
2014-01-03  c

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13





 make SqlContext support select date + XX DAYS from table  
 

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :
 create table test (date: Date, name: String)
 2014-01-01   a
 2014-01-02   b
 2014-01-03   c
 when running select date + 10 DAYS from test, I want to get
 2014-01-11 
 2014-01-12
 2014-01-13






[jira] [Updated] (SPARK-5129) make SqlContext support select date + XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Priority: Minor  (was: Major)

 make SqlContext support select date + XX DAYS from table  
 

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :






[jira] [Updated] (SPARK-5129) make SqlContext support select date + XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Description: 
Example :





 make SqlContext support select date + XX DAYS from table  
 

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9

 Example :






[jira] [Updated] (SPARK-5129) make SqlContext support select date + XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Description: 
Example :
create table test (date: Date, name: String)
date        name
2014-01-01  a
2014-01-02  b
2014-01-03  c

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13




  was:
Example :






 make SqlContext support select date + XX DAYS from table  
 

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :
 create table test (date: Date, name: String)
 date        name
 2014-01-01  a
 2014-01-02  b
 2014-01-03  c
 when running select date + 10 DAYS from test, I want to get
 2014-01-11 
 2014-01-12
 2014-01-13






[jira] [Updated] (SPARK-5129) make SqlContext support select date + XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Description: 
Example :
create table test (date: Date, name: String)
date        name
2014-01-01  a
2014-01-02  b
2014-01-03  c

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13




  was:
Example :
create table test (date: Date, name: String)
date        name
2014-01-01  a
2014-01-02  b
2014-01-03  c

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13





 make SqlContext support select date + XX DAYS from table  
 

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :
 create table test (date: Date, name: String)
 date        name
 2014-01-01  a
 2014-01-02  b
 2014-01-03  c
 when running select date + 10 DAYS from test, I want to get
 2014-01-11 
 2014-01-12
 2014-01-13






[jira] [Updated] (SPARK-5129) make SqlContext support select date +/- XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Summary: make SqlContext support select date +/- XX DAYS from table
(was: make SqlContext support select date + XX DAYS from table  )

 make SqlContext support select date +/- XX DAYS from table  
 --

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :
 create table test (date: Date)
 2014-01-01
 2014-01-02
 2014-01-03
 when running select date + 10 DAYS from test, I want to get
 2014-01-11 
 2014-01-12
 2014-01-13






[jira] [Updated] (SPARK-5129) make SqlContext support select date +/- XX DAYS from table

2015-01-07 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5129:
--
Description: 
Example :
create table test (date: Date)

2014-01-01
2014-01-02
2014-01-03

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13

and when running select date - 10 DAYS from test, I want to get

2013-12-22
2013-12-23
2013-12-24
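The requested semantics map directly onto java.time date arithmetic: "date + 10 DAYS" behaves like plusDays(10) and "date - 10 DAYS" like minusDays(10). A hedged sketch of the expected results only; this is not SqlContext's implementation:

```scala
import java.time.LocalDate

// The table's dates, per the example above.
val dates = Seq("2014-01-01", "2014-01-02", "2014-01-03").map(s => LocalDate.parse(s))

// "select date + 10 DAYS" should produce 2014-01-11, 2014-01-12, 2014-01-13 ...
val plus10 = dates.map(_.plusDays(10))

// ... and "select date - 10 DAYS" should produce 2013-12-22, 2013-12-23, 2013-12-24.
val minus10 = dates.map(_.minusDays(10))
```

Note that minusDays correctly crosses the month and year boundary back into December 2013.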



  was:
Example :
create table test (date: Date)

2014-01-01
2014-01-02
2014-01-03

when running select date + 10 DAYS from test, I want to get

2014-01-11 
2014-01-12
2014-01-13





 make SqlContext support select date +/- XX DAYS from table  
 --

 Key: SPARK-5129
 URL: https://issues.apache.org/jira/browse/SPARK-5129
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor

 Example :
 create table test (date: Date)
 2014-01-01
 2014-01-02
 2014-01-03
 when running select date + 10 DAYS from test, I want to get
 2014-01-11 
 2014-01-12
 2014-01-13
 and when running select date - 10 DAYS from test, I want to get
 2013-12-22
 2013-12-23
 2013-12-24






[jira] [Created] (SPARK-5066) Can not get all value that has the same key when reading key ordered from different Streaming.

2015-01-03 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-5066:
-

 Summary: Can not get all value that has the same key  when reading 
key ordered  from different Streaming.
 Key: SPARK-5066
 URL: https://issues.apache.org/jira/browse/SPARK-5066
 Project: Spark
  Issue Type: Bug
Reporter: DoingDone9
Priority: Critical









[jira] [Updated] (SPARK-5066) Can not get all key when reading key ordered from different Streaming.

2015-01-03 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5066:
--
Summary: Can not get all key  when reading key ordered  from different 
Streaming.  (was: Can not get all value that has the same key  when reading key 
ordered  from different Streaming.)

 Can not get all key  when reading key ordered  from different Streaming.
 

 Key: SPARK-5066
 URL: https://issues.apache.org/jira/browse/SPARK-5066
 Project: Spark
  Issue Type: Bug
Reporter: DoingDone9
Priority: Critical








[jira] [Updated] (SPARK-5066) Can not get all key that has same hashcode when reading key ordered from different Streaming.

2015-01-03 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5066:
--
Summary: Can not get all key that has same hashcode  when reading key 
ordered  from different Streaming.  (was: Can not get all key  when reading key 
ordered  from different Streaming.)

 Can not get all key that has same hashcode  when reading key ordered  from 
 different Streaming.
 ---

 Key: SPARK-5066
 URL: https://issues.apache.org/jira/browse/SPARK-5066
 Project: Spark
  Issue Type: Bug
Reporter: DoingDone9
Priority: Critical








[jira] [Updated] (SPARK-5066) Can not get all key that has same hashcode when reading key ordered from different Streaming.

2015-01-03 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5066:
--
Description: 
When spill is enabled, data ordered by hashCode is spilled to disk. We need to 
read all keys that share the same hashCode from the different tmp files when 
merging values, but the code only reads the keys with the minimum hashCode 
within each tmp file, so we cannot read all keys.

Example:
If file1 has [k1, k2, k3] and file2 has [k4, k5, k1],
and hashcode of k4 < hashcode of k5 < hashcode of k1 < hashcode of k2 < 
hashcode of k3,
we just read k1 from file1 and k4 from file2, and cannot read all copies of k1.

Code:

private val inputStreams = (Seq(sortedMap) ++ spilledMaps).map(it => it.buffered)

inputStreams.foreach { it =>
  val kcPairs = new ArrayBuffer[(K, C)]
  readNextHashCode(it, kcPairs)
  if (kcPairs.length > 0) {
    mergeHeap.enqueue(new StreamBuffer(it, kcPairs))
  }
}

private def readNextHashCode(it: BufferedIterator[(K, C)], buf: 
ArrayBuffer[(K, C)]): Unit = {
  if (it.hasNext) {
    var kc = it.next()
    buf += kc
    val minHash = hashKey(kc)
    while (it.hasNext && it.head._1.hashCode() == minHash) {
      kc = it.next()
      buf += kc
    }
  }
}
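To make the reported behavior concrete, here is a self-contained sketch of readNextHashCode, with the original hashKey helper replaced by an explicit kc._1.hashCode() (an assumption about what hashKey computes). Each call drains only the run of entries sharing the head's hashCode from a single buffered iterator, so equal keys spread across several files must be matched later by the merge heap:

```scala
import scala.collection.mutable.ArrayBuffer

// Reads the run of consecutive entries whose key shares the head entry's
// hashCode. kc._1.hashCode() stands in for the original hashKey(kc).
def readNextHashCode[K, C](it: BufferedIterator[(K, C)],
                           buf: ArrayBuffer[(K, C)]): Unit = {
  if (it.hasNext) {
    var kc = it.next()
    buf += kc
    val minHash = kc._1.hashCode()
    while (it.hasNext && it.head._1.hashCode() == minHash) {
      kc = it.next()
      buf += kc
    }
  }
}

// file1 = [k1, k2, k3] and file2 = [k4, k5, k1], as in the example above.
val file1 = Iterator(("k1", 1), ("k2", 2), ("k3", 3)).buffered
val file2 = Iterator(("k4", 4), ("k5", 5), ("k1", 6)).buffered

val buf1 = new ArrayBuffer[(String, Int)]
readNextHashCode(file1, buf1) // drains only file1's first hash run: the "k1" entry

val buf2 = new ArrayBuffer[(String, Int)]
readNextHashCode(file2, buf2) // drains only file2's first hash run: the "k4" entry
```

After these two calls, file2's copy of k1 is still buried behind k5 in its iterator, which is the situation the report describes: the k1 entries from the two files are not visible at the same time.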



 Can not get all key that has same hashcode  when reading key ordered  from 
 different Streaming.
 ---

 Key: SPARK-5066
 URL: https://issues.apache.org/jira/browse/SPARK-5066
 Project: Spark
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: DoingDone9
Priority: Critical

 When spill is enabled, data ordered by hashCode is spilled to disk. We need to 
 read all keys that share the same hashCode from the different tmp files when 
 merging values, but the code only reads the keys with the minimum hashCode 
 within each tmp file, so we cannot read all keys.
 Example:
 If file1 has [k1, k2, k3] and file2 has [k4, k5, k1],
 and hashcode of k4 < hashcode of k5 < hashcode of k1 < hashcode of k2 < 
 hashcode of k3,
 we just read k1 from file1 and k4 from file2, and cannot read all copies of k1.
 Code:
 private val inputStreams = (Seq(sortedMap) ++ spilledMaps).map(it => 
 it.buffered)
 inputStreams.foreach { it =>
   val kcPairs = new ArrayBuffer[(K, C)]
   readNextHashCode(it, kcPairs)
   if (kcPairs.length > 0) {
     mergeHeap.enqueue(new StreamBuffer(it, kcPairs))
   }
 }
 private def readNextHashCode(it: BufferedIterator[(K, C)], buf: 
 ArrayBuffer[(K, C)]): Unit = {
   if (it.hasNext) {
     var kc = it.next()
     buf += kc
     val minHash = hashKey(kc)
     while (it.hasNext && it.head._1.hashCode() == minHash) {
       kc = it.next()
       buf += kc
     }
   }
 }






[jira] [Updated] (SPARK-5066) Can not get all key that has same hashcode when reading key ordered from different Streaming.

2015-01-03 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-5066:
--
Affects Version/s: 1.2.0

 Can not get all key that has same hashcode  when reading key ordered  from 
 different Streaming.
 ---

 Key: SPARK-5066
 URL: https://issues.apache.org/jira/browse/SPARK-5066
 Project: Spark
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: DoingDone9
Priority: Critical

 When spill is enabled, data ordered by hashCode is spilled to disk. We need to 
 read all keys that share the same hashCode from the different tmp files when 
 merging values, but the code only reads the keys with the minimum hashCode 
 within each tmp file, so we cannot read all keys.
 Example:
 If file1 has [k1, k2, k3] and file2 has [k4, k5, k1],
 and hashcode of k4 < hashcode of k5 < hashcode of k1 < hashcode of k2 < 
 hashcode of k3,
 we just read k1 from file1 and k4 from file2, and cannot read all copies of k1.
 Code:
 private val inputStreams = (Seq(sortedMap) ++ spilledMaps).map(it => 
 it.buffered)
 inputStreams.foreach { it =>
   val kcPairs = new ArrayBuffer[(K, C)]
   readNextHashCode(it, kcPairs)
   if (kcPairs.length > 0) {
     mergeHeap.enqueue(new StreamBuffer(it, kcPairs))
   }
 }
 private def readNextHashCode(it: BufferedIterator[(K, C)], buf: 
 ArrayBuffer[(K, C)]): Unit = {
   if (it.hasNext) {
     var kc = it.next()
     buf += kc
     val minHash = hashKey(kc)
     while (it.hasNext && it.head._1.hashCode() == minHash) {
       kc = it.next()
       buf += kc
     }
   }
 }






[jira] [Closed] (SPARK-4635) Delete the val that is never used in execute() of HashOuterJoin.

2014-11-28 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 closed SPARK-4635.
-
Resolution: Not a Problem

 Delete the val that is never used in execute() of HashOuterJoin.
 --

 Key: SPARK-4635
 URL: https://issues.apache.org/jira/browse/SPARK-4635
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 The val boundCondition is created in execute(), but it is never used in 
 execute().






[jira] [Created] (SPARK-4635) Delete the val that is never used in HashOuterJoin.

2014-11-26 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-4635:
-

 Summary: Delete the val that is never used in HashOuterJoin.
 Key: SPARK-4635
 URL: https://issues.apache.org/jira/browse/SPARK-4635
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor









[jira] [Updated] (SPARK-4635) Delete the val that is never used in HashOuterJoin.

2014-11-26 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4635:
--
Description: The val boundCondition is created in execute(), but it is never 
used in execute();

 Delete the val that is never used in HashOuterJoin.
 

 Key: SPARK-4635
 URL: https://issues.apache.org/jira/browse/SPARK-4635
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 The val boundCondition is created in execute(), but it is never used in 
 execute();






[jira] [Updated] (SPARK-4635) Delete the val that never used in execute() of HashOuterJoin.

2014-11-26 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4635:
--
Summary: Delete the val that never used in  execute() of HashOuterJoin.  
(was: Delete the val that never used in HashOuterJoin.)

 Delete the val that never used in  execute() of HashOuterJoin.
 --

 Key: SPARK-4635
 URL: https://issues.apache.org/jira/browse/SPARK-4635
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 The val boundCondition is created in execute(), but it is never used in 
 execute();






[jira] [Updated] (SPARK-4635) Delete the val that never used in execute() of HashOuterJoin.

2014-11-26 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4635:
--
Description: The val boundCondition is created in execute(),but it never 
be used in execute().  (was: The val boundCondition is created in 
execute(),but it never be used in execute();)

 Delete the val that never used in  execute() of HashOuterJoin.
 --

 Key: SPARK-4635
 URL: https://issues.apache.org/jira/browse/SPARK-4635
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 The val boundCondition is created in execute(), but it is never used in 
 execute().






[jira] [Closed] (SPARK-4339) Make fixedPoint Configurable in Analyzer

2014-11-18 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 closed SPARK-4339.
-
Resolution: Not a Problem

 Make fixedPoint Configurable in Analyzer
 

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable. 
 But it is a constant in Analyzer.scala, as in val fixedPoint = 
 FixedPoint(100).
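
One way to make such a limit configurable is to read it from a configuration
map with the old constant as the fallback. This is a minimal sketch under
assumed names; the key spark.sql.analyzer.maxIterations is hypothetical, not
a real Spark setting:

```scala
object FixedPointConfig {
  // Default mirrors the hard-coded FixedPoint(100).
  val DefaultMaxIterations = 100

  // Look up the limit in a config map, falling back to the old constant.
  def maxIterations(conf: Map[String, String]): Int =
    conf.get("spark.sql.analyzer.maxIterations")
      .map(_.toInt)
      .getOrElse(DefaultMaxIterations)
}
```

With no key set the behavior is identical to the constant; setting the key
overrides it without touching the analyzer code.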






[jira] [Created] (SPARK-4339) Use configuration instead of constant

2014-11-11 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-4339:
-

 Summary: Use configuration instead of constant
 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor









[jira] [Updated] (SPARK-4339) Use configuration instead of constant

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Description: fixedPoint  limits the max number of iterations,it should be 
Configurable.

 Use configuration instead of constant
 -

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable.






[jira] [Updated] (SPARK-4339) Make fixedPoint Configurable in Analyzer.scala

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Summary: Make fixedPoint Configurable in Analyzer.scala  (was: Use 
configuration instead of constant)

 Make fixedPoint Configurable in Analyzer.scala
 --

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable.






[jira] [Updated] (SPARK-4339) Make fixedPoint Configurable in Analyzer.scala

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Description: 
fixedPoint  limits the max number of iterations,it should be Configurable.But 
it is a contant in analyzer.scala


  was:fixedPoint  limits the max number of iterations,it should be Configurable.


 Make fixedPoint Configurable in Analyzer.scala
 --

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable. 
 But it is a constant in analyzer.scala.






[jira] [Updated] (SPARK-4339) Make fixedPoint Configurable in Analyzer

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Summary: Make fixedPoint Configurable in Analyzer  (was: Make fixedPoint 
Configurable in Analyzer.scala)

 Make fixedPoint Configurable in Analyzer
 

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable. 
 But it is a constant in analyzer.scala.






[jira] [Updated] (SPARK-4339) Make fixedPoint Configurable in Analyzer

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Description: 
fixedPoint  limits the max number of iterations,it should be Configurable.But 
it is a contant in analyzer.scala,like that val fixedPoint = FixedPoint(100).


  was:
fixedPoint  limits the max number of iterations,it should be Configurable.But 
it is a contant in analyzer.scala,like that val fixedPoint = FixedPoint(100).



 Make fixedPoint Configurable in Analyzer
 

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable. 
 But it is a constant in analyzer.scala, as in val fixedPoint = 
 FixedPoint(100).






[jira] [Updated] (SPARK-4339) Make fixedPoint Configurable in Analyzer

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Description: 
fixedPoint  limits the max number of iterations,it should be Configurable.
But it is a contant in analyzer.scala,like that val fixedPoint = 
FixedPoint(100).


  was:
fixedPoint  limits the max number of iterations,it should be Configurable.But 
it is a contant in analyzer.scala,like that val fixedPoint = FixedPoint(100).



 Make fixedPoint Configurable in Analyzer
 

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable. 
 But it is a constant in analyzer.scala, as in val fixedPoint = 
 FixedPoint(100).






[jira] [Updated] (SPARK-4339) Make fixedPoint Configurable in Analyzer

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4339:
--
Description: 
fixedPoint  limits the max number of iterations,it should be Configurable.But 
it is a contant in analyzer.scala,like that val fixedPoint = FixedPoint(100).


  was:
fixedPoint  limits the max number of iterations,it should be Configurable.But 
it is a contant in analyzer.scala



 Make fixedPoint Configurable in Analyzer
 

 Key: SPARK-4339
 URL: https://issues.apache.org/jira/browse/SPARK-4339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 fixedPoint limits the max number of iterations; it should be configurable. 
 But it is a constant in analyzer.scala, as in val fixedPoint = FixedPoint(100).






[jira] [Created] (SPARK-4353) Delete the val that never used

2014-11-11 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-4353:
-

 Summary: Delete the val that never used
 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Wish
Reporter: DoingDone9
Priority: Minor









[jira] [Updated] (SPARK-4353) Delete the val that never used

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4353:
--
Issue Type: Improvement  (was: Wish)

 Delete the val that never used
 --

 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Improvement
Reporter: DoingDone9
Priority: Minor








[jira] [Updated] (SPARK-4353) Delete the val that never used

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4353:
--
Component/s: SQL

 Delete the val that never used
 --

 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 dbName in Catalog is never used, as in val (dbName, tblName) = 
 processDatabaseAndTableName(databaseName, tableName); tables -= tblName. I 
 think it should be deleted; it should be val tblName = 
 processDatabaseAndTableName(databaseName, tableName)._2.
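
The suggested change can be sketched like this; processDatabaseAndTableName is
stubbed here with a hypothetical body, since only its tuple return type
matters for the point being made:

```scala
object CatalogExample {
  // Stand-in for Catalog.processDatabaseAndTableName: returns the
  // normalized (database, table) pair. The body here is hypothetical.
  def processDatabaseAndTableName(db: String, table: String): (String, String) =
    (db.toLowerCase, table.toLowerCase)

  // Before: dbName is bound by the destructuring val but never used.
  def unregisterBefore(tables: scala.collection.mutable.Set[String],
                       db: String, table: String): Unit = {
    val (dbName, tblName) = processDatabaseAndTableName(db, table)
    tables -= tblName
  }

  // After: keep only the second tuple element, as the issue suggests.
  def unregisterAfter(tables: scala.collection.mutable.Set[String],
                      db: String, table: String): Unit = {
    val tblName = processDatabaseAndTableName(db, table)._2
    tables -= tblName
  }
}
```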






[jira] [Updated] (SPARK-4353) Delete the val that never used

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4353:
--
Description: dbName in Catalog never used, like that val (dbName, 
tblName) = processDatabaseAndTableName(databaseName, tableName); tables -= 
tblName. i think it should be deleted,it should be val tblName = 
processDatabaseAndTableName(databaseName, tableName)._2

 Delete the val that never used
 --

 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 dbName in Catalog is never used, as in val (dbName, tblName) = 
 processDatabaseAndTableName(databaseName, tableName); tables -= tblName. I 
 think it should be deleted; it should be val tblName = 
 processDatabaseAndTableName(databaseName, tableName)._2.






[jira] [Updated] (SPARK-4353) Delete the val that never used in Catalog

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4353:
--
Summary: Delete the val that never used in Catalog  (was: Delete the val 
that never used)

 Delete the val that never used in Catalog
 -

 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 dbName in Catalog is never used, as in val (dbName, tblName) = 
 processDatabaseAndTableName(databaseName, tableName); tables -= tblName. I 
 think it should be deleted; it should be val tblName = 
 processDatabaseAndTableName(databaseName, tableName)._2.






[jira] [Updated] (SPARK-4353) Delete the val that never used in Catalog

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4353:
--
Description: 
dbName in Catalog never used, like that 
{
   val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
   tables -= tblName
}
I think it should be deleted,it should be val tblName = 
processDatabaseAndTableName(databaseName, tableName)._2

  was:dbName in Catalog never used, like that val (dbName, tblName) = 
processDatabaseAndTableName(databaseName, tableName); tables -= tblName. i 
think it should be deleted,it should be val tblName = 
processDatabaseAndTableName(databaseName, tableName)._2


 Delete the val that never used in Catalog
 -

 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 dbName in Catalog is never used, as in 
 {
   val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
   tables -= tblName
 }
 I think it should be deleted; it should be val tblName = 
 processDatabaseAndTableName(databaseName, tableName)._2.






[jira] [Updated] (SPARK-4353) Delete the val that never used in Catalog

2014-11-11 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4353:
--
Description: 
dbName in Catalog never used, like that 
{
   val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName);
   tables -= tblName
}
I think it should be deleted,it should be val tblName = 
processDatabaseAndTableName(databaseName, tableName)._2

  was:
dbName in Catalog never used, like that 
{
   val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName)
   tables -= tblName
}
I think it should be deleted,it should be val tblName = 
processDatabaseAndTableName(databaseName, tableName)._2


 Delete the val that never used in Catalog
 -

 Key: SPARK-4353
 URL: https://issues.apache.org/jira/browse/SPARK-4353
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Reporter: DoingDone9
Priority: Minor

 dbName in Catalog is never used, as in 
 {
   val (dbName, tblName) = processDatabaseAndTableName(databaseName, tableName);
   tables -= tblName
 }
 I think it should be deleted; it should be val tblName = 
 processDatabaseAndTableName(databaseName, tableName)._2.






[jira] [Created] (SPARK-4332) RuleExecutor breaks, num of iteration should be ${iteration -1} not ${iteration} .Log looks like Fixed point reached for batch ${batch.name} after 3 iterations., but it

2014-11-10 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-4332:
-

 Summary: RuleExecutor breaks, num of iteration should be 
${iteration -1} not ${iteration} .Log looks like Fixed point reached for batch 
${batch.name} after 3 iterations., but it did 2 iterations really! 
 Key: SPARK-4332
 URL: https://issues.apache.org/jira/browse/SPARK-4332
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor









[jira] [Updated] (SPARK-4332) RuleExecutor breaks, num of iteration should be ${iteration -1} not ${iteration} .Log looks like Fixed point reached for batch ${batch.name} after 3 iterations., but it

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4332:
--
Description: RuleExecutor breaks, num of iteration should be ${iteration 
-1} not ${iteration} .Log looks like Fixed point reached for batch ${batch.nam

 RuleExecutor breaks, num of iteration should be ${iteration -1} not 
 ${iteration} .Log looks like Fixed point reached for batch ${batch.name} 
 after 3 iterations., but it did 2 iterations really! 
 

 Key: SPARK-4332
 URL: https://issues.apache.org/jira/browse/SPARK-4332
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks, num of iteration should be ${iteration -1} not 
 ${iteration} .Log looks like Fixed point reached for batch ${batch.nam






[jira] [Updated] (SPARK-4332) RuleExecutor breaks, num of iteration should be ${iteration -1} not ${iteration} .Log looks like Fixed point reached for batch ${batch.name} after 3 iterations., but it

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4332:
--
Description: RuleExecutor breaks, num of iteration should be ${iteration 
-1} not ${iteration} .Log looks like Fixed point reached for batch 
${batch.name} after 3 iterations., but it did 2 iterations really!  (was: 
RuleExecutor breaks, num of iteration should be ${iteration -1} not 
${iteration} .Log looks like Fixed point reached for batch ${batch.nam)

 RuleExecutor breaks, num of iteration should be ${iteration -1} not 
 ${iteration} .Log looks like Fixed point reached for batch ${batch.name} 
 after 3 iterations., but it did 2 iterations really! 
 

 Key: SPARK-4332
 URL: https://issues.apache.org/jira/browse/SPARK-4332
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be ${iteration -1}, not 
 ${iteration}. The log reads "Fixed point reached for batch ${batch.name} 
 after 3 iterations.", but it really did 2 iterations!






[jira] [Closed] (SPARK-4332) RuleExecutor breaks, num of iteration should be ${iteration -1} not ${iteration} .Log looks like Fixed point reached for batch ${batch.name} after 3 iterations., but it

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 closed SPARK-4332.
-
Resolution: Invalid

 RuleExecutor breaks, num of iteration should be ${iteration -1} not 
 ${iteration} .Log looks like Fixed point reached for batch ${batch.name} 
 after 3 iterations., but it did 2 iterations really! 
 

 Key: SPARK-4332
 URL: https://issues.apache.org/jira/browse/SPARK-4332
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be ${iteration -1}, not 
 ${iteration}. The log reads "Fixed point reached for batch ${batch.name} 
 after 3 iterations.", but it really did 2 iterations!






[jira] [Updated] (SPARK-4333) change num of iteration printed in trace log from ${iteration} to ${iteration - 1}

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4333:
--
Description: RuleExecutor breaks, num of iteration should be ${iteration 
-1} not ${iteration} .Log looks like Fixed point reached for batch 
${batch.name} after 3 iterations., but it did 2 iterations really!

 change num of iteration printed in trace log from ${iteration} to ${iteration 
 - 1}
 --

 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be ${iteration -1}, not 
 ${iteration}. The log reads "Fixed point reached for batch ${batch.name} 
 after 3 iterations.", but it really did 2 iterations!
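
The off-by-one can be reproduced with a minimal fixed-point loop (illustrative
only, not the real RuleExecutor): the counter is incremented before the
convergence check, so when the loop exits it is one past the number of passes
actually run.

```scala
object FixedPointLoop {
  // Applies `step` until the value stops changing. Returns the final
  // value and the counter as the (buggy) log would report it.
  def run(start: Int, step: Int => Int): (Int, Int) = {
    var iteration = 1
    var cur = start
    var continue = true
    while (continue) {
      val next = step(cur)
      iteration += 1          // bumped before the convergence check...
      if (next == cur) continue = false
      cur = next
    }
    // ...so the log should print iteration - 1, the passes actually run.
    (cur, iteration)
  }
}
```

Starting from 2 with step x => math.min(x + 1, 3), the loop runs two passes
but exits with iteration == 3, matching the "after 3 iterations ... did 2
iterations really" report.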






[jira] [Created] (SPARK-4333) change num of iteration printed in trace log from ${iteration} to ${iteration - 1}

2014-11-10 Thread DoingDone9 (JIRA)
DoingDone9 created SPARK-4333:
-

 Summary: change num of iteration printed in trace log from 
${iteration} to ${iteration - 1}
 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor









[jira] [Updated] (SPARK-4333) change num of iteration printed in trace log from ${iteration} to ${iteration - 1}

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4333:
--
Description: RuleExecutor breaks, num of iteration should be {iteration -1} 
not {iteration} .Log looks like Fixed point reached for batch {batch.name} 
after 3 iterations., but it did 2 iterations really!  (was: RuleExecutor 
breaks, num of iteration should be ${iteration -1} not ${iteration} .Log looks 
like Fixed point reached for batch ${batch.name} after 3 iterations., but it 
did 2 iterations really!)

 change num of iteration printed in trace log from ${iteration} to ${iteration 
 - 1}
 --

 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be {iteration -1}, not 
 {iteration}. The log reads "Fixed point reached for batch {batch.name} after 
 3 iterations.", but it really did 2 iterations!






[jira] [Updated] (SPARK-4333) change num of iteration printed in trace log from ${iteration} to ${iteration - 1}

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4333:
--
Description: 
RuleExecutor breaks, num of iteration should be ${iteration -1} not {iteration} 
.
Log looks like Fixed point reached for batch ${batch.name} after 3 
iterations., but it did 2 iterations really!

  was:RuleExecutor breaks, num of iteration should be {iteration -1} not 
{iteration} .Log looks like Fixed point reached for batch {batch.name} after 3 
iterations., but it did 2 iterations really!


 change num of iteration printed in trace log from ${iteration} to ${iteration 
 - 1}
 --

 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be ${iteration -1}, not 
 {iteration}. 
 The log reads "Fixed point reached for batch ${batch.name} after 3 
 iterations.", but it really did 2 iterations!






[jira] [Updated] (SPARK-4333) change num of iteration printed in trace log from ${iteration} to ${iteration - 1}

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4333:
--
Description: 
RuleExecutor breaks, num of iteration should be $(iteration -1) not (iteration) 
.
Log looks like Fixed point reached for batch $(batch.name) after 3 
iterations., but it did 2 iterations really!

  was:
RuleExecutor breaks, num of iteration should be ${iteration -1} not {iteration} 
.
Log looks like Fixed point reached for batch ${batch.name} after 3 
iterations., but it did 2 iterations really!


 change num of iteration printed in trace log from ${iteration} to ${iteration 
 - 1}
 --

 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be $(iteration -1), not 
 (iteration). 
 The log reads "Fixed point reached for batch $(batch.name) after 3 
 iterations.", but it really did 2 iterations!






[jira] [Updated] (SPARK-4333) Correctly log number of iterations in RuleExecutor

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4333:
--
Description: 
RuleExecutor breaks, num of iteration should be $(iteration -1) not 
$(iteration) .
Log looks like Fixed point reached for batch $(batch.name) after 3 
iterations., but it did 2 iterations really!

  was:
RuleExecutor breaks, num of iteration should be $(iteration -1) not (iteration) 
.
Log looks like Fixed point reached for batch $(batch.name) after 3 
iterations., but it did 2 iterations really!


 Correctly log number of iterations in RuleExecutor
 --

 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be $(iteration -1), not 
 $(iteration). 
 The log reads "Fixed point reached for batch $(batch.name) after 3 
 iterations.", but it really did 2 iterations!






[jira] [Updated] (SPARK-4333) Correctly log number of iterations in RuleExecutor

2014-11-10 Thread DoingDone9 (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DoingDone9 updated SPARK-4333:
--
Summary: Correctly log number of iterations in RuleExecutor  (was: change 
num of iteration printed in trace log from ${iteration} to ${iteration - 1})

 Correctly log number of iterations in RuleExecutor
 --

 Key: SPARK-4333
 URL: https://issues.apache.org/jira/browse/SPARK-4333
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0
Reporter: DoingDone9
Priority: Minor

 RuleExecutor breaks: the number of iterations should be $(iteration -1), not 
 (iteration). 
 The log reads "Fixed point reached for batch $(batch.name) after 3 
 iterations.", but it really did 2 iterations!


