[jira] Updated: (HIVE-308) UNION ALL should create different destination directories for different operands

2009-03-08 Thread Zheng Shao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Shao updated HIVE-308:


Attachment: HIVE-308.1.patch

Fixed the bug (the second case) and added a test case.

 UNION ALL should create different destination directories for different 
 operands
 

 Key: HIVE-308
 URL: https://issues.apache.org/jira/browse/HIVE-308
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.3.0
Reporter: Zheng Shao
Priority: Blocker
 Attachments: HIVE-308.1.patch


 The following query hangs:
 {code} 
 select * from (select 1 from zshao_lazy union all select 2 from zshao_lazy) a;
 {code} 
 The following query produces wrong results (one map-reduce job overwrites, 
 or cannot overwrite, the result of the other):
 {code} 
 select * from (select 1 as id from zshao_lazy cluster by id union all select 
 2 as id from zshao_meta) a;
 {code} 
 The reason for both is that the destination directories of the file sink 
 operators conflict with each other.
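 A toy illustration in plain Java (not Hive code) of the second failure mode described above: when two jobs share one destination path, whichever writes last clobbers the other, so only one operand's rows survive. The path below is made up.
 {code}
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SharedDestDemo {
  public static void main(String[] args) throws IOException {
    // Hypothetical shared file-sink destination used by both UNION ALL operands.
    String sharedDest = "/tmp/union_dest/000000_0";
    Files.createDirectories(Paths.get("/tmp/union_dest"));

    try (FileWriter operand1 = new FileWriter(sharedDest)) {
      operand1.write("1\n");   // first map-reduce job writes its rows
    }
    try (FileWriter operand2 = new FileWriter(sharedDest)) {
      operand2.write("2\n");   // second job reuses the same path and overwrites them
    }

    // Prints [2]: the rows from the first operand are gone.
    System.out.println(Files.readAllLines(Paths.get(sharedDest)));
  }
}
 {code}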

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



JIRA_HIVE-308.1.patch_UNIT_TEST_FAILED

2009-03-08 Thread Murli Varadachari

ERROR: UNIT TEST using PATCH HIVE-308.1.patch FAILED!!

[junit] Test org.apache.hadoop.hive.cli.TestCliDriver FAILED
BUILD FAILED


[jira] Updated: (HIVE-286) testCliDriver_udf7 fails

2009-03-08 Thread Johan Oskarsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johan Oskarsson updated HIVE-286:
-

  Resolution: Fixed
Release Note: HIVE-286. Use round(xxx,12) to make sure there is no 
precision matching problem in testCliDriver_udf7. (zshao via johan)   
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

Committed revision 751398. Thanks Zheng!
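For context, a small standalone sketch (not the test itself) of the precision-matching problem that round(xxx,12) works around: double arithmetic can differ in the last couple of digits across platforms and JVMs, so the test output is only stable after rounding. The helper below merely approximates the effect of round(v, 12) in the test query.
{code}
public class RoundCompare {
  // Round to 12 decimal places, roughly what round(v, 12) does in the test query.
  static double round12(double v) {
    return Math.round(v * 1e12) / 1e12;
  }

  public static void main(String[] args) {
    double a = Math.sqrt(2.0) * Math.sqrt(2.0);  // mathematically 2.0
    System.out.println(a);           // 2.0000000000000004 -- brittle to diff against a golden file
    System.out.println(round12(a));  // 2.0 -- stable across platforms
  }
}
{code}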

 testCliDriver_udf7 fails
 

 Key: HIVE-286
 URL: https://issues.apache.org/jira/browse/HIVE-286
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
 Environment: Ubuntu, Java6, Hadoop 0.17, 0.18 and 0.19
Reporter: Johan Oskarsson
Assignee: Zheng Shao
Priority: Blocker
 Fix For: 0.3.0

 Attachments: hive-286.1.patch, HIVE-286.2.patch


 The org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf7 test fails.
 See this url for more information: 
 http://hudson.zones.apache.org/hudson/job/Hive-trunk-h0.19/lastBuild/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_udf7/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Please check grammar for TIMESTAMP

2009-03-08 Thread Shyam Sarkar
Hi Zheng and others,

Could you please check Hive.g grammar changes for TIMESTAMP (See the comments 
with // Change by Shyam)?
Please review and let me know your feedback. I shall write a short design doc 
later for review after these short exchanges.

Thanks,
shyam_sar...@yahoo.com


  grammar Hive;

options
{
output=AST;
ASTLabelType=CommonTree;
backtrack=true;
k=1;
}
 
tokens {
TOK_INSERT;
TOK_QUERY;
TOK_SELECT;
TOK_SELECTDI;
TOK_SELEXPR;
TOK_FROM;
TOK_TAB;
TOK_PARTSPEC;
TOK_PARTVAL;
TOK_DIR;
TOK_LOCAL_DIR;
TOK_TABREF;
TOK_SUBQUERY;
TOK_DESTINATION;
TOK_ALLCOLREF;
TOK_COLREF;
TOK_FUNCTION;
TOK_FUNCTIONDI;
TOK_WHERE;
TOK_OP_EQ;
TOK_OP_NE;
TOK_OP_LE;
TOK_OP_LT;
TOK_OP_GE;
TOK_OP_GT;
TOK_OP_DIV;
TOK_OP_ADD;
TOK_OP_SUB;
TOK_OP_MUL;
TOK_OP_MOD;
TOK_OP_BITAND;
TOK_OP_BITNOT;
TOK_OP_BITOR;
TOK_OP_BITXOR;
TOK_OP_AND;
TOK_OP_OR;
TOK_OP_NOT;
TOK_OP_LIKE;
TOK_TRUE;
TOK_FALSE;
TOK_TRANSFORM;
TOK_EXPLIST;
TOK_ALIASLIST;
TOK_GROUPBY;
TOK_ORDERBY;
TOK_CLUSTERBY;
TOK_DISTRIBUTEBY;
TOK_SORTBY;
TOK_UNION;
TOK_JOIN;
TOK_LEFTOUTERJOIN;
TOK_RIGHTOUTERJOIN;
TOK_FULLOUTERJOIN;
TOK_LOAD;
TOK_NULL;
TOK_ISNULL;
TOK_ISNOTNULL;
TOK_TINYINT;
TOK_SMALLINT;
TOK_INT;
TOK_BIGINT;
TOK_BOOLEAN;
TOK_FLOAT;
TOK_DOUBLE;
TOK_DATE;
TOK_DATETIME;
TOK_TIMESTAMP;
TOK_STRING;
TOK_LIST;
TOK_MAP;
TOK_CREATETABLE;
TOK_DESCTABLE;
TOK_ALTERTABLE_RENAME;
TOK_ALTERTABLE_ADDCOLS;
TOK_ALTERTABLE_REPLACECOLS;
TOK_ALTERTABLE_ADDPARTS;
TOK_ALTERTABLE_DROPPARTS;
TOK_ALTERTABLE_SERDEPROPERTIES;
TOK_ALTERTABLE_SERIALIZER;
TOK_ALTERTABLE_PROPERTIES;
TOK_MSCK;
TOK_SHOWTABLES;
TOK_SHOWPARTITIONS;
TOK_CREATEEXTTABLE;
TOK_DROPTABLE;
TOK_TABCOLLIST;
TOK_TABCOL;
TOK_TABLECOMMENT;
TOK_TABLEPARTCOLS;
TOK_TABLEBUCKETS;
TOK_TABLEROWFORMAT;
TOK_TABLEROWFORMATFIELD;
TOK_TABLEROWFORMATCOLLITEMS;
TOK_TABLEROWFORMATMAPKEYS;
TOK_TABLEROWFORMATLINES;
TOK_TBLSEQUENCEFILE;
TOK_TBLTEXTFILE;
TOK_TABLEFILEFORMAT;
TOK_TABCOLNAME;
TOK_TABLELOCATION;
TOK_PARTITIONLOCATION;
TOK_TABLESAMPLE;
TOK_TMP_FILE;
TOK_TABSORTCOLNAMEASC;
TOK_TABSORTCOLNAMEDESC;
TOK_CHARSETLITERAL;
TOK_CREATEFUNCTION;
TOK_EXPLAIN;
TOK_TABLESERIALIZER;
TOK_TABLEPROPERTIES;
TOK_TABLEPROPLIST;
TOK_TABTYPE;
TOK_LIMIT;
TOK_TABLEPROPERTY;
TOK_IFNOTEXISTS;
}


// Package headers
@header {
package org.apache.hadoop.hive.ql.parse;
}
@lexer::header {package org.apache.hadoop.hive.ql.parse;}


@members { 
  Stack<String> msgs = new Stack<String>();
}

@rulecatch {
catch (RecognitionException e) {
 reportError(e);
  throw e;
}
}
 
// starting rule
statement
: explainStatement EOF
| execStatement EOF
;

explainStatement
@init { msgs.push("explain statement"); }
@after { msgs.pop(); }
: KW_EXPLAIN (isExtended=KW_EXTENDED)? execStatement -> ^(TOK_EXPLAIN 
execStatement $isExtended?)
;

execStatement
@init { msgs.push("statement"); }
@after { msgs.pop(); }
: queryStatementExpression
| loadStatement
| ddlStatement
;

loadStatement
@init { msgs.push("load statement"); }
@after { msgs.pop(); }
: KW_LOAD KW_DATA (islocal=KW_LOCAL)? KW_INPATH (path=StringLiteral) 
(isoverwrite=KW_OVERWRITE)? KW_INTO KW_TABLE (tab=tabName) 
-> ^(TOK_LOAD $path $tab $islocal? $isoverwrite?)
;

ddlStatement
@init { msgs.push("ddl statement"); }
@after { msgs.pop(); }
: createStatement
| dropStatement
| alterStatement
| descStatement
| showStatement
| metastoreCheck
| createFunctionStatement
;

ifNotExists
@init { msgs.push("if not exists clause"); }
@after { msgs.pop(); }
: KW_IF KW_NOT KW_EXISTS
-> ^(TOK_IFNOTEXISTS)
;

createStatement
@init { msgs.push("create statement"); }
@after { msgs.pop(); }
: KW_CREATE (ext=KW_EXTERNAL)? KW_TABLE ifNotExists? name=Identifier 
(LPAREN columnNameTypeList RPAREN)? tableComment? tablePartition? tableBuckets? 
tableRowFormat? tableFileFormat? tableLocation?
-> {$ext == null}? ^(TOK_CREATETABLE $name ifNotExists? columnNameTypeList? 
tableComment? tablePartition? tableBuckets? tableRowFormat? tableFileFormat? 
tableLocation?)
-> ^(TOK_CREATEEXTTABLE $name ifNotExists? 
columnNameTypeList? tableComment? tablePartition? tableBuckets? tableRowFormat? 
tableFileFormat? tableLocation?)
;

dropStatement
@init { msgs.push("drop statement"); }
@after { msgs.pop(); }
: KW_DROP KW_TABLE Identifier  -> ^(TOK_DROPTABLE Identifier)
;

alterStatement
@init { msgs.push("alter statement"); }
@after { msgs.pop(); }
: alterStatementRename
| alterStatementAddCol
| alterStatementDropPartitions
| alterStatementAddPartitions
| alterStatementProperties
| alterStatementSerdeProperties
;

alterStatementRename
@init { msgs.push("rename statement"); }
@after { msgs.pop(); }
: KW_ALTER KW_TABLE oldName=Identifier KW_RENAME KW_TO newName=Identifier 
-> ^(TOK_ALTERTABLE_RENAME $oldName $newName)
;

alterStatementAddCol
@init { msgs.push("add column statement"); }
@after { msgs.pop(); }
: KW_ALTER KW_TABLE Identifier (add=KW_ADD | replace=KW_REPLACE) KW_COLUMNS 
LPAREN 

Re: Please check grammar for TIMESTAMP

2009-03-08 Thread Tim Hawkins
Is there going to be any timezone support? I.e., will the timestamp be  
stored in a recognised standard such as UTC regardless of the actual  
time submitted? Given that Hive/Hadoop tend to be used for log  
processing and reporting in many use cases, understanding and  
normalising time-zone details may be necessary, especially where you  
may have data sourced from multiple time zones.


It may be worth considering this issue now as retrofitting it later  
may cause problems.
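A minimal sketch of the kind of normalisation being suggested, in plain Java (illustrative only, not proposed Hive code; the format and zone ids are just examples): parse the timestamp in its source time zone, then store and print it in UTC so rows from different zones line up.
{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ToUtc {
  public static void main(String[] args) throws Exception {
    // The zone the log line was produced in (example only).
    SimpleDateFormat source = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    source.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));
    Date d = source.parse("2009-03-08 14:15:00");   // local wall-clock time

    // Normalised form to store and compare on.
    SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss 'UTC'");
    utc.setTimeZone(TimeZone.getTimeZone("UTC"));
    System.out.println(utc.format(d));              // 2009-03-08 21:15:00 UTC
  }
}
{code}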


On 8 Mar 2009, at 14:15, Shyam Sarkar wrote:


Hi Zheng and others,

Could you please check Hive.g grammar changes for TIMESTAMP (See the  
comments with // Change by Shyam)?
Please review and let me know your feedback. I shall write a short  
design doc later for review after these short exchanges.


Thanks,
shyam_sar...@yahoo.com


[Full Hive.g grammar quoted here, identical to the original message above; snipped.]

Re: Please check grammar for TIMESTAMP

2009-03-08 Thread Shyam Sarkar

Yes, there will be timezone support. We shall follow the MySQL 6.0 TIMESTAMP 
specification:

http://dev.mysql.com/doc/refman/6.0/en/timestamp.html

Thanks,
shyam_sar...@yahoo.com


--- On Sun, 3/8/09, Tim Hawkins tim.hawk...@bejant.com wrote:

 From: Tim Hawkins tim.hawk...@bejant.com
 Subject: Re: Please check grammar for TIMESTAMP
 To: hive-dev@hadoop.apache.org
 Date: Sunday, March 8, 2009, 7:22 AM
 Is there going to be any timezone support? I.e., will the
 timestamp be stored in a recognised standard such as UTC
 regardless of the actual time submitted? Given that
 hive/hadoop tend to be used for log processing and reporting
 in many use cases, understanding and normalising time-zone
 details may be necessary, especially where you may have data
 sourced from multiple time zones.
 
 It may be worth considering this issue now as retrofitting
 it later may cause problems.
 
 On 8 Mar 2009, at 14:15, Shyam Sarkar wrote:
 
  Hi Zheng and others,
  
  Could you please check Hive.g grammar changes for
 TIMESTAMP (See the comments with // Change by Shyam)?
  Please review and let me know your feedback. I shall
 write a short design doc later for review after these short
 exchanges.
  
  Thanks,
  shyam_sar...@yahoo.com
  
 


  


[jira] Commented: (HIVE-308) UNION ALL should create different destination directories for different operands

2009-03-08 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12679992#action_12679992
 ] 

Namit Jain commented on HIVE-308:
-

I saw your other mail just now - if you are in a hurry, go ahead.
The changes look good.


+1


 UNION ALL should create different destination directories for different 
 operands
 

 Key: HIVE-308
 URL: https://issues.apache.org/jira/browse/HIVE-308
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.3.0
Reporter: Zheng Shao
Priority: Blocker
 Attachments: HIVE-308.1.patch


 The following query hangs:
 {code} 
 select * from (select 1 from zshao_lazy union all select 2 from zshao_lazy) a;
 {code} 
 The following query produces wrong results (one map-reduce job overwrites, 
 or cannot overwrite, the result of the other):
 {code} 
 select * from (select 1 as id from zshao_lazy cluster by id union all select 
 2 as id from zshao_meta) a;
 {code} 
 The reason for both is that the destination directories of the file sink 
 operators conflict with each other.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-308) UNION ALL should create different destination directories for different operands

2009-03-08 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12679990#action_12679990
 ] 

Namit Jain commented on HIVE-308:
-

Zheng, there are a lot of problems with union, and I am in the process of 
fixing them in:

https://issues.apache.org/jira/browse/HIVE-318

Some corner cases are not working, and I should hopefully be done in a day or 2.
Can you hold on to this patch? Let us look at these 2 patches together and 
then decide.

 UNION ALL should create different destination directories for different 
 operands
 

 Key: HIVE-308
 URL: https://issues.apache.org/jira/browse/HIVE-308
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.3.0
Reporter: Zheng Shao
Priority: Blocker
 Attachments: HIVE-308.1.patch


 The following query hangs:
 {code} 
 select * from (select 1 from zshao_lazy union all select 2 from zshao_lazy) a;
 {code} 
 The following query produces wrong results (one map-reduce job overwrites, 
 or cannot overwrite, the result of the other):
 {code} 
 select * from (select 1 as id from zshao_lazy cluster by id union all select 
 2 as id from zshao_meta) a;
 {code} 
 The reason for both is that the destination directories of the file sink 
 operators conflict with each other.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Hudson build is back to normal: Hive-trunk-h0.17 #25

2009-03-08 Thread Apache Hudson Server
See http://hudson.zones.apache.org/hudson/job/Hive-trunk-h0.17/25/changes




Hudson build is back to normal: Hive-trunk-h0.18 #26

2009-03-08 Thread Apache Hudson Server
See http://hudson.zones.apache.org/hudson/job/Hive-trunk-h0.18/26/changes




Hudson build is back to normal: Hive-trunk-h0.19 #25

2009-03-08 Thread Apache Hudson Server
See http://hudson.zones.apache.org/hudson/job/Hive-trunk-h0.19/25/changes




[jira] Commented: (HIVE-322) cannot create temporary udf dynamically, with a ClassNotFoundException

2009-03-08 Thread Min Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12680030#action_12680030
 ] 

Min Zhou commented on HIVE-322:
---

Hey Joydeep, I'll try to implement it; however, we will create our temporary UDFs 
in Thrift Server mode, not in the CLI.
That's a great feature you mentioned!

 cannot create temporary udf dynamically, with a ClassNotFoundException 
 ---

 Key: HIVE-322
 URL: https://issues.apache.org/jira/browse/HIVE-322
 Project: Hadoop Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.3.0
Reporter: Min Zhou
Priority: Blocker
 Attachments: registerjars-v3.patch, registerjars1.patch, 
 registerjars2.patch


 I found the ClassLoader cannot load my UDF when doing FunctionTask, because 
 the ClassLoader has not appended its classpaths on-the-fly yet.
 The ExecDriver's addToClassPath(String[] newPaths) method is the only entry 
 for the ClassLoader to dynamically append its classpaths (besides hadoop's 
 GenericOptionsParser).
 But that function was not called before FunctionTask got my UDF class by 
 class name. I think this is the reason why I came across that failure.
 Scenario description:
 I set a property in hive-site.xml to configure the classpath of my UDF:
 <property>
   <name>hive.aux.jars.path</name>
   <value>/home/hadoop/hdpsoft/hive-auxs/zhoumin.jar</value>
 </property>
 but it failed to register, with a ClassNotFoundException, when creating the UDF 
 through the SQL command:
 CREATE TEMPORARY FUNCTION strlen AS 'hadoop.hive.udf.UdfStringLength'
 I'll make a patch soon.
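 For illustration, a minimal standalone sketch (assumptions only, not the coming patch) of what has to happen before FunctionTask can resolve the class: the jar from hive.aux.jars.path must be visible to the class loader first, otherwise Class.forName fails exactly as described above.
 {code}
import java.net.URL;
import java.net.URLClassLoader;

public class AuxJarLoad {
  public static void main(String[] args) throws Exception {
    // Jar path and class name taken from the report above.
    URL auxJar = new URL("file:///home/hadoop/hdpsoft/hive-auxs/zhoumin.jar");
    ClassLoader withJar = new URLClassLoader(new URL[] { auxJar },
        Thread.currentThread().getContextClassLoader());

    // If the jar is never appended to the loader, this is the ClassNotFoundException seen in FunctionTask.
    Class<?> udf = Class.forName("hadoop.hive.udf.UdfStringLength", true, withJar);
    System.out.println("Loaded " + udf.getName());
  }
}
 {code}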

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (HIVE-329) start and stop hive thrift server in daemon mode

2009-03-08 Thread Min Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12680045#action_12680045
 ] 

coderplay edited comment on HIVE-329 at 3/8/09 8:37 PM:
---

I prefer to my way. It's simpler, I think.
So how can I start my hiveserver not in daemon mode following your suggestion?

  was (Author: coderplay):
I prefer my way. It's simpler, I think.
So how can I start my hiveserver not in daemon mode following your suggestion?
  
 start and stop hive thrift server  in daemon mode
 -

 Key: HIVE-329
 URL: https://issues.apache.org/jira/browse/HIVE-329
 Project: Hadoop Hive
  Issue Type: New Feature
  Components: Server Infrastructure
Affects Versions: 0.3.0
Reporter: Min Zhou
 Attachments: daemon.patch


 I wrote two shell scripts to start and stop the Hive Thrift server more conveniently.
 usage:
 bin/hive --service start-hive [HIVE_PORT]
 bin/hive --service stop-hive 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (HIVE-329) start and stop hive thrift server in daemon mode

2009-03-08 Thread Min Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12680045#action_12680045
 ] 

coderplay edited comment on HIVE-329 at 3/8/09 8:37 PM:
---

I prefer my way. It's simpler, I think.
So how can I start my hiveserver not in daemon mode following your suggestion?

  was (Author: coderplay):
I prefer to my way. It's simpler, I think.
So how can I start my hiveserver not in daemon mode following your suggestion?
  
 start and stop hive thrift server  in daemon mode
 -

 Key: HIVE-329
 URL: https://issues.apache.org/jira/browse/HIVE-329
 Project: Hadoop Hive
  Issue Type: New Feature
  Components: Server Infrastructure
Affects Versions: 0.3.0
Reporter: Min Zhou
 Attachments: daemon.patch


 I wrote two shell scripts to start and stop the Hive Thrift server more conveniently.
 usage:
 bin/hive --service start-hive [HIVE_PORT]
 bin/hive --service stop-hive 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-329) start and stop hive thrift server in daemon mode

2009-03-08 Thread Raghotham Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12680049#action_12680049
 ] 

Raghotham Murthy commented on HIVE-329:
---

How about: if the --action option is present, it means start or stop a daemon; if 
no --action is specified, the server is run as a regular process.

 start and stop hive thrift server  in daemon mode
 -

 Key: HIVE-329
 URL: https://issues.apache.org/jira/browse/HIVE-329
 Project: Hadoop Hive
  Issue Type: New Feature
  Components: Server Infrastructure
Affects Versions: 0.3.0
Reporter: Min Zhou
 Attachments: daemon.patch


 I wrote two shell scripts to start and stop the Hive Thrift server more conveniently.
 usage:
 bin/hive --service start-hive [HIVE_PORT]
 bin/hive --service stop-hive 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-313) add UDF date_add, date_sub, datediff

2009-03-08 Thread Zheng Shao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Shao updated HIVE-313:


Attachment: HIVE-313.1.patch

Added the UDFs.

 add UDF date_add, date_sub, datediff
 

 Key: HIVE-313
 URL: https://issues.apache.org/jira/browse/HIVE-313
 Project: Hadoop Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Zheng Shao
 Attachments: HIVE-313.1.patch


 See 
 http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-add
 See 
 http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-sub
 See 
 http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_datediff
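 A rough sketch of the shape such a UDF can take against Hive's old-style UDF base class; the class name, date format and null handling here are illustrative guesses and do not necessarily match the attached patch.
 {code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;

import org.apache.hadoop.hive.ql.exec.UDF;

public class UDFDateAddSketch extends UDF {
  private final SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");

  // date_add('2009-03-08', 2) -> '2009-03-10'
  public String evaluate(String date, int days) {
    if (date == null) {
      return null;
    }
    try {
      Calendar cal = Calendar.getInstance();
      cal.setTime(fmt.parse(date));
      cal.add(Calendar.DAY_OF_MONTH, days);
      return fmt.format(cal.getTime());
    } catch (ParseException e) {
      return null;   // malformed input yields NULL, mirroring the MySQL functions linked above
    }
  }
}
 {code}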

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.