[jira] [Work started] (TRAFODION-1912) support automatic convert DOS format during bulkload

2016-03-30 Thread liu ming (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-1912 started by liu ming.
---
> support automatic convert DOS format during bulkload
> 
>
> Key: TRAFODION-1912
> URL: https://issues.apache.org/jira/browse/TRAFODION-1912
> Project: Apache Trafodion
>  Issue Type: Sub-task
>Reporter: liu ming
>Assignee: liu ming
>
> During bulkload, if the raw text file on HDFS is in DOS format, it is 
> desirable to automatically convert it to the Unix format that Trafodion 
> requires during the load process.
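
For illustration, here is a minimal sketch of the conversion being requested,
assuming the loader reads the HDFS file record by record into std::string
buffers (the helper name and structure are hypothetical, not Trafodion's
actual bulkload code):

{noformat}
#include <string>

// Hypothetical helper: strip a DOS line ending from a record whose trailing
// '\n' has already been removed by the line reader. Returns true if a
// carriage return was removed.
bool stripDosLineEnding(std::string &line)
{
  if (!line.empty() && line.back() == '\r')
    {
      line.pop_back();   // "abc\r" becomes the Unix-style "abc"
      return true;
    }
  return false;
}
{noformat}

Applied to every record as it is read, this turns a CRLF-delimited file into
the LF-delimited form the load expects, without a separate dos2unix pass over
the file on HDFS.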





[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218140#comment-15218140
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/405#discussion_r57908077
  
--- Diff: core/sql/cli/CliExtern.cpp ---
@@ -6316,7 +6316,8 @@ Lng32 SQL_EXEC_DeleteHbaseJNI()
   threadContext->incrNumOfCliCalls();
 
   HBaseClient_JNI::deleteInstance();
-  HiveClient_JNI::deleteInstance();
+  // The Hive client persists across connections
+  // HiveClient_JNI::deleteInstance();
--- End diff --

Whenever a context is dropped, both hbaseClient_JNI and hiveClient_JNI are 
getting deleted. This CLI call is being made from the two places shown below. 
I am not sure about the need for these calls in those functions. If they are 
needed, I would think there would be a similar problem with 
HBaseClient_JNI::deleteInstance too. Or would it be better to get rid of this 
CLI call?

arkcmp/CmpStatement.cpp:  SQL_EXEC_DeleteHbaseJNI();
sqlcomp/CmpSeabaseDDLcommon.cpp:  SQL_EXEC_DeleteHbaseJNI();

In the case of mxosrvr, there is a default context. In the case of the T2 
driver, the CliContext would be deallocated and these objects would be 
deleted. I am assuming that it should be ok to do that.
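
For readers following the review, here is a minimal sketch of the static
singleton pattern that SQL_EXEC_DeleteHbaseJNI tears down; the class below is
an illustrative stand-in, not the actual HBaseClient_JNI or HiveClient_JNI
code:

{noformat}
#include <cstddef>

// Illustrative stand-in for a lazily created JNI client singleton.
class ClientJNI
{
public:
  static ClientJNI *getInstance()
  {
    if (instance_ == NULL)
      instance_ = new ClientJNI();   // created on first use
    return instance_;
  }

  static void deleteInstance()
  {
    delete instance_;                // tears down the cached client
    instance_ = NULL;                // a later getInstance() recreates it
  }

private:
  static ClientJNI *instance_;
};

ClientJNI *ClientJNI::instance_ = NULL;
{noformat}

The question in the review is whether this teardown needs to happen from the
two call sites above at all, since any caller still holding a raw pointer to
the old instance (for example a cached HiveMetaData::client_) is left dangling
after deleteInstance().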


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at 

[jira] [Created] (TRAFODION-1913) org.trafodion.jdbc.t4 throws an error

2016-03-30 Thread Pierre Smits (JIRA)
Pierre Smits created TRAFODION-1913:
---

 Summary:  org.trafodion.jdbc.t4 throws an error
 Key: TRAFODION-1913
 URL: https://issues.apache.org/jira/browse/TRAFODION-1913
 Project: Apache Trafodion
  Issue Type: Bug
  Components: client-jdbc-t4
Affects Versions: 1.3-incubating
Reporter: Pierre Smits


After a restart of both the Trafodion 1.3 Sandbox and Apache OFBiz, I get the 
error from the subject of this posting in the OFBiz log. The excerpt shows:


 [java] Caused by: org.trafodion.jdbc.t4.HPT4Exception: The message id: 
problem_with_server_read

 [java] at 
org.trafodion.jdbc.t4.HPT4Messages.createSQLException(HPT4Messages.java:304) 
~[jdbcT4.jar:?]

 [java] at org.trafodion.jdbc.t4.InputOutput.doIO(InputOutput.java:373) 
~[jdbcT4.jar:?]

 [java] at 
org.trafodion.jdbc.t4.T4Connection.getReadBuffer(T4Connection.java:154) 
~[jdbcT4.jar:?]

 [java] at 
org.trafodion.jdbc.t4.T4Connection.EndTransaction(T4Connection.java:398) 
~[jdbcT4.jar:?]

 [java] at 
org.trafodion.jdbc.t4.InterfaceConnection.endTransaction(InterfaceConnection.java:1222)
 ~[jdbcT4.jar:?]

 [java] at 
org.trafodion.jdbc.t4.InterfaceConnection.rollback(InterfaceConnection.java:440)
 ~[jdbcT4.jar:?]

 [java] at 
org.trafodion.jdbc.t4.TrafT4Connection.rollback(TrafT4Connection.java:1025) 
~[jdbcT4.jar:?]


This continued multiple times until the OFBiz instance halted.






[jira] [Updated] (TRAFODION-1913) org.trafodion.jdbc.t4 throws an error

2016-03-30 Thread Pierre Smits (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Smits updated TRAFODION-1913:

Attachment: hs_err_pid50530.log

>  org.trafodion.jdbc.t4 throws an error
> --
>
> Key: TRAFODION-1913
> URL: https://issues.apache.org/jira/browse/TRAFODION-1913
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: client-jdbc-t4
>Affects Versions: 1.3-incubating
>Reporter: Pierre Smits
> Attachments: hs_err_pid50530.log
>
>
> After a restart of both the Trafodion 1.3 Sandbox and Apache OFBiz, I get the 
> error from the subject of this posting in the OFBiz log. The excerpt shows:
>  [java] Caused by: org.trafodion.jdbc.t4.HPT4Exception: The message id: 
> problem_with_server_read
>  [java] at 
> org.trafodion.jdbc.t4.HPT4Messages.createSQLException(HPT4Messages.java:304) 
> ~[jdbcT4.jar:?]
>  [java] at org.trafodion.jdbc.t4.InputOutput.doIO(InputOutput.java:373) 
> ~[jdbcT4.jar:?]
>  [java] at 
> org.trafodion.jdbc.t4.T4Connection.getReadBuffer(T4Connection.java:154) 
> ~[jdbcT4.jar:?]
>  [java] at 
> org.trafodion.jdbc.t4.T4Connection.EndTransaction(T4Connection.java:398) 
> ~[jdbcT4.jar:?]
>  [java] at 
> org.trafodion.jdbc.t4.InterfaceConnection.endTransaction(InterfaceConnection.java:1222)
>  ~[jdbcT4.jar:?]
>  [java] at 
> org.trafodion.jdbc.t4.InterfaceConnection.rollback(InterfaceConnection.java:440)
>  ~[jdbcT4.jar:?]
>  [java] at 
> org.trafodion.jdbc.t4.TrafT4Connection.rollback(TrafT4Connection.java:1025) 
> ~[jdbcT4.jar:?]
> This continued multiple times until the OFBiz instance halted.





[jira] [Commented] (TRAFODION-1823) ESP idle timeout does not kick in, leading to too many ESPs on the system

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218290#comment-15218290
 ] 

ASF GitHub Bot commented on TRAFODION-1823:
---

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-trafodion/pull/406


> ESP idle timeout does not kick in, leading to too many ESPs on the system
> -
>
> Key: TRAFODION-1823
> URL: https://issues.apache.org/jira/browse/TRAFODION-1823
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Atanu Mishra
>Assignee: Selvaganesan Govindarajan
>
> There are two mechanisms to time out idle ESPs:
> • The master executor kills idle ESPs after a timeout, based on the session 
> default ESP_IDLE_TIMEOUT. However, this only happens when we allocate another 
> statement that needs ESPs, because the needed logic is triggered in the code 
> that allocates ESPs. For example, to set this timeout to 30 minutes (1800 
> seconds):
> set session default esp_idle_timeout '1800';
> • ESPs also have a built-in timeout after a period of inactivity. The same 
> session default ESP_IDLE_TIMEOUT is used for it. This timeout should trigger 
> automatically; no action from the master is required.
> Bug: Right now, this second timeout isn't working, because the idle ESP sees 
> two connections from the master, and an ESP with two connections is not 
> considered idle.
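
A minimal sketch of the kind of idle check described above, using hypothetical
names (this is not the actual ESP code); it shows why a leftover second
connection from the master defeats the timeout:

{noformat}
#include <ctime>

// Hypothetical ESP-side idle check. espIdleTimeout would come from the
// ESP_IDLE_TIMEOUT session default, in seconds.
bool espShouldTimeOut(int numMasterConnections,
                      time_t lastActivityTime,
                      long espIdleTimeout)
{
  // An ESP with more than one connection from the master is not considered
  // idle, which is the bug described above: the stray second connection
  // keeps the ESP alive indefinitely.
  if (numMasterConnections > 1)
    return false;

  return (time(NULL) - lastActivityTime) >= espIdleTimeout;
}
{noformat}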





[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread Hans Zeller (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218296#comment-15218296
 ] 

Hans Zeller commented on TRAFODION-1910:


The issue in this bug occurred when a JDBC client disconnected. During 
disconnect, we delete the HiveClient_JNI object with this stack trace:

{noformat}
#1  0x725ad180 in HiveClient_JNI::~HiveClient_JNI (this=0x7fffdf726900, 
__in_chrg=) at ../executor/HBaseClient_JNI.cpp:4882
#2  0x725ad0d3 in HiveClient_JNI::deleteInstance () at 
../executor/HBaseClient_JNI.cpp:4870
#3  0x73eda59a in SQL_EXEC_DeleteHbaseJNI () at 
../cli/CliExtern.cpp:6319
#4  0x7fffeba7466e in CmpStatement::process (this=0x7fffcd1d6420, es=...) 
at ../arkcmp/CmpStatement.cpp:1314
#5  0x7fffeba61f86 in CmpContext::compileDirect (this=0x7fffdeb80090, 
data=0x7fffe0beca30 "\002", data_len=4, outHeap=0x7fffe0189138, charset=15, 
op=CmpMessageObj::END_SESSION, gen_code=@0x7fffe0beca18, 
gen_code_len=@0x7fffe0beca14, parserFlags=4194304, parentQid=0x0, 
parentQidLen=0, diagsArea=0x0) at ../arkcmp/CmpContext.cpp:894
#6  0x73e7678f in ContextCli::endMxcmpSession (this=0x7fffe0189128, 
cleanupEsps=0, clearCmpCache=0) at ../cli/Context.cpp:3907
#7  0x73e76c6e in ContextCli::endSession (this=0x7fffe0189128, 
cleanupEsps=0, cleanupEspsOnly=0, cleanupOpens=0) at ../cli/Context.cpp:4053
#8  0x7232cfbd in ExSetSessionDefaultTcb::work (this=0x7fffdf764a48) at 
../executor/ex_control.cpp:815
#9  0x72357c2d in ex_tcb::sWork (tcb=0x7fffdf764a48) at 
../executor/ex_tcb.h:103
#10 0x724e343f in ExSubtask::work (this=0x7fffdf764f80) at 
../executor/ExScheduler.cpp:754
#11 0x724e2802 in ExScheduler::work (this=0x7fffdf7645b0, 
prevWaitTime=0) at ../executor/ExScheduler.cpp:331
#12 0x723b2b2a in ex_root_tcb::execute (this=0x7fffdf765000, 
cliGlobals=0x1086dc0, glob=0x7fffdf76f2d8, input_desc=0x0, 
diagsArea=@0x7fffe0bee220, reExecute=0) at ../executor/ex_root.cpp:1058
#13 0x73ebea2b in CliStatement::execute (this=0x7fffdf7827e0, 
cliGlobals=0x1086dc0, input_desc=0x0, diagsArea=..., 
execute_state=CliStatement::INITIAL_STATE_, fixupOnly=0, cliflags=0) at 
../cli/Statement.cpp:4525
#14 0x73e407f4 in SQLCLI_PerformTasks(CliGlobals *, ULng32, SQLSTMT_ID 
*, SQLDESC_ID *, SQLDESC_ID *, Lng32, Lng32, typedef __va_list_tag 
__va_list_tag *, SQLCLI_PTR_PAIRS *, SQLCLI_PTR_PAIRS *) (cliGlobals=0x1086dc0, 
tasks=606, statement_id=0x30613e8, input_descriptor=0x0, output_descriptor=0x0, 
num_input_ptr_pairs=0, num_output_ptr_pairs=0, ap=0x7fffe0bee890, 
input_ptr_pairs=0x0, output_ptr_pairs=0x0) at ../cli/Cli.cpp:3297
#15 0x73e418f6 in SQLCLI_ExecDirect2(CliGlobals *, SQLSTMT_ID *, 
SQLDESC_ID *, Int32, SQLDESC_ID *, Lng32, typedef __va_list_tag __va_list_tag 
*, SQLCLI_PTR_PAIRS *) (cliGlobals=0x1086dc0, statement_id=0x30613e8, 
sql_source=0x7fffe0beeae0, prepFlags=0, input_descriptor=0x0, num_ptr_pairs=0, 
ap=0x7fffe0bee890, ptr_pairs=0x0) at ../cli/Cli.cpp:3731
#16 0x73ed4c17 in SQL_EXEC_ExecDirect2 (statement_id=0x30613e8, 
sql_source=0x7fffe0beeae0, prep_flags=0, input_descriptor=0x0, num_ptr_pairs=0) 
at ../cli/CliExtern.cpp:2329
#17 0x769ceb3d in SRVR::WSQL_EXEC_ExecDirect (statement_id=0x30613e8, 
sql_source=0x7fffe0beeae0, input_descriptor=0x0, num_ptr_pairs=0) at 
SQLWrapper.cpp:363
#18 0x769b594f in SRVR::EXECDIRECT (pSrvrStmt=0x3060dd0) at 
sqlinterface.cpp:4521
#19 0x76941e73 in SRVR::ControlProc (pParam=0x3060dd0) at 
csrvrstmt.cpp:763
#20 0x769414b1 in SRVR_STMT_HDL::ExecDirect (this=0x3060dd0, 
inCursorName=0x0, inSqlString=0x61c430 "SET SESSION DEFAULT SQL_SESSION 'END'", 
inStmtType=1, inSqlStmtType=0, inSqlAsyncEnable=0, inQueryTimeout=0) at 
csrvrstmt.cpp:445
#21 0x769b614b in SRVR::EXECDIRECT (pSqlStr=0x61c430 "SET SESSION 
DEFAULT SQL_SESSION 'END'", WriteError=0) at sqlinterface.cpp:4702
#22 0x00581162 in SRVR::SrvrSessionCleanup () at SrvrConnect.cpp:4080
#23 0x00580d14 in odbc_SQLSvc_TerminateDialogue_ame_ 
(objtag_=0x10785e0, call_id_=0x1078638, dialogueId=903221211) at 
SrvrConnect.cpp:3950
#24 0x0051ce04 in SQLDISCONNECT_IOMessage (objtag_=0x10785e0, 
call_id_=0x1078638) at Interface/odbcs_srvr.cpp:653
#25 0x0051eff4 in DISPATCH_TCPIPRequest (objtag_=0x10785e0, 
call_id_=0x1078638, operation_id=3002) at Interface/odbcs_srvr.cpp:1775
#26 0x00465928 in BUILD_TCPIP_REQUEST (pnode=0x10785e0) at 
../Common/TCPIPSystemSrvr.cpp:606
#27 0x0046586f in PROCESS_TCPIP_REQUEST (pnode=0x10785e0) at 
../Common/TCPIPSystemSrvr.cpp:584
#28 0x004b32b0 in CNSKListenerSrvr::CheckTCPIPRequest (this=0xf2d850, 
ipnode=0x10785e0) at Interface/Listener_srvr.cpp:64
#29 0x004c4939 in CNSKListenerSrvr::tcpip_listener (arg=0xf2d850) at 
Interface/linux/Listener_srvr_ps.cpp:403
#30 0x743752f4 

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread Hans Zeller (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218300#comment-15218300
 ] 

Hans Zeller commented on TRAFODION-1910:


We could also go the other way and try to call a method in the NATableDB to 
clear the cached pointer and delete the HiveClient_JNI when an ODBC/JDBC user 
disconnects. Would that be better?
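
As a minimal sketch of that alternative, with hypothetical names rather than
the real NATableDB / HiveMetaData interfaces: on disconnect, the metadata
cache would drop its cached client pointer while the client object is
destroyed, so nothing is left dangling for the next session.

{noformat}
// Hypothetical sketch only; the class and method names are illustrative.
class HiveClientStub { };            // stand-in for HiveClient_JNI

class HiveMetaCacheStub
{
public:
  void onDisconnect()
  {
    delete client_;                  // destroy the per-session client...
    client_ = nullptr;               // ...and drop the cached pointer
  }

  HiveClientStub *client()
  {
    if (client_ == nullptr)
      client_ = new HiveClientStub();  // re-created lazily in the next session
    return client_;
  }

private:
  HiveClientStub *client_ = nullptr;
};
{noformat}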

> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
> source=0x7f47b711ab18 "showddl mytable", diagsArea=...,
> passed_gen_code=, passed_gen_code_len=3081953576,
> charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1775
> 019 0x7f47ca9bac94 in SQLCLI_Prepare2 (cliGlobals=0x27bcbb0,
> statement_id=0x370a9c8, sql_source=0x7f47b8594610, gencode_ptr=0x0,
> gencode_len=0, ret_gencode_len=0x0, query_cost_info=0x370abf8,
> query_comp_stats_info=0x370ac48, uniqueStmtId=,
> uniqueStmtIdLen=0x370ab2c, flags=1) at ../cli/Cli.cpp:5927
> 020 0x7f47caa1b1ae 

[jira] [Commented] (TRAFODION-1896) CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218425#comment-15218425
 ] 

ASF GitHub Bot commented on TRAFODION-1896:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/393#discussion_r57931410
  
--- Diff: core/sql/optimizer/BindRelExpr.cpp ---
@@ -10165,22 +10165,35 @@ base table then the old version of the row will 
have to be deleted from
 indexes, and a new version inserted. Upsert is being transformed to merge
 so that we can delete the old version of an updated row from the index.
 
-Upsert is also converted into merge when there are omitted cols with 
default values and 
-TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS is set to  OFF in case of 
aligned format table or 
+Upsert is also converted into merge when TRAF_UPSERT_MODE is set to MERGE 
and 
+there are omitted cols with default values in case of aligned format table 
or 
 omitted current timestamp cols in case of non-aligned row format
 */
 NABoolean Insert::isUpsertThatNeedsMerge(NABoolean isAlignedRowFormat, 
NABoolean omittedDefaultCols,
NABoolean 
omittedCurrentDefaultClassCols) const
 {
+  // The necessary conditions to convert upsert to merge and
   if (isUpsert() && 
   (NOT getIsTrafLoadPrep()) && 
   (NOT (getTableDesc()->isIdentityColumnGeneratedAlways() && 
getTableDesc()->hasIdentityColumnInClusteringKey())) && 
   (NOT 
(getTableDesc()->getClusteringIndex()->getNAFileSet()->hasSyskey())) && 
-   ((getTableDesc()->hasSecondaryIndexes()) ||
- (( NOT isAlignedRowFormat) && omittedCurrentDefaultClassCols) ||
- ((isAlignedRowFormat && omittedDefaultCols
-  && 
(CmpCommon::getDefault(TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS) == DF_OFF)))
-   ))
+// table has secondary indexes or
+(getTableDesc()->hasSecondaryIndexes() ||
+  // CQD is set to MERGE  
+  ((CmpCommon::getDefault(TRAF_UPSERT_MODE) == DF_MERGE) &&
+// omitted current default columns with non-aligned row format 
tables
+// or omitted default columns with aligned row format tables 
+(((NOT isAlignedRowFormat) && omittedCurrentDefaultClassCols) 
||
+(isAlignedRowFormat && omittedDefaultCols))) ||
+  // CQD is set to Optimal, for non-aligned row format with 
omitted 
+  // current columns, it is converted into merge though it is not
+  // optimal for performance - This is done to ensure that when 
the 
+  // CQD is set to optimal, non-aligned format would behave like 
+  // merge when any column is  omitted 
--- End diff --

I suppose the comment in line 10192 should say "when any current default 
column is omitted"?

A table somewhere in the code that shows what the behaviour is with the 3 CQD 
settings for aligned with omitted default, aligned with omitted current 
default, non-aligned with omitted default, and non-aligned with omitted 
current default would be helpful.
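
Restating the condition in the diff above as a standalone sketch may help that
discussion; this is a simplified paraphrase of the quoted code and its
comments, not the actual function:

{noformat}
// Simplified paraphrase of the quoted isUpsertThatNeedsMerge() logic.
// The basic preconditions (a plain upsert, no bulk-load prep, no
// always-generated identity column in the clustering key, no SYSKEY) are
// assumed to hold; the upsert is then turned into a merge when:
bool needsMerge(bool hasSecondaryIndexes,
                bool upsertModeIsMerge,     // TRAF_UPSERT_MODE 'MERGE'
                bool upsertModeIsOptimal,   // TRAF_UPSERT_MODE 'OPTIMAL'
                bool isAlignedRowFormat,
                bool omittedDefaultCols,
                bool omittedCurrentDefaultCols)
{
  return hasSecondaryIndexes
      // MERGE mode: omitted current default columns (non-aligned format)
      // or omitted default columns (aligned format) force a merge
      || (upsertModeIsMerge &&
          ((!isAlignedRowFormat && omittedCurrentDefaultCols) ||
           (isAlignedRowFormat && omittedDefaultCols)))
      // OPTIMAL mode: per the quoted comment, non-aligned format with
      // omitted current default columns is still converted into a merge
      || (upsertModeIsOptimal &&
          !isAlignedRowFormat && omittedCurrentDefaultCols);
}
{noformat}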


> CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format
> 
>
> Key: TRAFODION-1896
> URL: https://issues.apache.org/jira/browse/TRAFODION-1896
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Any
>Reporter: Hans Zeller
>Assignee: Selvaganesan Govindarajan
>
> See the discussion with subject "Upsert semantics" in the user list on 
> 3/15/2016 and in https://github.com/apache/incubator-trafodion/pull/380, as 
> well as TRAFODION-1887 and TRAFODION-14.
> It would be good if CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS would 
> ensure the same semantics for all supported table formats.
> Here is part of the email exchange from the user list:
> Selva Govindarajan
> 5:36 PM 
> to user 
> I believe Phoenix doesn’t support insert semantics or non-null default 
> value columns.  Trafodion supports insert, upsert, and non-null default value 
> columns, as well as current default values like current timestamp and current 
> user.
>  
> Upsert handling in Trafodion is the same as in Phoenix for the non-aligned 
> format. For the aligned format it can be controlled via a CQD.
>  
> {noformat}
>Aligned Format  Aligned format 
> with  Non-Aligned with   Non-Aligned with}}
>With no omitted  omitted columns   
>   with no omitted omitted current default
> columns 

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218454#comment-15218454
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

Github user sureshsubbiah commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/405#discussion_r57933613
  
--- Diff: core/sql/cli/CliExtern.cpp ---
@@ -6316,7 +6316,8 @@ Lng32 SQL_EXEC_DeleteHbaseJNI()
   threadContext->incrNumOfCliCalls();
 
   HBaseClient_JNI::deleteInstance();
-  HiveClient_JNI::deleteInstance();
+  // The Hive client persists across connections
+  // HiveClient_JNI::deleteInstance();
--- End diff --

I would echo Selva's comment. Can you please explain how the HiveClient_JNI 
object is different from HBaseClient_JNI? I had tried to model HiveClient_JNI 
after HBaseClient_JNI and would like to check whether that assumption is 
incorrect in other places too.


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 i

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218527#comment-15218527
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/405#discussion_r57938814
  
--- Diff: core/sql/cli/CliExtern.cpp ---
@@ -6316,7 +6316,8 @@ Lng32 SQL_EXEC_DeleteHbaseJNI()
   threadContext->incrNumOfCliCalls();
 
   HBaseClient_JNI::deleteInstance();
-  HiveClient_JNI::deleteInstance();
+  // The Hive client persists across connections
+  // HiveClient_JNI::deleteInstance();
--- End diff --

Sorry, I added a couple of comments to the JIRA, TRAFODION-1910 
(https://issues.apache.org/jira/browse/TRAFODION-1910); I should have added 
them here.

Let me try to summarize: The comments I made try to explain that we _could_ 
treat HiveClient_JNI and HBaseClient_JNI differently. Suresh is asking why we 
_should_ do that.

So, I think what you are telling me is that I should write a new fix that 
treats the two objects in a similar way. Hope to have that ready soon. Thanks 
for your input.


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> c

[jira] [Commented] (TRAFODION-1896) CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218594#comment-15218594
 ] 

ASF GitHub Bot commented on TRAFODION-1896:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/393#discussion_r57943451
  
--- Diff: core/sql/optimizer/BindRelExpr.cpp ---
@@ -10165,22 +10165,35 @@ base table then the old version of the row will 
have to be deleted from
 indexes, and a new version inserted. Upsert is being transformed to merge
 so that we can delete the old version of an updated row from the index.
 
-Upsert is also converted into merge when there are omitted cols with 
default values and 
-TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS is set to  OFF in case of 
aligned format table or 
+Upsert is also converted into merge when TRAF_UPSERT_MODE is set to MERGE 
and 
+there are omitted cols with default values in case of aligned format table 
or 
 omitted current timestamp cols in case of non-aligned row format
 */
 NABoolean Insert::isUpsertThatNeedsMerge(NABoolean isAlignedRowFormat, 
NABoolean omittedDefaultCols,
NABoolean 
omittedCurrentDefaultClassCols) const
 {
+  // The necessary conditions to convert upsert to merge and
   if (isUpsert() && 
   (NOT getIsTrafLoadPrep()) && 
   (NOT (getTableDesc()->isIdentityColumnGeneratedAlways() && 
getTableDesc()->hasIdentityColumnInClusteringKey())) && 
   (NOT 
(getTableDesc()->getClusteringIndex()->getNAFileSet()->hasSyskey())) && 
-   ((getTableDesc()->hasSecondaryIndexes()) ||
- (( NOT isAlignedRowFormat) && omittedCurrentDefaultClassCols) ||
- ((isAlignedRowFormat && omittedDefaultCols
-  && 
(CmpCommon::getDefault(TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS) == DF_OFF)))
-   ))
+// table has secondary indexes or
+(getTableDesc()->hasSecondaryIndexes() ||
+  // CQD is set to MERGE  
+  ((CmpCommon::getDefault(TRAF_UPSERT_MODE) == DF_MERGE) &&
+// omitted current default columns with non-aligned row format 
tables
+// or omitted default columns with aligned row format tables 
+(((NOT isAlignedRowFormat) && omittedCurrentDefaultClassCols) 
||
+(isAlignedRowFormat && omittedDefaultCols))) ||
+  // CQD is set to Optimal, for non-aligned row format with 
omitted 
+  // current columns, it is converted into merge though it is not
+  // optimal for performance - This is done to ensure that when 
the 
+  // CQD is set to optimal, non-aligned format would behave like 
+  // merge when any column is  omitted 
--- End diff --

What I intended to say is this:
A non-aligned format table usually behaves like merge even though the 
statement is not changed into a MERGE statement, because the Trafodion engine 
doesn't store the omitted default columns in the raw HBase table.
When the upsert statement has omitted current default columns, it is 
changed into a MERGE statement even though that is not optimal, because we 
want it to behave as described above in this case as well. Otherwise, it might 
be confusing to the application.


> CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format
> 
>
> Key: TRAFODION-1896
> URL: https://issues.apache.org/jira/browse/TRAFODION-1896
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Any
>Reporter: Hans Zeller
>Assignee: Selvaganesan Govindarajan
>
> See the discussion with subject "Upsert semantics" in the user list on 
> 3/15/2016 and in https://github.com/apache/incubator-trafodion/pull/380, as 
> well as TRAFODION-1887 and TRAFODION-14.
> It would be good if CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS would 
> ensure the same semantics for all supported table formats.
> Here is part of the email exchange from the user list:
> Selva Govindarajan
> 5:36 PM 
> to user 
> I believe Phoenix doesn’t support insert semantics or non-null default 
> value columns.  Trafodion supports insert, upsert, and non-null default value 
> columns, as well as current default values like current timestamp and current 
> user.
>  
> Upsert handling in Trafodion is the same as in Phoenix for the non-aligned 
> format. For the aligned format it can be controlled via a CQD.
>  
> {noformat}
>Aligned Format  Aligned format 
> with  Non-Aligned with   Non-Aligned with}}
>With no omitted  omitted 

[jira] [Commented] (TRAFODION-1896) CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218601#comment-15218601
 ] 

ASF GitHub Bot commented on TRAFODION-1896:
---

Github user selvaganesang commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/393#discussion_r57943651
  
--- Diff: core/sql/optimizer/BindRelExpr.cpp ---
@@ -10165,22 +10165,35 @@ base table then the old version of the row will 
have to be deleted from
 indexes, and a new version inserted. Upsert is being transformed to merge
 so that we can delete the old version of an updated row from the index.
 
-Upsert is also converted into merge when there are omitted cols with 
default values and 
-TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS is set to  OFF in case of 
aligned format table or 
+Upsert is also converted into merge when TRAF_UPSERT_MODE is set to MERGE 
and 
+there are omitted cols with default values in case of aligned format table 
or 
 omitted current timestamp cols in case of non-aligned row format
 */
 NABoolean Insert::isUpsertThatNeedsMerge(NABoolean isAlignedRowFormat, 
NABoolean omittedDefaultCols,
NABoolean 
omittedCurrentDefaultClassCols) const
 {
+  // The necessary conditions to convert upsert to merge and
   if (isUpsert() && 
   (NOT getIsTrafLoadPrep()) && 
   (NOT (getTableDesc()->isIdentityColumnGeneratedAlways() && 
getTableDesc()->hasIdentityColumnInClusteringKey())) && 
   (NOT 
(getTableDesc()->getClusteringIndex()->getNAFileSet()->hasSyskey())) && 
-   ((getTableDesc()->hasSecondaryIndexes()) ||
- (( NOT isAlignedRowFormat) && omittedCurrentDefaultClassCols) ||
- ((isAlignedRowFormat && omittedDefaultCols
-  && 
(CmpCommon::getDefault(TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS) == DF_OFF)))
-   ))
+// table has secondary indexes or
+(getTableDesc()->hasSecondaryIndexes() ||
+  // CQD is set to MERGE  
+  ((CmpCommon::getDefault(TRAF_UPSERT_MODE) == DF_MERGE) &&
+// omitted current default columns with non-aligned row format 
tables
+// or omitted default columns with aligned row format tables 
+(((NOT isAlignedRowFormat) && omittedCurrentDefaultClassCols) 
||
+(isAlignedRowFormat && omittedDefaultCols))) ||
+  // CQD is set to Optimal, for non-aligned row format with 
omitted 
+  // current columns, it is converted into merge though it is not
+  // optimal for performance - This is done to ensure that when 
the 
+  // CQD is set to optimal, non-aligned format would behave like 
+  // merge when any column is  omitted 
--- End diff --

I will add the table from the message on the user distribution list to the 
comments section when I get a chance to update this file again.


> CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format
> 
>
> Key: TRAFODION-1896
> URL: https://issues.apache.org/jira/browse/TRAFODION-1896
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Any
>Reporter: Hans Zeller
>Assignee: Selvaganesan Govindarajan
>
> See the discussion with subject "Upsert semantics" in the user list on 
> 3/15/2016 and in https://github.com/apache/incubator-trafodion/pull/380, as 
> well as TRAFODION-1887 and TRAFODION-14.
> It would be good if CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS would 
> ensure the same semantics for all supported table formats.
> Here is part of the email exchange from the user list:
> Selva Govindarajan
> 5:36 PM 
> to user 
> I believe Phoenix doesn’t support insert semantics or non-null default 
> value columns.  Trafodion supports insert, upsert, and non-null default value 
> columns, as well as current default values like current timestamp and current 
> user.
>  
> Upsert handling in Trafodion is the same as in Phoenix for the non-aligned 
> format. For the aligned format it can be controlled via a CQD.
>  
> {noformat}
>Aligned Format  Aligned format 
> with  Non-Aligned with   Non-Aligned with}}
>With no omitted  omitted columns   
>   with no omitted omitted current default
> columns   
>   / omitted non-current columns
>  
> Default behavior   Replaces rowMERGE  
>Replace the given columns   ME

[jira] [Created] (TRAFODION-1914) optimize "added columns" in indexes

2016-03-30 Thread Eric Owhadi (JIRA)
Eric Owhadi created TRAFODION-1914:
--

 Summary: optimize "added columns" in indexes
 Key: TRAFODION-1914
 URL: https://issues.apache.org/jira/browse/TRAFODION-1914
 Project: Apache Trafodion
  Issue Type: Improvement
  Components: sql-cmp
Reporter: Eric Owhadi


The current CREATE INDEX feature always puts each column added to the index 
into the clustering key. But sometimes users just want to add columns to the 
index to avoid having to probe back to the primary table to fetch only one or 
two columns. Copying these columns into the index can avoid the probe back to 
the main table and therefore improve performance. The current implementation 
allows this, but it always makes the extra columns part of the clustering key. 
That is not optimal, and it is very bad in the case of VARCHAR columns, since 
they are expanded to their maximum size when they are part of the clustering 
key. So this JIRA is about altering the syntax of CREATE INDEX to flag columns 
that are added to the index but should not be part of the clustering key.
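
As a purely illustrative sketch of the distinction being proposed (not
Trafodion's actual index layout): key columns participate in ordering and, for
VARCHAR, are padded to their maximum length, while non-key covered columns
would be stored only as payload, so a query can be answered from the index
without a probe back to the base table.

{noformat}
#include <set>
#include <string>
#include <vector>

// Illustrative sketch only, not Trafodion's index implementation.
// Key columns define the ordering (VARCHAR keys are padded to their maximum
// length); covered columns would be carried as payload only.
struct IndexEntry
{
  std::vector<std::string> keyColumnNames;
  std::vector<std::string> coveredColumnNames;
};

// A query can be answered from the index alone (no probe back to the base
// table) if every column it references is either a key or a covered column.
bool indexCoversQuery(const std::set<std::string> &referencedCols,
                      const IndexEntry &e)
{
  std::set<std::string> available(e.keyColumnNames.begin(),
                                  e.keyColumnNames.end());
  available.insert(e.coveredColumnNames.begin(), e.coveredColumnNames.end());

  for (const std::string &col : referencedCols)
    if (available.find(col) == available.end())
      return false;
  return true;
}
{noformat}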





[jira] [Assigned] (TRAFODION-1843) Allow USER option(s) to be defined as defaults in a table column definition

2016-03-30 Thread Roberta Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roberta Marton reassigned TRAFODION-1843:
-

Assignee: (was: Roberta Marton)

> Allow USER option(s) to be defined as defaults in a table column definition
> ---
>
> Key: TRAFODION-1843
> URL: https://issues.apache.org/jira/browse/TRAFODION-1843
> Project: Apache Trafodion
>  Issue Type: Improvement
>  Components: sql-cmu
>Reporter: Roberta Marton
>
> ANSI SQL allows you to specify USER, CURRENT_USER, and SESSION_USER as 
> default options in the DEFAULT clause when creating a column definition for a 
> table.  Trafodion should support similar functionality.
> USER and CURRENT_USER are semantically the same and represent the value of 
> the current authorization identifier.
> SESSION_USER is the value of the SQL session authorization identifier.
> Support for USER, CURRENT_USER, and SESSION_USER exists in the code and can 
> be used, for example, in insert statements.
> All user values should be returned as a VARCHAR(128) to match the current 
> implementation.
> ANSI also supports a SYSTEM_USER option.  SYSTEM_USER is an 
> implementation-defined value that represents the operating system user 
> related to the process running the request.  At this time, there are no plans 
> to support SYSTEM_USER.





[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219104#comment-15219104
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

GitHub user zellerh opened a pull request:

https://github.com/apache/incubator-trafodion/pull/408

TRAFODION-1910 mxosrvr crashes on Hive query after reconnect (take 2)

NATableDB is caching a pointer to a HiveClient_JNI object
(HiveMetaData::client_), but that object gets deallocated when a JDBC
client disconnects. Fixing this by keeping the HiveClient_JNI around
across sessions.

Selva and Suresh commented on the first fix and suggested to treat both
HBaseClient_JNI and HiveClient_JNI the same and to remove the CLI
interface that's used to delete these objects.

Therefore, the new fix is to remove this CLI call. It gets called from
two places, one is when an ODBC/JDBC connection closes and the other
is from "initialize trafodion, drop". We believe that neither of them
is needed. Note that we have only one object of each type per CLI
context, and that we delete both objects when we delete the context
(ContextCli::deleteMe()), so there are no leaks.
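
A minimal sketch of the ownership model the fix relies on, with illustrative
stand-in names rather than the real ContextCli code: each context owns at most
one client object of each kind and deletes both when the context itself is
deleted, so dropping the extra CLI teardown call does not introduce a leak.

{noformat}
// Illustrative sketch of per-context ownership (not the actual ContextCli).
class HBaseClientStub { };   // stand-in for HBaseClient_JNI
class HiveClientStub  { };   // stand-in for HiveClient_JNI

class ContextStub
{
public:
  ~ContextStub()             // analogous to the cleanup in ContextCli::deleteMe()
  {
    delete hbaseClient_;     // the single HBase client owned by this context
    delete hiveClient_;      // the single Hive client owned by this context
  }

private:
  HBaseClientStub *hbaseClient_ = nullptr;  // at most one of each per context
  HiveClientStub  *hiveClient_  = nullptr;
};
{noformat}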

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zellerh/incubator-trafodion bug/1910

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-trafodion/pull/408.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #408


commit 841321a57d03343faf7d894ff8987ea677bde05b
Author: Hans Zeller 
Date:   2016-03-31T00:08:50Z

TRAFODION-1910 mxosrvr crashes on Hive query after reconnect (take 2)

NATableDB is caching a pointer to a HiveClient_JNI object
(HiveMetaData::client_), but that object gets deallocated when a JDBC
client disconnects. Fixing this by keeping the HiveClient_JNI around
across sessions.

Selva and Suresh commented on the first fix and suggested to treat both
HBaseClient_JNI and HiveClient_JNI the same and to remove the CLI
interface that's used to delete these objects.

Therefore, the new fix is to remove this CLI call. It gets called from
two places, one is when an ODBC/JDBC connection closes and the other
is from "initialize trafodion, drop". We believe that neither of them
is needed. Note that we have only one object of each type per CLI
context, and that we delete both objects when we delete the context
(ContextCli::deleteMe()), so there are no leaks.




> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219105#comment-15219105
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

Github user zellerh closed the pull request at:

https://github.com/apache/incubator-trafodion/pull/405


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
> source=0x7f47b711ab18 "showddl mytable", diagsArea=...,
> passed_gen_code=, passed_gen_code_len=3081953576,
> charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1775
> 019 0x7f47ca9bac94 in SQLCLI_Prepare2 (cliGlobals=0x27bcbb0,
> statement_id=0x370a9c8, sql_source=0x7f47b8594610, gencode_ptr=0x0,
> gencode_len=0, ret_gencode_len=0x0, query_cost_info=0x370abf8,
> query_comp_stats_info=0x370ac48, uniqueStmtId=,
> uniqueStmtIdLen=0x370ab2c, flags=1) at ../cli/Cli.cpp:5927
> 020 0x7f47caa1b1ae in SQL_EXEC_Prepare2 (statement_id=0x370a9c8,
> sql_source=0x7f47b859461