[jira] [Created] (TRAFODION-1912) support automatic convert DOS format during bulkload

2016-03-29 Thread liu ming (JIRA)
liu ming created TRAFODION-1912:
---

 Summary: support automatic convert DOS format during bulkload
 Key: TRAFODION-1912
 URL: https://issues.apache.org/jira/browse/TRAFODION-1912
 Project: Apache Trafodion
  Issue Type: Sub-task
Reporter: liu ming


During bulkload, if the raw text file on HDFS is in DOS format (CRLF line endings), it is desirable 
to automatically convert it to the Unix format (LF line endings) that Trafodion requires during the 
load process.
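
To illustrate the requested behavior, here is a minimal, standalone C++ sketch (not Trafodion code; the file-copy framing and function name are hypothetical) that strips the trailing carriage return from each scanned line, which is the essence of the DOS-to-Unix conversion the load process would need to apply:

{code}
// Minimal standalone sketch: convert DOS (CRLF) line endings to Unix (LF).
// Not Trafodion code; shown only to illustrate the per-line conversion.
#include <fstream>
#include <iostream>
#include <string>

// Remove a trailing '\r' left over from a CRLF line ending, if present.
static void stripTrailingCR(std::string &line) {
  if (!line.empty() && line.back() == '\r')
    line.pop_back();
}

int main(int argc, char **argv) {
  if (argc != 3) {
    std::cerr << "usage: dos2unix_copy <in> <out>\n";
    return 1;
  }
  std::ifstream in(argv[1], std::ios::binary);
  std::ofstream out(argv[2], std::ios::binary);
  std::string line;
  while (std::getline(in, line)) {   // getline consumes the '\n'
    stripTrailingCR(line);           // drop the '\r' of a CRLF pair
    out << line << '\n';             // write a Unix-style line ending
  }
  return 0;
}
{code}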



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (TRAFODION-1912) support automatic convert DOS format during bulkload

2016-03-29 Thread liu ming (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liu ming reassigned TRAFODION-1912:
---

Assignee: liu ming

> support automatic convert DOS format during bulkload
> 
>
> Key: TRAFODION-1912
> URL: https://issues.apache.org/jira/browse/TRAFODION-1912
> Project: Apache Trafodion
>  Issue Type: Sub-task
>Reporter: liu ming
>Assignee: liu ming
>
> During bulkload, if the raw text file on HDFS is in DOS format (CRLF line endings), it is desirable 
> to automatically convert it to the Unix format (LF line endings) that Trafodion requires during the 
> load process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TRAFODION-1896) CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format

2016-03-29 Thread Selvaganesan Govindarajan (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216920#comment-15216920
 ] 

Selvaganesan Govindarajan commented on TRAFODION-1896:
--

The CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS is replaced with TRAF_UPSERT_MODE.

This CQD takes 3 values:

MERGE   - If the row exists, use the old values for the omitted columns. This
          doesn't necessarily mean that the MERGE command is used.
REPLACE - Always use the default values for the omitted columns.
OPTIMAL - Chooses MERGE-like behavior for non-aligned format tables and
          REPLACE for aligned format tables.

MERGE is the default.
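
As an illustration of the mapping described above, here is a small standalone C++ sketch (the enum, type, and function names are hypothetical, not Trafodion source) of how the effective upsert semantics could be chosen from the CQD value and the table format:

{code}
// Standalone sketch of the TRAF_UPSERT_MODE mapping described above.
// The names are hypothetical and do not match the Trafodion sources.
#include <iostream>

enum class UpsertMode { MERGE, REPLACE, OPTIMAL };            // CQD values
enum class UpsertSemantics { MERGE_OMITTED, REPLACE_OMITTED };

// MERGE   : keep old values for omitted columns if the row exists.
// REPLACE : always use default values for omitted columns.
// OPTIMAL : MERGE-like for non-aligned format, REPLACE for aligned format.
UpsertSemantics effectiveSemantics(UpsertMode mode, bool alignedFormat) {
  switch (mode) {
    case UpsertMode::MERGE:   return UpsertSemantics::MERGE_OMITTED;
    case UpsertMode::REPLACE: return UpsertSemantics::REPLACE_OMITTED;
    case UpsertMode::OPTIMAL:
      return alignedFormat ? UpsertSemantics::REPLACE_OMITTED
                           : UpsertSemantics::MERGE_OMITTED;
  }
  return UpsertSemantics::MERGE_OMITTED;  // MERGE is the default
}

int main() {
  std::cout << (effectiveSemantics(UpsertMode::OPTIMAL, /*aligned*/ true) ==
                        UpsertSemantics::REPLACE_OMITTED
                    ? "aligned + OPTIMAL -> replace omitted columns\n"
                    : "unexpected\n");
  return 0;
}
{code}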

> CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format
> 
>
> Key: TRAFODION-1896
> URL: https://issues.apache.org/jira/browse/TRAFODION-1896
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Any
>Reporter: Hans Zeller
>Assignee: Selvaganesan Govindarajan
>
> See the discussion with subject "Upsert semantics" in the user list on 
> 3/15/2016 and in https://github.com/apache/incubator-trafodion/pull/380, as 
> well as TRAFODION-1887 and TRAFODION-14.
> It would be good if CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS would 
> ensure the same semantics for all supported table formats.
> Here is part of the email exchange from the user list:
> Selva Govindarajan
> 5:36 PM 
> to user 
> I believe phoenix doesn’t support insert semantics or the non-null default 
> value columns.  Trafodion supports insert, upsert, non-null default value 
> columns as well as current default values like current timestamp and current 
> user.
>  
> Upsert handling in Trafodion is the same as Phoenix for non-aligned format. For 
> aligned format it can be controlled via a CQD.
>  
> {noformat}
>                    Aligned format      Aligned format      Non-aligned with    Non-aligned with omitted
>                    with no omitted     with omitted        no omitted          current default / omitted
>                    columns             columns             columns             non-current columns
>
> Default behavior   Replaces row        MERGE               Replace the given   MERGE
>                                                            columns
> With the CQD       Replaces row        Replaces row        Replace the given   MERGE
> set to on                                                  columns
> {noformat}
>  
> The CQD to be used is TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS (Default is 
> off). In short, this CQD is a no-op for non-aligned format.  
>  
> The behavior of the non-aligned format can’t be controlled by the CQD because 
> we don’t store values for the omitted columns in hbase and hence when the 
> user switches the CQD settings for upserts with different sets of omitted 
> columns, we could end up with non-deterministic values for these columns.
> For example: an upsert with the CQD set to ‘on’ with one set of omitted columns, 
> followed by an upsert with the CQD set to ‘off’ with a different set of omitted columns.
> If we switch to inserting all column values all the time for non-aligned format, 
> then we can let the user control what value is put in for the omitted 
> columns.
>  
> Selva 
>  
> From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
> Sent: Tuesday, March 15, 2016 4:01 PM
> To: u...@trafodion.incubator.apache.org
> Subject: Re: Upsert semantics
> Hans Zeller 
> 5:57 PM (20 minutes ago)
> to user 
> Again, IMHO that's the wrong way to go, but I hope others will chime in. Dave 
> gave the best reason, it's a bad idea to make the semantics of UPSERT depend 
> on the internal format. Here is what I would suggest, using Selva's table:
> {noformat}
>                    Aligned format      Aligned format      Non-aligned with    Non-aligned with omitted
>                    with no omitted     with omitted        no omitted          current default / omitted
>                    columns             columns             columns             non-current columns
>
> CQD off            Replaces row        MERGE               Replace the given   MERGE
>                                                            columns
> CQD on (default)   Replaces row        Replaces row        Replace all         Replace all
>                                                            columns             columns
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (TRAFODION-1896) CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format

2016-03-29 Thread Selvaganesan Govindarajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-1896 started by Selvaganesan Govindarajan.

> CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS for non-aligned format
> 
>
> Key: TRAFODION-1896
> URL: https://issues.apache.org/jira/browse/TRAFODION-1896
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-cmp
>Affects Versions: 1.3-incubating
> Environment: Any
>Reporter: Hans Zeller
>Assignee: Selvaganesan Govindarajan
>
> See the discussion with subject "Upsert semantics" in the user list on 
> 3/15/2016 and in https://github.com/apache/incubator-trafodion/pull/380, as 
> well as TRAFODION-1887 and TRAFODION-14.
> It would be good if CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS would 
> ensure the same semantics for all supported table formats.
> Here is part of the email exchange from the user list:
> Selva Govindarajan
> 5:36 PM 
> to user 
> I believe phoenix doesn’t support insert semantics or the non-null default 
> value columns.  Trafodion supports insert, upsert, non-null default value 
> columns as well as current default values like current timestamp and current 
> user.
>  
> Upsert handling in Trafodion is the same as Phoenix for non-aligned format. For 
> aligned format it can be controlled via a CQD.
>  
> {noformat}
>                    Aligned format      Aligned format      Non-aligned with    Non-aligned with omitted
>                    with no omitted     with omitted        no omitted          current default / omitted
>                    columns             columns             columns             non-current columns
>
> Default behavior   Replaces row        MERGE               Replace the given   MERGE
>                                                            columns
> With the CQD       Replaces row        Replaces row        Replace the given   MERGE
> set to on                                                  columns
> {noformat}
>  
> The CQD to be used is TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS (Default is 
> off). In short, this CQD is a no-op for non-aligned format.  
>  
> The behavior of the non-aligned format can’t be controlled by the CQD because 
> we don’t store values for the omitted columns in hbase and hence when the 
> user switches the CQD settings for upserts with different sets of omitted 
> columns, we could end up with non-deterministic values for these columns.
> For example: an upsert with the CQD set to ‘on’ with one set of omitted columns, 
> followed by an upsert with the CQD set to ‘off’ with a different set of omitted columns.
> If we switch to inserting all column values all the time for non-aligned format, 
> then we can let the user control what value is put in for the omitted 
> columns.
>  
> Selva 
>  
> From: Hans Zeller [mailto:hans.zel...@esgyn.com] 
> Sent: Tuesday, March 15, 2016 4:01 PM
> To: u...@trafodion.incubator.apache.org
> Subject: Re: Upsert semantics
> Hans Zeller 
> 5:57 PM (20 minutes ago)
> to user 
> Again, IMHO that's the wrong way to go, but I hope others will chime in. Dave 
> gave the best reason, it's a bad idea to make the semantics of UPSERT depend 
> on the internal format. Here is what I would suggest, using Selva's table:
> {noformat}
>                    Aligned format      Aligned format      Non-aligned with    Non-aligned with omitted
>                    with no omitted     with omitted        no omitted          current default / omitted
>                    columns             columns             columns             non-current columns
>
> CQD off            Replaces row        MERGE               Replace the given   MERGE
>                                                            columns
> CQD on (default)   Replaces row        Replaces row        Replace all         Replace all
>                                                            columns             columns
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (TRAFODION-1823) ESP idle timeout does not kick in, leading to too many ESPs on the system

2016-03-29 Thread Selvaganesan Govindarajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-1823 started by Selvaganesan Govindarajan.

> ESP idle timeout does not kick in, leading to too many ESPs on the system
> -
>
> Key: TRAFODION-1823
> URL: https://issues.apache.org/jira/browse/TRAFODION-1823
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Atanu Mishra
>Assignee: Selvaganesan Govindarajan
>
> There are two mechanisms to time out idle ESPs:
> • The master executor kills idle ESPs after a timeout, based on the session 
> default ESP_IDLE_TIMEOUT. However, this only happens when we allocate another 
> statement that needs ESPs, because the needed logic is triggered in the code 
> that allocates ESPs. For example, to set this timeout to 30 minutes (1800 seconds):
> set session default esp_idle_timeout '1800';
> • ESPs also have a built-in timeout after a period of inactivity. The same 
> session default ESP_IDLE_TIMEOUT is used here as well. This timeout should trigger 
> automatically; no action from the master is required.
> Bug: Right now, this second timeout isn't working, because the idle ESP sees 
> two connections from the master, and an ESP with open connections is not considered idle.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TRAFODION-1823) ESP idle timeout does not kick in, leading to too many ESPs on the system

2016-03-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216907#comment-15216907
 ] 

ASF GitHub Bot commented on TRAFODION-1823:
---

GitHub user selvaganesang opened a pull request:

https://github.com/apache/incubator-trafodion/pull/406

[TRAFODION-1823] ESP idle timeout does not kick in, leading to too many ESPs on the system

The close message from the master was not sent to the ESP by the platform-
agnostic messaging layer in Trafodion. Fixed this bug so that the ESP idle
timeout works as expected.

ESP_IDLE_TIMEOUT can now be given as a CQD. Internally, it is converted to a
SET SESSION DEFAULT and used by the IPC layer of Trafodion. ESP_IDLE_TIMEOUT
defaults to 30 minutes in the CQD as well.

Added a test in core/TESTRTS for ESP_IDLE_TIMEOUT.
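
To illustrate the intended behavior, here is a schematic, standalone C++ sketch (hypothetical names; this is not Trafodion's IPC code): an ESP-style idle timer tracks the last activity time and the number of open master connections, and the process exits once it has been idle past the timeout. The original bug corresponds to the connection count never dropping because the close message was lost.

{code}
// Schematic sketch of an ESP-style idle timeout, with hypothetical names.
// The bug described above: the close message never arrived, so the
// connection count stayed > 0 and the process never considered itself idle.
#include <chrono>
#include <cstdlib>
#include <thread>

struct EspState {
  int openConnections = 0;                               // master connections
  std::chrono::steady_clock::time_point lastActivity =
      std::chrono::steady_clock::now();
};

void noteActivity(EspState &esp) {
  esp.lastActivity = std::chrono::steady_clock::now();
}

// True if the ESP has had no connections and no work longer than the timeout.
bool idleTimedOut(const EspState &esp, std::chrono::seconds timeout) {
  if (esp.openConnections > 0)
    return false;  // a lingering connection blocks the idle timeout
  return std::chrono::steady_clock::now() - esp.lastActivity >= timeout;
}

int main() {
  EspState esp;
  const std::chrono::seconds timeout(1800);              // 30-minute default
  while (true) {
    // ... service messages here, calling noteActivity() and adjusting
    //     openConnections as connections open and close ...
    if (idleTimedOut(esp, timeout))
      std::exit(0);                                      // terminate idle ESP
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}
{code}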

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/selvaganesang/incubator-trafodion 
trafodion-1823

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-trafodion/pull/406.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #406


commit 87eb03471199117c6f6585b255d430ef1325dcc7
Author: selvaganesang 
Date:   2016-03-29T21:37:05Z

[TRAFODION-1823] ESP idle timeout does not kick in, leading to too many 
ESPs on the system

The close message from the master was not sent to the ESP by the platform-
agnostic messaging layer in Trafodion. Fixed this bug so that the ESP idle
timeout works as expected.

ESP_IDLE_TIMEOUT can now be given as a CQD. Internally, it is converted to a
SET SESSION DEFAULT and used by the IPC layer of Trafodion. ESP_IDLE_TIMEOUT
defaults to 30 minutes in the CQD as well.

Added a test in core/TESTRTS for ESP_IDLE_TIMEOUT




> ESP idle timeout does not kick in, leading to too many ESPs on the system
> -
>
> Key: TRAFODION-1823
> URL: https://issues.apache.org/jira/browse/TRAFODION-1823
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Atanu Mishra
>Assignee: Selvaganesan Govindarajan
>
> There are two mechanisms to time out idle ESPs:
> • The master executor kills idle ESPs after a timeout, based on the session 
> default ESP_IDLE_TIMEOUT. However, this only happens when we allocate another 
> statement that needs ESPs, because the needed logic is triggered in the code 
> that allocates ESPs. For example, to set this timeout to 30 minutes (1800 seconds):
> set session default esp_idle_timeout '1800';
> • ESPs also have a built-in timeout after a period of inactivity. The same 
> session default ESP_IDLE_TIMEOUT is used here as well. This timeout should trigger 
> automatically; no action from the master is required.
> Bug: Right now, this second timeout isn't working, because the idle ESP sees 
> two connections from the master, and an ESP with open connections is not considered idle.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread Hans Zeller (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216895#comment-15216895
 ] 

Hans Zeller commented on TRAFODION-1910:


For this JIRA, the error we are seeing on the client side is this:

*** ERROR[29157] There was a problem reading from the server
*** ERROR[29160] The message header was not long enough
*** ERROR[29157] There was a problem reading from the server
*** ERROR[29160] The message header was not long enough


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
> source=0x7f47b711ab18 "showddl mytable", diagsArea=...,
> passed_gen_code=, passed_gen_code_len=3081953576,
> charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1775
> 019 0x7f47ca9bac94 in SQLCLI_Prepare2 (cliGlobals=0x27bcbb0,
> statement_id=0x370a9c8, sql_source=0x7f47b8594610, gencode_ptr=0x0,
> gencode_len=0, ret_gencode_len=0x0, query_cost_info=0x370abf8,
> query_comp_stats_info=0x370ac48, 

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread Hans Zeller (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216894#comment-15216894
 ] 

Hans Zeller commented on TRAFODION-1910:


Hi Pierre, thanks for pointing this out. My guess would be that it is a 
different issue. One difference is that your exception seems to be generated on 
the client side, while ending a transaction, whereas in my case it happens on the 
server side. Second, in my case no transaction is involved, and we only see this 
if we do a Hive query in two sessions that get connected to the same mxosrvr.

> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
> source=0x7f47b711ab18 "showddl mytable", diagsArea=...,
> passed_gen_code=, passed_gen_code_len=3081953576,
> charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1775
> 019 0x7f47ca9bac94 in SQLCLI_Prepare2 (cliGlobals=0x27bcbb0,
> statement_id=0x370a9c8, sql_source=0x7f47b8594610, gencode_ptr=0x0,
> gencode_len=0, 

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216874#comment-15216874
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

Github user zellerh commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/405#discussion_r57803569
  
--- Diff: core/sql/cli/CliExtern.cpp ---
@@ -6316,7 +6316,8 @@ Lng32 SQL_EXEC_DeleteHbaseJNI()
   threadContext->incrNumOfCliCalls();
 
   HBaseClient_JNI::deleteInstance();
-  HiveClient_JNI::deleteInstance();
+  // The Hive client persists across connections
+  // HiveClient_JNI::deleteInstance();
--- End diff --

Good question, and I think any security issues are (or need to be) handled 
in the HiveMetaData object, which already persists between connections. The 
object that now also persists is HiveClient_JNI (and the corresponding Java 
object). Those don't keep a cache or state about Hive metadata, they just have 
methods to read and validate Hive metadata. So, in short, no, I don't believe 
there is a security concern with this change.


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> 

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216867#comment-15216867
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

Github user DaveBirdsall commented on a diff in the pull request:

https://github.com/apache/incubator-trafodion/pull/405#discussion_r57803047
  
--- Diff: core/sql/cli/CliExtern.cpp ---
@@ -6316,7 +6316,8 @@ Lng32 SQL_EXEC_DeleteHbaseJNI()
   threadContext->incrNumOfCliCalls();
 
   HBaseClient_JNI::deleteInstance();
-  HiveClient_JNI::deleteInstance();
+  // The Hive client persists across connections
+  // HiveClient_JNI::deleteInstance();
--- End diff --

Is there any security issue here? If we integrate with Hive security (and I 
don't know if we have or not) is there some notion of re-authentication at 
connection time?


> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
> source=0x7f47b711ab18 "showddl mytable", 

[jira] [Commented] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15216861#comment-15216861
 ] 

ASF GitHub Bot commented on TRAFODION-1910:
---

GitHub user zellerh opened a pull request:

https://github.com/apache/incubator-trafodion/pull/405

TRAFODION-1910 mxosrvr crashes on Hive query after reconnect

NATableDB is caching a pointer to a HiveClient_JNI object
(HiveMetaData::client_), but that object gets deallocated when a JDBC
client disconnects.  Fixing this by keeping the HiveClient_JNI around
across sessions.
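
The crash pattern and the fix described above can be illustrated with a small standalone C++ sketch (the class and function names are hypothetical and do not match the Trafodion sources): a metadata cache holds a raw pointer to a client object; if that object is deleted on disconnect, the cached pointer dangles, whereas keeping a single client instance alive for the life of the process avoids the problem.

{code}
// Standalone sketch of the dangling-pointer pattern described in this fix;
// the names are hypothetical and do not match the Trafodion sources.
#include <iostream>

class HiveClient {            // stands in for the JNI-backed Hive client
 public:
  const char *getTableStr() const { return "table descriptor"; }
};

// Process-wide client that persists across connections (the fix): instead
// of deleting the instance on disconnect, reuse one instance for the
// lifetime of the process.
HiveClient *getPersistentClient() {
  static HiveClient instance;
  return &instance;
}

class MetaDataCache {         // stands in for the metadata layer caching a pointer
 public:
  explicit MetaDataCache(HiveClient *client) : client_(client) {}
  void describeTable() { std::cout << client_->getTableStr() << "\n"; }
 private:
  HiveClient *client_;        // cached across sessions
};

int main() {
  // Buggy variant (not shown): cache a pointer to an object that is deleted
  // when the session ends; the next session dereferences a dangling pointer
  // and crashes.
  // Fixed variant: the cached pointer refers to an object that outlives
  // individual sessions.
  MetaDataCache cache(getPersistentClient());
  cache.describeTable();      // session 1
  cache.describeTable();      // session 2 after "reconnect": still valid
  return 0;
}
{code}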

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zellerh/incubator-trafodion bug/1581

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-trafodion/pull/405.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #405


commit d97f9db39f0d35c1dc19566f5327f0812e521dc7
Author: Hans Zeller 
Date:   2016-03-29T21:06:44Z

TRAFODION-1910 mxosrvr crashes on Hive query after reconnect

NATableDB is caching a pointer to a HiveClient_JNI object
(HiveMetaData::client_), but that object gets deallocated when a JDBC
client disconnects.  Fixing this by keeping the HiveClient_JNI around
across sessions.




> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> 

[jira] [Work started] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread Hans Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TRAFODION-1910 started by Hans Zeller.
--
> mxosrvr crashes on Hive query after reconnect
> -
>
> Key: TRAFODION-1910
> URL: https://issues.apache.org/jira/browse/TRAFODION-1910
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: sql-exe
>Affects Versions: 1.3-incubating
>Reporter: Hans Zeller
>Assignee: Hans Zeller
>
> This is a problem Wei-Shiun found when running tests with many connections 
> that use Hive queries. He sees intermittent core dumps with this stack trace:
> #0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
> 001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
> 002 0x7f47cc613a55 in os::abort(bool) ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 003 0x7f47cc793f87 in VMError::report_and_die() ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 004 0x7f47cc61896f in JVM_handle_linux_signal ()
>from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
> 005 
> 006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
> errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
> at ../executor/hiveHook.cpp:228
> 007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
> schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
> at ../executor/hiveHook.cpp:806
> 008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, 
> corrName=...,
> bindWA=0x7f47b85912d0, inTableDescStruct=)
> at ../optimizer/NATable.cpp:8377
> 009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
> corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
> at ../optimizer/BindRelExpr.cpp:1514
> 010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
> 011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
> 012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
> bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
> 013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:2071
> 014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
> input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
> queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
> gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
> fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
> cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
> at ../sqlcomp/CmpMain.cpp:1684
> 015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, 
> input=...,
> gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
> phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
> at ../sqlcomp/CmpMain.cpp:819
> 016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
> sqltext=) at ../arkcmp/CmpStatement.cpp:499
> 017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
> data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
> charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
> gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
> parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
> 018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
> source=0x7f47b711ab18 "showddl mytable", diagsArea=...,
> passed_gen_code=, passed_gen_code_len=3081953576,
> charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1775
> 019 0x7f47ca9bac94 in SQLCLI_Prepare2 (cliGlobals=0x27bcbb0,
> statement_id=0x370a9c8, sql_source=0x7f47b8594610, gencode_ptr=0x0,
> gencode_len=0, ret_gencode_len=0x0, query_cost_info=0x370abf8,
> query_comp_stats_info=0x370ac48, uniqueStmtId=,
> uniqueStmtIdLen=0x370ab2c, flags=1) at ../cli/Cli.cpp:5927
> 020 0x7f47caa1b1ae in SQL_EXEC_Prepare2 (statement_id=0x370a9c8,
> sql_source=0x7f47b8594610, gencode_ptr=0x0, gencode_len=0,
> ret_gencode_len=0x0, query_cost_info=0x370abf8, comp_stats_info=0x370ac48,
> uniqueStmtId=0x370ab30 "", 

[jira] [Created] (TRAFODION-1910) mxosrvr crashes on Hive query after reconnect

2016-03-29 Thread Hans Zeller (JIRA)
Hans Zeller created TRAFODION-1910:
--

 Summary: mxosrvr crashes on Hive query after reconnect
 Key: TRAFODION-1910
 URL: https://issues.apache.org/jira/browse/TRAFODION-1910
 Project: Apache Trafodion
  Issue Type: Bug
  Components: sql-exe
Affects Versions: 1.3-incubating
Reporter: Hans Zeller
Assignee: Hans Zeller


This is a problem Wei-Shiun found when running tests with many connections that 
use Hive queries. He sees intermittent core dumps with this stack trace:

#0 0x7f47cb0dd625 in raise () from /lib64/libc.so.6
001 0x7f47cb0ded8d in abort () from /lib64/libc.so.6
002 0x7f47cc613a55 in os::abort(bool) ()
   from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
003 0x7f47cc793f87 in VMError::report_and_die() ()
   from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
004 0x7f47cc61896f in JVM_handle_linux_signal ()
   from /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64/server/libjvm.so
005 
006 0x7f47c92bd5ee in HiveMetaData::recordError (this=0x7f47a5e50088,
errCode=122, errMethodName=0x7f47c935aaa3 "HiveClient_JNI::getTableStr()")
at ../executor/hiveHook.cpp:228
007 0x7f47c92bf613 in HiveMetaData::getTableDesc (this=0x7f47a5e50088,
schemaName=0x7f47b858e798 "mytest5", tblName=0x7f47b858e7c8 "mytable")
at ../executor/hiveHook.cpp:806
008 0x7f47c4056307 in NATableDB::get (this=0x7f47b652d3c0, corrName=...,
bindWA=0x7f47b85912d0, inTableDescStruct=)
at ../optimizer/NATable.cpp:8377
009 0x7f47c3db0743 in BindWA::getNATable (this=0x7f47b85912d0,
corrName=..., catmanCollectTableUsages=1, inTableDescStruct=0x0)
at ../optimizer/BindRelExpr.cpp:1514
010 0x7f47c3db3290 in Describe::bindNode (this=0x7f47a2aae440,
bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:13565
011 0x7f47c3d989f7 in RelExpr::bindChildren (this=0x7f47a2aaf5f8,
bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:2258
012 0x7f47c3dccbce in RelRoot::bindNode (this=0x7f47a2aaf5f8,
bindWA=0x7f47b85912d0) at ../optimizer/BindRelExpr.cpp:5204
013 0x7f47c577e84e in CmpMain::compile (this=0x7f47b8593c40,
input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
at ../sqlcomp/CmpMain.cpp:2071
014 0x7f47c578168c in CmpMain::sqlcomp (this=0x7f47b8593c40,
input_str=0x7f47a5e0b690 "showddl mytable", charset=15,
queryExpr=@0x7f47b8593b78, gen_code=0x7f47a5e0c1a8,
gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00, phase=CmpMain::END,
fragmentDir=0x7f47b8593d98, op=3004, useQueryCache=1,
cacheable=0x7f47b8593b88, begTime=0x7f47b8593b60, shouldLog=0)
at ../sqlcomp/CmpMain.cpp:1684
015 0x7f47c5782998 in CmpMain::sqlcomp (this=0x7f47b8593c40, input=...,
gen_code=0x7f47a5e0c1a8, gen_code_len=0x7f47a5e0c1a0, heap=0x7f47b70bbc00,
phase=CmpMain::END, fragmentDir=0x7f47b8593d98, op=3004)
at ../sqlcomp/CmpMain.cpp:819
016 0x7f47c33a8898 in CmpStatement::process (this=0x7f47a5e52f10,
sqltext=) at ../arkcmp/CmpStatement.cpp:499
017 0x7f47c339b48c in CmpContext::compileDirect (this=0x7f47b6525090,
data=0x7f47b7112db8 "\200", data_len=144, outHeap=0x7f47b7b2e128,
charset=15, op=CmpMessageObj::SQLTEXT_COMPILE, gen_code=@0x7f47b8594320,
gen_code_len=@0x7f47b8594328, parserFlags=4194304, parentQid=0x0,
parentQidLen=0, diagsArea=0x7f47b7112e50) at ../arkcmp/CmpContext.cpp:841
018 0x7f47caa0dd38 in CliStatement::prepare2 (this=0x7f47b70d4028,
source=0x7f47b711ab18 "showddl mytable", diagsArea=...,
passed_gen_code=, passed_gen_code_len=3081953576,
charset=15, unpackTdbs=1, cliFlags=129) at ../cli/Statement.cpp:1775
019 0x7f47ca9bac94 in SQLCLI_Prepare2 (cliGlobals=0x27bcbb0,
statement_id=0x370a9c8, sql_source=0x7f47b8594610, gencode_ptr=0x0,
gencode_len=0, ret_gencode_len=0x0, query_cost_info=0x370abf8,
query_comp_stats_info=0x370ac48, uniqueStmtId=,
uniqueStmtIdLen=0x370ab2c, flags=1) at ../cli/Cli.cpp:5927
020 0x7f47caa1b1ae in SQL_EXEC_Prepare2 (statement_id=0x370a9c8,
sql_source=0x7f47b8594610, gencode_ptr=0x0, gencode_len=0,
ret_gencode_len=0x0, query_cost_info=0x370abf8, comp_stats_info=0x370ac48,
uniqueStmtId=0x370ab30 "", uniqueStmtIdLen=0x370ab2c, flags=1)
at ../cli/CliExtern.cpp:5034
021 0x7f47cd4e31d9 in SRVR::WSQL_EXEC_Prepare2 (statement_id=0x370a9c8,
sql_source=, gencode_ptr=,
gencode_len=, ret_gencode_len=,
query_cost_info=, comp_stats_info=0x370ac48,
uniqueQueryId=0x370ab30 "", uniqueQueryIdLen=0x370ab2c, 

[jira] [Created] (TRAFODION-1909) Output of showddl differs from output of jdbc client (t4)

2016-03-29 Thread Pierre Smits (JIRA)
Pierre Smits created TRAFODION-1909:
---

 Summary: Output of showddl differs from output of jdbc client (t4)
 Key: TRAFODION-1909
 URL: https://issues.apache.org/jira/browse/TRAFODION-1909
 Project: Apache Trafodion
  Issue Type: Bug
  Components: client-jdbc-t4
Affects Versions: 1.3-incubating
Reporter: Pierre Smits


As per the thread on the dev mailing list; see http://markmail.org/message/tw73lpq3k3fkldvr

This is returned in the application (Apache OFBiz)
{code}
Column [ESTIMATED_COST] of table [OFBIZ.WORK_EFFORT_GOOD_STANDARD] of
entity [WorkEffortGoodStandard] is of type [BIGINT] in the database, but is
defined as type [NUMERIC] in the entity definition.
{code}

This is returned with the showddl command:
{code}
>>showddl table OFBIZ.WORK_EFFORT_GOOD_STANDARD;

CREATE TABLE TRAFODION.OFBIZ.WORK_EFFORT_GOOD_STANDARD
  (
    WORK_EFFORT_ID               VARCHAR(20) CHARACTER SET ISO88591 COLLATE
      DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
  , PRODUCT_ID                   VARCHAR(20) CHARACTER SET ISO88591 COLLATE
      DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
  , WORK_EFFORT_GOOD_STD_TYPE_ID VARCHAR(20) CHARACTER SET ISO88591 COLLATE
      DEFAULT NO DEFAULT NOT NULL NOT DROPPABLE SERIALIZED
  , FROM_DATE                    TIMESTAMP(6) NO DEFAULT NOT NULL NOT
      DROPPABLE NOT SERIALIZED
  , THRU_DATE                    TIMESTAMP(6) DEFAULT NULL NOT SERIALIZED
  , STATUS_ID                    VARCHAR(20) CHARACTER SET ISO88591 COLLATE
      DEFAULT DEFAULT NULL SERIALIZED
  , ESTIMATED_QUANTITY           DOUBLE PRECISION DEFAULT NULL NOT SERIALIZED
  , ESTIMATED_COST               NUMERIC(18, 2) DEFAULT NULL SERIALIZED
  , LAST_UPDATED_STAMP           TIMESTAMP(6) DEFAULT NULL NOT SERIALIZED
  , LAST_UPDATED_TX_STAMP        TIMESTAMP(6) DEFAULT NULL NOT SERIALIZED
  , CREATED_STAMP                TIMESTAMP(6) DEFAULT NULL NOT SERIALIZED
  , CREATED_TX_STAMP             TIMESTAMP(6) DEFAULT NULL NOT SERIALIZED
  , PRIMARY KEY (WORK_EFFORT_ID ASC, PRODUCT_ID ASC,
      WORK_EFFORT_GOOD_STD_TYPE_ID ASC, FROM_DATE ASC)
  )
;
CREATE INDEX WKEFF_GDSTD_PROD ON TRAFODION.OFBIZ.WORK_EFFORT_GOOD_STANDARD
  (
    PRODUCT_ID ASC
  )
;
CREATE INDEX WKEFF_GDSTD_STTS ON TRAFODION.OFBIZ.WORK_EFFORT_GOOD_STANDARD
  (
    STATUS_ID ASC
  )
;
CREATE INDEX WKEFF_GDSTD_TYPE ON TRAFODION.OFBIZ.WORK_EFFORT_GOOD_STANDARD
  (
    WORK_EFFORT_GOOD_STD_TYPE_ID ASC
  )
;
CREATE INDEX WKEFF_GDSTD_WEFF ON TRAFODION.OFBIZ.WORK_EFFORT_GOOD_STANDARD
  (
    WORK_EFFORT_ID ASC
  )
;
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TRAFODION-1854) Trafodion cannot start on nodes with uppercase hostname.

2016-03-29 Thread liu ming (JIRA)

 [ 
https://issues.apache.org/jira/browse/TRAFODION-1854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liu ming resolved TRAFODION-1854.
-
Resolution: Fixed

> Trafodion cannot start on nodes with uppercase hostname.
> 
>
> Key: TRAFODION-1854
> URL: https://issues.apache.org/jira/browse/TRAFODION-1854
> Project: Apache Trafodion
>  Issue Type: Bug
>  Components: foundation
>Reporter: Eason Zhang
>Assignee: liu ming
> Fix For: 2.0-incubating
>
>
> Set the nodes' hostnames to uppercase; in this case they are set to 'H1', 'H2', and 'H3'.
> Trafodion's .bashrc is set correctly:
>  
> # These env vars define all nodes in the cluster
> export NODE_LIST=" H1 H2 H3"
> export MY_NODES=" -w H1 -w H2 -w H3"
>  
> /etc/hosts is also set with 'H1', 'H2', and 'H3'.
> sqconfig is also set with the uppercase hostnames.
> When starting the Trafodion instance, it reports the error below in sqmon.log:
> Processing cluster.conf on local host H1
> [SHELL] Shell/shell Version 1.0.1 EsgynDB_Enterprise Release 2.0.0 (Build 
> release [EsgynDB-2.0.0-0-g2ba9dde_Bld402], date 20151121_0002)
>  
> [SHELL] %
> ! Start the monitor processes across the cluster
> startup
> [SHELL] %startup
> [SHELL] Cannot start monitor from node 'H1' since it is not member of the 
> cluster configuration or 'hostname' string does not match configuration 
> string.
> [SHELL] Configuration node names:
> [SHELL]'h1'
> [SHELL]'h2'
> [SHELL]'h3'
> [SHELL] Failed to start environment!
>  
> [SHELL] %
> exit
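
The mismatch above comes down to comparing an uppercase hostname string against lowercase node names in the cluster configuration. A minimal standalone C++ sketch of a case-insensitive node-name match (illustrative only; not the monitor's actual code) looks like this:

{code}
// Standalone sketch: match a hostname against configured node names
// case-insensitively. Illustrative only; not the monitor's actual code.
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

static std::string toLower(std::string s) {
  std::transform(s.begin(), s.end(), s.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  return s;
}

// True if 'hostname' matches any configured node name, ignoring case.
bool isClusterMember(const std::string &hostname,
                     const std::vector<std::string> &configNodes) {
  const std::string host = toLower(hostname);
  for (const auto &node : configNodes)
    if (toLower(node) == host)
      return true;
  return false;
}

int main() {
  std::vector<std::string> configNodes = {"h1", "h2", "h3"};
  std::cout << std::boolalpha
            << isClusterMember("H1", configNodes) << "\n";  // prints: true
  return 0;
}
{code}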



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)