RE: question

2017-12-04 Thread Selva Govindarajan
You can also refer to http://trafodion.apache.org/docs/load_transform/index.html, which 
describes the trickle and bulk load concepts.

Selva

From: Xu, Kai-Hua (Kevin) [mailto:kaihua...@esgyn.cn]
Sent: Monday, December 4, 2017 1:22 AM
To: user@trafodion.incubator.apache.org
Subject: RE: question

Guihui,

If you need a large column, there is a document for LOB data: 
http://trafodion.apache.org/docs/lob_guide/index.html.
If you want to load data via JDBC, the driver follows the standard API; please use batch 
insertion. https://github.com/kevinxu021/sample/tree/master/jdbc shows a 
simple example.
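
If it helps, below is a minimal sketch of that batch-insert pattern (this is not the 
linked sample; the T4 driver class, connection URL, credentials, table, and batch size 
are assumptions to adjust for your environment):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Assumed Trafodion T4 driver class and URL format; adjust host, port,
        // credentials, and table for your environment.
        Class.forName("org.trafodion.jdbc.t4.T4Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:t4jdbc://myhost:23400/:", "dbuser", "dbpassword")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO mytable (id, name) VALUES (?, ?)")) {
                for (int i = 0; i < 100000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    if ((i + 1) % 1000 == 0) {   // send 1000 rows per round trip
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch();               // flush any remaining rows
                conn.commit();
            }
        }
    }
}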

Best Regards,
Kevin Xu

From: Gui, Hui [mailto:hui@esgyn.cn]
Sent: Monday, December 04, 2017 5:12 PM
To: 
user@trafodion.incubator.apache.org
Subject: question

Is there an API for high-volume (bulk) data importing in the JDBC driver of Trafodion?


RE: DCS is not started

2017-11-28 Thread Selva Govindarajan
Find the first core file produced when you attempted to do sqstart. It can be 
done using

ls -ltr core.*

And compare the timestamp of the core file with the time when sqstart was 
issued.

Then issue

file <core-file>

to find the program that produced the core file.

gdb <program> <core-file>
bt

If gdb is not installed on the cluster, you might need to install it via yum 
install gdb.

Selva

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Tuesday, November 28, 2017 9:34 PM
To: user@trafodion.incubator.apache.org; Yuan 
Cc: Eric Owhadi ; Narendra Goyal 

Subject: RE: DCS is not started

Checking the related classpath shows the Trafodion installer is OK.

Checking trafodion.dtm.log, I found errors like the one below, and many core dumps in 
$TRAF_HOME/sql/scripts:

2017-11-29 00:30:01,360 ERROR transactional.TransactionManager: doAbortX 
UnknownTransactionException for transaction 1691649 participantNum 6 Location 
TRAFODION.TPCC.STOCK,,1498022419057.1fc4d0ba0a5191b0325ea196985616d4.
org.apache.hadoop.hbase.client.transactional.UnknownTransactionException: 
java.io.IOException: UnknownTransactionException
at 
org.apache.hadoop.hbase.client.transactional.TransactionManager$TransactionManagerCallable.doAbortX(TransactionManager.java:973)
at 
org.apache.hadoop.hbase.client.transactional.TransactionManager$10.call(TransactionManager.java:2405)
at 
org.apache.hadoop.hbase.client.transactional.TransactionManager$10.call(TransactionManager.java:2403)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

-rw-------. 1 trafodion trafodion    6422528 Jun 21 21:10 core.11978
-rw-------. 1 trafodion trafodion   14561280 Jun 21 21:10 core.11979
-rw-------. 1 trafodion trafodion   14561280 Jun 21 21:10 core.11980
-rw-------. 1 trafodion trafodion    6422528 Jun 21 21:10 core.12130
-rw-------. 1 trafodion trafodion    6422528 Jun 21 21:10 core.12151
-rw-------. 1 trafodion trafodion  147271680 Nov 28 22:16 core.1506
-rw-r--r--. 1 trafodion trafodion  162172000 Nov 28 04:32 core.2017-11-28_04-24-31.ZSM000.6663.mxssmp
-rw-------. 1 trafodion trafodion 2237276160 Jun 21 04:44 core.24494
-rw-------. 1 trafodion trafodion  987738112 Sep  6 02:36 core.3926
-rw-------. 1 trafodion trafodion  986812416 Sep  6 02:36 core.3970
-rw-------. 1 trafodion trafodion 2353975296 Jun 21 09:03 core.51428
-rw-------. 1 trafodion trafodion      12192 Nov 28 22:17 core.5279
-rw-------. 1 trafodion trafodion      12192 Nov 28 23:06 core.55161
-rw-------. 1 trafodion trafodion    1552384 Nov 28 22:05 core.5604
-rw-------. 1 trafodion trafodion      32672 Nov 28 23:11 core.57098
-rw-------. 1 trafodion trafodion   48193536 Nov 28 23:11 core.57646
-rw-------. 1 trafodion trafodion      12192 Nov 28 23:07 core.58026
-rw-------. 1 trafodion trafodion  885874688 Nov 28 22:51 core.58335
-rw-------. 1 trafodion trafodion  142344192 Nov 28 23:07 core.58551
-rw-------. 1 trafodion trafodion  133554176 Nov 28 22:17 core.6053
-rw-------. 1 trafodion trafodion      12192 Nov 28 23:07 core.61491
-rw-------. 1 trafodion trafodion      12192 Nov 28 22:16 core.65350

Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com<mailto:jack.hu...@dell.com>




From: Prashanth Vasudev [mailto:prashanth.vasu...@esgyn.com]
Sent: Wednesday, November 29, 2017 1:25 PM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>;
 Yuan mailto:yuan@esgyn.cn>>
Cc: Eric Owhadi mailto:eric.owh...@esgyn.com>>; Narendra 
Goyal mailto:narendra.go...@esgyn.com>>
Subject: RE: DCS is not started

1. Please also check to make sure all steps in the installer completed 
successfully.

2. From the shell, please check that the hbase classpath includes the trx jars.
$  hbase classpath | grep trx

Prashanth

From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: Tuesday, November 28, 2017 9:21 PM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>;
 Yuan mailto:yuan@esgyn.cn>>
Cc: Eric Owhadi mailto:eric.owh...@esgyn.com>>; Narendra 
Goyal mailto:narendra.go...@esgyn.com>>
Subject: RE: DCS is not started

It looks like the Transaction Manager failed to come up for some reason. The 
log directory $TRAF_HOME/logs should have files starting with tm_.log and 
trafodion_dtm.log.  These log files might give some clue to the problem.

Unless the Transaction Manager comes up, other processes will not be started.

Also check if there is a core file of the TM program. The core file can be found in 
the directory pointed to by /proc/sys/kernel/core_pattern. If there is no 
directory configured, the core file may be found at $TRAF_HOME/sql/scripts.

Selva

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: T

RE: DCS is not started

2017-11-28 Thread Selva Govindarajan
It looks like the Transaction Manager failed to come up for some reason. The 
log directory $TRAF_HOME/logs should have files starting with tm_.log and 
trafodion_dtm.log.  These log files might give some clue to the problem.

Unless the Transaction Manager comes up, other processes will not be started.

Also check if there is a core file of the TM program. The core file can be found in 
the directory pointed to by /proc/sys/kernel/core_pattern. If there is no 
directory configured, the core file may be found at $TRAF_HOME/sql/scripts.

Selva

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Tuesday, November 28, 2017 9:09 PM
To: Yuan ; user@trafodion.incubator.apache.org
Cc: Eric Owhadi ; Narendra Goyal 

Subject: RE: DCS is not started

Only 1 Trafodion node, but 128 GB of memory is configured for the server. Is that enough?


Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com




From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Wednesday, November 29, 2017 1:07 PM
To: Huang, Jack mailto:jack.hu...@emc.com>>; 
user@trafodion.incubator.apache.org
Cc: Eric Owhadi mailto:eric.owh...@esgyn.com>>; Narendra 
Goyal mailto:narendra.go...@esgyn.com>>
Subject: RE: DCS is not started

How many Trafodion nodes do you have? How much memory does each node have? I think 
you configured too many mxosrvrs.


Best regards,
Yuan

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Wednesday, November 29, 2017 12:16 PM
To: 
user@trafodion.incubator.apache.org
Cc: Liu, Yuan (Yuan) mailto:yuan@esgyn.cn>>; Eric Owhadi 
mailto:eric.owh...@esgyn.com>>; Narendra Goyal 
mailto:narendra.go...@esgyn.com>>
Subject: RE: DCS is not started

Sigh! Several minutes after ckillall and sqstart, the Trafodion env is still 
down!

[trafodion@trafodion logs]$ sqcheck

*** Checking Trafodion Environment ***

Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 3.

The Trafodion environment is not up at all, or partially up and not 
operational. Check the logs.

Process Configured  Actual  Down
--- --  --  
DTM 2   0   \$TM0 \$TM1
RMS 4   0   \$ZSC000 \$ZSC001 \$ZSM000 \$ZSM001
DcsMaster   1   1
DcsServer   1   0   1
mxosrvr 100 0   100
RestServer  1   1


Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com




From: Huang, Jack
Sent: Wednesday, November 29, 2017 10:13 AM
To: 'user@trafodion.incubator.apache.org' 
mailto:user@trafodion.incubator.apache.org>>
Cc: 'Liu, Yuan (Yuan)' mailto:yuan@esgyn.cn>>; 'Eric 
Owhadi' mailto:eric.owh...@esgyn.com>>; 'Narendra Goyal' 
mailto:narendra.go...@esgyn.com>>
Subject: RE: DCS is not started


Thanks all. ckillall/sqstart is working now.

Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com




From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Wednesday, November 29, 2017 10:07 AM
To: 
user@trafodion.incubator.apache.org
Subject: RE: DCS is not started

Please use cstat to check whether any processes still exist. If yes, use ckillall to 
kill all processes and then run cstat again.


Best regards,
Yuan

From: Narendra Goyal [mailto:narendra.go...@esgyn.com]
Sent: Wednesday, November 29, 2017 10:05 AM
To: 
user@trafodion.incubator.apache.org
Subject: RE: DCS is not started

Hi Jack,

Please try:

  *   ckillall
      *   this should kill all the orphan processes in the environment
  *   sqstart

-Narendra

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Tuesday, November 28, 2017 6:03 PM
To: 
user@trafodion.incubator.apache.org
Subject: DCS is not started

Hi,
My trafodion env is down, how can I recover the trafodion environment?

[trafodion@trafodion ~]$ sqcheck

*** Checking Trafodion Environment ***

Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.

The Trafodion environment is not up at all, or partially up and not 
operational. Check the logs.

Process Configured  Actual  Down
--- --  --  
DTM 2   0   \$TM0 \$TM1
RMS 4   0   \$ZSC000 \$ZSC001 \$ZSM000 \$ZSM001
DcsMaster   1   0   1
DcsServer   1   0   1
mxosrvr 100 0   100
RestServer  1   1


The Trafodion environment is down.
[trafodion@trafodion ~]$ dcsstart

*** Checking Trafodion Environment ***

Checking if processes are up.
Checking 

RE: trafodion mxosrvr down

2017-09-27 Thread Selva Govindarajan
The attached dump file was incomplete; it didn't have a stack trace. I thought 
that I had sent this response earlier, but forgot to hit the send button.

Selva

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Wednesday, September 27, 2017 4:16 AM
To: user@trafodion.incubator.apache.org
Subject: RE: trafodion mxosrvr down

Any update on it?


Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com<mailto:jack.hu...@dell.com>




From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Monday, September 25, 2017 9:39 AM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RE: trafodion mxosrvr down

Hi,
Please see the output and the attached dump file.

[root@trafodion apache-trafodion-2.1.0]# ls -lt core.*
-rw-------. 1 trafodion trafodion 2134945792 Sep 24 00:46 core.54809
-rw-------. 1 trafodion trafodion 2100211712 Sep 24 00:43 core.43521
-rw-------. 1 trafodion trafodion 2109403136 Sep 24 00:43 core.52966
-rw-------. 1 trafodion trafodion 2096242688 Sep 24 00:43 core.38905
-rw-------. 1 trafodion trafodion 2102181888 Sep 24 00:43 core.45648
-rw-r--r--. 1 trafodion trafodion  271522520 Jun 21 09:48 core.2017-06-21_09-48-29.ZSM000.16632.mxssmp
-rw-r--r--. 1 trafodion trafodion 4030914880 Jun 21 09:36 core.2017-06-21_09-36-27.Z000T2R.34396.tdm_udrserv

[trafodion@trafodion ~]$ sqvers -u
TRAF_HOME=/home/trafodion/apache-trafodion-2.1.0
who@host=trafodion@trafodion
JAVA_HOME=/usr/jdk64/jdk1.8.0_60
linux=2.6.32-642.el6.x86_64
redhat=6.8
NO patches
Most common Apache_Trafodion Release 2.1.0 (Build release [2.1.0-0-gdc3d97f], 
branch 20170406-no_branch, date 06Apr17)
UTT count is 2
[4] Apache Trafodion Release 2.1.0 (Build release [2.1.0-0-gdc3d97f], 
branch release2.1, date 06Apr17)
  export/lib/jdbcT2.jar
  export/lib/jdbcT4-2.1.0.jar
  export/lib/jdbcT4.jar
  export/lib/lib_mgmt.jar
[15]Apache_Trafodion Release 2.1.0 (Build release [2.1.0-0-gdc3d97f], 
branch release2.1, date 06Apr17)
  export/lib/hbase-trx-apache1_0-2.1.0.jar
  export/lib/hbase-trx-apache1_1-2.1.0.jar
  export/lib/hbase-trx-apache1_2-2.1.0.jar
  export/lib/hbase-trx-cdh5_4-2.1.0.jar
  export/lib/hbase-trx-cdh5_5-2.1.0.jar
  export/lib/hbase-trx-cdh5_7-2.1.0.jar
  export/lib/hbase-trx-hdp2_3-2.1.0.jar
  export/lib/sqmanvers.jar
  export/lib/trafodion-dtm-apache-2.1.0.jar
  export/lib/trafodion-dtm-cdh-2.1.0.jar
  export/lib/trafodion-dtm-hdp-2.1.0.jar
  export/lib/trafodion-sql-apache-2.1.0.jar
  export/lib/trafodion-sql-cdh-2.1.0.jar
  export/lib/trafodion-sql-hdp-2.1.0.jar
  export/lib/trafodion-utility-2.1.0.jar


[trafodion@trafodion ~]$ sqcheck

*** Checking Trafodion Environment ***

Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 1.

The Trafodion environment is up!


Process Configured  Actual  Down
--- --  --  
DTM 2   2
RMS 4   4
DcsMaster   1   1
DcsServer   1   0   1
mxosrvr 256 2   254
RestServer  1   1


Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com<mailto:jack.hu...@dell.com>




From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: Friday, September 22, 2017 1:58 AM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RE: trafodion mxosrvr down

Can you please get the stack trace of some of the core files.

In the directory where core files are found, issue

ls -lt core.*

The cores with earlier timestamp will be displayed at the end.

gdb mxosrvr <core-file>
thread apply all bt

Please send the stack trace of a few of these core files.
Please issue the following at the shell prompt to get the version of Trafodion installed:
sqvers -u

and send the output of this command too.

Selva

From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Thursday, September 21, 2017 12:05 AM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RE: trafodion mxosrvr down

Mxosrvr is managed by dcsserver. You should run dcsstop/dcsstart and restart 
dcs, or just run dcsstart.

Best regards,
Yuan

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Thursday, September 21, 2017 2:26 PM
To: 
user@trafodion.incubator.apache.org<mailto:user@trafodion.incubator.apache.org>
Subject: RE: trafodion mxosrvr down

Several core dumps were found. Does anyone know how to restart the mxosrvr?

[root@trafodion apache-trafodion-2.1.0]# file core.40973
core.40973: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 
'mxosrvr -ZKHOST trafodion:2181 -RZ trafodion:1:24 -ZKPNODE /trafodion -CNGTO 
60', real uid: 1003, effective uid: 1003,

RE: trafodion mxosrvr down

2017-09-21 Thread Selva Govindarajan
Can you please get the stack trace of some of the core files.

In the directory where core files are found, issue

ls -lt core.*

The cores with earlier timestamp will be displayed at the end.

gdb mxosrvr <core-file>
thread apply all bt

Please send the stack trace of a few of these core files.
Please issue the following at the shell prompt to get the version of Trafodion installed:
sqvers -u

and send the output of this command too.

Selva

From: Liu, Yuan (Yuan) [mailto:yuan@esgyn.cn]
Sent: Thursday, September 21, 2017 12:05 AM
To: user@trafodion.incubator.apache.org
Subject: RE: trafodion mxosrvr down

Mxosrvr is managed by dcsserver. You should run dcsstop/dcsstart and restart 
dcs, or just run dcsstart.

Best regards,
Yuan

From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Thursday, September 21, 2017 2:26 PM
To: 
user@trafodion.incubator.apache.org
Subject: RE: trafodion mxosrvr down

Several core dumps were found. Does anyone know how to restart the mxosrvr?

[root@trafodion apache-trafodion-2.1.0]# file core.40973
core.40973: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 
'mxosrvr -ZKHOST trafodion:2181 -RZ trafodion:1:24 -ZKPNODE /trafodion -CNGTO 
60', real uid: 1003, effective uid: 1003, real gid: 502, effective gid: 502, 
execfn: '/home/trafodion/apache-trafodion-2.1.0/export/bin64/mxosrvr', 
platform: 'x86_64'
[root@trafodion apache-trafodion-2.1.0]# gdb 
/home/trafodion/apache-trafodion-2.1.0/export/bin64/mxosrvr  core.40973

Core was generated by `mxosrvr -ZKHOST trafodion:2181 -RZ trafodion:1:24 
-ZKPNODE /trafodion -CNGTO 60'.
Program terminated with signal 6, Aborted.
#0  0x003b30a325e5 in raise () from /lib64/libc.so.6


Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com




From: Huang, Jack [mailto:jack.hu...@dell.com]
Sent: Thursday, September 21, 2017 2:18 PM
To: 
user@trafodion.incubator.apache.org
Subject: trafodion mxosrvr down

Hi trafodioner,
I ran the Trafodion database workload with HammerDB for about 40 hours.
Initially the actual mxosrvr count was 254, but now they are all down. Would you help 
to triage it? Now the database cannot accept any connections.


[trafodion@trafodion ~]$ sqcheck

*** Checking Trafodion Environment ***

Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.

The Trafodion environment is up!


Process Configured  Actual  Down
--- --  --  
DTM 2   2
RMS 4   4
DcsMaster   1   1
DcsServer   1   0   1
mxosrvr 256 2   254
RestServer  1   1




Jack Huang
Dell EMC | CTD MRES Cyclone Group
mobile +86-13880577652
jack.hu...@dell.com






RE: is it possible to pass a sql statement to sqlci?

2017-07-28 Thread Selva Govindarajan
Alternatively, you can do this too, like any other command that supports stdin 
and stdout redirection:

echo "select * from \"_MD_\".objects;" | sqlci

Selva
From: Roberta Marton [mailto:roberta.mar...@esgyn.com]
Sent: Friday, July 28, 2017 8:06 AM
To: user@trafodion.incubator.apache.org
Subject: RE: is it possible to pass a sql statement to sqlci?

I know you can run commands like this:

sqlci << eof
env;
values (current_user);
eof


   Roberta


From: Eric Owhadi [mailto:eric.owh...@esgyn.com]
Sent: Friday, July 28, 2017 7:59 AM
To: 
user@trafodion.incubator.apache.org
Subject: is it possible to pass a sql statement to sqlci?

Hi Trafodioneers,
I know we can use sqlci -i filename
And this will trigger an OBEY filename.

But I was wondering if there is an option to directly pass the sql statement, 
without writing it to a file

Sqlci -??? "select count(*) from tbl"

Eric


Re: question on using Batch for a select statement with jdbc

2017-01-17 Thread Selva Govindarajan
Hi Eric,


Looking at the JDBC specification, I am guessing PreparedStatement.setArray 
and the Array interface could do the trick. But setArray is an unsupported 
feature in the Trafodion JDBC drivers.
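
For reference only, a rough sketch of what the standard JDBC Array interface usage 
looks like in general; per the above it is not supported by the Trafodion JDBC 
drivers, and the table, column, and array-consuming SQL syntax below are assumptions 
(the syntax for consuming an array parameter varies by database):

import java.sql.Array;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ArrayParameterSketch {
    // Bind a whole list of keys through one java.sql.Array parameter.
    static void lookup(Connection conn, Integer[] keys) throws SQLException {
        Array arr = conn.createArrayOf("INTEGER", keys);   // standard JDBC 4 call
        try (PreparedStatement ps = conn.prepareStatement(
                // Hypothetical query; array-consuming syntax differs per database.
                "SELECT d.a FROM directories d WHERE d.b = ANY (?)")) {
            ps.setArray(1, arr);                            // unsupported in Trafodion drivers
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        } finally {
            arr.free();
        }
    }
}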


Selva


From: Eric Owhadi 
Sent: Tuesday, January 17, 2017 12:08 PM
To: user@trafodion.incubator.apache.org
Subject: question on using Batch for a select statement with jdbc


Hi Trafodioneers,

Have following jdbc question:



select x.a, d.a from (values (?,?,?,?)) as x(a,b,c,d)

join directories d on

x.b = d.b and x.c= d.c and x.d = d.d;



I was thinking of using Batch to fill the list of values, but then I struggle 
with how to invoke the query. executeBatch does not return a ResultSet, as I 
guess it is used for upserts or inserts?

Can I use executeQuery(), and will that do the trick as long as I use 
addBatch()?



Or it is not possible to use addBatch for this use model?

Thanks in advance for the help,

Eric


Re: jdbc rowset usage?

2017-01-13 Thread Selva Govindarajan
Just prepare the statement with '?' for parameters

Do


PreparedStatement.setXXX () for all parameters

PreparedStatement.addBatch()


Do the above in a loop; when you reach the required rowset size, call


PreparedStatement.executeBatch().
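
A minimal sketch of that flow, assuming the hypothetical table t (a INT, b CHAR(10), 
c INT) from the question below and an arbitrary rowset size:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RowsetInsertSketch {
    static final int ROWSET_SIZE = 1000;   // arbitrary rowset size

    static void insertRows(Connection conn, int[] a, String[] b, int[] c) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO t (a, b, c) VALUES (?, ?, ?)")) {
            int pending = 0;
            for (int i = 0; i < a.length; i++) {
                ps.setInt(1, a[i]);        // PreparedStatement.setXXX() for each parameter
                ps.setString(2, b[i]);
                ps.setInt(3, c[i]);
                ps.addBatch();             // PreparedStatement.addBatch()
                if (++pending == ROWSET_SIZE) {
                    ps.executeBatch();     // flush once the rowset size is reached
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();         // flush the remainder
            }
        }
    }
}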


Selva



From: Eric Owhadi 
Sent: Friday, January 13, 2017 3:52 PM
To: user@trafodion.incubator.apache.org
Subject: jdbc rowset usage?


Hello,

I am struggling to find the JDBC syntax to set a dynamic array parameter:



Assuming I prepared this statement s with "Insert into t (a,b,c) values( 
?[1000], ?[1000], ?[1000])"

How do I set each parameter?



Assuming a is INT, b is CHAR[10], c is INT?



Am I doing something not really supported? Should I use AddBatch instead?

Thanks in advance for the help,
Eric






Re: how to create a table NOT using aligned format?

2016-12-02 Thread Selva Govindarajan
Sorry for the typo. It is 'attribute hbase format'


Selva


____
From: Selva Govindarajan 
Sent: Friday, December 2, 2016 7:53 AM
To: user@trafodion.incubator.apache.org
Subject: Re: how to create a table NOT using aligned format?


You can use the clause 'attribut hebase format' in the create table statement.


Selva



From: Liu, Ming (Ming) 
Sent: Friday, December 2, 2016 1:07 AM
To: user@trafodion.incubator.apache.org
Subject: how to create a table NOT using aligned format?


hi, all,



With the latest Trafodion build, when I create a table, it is in aligned format by 
default.

What is the syntax to create a table that is NOT in aligned format?



thanks,

Ming


Re: how to create a table NOT using aligned format?

2016-12-02 Thread Selva Govindarajan
You can use the clause 'attribut hebase format' in the create table statement.


Selva



From: Liu, Ming (Ming) 
Sent: Friday, December 2, 2016 1:07 AM
To: user@trafodion.incubator.apache.org
Subject: how to create a table NOT using aligned format?


hi, all,



With the latest Trafodion build, when I create a table, it is in aligned format by 
default.

What is the syntax to create a table that is NOT in aligned format?



thanks,

Ming


RE: View of currently running statements

2016-09-18 Thread Selva Govindarajan
Thanks Aaron for trying out Trafodion.

You can refer to the documentation on the Trafodion website at
http://trafodion.apache.org/docs/sql_reference/index.html#sql_runtime_statistics
which has a whole section on how to obtain runtime information
about a query once the query id is known.

You can find the query id of the currently running queries by issuing the
following commands after logging into the trafodion cluster

cd $MY_SQROOT/export/limited-support-tools/LSO
./offender -s active

You can also refer to the README file at this location, or issue ./offender
-help to get information about the many different usages of this feature.

Selva

-Original Message-
From: Aaron Molitor [mailto:amoli...@splicemachine.com]
Sent: Sunday, September 18, 2016 9:33 AM
To: user@trafodion.incubator.apache.org
Subject: View of currently running statements

Hi all, I'm quite new to Trafodion and just trying to understand what
information is available (and where it is) when I am running statements.

I am running a simplified workflow based on the TPC-H benchmark.  The flow
is to:

- create tables
- bulk load data
- count tables
- create indexes
- analyze/collect statistics
- run the 22 TPC-H queries

I am currently on the count tables step. The system got an RPC timeout
exception at the client (trafci), and afterwards dcscheck showed that the DCS
master was down.

Two questions:
Should I increase the rpc timeout for hbase? If so, how far is reasonable?
How can I see what is running, and how it is progressing (sql statements
in general, specifically count(*) and load statements)?

Here is the environment description:
9 nodes all are:
Dell R420
- 2 x E5-2430v2 2.5GHz (6C 12T)
- 64GB RAM
- 4x1TB SATA
Running
- CDH 5.4.10 (ZooKeeper, HDFS, HBase, YARN Hive)
- Trafodion 2.0.1 (binary installer)

Thanks,
Aaron


RE: when a query will be cancelled?

2016-06-08 Thread Selva Govindarajan
Cancel logging is on by default, so you should see some trace of the cancel
command in the $MY_SQROOT/logs/ssmp_.log files. This can provide some clue
as to how the cancel was initiated.



Selva



*From:* Liu, Ming (Ming) [mailto:ming@esgyn.cn]
*Sent:* Tuesday, June 7, 2016 6:01 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: when a query will be cancelled?



The SPJ is using Type 2 driver.



Thanks,

Ming



*From:* Venkat Muthuswamy [mailto:venkat.muthusw...@esgyn.com]
*Sent:* June 8, 2016 8:58
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: when a query will be cancelled?



Hi Ming,



Is your SPJ using the Type 2 JDBC driver or the Type 4 JDBC driver?



Venkat



*From:* Liu, Ming (Ming) [mailto:ming@esgyn.cn]
*Sent:* Tuesday, June 07, 2016 5:48 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: when a query will be cancelled?



Hi, Hans,



In my test SPJ, there is only one statement. It is one big UPDATE query,
which tries to update 10M rows in a single thread, so it takes 1.5 hours.
We will try to narrow down the issue further by debugging, and we want to
check with the community whether there are any special settings around a 1-hour
timeout.
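
For context, a minimal sketch of what such a single-statement SPJ might look like
(the method name, table, and jdbc:default:connection URL are assumptions, not the
actual QUERY1SPJ code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LongRunningSpj {
    // Hypothetical SPJ body: a single long-running UPDATE over the default
    // (Type 2) connection. Table and column names are placeholders.
    public static void query1Spj() throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:default:connection");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("UPDATE bigtable SET col1 = col1 + 1");   // ~10M rows
        }
    }
}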



Ming



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* June 8, 2016 0:42
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: when a query will be cancelled?



Hi Ming,



One thing to test would be where you get the timeout, whether it's in JDBC
done in the SPJ or in the communication between the master executor and the
UDR server. When you simulate it in your dev environment, do you also issue
a single JDBC call that takes more than an hour?



I have to admit I haven't tried it, but hopefully these instructions will
work for SPJs as well:
https://cwiki.apache.org/confluence/display/TRAFODION/Tutorial%3A+The+object-oriented+UDF+interface#Tutorial:Theobject-orientedUDFinterface-DebuggingUDFcode


Hans



On Tue, Jun 7, 2016 at 9:04 AM, Liu, Ming (Ming)  wrote:

Hi,



We have an SPJ that performs some insert/select operations against a big
table, and each time the SPJ runs for 1 hour, the CALL statement
returns error -8007, saying it was cancelled. What can be possible reasons
for a query to be cancelled?



>>CALL QUERY1SPJ();

*** ERROR[8007] The operation has been canceled.

*** ERROR[8811] Trying to close a statement that is either not in the open
state or has not reached EOF.

--- SQL operation failed with errors.



I have a dev environment and simulated a long-running SPJ (not the same as the
SPJ on the real cluster above), but I am not able to reproduce it. The test
SPJ runs for 1 hour 50 minutes and finishes correctly. So this seems not to be a common
SPJ issue, but I am not sure.



Any suggestions to debug this issue will be very appreciated.



Thanks,

Ming


RE: Re: Hive STRING and VARCHAR types

2016-05-26 Thread Selva Govindarajan
I think the official manual for Hive can be accessed at
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-string

Selva



*From:* Qifan Chen [mailto:qifan.c...@esgyn.com]
*Sent:* Thursday, May 26, 2016 8:55 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Re: Hive STRING and VARCHAR types



To drive the support of Hive semantics, it sounds like we need to better understand
the operations allowed on STRING and VARCHAR(n) in Hive.



On Thu, May 26, 2016 at 10:53 AM, Liu, Ming (Ming) 
wrote:

Agree with Qifan.

I think we should map the closest data type from Hive to Trafodion. The current
mapping of Hive STRING to Trafodion VARCHAR may not be proper, since Hive
can save up to 2 GB in a STRING column, but Trafodion VARCHAR has a much
smaller max size. So if there is 2 GB of Hive string data, how can we convert
it into VARCHAR? We can implicitly truncate like Impala does, but that does not seem
good.

But why can't I find an official Hive manual that describes the max length
of STRING?



This seems like a big semantic change. If we map Hive STRING to Trafodion CLOB,
there will be no more confusion, since then STRING and VARCHAR are
two very different types. VARCHAR(n) will be treated as n characters.



Thanks,

Ming





*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* May 26, 2016 23:15
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Hive STRING and VARCHAR types



But CLOB would limit what predicates and functions one could use on the
column, right?



*From:* Qifan Chen [mailto:qifan.c...@esgyn.com]
*Sent:* Thursday, May 26, 2016 5:43 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Hive STRING and VARCHAR types



I wonder if we should consider the same length limit as Hive for a STRING
type, which is 2GB (http://www.folkstalk.com/2011/11/data-types-in-hive.html).
If so, the Trafodion mapping should be CLOB?



--Qifan



On Wed, May 25, 2016 at 11:49 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

From the Cloudera documentation:

*Text table considerations:*

Text data files can contain values that are longer than allowed by the
VARCHAR(n) length limit. Any extra trailing characters are ignored when
Impala processes those values during a query

Will Trafodion behave the same way? Having the maximum limit for the
individual column provides the flexibility and optimal resource
utilization. However, having the limit in number of bytes for String and
number of characters for Varchar could be quite confusing for the user.

Selva





*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Wednesday, May 25, 2016 6:12 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Hive STRING and VARCHAR types



Hi,



Here is a question on Hive data types. Ming is about to add support for
Hive VARCHAR data types, in addition to the existing STRING type, but we
hit a question we wanted to pose to the user community. Here is a proposed
type mapping from Hive to Trafodion:



Hive type     Trafodion type      Max # of chars   Size in bytes   Existing/new   Comments
STRING        VARCHAR(n BYTES)    n/4 to n         n               existing       n is determined by the HIVE_MAX_STRING_LENGTH CQD
VARCHAR(m)    VARCHAR(m)          m                4*m             proposed



Is it ok if we treat STRING and VARCHAR differently?



Thanks,


Ming and Hans





-- 

Regards, --Qifan







-- 

Regards, --Qifan


RE: Hive STRING and VARCHAR types

2016-05-25 Thread Selva Govindarajan
From the Cloudera documentation:

*Text table considerations:*

Text data files can contain values that are longer than allowed by the
VARCHAR(n) length limit. Any extra trailing characters are ignored when
Impala processes those values during a query

Will Trafodion behave the same way? Having the maximum limit for the
individual column provides the flexibility and optimal resource
utilization. However, having the limit in number of bytes for String and
number of characters for Varchar could be quite confusing for the user.

Selva





*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Wednesday, May 25, 2016 6:12 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Hive STRING and VARCHAR types



Hi,



Here is a question on Hive data types. Ming is about to add support for
Hive VARCHAR data types, in addition to the existing STRING type, but we
hit a question we wanted to pose to the user community. Here is a proposed
type mapping from Hive to Trafodion:



Hive type     Trafodion type      Max # of chars   Size in bytes   Existing/new   Comments
STRING        VARCHAR(n BYTES)    n/4 to n         n               existing       n is determined by the HIVE_MAX_STRING_LENGTH CQD
VARCHAR(m)    VARCHAR(m)          m                4*m             proposed



Is it ok if we treat STRING and VARCHAR differently?



Thanks,


Ming and Hans


RE: Upsert semantics

2016-03-22 Thread Selva Govindarajan
Hi Rohit,



I believe your questions were answered earlier in this email thread. Did you
get a chance to read the whole email thread? If you still think that your
questions are not answered, please let me know. I will try my best to
answer them differently.



Selva



*From:* Rohit Jain [mailto:rohit.j...@esgyn.com]
*Sent:* Tuesday, March 22, 2016 9:47 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Unless I misunderstood what is being proposed (that we don’t want
to merge old values with new ones on an UPSERT), that really is a
DELSERT, isn’t it? INSERT if not present, and DELETE and INSERT if it is.



*From:* Rohit Jain [mailto:rohit.j...@esgyn.com]
*Sent:* Tuesday, March 22, 2016 11:40 PM
*To:* 'user@trafodion.incubator.apache.org' <
user@trafodion.incubator.apache.org>
*Subject:* RE: Upsert semantics



So why do we really want to support UPSERT when it is subsumed by MERGE?
Why make life hard for SQL programmers and then add the extra burden of
CQDs on top of it? What does UPSERT do that MERGE cannot?



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com
]
*Sent:* Tuesday, March 22, 2016 10:34 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



I am making yet another attempt to get the pull request 393
https://github.com/apache/incubator-trafodion/pull/393 moving and arrive at
some conclusion.



The existing CQD can be replaced with TRAF_UPSERT_MODE. This CQD takes 3
values and is applicable in both aligned and non-aligned formats.



MERGE   - If the row exists, use the old values for omitted columns

REPLACE - Always replace omitted columns with their default values

OPTIMAL - Choose either MERGE or REPLACE for optimal performance



Default will be MERGE. What do you think?



Selva





*From:* Roberta Marton [mailto:roberta.mar...@esgyn.com]
*Sent:* Friday, March 18, 2016 2:09 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics


Fyi,


JIRA-1843 “Allow USER option(s) to be defined as defaults in a table column
definition”

Exists to allow CURRENT_USER, SESSION_USER, and USER to be defaults.



   Roberta



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Friday, March 18, 2016 1:10 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



That would hold good. However, Trafodion currently doesn’t support
CURRENT_USER as a default attribute that can be given at the time of
create table. But current_user as a function is supported.



Selva



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Friday, March 18, 2016 8:10 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Hi Selva,

I am assuming that CURRENT_USER gets the same treatment as CURRENT, right
Selva? Else  this would be a problem.

Eric



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Friday, March 18, 2016 1:40 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



CURRENT default is treated differently because the column value needs to be
resolved at the time of upsert rather than at the time of select. In case
of non-aligned row format, all default columns other than current default
won’t be populated in hbase table when it is omitted in the upsert/insert
statement.



The motivation for choosing a different default settings for the CQD
TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS based on the row format is to
ensure that the UPSERT gets the best performance by default. Based on the
feedback received till now, the different behavior is unacceptable.



From a performance perspective, TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS
‘OFF’ is quite unfavorable to aligned format, while setting it to ‘ON’ is
unfavorable to non-aligned format.



From a storage perspective, aligned format remains unaffected, but the
‘ON’ setting is unfavorable for non-aligned format.



Hence, my thinking is that the default value for this CQD should be ‘OFF’
as opposed to ‘ON’ as the default suggested by Hans.



Thanks

Hans.

*From:* Qifan Chen [mailto:qifan.c...@esgyn.com]
*Sent:* Thursday, March 17, 2016 5:28 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



I agree omitting default values from storage is an optimization and as such
it should provide the same UPSERT semantics as with other storage
formats/optimizations.



Specifically, our code could insert a default-value-checking expression to verify
that a value is exactly the same as the default value and omit it from
storage (extra overhead), or insert/update otherwise.



The other option would be not checking the default value at all and allow
mixed storage model for default values (fast upsert but some extra storage
overhead).



Any change on the handling of CURRENT defaults should still stick  to ANSI.



Thanks



-Qifan



Sent from my iPhone

RE: Upsert semantics

2016-03-22 Thread Selva Govindarajan
I am making yet another attempt to get the pull request 393
https://github.com/apache/incubator-trafodion/pull/393 moving and arrive at
some conclusion.



The existing CQD can be replaced with TRAF_UPSERT_MODE. This CQD takes 3
values and is applicable in both aligned and non-aligned formats.



MERGE   - If the row exists, use the old values for omitted columns

REPLACE - Always replace omitted columns with their default values

OPTIMAL - Choose either MERGE or REPLACE for optimal performance



Default will be MERGE. What do you think?



Selva





*From:* Roberta Marton [mailto:roberta.mar...@esgyn.com]
*Sent:* Friday, March 18, 2016 2:09 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics


Fyi,


JIRA-1843 “Allow USER option(s) to be defined as defaults in a table column
definition”

Exists to allow CURRENT_USER, SESSION_USER, and USER to be defaults.



   Roberta



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Friday, March 18, 2016 1:10 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



That would hold good. However, Trafodion currently doesn’t support
CURRENT_USER as a default attribute that can be given at the time of
create table. But current_user as a function is supported.



Selva



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Friday, March 18, 2016 8:10 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Hi Selva,

I am assuming that CURRENT_USER gets the same treatment as CURRENT, right
Selva? Else  this would be a problem.

Eric



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Friday, March 18, 2016 1:40 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



CURRENT default is treated differently because the column value needs to be
resolved at the time of upsert rather than at the time of select. In case
of non-aligned row format, all default columns other than current default
won’t be populated in hbase table when it is omitted in the upsert/insert
statement.



The motivation for choosing a different default settings for the CQD
TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS based on the row format is to
ensure that the UPSERT gets the best performance by default. Based on the
feedback received till now, the different behavior is unacceptable.



From a performance perspective, TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS
‘OFF’ is quite unfavorable to aligned format, while setting it to ‘ON’ is
unfavorable to non-aligned format.



From a storage perspective, aligned format remains unaffected, but the
‘ON’ setting is unfavorable for non-aligned format.



Hence, my thinking is that the default value for this CQD should be ‘OFF’
as opposed to ‘ON’ as the default suggested by Hans.



Thanks

Hans.

*From:* Qifan Chen [mailto:qifan.c...@esgyn.com]
*Sent:* Thursday, March 17, 2016 5:28 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



I agree omitting default values from storage is an optimization and as such
it should provide the same UPSERT semantics as with other storage
formats/optimizations.



Specifically, our code could insert a default-value-checking expression to verify
that a value is exactly the same as the default value and omit it from
storage (extra overhead), or insert/update otherwise.



The other option would be not checking the default value at all and allow
mixed storage model for default values (fast upsert but some extra storage
overhead).



Any change on the handling of CURRENT defaults should still stick  to ANSI.



Thanks



-Qifan



Sent from my iPhone


On Mar 18, 2016, at 7:23 AM, Suresh Subbiah 
wrote:

Hi,



To me upsert has meant a faster performing version of insert, with
duplicate key errors ignored. I would claim that most users are drawn
towards upsert since it performs better than insert.

I do not think compatibility with Phoenix syntax is an important
requirement.

As everyone has said we would not want a statement to have different
semantics depending on row format.

I do not quite understand why an omitted CURRENT default is treated
differently from other omitted defaults, so I could see the last column in
the first row below also being transformed to "Replace the given columns",
but this I do feel is not crucial. Whichever is easier for us to implement
as long as it is defined should be sufficient.





With these principles in mind my vote would be for the proposal Hans gave
above.



I am sorry for not stating my opinion clearly during review.



Thank you

Suresh



                         Aligned format,       Aligned format,    Non-aligned, no omitted /      Non-aligned, omitted
                         no omitted columns    omitted columns    omitted non-current columns    current default columns

CQD off                  Replaces row          MERGE              Replace the given columns      MERGE
CQD on (default)         Replaces row          Replaces row       Replace all columns            Replace all columns

RE: Upsert semantics

2016-03-19 Thread Selva Govindarajan
CURRENT default is treated differently because the column value needs to be
resolved at the time of upsert rather than at the time of select. In case
of non-aligned row format, all default columns other than current default
won’t be populated in hbase table when it is omitted in the upsert/insert
statement.



The motivation for choosing a different default settings for the CQD
TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS
based on the row format is to ensure that the UPSERT gets the best
performance by default. Based on the feedback received till now, the
different behavior is unacceptable.



From a performance perspective, TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS
‘OFF’ is quite unfavorable to aligned format, while setting it to ‘ON’ is
unfavorable to non-aligned format.



From a storage perspective, aligned format remains unaffected, but the
‘ON’ setting is unfavorable for non-aligned format.



Hence, my thinking is that the default value for this CQD should be ‘OFF’
as opposed to ‘ON’ as the default suggested by Hans.



Thanks

Hans.

*From:* Qifan Chen [mailto:qifan.c...@esgyn.com]
*Sent:* Thursday, March 17, 2016 5:28 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



I agree omitting default values from storage is an optimization and as such
it should provide the same UPSERT semantics as with other storage
formats/optimizations.



Specifically, our code could insert a default-value-checking expression to verify
that a value is exactly the same as the default value and omit it from
storage (extra overhead), or insert/update otherwise.



The other option would be not checking the default value at all and allow
mixed storage model for default values (fast upsert but some extra storage
overhead).



Any change on the handling of CURRENT defaults should still stick  to ANSI.



Thanks



-Qifan



Sent from my iPhone


On Mar 18, 2016, at 7:23 AM, Suresh Subbiah 
wrote:

Hi,



To me upsert has meant a faster performing version of insert, with
duplicate key errors ignored. I would claim that most users are drawn
towards upsert since it performs better than insert.

I do not think compatibility with Phoenix syntax is an important
requirement.

As everyone has said we would not want a statement to have different
semantics depending on row format.

I do not quite understand why an omitted CURRENT default is treated
differently from other omitted defaults, so I could see the last column in
the first row below also being transformed to "Replace the given columns",
but this I do feel is not crucial. Whichever is easier for us to implement
as long as it is defined should be sufficient.





With these principles in mind my vote would be for the proposal Hans gave
above.



I am sorry for not stating my opinion clearly during review.



Thank you

Suresh



                         Aligned format,       Aligned format,    Non-aligned, no omitted /      Non-aligned, omitted
                         no omitted columns    omitted columns    omitted non-current columns    current default columns

CQD off                  Replaces row          MERGE              Replace the given columns      MERGE
CQD on (default)         Replaces row          Replaces row       Replace all columns            Replace all columns










On Thu, Mar 17, 2016 at 4:58 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

Here is what I found with phoenix, just to compare with phoenix’s behavior
for upsert.



Phoenix expects the table to have a primary key. The upsert specification is:
insert if not present, otherwise update the value in the table. The
list of columns is optional; if not present, the values map to the
columns in the order they are declared in the schema. The values must
evaluate to constants.

create table phoenix.testtbl (c1 integer not null primary key, c2 integer ,
c3 integer) ;

upsert into phoenix.testtbl (c1, c2)  values (1,1) ;

upsert into phoenix.testtbl (c1,c3)  values (1,1) ;

upsert into phoenix.testtbl (c1,c2)  values (1,null) ;



0: jdbc:phoenix:localhost:51670> select * from phoenix.testtbl ;

+--+--+--+

|C1|
C2|C3|

+--+--+--+

| 1|
null   

RE: Upsert semantics

2016-03-19 Thread Selva Govindarajan
I am sorry it was an honest mistake. I meant to sign and didn’t mean to
send the message on behalf of Hans.



Currently the Trafodion engine doesn’t implement Qifan’s suggestion of evaluating
the default expression and omitting it from storage. However, I believe it is
feasible only if the Trafodion engine can somehow delete the existing cell in
the midst of inserting a new row, because the cell (column) could have had a
different value earlier.



The CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS will be set to ‘OFF’ by
default, mimicking the upsert behavior of Phoenix for both aligned and
non-aligned format.

Users can expect upsert to have the same effect for both aligned and non-aligned
format when this CQD is turned on.

Users can set the CQD to ‘ON’ to make upsert perform better in the case of
aligned format.



Selva





*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Thursday, March 17, 2016 11:40 PM
*To:* 'user@trafodion.incubator.apache.org' <
user@trafodion.incubator.apache.org>
*Subject:* RE: Upsert semantics



CURRENT default is treated differently because the column value needs to be
resolved at the time of upsert rather than at the time of select. In case
of non-aligned row format, all default columns other than current default
won’t be populated in hbase table when it is omitted in the upsert/insert
statement.



The motivation for choosing a different default settings for the CQD
TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS based on the row format is to
ensure that the UPSERT gets the best performance by default. Based on the
feedback received till now, the different behavior is unacceptable.



From a performance perspective, TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS
‘OFF’ is quite unfavorable to aligned format, while setting it to ‘ON’ is
unfavorable to non-aligned format.



From a storage perspective, aligned format remains unaffected, but the
‘ON’ setting is unfavorable for non-aligned format.



Hence, my thinking is that the default value for this CQD should be ‘OFF’
as opposed to ‘ON’ as the default suggested by Hans.



Thanks

Hans.

*From:* Qifan Chen [mailto:qifan.c...@esgyn.com ]
*Sent:* Thursday, March 17, 2016 5:28 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



I agree omitting default values from storage is an optimization and as such
it should provide the same UPSERT semantics as with other storage
formats/optimizations.



Specifically, our code could insert a default-value-checking expression to verify
that a value is exactly the same as the default value and omit it from
storage (extra overhead), or insert/update otherwise.



The other option would be not checking the default value at all and allow
mixed storage model for default values (fast upsert but some extra storage
overhead).



Any change on the handling of CURRENT defaults should still stick  to ANSI.



Thanks



-Qifan



Sent from my iPhone


On Mar 18, 2016, at 7:23 AM, Suresh Subbiah 
wrote:

Hi,



To me upsert has meant a faster performing version of insert, with
duplicate key errors ignored. I would claim that most users are drawn
towards upsert since it performs better than insert.

I do not think compatibility with Phoenix syntax is an important
requirement.

As everyone has said we would not want a statement to have different
semantics depending on row format.

I do not quite understand why an omitted CURRENT default is treated
differently from other omitted defaults, so I could see the last column in
the first row below also being transformed to "Replace the given columns",
but this I do feel is not crucial. Whichever is easier for us to implement
as long as it is defined should be sufficient.





With these principles in mind my vote would be for the proposal Hans gave
above.



I am sorry for not stating my opinion clearly during review.



Thank you

Suresh



                         Aligned format,       Aligned format,    Non-aligned, no omitted /      Non-aligned, omitted
                         no omitted columns    omitted columns    omitted non-current columns    current default columns

CQD off                  Replaces row          MERGE              Replace the given columns      MERGE
CQD on (default)         Replaces row          Replaces row       Replace all columns            Replace all columns










On Thu, Mar 17, 2016 at 4:58 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

Here is what I found with phoenix, just to compare with phoenix’s behavior
for upsert.



Phoenix expects the table to have a primary key. Upsert specification is

Inserts if not present and updates otherwise the value in the table. The
list of columns is optional and if n

RE: Upsert semantics

2016-03-19 Thread Selva Govindarajan
Here is what I found with phoenix, just to compare with phoenix’s behavior
for upsert.



Phoenix expects the table to have a primary key. The upsert specification is:
insert if not present, otherwise update the value in the table. The
list of columns is optional; if not present, the values map to the
columns in the order they are declared in the schema. The values must
evaluate to constants.

create table phoenix.testtbl (c1 integer not null primary key, c2 integer ,
c3 integer) ;

upsert into phoenix.testtbl (c1, c2)  values (1,1) ;

upsert into phoenix.testtbl (c1,c3)  values (1,1) ;

upsert into phoenix.testtbl (c1,c2)  values (1,null) ;



0: jdbc:phoenix:localhost:51670> select * from phoenix.testtbl ;

+----+------+----+
| C1 | C2   | C3 |
+----+------+----+
| 1  | null | 1  |
+----+------+----+



In the raw HBase table, I see the following cells after the above 3
upserts. It looks like Phoenix deletes the cell if it is updated with a null
value.



hbase(main):006:0> scan 'PHOENIX.TESTTBL'
ROW               COLUMN+CELL
 \x80\x00\x00\x01 column=0:C3, timestamp=1458249350858, value=\x80\x00\x00\x01
 \x80\x00\x00\x01 column=0:_0, timestamp=1458249392491, value=
1 row(s) in 0.0210 seconds



Selva



*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Thursday, March 17, 2016 11:09 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Hi,



It sounds to me like this makes the semantics of UPSERT depend on physical
row layout, which seems contrary to the philosophy of SQL language design
as a declarative language.



I’d much rather have different syntax for each of these semantics. A
different verb perhaps. Or a clause added to it. Then it is clear to the
application developer what semantics he is getting. He does not have to
examine the physical schema to figure this out.



Dave



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Thursday, March 17, 2016 11:01 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



I wonder if the CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS should be set
to “SYSTEM” by default. It can take ‘SYSTEM’, ‘ON’ or ‘OFF’.



For aligned format – SYSTEM would be treated as ‘ON’ – User can override it
with ‘OFF’ if he/she needs merge semantics.



For non-aligned format – SYSTEM would be treated as ‘OFF’. This would
ensure that all the columns are not inserted all the time into raw hbase
table.  User can avoid merge semantics for omitted default current columns
by overriding it with ‘ON’ semantics.



Selva



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 6:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Thank you, Selva. The JIRA is
https://issues.apache.org/jira/browse/TRAFODION-1896.


Hans



On Tue, Mar 15, 2016 at 6:15 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

Hans,



It didn’t occur to me your proposed change would work. I was always
thinking we shouldn’t be adding the omitted columns in non-aligned format.
  You can file a JIRA and I will fix it.



Selva



*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 6:03 PM


*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



yes, one cqd to switch between one or the other behavior in all formats is
the right way to go.



Doing the other way based on the row format would cause more issues when we

support hybrid format rows where some columns are in aligned format and
others

are not.



anoop



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 5:58 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Again, IMHO that's the wrong way to go, but I hope others will chime in.
Dave gave the best reason, it's a bad idea to make the semantics of UPSERT
depend on the internal format. Here is what I would suggest, using Selva's
table (proposed changes in red - hope Apache won't mangle them):



                         Aligned format,       Aligned format,    Non-aligned, no omitted /      Non-aligned, omitted
                         no omitted columns    omitted columns    omitted non-current columns    current default columns

CQD off                  Replaces row          MERGE              Replace the given columns      MERGE
CQD on (default)         Replaces row          Replaces row       Replace all columns            Replace all columns

RE: Upsert semantics

2016-03-19 Thread Selva Govindarajan
I wonder if the CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS should be set
to “SYSTEM” by default. It can take ‘SYSTEM’, ‘ON’ or ‘OFF’.



For aligned format – SYSTEM would be treated as ‘ON’ – User can override it
with ‘OFF’ if he/she needs merge semantics.



For non-aligned format – SYSTEM would be treated as ‘OFF’. This would
ensure that all the columns are not inserted all the time into raw hbase
table.  User can avoid merge semantics for omitted default current columns
by overriding it with ‘ON’ semantics.



Selva



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 6:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Thank you, Selva. The JIRA is
https://issues.apache.org/jira/browse/TRAFODION-1896.


Hans



On Tue, Mar 15, 2016 at 6:15 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

Hans,



It didn’t occur to me your proposed change would work. I was always
thinking we shouldn’t be adding the omitted columns in non-aligned format.
  You can file a JIRA and I will fix it.



Selva



*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 6:03 PM


*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



yes, one cqd to switch between one or the other behavior in all formats is
the right way to go.



Doing the other way based on the row format would cause more issues when we

support hybrid format rows where some columns are in aligned format and
others

are not.



anoop



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 5:58 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Again, IMHO that's the wrong way to go, but I hope others will chime in.
Dave gave the best reason, it's a bad idea to make the semantics of UPSERT
depend on the internal format. Here is what I would suggest, using Selva's
table (proposed changes in red - hope Apache won't mangle them):



                         Aligned format,       Aligned format,    Non-aligned, no omitted /      Non-aligned, omitted
                         no omitted columns    omitted columns    omitted non-current columns    current default columns

CQD off                  Replaces row          MERGE              Replace the given columns      MERGE
CQD on (default)         Replaces row          Replaces row       Replace all columns            Replace all columns



Hans



On Tue, Mar 15, 2016 at 5:36 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

I believe phoenix doesn’t support insert semantics or the non-null default
value columns.  Trafodion supports insert, upsert, non-null default value
columns as well as current default values like current timestamp and
current user.



Upsert handling in Trafodion is same as phoenix for non-aligned format. For
aligned format it can be controlled via CQD.



                         Aligned format,       Aligned format,    Non-aligned, no omitted /      Non-aligned, omitted
                         no omitted columns    omitted columns    omitted non-current columns    current default columns

Default behavior         Replaces row          MERGE              Replace the given columns      MERGE
With the CQD set to on   Replaces row          Replaces row       Replace the given columns      MERGE



The CQD to be used is TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS (Default is
off). In short, this CQD is a no-op for non-aligned format.



The behavior of the non-aligned format can’t be controlled by the CQD
because we don’t store values for the omitted columns in hbase and hence
when the user switches the CQD settings for upserts with different sets of
omitted columns, we could end up with non-deterministic values for these
columns.

For example: an upsert with the CQD set to ‘on’ and one set of omitted columns,
followed by an upsert with the CQD set to ‘off’ and a different set of omitted
columns.

If we switch to insert all column values all the time for non-aligned
format, then we can let user to control what value needs to be put in for
the omitted column.
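
To make that scenario concrete, a minimal sketch (assuming a hypothetical
non-aligned table T with defaulted columns C1 and C2):

-- create table t (pk int not null primary key, c1 int default 5, c2 int default 7);

control query default TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS 'ON';
upsert into t (pk, c1) values (1, 10);   -- c2 omitted while the CQD is 'on'

control query default TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS 'OFF';
upsert into t (pk, c2) values (1, 20);   -- c1 omitted while the CQD is 'off'

-- depending on which cells each statement actually wrote to HBase, the omitted
-- columns can end up in an unpredictable state for this row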



Selva



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 4:01 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Yes, that's what I had in mind, using a CQD as the syntax:



UPSERT handling ali

RE: Upsert semantics

2016-03-18 Thread Selva Govindarajan
It would hold good. However currently Trafodion doesn’t support
CURRENT_USER as the default attribute that can be given at the time of
create table. But current_user as a function is supported
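
To illustrate the distinction (table and column names are made up; the
commented-out DDL shows the form that is not supported):

-- supported: CURRENT_USER as a function in DML
insert into audit_log (id, who) values (1, CURRENT_USER);

-- not supported: CURRENT_USER as a column default in CREATE TABLE
-- create table audit_log (id int not null primary key,
--                         who varchar(128) default CURRENT_USER);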



Selva



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Friday, March 18, 2016 8:10 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Hi Selva,

I am assuming that CURRENT_USER gets the same treatment as CURRENT, right
Selva? Else  this would be a problem.

Eric



*From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
*Sent:* Friday, March 18, 2016 1:40 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



CURRENT default is treated differently because the column value needs to be
resolved at the time of upsert rather than at the time of select. In case
of non-aligned row format, all default columns other than current defaults
won’t be populated in the hbase table when they are omitted in the upsert/insert
statement.
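
As a hedged sketch of what a "current" default looks like (names are
illustrative, and the exact DDL syntax may differ):

create table events (id int not null primary key,
                     ts timestamp default current_timestamp,
                     v int default 0);

upsert into events (id) values (1);
-- ts has a "current" default, so its value must be resolved when the upsert runs
-- and is written even though it was omitted; v has an ordinary default and, in
-- non-aligned format, is not materialized in HBase when omitted (its default is
-- supplied at select time instead)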



The motivation for choosing a different default settings for the CQD
TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS based on the row format is to
ensure that the UPSERT gets the best performance by default. Based on the
feedback received till now, the different behavior is unacceptable.



From a performance perspective, TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS
‘OFF’ is quite unfavorable to aligned format, while setting it to ‘ON’ is
unfavorable to non-aligned format.



From a storage perspective, aligned format remains unaffected, but the
‘ON’ setting is unfavorable for non-aligned format.



Hence, my thinking is that the default value for this CQD should be ‘OFF’
as opposed to ‘ON’ as the default suggested by Hans.



Thanks

Hans.

*From:* Qifan Chen [mailto:qifan.c...@esgyn.com]
*Sent:* Thursday, March 17, 2016 5:28 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



I agree omitting default values from storage is an optimization and as such
it should provide the same UPSERT semantics as with other storage
formats/optimizations.



Specifically, our code could insert a default-value-checking expression to verify
that a value is exactly the same as the default value and omit it from
storage (extra overhead), or insert/update it otherwise.



The other option would be not checking the default value at all and allow
mixed storage model for default values (fast upsert but some extra storage
overhead).



Any change on the handling of CURRENT defaults should still stick  to ANSI.



Thanks



-Qifan



Sent from my iPhone


On Mar 18, 2016, at 7:23 AM, Suresh Subbiah 
wrote:

Hi,



To me upsert has meant a faster performing version of insert, with
duplicate key errors ignored. I would claim that most users are drawn
towards upsert since it performs better than insert.
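
For example (a small sketch with a made-up table):

create table t1 (k int not null primary key, v int);

insert into t1 values (1, 10);
insert into t1 values (1, 20);   -- fails with a duplicate-key error
upsert into t1 values (1, 20);   -- succeeds: the existing row for k = 1 is overwritten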

I do not think compatibility with Phoenix syntax is an important
requirement.

As everyone has said we would not want a statement to have different
semantics depending on row format.

I do not quite understand why an omitted CURRENT default is treated
differently from other omitted defaults, so I could see the last column in
the first row below also being transformed to "Replace the given columns",
but this I do feel is not crucial. Whichever is easier for us to implement
as long as it is defined should be sufficient.





With these principles in mind my vote would be for the proposal Hans gave
above.



I am sorry for not stating my opinion clearly during review.



Thank you

Suresh



                         Aligned format     Aligned format    Non-Aligned format           Non-Aligned format
                         (no omitted        (with omitted     (no omitted columns /        (with omitted current
                         columns)           columns)          omitted non-current cols)    default columns)

CQD off                  Replaces row       MERGE             Replace the given columns    MERGE

CQD on (default)         Replaces row       Replaces row      Replace all columns          Replace all columns










On Thu, Mar 17, 2016 at 4:58 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

Here is what I found with phoenix, just to compare with phoenix’s behavior
for upsert.



Phoenix expects the table to have a primary key. Upsert specification is

Inserts if not present and updates otherwise the value in the table. The
list of columns is optional and if not present, the values will map to the
column in the order they are declared in the schema. The values must
evaluate to constants.

create table phoenix.testtbl (c1 integer not null primary key, c2 integer ,
c3 integer) ;

upsert into phoenix.testtbl (c1, c2)  values (1,1) ;

upsert into phoenix.testtbl (c1,c3)  values (1,1) ;

upsert into phoenix.testtbl (c1,c2)  

RE: Upsert semantics

2016-03-15 Thread Selva Govindarajan
Hans,



It didn’t occur to me your proposed change would work. I was always
thinking we shouldn’t be adding the omitted columns in non-aligned format.
  You can file a JIRA and I will fix it.



Selva



*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 6:03 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



yes, one cqd to switch between one or the other behavior in all formats is
the right way to go.



Doing the other way based on the row format would cause more issues when we

support hybrid format rows where some columns are in aligned format and
others

are not.



anoop



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 5:58 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Again, IMHO that's the wrong way to go, but I hope others will chime in.
Dave gave the best reason, it's a bad idea to make the semantics of UPSERT
depend on the internal format. Here is what I would suggest, using Selva's
table (proposed changes in red - hope Apache won't mangle them):



                         Aligned format     Aligned format    Non-Aligned format           Non-Aligned format
                         (no omitted        (with omitted     (no omitted columns /        (with omitted current
                         columns)           columns)          omitted non-current cols)    default columns)

CQD off                  Replaces row       MERGE             Replace the given columns    MERGE

CQD on (default)         Replaces row       Replaces row      Replace all columns          Replace all columns



Hans



On Tue, Mar 15, 2016 at 5:36 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

I believe phoenix doesn’t support insert semantics or the non-null default
value columns.  Trafodion supports insert, upsert, non-null default value
columns as well as current default values like current timestamp and
current user.



Upsert handling in Trafodion is same as phoenix for non-aligned format. For
aligned format it can be controlled via CQD.



                         Aligned format     Aligned format    Non-Aligned format           Non-Aligned format
                         (no omitted        (with omitted     (no omitted columns /        (with omitted current
                         columns)           columns)          omitted non-current cols)    default columns)

Default behavior         Replaces row       MERGE             Replace the given columns    MERGE

With the CQD set to on   Replaces row       Replaces row      Replace the given columns    MERGE



The CQD to be used is TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS (Default is
off). In short, this CQD is a no-op for non-aligned format.



The behavior of the non-aligned format can’t be controlled by the CQD
because we don’t store values for the omitted columns in hbase and hence
when the user switches the CQD settings for upserts with different sets of
omitted columns, we could end up with non-deterministic values for these
columns.

For example: an upsert with the CQD set to ‘on’ and one set of omitted columns,
followed by an upsert with the CQD set to ‘off’ and a different set of omitted
columns.

If we switch to insert all column values all the time for non-aligned
format, then we can let user to control what value needs to be put in for
the omitted column.



Selva



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 4:01 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Yes, that's what I had in mind, using a CQD as the syntax:



UPSERT handling              aligned format        non-aligned format
--------------------------   -------------------   -------------------------------
default behavior             replace row           replace row (create all values)
Phoenix behavior (via CQD)   transform to MERGE    insert only specified cols (*)



(*) One issue here is with "default current". In that case we may also need
to transform the statement into a MERGE.



From a performance point of view, the "default behavior" would work better
for aligned format, while the Phoenix behavior would work better for non-aligned
format.



In some cases it won't matter. Selva's code will detect many of these and
automatically choose the faster implementation.



Thanks,


Hans



On Tue, Mar 15, 2016 at 3:41 PM, Dave Birdsall 
wrote:

 Not sure we want the logical semantics of an operation to depend
on the physical layout of the row.



Would be better

RE: Upsert semantics

2016-03-15 Thread Selva Govindarajan
I believe phoenix doesn’t support insert semantics or the non-null default
value columns.  Trafodion supports insert, upsert, non-null default value
columns as well as current default values like current timestamp and
current user.



Upsert handling in Trafodion is same as phoenix for non-aligned format. For
aligned format it can be controlled via CQD.



                         Aligned format     Aligned format    Non-Aligned format           Non-Aligned format
                         (no omitted        (with omitted     (no omitted columns /        (with omitted current
                         columns)           columns)          omitted non-current cols)    default columns)

Default behavior         Replaces row       MERGE             Replace the given columns    MERGE

With the CQD set to on   Replaces row       Replaces row      Replace the given columns    MERGE



The CQD to be used is TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS (Default is
off). In short, this CQD is a no-op for non-aligned format.



The behavior of the non-aligned format can’t be controlled by the CQD
because we don’t store values for the omitted columns in hbase and hence
when the user switches the CQD settings for upserts with different sets of
omitted columns, we could end up with non-deterministic values for these
columns.

For example: an upsert with the CQD set to ‘on’ and one set of omitted columns,
followed by an upsert with the CQD set to ‘off’ and a different set of omitted
columns.

If we switch to insert all column values all the time for non-aligned
format, then we can let user to control what value needs to be put in for
the omitted column.



Selva



*From:* Hans Zeller [mailto:hans.zel...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 4:01 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Upsert semantics



Yes, that's what I had in mind, using a CQD as the syntax:



UPSERT handling              aligned format        non-aligned format
--------------------------   -------------------   -------------------------------
default behavior             replace row           replace row (create all values)
Phoenix behavior (via CQD)   transform to MERGE    insert only specified cols (*)



(*) One issue here is with "default current". In that case we may also need
to transform the statement into a MERGE.



From a performance point of view, the "default behavior" would work better
for aligned format, while the Phoenix behavior would work better for non-aligned
format.



In some cases it won't matter. Selva's code will detect many of these and
automatically choose the faster implementation.



Thanks,


Hans



On Tue, Mar 15, 2016 at 3:41 PM, Dave Birdsall 
wrote:

 Not sure we want the logical semantics of an operation to depend
on the physical layout of the row.



Would be better to have different syntax for each. With an explanation that
one works faster on one format, and the other faster on the other format.



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 3:38 PM


*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Would there be a problem if we implemented the Phoenix semantics for non-aligned
format, and the upsert semantics proposed by Hans for aligned format?

This would allow speed optimization without the user having to know about
subtle differences.

Eric





*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 5:14 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Phoenix has an upsert command and, from what I can tell, they originally came
up with the upsert syntax.

Their semantic is to insert if not present and update the specified columns
with the specified values if present.

We did do an experiment and upsert only updates the specified columns.

Maybe we can add a cqd so full row update vs. specified column update
behavior could be chosen.



Here is their specification.

Inserts if not present and updates otherwise the value in the table. The
list of columns is optional and if not present, the values will map to the
column in the order they are declared in the schema. The values must
evaluate to constants.

Example:

UPSERT INTO TEST VALUES('foo','bar',3);
UPSERT INTO TEST(NAME,ID) VALUES('foo',123);





*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, March 15, 2016 2:55 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Upsert semantics



Hi,



It seems that when ANSI first added MERGE to the standard, it was portrayed
as “upsert” (see https://en.wikipedia.org/wiki/Merge_(SQL)).



I agree though that we are free to define our UPSERT to mean anything we
want.



I like what you suggest. Since our UPSERT syntax already specifies values
for all the columns, it makes sense 

RE: Anyway to start Trafodion without sqstart

2016-03-15 Thread Selva Govindarajan
Yes.  The sqgen command takes in the configuration file for the trafodion
cluster and generates gomon.cold, gomon.warm and other relevant scripts.
These generated scripts are copied to all nodes in the cluster. These
scripts are nothing but the commands to sqshell. sqstart uses either
gomon.cold or gomon.warm to start the Trafodion instance.



Selva



*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* Monday, March 14, 2016 10:03 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Anyway to start Trafodion without sqstart



DCS and REST follow the HBase model so that should be a simple matter of
invoking the *-daemon.sh scripts.



I think the rest is a matter of using sqshell:



[centos@trafodion incubator-trafodion]$ sqshell

Processing cluster.conf on local host trafodion.novalocal

[SHELL] Shell/shell Version 1.0.1 Apache_Trafodion Release 2.0.0 (Build
debug [centos], date 11Mar16)

[SHELL] %help

[SHELL] usage: shell {[-a|-i] []} | {-c }

[SHELL] - commands:

[SHELL] -- Command line environment variable replacement: ${}

[SHELL] -- ! comment statement

[SHELL] -- cd 

[SHELL] -- delay 

[SHELL] -- down  [, ]

[SHELL] -- dump [{path }]  | 

[SHELL] -- echo []

[SHELL] -- event [{ASE|TSE|DTM|AMP|BO|VR|CS}]  [ [
event-data] ]

[SHELL] -- exec [{[debug][nowait][pri ][name ]

[SHELL]   [nid ][type
{AMP|ASE|BO|CS|DTM|PSD|SMS|SPX|SSMP|TSE|VR}]

[SHELL] --[in |#default][out |#default]}] path
[[]...]

[SHELL] -- exit [!]

[SHELL] -- help

[SHELL] -- kill [{abort}]  | 

[SHELL] -- ldpath [[,]...]

[SHELL] -- ls [{[detail]}] []

[SHELL] -- measure | measure_cpu

[SHELL] -- monstats

[SHELL] -- node [info []]

[SHELL] -- path [[,]...]

[SHELL] -- ps [{ASE|TSE|DTM|AMP|BO|VR|CS}] [|]

[SHELL] -- pwd

[SHELL] -- quit

[SHELL] -- scanbufs

[SHELL] -- set [{[nid ]|[process ]}] key=

[SHELL] -- show [{[nid ]|[process ]}] [key]

[SHELL] -- shutdown [[immediate]|[abrupt]|[!]]

[SHELL] -- startup [trace] []

[SHELL] -- suspend []

[SHELL] -- time 

[SHELL] -- trace 

[SHELL] -- up 

[SHELL] -- wait [ | ]

[SHELL] -- warmstart [trace] []

[SHELL] -- zone [nid |zid ]

[



Obviously, you can up/down nodes but I don't know how that works in
relationship to the startup command.



On Mon, Mar 14, 2016 at 11:52 AM, Amanda Moran 
wrote:

Hi there-



Is there a way to start up Trafodion not by using sqstart...? I would like
to be able to start up/stop each node individually.



Thanks!



-- 

Thanks,



Amanda Moran





-- 

Thanks,



Gunnar

*If you think you can you can, if you think you can't you're right.*


RE: how to tell the most time-consuming part for a given Trafodion query plan?

2016-03-08 Thread Selva Govindarajan
Hi Ming,



The counters or metrics returned by RMS in Trafodion are documented at
http://trafodion.apache.org/docs/2.0.0/sql_reference/index.html#sql_runtime_statistics
.



Counters displayed in operator stats:

The DOP (Degree of Parallelism) determines the number of ESPs involved in
executing the Trafodion operator or TDB (Task Definition Block). The TDB can
be identified by the number in the Id column. LC and RC denote the left and
right child of the operator. Using these IDs and the parent TDB ID (PaId),
one can construct the query plan from this output. The Dispatches column gives
an indication of how often the operator is scheduled for execution by the
Trafodion SQL engine scheduler. An operator is scheduled and runs, traversing
the different steps within itself, until it can't continue or it gives up on
its own so that other operators can be scheduled.

During query execution, you will see these metrics changing continuously for
all the operators as the data flows across them, until a blocking operator is
encountered in the plan. The blocking operators are EX_SORT, EX_HASH_JOIN and
EX_HASH_GRBY.

The operator CPU time is the sum of the CPU time spent in the operator in
the executor thread of all the processes hosting the operator. Operator CPU
time is real, measured in microseconds, and it is NOT a relative number. It
doesn’t include the CPU time spent by other threads executing tasks on behalf
of the executor thread. Usually, a Trafodion executor instance runs in a
single thread, and the engine can have multiple executor instances running in
a process to support multi-threaded client applications. Most notably, the
Trafodion engine uses a thread pool to pre-fetch rows while rows are fetched
sequentially. It is also possible that HBase uses thread pools to complete
the operations requested by Trafodion. These thread timings are not included
in the operator CPU time. To account for this, RMS provides another counter
in a different view: the pertable view.



GET STATISTICS FOR QID  PERTABLE provides the following counters:



HBase/Hive IOs: the number of messages sent to HBase Region Servers (RS).

HBase/Hive IO MBytes: the cumulative size of these messages in MB, accounted
at the Trafodion layer.

HBase/Hive Sum IO Time: the cumulative time taken in microseconds by RS to
respond, summed up across all ESPs.

HBase/Hive Max IO Time: the maximum of the cumulative time taken in
microseconds by RS to respond for any ESP. This gives an indication of how
much of the elapsed time is spent in HBase, because the messages to RS are
blocking.

The Sum and Max IO times are elapsed times measured as wall-clock time in
microseconds.



The max IO time should be less than the elapsed time or response time of
the query. If the max IO time is closer to the elapsed time, then most of
the time is spent in Hbase.

The sum IO time should be less than the DOP * elapsed time.

The Operator time is the CPU time.

I sincerely hope you will find the above information useful to digest the
output from RMS.  I would say reading, analyzing and interpreting the
output from RMS is an art that you would develop over time and it is always
difficult to document every usage scenario. If you find something that
needs to be added or isn’t correct, please let us know.
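
As a quick reference, here is a sketch of the GET STATISTICS variants mentioned
in these messages, issued from an interactive SQL session (the query id is a
placeholder):

get statistics for qid current;               -- stats for the last statement in this session
get statistics for qid <query-id> default;    -- operator-level view (LC/RC/Id/PaId/... columns)
get statistics for qid <query-id> pertable;   -- per-table HBase/Hive IO counters
get statistics for rms all;                   -- RMS shared segment size and health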



Selva





*From:* Liu, Ming (Ming) [mailto:ming@esgyn.cn]
*Sent:* Tuesday, March 8, 2016 5:41 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* how to tell the most time-consuming part for a given Trafodion
query plan?



Hi, all,



We have running some complex queries using Trafodion, and need to analyze
the performance. One question is, if we want to know which part of the plan
take longest time, is there any good tool/skills to answer this?



I can use ‘get statistics for qid  default’ to get runtime stats. But
it is rather hard to interpret the output. I assume the “Oper CPU Time” is
the best one we can trust? But I am not sure whether it is pure CPU time, or
whether it also includes ‘waiting time’. If I want to know the whole time an
operation takes from start to end, is there any way?

And if it is CPU time, is it ns or something else, or just a relative
number?



Here is an example output of ‘get statistics’



LC   RC   Id   PaId  ExId  Frag  TDB Name          DOP   Dispatches   Oper CPU Time   Est. Records Used   Act. Records Used   Details

12   .    13   .     7     0     EX_ROOT           1     1            69              0                   0                   1945
11   .    12   13    6     0     EX_SPLIT_TOP      1     1            32              99,550,560          0
10   .    11   12    6     0     EX_SEND_TOP       10    32           1,844           99,550,560          0
9    .    10   11    6     2     EX_SEND_BOTTOM    10    20           666             99,550,560          0
8    .    9    10    6     2     EX_SPLIT_BOTTOM   10    40           411             99,550,560          0                   53670501

678952

RE: RMS questions

2016-03-02 Thread Selva Govindarajan
Hi Ming,



We are sorry for the delayed response.



Please see my responses embedded.



*From:* Liu, Ming (Ming) [mailto:ming@esgyn.cn]
*Sent:* Saturday, February 27, 2016 8:05 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RMS questions



Hi, all,



I am trying to gather query’s run-time statistics using RMS command ‘get
statistics’. It works fine, but I have some questions below:



As I understand, RMS will save stats for a given query in shared memory, so
it cannot save all the history. It only save CURRENT running queries’
stats. Is this true?*[Selva] *  RMS uses the shared segment to provide near
real time statistics of the query. The metrics are captured at the relevant
components in near real time and updated in the shared segment directly
while the query is being executed. RMS doesn’t poll for the metrics
collection, it is the infrastructure to provide real time statistics.

For a long-running query, I can start another session using ‘get statistics
for qid xxx ‘ to periodically get the stats. For short-running query
(finish in ms), it seems hard for me to start another session find out qid
and run the ‘get statistics’. I think there is a small time window that one
can still get stats for a query after it finished. *[Selva] * For short
running queries, you can get the statistics after the query is completed
before the next query is run in the same session using the command “get
statistics for qid  current”.  If the query is issued from a
non-interactive application, then you might be able to get some kind of
summary info from Trafodion repository if it is enabled.

What is that time window, 30 seconds? *[Selva]* Generally, the statistics
are retained till the statement is deallocated. The server deallocates the
statement only when the user initiates SQLDrop or Statement.close, the
connection is closed, or the statement object on the client side is somehow
garbage collected and triggers resource deallocation on the server side.
RMS extends the statistics lifetime a bit more, till the next statement is
prepared or executed in the same session after the statement is deallocated.
In case of a non-interactive application, this time period could be very
short.





If I have a busy system with a TPS like 3000 queries/s, can RMS save all of
them for 30 seconds? That seems huge, and memory is limited. If it works
like a ring buffer or cache (aging out the oldest entries), what is the
strategy RMS uses to keep stats or age them out? *[Selva]* As I said earlier, RMS
is an infrastructure that aids in providing real-time statistics; it
is not a statistics-gathering tool. In Trafodion, Type 4 JDBC applications
and ODBC applications use the common infrastructure DCS to execute
queries. DCS is capable of providing summary info or detailed query
statistics based on the configuration settings in DCS.

What will happen if all the active queries run out of RMS memory? I know
we can enlarge the size of that memory, but I don't know exactly how; any
instructions?

With those instructions, how can one calculate the required memory size if
s/he knows how many queries s/he wants to save?

*[Selva] *Default size of RMS shared segment is 64 MB. We have been able to
manage within this space for hundreds of concurrent queries because RMS
kicks in garbage collection every 10 minutes to gc any orphaned statistics
info. Statistics can become orphaned if the server component went away
abruptly or the server component itself failed to deallocate resources. Of
course a badly written application that doesn’t deallocate statements can
make the RMS shared segment become full. RMS relies on the trusted DCS
components / Type 2 JDBC driver to put some capacity limit on the connection
to avoid this. You can increase the RMS shared segment by adding
MX_RTS_STATS_SEG_SIZE=  in $MY_SQROOT/etc/ms.env on all nodes and
restarting the Trafodion instance. You can issue “get statistics for rms
all” to confirm the size of the RMS shared segment and to get health info on
RMS itself.

Maybe we can only save stats for ‘slow queries’?



Many questions, thanks in advance for any help. *[Selva]* I sincerely hope
my responses are in order and useful.



Thanks,

Ming


RE: [VOTE] Apache Trafodion Logo Proposal

2016-02-19 Thread Selva Govindarajan
+1 for 13



Selva



*From:* Roberta Marton [mailto:roberta.mar...@esgyn.com]
*Sent:* Thursday, February 18, 2016 6:02 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* [VOTE] Apache Trafodion Logo Proposal



There has been quite a lot of discussion on our user list regarding the
proposed Apache Trafodion logos.

It has now come time for a formal vote on the two most popular logos fondly
known as  4g and 13.

Both have been attached for your reference



Please respond as follows:



[ ] +1-4g approve option 4g

[ ] +1-13 approve option 13

[ ] +0 no opinion

[ ] -1 disapprove (and reason why)



The vote will be open for 72 hours.



   Regards,

   Roberta Marton


RE: Reply: Reply: Logo Proposal

2016-02-16 Thread Selva Govindarajan
Did I cast an invalid vote :)? It is missing



                            13   8

Selvaganesan Govindarajan    1   1



Selva



*From:* Kevin DeYager [mailto:kevin.deya...@gmail.com]
*Sent:* Tuesday, February 16, 2016 10:26 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Reply: Reply: Logo Proposal



+1 to 13





Here's what the current vote tally looks like.

Option

13

4G

8

12

Ken Holt

Tina Krug

1

1

Sean Broeder

1

1

Sandya Sundaresan

1

1

Hans Zeller

1

1

Rohit Jain

1

Wei-Shun Tsai

1

1

Hai Lin (Henry)

1

Qifan Chen

1

Ming Liu

1

Narendra Goyal

1

1

Atanu Mishra

1

Carol Pearson

1

1

Venkat Muthuswamy

Amanda Moran

1

1

Jian Jin (Seth)

1

Christophe LeRouzo

1

1

Prashanth Vasudev

1

1



Dave Birdsall

1

Kevin DeYager

1

14

10

3

1





On Tue, Feb 16, 2016 at 9:52 AM, Gunnar Tapper 
wrote:

+1



On Tue, Feb 16, 2016 at 10:38 AM, Steve Varnau 
wrote:

If “Apache” and “Trafodion” are different colors, it seems to me that
“Trafodion” (in whatever case) should be the word that matches the color of
the dragon. Otherwise it makes me think the dragon is trying to be the
Apache logo.



--Steve



*From:* Ken Holt [mailto:knhknhknh...@gmail.com]
*Sent:* Saturday, February 13, 2016 4:50 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Reply: Reply: Logo Proposal



Hi All,



A new set of Trafodion logo options are attached, taking account of much of
the input that I received.



One of these is a clear winner for me personally, but I submit them for
your consideration and hope we can zero in on one of them. (Or perhaps two
for a run-off.)



By the way, all logos use Google fonts, including the earlier front runner
option 4 where I found a comparable google font to use, now named option
4g.



Cheers, Ken





On Fri, Feb 12, 2016 at 6:34 PM, Liu, Ming (Ming)  wrote:

Thanks Gunnar,



I can see it now. I like dragon very much, but hope it is not too popular.

I think we also think about whale before?



The meaning behind whale is: it is the biggest animal on the earth, and
real :)



And Hadoop projects seem to use cute animals (Hive, Hadoop). Maybe we can
draw a baby dragon?



And most people like sky blue? I used to have black or golden dragon as my
favorite :)



My 2 cents.



Overall, I like this logo, it is already good enough for me. It shows to me
how powerful Trafodion is. I know it is hard to get consensus for this kind
of design.



Thanks,

Ming



*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* February 13, 2016 9:43
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Reply: Logo Proposal



Trying an attachment...



On Fri, Feb 12, 2016 at 6:26 PM, Liu, Ming (Ming)  wrote:

Hi, Gunnar,



Is it possible to put the logo as attachment? If they are not too big. I
saw QiFan once has an attachment of a logo in one of his post. Some people
cannot access Google drive .



Thanks,

Ming



*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* February 13, 2016 4:31
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Logo Proposal



Hi,



Based on the input from several people in this thread, I'd like to propose
the following draft as a candidate:
https://drive.google.com/open?id=0BxlwNhWxn8iTQV85eDlyc0RxWGc



I'm hoping that Ken, Sandhya, or someone else can make a real version for
consideration rather than my PowerPoint hack. It needs a bit of work on the
gray hue (slightly darker IMO), size of dragon, and alignment, but that's
better done in a real designer program. The font was downloaded from Google
Fonts: Telex Regular.



As mentioned, the draft fits into the overall website well:
https://drive.google.com/open?id=0BxlwNhWxn8iTczNMLTRjYUVXSkE and I'm sure
it'll meet the other criteria, too.



Thanks,



Gunnar











On Fri, Feb 12, 2016 at 12:39 PM, Carol Pearson 
wrote:

We also need our logo to look good on our customers' and users' and
ecosystem denizens' websites so they're eager to show off that Trafodion
fits into the puzzle ;-) .



I don't think we're quite converged and ready for a real vote, and to be
honest, I hate to call for a 3 day vote on this over a long weekend in the
US and at the end of a holiday week in China since that's where most of our
traffic comes from.



Further discussion over the weekend and final vote on Tuesday?



-Carol P.


---

Email:carol.pearson...@gmail.com

Twitter:  @CarolP222

---



On Fri, Feb 12, 2016 at 9:34 AM, Venkat Muthuswamy <
venkat.muthusw...@esgyn.com> wrote:

Having Apache in our logo, brings a lot of credibility and shows we are
serious.

Let’s also keep in mind that the logo/text should look good on the website,
presentation slides and any brochures if you will.

Certainly we need multiple colors to bring attention to the product name.



Venkat



*From:* Christophe LeRo

RE: Reply: Reply: Logo Proposal

2016-02-15 Thread Selva Govindarajan
+1 for 8 and 13.  Consider reducing the space between the fonts if possible
in 8.



Selva



*From:* Ken Holt [mailto:knhknhknh...@gmail.com]
*Sent:* Saturday, February 13, 2016 4:50 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Reply: Reply: Logo Proposal



Hi All,



A new set of Trafodion logo options are attached, taking account of much of
the input that I received.



One of these is a clear winner for me personally, but I submit them for
your consideration and hope we can zero in on one of them. (Or perhaps two
for a run-off.)



By the way, all logos use Google fonts, including the earlier front runner
option 4 where I found a comparable google font to use, now named option
4g.



Cheers, Ken





On Fri, Feb 12, 2016 at 6:34 PM, Liu, Ming (Ming)  wrote:

Thanks Gunnar,



I can see it now. I like dragon very much, but hope it is not too popular.

I think we also think about whale before?



The meaning behind whale is: it is the biggest animal on the earth, and
real :)



And Hadoop projects seem to use cute animals (Hive, Hadoop). Maybe we can
draw a baby dragon?



And most people like sky blue? I used to have black or golden dragon as my
favorite :)



My 2 cents.



Overall, I like this logo, it is already good enough for me. It shows to me
how powerful Trafodion is. I know it is hard to get consensus for this kind
of design.



Thanks,

Ming



*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* February 13, 2016 9:43
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Reply: Logo Proposal



Trying an attachment...



On Fri, Feb 12, 2016 at 6:26 PM, Liu, Ming (Ming)  wrote:

Hi, Gunnar,



Is it possible to put the logo as attachment? If they are not too big. I
saw QiFan once has an attachment of a logo in one of his post. Some people
cannot access Google drive .



Thanks,

Ming



*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* February 13, 2016 4:31
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: Logo Proposal



Hi,



Based on the input from several people in this thread, I'd like to propose
the following draft as a candidate:
https://drive.google.com/open?id=0BxlwNhWxn8iTQV85eDlyc0RxWGc



I'm hoping that Ken, Sandhya, or someone else can make a real version for
consideration rather than my PowerPoint hack. It needs a bit of work on the
gray hue (slightly darker IMO), size of dragon, and alignment, but that's
better done in a real designer program. The font was downloaded from Google
Fonts: Telex Regular.



As mentioned, the draft fits into the overall website well:
https://drive.google.com/open?id=0BxlwNhWxn8iTczNMLTRjYUVXSkE and I'm sure
it'll meet the other criteria, too.



Thanks,



Gunnar











On Fri, Feb 12, 2016 at 12:39 PM, Carol Pearson 
wrote:

We also need our logo to look good on our customers' and users' and
ecosystem denizens' websites so they're eager to show off that Trafodion
fits into the puzzle ;-) .



I don't think we're quite converged and ready for a real vote, and to be
honest, I hate to call for a 3 day vote on this over a long weekend in the
US and at the end of a holiday week in China since that's where most of our
traffic comes from.



Further discussion over the weekend and final vote on Tuesday?



-Carol P.


---

Email:carol.pearson...@gmail.com

Twitter:  @CarolP222

---



On Fri, Feb 12, 2016 at 9:34 AM, Venkat Muthuswamy <
venkat.muthusw...@esgyn.com> wrote:

Having Apache in our logo, brings a lot of credibility and shows we are
serious.

Let’s also keep in mind that the logo/text should look good on the website,
presentation slides and any brochures if you will.

Certainly we need multiple colors to bring attention to the product name.



Venkat



*From:* Christophe LeRouzo [mailto:c...@esgyn.com]
*Sent:* Friday, February 12, 2016 7:59 AM
*To:* user@trafodion.incubator.apache.org
*Cc:* Rohit Jain ; Gunnar Tapper <
tapper.gun...@gmail.com>
*Subject:* RE: Logo Proposal



It seems we might need multiple votes:

-  One color or multiple colors?

-  Thin font or thick font?

-  One font or two fonts?

-  Apache or not…



What was Selva saying about engineers and votes? :)

-clr





*From:* Amanda Moran [mailto:amanda.mo...@esgyn.com]
*Sent:* Friday, February 12, 2016 7:21 AM
*To:* user@trafodion.incubator.apache.org
*Cc:* Rohit ; Gunnar Tapper 
*Subject:* Re: Logo Proposal



-1 all same color.



I still like option 4 the way it was/is but I can understand wanting to
remove Apache  But I still like it the way it is



Sent from my iPhone


On Feb 12, 2016, at 7:15 AM, Qifan Chen  wrote:

But is it obvious that Trafodion is an Apache project, if one starts to
read about it?



On Fri, Feb 12, 2016 at 8:48 AM, Rohit  wrote:

It depends on how much value you think Apache adds to Trafodion, as
compared to those other logos that perhaps already h

RE: HDFS ACLs dependency

2016-02-04 Thread Selva Govindarajan
echo "***INFO: Setting HDFS ACLs for snapshot scan support"



Yes. As the display indicates, ACL changes are needed for snapshot scan
support. Trafodion allows the user to create HBase snapshots on Trafodion
tables and allows them to be accessed by its engine, and ACLs need to be
enabled for this purpose.



Actually, it should be changed to create the /hbase/archive/data/default
directory and set its ACL too. Possibly a JIRA has already been filed to make
this change.



Selva





*From:* Gunnar Tapper [mailto:tapper.gun...@gmail.com]
*Sent:* Thursday, February 4, 2016 2:39 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: HDFS ACLs dependency



Well yes, if you enable ACLs then you have to create the ACLs, too. :)



Perhaps the issue here is that Trafodion can't get access to
/hbase/archives w/o ACLs since that directory may be owned by another user
in a group other than trafodion?



I'm mostly trying to document requirements and reasoning for things like
this to provide "comfort" when doing installation: "during installation,
Trafodion will make these changes to your environment for these reasons" is
a lot better than "hey, what happened to my environment? HDFS ACLs are
enabled!"



Thanks,



Gunnar



On Thu, Feb 4, 2016 at 3:31 PM, Amanda Moran  wrote:

# NOTE: These commands must be done AFTER ACLs are

#   enabled and HDFS has been restarted

echo "***INFO: Setting HDFS ACLs for snapshot scan support"

sudo su hdfs --command "$HADOOP_BIN_PATH/hdfs dfs -mkdir -p /hbase/archive"

if [ $? != 0 ]; then

   echo "***ERROR: ($HADOOP_BIN_PATH/hdfs dfs -mkdir -p /hbase/archive)
command failed"

   exit -1

fi

sudo su hdfs --command "$HADOOP_BIN_PATH/hdfs dfs -chown hbase:hbase
/hbase/archive"

if [ $? != 0 ]; then

   echo "***ERROR: ($HADOOP_BIN_PATH/hdfs dfs -chown hbase:hbase
/hbase/archive) command failed"

   exit -1

fi

sudo su hdfs --command "$HADOOP_BIN_PATH/hdfs dfs -setfacl -R -m
user:$TRAF_USER:rwx /hbase/archive"

if [ $? != 0 ]; then

   echo "***ERROR: ($HADOOP_BIN_PATH/hdfs dfs -setfacl -R -m
user:$TRAF_USER:rwx /hbase/archive) command failed"

   exit -1

fi

sudo su hdfs --command "$HADOOP_BIN_PATH/hdfs dfs -setfacl -R -m
default:user:$TRAF_USER:rwx /hbase/archive"

if [ $? != 0 ]; then

   echo "***ERROR: ($HADOOP_BIN_PATH/hdfs dfs -setfacl -R -m
default:user:$TRAF_USER:rwx /hbase/archive) command failed"

   exit -1

fi

sudo su hdfs --command "$HADOOP_BIN_PATH/hdfs dfs -setfacl -R -m mask::rwx
/hbase/archive"

if [ $? != 0 ]; then

   echo "***ERROR: ($HADOOP_BIN_PATH/hdfs dfs -setfacl -R -m mask::rwx
/hbase/archive) command failed"

   exit -1

fi





Here is the code that needs ACLS to be set to true. Maybe this helps ... :)



On Thu, Feb 4, 2016 at 2:26 PM, Gunnar Tapper 
wrote:

Hi,



I noticed that Trafodion requires that dfs.namenode.acls.enabled is set to
true. The reason for this seems to be a desire to do a setfacl on
/hbase/archive.



Is this a true requirement or an embedded best practices?



I'm wondering since we're now imposing security policies on the user even
if the user has chosen to rely on the traditional POSIX permission model
over implementing the extended POSIX ACL model. Also, how does this HDFS
configuration flag relate to a user that is using Kerberos?



-- 

Thanks,



Gunnar

*If you think you can you can, if you think you can't you're right.*





-- 

Thanks,



Amanda Moran





-- 

Thanks,



Gunnar

*If you think you can you can, if you think you can't you're right.*