Re: change the connection count

2017-07-29 Thread Suresh Subbiah
Hi,

This may not be the suggested approach, but I use it on my dev environments and
it works OK.

Please edit the file $DCS_INSTALL_DIR/conf/servers
There should be two lines there with the value 50 at the end of each line.
Change the numbers as you wish.
Save the file and copy it to the other node, at the same location.
Stop DCS with dcsstop. Check that all mxosrvrs are down with dcscheck. Start DCS
with dcsstart.
You should then see as many mxosrvrs as configured.

Thanks
Suresh

On Sat, Jul 29, 2017 at 2:19 AM, Huang, Jack  wrote:

> Hi Trafodioner,
>
> I installed the trafodion with the listed configuration.
>
> As a beginner, I set dcs_cnt_per_node to 100 in my installation, and
> sqlci displays the following.
>
> After some testing, I found the count number is not enough for my testing.
>
> So the question is: how can I change the number without doing a
> re-installation?
>
>
>
> Process Configured  Actual  Down
>
> --- --  --  
>
> DTM 2   2
>
> RMS 4   4
>
> DcsMaster   1   1
>
> DcsServer   1   1
>
> mxosrvr 100 98  2
>
> RestServer  1   1
>
>
>
>
>
> +------------------+-----------+
> | config type      | value     |
> +------------------+-----------+
> | dcs_cnt_per_node | 100       |
> | dcs_ha           | N         |
> | first_rsnode     | trafodion |
> +------------------+-----------+
>
>
>
>
>
> *Jack Huang*
>
> *Dell* *EMC* | CTD MRES Cyclone Group
>
> mobile +86-13880577652
>
> jack.hu...@dell.com
>
>
>
>
>
>
>
>


Re: Do we have cqd to force MDAM?

2017-07-28 Thread Suresh Subbiah
Hi Yuan,

We don't have a cqd to force mdam for all scans in a given query.
MDAM_SCAN_METHOD has 3 possible values: ON, OFF, and MAXIMUM. ON is the
default. If the cqd is set to MAXIMUM, an mdam scan on the right side of a
nested join is also considered.
In all three settings the cqd only influences whether mdam is
considered. It still has to win under the cost model to be chosen.

With a Control query shape we can force mdam, as you know. I think we can
also use a control table statement, though I have not done this recently.

Since CQS syntax can get quite unwieldy, I am thinking of creating a JIRA
asking that <<+ mdam(col-list)>> type hints be supported to force index
access. The problem with a cqd, I think, is that there could be several scans in
one query and maybe we don't want mdam for all of them.
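
For reference, here is a minimal JDBC sketch showing how the existing cqd can be
applied for a session before preparing a query (the driver class, connection URL,
credentials, table name, and predicate below are hypothetical). Note that this only
makes mdam eligible; the optimizer still has to pick it under the cost model.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class MdamCqdExample {
  public static void main(String[] args) throws Exception {
    Class.forName("org.trafodion.jdbc.t4.T4Driver");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:t4jdbc://localhost:23400/:", "user", "password")) {

      // Apply the cqd for this session; MAXIMUM also considers mdam on the
      // right side of a nested join, as described above.
      try (Statement s = conn.createStatement()) {
        s.execute("CONTROL QUERY DEFAULT MDAM_SCAN_METHOD 'MAXIMUM'");
      }

      // Prepare and run the query; the plan may or may not use mdam,
      // depending on what the cost model chooses.
      try (PreparedStatement ps = conn.prepareStatement(
               "SELECT * FROM t1 WHERE key_col2 = ? AND key_col3 = ?")) {
        ps.setInt(1, 10);
        ps.setInt(2, 20);
        try (ResultSet rs = ps.executeQuery()) {
          while (rs.next()) {
            System.out.println(rs.getString(1));
          }
        }
      }
    }
  }
}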

Thanks
Suresh


On Fri, Jul 28, 2017 at 2:57 AM, Liu, Yuan (Yuan)  wrote:

> Hi Trafodioneers,
>
>
>
> Does anyone know whether we have any cqd to force an MDAM query plan? Cqd
> mdam_scan_method ‘on’ seems not to work.
>
>
>
> Best regards,
>
> Yuan
>
> Email: yuan@esgyn.cn
>
> Cellphone: (+86) 13671935540
>
>
>


Re: question on using Batch for a select statement with jdbc

2017-01-17 Thread Suresh Subbiah
Hi Eric,

I don't think there is a generic CQD here. If there was a core file and we
analyzed it we might be able to find something specific to this situation.
Can we please try with say 1000 values and 4000 parameters?
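
For what it is worth, here is a minimal sketch of that smaller experiment. It assumes
the same hypothetical join table D(A,B,C,D) as the snippet quoted below, with INT
columns, and an already-open connection.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ValuesJoinProbe {
  // conn is an open java.sql.Connection to Trafodion (details omitted).
  static void probe(Connection conn, int rowsetSize) throws Exception {
    StringBuilder sql = new StringBuilder("select x.a, d.a from (values ");
    for (int i = 0; i < rowsetSize; i++) {
      sql.append(i == 0 ? "(?,?,?,?)" : ",(?,?,?,?)");
    }
    sql.append(") as x(a,b,c,d) join d on x.b = d.b and x.c = d.c"
             + " and x.d = d.d where x.a is not null");

    try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
      int p = 1;
      for (int i = 0; i < rowsetSize; i++) {
        ps.setInt(p++, i);   // x.a
        ps.setInt(p++, i);   // x.b
        ps.setInt(p++, i);   // x.c
        ps.setInt(p++, i);   // x.d
      }
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          System.out.println(rs.getInt(1) + ", " + rs.getInt(2));
        }
      }
    }
  }
}

With rowsetSize = 1000 that is 4000 parameters; the size can then be raised step by
step to see where the prepare starts to time out.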

Thanks
Suresh

On Tue, Jan 17, 2017 at 4:53 PM, Eric Owhadi  wrote:

> Trying this workaround but hitting a timeout exception at prepare time:
>
> final static String missingVal1 = "select x.a, d.a from (values ";
>
> final static String missingValVar = "(?,?,?,?)";
>
> final static String missingVal2 = ") as x(a,b,c,d) join d on x.b = d.b
> and x.c = d.c and x.d = d.d where x.a is not null";
>
>
>
> StringBuilder missingVals = new StringBuilder(101000);
>
>  missingVals.append(missingVal1);
>
>  missingVals.append(missingValVar);
>
>  for(int i=1; i< rowsetSize; i++){
>
>  missingVals.append(',').
> append(missingValVar);
>
>  }
>
>  missingVals.append(missingVal2);
>
>  PreparedStatement missingValsstsmt=
> conn2.prepareStatement(missingVals.toString());
>
>
>
>
>
> Exception in thread "main" org.trafodion.jdbc.t4.TrafT4Exception: Server
> aborted abnormally or Connection timed out
>
> at org.trafodion.jdbc.t4.TrafT4Messages.
> createSQLException(TrafT4Messages.java:284)
>
> at org.trafodion.jdbc.t4.InputOutput.doIO(InputOutput.
> java:376)
>
> at org.trafodion.jdbc.t4.T4Connection.getReadBuffer(
> T4Connection.java:157)
>
> at org.trafodion.jdbc.t4.T4Statement.getReadBuffer(
> T4Statement.java:196)
>
> at org.trafodion.jdbc.t4.T4Statement.Prepare(
> T4Statement.java:129)
>
> at org.trafodion.jdbc.t4.InterfaceStatement.prepare(
> InterfaceStatement.java:1126)
>
> at org.trafodion.jdbc.t4.TrafT4PreparedStatement.prepare(
> TrafT4PreparedStatement.java:2209)
>
> at org.trafodion.jdbc.t4.TrafT4Connection.
> prepareStatement(TrafT4Connection.java:775)
>
> at DirectoriesHelper.main(DirectoriesHelper.java:45)
>
>
>
> Am I doing something crazy? Or is there a CQD/param that would help
> prepare that statement containing 10 000 (?,?,?,?) , so 40 000 parameters
> in it?
>
>
>
> Eric
>
>
>
>
>
>
>
> *From:* Eric Owhadi
> *Sent:* Tuesday, January 17, 2017 2:47 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: question on using Batch for a select statement with jdbc
>
>
>
> Hi Selva
>
> I came across this too and believe that setArray is meant to support the SQL Array
> type, which we don’t support anyway.
>
> Eric
>
>
>
> *From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com
> ]
> *Sent:* Tuesday, January 17, 2017 2:46 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: question on using Batch for a select statement with jdbc
>
>
>
> Hi Eric,
>
>
>
> Looking at the JDBC specification, I am guessing
> preparedStatement.setArray and using the Array interface can do the trick. But,
> setArray is an unsupported feature in the Trafodion jdbc drivers.
>
>
>
> Selva
>
>
> --
>
> *From:* Eric Owhadi 
> *Sent:* Tuesday, January 17, 2017 12:08 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* question on using Batch for a select statement with jdbc
>
>
>
> Hi Trafodioneers,
>
> Have following jdbc question:
>
>
>
> select x.a, d.a from (values (?,?,?,?)) as x(a,b,c,d)
>
> join directories d on
>
> x.b = d.b and x.c= d.c and x.d = d.d;
>
>
>
> I was thinking of using Batch to fill the list of values, but then I
> struggle with how to invoke the query. executeBatch does not return a
> resultSet, as I guess it is used for upserts or inserts?
>
> Can I use executeQuery(), and will that do the trick as long as I use
> addBatch()?
>
>
>
> Or is it not possible to use addBatch for this use model?
>
> Thanks in advance for the help,
>
> Eric
>


Re: jdbc rowset usage?

2017-01-14 Thread Suresh Subbiah
Hi Eric,

1) The shape of the plan is always the same for these rowset inserts. There is
no esp parallelism. If upsert is used it will be of vsbb type. It is
independent of the size of the rowset.

tuple_flow(unpack(values), insert)

2) The plan includes a max rowset size. At runtime another rowset size is
provided (the actual size for that execution). We can compile once with a
maximum size and then use the same prepared plan to execute repeatedly with
different actual sizes that are smaller.
Max size is set with the statement attribute SQL_ATTR_INPUT_ARRAY_MAXSIZE
(please see $MY_SQROOT/../conn/odbc/src/odbc/nsksrvrcore/sqlinterface.cpp),
prior to prepare. Actual size is given at runtime through the CLI call
SetRowsetDescPointers (declared in sql/cli/sqlcli.h)

3) The use case described (inserting arrays of rows, but with different
number of elements in the array each time), should not require a recompile
(even through a query cache hit). This can be verified by creating a
PreparedStatement and executing once with 10 rows and next with 5 rows in
the batch. Through offender or by selecting from query cache virtual table
we should be able to confirm that there was only one compile (i.e. there
was no recompile).
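
Here is a minimal sketch of that verification (it assumes a hypothetical table
T(A INT, B VARCHAR) and an already-open connection; whether a recompile happened is
then checked separately through offender or the query cache virtual table, as
described above).

import java.sql.Connection;
import java.sql.PreparedStatement;

public class VaryingBatchSize {
  // conn is an open java.sql.Connection to Trafodion (details omitted).
  static void run(Connection conn) throws Exception {
    try (PreparedStatement ps =
             conn.prepareStatement("upsert into t (a, b) values (?, ?)")) {

      // First execution: a batch of 10 rows.
      for (int i = 0; i < 10; i++) {
        ps.setInt(1, i);
        ps.setString(2, "row-" + i);
        ps.addBatch();
      }
      ps.executeBatch();

      // Second execution of the SAME prepared statement: a batch of 5 rows.
      // If only one compile shows up, there was no recompile for the
      // smaller batch.
      for (int i = 10; i < 15; i++) {
        ps.setInt(1, i);
        ps.setString(2, "row-" + i);
        ps.addBatch();
      }
      ps.executeBatch();
    }
  }
}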

Code may have changed from the time I worked in this area and it could
behave differently now. It will be good to test. However, we have trickle
loading applications such as ODB that insert thousands of rows per second
using rowsets, through a single connection, with a rowset size of a few
thousand. Usually we can get several executions to complete in a second.
Compile time seems to be negligible compared to execution time in the
instances I have seen. Rowset size typically does not change in ODB, but
the last insert in a connection usually has fewer rows than the previous
ones. I don't think the last insert is getting a recompile, though even if
it did, with various cache hits, it should only take a few milliseconds.

Thanks
Suresh

PS The CLI calls and statement attributes are code level details. A JDBC
program writer is not exposed to them and cannot use them to change
behaviour directly.


On Fri, Jan 13, 2017 at 11:20 PM, Eric Owhadi  wrote:

> Hi Suresh, Arvin and Selva,
>
> I understand the addBatch method. But I have a hard time understanding how
> the compiler can do a good job without knowing the cardinality of the rowset at
> prepare time.
>
> Since we support prepare statement like this:
>
> Insert into t (a,b,c) values( ?[1000], ?[1000], ?[1000])
>
>
>
> Where the cardinality of the rowset is known at compile time, and I can see a
> very nice associated plan, I am wondering how efficient a process could be
> where I iterate over a prepared statement like this:
>
> Insert into t (a,b,c) values( ?, ?, ?)
>
>
>
> Where at compile time the prepare has no clue whether a rowset plan is
> appropriate or a non-rowset plan is better.
>
> Suresh has explained to me over the phone that the compiler would assume a rowset,
> and recompile in case it is a single value… but then if I keep calling this
> prepared statement with various random numbers of items, how is the compiler
> going to successfully prepare, and keep reusing the same plan? (this is
> really the use-case I am dealing with: a variable length rowset at every
> call, which can go from 1 to 1)
>
>
>
> That is why I was hoping to force a fixed cardinality plan, to avoid this
> difficult situation with varying cardinality that I suspect will result in
> many recompiles?
>
>
>
> I know that this syntax would not be jdbc standard (so not portable)… But
> how about if I am ready to pay the price of using a non-standard feature?
> Since there is no DB like ours anyway…
>
> I guess this question is more for the dev list, as I am inquiring about
> possible non-standard stuff, given I see half of it is already working
> (successful compile of Insert into t (a,b,c) values( ?[1000], ?[1000],
> ?[1000])
> – I suspect there might be a way to do the other half with some magic words …
> ?
>
>
>
> Eric
>
>
>
>
>
> *From:* Arvind GMAIL [mailto:narain.arv...@gmail.com]
> *Sent:* Friday, January 13, 2017 10:52 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: jdbc rowset usage?
>
>
>
> Hi Eric, Suresh
>
>
>
> You could also look at some of the jdbc tests under the following link for
> addBatch and related examples:
>
>
>
> https://github.com/apache/incubator-trafodion/tree/
> master/dcs/src/test/jdbc_test/src/test/java/org/trafodion/jdbc_test
>
>
>
> Thanks
>
> Arvind
>
>
>
>
>
>
>
> *From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com
> ]
> *Sent:* Friday, January 13, 2017 7:50 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: jdbc rowset usage?
>
>
>
> Hi Eric,
>
>
>
> The

Re: jdbc rowset usage?

2017-01-13 Thread Suresh Subbiah
Hi Eric,

The steps Selva mentioned are shown in the examples at
https://examples.javacodegeeks.com/core-java/sql/jdbc-batch-insert-example/
Section 4 in this example should be relevant. We should have similar
examples in JDBC tests, but I am not able to find them.
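
Putting Selva's steps from the quoted mail below together, here is a minimal sketch
(the table t(a,b,c) with INT columns and the rowset size are hypothetical; conn is an
already-open connection).

import java.sql.Connection;
import java.sql.PreparedStatement;

public class BatchInsert {
  // conn is an open java.sql.Connection to Trafodion (details omitted).
  static void load(Connection conn, int[][] rows, int rowsetSize) throws Exception {
    try (PreparedStatement ps =
             conn.prepareStatement("insert into t (a, b, c) values (?, ?, ?)")) {
      int pending = 0;
      for (int[] row : rows) {
        ps.setInt(1, row[0]);
        ps.setInt(2, row[1]);
        ps.setInt(3, row[2]);
        ps.addBatch();
        if (++pending == rowsetSize) {   // reached the desired rowset size
          ps.executeBatch();
          pending = 0;
        }
      }
      if (pending > 0) {
        ps.executeBatch();               // flush the remaining rows
      }
    }
  }
}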

Thanks
Suresh


On Fri, Jan 13, 2017 at 6:40 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

> Just prepare the statement with '?' for parameters
>
> Do
>
>
> PreparedStatement.setXXX () for all parameters
>
> PreparedStatement.addBatch()
>
>
> Do the above in a loop, when you reach the required rowset size
>
>
> PreparedStatement.executeBatch().
>
>
> Selva
>
>
> --
> *From:* Eric Owhadi 
> *Sent:* Friday, January 13, 2017 3:52 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* jdbc rowset usage?
>
>
> Hello,
>
> I am struggling to find the jdbc syntax to set a dynamic array parameter:
>
>
>
> Assuming I prepared this statement s with “Insert into t (a,b,c)
> values( ?[1000], ?[1000], ?[1000])”
>
> How do I set each parameter?
>
>
>
> Assuming a is INT, b is CHAR[10], c is INT?
>
>
>
> Am I doing something not really supported? Should I use addBatch instead?
>
> Thanks in advance for the help,
> Eric
>
>
>
>
>


Re: can Trafodion store Chinese character

2016-09-07 Thread Suresh Subbiah
Hi,

Yes, Trafodion can store Chinese characters. The DDL you have used is also
correct.
I think the problem is with display settings on the terminal. I am not good
with those settings, so hopefully someone will be able to guide us through
it.
I do see this on my terminal.

Thanks
Suresh

>>create table qiao1 (title varchar(100 bytes) character set utf8) ;

--- SQL operation complete.
>>insert into qiao1 values ('汉字异常') ;

--- 1 row(s) inserted.
>>insert into qiao1 values ('hello') ;

--- 1 row(s) inserted.
>>select * from qiao1 ;

TITLE


汉字异常
hello


--- 2 row(s) selected.
>>
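
If the terminal settings cannot be sorted out, one way to confirm that the data
itself is stored correctly is to read it back from a small program instead of the
terminal. A minimal JDBC sketch (the driver class name, URL, and credentials are
assumptions for a default install; adjust them to your environment):

import java.io.PrintStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadUtf8 {
  public static void main(String[] args) throws Exception {
    Class.forName("org.trafodion.jdbc.t4.T4Driver");
    // Print in UTF-8 regardless of the platform default encoding.
    PrintStream out = new PrintStream(System.out, true, "UTF-8");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:t4jdbc://localhost:23400/:", "user", "password");
         Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("select title from qiao1")) {
      while (rs.next()) {
        // Chinese characters should come back intact if the data is stored correctly.
        out.println(rs.getString(1));
      }
    }
  }
}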


On Wed, Sep 7, 2016 at 1:15 AM, 乔彦克  wrote:

> Hi, all,
>I inserted some lines with Chinese characters into trafodion, but only get
> '???' where there should be Chinese characters.
>Yesterday I wanted to load some data from Hive, and this problem also
> emerged.
>The DDL for the field that contains Chinese characters is 'title varchar(100 bytes)
>  CHARACTER SET UTF8 ,'.
>
> [image: 汉字异常.png]
> My question is: can Trafodion store Chinese characters, and if so, how
> should I define the ddl?
>
> Any reply is appreciated.
> Thanks,
> Qiao
>


Re: division by date when create table

2016-08-16 Thread Suresh Subbiah
Hi,

Please look at this page
https://cwiki.apache.org/confluence/display/TRAFODION/Cmp-divisioning

I think the syntax you are looking for is division by (date_trunc('day',
ts));
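
As a concrete sketch (hypothetical table and columns; if I remember the rules on that
page correctly, the divisioning expression needs to be on a clustering/primary key
column, so ts is made part of the primary key here), the DDL could look like this,
shown here being issued over JDBC:

import java.sql.Connection;
import java.sql.Statement;

public class CreateDividedTable {
  // conn is an open java.sql.Connection to Trafodion (details omitted).
  static void createTable(Connection conn) throws Exception {
    String ddl =
        "create table events"
      + "  ( ts  timestamp not null"
      + "  , id  int not null"
      + "  , val int"
      + "  , primary key (ts, id)"
      + "  )"
      + "  division by (date_trunc('day', ts))";
    try (Statement st = conn.createStatement()) {
      // Rows are then grouped by day through the computed division column.
      st.execute(ddl);
    }
  }
}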

Thank you
Suresh

On Tue, Aug 16, 2016 at 2:44 AM, 乔彦克  wrote:

> Hi,all,
>Since Trafodion has the 'division by' feature and I have the need for
> date queries, when creating a table
>I want to division by the date (eg '2016-07-01'), but I cannot find a
> good way to do so.
>   The 'cast' method cannot be used in the division by, and date_part only
> gets the 'year' attribute.
>   Can anyone help me with that or give me more detailed info about
> 'division by'?
>
> Any reply is appreciated, Thanks.
>
> *Qiao Yanke*
>
>


Re: how to reference a column from a TMUDF?

2016-06-01 Thread Suresh Subbiah
Hi Ming,

That is an interesting UDF.

The syntax you have shown should work. Please see the modified example
below from regression test udr/TEST001.
I think either we have a bug or it is an issue with delimited names. You
could try these two steps to understand the issue better.

a)  select * from  udf(solrUDF('db','iphone')) u where u."id " =
1; OR
b) cqd UDR_DEBUG_FLAGS '64'; select * from  udf(solrUDF('db','iphone')) u ;
Approach b) will print out the output column names which could be helpful.
Debug flags are documented by Hans at  
https://cwiki.apache.org/confluence/display/TRAFODION/Tutorial%3A+The+object-oriented+UDF+interface

Thanks
Suresh

Example from udr/TEST001 showing that the syntax given above should work. The
bug I mentioned could be due to the fact that solrUDF does not have a table
input. We have had issues in the past with UDFs that take no table-valued input.

SELECT *
FROM UDF(sessionize_dynamic(TABLE(SELECT userid,
 JULIANTIMESTAMP(ts) as TS,
 ipAddr
  FROM clicks
  PARTITION BY ipaddr ORDER BY ts),
'IPADDR',
cast('TS' as char(2)),
6000)) u
where u.session_id < 10;

SESSION_ID  SEQUENCE_NO  USERID          TS              IPADDR
----------  -----------  --------------  --------------  --------------
         1            1  super-user      21233157839950  12.345.567.345
         2            1  super-user      21233158559950  12.345.567.345
         2            2  super-services  21233158559950  12.345.567.345
         2            3  super-services  21233158559955  12.345.567.345

--- 4 row(s) selected.
>>

Notice how column names are in upper case in the output.




On Wed, Jun 1, 2016 at 4:18 AM, Liu, Ming (Ming)  wrote:

> Hi, all,
>
>
>
> I wrote a simple TMUDF which will perform a solr query and get result as a
> table value.
>
>
>
> It can work like this:
>
>
>
> >>select * from  udf(solrUDF('db','iphone'));
>
> id   description
>
> -- 
>
> 1   iphone 5
>
> 2  iphone 5s
>
> --- 2 row(s) selected.
>
>
>
>
>
> As you can see, it will return two columns: ‘id’ and ‘description’. Now I
> want to do a filter on id, so I try this:
>
>
>
> >>select * from  udf(solrUDF('db','iphone')) u where u.id = 1;
>
>
>
> It failed and report this error:
>
>
>
> *** ERROR[4003] Column U.ID is not a column in table U, or, after a
> NATURAL JOIN or JOIN USING, is no longer allowed to be specified with a
> table correlation name.
>
>
>
> *** ERROR[8822] The statement was not prepared.
>
>
>
> Because I want to join the udf result with the source Trafodion table, I
> have to reference the columns in the UDF.
>
>
>
> Please help: how can I reference a column returned from a UDF?
>
>
>
> Thanks,
>
> Ming
>
>
>


Re: add a comment to a table

2016-03-31 Thread Suresh Subbiah
Hi,

I do not think Trafodion currently supports Oracle's COMMENT ON syntax:
https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4009.htm

It would be a relatively simple feature to add, but we do not have it today
as far as I know.

Thank you
Suresh


On Thu, Mar 31, 2016 at 9:56 PM, yongqiang.z...@microinsight.com.cn <
yongqiang.z...@microinsight.com.cn> wrote:

> Hi , all,
>
>
> I want to add a comment to a table, like Oracle's
> "comment on table tablename is 'annotate'".
> How can I implement this via jdbc?
>
>
>
>
> --
> yongqiang.z...@microinsight.com.cn
>


Re: Upsert semantics

2016-03-19 Thread Suresh Subbiah
Hi,

To me upsert has meant a faster performing version of insert, with
duplicate key errors ignored. I would claim that most users are drawn
towards upsert since it performs better than insert.
I do not think compatibility with Phoenix syntax is an important
requirement.
As everyone has said we would not want a statement to have different
semantics depending on row format.
I do not quite understand why an omitted CURRENT default is treated
differently from other omitted defaults, so I could see the last column in
the first row below also being transformed to "Replace the given columns",
but I do feel this is not crucial. Whichever is easier for us to implement,
as long as it is defined, should be sufficient.


With these principles in mind my vote would be for the proposal Hans gave
above.

I am sorry for not stating my opinion clearly during review.

Thank you
Suresh

                   Aligned format     Aligned format    Non-aligned with     Non-aligned with omitted
                   with no omitted    with omitted      no omitted           current default /
                   columns            columns           columns              omitted non-current columns

CQD off            Replaces row       MERGE             Replace the given    MERGE
                                                        columns

CQD on (default)   Replaces row       Replaces row      Replace all          Replace all
                                                        columns              columns






On Thu, Mar 17, 2016 at 4:58 PM, Selva Govindarajan <
selva.govindara...@esgyn.com> wrote:

> Here is what I found with phoenix, just to compare with phoenix’s behavior
> for upsert.
>
>
>
> Phoenix expects the table to have a primary key. The upsert specification is:
>
> insert if not present, otherwise update the value in the table. The
> list of columns is optional and, if not present, the values will map to the
> columns in the order they are declared in the schema. The values must
> evaluate to constants.
>
> create table phoenix.testtbl (c1 integer not null primary key, c2 integer
> , c3 integer) ;
>
> upsert into phoenix.testtbl (c1, c2)  values (1,1) ;
>
> upsert into phoenix.testtbl (c1,c3)  values (1,1) ;
>
> upsert into phoenix.testtbl (c1,c2)  values (1,null) ;
>
>
>
> 0: jdbc:phoenix:localhost:51670> select * from phoenix.testtbl ;
>
>
> +------+------+------+
> |  C1  |  C2  |  C3  |
> +------+------+------+
> |  1   | null |  1   |
> +------+------+------+
>
>
>
> In the raw hbase table, I see the following cells after the above 3
> upserts. It looks like phoenix deletes the cell if it is updated with a null
> value.
>
>
>
> hbase(main):006:0> scan 'PHOENIX.TESTTBL'
>
> ROW                COLUMN+CELL
>  \x80\x00\x00\x01  column=0:C3, timestamp=1458249350858, value=\x80\x00\x00\x01
>  \x80\x00\x00\x01  column=0:_0, timestamp=1458249392491, value=
>
> 1 row(s) in 0.0210 seconds
>
>
>
> Selva
>
>
>
> *From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
> *Sent:* Thursday, March 17, 2016 11:09 AM
>
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: Upsert semantics
>
>
>
> Hi,
>
>
>
> It sounds to me like this makes the semantics of UPSERT depend on physical
> row layout, which seems contrary to the philosophy of SQL language design
> as a declarative language.
>
>
>
> I’d much rather have different syntax for each of these semantics. A
> different verb perhaps. Or a clause added to it. Then it is clear to the
> application developer what semantics he is getting. He does not have to
> examine the physical schema to figure this out.
>
>
>
> Dave
>
>
>
> *From:* Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
> *Sent:* Thursday, March 17, 2016 11:01 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: Upsert semantics
>
>
>
> I wonder if the CQD TRAF_UPSERT_WITH_INSERT_DEFAULT_SEMANTICS should be
> set to “SYSTEM” by default. It can take ‘SYSTEM’, ‘ON’ or ‘OFF’.
>
>
>
> For aligned format – SYSTEM would be treated as ‘ON’ – User can override
> it with ‘OFF’ if he/she needs merge semantics.
>
>
>
> For non-aligned format – SYSTEM would be treated as ‘OFF’. This would
> ensure that all the columns are not inserted all the time into raw hbase
> table.  User can avoid merge semantics for omitted default current columns
> by overriding

Re: question on nullable column in primary key and impact on upsert

2016-02-03 Thread Suresh Subbiah
Hi Eric,

A JIRA will be nice for  this failure. These predicates are generated and I
have made a mistake in not considering ISNULL predicates here. A fix will
be straightforward.

>>upsert into t values(null,2);



*** ERROR[4099] A NULL operand is not allowed in predicate
(TRAFODION.TPCDSGOOD.T.A = NULL).



The following issue is likely caused by JIRA 1803. I think the upsert
did not add a row to both the table and the index. Only one worked. The
following select is likely going against the object that was not inserted
into.

A fix for 1803 will likely address this.

>>upsert into t values(1,1);



--- 1 row(s) inserted.

>>select * from t;



A            B
-----------  -----------

          ?            1

--- 1 row(s) selected.


Thanks

Suresh



On Wed, Feb 3, 2016 at 1:56 PM, Eric Owhadi  wrote:

> Hi Dave
>
> I am aware of the implication you mention. But for BI scenario, we don’t
> use key access for single row retrieval and we can always make sure we can
> supply the partitioning column. So no big deal. I agree that the limitation
> on STORE BY would be more elegant, since it will not force a non-natural PK
> implementation aimed at partitioning. But I guess that’s no big deal. Also
> in BI scenario, we can always manually deal with the right semantic in the
> load phase without using upsert…
>
> I am more worried about the issue discovered when trying to understand
> what scenario was implemented…
>
> Eric
>
>
>
>
>
>
>
> *From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
> *Sent:* Wednesday, February 3, 2016 1:16 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: question on nullable column in primary key and impact on
> upsert
>
>
>
> Hi Eric,
>
>
>
> Seems like this business of adding a nullable column to the primary key
> for partitioning purposes creates a lot of headaches. By doing so, it
> changes the definition of the primary key for the table. But then it
> appears in the UPSERT scenario, you really wanted the old primary key, not
> the new. I’m wondering if this is just the first exposure while opening a
> can of worms.
>
>
>
> Another approach might be to broaden STORE BY to allow it to take
> non-primary-key columns. This would come at a price in performance though.
> Unique key operations would now have to check all partitions, unless the
> non-primary-key value was supplied (e.g. through a WHERE clause or a VALUES
> clause). But you are going to run into this limitation anyway in your
> schema design. When you want the old primary key semantic, you’ll face
> similar performance issues.
>
>
>
> Dave
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Wednesday, February 3, 2016 10:43 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* question on nullable column in primary key and impact on upsert
>
>
>
> The following question is about checking whether adding nullable columns to the PK
> for the sake of data partitioning can be done without too much impact on the
> initial PK uniqueness semantics… especially for upsert statements…
>
>
>
> If I have a composite primary key with column A and B.
>
> A nullable
>
> B non nullable and unique
>
>
>
> I was wondering if the upsert statement would correctly guess the intent
> and given:
>
> Null,1 in the table
>
> If I upsert (1,1), it will treat it as an update and not an insert given
> the unique constraint on column B
>
>
>
> Without the unique constraint on column B, I would expect an insert…
>
>
>
> This is important to keep the semantic of PK, while artificially adding
> composite column to force data partitioning…
>
>
>
> The same question if I have now a PK like:
>
> A nullable
>
> B non nullable
>
> C non nullable
>
>
>
> And (B,C) constraint to UNIQUE
>
> Will upsert do the PK semantic as if PK was B,C only?
>
>
>
> OK, so instead of asking, I was thinking, let’s try:
>
> And here you go:
>
>
>
> >>create table t(a int, b int not null not droppable unique, primary
> key(a,b));
>
>
>
> --- SQL operation complete.
>
> >>insert into t values(null,1);
>
>
>
> --- 1 row(s) inserted.
>
> >>select * from t;
>
>
>
> A            B
> -----------  -----------
>
>           ?            1
>
> --- 1 row(s) selected.
>
> >>upsert into t values(null,2);
>
>
>
> *** ERROR[4099] A NULL operand is not allowed in predicate
> (TRAFODION.TPCDSGOOD.T.A = NULL).
>
>
>
> *** ERROR[8822] The statement was not prepared.
>
>
>
> >>upsert into t values(2,2);
>
>
>
> --- 1 row(s) inserted.
>
> >>upsert into t values(1,1);
>
>
>
> --- 1 row(s) inserted.
>
> >>select * from t;
>
>
>
> A            B
> -----------  -----------
>
>           ?            1
>
> --- 1 row(s) selected.
>
> >>insert into t values (2,2);
>
>
>
> *** ERROR[8102] The operation is prevented by a unique constraint.
>
>
>
> --- 0 row(s) inserted.
>
> >>select * from t;
>
>
>
> A            B
> -----------  -----------
>
>           ?            1
>
> --- 1 row(s) selected.
>
> >>select * from t where b=2;
>
>
>
> --- 0 

Re: MRv1 vs. MRv2

2016-02-02 Thread Suresh Subbiah
Hi Gunnar,

Agree with everything you state above, except "However, YARN is required
for some of the backup/restore..."
I am sorry I said the wrong thing way up in this thread. Backup/restore
uses MapReduce to copy files. I think it will work with both MRv1 and
MRv2. So we should be good with this line alone: "you install Hive and
whatever version of MapReduce you want to use for Hive". Backup/restore
should be able to use the same version of MapReduce as the one Hive is
using.

However, there is a caveat: Trafodion backup/restore uses HBase's
ExportSnapshot Java class. If this class is changed such that it can use
only MRv2 then we will have the same dependency. If we are looking for
simple install instructions then maybe we should just say that MRv2 (YARN)
is required and can be used for both Hive and backup/restore?

Thanks
Suresh


On Tue, Feb 2, 2016 at 6:20 PM, Gunnar Tapper 
wrote:

> Hi,
>
> I am a bit lost. Per previous messages, Hive requires MapReduce. So,
> MapReduce must be required for full function. I can see that MapReduce is
> not required if you don't use the Hive functionality.
>
> The Jira Hans pointed to seems to suggest to use MapReduce in lieu of
> YARN, which must mean MRv1 since MRv2 is part of YARN.
>
> From what I now understand, you install Hive and whatever version of
> MapReduce you want to use for Hive. However, YARN is required for some of
> the backup/restore capabilities so you always need to install MRv2 (since
> its part of YARN). So, MRv1 is relevant ONLY IF your installation is using
> MRv1 for Hive processing.
>
> Did I get that right?
>
> I don't think it's wise to discuss exceptions such as "you don't need
> MapReduce if you don't plan to use Hive via Trafodion" in the first
> revision of the Trafodion Provisioning Guide. Too many angles dancing on a
> needle's head. Instead, let's keep the requirements as simple as we can.
>
> Thanks,
>
> Gunnar
>
> On Tue, Feb 2, 2016 at 3:44 PM, Amanda Moran 
> wrote:
>
>> I have done many installs without Yarn or MapReduce installed at all.
>> Trafodion runs fine :)
>>
>> On Tue, Feb 2, 2016 at 2:39 PM, Hans Zeller 
>> wrote:
>>
>>> No, it is not required in the build environment.
>>>
>>> Hans
>>>
>>> On Tue, Feb 2, 2016 at 1:54 PM, Gunnar Tapper 
>>> wrote:
>>>
>>>> Does this mean that MRv1 is now required in the build environment?
>>>>
>>>> On Tue, Feb 2, 2016 at 2:12 PM, Hans Zeller 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> That decision would be made by Hive, not Trafodion. For people who use
>>>>> install_local_hadoop, we recently changed that setup to use local
>>>>> MapReduce, not YARN, see
>>>>> https://issues.apache.org/jira/browse/TRAFODION-1781.
>>>>>
>>>>> Hans
>>>>>
>>>>> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper <
>>>>> tapper.gun...@gmail.com> wrote:
>>>>>
>>>>>> Hi Suresh:
>>>>>>
>>>>>> Thanks for the information.
>>>>>>
>>>>>> Given from what you write, it seems that YARN with MRv2 is required
>>>>>> for full functionality.
>>>>>>
>>>>>> MRv1 is a separate install in current distributions, which is why I
>>>>>> am asking about it. How does Trafodion decide to run the MapReduce job as
>>>>>> MRv1 vs. MRv2 if both are installed?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Gunnar
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
>>>>>> suresh.subbia...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I don't think Trafodion requires YARN for most activity.
>>>>>>>
>>>>>>> For Hive table access, Trafodion uses Hive metadata access Java API
>>>>>>> and libhdfs to actually scan the data file. Therefore YARN is not needed
>>>>>>> for Hive access.
>>>>>>> YARN is not needed for native Hbase or Trafodion table access too.
>>>>>>> YARN is needed for backup/restore, since the HBase exportSnapshot
>>>>>>> class Trafodion calls, used MapReduce to copy large snapshot files 
>>>>>>> to/from
>>>>>>> the backup location.
>>>>>>> YARN is also needed for developer regressions as some vanilla Hive
>>>>>>> commands are executed during the regression run.
>>>>>>> For the last 2 lines I think both MRv1 and MRv2 is supported.
>>>>>>>
>>>>>>> Thanks
>>>>>>> Suresh
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper <
>>>>>>> tapper.gun...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>>>>>>>
>>>>>>>> --
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Gunnar
>>>>>>>> *If you think you can you can, if you think you can't you're right.*
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>>
>>>>>> Gunnar
>>>>>> *If you think you can you can, if you think you can't you're right.*
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>>
>>>> Gunnar
>>>> *If you think you can you can, if you think you can't you're right.*
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Amanda Moran
>>
>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


Re: fixing/checking corrupted metadata?

2016-02-02 Thread Suresh Subbiah
Here is the syntax for cleanup.
https://cwiki.apache.org/confluence/display/TRAFODION/Metadata+Cleanup

We need to add this to the manual that Gunnar created. I will file a JIRA
to raise an error and exit early if the requested compression type is not
available.

Thanks
Suresh

On Tue, Feb 2, 2016 at 5:05 PM, Eric Owhadi  wrote:

> Great thanks for the info, very helpful.
>
> You mention Trafodion documentation, in what DOC is it described? I looked
> for it in Trafodion Command Interface Guide and Trafodion SQL Reference
> Manual with no luck? The other doc titles did not look promising?
>
> Eric
>
>
>
>
>
> *From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 4:54 PM
>
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: fixing/checking corrupted metadata?
>
>
>
> Dave mentioned ‘cleanup table customer’. You can use that if you know
> which table is messed up in metadata.
>
>
>
> Or one can use:
>
>   cleanup metadata, check, return details; to find out all entries
> which may be corrupt.
>
> and then:
>
>   cleanup metadata, return details;
>
>
>
> Cleanup command is also documented in trafodion documentation which is a
> good place to check.
>
>
>
> anoop
>
>
>
> *From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 2:49 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: fixing/checking corrupted metadata?
>
>
>
> Right.  I mentioned this only because reinstalling local_hadoop was
> mentioned.  Reinitializing Trafodion would be quicker, but just as fatal
> for existing data.
>
>
>
> *From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 2:43 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: fixing/checking corrupted metadata?
>
>
>
> Only do that if you’re willing to get rid of your entire database.
>
>
>
> *From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 2:41 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: fixing/checking corrupted metadata?
>
>
>
> You might want to try sqlci initialize trafodion, drop; initialize
> trafodion;
>
>
>
>
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 2:36 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* fixing/checking corrupted metadata?
>
>
>
> I have been playing on my dev environment with this DDL:
>
> create table Customer
>
> (
>
> c_customer_sk   int not null,
>
> c_customer_id   char(16) CHARACTER SET UTF8 not null,
>
> c_current_cdemo_sk  int,
>
> c_current_hdemo_sk  int,
>
> c_current_addr_sk   int,
>
> c_first_shipto_date_sk  int,
>
> c_first_sales_date_sk   int,
>
> c_salutationchar(10) CHARACTER SET UTF8,
>
> c_first_namechar(20) CHARACTER SET UTF8,
>
> c_last_name char(30) CHARACTER SET UTF8,
>
> c_preferred_cust_flag   char(1),
>
> c_birth_day integer,
>
> c_birth_month   integer,
>
> c_birth_yearinteger,
>
> c_birth_country varchar(20) CHARACTER SET UTF8,
>
> c_login char(13) CHARACTER SET UTF8,
>
> c_email_address char(50) CHARACTER SET UTF8,
>
> c_last_review_date_sk   int,
>
> primary key (c_customer_sk)
>
> )SALT USING 2 PARTITIONS
>
>   HBASE_OPTIONS
>
>   (
>
> DATA_BLOCK_ENCODING = 'FAST_DIFF',
>
>COMPRESSION = 'SNAPPY'
>
>   );
>
>
>
> After a long time and supposedly 35 retries, it complained about the lack
> of SNAPPY compression support in local_hadoop.
>
>
>
> That’s fine, so I decided to retry with:
>
> create table Customer
>
> (
>
> c_customer_sk   int not null,
>
> c_customer_id   char(16) CHARACTER SET UTF8 not null,
>
> c_current_cdemo_sk  int,
>
> c_current_hdemo_sk  int,
>
> c_current_addr_sk   int,
>
> c_first_shipto_date_sk  int,
>
> c_first_sales_date_sk   int,
>
> c_salutationchar(10) CHARACTER SET UTF8,
>
> c_first_namechar(20) CHARACTER SET UTF8,
>
> c_last_name char(30) CHARACTER SET UTF8,
>
> c_preferred_cust_flag   char(1),
>
> c_birth_day integer,
>
> c_birth_month   integer,
>
> c_birth_yearinteger,
>
> c_birth_country varchar(20) CHARACTER SET UTF8,
>
> c_login char(13) CHARACTER SET UTF8,
>
> c_email_address char(50) CHARACTER SET UTF8,
>
> c_last_review_date_sk   int,
>
> primary key (c_customer_sk)
>
> )SALT USING 2 PARTITIONS
>
>   HBASE_OPTIONS
>
>   (
>
> DATA_BLOCK_ENCODING = 'FAST_DIFF'
>
> -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
>
>   );
>
>
>
> And this time it takes forever and never completes (I waited 20 minutes, then
> killed it).
>
>
>
> I am assuming that the second attempt might be the consequence of the
> first failure t

Re: How to change default CQD?

2016-02-02 Thread Suresh Subbiah
Yes, the same copy must be maintained on all nodes. This is similar to the
ms.env file that is in the same directory. I too feel that the _MD_.default
method is easier to manage. I mention it here only for the sake of
completeness.

With both _MD_.default and the file method, if access is through a
connectivity service, then a new instance of mxosrvr may be needed (as
these attributes are read at process initialization time).

Thanks
Suresh


On Tue, Feb 2, 2016 at 2:34 PM, Eric Owhadi  wrote:

> Interesting, but how do we manage this on a cluster? Do we have to
> maintain the same copy on each node? (just curious, I will use the 
> _*MD*_.default
> method)
>
>
>
> Eric
>
>
>
> *From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
> *Sent:* Tuesday, February 2, 2016 2:29 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: How to change default CQD?
>
>
>
> Hi,
>
>
>
> One can also create (or add to the file it already exists) a text file
> called SQSystemDefaults.conf in the directory $MY_SQROOT/etc and add
> attribute name and the value it should be set to in the file.
>
> I think each attribute should in a separate line and the attribute value
> should NOT be surrounded by single quotes as we would do from sqlci/trafci.
>
> This file is also created by install_local_hadoop. if this script was
> used, you may see an empty file with a few comments in your instance.
>
>
>
> Thanks
>
> Suresh
>
>
>
>
>
> On Tue, Feb 2, 2016 at 12:34 PM, Eric Owhadi 
> wrote:
>
> Thanks Anoop,
>
> So I will use the following insert statement:
>
> insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
> it is inserted');
>
>
>
> Eric
>
>
>
>
>
>
>
> *From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 12:29 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: How to change default CQD?
>
>
>
> You can change it in code by modifying the defaults array in
> sqlcomp/nadefaults.cpp
>
> or you can insert the default value in system defaults table
> trafodion.“_MD_”.defaults.
>
> If you do the insert, you will need to restart dcs servers to pick up the
> new value.
>
>
>
> anoop
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 10:26 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* How to change default CQD?
>
>
>
> Hello Trafodioneers,
>
> I have seen in the code that there is a way to globally change the default
> Control Query Default values. But I can’t find it in the docs.
>
> Can someone help me on this?
>
> Regards,
> Eric
>
>
>
>
>


Re: MRv1 vs. MRv2

2016-02-02 Thread Suresh Subbiah
Hi,

I don't think Trafodion requires YARN for most activity.

For Hive table access, Trafodion uses the Hive metadata access Java API and
libhdfs to actually scan the data files. Therefore YARN is not needed for
Hive access.
YARN is not needed for native HBase or Trafodion table access either.
YARN is needed for backup/restore, since the HBase ExportSnapshot class
that Trafodion calls uses MapReduce to copy large snapshot files to/from the
backup location.
YARN is also needed for developer regressions, as some vanilla Hive commands
are executed during the regression run.
For the last 2 items I think both MRv1 and MRv2 are supported.

Thanks
Suresh


On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
wrote:

> Hi,
>
> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


Re: How to change default CQD?

2016-02-02 Thread Suresh Subbiah
Hi,

One can also create (or add to the file if it already exists) a text file
called SQSystemDefaults.conf in the directory $MY_SQROOT/etc and add the
attribute name and the value it should be set to in the file.
I think each attribute should be on a separate line and the attribute value
should NOT be surrounded by single quotes as we would do from sqlci/trafci.
This file is also created by install_local_hadoop. If this script was used,
you may see an empty file with a few comments in your instance.

Thanks
Suresh


On Tue, Feb 2, 2016 at 12:34 PM, Eric Owhadi  wrote:

> Thanks Anoop,
>
> So I will use the following insert statement:
>
> insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
> it is inserted');
>
>
>
> Eric
>
>
>
>
>
>
>
> *From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 12:29 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: How to change default CQD?
>
>
>
> You can change it in code by modifying the defaults array in
> sqlcomp/nadefaults.cpp
>
> or you can insert the default value in system defaults table
> trafodion.“_MD_”.defaults.
>
> If you do the insert, you will need to restart dcs servers to pick up the
> new value.
>
>
>
> anoop
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 10:26 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* How to change default CQD?
>
>
>
> Hello Trafodioneers,
>
> I have seen in the code that there is a way to globally change the default
> Control Query Default values. But I can’t find it in the docs.
>
> Can someone help me on this?
>
> Regards,
> Eric
>
>
>


Re: nullable primary key index column?

2016-02-02 Thread Suresh Subbiah
Hi All,

Sorry for not responding to the most recent email previously.

This cqd needs to be set only at the time of table creation (i.e. DDL
time). It is not necessary for DML.

Thanks
Suresh


On Tue, Feb 2, 2016 at 2:19 PM, Eric Owhadi  wrote:

> Oh, that looks better. I think this will work with what I am trying to do.
>
> Let me try it.
>
> Oh, and this CQD must be set only at time of table creation? Or should it
> be globally set using _*MD*_.default?
>
> Eric
>
>
>
> *From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
> *Sent:* Tuesday, February 2, 2016 2:14 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: nullable primary key index column?
>
>
>
> Hi,
>
>
>
> Hope I did not misunderstand the question.
>
>
>
> A table can have more than one column in its key to be nullable, as long
> as the cqd Anoop gave is set. All columns in the key can be nullable too.
>
> If the first column is nullable and there are other key columns that are
> either nullable or non-nullable, then the first column can have null value
> for more than 1 row, as long subsequent key columns have other values.
>
> For example
>
> >>cqd allow_nullable_unique_key_constraint 'on' ;
>
>
>
> --- SQL operation complete.
>
> >>create table t1 (a int, b int, primary key (a,b)) ;
>
>
>
> --- SQL operation complete.
>
> >>showddl t1 ;
>
>
>
> CREATE TABLE TRAFODION.JIRA.T1
>
>   (
>
> AINT DEFAULT NULL SERIALIZED
>
>   , BINT DEFAULT NULL SERIALIZED
>
>   , PRIMARY KEY (A ASC, B ASC)
>
>   )
>
> ;
>
>
>
> --- SQL operation complete.
>
> >>insert into t1(a) values (1);
>
>
>
> --- 1 row(s) inserted.
>
> >>insert into t1(b) values (2) ;
>
>
>
> --- 1 row(s) inserted.
>
> >>insert into t1(a) values(3) ;
>
>
>
> --- 1 row(s) inserted.
>
> >>select * from t1 ;
>
>
>
> A            B
> -----------  -----------
>
>           1            ?
>           3            ?
>           ?            2
>
> --- 3 row(s) selected.
>
>
>
> If the table has only one key column and it is nullable, then at most only
> one row can have null as is value for this column.
>
>
>
> There is an issue with inserting null value for all columns in the key as
> described in JIRA 1801, which also outlines  a fix suggested by Anoop.
>
>
>
> Thanks
>
> Suresh
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Tue, Feb 2, 2016 at 1:29 PM, Anoop Sharma 
> wrote:
>
>
>
> cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT ‘ON’;
>
>
>
> then create table with nullable pkey col.
>
>
>
> only one null value is allowed.
>
>
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 11:27 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* nullable primary key index column?
>
>
>
> Dear Trafodioneers,
>
> I am wondering if it is possible to use a composite primary key with the
> first column making up the primary key composite being nullable?
>
> If yes, is there any restriction, like only one row can be null for that
> nullable column?
>
> Thanks in advance for the help,
> Eric
>
>
>


Re: nullable primary key index column?

2016-02-02 Thread Suresh Subbiah
Hi,

Hope I did not misunderstand the question.

A table can have more than one nullable column in its key, as long as
the cqd Anoop gave is set. All columns in the key can be nullable too.
If the first column is nullable and there are other key columns that are
either nullable or non-nullable, then the first column can have a null value
for more than 1 row, as long as subsequent key columns have other values.
For example:
>>cqd allow_nullable_unique_key_constraint 'on' ;

--- SQL operation complete.
>>create table t1 (a int, b int, primary key (a,b)) ;

--- SQL operation complete.
>>showddl t1 ;

CREATE TABLE TRAFODION.JIRA.T1
  (
AINT DEFAULT NULL SERIALIZED
  , BINT DEFAULT NULL SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
;

--- SQL operation complete.
>>insert into t1(a) values (1);

--- 1 row(s) inserted.
>>insert into t1(b) values (2) ;

--- 1 row(s) inserted.
>>insert into t1(a) values(3) ;

--- 1 row(s) inserted.
>>select * from t1 ;

A            B
-----------  -----------

          1            ?
          3            ?
          ?            2

--- 3 row(s) selected.

If the table has only one key column and it is nullable, then at most one
row can have null as its value for this column.

There is an issue with inserting a null value for all columns in the key, as
described in JIRA 1801, which also outlines a fix suggested by Anoop.

Thanks
Suresh






On Tue, Feb 2, 2016 at 1:29 PM, Anoop Sharma  wrote:

>
>
> cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT ‘ON’;
>
>
>
> then create table with nullable pkey col.
>
>
>
> only one null value is allowed.
>
>
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 11:27 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* nullable primary key index column?
>
>
>
> Dear Trafodioneers,
>
> I am wondering if it is possible to use a composite primary key with the
> first column making up the primary key composite being nullable?
>
> If yes, is there any restriction, like only one row can be null for that
> nullable column?
>
> Thanks in advance for the help,
> Eric
>


Re: Apache Trafodion release

2016-01-20 Thread Suresh Subbiah
Echoing everyone's thoughts and gratitude. Thank you, Roberta, for this
wonderful job on a task that was so crucial for Trafodion.

Best regards,
Suresh

On Wed, Jan 20, 2016 at 12:39 PM, Eric Owhadi  wrote:

> Yes that is very impressive, Thanks a million  Roberta.
>
> Eric
>
>
>
> *From:* Amanda Moran [mailto:amanda.mo...@esgyn.com]
> *Sent:* Wednesday, January 20, 2016 11:25 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: Apache Trafodion release
>
>
>
> Agreed!!
>
>
>
> Thanks for all the hardwork Roberta!!! Really appreciate it!
>
>
>
> Should all be smooth sailing now :)
>
>
>
> On Wed, Jan 20, 2016 at 8:35 AM, Stack  wrote:
>
> Excellent writeup.
>
> St.Ack
>
>
>
> On Wed, Jan 20, 2016 at 8:07 AM, Roberta Marton 
> wrote:
>
> My term as Release Manager for Apache Trafodion release 1.3.0 has been
> completed.  It has been a pleasure working with the different Apache teams
> and Trafodion mentors, committers, and contributors.
>
> Now that release 1.3.0 is available, we are starting to work on our
> next release in about 6 to 8 weeks.  For the next release we would like to:
>
> a.   Release binaries along with the source tar file
>
> b.  Support vanilla releases for Hadoop and HBase
>
> c.   Improve the build process (ongoing)
>
> If you have any comments on these items or would like to see other items
> included please respond to this email.
>
>
>
> Before I sign-off, I would like to recap the work that went into this
> release.  This was our first Apache Incubator release but we did several
> open source releases before joining Apache.  The following is the list of
> new features we added since our last non Apache open source release:
>
> -  IDENTITY columns in SQL
>
> -  HBase region splitting and re-balancing with active transaction
>
> -  Support for batch feature in Installer
>
> -  Cloudera parcels feature in Installer
>
> -  Phase 1 of multi-temperature data in SQL
>
> -  Compile time interface for Table Mapping UDFs
>
> -  Allow column attributes to be changed on a table for SQL
>
> -  Enable HBase serialization feature
>
> -  Enable EXPLAIN_IN_RMS feature
>
> -  Reduce path length for IUD operations
>
> -  Enable compressed Internal Format for the scan operator
>
> -  Many build improvements
>
> In addition we fixed over 100 JIRA's.
>
> Finally and more importantly, I would like to thank everyone who helped
> with the release.  I received tremendous support from everyone.  Special
> thanks go to:
>
> -  Sandhya Sundaresan who spent weeks adding Apache license text
> and getting Apache RAT to run without incident.
>
> -  Anu Hedge for simplifying the Trafodion build by unifying
> several build steps into a simple make operation.
>
> -  Hans Zeller for providing many changes including build
> improvements to integrate SQL Java builds and removing the requirement for
> consumers to download and build specific versions of Hadoop and HBase.
>
> -  Steve Varnau who was invaluable in answering all my build
> questions, fixing all my git errors and helping with setting up the release
> 1.3 directories and artifacts
>
> -  Amanda Moran who tested all the binary artifacts created by
> the source builds on different Hadoop installations.
>
> -  Gunnar Tapper who worked countless hours designing and
> implementing our revamped Apache Trafodion web site – go check it out!
>
> -  Kevin Deyager and Justin Mclean for helping with licensing and
> copyright issues
>
> -  Many Trafodion contributors whom I pestered with questions,
> which were competently answered, and who then reviewed changes to the product.
>
> -  Michael Stack and Devaraj Das for handling mentoring tasks.
>
>Regards,
>
>Roberta
>
>
>
>
>
>
>
> --
>
> Thanks,
>
>
>
> Amanda Moran
>


Re: Spark integration with Trafodion

2016-01-14 Thread Suresh Subbiah
Hi Jerry,

We are in the process of creating connectors between Spark and Trafodion.
Right now we have an example using the JdbcRdd approach that Ming mentioned.
Could you please share more about how you would like to use Spark and
Trafodion together? Would you be interested in accessing Spark data from
Trafodion, or vice versa?
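
To give an idea of the JDBC route from the Spark side, here is a minimal sketch using
Spark's generic JDBC data source (the driver class name, connection URL, credentials,
and table name are assumptions to adapt to your installation, and a recent Spark
version is assumed):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class TrafodionSparkRead {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("trafodion-jdbc-read")
        .getOrCreate();

    // Read a Trafodion table through the generic Spark JDBC source.
    Dataset<Row> df = spark.read()
        .format("jdbc")
        .option("url", "jdbc:t4jdbc://dcsmaster:23400/:")    // assumed DCS master host/port
        .option("driver", "org.trafodion.jdbc.t4.T4Driver")  // Trafodion T4 JDBC driver class
        .option("dbtable", "TRAFODION.SEABASE.T1")           // hypothetical table
        .option("user", "user")
        .option("password", "password")
        .load();

    df.show();   // the data can now be joined with other Spark sources, fed to ML, etc.
    spark.stop();
  }
}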


Ming, Spark SQL could complement Trafodion in many ways since it can (a)
talk to a wider variety of data sources than Trafodion currently can, (b) has
an API in Scala, and (c) has a large set of ML and other functions which
Trafodion currently does not. On the other hand, Trafodion can provide
transactions, can likely handle OLTP-type queries better, and possibly offers
other benefits. I have not made a comparison, so this is a guess.

Thanks
Suresh

On Thu, Jan 14, 2016 at 7:14 PM, Liu, Ming (Ming)  wrote:

> Hi, Jerry,
>
>
>
> We used to have a Spark program using jdbcRDD to access Trafodion tables.
> It is the standard way for Spark to access any other relational database as
> well. But I have never tried it with Spark SQL.
>
> I know very little about Spark, so I think others in the list will give
> you better advices.
>
>
>
> I don’t know your requirements, so this may not make any sense to you: I
> feel it is not necessary to run Spark SQL. Trafodion itself is a good SQL
> database. It supports more of the SQL standard than Spark SQL, even with full ACID
> transaction protection. So you can use SQL directly on Trafodion without a
> need to run Spark SQL on Trafodion.
>
>
>
> Thanks,
>
> Ming
>
>
>
> *From:* Jerry He [mailto:jerry...@gmail.com]
> *Sent:* January 15, 2016 7:14
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Spark integration with Trafodion
>
>
>
> Hi,
>
>
>
> Is there a Spark/Trafodion or SparkSQL/Trafodion connector available?  How
> do I use Trafodion with Spark SQL?
>
>
>
> Thanks.
>
>
>
> Jerry
>


Re: Command Interface Guide Ported to Asciidoc

2016-01-04 Thread Suresh Subbiah
The new document looks very nice Gunnar. It is pleasing to the eye and kept
me reading more even though I was not intending to.

Thanks
Suresh


On Sun, Dec 27, 2015 at 2:16 PM, Dave Birdsall 
wrote:

> You've been busy over the holidays!
>
> On Sat, Dec 26, 2015 at 11:25 PM, Gunnar Tapper 
> wrote:
>
>> Hi,
>>
>> I just finished porting the Trafodion Command Interface Guide to
>> asciidoc. LOTS of work!
>>
>> In case you're interested in looking at the guide before it's published
>> on trafodion.incubator.apache.org, then you can download the rendered
>> output from:
>>
>> https://drive.google.com/open?id=0BxlwNhWxn8iTVWs3U1JCYXpYYjg
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>