Re: MRv1 vs. MRv2

2016-02-02 Thread Gunnar Tapper
Thanks Suresh.

I agree; let's go with YARN/MRv2 for the initial version of the Provisioning
Guide. Later revisions can deal with the more complex stuff.

Thanks,

Gunnar

On Tue, Feb 2, 2016 at 7:49 PM, Suresh Subbiah 
wrote:

> Hi Gunnar,
>
> Agree with everything you state above, except "However, YARN is required
> for some of the backup/restore..."
> I am sorry I said the wrong thing earlier in this thread. Backup/restore
> uses MapReduce to copy files. I think it will work with both MRv1 and
> MRv2. So we should be good with this line alone: "you install Hive and
> whatever version of MapReduce you want to use for Hive". Backup/restore
> should be able to use the same version of MapReduce as the one Hive is
> using.
>
> However, there is a caveat: Trafodion backup/restore uses HBase's
> ExportSnapshot Java class. If this class is changed such that it can use
> only MRv2, then we will have the same dependency. If we are looking for
> simple install instructions, then maybe we should just say that MRv2 (YARN)
> is required and can be used for both Hive and backup/restore?
>
> Thanks
> Suresh
>
>
> On Tue, Feb 2, 2016 at 6:20 PM, Gunnar Tapper 
> wrote:
>
>> Hi,
>>
>> I am a bit lost. Per previous messages, Hive requires MapReduce. So,
>> MapReduce must be required for full functionality. I can see that MapReduce
>> is not required if you don't use the Hive functionality.
>>
>> The JIRA Hans pointed to seems to suggest using MapReduce in lieu of
>> YARN, which must mean MRv1, since MRv2 is part of YARN.
>>
>> From what I now understand, you install Hive and whatever version of
>> MapReduce you want to use for Hive. However, YARN is required for some of
>> the backup/restore capabilities, so you always need to install MRv2 (since
>> it's part of YARN). So, MRv1 is relevant ONLY IF your installation is using
>> MRv1 for Hive processing.
>>
>> Did I get that right?
>>
>> I don't think it's wise to discuss exceptions such as "you don't need
>> MapReduce if you don't plan to use Hive via Trafodion" in the first
>> revision of the Trafodion Provisioning Guide. Too many angels dancing on
>> the head of a pin. Instead, let's keep the requirements as simple as we can.
>>
>> Thanks,
>>
>> Gunnar
>>
>> On Tue, Feb 2, 2016 at 3:44 PM, Amanda Moran 
>> wrote:
>>
>>> I have done many installs without YARN or MapReduce installed at all.
>>> Trafodion runs fine :)
>>>
>>> On Tue, Feb 2, 2016 at 2:39 PM, Hans Zeller 
>>> wrote:
>>>
 No, it is not required in the build environment.

 Hans

 On Tue, Feb 2, 2016 at 1:54 PM, Gunnar Tapper 
 wrote:

> Does this mean that MRv1 is now required in the build environment?
>
> On Tue, Feb 2, 2016 at 2:12 PM, Hans Zeller 
> wrote:
>
>> Hi,
>>
>> That decision would be made by Hive, not Trafodion. For people who
>> use install_local_hadoop, we recently changed that setup to use local
>> MapReduce, not YARN, see
>> https://issues.apache.org/jira/browse/TRAFODION-1781.
>>
>> Hans
>>
>> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper <
>> tapper.gun...@gmail.com> wrote:
>>
>>> Hi Suresh:
>>>
>>> Thanks for the information.
>>>
>>> Given what you write, it seems that YARN with MRv2 is required
>>> for full functionality.
>>>
>>> MRv1 is a separate install in current distributions, which is why I
>>> am asking about it. How does Trafodion decide to run the MapReduce job
>>> as MRv1 vs. MRv2 if both are installed?
>>>
>>> Thanks,
>>>
>>> Gunnar
>>>
>>>
>>>
>>> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
>>> suresh.subbia...@gmail.com> wrote:
>>>
 Hi,

 I don't think Trafodion requires YARN for most activity.

 For Hive table access, Trafodion uses the Hive metadata access Java API
 and libhdfs to actually scan the data files. Therefore, YARN is not needed
 for Hive access.
 YARN is not needed for native HBase or Trafodion table access either.
 YARN is needed for backup/restore, since the HBase ExportSnapshot
 class Trafodion calls uses MapReduce to copy large snapshot files to/from
 the backup location.
 YARN is also needed for developer regressions, as some vanilla Hive
 commands are executed during the regression run.
 For the last two items, I think both MRv1 and MRv2 are supported.

 Thanks
 Suresh


 On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper <
 tapper.gun...@gmail.com> wrote:

> Hi,
>
> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're
> right.*
>


>>>
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>
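
For reference, the backup/restore dependency discussed in this thread comes
from HBase's ExportSnapshot class, which runs as a MapReduce job. A hedged
sketch of how such an export is typically invoked from the command line; the
snapshot name and backup URL are made-up placeholders, not values from this
thread:

  hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -snapshot my_trafodion_snapshot \
    -copy-to hdfs://backup-host:8020/hbase \
    -mappers 16

Because this is a MapReduce job, it executes under whichever framework (MRv1,
or MRv2 on YARN) the cluster's Hadoop configuration selects.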

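For context on Hans's install_local_hadoop note above, the MRv1-vs-MRv2 choice
is governed by Hadoop's mapreduce.framework.name property, not by Trafodion. A
hedged illustration of a local (non-YARN) setup in mapred-site.xml:

  <property>
    <name>mapreduce.framework.name</name>
    <!-- "yarn" selects MRv2 on YARN; "classic" selects MRv1;
         "local" runs jobs in-process -->
    <value>local</value>
  </property>

Hive, and anything else that submits MapReduce jobs, picks up this setting,
which matches Hans's point that the decision is made by Hive, not Trafodion.
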
RE: Trafodion 1.3 and Hortonworks

2016-02-02 Thread Jin, Jian (Seth)
Just got Aven's comment: we didn't test on HDP previously. Now the question
arises: do we need to support it?
If yes, we need to plan to thoroughly test on HDP and make it work.

Br,

Seth

From: Jin, Jian (Seth) [mailto:jian@esgyn.cn]
Sent: February 3, 2016 10:37
To: user@trafodion.incubator.apache.org; Ma, Sheng-Chen (Aven) 

Subject: RE: Trafodion 1.3 and Hortonworks

Hi all,

Do we support HDP for Trafodion 1.3? My understanding is that we support only
vanilla HBase and CDH HBase 0.98.10.
Aven worked on this; we may need to get his comment.

Br,

Seth

From: Amanda Moran [mailto:amanda.mo...@esgyn.com]
Sent: February 3, 2016 5:07
To: user@trafodion.incubator.apache.org
Subject: Re: Trafodion 1.3 and Hortonworks

HDP 2.2

On Tue, Feb 2, 2016 at 1:06 PM, Gunnar Tapper wrote:
Hi Amanda,

Thanks for the CDH clarification.

What about HDP?

Thanks,

Gunnar

On Tue, Feb 2, 2016 at 2:05 PM, Amanda Moran wrote:
Trafodion supports CDH 5.2 and 5.3, both on HBase 0.98.

On Tue, Feb 2, 2016 at 1:04 PM, Gunnar Tapper wrote:
Hi,

It's my understanding that Trafodion supports both CDH 5.4.4 and CDH 5.4.5 and, 
therefore, HBase 1.0.

What about HDP 2.2? It uses HBase 0.98.4.

*working on the 1.3 Provisioning Guide*

--
Thanks,

Gunnar
If you think you can you can, if you think you can't you're right.



--
Thanks,

Amanda Moran



--
Thanks,

Gunnar
If you think you can you can, if you think you can't you're right.



--
Thanks,

Amanda Moran


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Sean Broeder
+1



*From:* Roberta Marton [mailto:roberta.mar...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 4:06 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



We may want to add this to our knowledge base as an FAQ.



   Roberta



RE: fixing/checking corrupted metadata?

2016-02-02 Thread Eric Owhadi
Awesome, bookmarked :-). And yes, it solved the problem.

Thanks all for the precious help,
Eric




RE: Dropping Schema blocked if histograms exist?

2016-02-02 Thread Venkat Muthuswamy
Hi Roberta,



Thanks for including this change along with the existing JIRA, as it would
complete the scenario.



“Create the histogram tables when schema is created”…

“Drop the histogram tables when schema is dropped”…



Thanks

Venkat



*From:* Roberta Marton [mailto:roberta.mar...@esgyn.com]
*Sent:* Tuesday, February 02, 2016 3:21 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Dropping Schema blocked if histograms exist?



I have also noticed this behavior while working on a fix for JIRA
TRAFODION-1789. IMHO, we should drop the schema without requiring CASCADE
if only system-created objects exist. In fact, I was planning to deliver
this behavioral change as part of the fix for TRAFODION-1789.



Roberta





*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 3:16 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: Dropping Schema blocked if histograms exist?



Hi,



I have run into this too. DROP SCHEMA treats SB_HISTOGRAMS and
SB_HISTOGRAM_INTERVALS as user tables. So when those exist, I either have
to explicitly drop them or use CASCADE on DROP SCHEMA.
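
A hedged sketch of the explicit-drop alternative Dave describes, using the
schema and system table names from this thread (exact behavior may vary by
version):

>>drop table test_sandbox_schema.sb_histograms;
>>drop table test_sandbox_schema.sb_histogram_intervals;
>>drop schema test_sandbox_schema;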



I have no opinion on whether this is correct behavior or not.



Dave



*From:* Carol Pearson [mailto:carol.pearson...@gmail.com]
*Sent:* Tuesday, February 2, 2016 3:05 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Dropping Schema blocked if histograms exist?



I was trying to drop a schema after I updated statistics on the table.  I'd
dropped all of the objects that I created in the schema (one table,
cjpjunk, because that's all I was testing...) but my drop schema failed:



>>drop schema test_sandbox_schema;



*** ERROR[1028] The schema must be empty.  It contains at least one object
SB_HISTOGRAMS.



--- SQL operation failed with errors.





So I selected from the OBJECTS metadata table to see what's in the
test_sandbox_schema, and all I see are histogram tables, created when I
updated stats on good ol' cjpjunk:



>>select distinct object_name from "_MD_".objects where schema_name like
'TEST_SANDBOX_SCHEMA%';



OBJECT_NAME

---



SB_HISTOGRAMS

SB_HISTOGRAMS_PK

SB_HISTOGRAM_INTERVALS

SB_HISTOGRAM_INTERVALS_PK

__SCHEMA__

--- 5 row(s) selected.



Nothing but system-created tables, and the SB_HISTOGRAMS tables exist only
because I did an UPDATE STATISTICS command.
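
For context, a hedged example of the kind of statement that creates these
histogram tables as a side effect; cjpjunk is the table from this thread, and
the full UPDATE STATISTICS syntax is in the Trafodion SQL Reference Manual:

>>update statistics for table cjpjunk on every column;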



Now, I'm able to drop the schema with a drop schema cascade command:



>>drop schema test_sandbox_schema cascade;



--- SQL operation complete.





but why was CASCADE required? Shouldn't I be able to drop the schema
without CASCADE, since I didn't physically create any of these objects and
wouldn't generally need to be aware of them from a user perspective?



Thanks!

-Carol P.

---

Email:    carol.pearson...@gmail.com

Twitter:  @CarolP222

---



Re: fixing/checking corrupted metadata?

2016-02-02 Thread Suresh Subbiah
Here is the syntax for cleanup.
https://cwiki.apache.org/confluence/display/TRAFODION/Metadata+Cleanup

We need to add this to the manual that Gunnar created. I will file a JIRA
to raise an error and exit early if the requested compression type is not
available.

Thanks
Suresh


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Eric Owhadi
Great, thanks for the info, very helpful.

You mention Trafodion documentation; in which doc is it described? I looked
for it in the Trafodion Command Interface Guide and the Trafodion SQL
Reference Manual with no luck. The other doc titles did not look promising.

Eric





*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 4:54 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Dave mentioned ‘cleanup table customer’. You can use that if you know which
table is messed up in metadata.



Or one can use:

  cleanup metadata, check, return details;

to find out all entries which may be corrupt, and then:

  cleanup metadata, return details;



The cleanup command is also documented in the Trafodion documentation, which
is a good place to check.



anoop
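
A hedged illustration of running the commands Anoop lists from an sqlci
session; output is omitted here since it depends on what, if anything, is
corrupt:

>>cleanup metadata, check, return details;
>>cleanup metadata, return details;
>>cleanup table customer;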



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:49 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Right.  I mentioned this only because reinstalling local_hadoop was
mentioned.  Reinitializing Trafodion would be quicker, but just as fatal
for existing data.



*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:43 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Only do that if you’re willing to get rid of your entire database.



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try, from sqlci: initialize trafodion, drop; initialize
trafodion;
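
Spelled out as a hedged sqlci session (note Dave's warning below: this drops
the entire Trafodion metadata and, with it, access to existing data):

>>initialize trafodion, drop;
>>initialize trafodion;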







*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer
(
c_customer_sk           int not null,
c_customer_id           char(16) CHARACTER SET UTF8 not null,
c_current_cdemo_sk      int,
c_current_hdemo_sk      int,
c_current_addr_sk       int,
c_first_shipto_date_sk  int,
c_first_sales_date_sk   int,
c_salutation            char(10) CHARACTER SET UTF8,
c_first_name            char(20) CHARACTER SET UTF8,
c_last_name             char(30) CHARACTER SET UTF8,
c_preferred_cust_flag   char(1),
c_birth_day             integer,
c_birth_month           integer,
c_birth_year            integer,
c_birth_country         varchar(20) CHARACTER SET UTF8,
c_login                 char(13) CHARACTER SET UTF8,
c_email_address         char(50) CHARACTER SET UTF8,
c_last_review_date_sk   int,
primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.
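
One way to check codec support up front: a hedged sketch using HBase's
CompressionTest utility against a scratch file (the path is a made-up
example); if SNAPPY is unavailable, this fails quickly instead of retrying
the CREATE TABLE:

  hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy_check snappy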



That’s fine, so I decided to retry with:

create table Customer
(
c_customer_sk           int not null,
c_customer_id           char(16) CHARACTER SET UTF8 not null,
c_current_cdemo_sk      int,
c_current_hdemo_sk      int,
c_current_addr_sk       int,
c_first_shipto_date_sk  int,
c_first_sales_date_sk   int,
c_salutation            char(10) CHARACTER SET UTF8,
c_first_name            char(20) CHARACTER SET UTF8,
c_last_name             char(30) CHARACTER SET UTF8,
c_preferred_cust_flag   char(1),
c_birth_day             integer,
c_birth_month           integer,
c_birth_year            integer,
c_birth_country         varchar(20) CHARACTER SET UTF8,
c_login                 char(13) CHARACTER SET UTF8,
c_email_address         char(50) CHARACTER SET UTF8,
c_last_review_date_sk   int,
primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the second attempt's hang might be a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/reinstall of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Roberta Marton
Cleanup works well for me.



 Roberta



*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:43 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Only do that if you’re willing to get rid of your entire database.



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try sqlci initialize trafodion, drop; initialize
trafodion;







*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer

(

c_customer_sk   int not null,

c_customer_id   char(16) CHARACTER SET UTF8 not null,

c_current_cdemo_sk  int,

c_current_hdemo_sk  int,

c_current_addr_sk   int,

c_first_shipto_date_sk  int,

c_first_sales_date_sk   int,

c_salutationchar(10) CHARACTER SET UTF8,

c_first_namechar(20) CHARACTER SET UTF8,

c_last_name char(30) CHARACTER SET UTF8,

c_preferred_cust_flag   char(1),

c_birth_day integer,

c_birth_month   integer,

c_birth_yearinteger,

c_birth_country varchar(20) CHARACTER SET UTF8,

c_login char(13) CHARACTER SET UTF8,

c_email_address char(50) CHARACTER SET UTF8,

c_last_review_date_sk   int,

primary key (c_customer_sk)

)SALT USING 2 PARTITIONS

  HBASE_OPTIONS

  (

DATA_BLOCK_ENCODING = 'FAST_DIFF',

   COMPRESSION = 'SNAPPY'

  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer

(

c_customer_sk   int not null,

c_customer_id   char(16) CHARACTER SET UTF8 not null,

c_current_cdemo_sk  int,

c_current_hdemo_sk  int,

c_current_addr_sk   int,

c_first_shipto_date_sk  int,

c_first_sales_date_sk   int,

c_salutationchar(10) CHARACTER SET UTF8,

c_first_namechar(20) CHARACTER SET UTF8,

c_last_name char(30) CHARACTER SET UTF8,

c_preferred_cust_flag   char(1),

c_birth_day integer,

c_birth_month   integer,

c_birth_yearinteger,

c_birth_country varchar(20) CHARACTER SET UTF8,

c_login char(13) CHARACTER SET UTF8,

c_email_address char(50) CHARACTER SET UTF8,

c_last_review_date_sk   int,

primary key (c_customer_sk)

)SALT USING 2 PARTITIONS

  HBASE_OPTIONS

  (

DATA_BLOCK_ENCODING = 'FAST_DIFF'

-- not available in local_hadoop   COMPRESSION = 'SNAPPY'

  );



And this time it takes forever and never complete (waited 20 minute, then
killed it).



I am assuming that the second attempt might be the consequence of the first
failure that must have left things half done.



I know that I can do a full uninstall/re-install local Hadoop, but I was
wondering if there is a metadata clean up utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Anoop Sharma
Dave mentioned ‘cleanup table customer’. You can use that if you know which
table is messed up in metadata.



Or one can use:

  cleanup metadata, check, return details; to find out all entries
which may be corrupt.

and then:

  cleanup metadata, return details;



Cleanup command is also documented in trafodion documentation which is a
good place to check.



anoop



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:49 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Right.  I mentioned this only because reinstalling local_hadoop was
mentioned.  Reinitializing Trafodion would be quicker, but just as fatal
for existing data.



*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:43 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Only do that if you’re willing to get rid of your entire database.



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try sqlci initialize trafodion, drop; initialize
trafodion;







*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer

(

c_customer_sk   int not null,

c_customer_id   char(16) CHARACTER SET UTF8 not null,

c_current_cdemo_sk  int,

c_current_hdemo_sk  int,

c_current_addr_sk   int,

c_first_shipto_date_sk  int,

c_first_sales_date_sk   int,

c_salutationchar(10) CHARACTER SET UTF8,

c_first_namechar(20) CHARACTER SET UTF8,

c_last_name char(30) CHARACTER SET UTF8,

c_preferred_cust_flag   char(1),

c_birth_day integer,

c_birth_month   integer,

c_birth_yearinteger,

c_birth_country varchar(20) CHARACTER SET UTF8,

c_login char(13) CHARACTER SET UTF8,

c_email_address char(50) CHARACTER SET UTF8,

c_last_review_date_sk   int,

primary key (c_customer_sk)

)SALT USING 2 PARTITIONS

  HBASE_OPTIONS

  (

DATA_BLOCK_ENCODING = 'FAST_DIFF',

   COMPRESSION = 'SNAPPY'

  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer

(

c_customer_sk   int not null,

c_customer_id   char(16) CHARACTER SET UTF8 not null,

c_current_cdemo_sk  int,

c_current_hdemo_sk  int,

c_current_addr_sk   int,

c_first_shipto_date_sk  int,

c_first_sales_date_sk   int,

c_salutationchar(10) CHARACTER SET UTF8,

c_first_namechar(20) CHARACTER SET UTF8,

c_last_name char(30) CHARACTER SET UTF8,

c_preferred_cust_flag   char(1),

c_birth_day integer,

c_birth_month   integer,

c_birth_yearinteger,

c_birth_country varchar(20) CHARACTER SET UTF8,

c_login char(13) CHARACTER SET UTF8,

c_email_address char(50) CHARACTER SET UTF8,

c_last_review_date_sk   int,

primary key (c_customer_sk)

)SALT USING 2 PARTITIONS

  HBASE_OPTIONS

  (

DATA_BLOCK_ENCODING = 'FAST_DIFF'

-- not available in local_hadoop   COMPRESSION = 'SNAPPY'

  );



And this time it takes forever and never complete (waited 20 minute, then
killed it).



I am assuming that the second attempt might be the consequence of the first
failure that must have left things half done.



I know that I can do a full uninstall/re-install local Hadoop, but I was
wondering if there is a metadata clean up utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Sean Broeder
Right.  I mentioned this only because reinstalling local_hadoop was
mentioned.  Reinitializing Trafodion would be quicker, but just as fatal
for existing data.



*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:43 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



Only do that if you’re willing to get rid of your entire database.



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try sqlci initialize trafodion, drop; initialize
trafodion;







*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer

(

c_customer_sk   int not null,

c_customer_id   char(16) CHARACTER SET UTF8 not null,

c_current_cdemo_sk  int,

c_current_hdemo_sk  int,

c_current_addr_sk   int,

c_first_shipto_date_sk  int,

c_first_sales_date_sk   int,

c_salutationchar(10) CHARACTER SET UTF8,

c_first_namechar(20) CHARACTER SET UTF8,

c_last_name char(30) CHARACTER SET UTF8,

c_preferred_cust_flag   char(1),

c_birth_day integer,

c_birth_month   integer,

c_birth_yearinteger,

c_birth_country varchar(20) CHARACTER SET UTF8,

c_login char(13) CHARACTER SET UTF8,

c_email_address char(50) CHARACTER SET UTF8,

c_last_review_date_sk   int,

primary key (c_customer_sk)

)SALT USING 2 PARTITIONS

  HBASE_OPTIONS

  (

DATA_BLOCK_ENCODING = 'FAST_DIFF',

   COMPRESSION = 'SNAPPY'

  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer

(

c_customer_sk   int not null,

c_customer_id   char(16) CHARACTER SET UTF8 not null,

c_current_cdemo_sk  int,

c_current_hdemo_sk  int,

c_current_addr_sk   int,

c_first_shipto_date_sk  int,

c_first_sales_date_sk   int,

c_salutationchar(10) CHARACTER SET UTF8,

c_first_namechar(20) CHARACTER SET UTF8,

c_last_name char(30) CHARACTER SET UTF8,

c_preferred_cust_flag   char(1),

c_birth_day integer,

c_birth_month   integer,

c_birth_yearinteger,

c_birth_country varchar(20) CHARACTER SET UTF8,

c_login char(13) CHARACTER SET UTF8,

c_email_address char(50) CHARACTER SET UTF8,

c_last_review_date_sk   int,

primary key (c_customer_sk)

)SALT USING 2 PARTITIONS

  HBASE_OPTIONS

  (

DATA_BLOCK_ENCODING = 'FAST_DIFF'

-- not available in local_hadoop   COMPRESSION = 'SNAPPY'

  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the second attempt's hang might be a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Dave Birdsall
Only do that if you’re willing to get rid of your entire database.



*From:* Sean Broeder [mailto:sean.broe...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:41 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: fixing/checking corrupted metadata?



You might want to try, from sqlci: initialize trafodion, drop; initialize
trafodion;







*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the second attempt's hang might be a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


Re: MRv1 vs. MRv2

2016-02-02 Thread Amanda Moran
I have done many installs without Yarn or MapReduce installed at all.
Trafodion runs fine :)

On Tue, Feb 2, 2016 at 2:39 PM, Hans Zeller  wrote:

> No, it is not required in the build environment.
>
> Hans
>
> On Tue, Feb 2, 2016 at 1:54 PM, Gunnar Tapper 
> wrote:
>
>> Does this mean that MRv1 is now required in the build environment?
>>
>> On Tue, Feb 2, 2016 at 2:12 PM, Hans Zeller 
>> wrote:
>>
>>> Hi,
>>>
>>> That decision would be made by Hive, not Trafodion. For people who use
>>> install_local_hadoop, we recently changed that setup to use local
>>> MapReduce, not YARN, see
>>> https://issues.apache.org/jira/browse/TRAFODION-1781.
>>>
>>> Hans
>>>
>>> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
>>> wrote:
>>>
 Hi Suresh:

 Thanks for the information.

 Given from what you write, it seems that YARN with MRv2 is required for
 full functionality.

 MRv1 is a separate install in current distributions, which is why I am
 asking about it. How does Trafodion decide to run the MapReduce job as MRv1
 vs. MRv2 if both are installed?

 Thanks,

 Gunnar



 On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
 suresh.subbia...@gmail.com> wrote:

> Hi,
>
> I don't think Trafodion requires YARN for most activity.
>
> For Hive table access, Trafodion uses the Hive metadata access Java API
> and libhdfs to actually scan the data file. Therefore YARN is not needed
> for Hive access.
> YARN is not needed for native HBase or Trafodion table access either.
> YARN is needed for backup/restore, since the HBase ExportSnapshot class
> Trafodion calls uses MapReduce to copy large snapshot files to/from the
> backup location.
> YARN is also needed for developer regressions, as some vanilla Hive
> commands are executed during the regression run.
> For the last two items, I think both MRv1 and MRv2 are supported.
>
> Thanks
> Suresh
>
>
> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper  > wrote:
>
>> Hi,
>>
>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


 --
 Thanks,

 Gunnar
 *If you think you can you can, if you think you can't you're right.*

>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


-- 
Thanks,

Amanda Moran


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Sean Broeder
You might want to try, from sqlci: initialize trafodion, drop; initialize
trafodion;
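
Spelled out as sqlci input, that suggestion is the following sequence. As the
warnings elsewhere in this thread note, it drops the entire Trafodion
database, so this is only a sketch of the last-resort path:

  initialize trafodion, drop;   -- removes all Trafodion objects and metadata
  initialize trafodion;         -- re-creates empty metadata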







*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the second attempt's hang might be a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


RE: fixing/checking corrupted metadata?

2016-02-02 Thread Dave Birdsall
Hi Eric,



There might be hung transactions. Run “dtmci” and then “status trans” to
see. If so, you can get rid of them by doing sqstop + sqstart, though you
might need to do ckillall with sqstop because sqstop sometimes hangs on these.



There likely is messed up metadata. To clean that up, you can do “cleanup
table customer”.



Actually, you can do this without cleaning up the hung transactions. Those
can just stay there.



Dave
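
Put together, a minimal sketch of the recovery sequence described above (the
table name matches Eric's example; adjust it to your own schema):

  -- from a shell, check for hung transactions:
  dtmci
  -- then, at the dtmci prompt:
  status trans

  -- from sqlci, clean up the half-created table's leftover metadata:
  cleanup table customer;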



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 2:36 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* fixing/checking corrupted metadata?



I have been playing on my dev environment with this DDL:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the second attempt's hang might be a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


Re: MRv1 vs. MRv2

2016-02-02 Thread Hans Zeller
No, it is not required in the build environment.

Hans

On Tue, Feb 2, 2016 at 1:54 PM, Gunnar Tapper 
wrote:

> Does this mean that MRv1 is now required in the build environment?
>
> On Tue, Feb 2, 2016 at 2:12 PM, Hans Zeller  wrote:
>
>> Hi,
>>
>> That decision would be made by Hive, not Trafodion. For people who use
>> install_local_hadoop, we recently changed that setup to use local
>> MapReduce, not YARN, see
>> https://issues.apache.org/jira/browse/TRAFODION-1781.
>>
>> Hans
>>
>> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
>> wrote:
>>
>>> Hi Suresh:
>>>
>>> Thanks for the information.
>>>
>>> Given from what you write, it seems that YARN with MRv2 is required for
>>> full functionality.
>>>
>>> MRv1 is a separate install in current distributions, which is why I am
>>> asking about it. How does Trafodion decide to run the MapReduce job as MRv1
>>> vs. MRv2 if both are installed?
>>>
>>> Thanks,
>>>
>>> Gunnar
>>>
>>>
>>>
>>> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
>>> suresh.subbia...@gmail.com> wrote:
>>>
 Hi,

 I don't think Trafodion requires YARN for most activity.

 For Hive table access, Trafodion uses the Hive metadata access Java API
 and libhdfs to actually scan the data file. Therefore YARN is not needed
 for Hive access.
 YARN is not needed for native HBase or Trafodion table access either.
 YARN is needed for backup/restore, since the HBase ExportSnapshot class
 Trafodion calls uses MapReduce to copy large snapshot files to/from the
 backup location.
 YARN is also needed for developer regressions, as some vanilla Hive
 commands are executed during the regression run.
 For the last two items, I think both MRv1 and MRv2 are supported.

 Thanks
 Suresh


 On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
 wrote:

> Hi,
>
> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


>>>
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>>> *If you think you can you can, if you think you can't you're right.*
>>>
>>
>>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


fixing/checking corrupted metadata?

2016-02-02 Thread Eric Owhadi
I have been playing on my dev environment with this DDL:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF',
    COMPRESSION = 'SNAPPY'
  );



After a long time and supposedly 35 retries, it complained about the lack
of SNAPPY compression support in local_hadoop.



That’s fine, so I decided to retry with:

create table Customer
(
  c_customer_sk           int not null,
  c_customer_id           char(16) CHARACTER SET UTF8 not null,
  c_current_cdemo_sk      int,
  c_current_hdemo_sk      int,
  c_current_addr_sk       int,
  c_first_shipto_date_sk  int,
  c_first_sales_date_sk   int,
  c_salutation            char(10) CHARACTER SET UTF8,
  c_first_name            char(20) CHARACTER SET UTF8,
  c_last_name             char(30) CHARACTER SET UTF8,
  c_preferred_cust_flag   char(1),
  c_birth_day             integer,
  c_birth_month           integer,
  c_birth_year            integer,
  c_birth_country         varchar(20) CHARACTER SET UTF8,
  c_login                 char(13) CHARACTER SET UTF8,
  c_email_address         char(50) CHARACTER SET UTF8,
  c_last_review_date_sk   int,
  primary key (c_customer_sk)
) SALT USING 2 PARTITIONS
  HBASE_OPTIONS
  (
    DATA_BLOCK_ENCODING = 'FAST_DIFF'
    -- not available in local_hadoop   COMPRESSION = 'SNAPPY'
  );



And this time it took forever and never completed (I waited 20 minutes, then
killed it).



I am assuming that the second attempt's hang might be a consequence of the
first failure, which must have left things half done.



I know that I can do a full uninstall/re-install of local Hadoop, but I was
wondering if there is a metadata cleanup utility that I could try before
applying the bazooka?



Thanks in advance for the help,
Eric


Re: MRv1 vs. MRv2

2016-02-02 Thread Gunnar Tapper
Does this mean that MRv1 is now required in the build environment?

On Tue, Feb 2, 2016 at 2:12 PM, Hans Zeller  wrote:

> Hi,
>
> That decision would be made by Hive, not Trafodion. For people who use
> install_local_hadoop, we recently changed that setup to use local
> MapReduce, not YARN, see
> https://issues.apache.org/jira/browse/TRAFODION-1781.
>
> Hans
>
> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
> wrote:
>
>> Hi Suresh:
>>
>> Thanks for the information.
>>
>> Given from what you write, it seems that YARN with MRv2 is required for
>> full functionality.
>>
>> MRv1 is a separate install in current distributions, which is why I am
>> asking about it. How does Trafodion decide to run the MapReduce job as MRv1
>> vs. MRv2 if both are installed?
>>
>> Thanks,
>>
>> Gunnar
>>
>>
>>
>> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
>> suresh.subbia...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I don't think Trafodion requires YARN for most activity.
>>>
>>> For Hive table access, Trafodion uses the Hive metadata access Java API
>>> and libhdfs to actually scan the data file. Therefore YARN is not needed
>>> for Hive access.
>>> YARN is not needed for native HBase or Trafodion table access either.
>>> YARN is needed for backup/restore, since the HBase ExportSnapshot class
>>> Trafodion calls uses MapReduce to copy large snapshot files to/from the
>>> backup location.
>>> YARN is also needed for developer regressions, as some vanilla Hive
>>> commands are executed during the regression run.
>>> For the last two items, I think both MRv1 and MRv2 are supported.
>>>
>>> Thanks
>>> Suresh
>>>
>>>
>>> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
>>> wrote:
>>>
 Hi,

 Does Trafodion require YARN with MRv2 or is MRv1 supported, too?

 --
 Thanks,

 Gunnar
 *If you think you can you can, if you think you can't you're right.*

>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


Re: MRv1 vs. MRv2

2016-02-02 Thread Gunnar Tapper
Thanks.

On Tue, Feb 2, 2016 at 2:36 PM, Hans Zeller  wrote:

> No, unless a user explicitly puts it into a UDF, none of the built-in UDFs
> and SPJs use MR.
>
> Hans
>
> On Tue, Feb 2, 2016 at 1:21 PM, Gunnar Tapper 
> wrote:
>
>> Hi,
>>
>> I think it's a configuration choice. For example:
>> http://www.cloudera.com/documentation/manager/5-0-x/Cloudera-Manager-Managing-Clusters/cm5mc_mapreduce_service.html
>>
>> Are there UDFs, SPJs, and similar functions that use MapReduce, too?
>>
>> Thanks,
>>
>> Gunnar
>>
>> On Tue, Feb 2, 2016 at 2:19 PM, Qifan Chen  wrote:
>>
>>>
>>> Hi Hans,
>>>
>>> I think Hive uses MapReduce to sort the data during table population,
>>> after disabling YARN. This is observed on a workstation.
>>>
>>> Thanks -Qifan
>>>
>>> On Tue, Feb 2, 2016 at 3:12 PM, Hans Zeller 
>>> wrote:
>>>
 Hi,

 That decision would be made by Hive, not Trafodion. For people who use
 install_local_hadoop, we recently changed that setup to use local
 MapReduce, not YARN, see
 https://issues.apache.org/jira/browse/TRAFODION-1781.

 Hans

 On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper >>> > wrote:

> Hi Suresh:
>
> Thanks for the information.
>
> Given from what you write, it seems that YARN with MRv2 is required
> for full functionality.
>
> MRv1 is a separate install in current distributions, which is why I am
> asking about it. How does Trafodion decide to run the MapReduce job as 
> MRv1
> vs. MRv2 if both are installed?
>
> Thanks,
>
> Gunnar
>
>
>
> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
> suresh.subbia...@gmail.com> wrote:
>
>> Hi,
>>
>> I don't think Trafodion requires YARN for most activity.
>>
>> For Hive table access, Trafodion uses the Hive metadata access Java API
>> and libhdfs to actually scan the data file. Therefore YARN is not needed
>> for Hive access.
>> YARN is not needed for native HBase or Trafodion table access either.
>> YARN is needed for backup/restore, since the HBase ExportSnapshot class
>> Trafodion calls uses MapReduce to copy large snapshot files to/from the
>> backup location.
>> YARN is also needed for developer regressions, as some vanilla Hive
>> commands are executed during the regression run.
>> For the last two items, I think both MRv1 and MRv2 are supported.
>>
>> Thanks
>> Suresh
>>
>>
>> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper <
>> tapper.gun...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>>> *If you think you can you can, if you think you can't you're right.*
>>>
>>
>>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


>>>
>>>
>>> --
>>> Regards, --Qifan
>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


Re: MRv1 vs. MRv2

2016-02-02 Thread Hans Zeller
No, unless a user explicitly puts it into a UDF, none of the built-in UDFs
and SPJs use MR.

Hans

On Tue, Feb 2, 2016 at 1:21 PM, Gunnar Tapper 
wrote:

> Hi,
>
> I think it's a configuration choice. For example:
> http://www.cloudera.com/documentation/manager/5-0-x/Cloudera-Manager-Managing-Clusters/cm5mc_mapreduce_service.html
>
> Are there UDFs, SPJs, and similar functions that use MapReduce, too?
>
> Thanks,
>
> Gunnar
>
> On Tue, Feb 2, 2016 at 2:19 PM, Qifan Chen  wrote:
>
>>
>> Hi Hans,
>>
>> I think Hive uses MapReduce to sort the data during table population,
>> after disabling YARN. This is observed on a workstation.
>>
>> Thanks -Qifan
>>
>> On Tue, Feb 2, 2016 at 3:12 PM, Hans Zeller 
>> wrote:
>>
>>> Hi,
>>>
>>> That decision would be made by Hive, not Trafodion. For people who use
>>> install_local_hadoop, we recently changed that setup to use local
>>> MapReduce, not YARN, see
>>> https://issues.apache.org/jira/browse/TRAFODION-1781.
>>>
>>> Hans
>>>
>>> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
>>> wrote:
>>>
 Hi Suresh:

 Thanks for the information.

 Given from what you write, it seems that YARN with MRv2 is required for
 full functionality.

 MRv1 is a separate install in current distributions, which is why I am
 asking about it. How does Trafodion decide to run the MapReduce job as MRv1
 vs. MRv2 if both are installed?

 Thanks,

 Gunnar



 On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
 suresh.subbia...@gmail.com> wrote:

> Hi,
>
> I don't think Trafodion requires YARN for most activity.
>
> For Hive table access, Trafodion uses the Hive metadata access Java API
> and libhdfs to actually scan the data file. Therefore YARN is not needed
> for Hive access.
> YARN is not needed for native HBase or Trafodion table access either.
> YARN is needed for backup/restore, since the HBase ExportSnapshot class
> Trafodion calls uses MapReduce to copy large snapshot files to/from the
> backup location.
> YARN is also needed for developer regressions, as some vanilla Hive
> commands are executed during the regression run.
> For the last two items, I think both MRv1 and MRv2 are supported.
>
> Thanks
> Suresh
>
>
> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper  > wrote:
>
>> Hi,
>>
>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


 --
 Thanks,

 Gunnar
 *If you think you can you can, if you think you can't you're right.*

>>>
>>>
>>
>>
>> --
>> Regards, --Qifan
>>
>>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


Re: MRv1 vs. MRv2

2016-02-02 Thread Gunnar Tapper
Hi,

I think it's a configuration choice. For example:
http://www.cloudera.com/documentation/manager/5-0-x/Cloudera-Manager-Managing-Clusters/cm5mc_mapreduce_service.html

Are there UDFs, SPJs, and similar functions that use MapReduce, too?

Thanks,

Gunnar

On Tue, Feb 2, 2016 at 2:19 PM, Qifan Chen  wrote:

>
> Hi Hans,
>
> I think Hive uses MapReduce to sort the data during table population,
> after disabling YARN. This is observed on a workstation.
>
> Thanks -Qifan
>
> On Tue, Feb 2, 2016 at 3:12 PM, Hans Zeller  wrote:
>
>> Hi,
>>
>> That decision would be made by Hive, not Trafodion. For people who use
>> install_local_hadoop, we recently changed that setup to use local
>> MapReduce, not YARN, see
>> https://issues.apache.org/jira/browse/TRAFODION-1781.
>>
>> Hans
>>
>> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
>> wrote:
>>
>>> Hi Suresh:
>>>
>>> Thanks for the information.
>>>
>>> Given from what you write, it seems that YARN with MRv2 is required for
>>> full functionality.
>>>
>>> MRv1 is a separate install in current distributions, which is why I am
>>> asking about it. How does Trafodion decide to run the MapReduce job as MRv1
>>> vs. MRv2 if both are installed?
>>>
>>> Thanks,
>>>
>>> Gunnar
>>>
>>>
>>>
>>> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
>>> suresh.subbia...@gmail.com> wrote:
>>>
 Hi,

 I don't think Trafodion requires YARN for most activity.

 For Hive table access, Trafodion uses the Hive metadata access Java API
 and libhdfs to actually scan the data file. Therefore YARN is not needed
 for Hive access.
 YARN is not needed for native HBase or Trafodion table access either.
 YARN is needed for backup/restore, since the HBase ExportSnapshot class
 Trafodion calls uses MapReduce to copy large snapshot files to/from the
 backup location.
 YARN is also needed for developer regressions, as some vanilla Hive
 commands are executed during the regression run.
 For the last two items, I think both MRv1 and MRv2 are supported.

 Thanks
 Suresh


 On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
 wrote:

> Hi,
>
> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


>>>
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>>> *If you think you can you can, if you think you can't you're right.*
>>>
>>
>>
>
>
> --
> Regards, --Qifan
>
>


-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


Re: MRv1 vs. MRv2

2016-02-02 Thread Qifan Chen
Hi Hans,

I think Hive uses MapReduce to sort the data during table population, even
after disabling YARN. This was observed on a workstation.

Thanks -Qifan

On Tue, Feb 2, 2016 at 3:12 PM, Hans Zeller  wrote:

> Hi,
>
> That decision would be made by Hive, not Trafodion. For people who use
> install_local_hadoop, we recently changed that setup to use local
> MapReduce, not YARN, see
> https://issues.apache.org/jira/browse/TRAFODION-1781.
>
> Hans
>
> On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
> wrote:
>
>> Hi Suresh:
>>
>> Thanks for the information.
>>
>> Given from what you write, it seems that YARN with MRv2 is required for
>> full functionality.
>>
>> MRv1 is a separate install in current distributions, which is why I am
>> asking about it. How does Trafodion decide to run the MapReduce job as MRv1
>> vs. MRv2 if both are installed?
>>
>> Thanks,
>>
>> Gunnar
>>
>>
>>
>> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah <
>> suresh.subbia...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I don't think Trafodion requires YARN for most activity.
>>>
>>> For Hive table access, Trafodion uses the Hive metadata access Java API
>>> and libhdfs to actually scan the data file. Therefore YARN is not needed
>>> for Hive access.
>>> YARN is not needed for native HBase or Trafodion table access either.
>>> YARN is needed for backup/restore, since the HBase ExportSnapshot class
>>> Trafodion calls uses MapReduce to copy large snapshot files to/from the
>>> backup location.
>>> YARN is also needed for developer regressions, as some vanilla Hive
>>> commands are executed during the regression run.
>>> For the last two items, I think both MRv1 and MRv2 are supported.
>>>
>>> Thanks
>>> Suresh
>>>
>>>
>>> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
>>> wrote:
>>>
 Hi,

 Does Trafodion require YARN with MRv2 or is MRv1 supported, too?

 --
 Thanks,

 Gunnar
 *If you think you can you can, if you think you can't you're right.*

>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


-- 
Regards, --Qifan


Re: MRv1 vs. MRv2

2016-02-02 Thread Hans Zeller
Hi,

That decision would be made by Hive, not Trafodion. For people who use
install_local_hadoop, we recently changed that setup to use local
MapReduce, not YARN, see
https://issues.apache.org/jira/browse/TRAFODION-1781.

Hans

On Tue, Feb 2, 2016 at 12:58 PM, Gunnar Tapper 
wrote:

> Hi Suresh:
>
> Thanks for the information.
>
> Given from what you write, it seems that YARN with MRv2 is required for
> full functionality.
>
> MRv1 is a separate install in current distributions, which is why I am
> asking about it. How does Trafodion decide to run the MapReduce job as MRv1
> vs. MRv2 if both are installed?
>
> Thanks,
>
> Gunnar
>
>
>
> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah  > wrote:
>
>> Hi,
>>
>> I don't think Trafodion requires YARN for most activity.
>>
>> For Hive table access, Trafodion uses the Hive metadata access Java API
>> and libhdfs to actually scan the data file. Therefore YARN is not needed
>> for Hive access.
>> YARN is not needed for native HBase or Trafodion table access either.
>> YARN is needed for backup/restore, since the HBase ExportSnapshot class
>> Trafodion calls uses MapReduce to copy large snapshot files to/from the
>> backup location.
>> YARN is also needed for developer regressions, as some vanilla Hive
>> commands are executed during the regression run.
>> For the last two items, I think both MRv1 and MRv2 are supported.
>>
>> Thanks
>> Suresh
>>
>>
>> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
>> wrote:
>>
>>> Hi,
>>>
>>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>>> *If you think you can you can, if you think you can't you're right.*
>>>
>>
>>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


Re: Trafodion 1.3 and Hortonworks

2016-02-02 Thread Amanda Moran
HDP 2.2

On Tue, Feb 2, 2016 at 1:06 PM, Gunnar Tapper 
wrote:

> Hi Amanda,
>
> Thanks for the CDH clarification.
>
> What about HDP?
>
> Thanks,
>
> Gunnar
>
> On Tue, Feb 2, 2016 at 2:05 PM, Amanda Moran 
> wrote:
>
>> Trafodion supports CDH 5.2 and 5.3, both on HBase 0.98.
>>
>> On Tue, Feb 2, 2016 at 1:04 PM, Gunnar Tapper 
>> wrote:
>>
>>> Hi,
>>>
>>> It's my understanding that Trafodion supports both CDH 5.4.4 and CDH
>>> 5.4.5 and, therefore, HBase 1.0.
>>>
>>> What about HDP 2.2? It uses HBase 0.98.4.
>>>
>>> *working on the 1.3 Provisioning Guide*
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>>> *If you think you can you can, if you think you can't you're right.*
>>>
>>
>>
>>
>> --
>> Thanks,
>>
>> Amanda Moran
>>
>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>



-- 
Thanks,

Amanda Moran


Re: Trafodion 1.3 and Hortonworks

2016-02-02 Thread Gunnar Tapper
Hi Amanda,

Thanks for the CDH clarification.

What about HDP?

Thanks,

Gunnar

On Tue, Feb 2, 2016 at 2:05 PM, Amanda Moran  wrote:

> Trafodion supports CDH 5.2 and 5.3, both on HBase 0.98.
>
> On Tue, Feb 2, 2016 at 1:04 PM, Gunnar Tapper 
> wrote:
>
>> Hi,
>>
>> It's my understanding that Trafodion supports both CDH 5.4.4 and CDH
>> 5.4.5 and, therefore, HBase 1.0.
>>
>> What about HDP 2.2? It uses HBase 0.98.4.
>>
>> *working on the 1.3 Provisioning Guide*
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>
>
> --
> Thanks,
>
> Amanda Moran
>



-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


Re: Trafodion 1.3 and Hortonworks

2016-02-02 Thread Amanda Moran
Trafodion supports CDH 5.2 and 5.3, both on HBase 0.98.

On Tue, Feb 2, 2016 at 1:04 PM, Gunnar Tapper 
wrote:

> Hi,
>
> It's my understanding that Trafodion supports both CDH 5.4.4 and CDH 5.4.5
> and, therefore, HBase 1.0.
>
> What about HDP 2.2? It uses HBase 0.98.4.
>
> *working on the 1.3 Provisioning Guide*
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>



-- 
Thanks,

Amanda Moran


Trafodion 1.3 and Hortonworks

2016-02-02 Thread Gunnar Tapper
Hi,

It's my understanding that Trafodion supports both CDH 5.4.4 and CDH 5.4.5
and, therefore, HBase 1.0.

What about HDP 2.2? It uses HBase 0.98.4.

*working on the 1.3 Provisioning Guide*

-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


Re: MRv1 vs. MRv2

2016-02-02 Thread Qifan Chen
Hi Gunnar,

Hive uses MapReduce to load Hive tables in non-external formats such as
ORC. So even though it is not required directly, YARN is very relevant
for Hive to get the data into Hive tables that Trafodion can access.

Not sure if YARN (or some other mechanism) is used in any way for system
resource management for Trafodion. That is something we need to consider in
the near future.

Regards, --Qifan
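
As a concrete illustration of the Hive-side load Qifan describes (the table
names here are hypothetical), the INSERT ... SELECT into the ORC table is
what kicks off the MapReduce job:

  CREATE TABLE sales_orc (id INT, amount DOUBLE) STORED AS ORC;
  INSERT INTO TABLE sales_orc SELECT id, amount FROM sales_text;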

On Tue, Feb 2, 2016 at 2:58 PM, Gunnar Tapper 
wrote:

> Hi Suresh:
>
> Thanks for the information.
>
> Given from what you write, it seems that YARN with MRv2 is required for
> full functionality.
>
> MRv1 is a separate install in current distributions, which is why I am
> asking about it. How does Trafodion decide to run the MapReduce job as MRv1
> vs. MRv2 if both are installed?
>
> Thanks,
>
> Gunnar
>
>
>
> On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah  > wrote:
>
>> Hi,
>>
>> I don't think Trafodion requires YARN for most activity.
>>
>> For Hive table access, Trafodion uses the Hive metadata access Java API
>> and libhdfs to actually scan the data file. Therefore YARN is not needed
>> for Hive access.
>> YARN is not needed for native HBase or Trafodion table access either.
>> YARN is needed for backup/restore, since the HBase ExportSnapshot class
>> Trafodion calls uses MapReduce to copy large snapshot files to/from the
>> backup location.
>> YARN is also needed for developer regressions, as some vanilla Hive
>> commands are executed during the regression run.
>> For the last two items, I think both MRv1 and MRv2 are supported.
>>
>> Thanks
>> Suresh
>>
>>
>> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
>> wrote:
>>
>>> Hi,
>>>
>>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>>
>>> --
>>> Thanks,
>>>
>>> Gunnar
>>> *If you think you can you can, if you think you can't you're right.*
>>>
>>
>>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>



-- 
Regards, --Qifan


RE: How to change default CQD?

2016-02-02 Thread Roberta Marton
I do have an SPJ that supports displaying and changing cqd values in the
“_MD_”.defaults table.



Roberta



*From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
*Sent:* Tuesday, February 2, 2016 12:45 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: How to change default CQD?



Yes, the same copy must be maintained on all nodes. This is similar to the
ms.env file that is in the same directory. I too feel that the _MD_.default
method is easier to manage. I mention it here only for the sake of
completeness.



With both _MD_.default and the file method, if access is through a
connectivity service, then a new instance of mxosrvr may be needed (as
these attributes are read at process initialization time).



Thanks

Suresh





On Tue, Feb 2, 2016 at 2:34 PM, Eric Owhadi  wrote:

Interesting, but how do we manage this on a cluster? Do we have to
maintain the same copy on each node? (just curious, I will use the
_*MD*_.default
method)



Eric



*From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
*Sent:* Tuesday, February 2, 2016 2:29 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: How to change default CQD?



Hi,



One can also create (or, if it already exists, add to) a text file
called SQSystemDefaults.conf in the directory $MY_SQROOT/etc and put
each attribute name and the value it should be set to in the file.

I think each attribute should be on a separate line, and the attribute value
should NOT be surrounded by single quotes as we would do from sqlci/trafci.

This file is also created by install_local_hadoop. If this script was used,
you may see an empty file with a few comments in your instance.



Thanks

Suresh





On Tue, Feb 2, 2016 at 12:34 PM, Eric Owhadi  wrote:

Thanks Anoop,

So I will use the following insert statement:

insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
it is inserted');



Eric







*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 12:29 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: How to change default CQD?



You can change it in code by modifying the defaults array in
sqlcomp/nadefaults.cpp

or you can insert the default value in system defaults table
trafodion.“_MD_”.defaults.

If you do the insert, you will need to restart dcs servers to pick up the
new value.



anoop



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 10:26 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* How to change default CQD?



Hello Trafodioneers,

I have seen in the code that there is a way to globally change the default
Control Query Default values. But I can’t find it in the docs.

Can someone help me on this?

Regards,
Eric


Re: MRv1 vs. MRv2

2016-02-02 Thread Gunnar Tapper
Hi Suresh:

Thanks for the information.

Given what you write, it seems that YARN with MRv2 is required for
full functionality.

MRv1 is a separate install in current distributions, which is why I am
asking about it. How does Trafodion decide to run the MapReduce job as MRv1
vs. MRv2 if both are installed?

Thanks,

Gunnar



On Tue, Feb 2, 2016 at 1:50 PM, Suresh Subbiah 
wrote:

> Hi,
>
> I don't think Trafodion requires YARN for most activity.
>
> For Hive table access, Trafodion uses the Hive metadata access Java API
> and libhdfs to actually scan the data file. Therefore YARN is not needed
> for Hive access.
> YARN is not needed for native HBase or Trafodion table access either.
> YARN is needed for backup/restore, since the HBase ExportSnapshot class
> Trafodion calls uses MapReduce to copy large snapshot files to/from the
> backup location.
> YARN is also needed for developer regressions, as some vanilla Hive
> commands are executed during the regression run.
> For the last two items, I think both MRv1 and MRv2 are supported.
>
> Thanks
> Suresh
>
>
> On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
> wrote:
>
>> Hi,
>>
>> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>>
>> --
>> Thanks,
>>
>> Gunnar
>> *If you think you can you can, if you think you can't you're right.*
>>
>
>


-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


Re: How to change default CQD?

2016-02-02 Thread Suresh Subbiah
Yes, the same copy must be maintained on all nodes. This is similar to the
ms.env file that is in the same directory. I too feel that the _MD_.default
method is easier to manage. I mention it here only for the sake of
completeness.

With both _MD_.default and the file method, if access is through a
connectivity service, then a new instance of mxosrvr may be needed (as
these attributes are read at process initialization time).

Thanks
Suresh
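
So an end-to-end change through the metadata table, following Anoop's steps
quoted below, would be something like this (the CQD name and value are
placeholders):

  -- from sqlci:
  insert into "_MD_".defaults values ('MY_CQD', 'my_value', 'why it was set');

  -- then restart the DCS servers so that new mxosrvr instances
  -- re-read the defaults at process initialization.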


On Tue, Feb 2, 2016 at 2:34 PM, Eric Owhadi  wrote:

> Interesting, but how do we manage this on a cluster? Do we have to
> maintain the same copy on each node? (just curious, I will use the 
> _*MD*_.default
> method)
>
>
>
> Eric
>
>
>
> *From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
> *Sent:* Tuesday, February 2, 2016 2:29 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: How to change default CQD?
>
>
>
> Hi,
>
>
>
> One can also create (or, if it already exists, add to) a text file
> called SQSystemDefaults.conf in the directory $MY_SQROOT/etc and put
> each attribute name and the value it should be set to in the file.
>
> I think each attribute should be on a separate line, and the attribute value
> should NOT be surrounded by single quotes as we would do from sqlci/trafci.
>
> This file is also created by install_local_hadoop. If this script was
> used, you may see an empty file with a few comments in your instance.
>
>
>
> Thanks
>
> Suresh
>
>
>
>
>
> On Tue, Feb 2, 2016 at 12:34 PM, Eric Owhadi 
> wrote:
>
> Thanks Anoop,
>
> So I will use the following insert statement:
>
> insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
> it is inserted');
>
>
>
> Eric
>
>
>
>
>
>
>
> *From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 12:29 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: How to change default CQD?
>
>
>
> You can change it in code by modifying the defaults array in
> sqlcomp/nadefaults.cpp
>
> or you can insert the default value in system defaults table
> trafodion.“_MD_”.defaults.
>
> If you do the insert, you will need to restart dcs servers to pick up the
> new value.
>
>
>
> anoop
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 10:26 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* How to change default CQD?
>
>
>
> Hello Trafodioneers,
>
> I have seen in the code that there is a way to globally change the default
> Control Query Default values. But I can’t find it in the docs.
>
> Can someone help me on this?
>
> Regards,
> Eric
>
>
>
>
>


Re: MRv1 vs. MRv2

2016-02-02 Thread Suresh Subbiah
Hi,

I don't think Trafodion requires YARN for most activity.

For Hive table access, Trafodion uses the Hive metadata access Java API
and libhdfs to actually scan the data file. Therefore YARN is not needed
for Hive access.
YARN is not needed for native HBase or Trafodion table access either.
YARN is needed for backup/restore, since the HBase ExportSnapshot class
Trafodion calls uses MapReduce to copy large snapshot files to/from the
backup location.
YARN is also needed for developer regressions, as some vanilla Hive
commands are executed during the regression run.
For the last two items, I think both MRv1 and MRv2 are supported.

Thanks
Suresh
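
For reference, the ExportSnapshot copy mentioned above is the standard HBase
MapReduce job; a hypothetical invocation (the snapshot name, target URL, and
mapper count are made-up placeholders) might look like:

  hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -snapshot my_table_snapshot \
    -copy-to hdfs://backup-cluster:8020/hbase \
    -mappers 16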


On Tue, Feb 2, 2016 at 2:36 PM, Gunnar Tapper 
wrote:

> Hi,
>
> Does Trafodion require YARN with MRv2 or is MRv1 supported, too?
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>


MRv1 vs. MRv2

2016-02-02 Thread Gunnar Tapper
Hi,

Does Trafodion require YARN with MRv2 or is MRv1 supported, too?

-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


RE: How to change default CQD?

2016-02-02 Thread Eric Owhadi
Interesting, but how do we manage this on a cluster? Do we have to
maintain the same copy on each node? (just curious, I will use the
_*MD*_.default
method)



Eric



*From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
*Sent:* Tuesday, February 2, 2016 2:29 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: How to change default CQD?



Hi,



One can also create (or, if it already exists, add to) a text file
called SQSystemDefaults.conf in the directory $MY_SQROOT/etc and put
each attribute name and the value it should be set to in the file.

I think each attribute should be on a separate line, and the attribute value
should NOT be surrounded by single quotes as we would do from sqlci/trafci.

This file is also created by install_local_hadoop. If this script was used,
you may see an empty file with a few comments in your instance.



Thanks

Suresh





On Tue, Feb 2, 2016 at 12:34 PM, Eric Owhadi  wrote:

Thanks Anoop,

So I will use the following insert statement:

insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
it is inserted');



Eric







*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 12:29 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: How to change default CQD?



You can change it in code by modifying the defaults array in
sqlcomp/nadefaults.cpp

or you can insert the default value in system defaults table
trafodion.“_MD_”.defaults.

If you do the insert, you will need to restart dcs servers to pick up the
new value.



anoop



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 10:26 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* How to change default CQD?



Hello Trafodioneers,

I have seen in the code that there is a way to globally change the default
Control Query Default values. But I can’t find it in the docs.

Can someone help me on this?

Regards,
Eric


Re: How to change default CQD?

2016-02-02 Thread Suresh Subbiah
Hi,

One can also create (or, if it already exists, add to) a text file
called SQSystemDefaults.conf in the directory $MY_SQROOT/etc and put
each attribute name and the value it should be set to in the file.
I think each attribute should be on a separate line, and the attribute value
should NOT be surrounded by single quotes as we would do from sqlci/trafci.
This file is also created by install_local_hadoop. If this script was used,
you may see an empty file with a few comments in your instance.

Thanks
Suresh
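
Based on that description, a hypothetical SQSystemDefaults.conf might look
like this (the attribute names are only examples; note that there are no
quotes around the values):

  ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT ON
  HBASE_COMPRESSION_OPTION SNAPPY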


On Tue, Feb 2, 2016 at 12:34 PM, Eric Owhadi  wrote:

> Thanks Anoop,
>
> So I will use the following insert statement:
>
> insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
> it is inserted');
>
>
>
> Eric
>
>
>
>
>
>
>
> *From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 12:29 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* RE: How to change default CQD?
>
>
>
> You can change it in code by modifying the defaults array in
> sqlcomp/nadefaults.cpp
>
> or you can insert the default value in system defaults table
> trafodion.“_MD_”.defaults.
>
> If you do the insert, you will need to restart dcs servers to pick up the
> new value.
>
>
>
> anoop
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 10:26 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* How to change default CQD?
>
>
>
> Hello Trafodioneers,
>
> I have seen in the code that there is a way to globally change the default
> Control Query Default values. But I can’t find it in the docs.
>
> Can someone help me on this?
>
> Regards,
> Eric
>
>
>


Re: nullable primary key index column?

2016-02-02 Thread Suresh Subbiah
Hi All,

Sorry for not responding to the most recent email previously.

This cqd needs to be set only at the time of table creation (i.e. DDL
time). It is not necessary for DML.

Thanks
Suresh


On Tue, Feb 2, 2016 at 2:19 PM, Eric Owhadi  wrote:

> Oh, that looks better. I think this will work with what I am trying to do.
>
> Let me try it.
>
> Oh, and this CQD must be set only at time of table creation? Or should it
> be globally set using _*MD*_.default?
>
> Eric
>
>
>
> *From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
> *Sent:* Tuesday, February 2, 2016 2:14 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: nullable primary key index column?
>
>
>
> Hi,
>
>
>
> Hope I did not misunderstand the question.
>
>
>
> A table can have more than one nullable column in its key, as long
> as the cqd Anoop gave is set. All columns in the key can be nullable too.
>
> If the first column is nullable and there are other key columns that are
> either nullable or non-nullable, then the first column can hold a null value
> for more than one row, as long as subsequent key columns have other values.
>
> For example:
>
> >>cqd allow_nullable_unique_key_constraint 'on' ;
>
>
>
> --- SQL operation complete.
>
> >>create table t1 (a int, b int, primary key (a,b)) ;
>
>
>
> --- SQL operation complete.
>
> >>showddl t1 ;
>
>
>
> CREATE TABLE TRAFODION.JIRA.T1
>   (
>     A                                INT DEFAULT NULL SERIALIZED
>   , B                                INT DEFAULT NULL SERIALIZED
>   , PRIMARY KEY (A ASC, B ASC)
>   )
> ;
>
>
>
> --- SQL operation complete.
>
> >>insert into t1(a) values (1);
>
>
>
> --- 1 row(s) inserted.
>
> >>insert into t1(b) values (2) ;
>
>
>
> --- 1 row(s) inserted.
>
> >>insert into t1(a) values(3) ;
>
>
>
> --- 1 row(s) inserted.
>
> >>select * from t1 ;
>
>
>
> A            B
> -----------  -----------
>
>           1            ?
>           3            ?
>           ?            2
>
>
>
> --- 3 row(s) selected.
>
>
>
> If the table has only one key column and it is nullable, then at most
> one row can have null as its value for this column.
>
> There is an issue with inserting null values for all columns in the key, as
> described in JIRA TRAFODION-1801, which also outlines a fix suggested by
> Anoop.
>
>
>
> Thanks
>
> Suresh
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Tue, Feb 2, 2016 at 1:29 PM, Anoop Sharma 
> wrote:
>
>
>
> cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT ‘ON’;
>
>
>
> then create table with nullable pkey col.
>
>
>
> only one null value is allowed.
>
>
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 11:27 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* nullable primary key index column?
>
>
>
> Dear Trafodioneers,
>
> I am wondering if it is possible to use a composite primary key with the
> first column making up the primary key composite being nullable?
>
> If yes, is there any restriction, like only one row can be null for that
> nullable column?
>
> Thanks in advance for the help,
> Eric
>
>
>


RE: nullable primary key index column?

2016-02-02 Thread Eric Owhadi
Oh, that looks better. I think this will work with what I am trying to do.

Let me try it.

Oh, and this CQD must be set only at time of table creation? Or should it
be globally set using _*MD*_.default?

Eric



*From:* Suresh Subbiah [mailto:suresh.subbia...@gmail.com]
*Sent:* Tuesday, February 2, 2016 2:14 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* Re: nullable primary key index column?



Hi,



Hope I did not misunderstand the question.



A table can have more than one nullable column in its key, as long as
the cqd Anoop gave is set. All columns in the key can be nullable too.

If the first column is nullable and there are other key columns that are
either nullable or non-nullable, then the first column can hold a null value
for more than one row, as long as subsequent key columns have other values.

For example:

>>cqd allow_nullable_unique_key_constraint 'on' ;



--- SQL operation complete.

>>create table t1 (a int, b int, primary key (a,b)) ;



--- SQL operation complete.

>>showddl t1 ;



CREATE TABLE TRAFODION.JIRA.T1
  (
    A                                INT DEFAULT NULL SERIALIZED
  , B                                INT DEFAULT NULL SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
;



--- SQL operation complete.

>>insert into t1(a) values (1);



--- 1 row(s) inserted.

>>insert into t1(b) values (2) ;



--- 1 row(s) inserted.

>>insert into t1(a) values(3) ;



--- 1 row(s) inserted.

>>select * from t1 ;



A            B
-----------  -----------

          1            ?
          3            ?
          ?            2



--- 3 row(s) selected.



If the table has only one key column and it is nullable, then at most
one row can have null as its value for this column.



There is an issue with inserting null values for all columns in the key, as
described in JIRA TRAFODION-1801, which also outlines a fix suggested by Anoop.



Thanks

Suresh













On Tue, Feb 2, 2016 at 1:29 PM, Anoop Sharma  wrote:



cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT ‘ON’;



then create table with nullable pkey col.



only one null value is allowed.





*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:27 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* nullable primary key index column?



Dear Trafodioneers,

I am wondering if it is possible to use a composite primary key with the
first column making up the primary key composite being nullable?

If yes, is there any restriction, like only one row can be null for that
nullable column?

Thanks in advance for the help,
Eric


Re: Basic Is-it-working? Test

2016-02-02 Thread Carol Pearson
Yep, the issue was the update stats command - it needs to be "for", not "on".
I thought I copied the right version, but apparently not. That's why source
control is a handy thing.
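
(For reference, the corrected statement in the script below would presumably
read:

  update statistics for table t on every column;

per the UPDATE STATISTICS syntax.)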

Thanks Anoop!

-Carol P.

---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---

On Tue, Feb 2, 2016 at 10:59 AM, Carol Pearson 
wrote:

> Shoot, now I'm gonna have to go check whether I cut and pasted the right
> one... The one I used most recently didn't have any syntax errors in it
>
> -Carol P.
>
> ---
> Email:carol.pearson...@gmail.com
> Twitter:  @CarolP222
> ---
>
> On Tue, Feb 2, 2016 at 10:13 AM, Anoop Sharma 
> wrote:
>
>> hey
>>
>> did you run this script? Coz there is a syntax error in it.
>>
>> Now run it and find out which stmt has a syntax error :-)
>>
>> anoop
>>
>>
>>
>> *From:* Carol Pearson [mailto:carol.pearson...@gmail.com]
>> *Sent:* Tuesday, February 2, 2016 10:12 AM
>> *To:* u...@trafodion.apache.org
>> *Subject:* Basic Is-it-working? Test
>>
>>
>>
>> Hi Trafodion Fans,
>>
>>
>>
>> I wrote a little one-minute acid test script to make sure that my
>> Trafodion basic piece-parts are in working order before doing more complex
>> testing.  Whenever I do an installation or an sqstart after changing
>> configuration, I run this script just to make sure that nothing is horribly
>> broken.  If I get errors, I know there's no point in going further because
>> something basic went wrong.
>>
>>
>>
>> ---
>>
>> --
>>
>> --  Acid test script to make sure SQL has installed
>>
>> --
>>
>> --
>>
>>
>>
>> create schema test_sandbox_schema;
>>
>>
>>
>> set schema test_sandbox_schema;
>>
>>
>>
>> create table t (c1 int not null, c2 int not null, primary key (c1));
>>
>>
>>
>> insert into t values (1,1);
>>
>> insert into t values (2,3);
>>
>> insert into t values (3,2);
>>
>> begin work;
>>
>> insert into t values (4,5);
>>
>> insert into t values (5,2);
>>
>> commit work;
>>
>>
>>
>> insert into t values (7,3);
>>
>>
>>
>> select * from t order by c2;
>>
>>
>>
>> create index tix on t (c2);
>>
>>
>>
>> create view tview as select c1, c2 from t where c2 > 3;
>>
>>
>>
>> select * from tview where c2 < 3;
>>
>> select * from tview where c2 > 2;
>>
>>
>>
>> update statistics on t;
>>
>>
>>
>> explain select * from t order by c2;
>>
>> select * from t order by c2;
>>
>>
>>
>> drop view tview;
>>
>> drop table t;
>>
>> drop schema test_sandbox_schema;
>>
>>
>>
>>
>>
>>
>>
>> I put emphasis on sorting and indexes because of my long history with
>> those (old habits die hard). My goal is only basic success (no errors); I
>> don't mind repeating the same query multiple times, and I do expect the
>> same results each time. I don't want a huge complex script and automated
>> validation (those come next, depending on what I'm trying to do). Really,
>> my goal is to get a fast warm fuzzy feeling that it's worth it for me to
>> actually do real work.
>>
>>
>>
>> Anyone have suggestions on other things I might  check as part of a
>> simple, less than one-minute test?  Is this (incredibly basic) script worth
>> contributing to Trafodion?
>>
>>
>>
>> Thanks!
>>
>> -Carol P.
>>
>>
>> ---
>>
>> Email:carol.pearson...@gmail.com
>>
>> Twitter:  @CarolP222
>>
>> ---
>>
>
>


Re: nullable primary key index column?

2016-02-02 Thread Suresh Subbiah
Hi,

Hope I did not misunderstand the question.

A table can have more than one nullable column in its key, as long as
the cqd Anoop gave is set. All columns in the key can be nullable too.
If the first column is nullable and there are other key columns that are
either nullable or non-nullable, then the first column can hold a null value
for more than one row, as long as subsequent key columns have other values.
For example:
>>cqd allow_nullable_unique_key_constraint 'on' ;

--- SQL operation complete.
>>create table t1 (a int, b int, primary key (a,b)) ;

--- SQL operation complete.
>>showddl t1 ;

CREATE TABLE TRAFODION.JIRA.T1
  (
    A                                INT DEFAULT NULL SERIALIZED
  , B                                INT DEFAULT NULL SERIALIZED
  , PRIMARY KEY (A ASC, B ASC)
  )
;

--- SQL operation complete.
>>insert into t1(a) values (1);

--- 1 row(s) inserted.
>>insert into t1(b) values (2) ;

--- 1 row(s) inserted.
>>insert into t1(a) values(3) ;

--- 1 row(s) inserted.
>>select * from t1 ;

A            B
-----------  -----------

          1            ?
          3            ?
          ?            2

--- 3 row(s) selected.

If the table has only one key column and it is nullable, then at most
one row can have null as its value for this column.

There is an issue with inserting null values for all columns in the key, as
described in JIRA TRAFODION-1801, which also outlines a fix suggested by Anoop.

Thanks
Suresh
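
The single-key-column restriction could be sketched the same way (a
hypothetical table; per the rule above, the second null-key insert should
fail with a duplicate-key error):

  >>cqd allow_nullable_unique_key_constraint 'on' ;
  >>create table t2 (a int, primary key (a)) ;
  >>insert into t2(a) values (null) ;   -- first row with a null key: ok
  >>insert into t2(a) values (null) ;   -- second null key: rejected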






On Tue, Feb 2, 2016 at 1:29 PM, Anoop Sharma  wrote:

>
>
> cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT ‘ON’;
>
>
>
> then create table with nullable pkey col.
>
>
>
> only one null value is allowed.
>
>
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 11:27 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* nullable primary key index column?
>
>
>
> Dear Trafodioneers,
>
> I am wondering if it is possible to use a composite primary key with the
> first column making up the primary key composite being nullable?
>
> If yes, is there any restriction, like only one row can be null for that
> nullable column?
>
> Thanks in advance for the help,
> Eric
>


RE: nullable primary key index column?

2016-02-02 Thread Eric Owhadi
Hi Dave

It is for cases like TPC-DS, where we are dealing with snowflake schemas,
and where the date dimension is of particular interest for data
partitioning.

By default, TPC-DS fact tables have composite primary keys that do not
include the date dimension. And that is fine.

Except that Trafodion horizontal partitioning (STORE BY) requires that the
columns in the “store by” clause be part of the primary key. So, in order
to partition the data correctly by date, I have to make the xxx_date_sk
column part of the primary key in the fact tables.



But the fact data is not supposed to be “clean”, so you will find records
with null xxx_date_sk (about 4.5% in the test dataset).



So I will simply use 0 to represent NULL, alter the load accordingly, and
make sure we don’t have any “xxx_date_sk is null / is not null” checks in
the TPC-DS queries.
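
For illustration, here is roughly what I mean for one fact table. This is a
cut-down sketch, not the actual TPC-DS DDL: the column list is abbreviated,
and the exact STORE BY/PRIMARY KEY layout is just my reading of the
restriction above.

create table store_sales
  ( ss_sold_date_sk   int default 0 not null  -- 0 stands in for NULL dates
  , ss_item_sk        int not null
  , ss_ticket_number  int not null
  , ss_quantity       int
  , primary key (ss_sold_date_sk, ss_item_sk, ss_ticket_number)
  )
  store by (ss_sold_date_sk);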



Hope this clarifies?

Eric



*From:* Dave Birdsall [mailto:dave.birds...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 1:51 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: nullable primary key index column?



Hi Eric,



Just curious: What is your use case? Why do you need a nullable primary key
column?



Dave



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:43 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: nullable primary key index column?



Thank you Anoop for the prompt response. It won’t help for my use case
because I have more than one null. I will therefore assign a fake value to
represent NULLs in that column.

Eric



*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 1:30 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: nullable primary key index column?





cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT 'ON';



then create table with nullable pkey col.



only one null value is allowed.





*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:27 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* nullable primary key index column?



Dear Trafodioneers,

I am wondering if it is possible to use a composite primary key with the
first column making up the primary key composite being nullable?

If yes, is there any restriction, like only one row can be null for that
nullable column?

Thanks in advance for the help,
Eric


RE: nullable primary key index column?

2016-02-02 Thread Dave Birdsall
Hi Eric,



Just curious: What is your use case? Why do you need a nullable primary key
column?



Dave



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:43 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: nullable primary key index column?



Thank you Anoop for the prompt response. It won’t help for my use case
because I have more than one null. I will therefore assign a fake value to
represent NULLs in that column.

Eric



*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 1:30 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: nullable primary key index column?





cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT 'ON';



then create table with nullable pkey col.



only one null value is allowed.





*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:27 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* nullable primary key index column?



Dear Trafodioneers,

I am wondering if it is possible to use a composite primary key with the
first column making up the primary key composite being nullable?

If yes, is there any restriction, like only one row can be null for that
nullable column?

Thanks in advance for the help,
Eric


RE: nullable primary key index column?

2016-02-02 Thread Eric Owhadi
Thank you Anoop for the prompt response. It won’t help for my use case
because I have more than one null. I will therefore assign a fake value to
represent NULLs in that column.

Eric



*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 1:30 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: nullable primary key index column?





cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT 'ON';



then create table with nullable pkey col.



only one null value is allowed.





*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:27 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* nullable primary key index column?



Dear Trafodioneers,

I am wondering if it is possible to use a composite primary key with the
first column making up the primary key composite being nullable?

If yes, is there any restriction, like only one row can be null for that
nullable column?

Thanks in advance for the help,
Eric


RE: nullable primary key index column?

2016-02-02 Thread Anoop Sharma
cqd ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT 'ON';



then create table with nullable pkey col.



only one null value is allowed.





*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 11:27 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* nullable primary key index column?



Dear Trafodioneers,

I am wondering if it is possible to use a composite primary key with the
first column making up the primary key composite being nullable?

If yes, is there any restriction, like only one row can be null for that
nullable column?

Thanks in advance for the help,
Eric


nullable primary key index column?

2016-02-02 Thread Eric Owhadi
Dear Trafodioneers,

I am wondering if it is possible to use a composite primary key with the
first column making up the primary key composite being nullable?

If yes, is there any restriction, like only one row can be null for that
nullable column?

Thanks in advance for the help,
Eric


Re: Basic Is-it-working? Test

2016-02-02 Thread Carol Pearson
Shoot, now I'm gonna have to go check whether I cut and pasted the right
one... The one I used most recently didn't have any syntax errors in it.

-Carol P.

---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---

On Tue, Feb 2, 2016 at 10:13 AM, Anoop Sharma 
wrote:

> hey
>
> did you run this script? Coz there is a syntax error in it.
>
> Now run it and find out which stmt has a syntax error :)
>
> anoop
>
>
>
> *From:* Carol Pearson [mailto:carol.pearson...@gmail.com]
> *Sent:* Tuesday, February 2, 2016 10:12 AM
> *To:* u...@trafodion.apache.org
> *Subject:* Basic Is-it-working? Test
>
>
>
> Hi Trafodion Fans,
>
>
>
> I wrote a little one-minute acid test script to make sure that my
> Trafodion basic piece-parts are in working order before doing more complex
> testing.  Whenever I do an installation or an sqstart after changing
> configuration, I run this script just to make sure that nothing is horribly
> broken.  If I get errors, I know there's no point in going further because
> something basic went wrong.
>
>
>
> ---
>
> --
>
> --  Acid test script to make sure SQL has installed
>
> --
>
> --
>
>
>
> create schema test_sandbox_schema;
>
>
>
> set schema test_sandbox_schema;
>
>
>
> create table t (c1 int not null, c2 int not null, primary key (c1));
>
>
>
> insert into t values (1,1);
>
> insert into t values (2,3);
>
> insert into t values (3,2);
>
> begin work;
>
> insert into t values (4,5);
>
> insert into t values (5,2);
>
> commit work;
>
>
>
> insert into t values (7,3);
>
>
>
> select * from t order by c2;
>
>
>
> create index tix on t (c2);
>
>
>
> create view tview as select c1, c2 from t where c2 > 3;
>
>
>
> select * from tview where c2 < 3;
>
> select * from tview where c2 > 2;
>
>
>
> update statistics on t;
>
>
>
> explain select * from t order by c2;
>
> select * from t order by c2;
>
>
>
> drop view tview;
>
> drop table t;
>
> drop schema test_sandbox_schema;
>
>
>
>
>
>
>
> I put emphasis on sorting and indexes because of my long history with
> those (old habits die hard). My goal is only basic success (no errors);
> I don't mind repeating the same query multiple times, and I do expect
> the same results. I don't want a huge complex script and automated
> validation (those come next, depending on what I'm trying to do).  Really,
> my goal is to get a fast warm fuzzy feeling that it's worth it for me to
> actually do real work.
>
>
>
> Anyone have suggestions on other things I might check as part of a
> simple, less than one-minute test?  Is this (incredibly basic) script worth
> contributing to Trafodion?
>
>
>
> Thanks!
>
> -Carol P.
>
>
> ---
>
> Email:carol.pearson...@gmail.com
>
> Twitter:  @CarolP222
>
> ---
>


Re: How to Update the Apache Trafodion Website...

2016-02-02 Thread Amanda Moran
There will no longer be Trafodion installation instructions on the main
site?

http://trafodion.apache.org/install.html

This link is going away?

On Tue, Feb 2, 2016 at 10:33 AM, Gunnar Tapper 
wrote:

> Hi,
>
> Don't bother. It'll be replaced with the upcoming Trafodion Provisioning
> Guide.
>
> But, in general, all these how-to instructions are now documented in the
> Trafodion Contributor Guide:
> https://cwiki.apache.org/confluence/display/TRAFODION/Trafodion+Contributor+Guide
>
> Gunnar
>
> On Tue, Feb 2, 2016 at 11:30 AM, Amanda Moran 
> wrote:
>
>> Hi there All-
>>
>> I would like to update the installation page on the Apache Trafodion
>> Website. It's too complicated right now, and doesn't flow correctly.
>>
>> Can we please have the steps to do that in bullet points here? A link to the
>> page on the website would also be fine.
>>
>> Thanks.
>>
>> --
>> Thanks,
>>
>> Amanda Moran
>>
>
>
>
> --
> Thanks,
>
> Gunnar
> *If you think you can you can, if you think you can't you're right.*
>



-- 
Thanks,

Amanda Moran


Re: Basic Is-it-working? Test

2016-02-02 Thread Carol Pearson
Thanks Dave!

I often use the regression tests as well, but that's my second step, after
I've validated that Trafodion seems to be working.  My acid test is a
simple trafci session in which I obey this file.  That has a couple fewer
moving parts than the regression tests.

By the time I've run sqstart + trafci + the acid test, I've exercised a
lot of the Trafodion piece-parts in 3 very simple commands:

sqstart
trafci
trafci>> obey acidtest;

Plus, this will work if I've installed just the execution binaries - I'm
not sure everyone, now and forever, will have the whole regression suite.

-Carol P.



---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---

On Tue, Feb 2, 2016 at 10:20 AM, Dave Birdsall 
wrote:

> Hi Carol,
>
>
>
> As a developer, I often use the developer regression suite, fullstack2,
> for this purpose. It is just three tests, and runs fairly quickly. That
> said, I don’t know how easy it is for an end-user (who is not a developer)
> to use.
>
>
>
> Dave
>
>
>
> *From:* Carol Pearson [mailto:carol.pearson...@gmail.com]
> *Sent:* Tuesday, February 2, 2016 10:12 AM
> *To:* u...@trafodion.apache.org
> *Subject:* Basic Is-it-working? Test
>
>
>
> Hi Trafodion Fans,
>
>
>
> I wrote a little one-minute acid test script to make sure that my
> Trafodion basic piece-parts are in working order before doing more complex
> testing.  Whenever I do an installation or an sqstart after changing
> configuration, I run this script just to make sure that nothing is horribly
> broken.  If I get errors, I know there's no point in going further because
> something basic went wrong.
>
>
>
> ---
>
> --
>
> --  Acid test script to make sure SQL has installed
>
> --
>
> --
>
>
>
> create schema test_sandbox_schema;
>
>
>
> set schema test_sandbox_schema;
>
>
>
> create table t (c1 int not null, c2 int not null, primary key (c1));
>
>
>
> insert into t values (1,1);
>
> insert into t values (2,3);
>
> insert into t values (3,2);
>
> begin work;
>
> insert into t values (4,5);
>
> insert into t values (5,2);
>
> commit work;
>
>
>
> insert into t values (7,3);
>
>
>
> select * from t order by c2;
>
>
>
> create index tix on t (c2);
>
>
>
> create view tview as select c1, c2 from t where c2 > 3;
>
>
>
> select * from tview where c2 < 3;
>
> select * from tview where c2 > 2;
>
>
>
> update statistics on t;
>
>
>
> explain select * from t order by c2;
>
> select * from t order by c2;
>
>
>
> drop view tview;
>
> drop table t;
>
> drop schema test_sandbox_schema;
>
>
>
>
>
>
>
> I put emphasis on sorting and indexes because of my long history with
> those (old habits die hard). And my goal is only basic success (no errors)
> and I don't mind repeating the same query multiple times and I do expect
> the same results I don't want a huge complex script and automated
> validation (those come next, depending on what I'm trying to do).  Really,
> my goal is to get a fast warm fuzzy feeling that it's worth it for me to
> actually do real work.
>
>
>
> Anyone have suggestions on other things I might  check as part of a
> simple, less than one-minute test?  Is this (incredibly basic) script worth
> contributing to Trafodion?
>
>
>
> Thanks!
>
> -Carol P.
>
>
> ---
>
> Email:carol.pearson...@gmail.com
>
> Twitter:  @CarolP222
>
> ---
>


RE: Basic Is-it-working? Test

2016-02-02 Thread Anoop Sharma
hey

did you run this script? Coz there is a syntax error in it.

Now run it and find out which stmt has a syntax error :)

anoop



*From:* Carol Pearson [mailto:carol.pearson...@gmail.com]
*Sent:* Tuesday, February 2, 2016 10:12 AM
*To:* u...@trafodion.apache.org
*Subject:* Basic Is-it-working? Test



Hi Trafodion Fans,



I wrote a little one-minute acid test script to make sure that my Trafodion
basic piece-parts are in working order before doing more complex testing.
Whenever I do an installation or an sqstart after changing configuration, I
run this script just to make sure that nothing is horribly broken.  If I
get errors, I know there's no point in going further because something
basic went wrong.



---

--

--  Acid test script to make sure SQL has installed

--

--



create schema test_sandbox_schema;



set schema test_sandbox_schema;



create table t (c1 int not null, c2 int not null, primary key (c1));



insert into t values (1,1);

insert into t values (2,3);

insert into t values (3,2);

begin work;

insert into t values (4,5);

insert into t values (5,2);

commit work;



insert into t values (7,3);



select * from t order by c2;



create index tix on t (c2);



create view tview as select c1, c2 from t where c2 > 3;



select * from tview where c2 < 3;

select * from tview where c2 > 2;



update statistics on t;



explain select * from t order by c2;

select * from t order by c2;



drop view tview;

drop table t;

drop schema test_sandbox_schema;







I put emphasis on sorting and indexes because of my long history with those
(old habits die hard). My goal is only basic success (no errors); I don't
mind repeating the same query multiple times, and I do expect the same
results. I don't want a huge complex script and automated validation
(those come next, depending on what I'm trying to do).  Really, my goal is
to get a fast warm fuzzy feeling that it's worth it for me to actually do
real work.



Anyone have suggestions on other things I might check as part of a simple,
less than one-minute test?  Is this (incredibly basic) script worth
contributing to Trafodion?



Thanks!

-Carol P.


---

Email:carol.pearson...@gmail.com

Twitter:  @CarolP222

---


RE: How to change default CQD?

2016-02-02 Thread Eric Owhadi
Thanks Anoop,

So I will use the following insert statement:

insert into "_MD_".defaults values ('MY_CQD', 'value of CQD', 'comment why
it is inserted');
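
To double-check that the row landed, something like this should work (a
sketch; I am assuming the lookup column of "_MD_".defaults is named
ATTRIBUTE, so adjust to the actual column name):

select * from "_MD_".defaults where attribute = 'MY_CQD';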



Eric







*From:* Anoop Sharma [mailto:anoop.sha...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 12:29 PM
*To:* user@trafodion.incubator.apache.org
*Subject:* RE: How to change default CQD?



You can change it in code by modifying the defaults array in
sqlcomp/nadefaults.cpp

or you can insert the default value in system defaults table
> trafodion."_MD_".defaults.

If you do the insert, you will need to restart dcs servers to pick up the
new value.



anoop



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 10:26 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* How to change default CQD?



Hello Trafodioneers,

I have seen in the code that there is a way to globally change the default
Control Query Default values. But I can’t find it in the docs.

Can someone help me on this?

Regards,
Eric


Re: How to change default CQD?

2016-02-02 Thread Qifan Chen
Also, the insertion method affects only that instance of the database, and
the CQD change can be backed out later (by deleting that row).
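
For instance, if the row was inserted for a placeholder attribute 'MY_CQD'
(a sketch; this assumes the lookup column of "_MD_".defaults is named
ATTRIBUTE):

delete from "_MD_".defaults where attribute = 'MY_CQD';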

--Qifan

On Tue, Feb 2, 2016 at 12:28 PM, Anoop Sharma 
wrote:

> You can change it in code by modifying the defaults array in
> sqlcomp/nadefaults.cpp
>
> or you can insert the default value in system defaults table
> trafodion."_MD_".defaults.
>
> If you do the insert, you will need to restart dcs servers to pick up the
> new value.
>
>
>
> anoop
>
>
>
> *From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
> *Sent:* Tuesday, February 2, 2016 10:26 AM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* How to change default CQD?
>
>
>
> Hello Trafodioneers,
>
> I have seen in the code that there is a way to globally change the default
> Control Query Default values. But I can’t find it in the docs.
>
> Can someone help me on this?
>
> Regards,
> Eric
>
>
>



-- 
Regards, --Qifan


Re: How to Update the Apache Trafodion Website...

2016-02-02 Thread Gunnar Tapper
Hi,

Don't bother. It'll be replaced with the upcoming Trafodion Provisioning Guide.

But, in general, all these how-to instructions are now documented in the
Trafodion Contributor Guide:
https://cwiki.apache.org/confluence/display/TRAFODION/Trafodion+Contributor+Guide

Gunnar

On Tue, Feb 2, 2016 at 11:30 AM, Amanda Moran 
wrote:

> Hi there All-
>
> I would like to update the installation page on the Apache Trafodion
> Website. It's too complicated right now, and doesn't flow correctly.
>
> Can we please have the steps to do that in bullet points here? A link to the
> page on the website would also be fine.
>
> Thanks.
>
> --
> Thanks,
>
> Amanda Moran
>



-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


How to Update the Apache Trafodion Website...

2016-02-02 Thread Amanda Moran
Hi there All-

I would like to update the installation page on the Apache Trafodion
Website. It's too complicated right now, and doesn't flow correctly.

Can we please have the steps to do that in bullet points here? A link to the
page on the website would also be fine.

Thanks.

-- 
Thanks,

Amanda Moran


RE: How to change default CQD?

2016-02-02 Thread Anoop Sharma
You can change it in code by modifying the defaults array in
sqlcomp/nadefaults.cpp

or you can insert the default value in system defaults table
trafodion."_MD_".defaults.

If you do the insert, you will need to restart dcs servers to pick up the
new value.



anoop



*From:* Eric Owhadi [mailto:eric.owh...@esgyn.com]
*Sent:* Tuesday, February 2, 2016 10:26 AM
*To:* user@trafodion.incubator.apache.org
*Subject:* How to change default CQD?



Hello Trafodioneers,

I have seen in the code that there is a way to globally change the default
Control Query Default values. But I can’t find it in the docs.

Can someone help me on this?

Regards,
Eric


How to change default CQD?

2016-02-02 Thread Eric Owhadi
Hello Trafodioneers,

I have seen in the code that there is a way to globally change the default
Control Query Default values. But I can’t find it in the docs.

Can someone help me on this?

Regards,
Eric


RE: Basic Is-it-working? Test

2016-02-02 Thread Dave Birdsall
Hi Carol,



As a developer, I often use the developer regression suite, fullstack2, for
this purpose. It is just three tests, and runs fairly quickly. That said, I
don’t know how easy it is for an end-user (who is not a developer) to use.



Dave



*From:* Carol Pearson [mailto:carol.pearson...@gmail.com]
*Sent:* Tuesday, February 2, 2016 10:12 AM
*To:* u...@trafodion.apache.org
*Subject:* Basic Is-it-working? Test



Hi Trafodion Fans,



I wrote a little one-minute acid test script to make sure that my Trafodion
basic piece-parts are in working order before doing more complex testing.
Whenever I do an installation or an sqstart after changing configuration, I
run this script just to make sure that nothing is horribly broken.  If I
get errors, I know there's no point in going further because something
basic went wrong.



---

--

--  Acid test script to make sure SQL has installed

--

--



create schema test_sandbox_schema;



set schema test_sandbox_schema;



create table t (c1 int not null, c2 int not null, primary key (c1));



insert into t values (1,1);

insert into t values (2,3);

insert into t values (3,2);

begin work;

insert into t values (4,5);

insert into t values (5,2);

commit work;



insert into t values (7,3);



select * from t order by c2;



create index tix on t (c2);



create view tview as select c1, c2 from t where c2 > 3;



select * from tview where c2 < 3;

select * from tview where c2 > 2;



update statistics on t;



explain select * from t order by c2;

select * from t order by c2;



drop view tview;

drop table t;

drop schema test_sandbox_schema;







I put emphasis on sorting and indexes because of my long history with those
(old habits die hard). My goal is only basic success (no errors); I don't
mind repeating the same query multiple times, and I do expect the same
results. I don't want a huge complex script and automated validation
(those come next, depending on what I'm trying to do).  Really, my goal is
to get a fast warm fuzzy feeling that it's worth it for me to actually do
real work.



Anyone have suggestions on other things I might check as part of a simple,
less than one-minute test?  Is this (incredibly basic) script worth
contributing to Trafodion?



Thanks!

-Carol P.


---

Email:carol.pearson...@gmail.com

Twitter:  @CarolP222

---


Basic Is-it-working? Test

2016-02-02 Thread Carol Pearson
Hi Trafodion Fans,

I wrote a little one-minute acid test script to make sure that my Trafodion
basic piece-parts are in working order before doing more complex testing.
Whenever I do an installation or an sqstart after changing configuration, I
run this script just to make sure that nothing is horribly broken.  If I
get errors, I know there's no point in going further because something
basic went wrong.

---
--
--  Acid test script to make sure SQL has installed
--
--

create schema test_sandbox_schema;

set schema test_sandbox_schema;

create table t (c1 int not null, c2 int not null, primary key (c1));

insert into t values (1,1);
insert into t values (2,3);
insert into t values (3,2);
begin work;
insert into t values (4,5);
insert into t values (5,2);
commit work;

insert into t values (7,3);

select * from t order by c2;

create index tix on t (c2);

create view tview as select c1, c2 from t where c2 > 3;

select * from tview where c2 < 3;
select * from tview where c2 > 2;

update statistics on t;

explain select * from t order by c2;
select * from t order by c2;

drop view tview;
drop table t;
drop schema test_sandbox_schema;




I put emphasis on sorting and indexes because of my long history with those
(old habits die hard). My goal is only basic success (no errors); I don't
mind repeating the same query multiple times, and I do expect the same
results. I don't want a huge complex script and automated validation
(those come next, depending on what I'm trying to do).  Really, my goal is
to get a fast warm fuzzy feeling that it's worth it for me to actually do
real work.

Anyone have suggestions on other things I might check as part of a simple,
less than one-minute test?  Is this (incredibly basic) script worth
contributing to Trafodion?

Thanks!
-Carol P.

---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---