RE: Re: A failed Trafodion installation can lead to the hbase:meta table staying in the FAILED_OPEN state.

2016-03-09 Thread D. Markt
Hi,

 

  Yes, Ming, that’s an excellent point.  Though I didn’t mention it, my first 
attempt at recovery centered on trying to verify that the hbase:meta table was 
okay using the HBase OfflineMetaRepair utility.  Even after that tool said the 
table was fine, I still tried another restart, because the obvious symptom leads 
you to believe it is that table causing the problem.  It is very unusual to get 
into this situation, but when you do, you have a tendency to overreact because 
HBase was working fine before the restart and afterwards no regions can be 
accessed.  So it’s important to examine all of the log files looking for the 
root cause of the problem.  The Master log file gave one view, but the Region 
Server’s log file made it very obvious what had to be resolved.
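
  (For reference, the utility is typically invoked along these lines; this is a 
sketch rather than the exact command from my environment, and it assumes HBase 
has been stopped and that the hbase script is on the PATH:)

sudo -u hbase hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair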

 

Thanks,

Dennis

 

From: Amanda Moran [mailto:amanda.mo...@esgyn.com] 
Sent: Wednesday, March 09, 2016 12:17 PM
To: user@trafodion.incubator.apache.org
Subject: Re: Re: A failed Trafodion installation can lead to the hbase:meta 
table staying in the FAILED_OPEN state.

 

Hi there All-

 

I have made a jira for the installer, based on this issue. 

 

https://issues.apache.org/jira/browse/TRAFODION-1884

 

Thanks! 

 

On Wed, Mar 9, 2016 at 8:41 AM, Liu, Ming (Ming) <ming@esgyn.cn> wrote:

Thanks, Dennis, for sharing this. We saw this issue during an expansion of Trafodion 
from 4 nodes to 5 nodes. Since the newly added node is empty, the META region should 
not be there, so it does no harm. But the problem is similar: the newly added RS 
cannot work until we install Trafodion on that RS node.

There are two related JIRAs:  TRAFODION-1729 and TRAFODION-1730.
We are working on them to solve the issue. Since Trafodion currently modifies the 
HBase server's hbase-site.xml to add coprocessors, it affects *ALL* regions in 
HBase, including the META region. This is unnecessary and not good. The META region 
definitely does not need to load Trafodion coprocessors. It is a system region, 
Trafodion never needs to access it directly, and once its open fails, the whole 
HBase system cannot work.
So with those JIRAs fully addressed, we can remove the hbase-site.xml modification 
from the Trafodion installer, and there will be no need to restart HBase. And as 
part of a proper installation, Trafodion should be installed on all RS nodes, so 
the coprocessor jar files should be copied to all RS nodes. If Trafodion is not 
installed on all RS nodes, there may still be issues; I assume the installer still 
needs to consider this. A better approach would be to keep the coprocessor jar 
file on HDFS, but that is just a theory and needs further study.

Thanks,
Ming

-----Original Message-----
From: D. Markt [mailto:dmarkt7...@gmail.com]
Sent: March 9, 2016 15:23
To: user@trafodion.incubator.apache.org
Subject: A failed Trafodion installation can lead to the hbase:meta table 
staying in the FAILED_OPEN state.


Hi,

  I ran into this situation during a recent installation and thought it might 
be useful to share in case others hit a similar situation in the future.
This isn't the only way to recover from the situation, but it is one option and 
was proven to work as expected.

Regards,
Dennis

  During a recent Trafodion cluster install the daily build was broken in such 
a way that much of the installation proceeded, but the Trafodion files were not 
copied to each node.  This system was using CDH but I assume the following 
would happen for HDP as well.  After HBase was restarted as part of the 
installation I noticed the HBase icon was red.  I know this will likely not 
look the best in plain text, but the hbase:meta showed (in a red
box):

Region  State   RIT time (ms)
1588230740  hbase:meta,,1.1588230740 state=FAILED_OPEN, ts=Mon Mar 07
07:19:00 UTC 2016 (1289s ago),
server=perf-sles-2.novalocal,60020,14573351205071289706

  In the log file of the Region Server that was assigned the hbase:meta table, 
there was this output:

2016-03-07 16:45:27,243 INFO
org.apache.hadoop.hbase.regionserver.RSRpcServices: Open
hbase:meta,,1.1588230740
2016-03-07 16:45:27,249 ERROR
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of 
region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
at
org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5486)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5793)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5765)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5721)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5672)
at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(Op
enRegionHandler.java:356)
at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenR
egionHandler.java:126)
at
org.apache.hadoop.h

Re: Re: A failed Trafodion installation can lead to the hbase:meta table staying in the FAILED_OPEN state.

2016-03-09 Thread Amanda Moran
Hi there All-

I have made a jira for the installer, based on this issue.

https://issues.apache.org/jira/browse/TRAFODION-1884

Thanks!

On Wed, Mar 9, 2016 at 8:41 AM, Liu, Ming (Ming)  wrote:

> Thanks, Dennis, for sharing this. We saw this issue during an expansion of
> Trafodion from 4 nodes to 5 nodes. Since the newly added node is empty, the
> META region should not be there, so it does no harm. But the problem is
> similar: the newly added RS cannot work until we install Trafodion on that
> RS node.
>
> There are two related JIRAs:  TRAFODION-1729 and TRAFODION-1730.
> We are working on them to solve the issue. Since Trafodion currently
> modifies the HBase server's hbase-site.xml to add coprocessors, it affects
> *ALL* regions in HBase, including the META region. This is unnecessary and
> not good. The META region definitely does not need to load Trafodion
> coprocessors. It is a system region, Trafodion never needs to access it
> directly, and once its open fails, the whole HBase system cannot work.
> So with those JIRAs fully addressed, we can remove the hbase-site.xml
> modification from the Trafodion installer, and there will be no need to
> restart HBase. And as part of a proper installation, Trafodion should be
> installed on all RS nodes, so the coprocessor jar files should be copied to
> all RS nodes. If Trafodion is not installed on all RS nodes, there may
> still be issues; I assume the installer still needs to consider this. A
> better approach would be to keep the coprocessor jar file on HDFS, but that
> is just a theory and needs further study.
>
> Thanks,
> Ming
>
> -----Original Message-----
> From: D. Markt [mailto:dmarkt7...@gmail.com]
> Sent: March 9, 2016 15:23
> To: user@trafodion.incubator.apache.org
> Subject: A failed Trafodion installation can lead to the hbase:meta table
> staying in the FAILED_OPEN state.
>
> Hi,
>
>   I ran into this situation during a recent installation and thought it
> might be useful to share in case others hit a similar situation in the future.
> This isn't the only way to recover from the situation, but it is one option
> and was proven to work as expected.
>
> Regards,
> Dennis
>
>   During a recent Trafodion cluster install the daily build was broken in
> such a way that much of the installation proceeded, but the Trafodion files
> were not copied to each node.  This system was using CDH but I assume the
> following would happen for HDP as well.  After HBase was restarted as part
> of the installation I noticed the HBase icon was red.  I know this will
> likely not look the best in plain text, but the hbase:meta showed (in a red
> box):
>
> Region  State   RIT time (ms)
> 1588230740  hbase:meta,,1.1588230740 state=FAILED_OPEN, ts=Mon Mar 07
> 07:19:00 UTC 2016 (1289s ago),
> server=perf-sles-2.novalocal,60020,14573351205071289706
>
>   In the log file of the Region Server that was assigned the hbase:meta
> table, there was this output:
>
> 2016-03-07 16:45:27,243 INFO
> org.apache.hadoop.hbase.regionserver.RSRpcServices: Open
> hbase:meta,,1.1588230740
> 2016-03-07 16:45:27,249 ERROR
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed
> open of region=hbase:meta,,1.1588230740, starting to roll back the global
> memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
> at
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5486)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5793)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5765)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5721)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5672)
> at
>
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(Op
> enRegionHandler.java:356)
> at
>
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenR
> egionHandler.java:126)
> at
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:11
> 45)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:6
> 15)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException:
> Class
> org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion
> not found
> at
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2112)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5475)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: Class
> org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion not
> found
> at
>
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2018)
> at
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2110)
> ... 11 more
> 2016-03-07 16:45:27,250

Re: perl-DBD-SQLite*

2016-03-09 Thread Amanda Moran
Okay perfect. This is exactly what I wanted to know.

Thanks to Eason and Dennis for actually digging into the code!!

Appreciate it!

On Wed, Mar 9, 2016 at 5:36 AM, Zhang, Yi (Eason)  wrote:

> Hi,
>
> In my VM environment only perl-DBD-SQLite is installed, and both ‘sqgen’
> and Trafodion work well, so I think the perl-DBD-SQLite2 package is not
> needed.
>
> Thanks,
> Eason
>
>
> From: "D. Markt"
> Reply-To: "user@trafodion.incubator.apache.org"
> Date: Wednesday, March 9, 2016 at 16:03
> To: "user@trafodion.incubator.apache.org"
> Subject: RE: perl-DBD-SQLite*
>
> Hi,
>
>
>
>   I’m not a yum expert, but could the ‘*’ actually be part of a glob
> expression?  For example on a CentOS 6.5 node I did:
>
>
>
> yum list installed "perl-DBD-SQLite*"
>
> …
>
> perl-DBD-SQLite.x86_64    1.27-3.el6     @base
>
> perl-DBD-SQLite2.x86_64   0.33-12.el6    @epel
>
>
>
> Then again without the ‘*’:
>
>
>
> yum list installed "perl-DBD-SQLite"
>
> …
>
> perl-DBD-SQLite.x86_64    1.27-3.el6     @base
>
>
>
>   So the ‘*’ was likely used to install both packages.  From an Internet
> search it looks like there is a SQLite2, but one site indicates it has been
> deprecated and that SQLite is actually version 3 of the software (and that
> was circa 2009).  I did find several files under my somewhat dated source
> tree that have the common Perl suffix of “.pl”.  Many of those files are
> under the local_hadoop directory, so at a quick look they aren’t Trafodion
> files.  Looking for the SQLite term that one site mentioned, I see:
>
>
>
> ./core/sqf/sql/scripts/gensq.pl:use DBI;
>
> ./core/sqf/sql/scripts/gensq.pl:$DBH =
> DBI->connect("dbi:SQLite:dbname=sqconfig.db","","",$dbargs);
>
>
>
>   So it appears this is used by sqgen, and it doesn’t specifically ask for
> SQLite2.  Looking at the sqconfig.db from today’s install I see:
>
>
>
> more sqconfig.db
>
> SQLite format 3
>
> …
>
>
>
> So if you install SQLite on RHEL 7.1 and see that the generated sqconfig.db
> file has the same “SQLite format 3” (or higher), which I’m sure it will,
> then that should be enough.  Of course it’s always possible some other
> component is using SQLite, but my recollection is that sqgen was the one
> component using the package.  If other Perl scripts are using SQLite2 then
> they should probably be enhanced to use the newer version anyway.
>
>
>
> Regards,
>
> Dennis
>
>
>
> *From:* Amanda Moran [mailto:amanda.mo...@esgyn.com]
> *Sent:* Tuesday, March 08, 2016 6:03 PM
> *To:* user@trafodion.incubator.apache.org
> *Subject:* Re: perl-DBD-SQLite*
>
>
>
> Yes, I totally agree with the way to test.
>
>
>
> I am waiting on some changes to be merged and I will do just that.
>
>
>
> Just was hoping, if anyone had any words of wisdom before then!
>
>
>
> Thanks!
>
>
>
> On Tue, Mar 8, 2016 at 3:59 PM, Carol Pearson 
> wrote:
>
> Hi Amanda,
>
>
>
> At one point, I know we used SQLite for some internal configuration
> information, but I've lost track of whether or not we still do.  Otherwise,
> SQLite would be needed for a dependency, and at that point we'd have to
> track that one down to see what's really needed.
>
>
>
> If we don't install the full set, does the install complete and does
> Trafodion start? No guarantees that we don't have a problem if it installs
> and starts because the dependency could be later in the execution path.
> But if install/start fails, at least that tells us that the dependency
> matters and points us to at least one place *where* something cares.
>
>
>
> -Carol P.
>
>
> ---
>
> Email:carol.pearson...@gmail.com
>
> Twitter:  @CarolP222
>
> ---
>
>
>
> On Tue, Mar 8, 2016 at 2:55 PM, Amanda Moran 
> wrote:
>
> Hi there All-
>
>
>
> In the current installer we try to install this package: perl-DBD-SQLite*
> (note the *); on RHEL 6 and CentOS 6 this has worked fine.
>
>
>
> I am testing the installer on RHEL 7.1 and it is not able to
> install perl-DBD-SQLite*, only perl-DBD-SQLite.
>
>
>
> Is just installing perl-DBD-SQLite going to be an issue?
>
>
>
> Thanks!
>
>
>
> --
>
> Thanks,
>
>
>
> Amanda Moran
>
>
>
>
>
>
>
> --
>
> Thanks,
>
>
>
> Amanda Moran
>



-- 
Thanks,

Amanda Moran


Re: A failed Trafodion installation can lead to the hbase:meta table staying in the FAILED_OPEN state.

2016-03-09 Thread Liu, Ming (Ming)
Thanks, Dennis, for sharing this. We saw this issue during an expansion of Trafodion 
from 4 nodes to 5 nodes. Since the newly added node is empty, the META region should 
not be there, so it does no harm. But the problem is similar: the newly added RS 
cannot work until we install Trafodion on that RS node.

There are two related JIRAs:  TRAFODION-1729 and TRAFODION-1730.
We are working on them to solve the issue. Since Trafodion currently modifies the 
HBase server's hbase-site.xml to add coprocessors, it affects *ALL* regions in 
HBase, including the META region. This is unnecessary and not good. The META region 
definitely does not need to load Trafodion coprocessors. It is a system region, 
Trafodion never needs to access it directly, and once its open fails, the whole 
HBase system cannot work.
So with those JIRAs fully addressed, we can remove the hbase-site.xml modification 
from the Trafodion installer, and there will be no need to restart HBase. And as 
part of a proper installation, Trafodion should be installed on all RS nodes, so 
the coprocessor jar files should be copied to all RS nodes. If Trafodion is not 
installed on all RS nodes, there may still be issues; I assume the installer still 
needs to consider this. A better approach would be to keep the coprocessor jar 
file on HDFS, but that is just a theory and needs further study.
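
(As a rough illustration of what to check on a Region Server node when this 
happens, here is a sketch. The hbase.hregion.impl property name is inferred from 
the stack trace quoted below, the config path is the usual CDH location, and the 
grep for "traf" is only a quick heuristic, not the full list of settings the 
installer adds:)

grep -A 1 'hbase.hregion.impl' /etc/hbase/conf/hbase-site.xml
hbase classpath | tr ':' '\n' | grep -i traf

If the first command shows the Trafodion region class but the second finds no 
Trafodion jar on that node, the node is in the state described in this thread: 
the configuration asks for classes that were never copied there.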

Thanks,
Ming

-----Original Message-----
From: D. Markt [mailto:dmarkt7...@gmail.com]
Sent: March 9, 2016 15:23
To: user@trafodion.incubator.apache.org
Subject: A failed Trafodion installation can lead to the hbase:meta table 
staying in the FAILED_OPEN state.

Hi,

  I ran into this situation during a recent installation and thought it might 
be useful to share in case others hit a similar situation in the future.
This isn't the only way to recover from the situation, but it is one option and 
was proven to work as expected.

Regards,
Dennis

  During a recent Trafodion cluster install the daily build was broken in such 
a way that much of the installation proceeded, but the Trafodion files were not 
copied to each node.  This system was using CDH but I assume the following 
would happen for HDP as well.  After HBase was restarted as part of the 
installation I noticed the HBase icon was red.  I know this will likely not 
look the best in plain text, but the hbase:meta showed (in a red
box):

Region  State   RIT time (ms)
1588230740  hbase:meta,,1.1588230740 state=FAILED_OPEN, ts=Mon Mar 07
07:19:00 UTC 2016 (1289s ago),
server=perf-sles-2.novalocal,60020,14573351205071289706

  In the log file of the Region Server that was assigned the hbase:meta table, 
there was this output:

2016-03-07 16:45:27,243 INFO
org.apache.hadoop.hbase.regionserver.RSRpcServices: Open
hbase:meta,,1.1588230740
2016-03-07 16:45:27,249 ERROR
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of 
region=hbase:meta,,1.1588230740, starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
at
org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5486)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5793)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5765)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5721)
at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5672)
at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(Op
enRegionHandler.java:356)
at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenR
egionHandler.java:126)
at
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:11
45)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:6
15)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException:
Class org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion
not found
at
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2112)
at
org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5475)
... 10 more
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion not found
at
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2018)
at
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2110)
... 11 more
2016-03-07 16:45:27,250 INFO
org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination: Opening of 
region {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY 
=> ''} failed, transitioning from OPENING to FAILED_OPEN in ZK, expecting 
version 115

After consulting with our installer expert, the issue was in fact that the 
needed files had not been copied to each node.  At that po

Re: how to tell the most time-consuming part for a given Trafodion query plan?

2016-03-09 Thread Liu, Ming (Ming)
Thank you, Selva,

Gunnar pointed me to the SQL manual before; I should read it more carefully.

There is a lot of information in your reply, and I need some time to understand 
all of it. But my main question, ‘how to tell the running time of different 
parts of a given query’, is partially answered by your reply. I can use ‘Oper 
CPU Time’ for this purpose, along with the other tips described in your message. 
But as you correctly pointed out, it is an art ☺ so I need more practice to 
fully grasp it.
Once I have a more concrete question, I will ask for help again.

Thanks,
Ming

From: Selva Govindarajan [mailto:selva.govindara...@esgyn.com]
Sent: March 9, 2016 13:36
To: user@trafodion.incubator.apache.org
Subject: RE: how to tell the most time-consuming part for a given Trafodion 
query plan?

Hi Ming,

The counters or metrics returned by RMS in Trafodion are documented at 
http://trafodion.apache.org/docs/2.0.0/sql_reference/index.html#sql_runtime_statistics.

Counters displayed in operator stats:

The DOP (Degree of Parallelism) determines the number of ESPs involved in 
executing the Trafodion operator, or TDB (Task Definition Block). The TDB can be 
identified by the number in the ID column. LC and RC denote the left and right 
child of the operator. Using these IDs and the parent TDB ID (PaID), one can 
construct the query plan from this output. The Dispatches column gives an 
indication of how often the operator is scheduled for execution by the Trafodion 
SQL Engine scheduler. An operator is scheduled and runs, traversing the different 
steps within itself, until it can't continue or gives up on its own so that other 
operators can be scheduled.

During query execution, you will see these metrics change continuously for all 
the operators as the data flows across them, until a blocking operator is 
encountered in the plan. The blocking operators are EX_SORT, EX_HASH_JOIN and 
EX_HASH_GRBY.

The operator CPU time is the sum of the CPU time spent in the operator by the 
executor thread of all the processes hosting the operator. Operator CPU time is 
real, measured in microseconds, and is NOT a relative number. It doesn’t include 
the CPU time spent by other threads executing tasks on behalf of the executor 
thread. Usually, a Trafodion executor instance runs in a single thread, and the 
engine can have multiple executor instances running in a process to support 
multi-threaded client applications. Most notably, the Trafodion engine uses a 
thread pool to pre-fetch rows while rows are fetched sequentially. It is also 
possible that HBase uses thread pools to complete the operations requested by 
Trafodion. These thread timings are not included in the operator CPU time.  To 
account for this, RMS provides additional counters in a different view: the 
pertable view.



GET STATISTICS FOR QID  PERTABLE provides the following counters:


HBase/Hive IOs          Number of messages sent to the HBase Region Servers (RS).

HBase/Hive IO MBytes    The cumulative size of these messages in MB, accounted 
at the Trafodion layer.

HBase/Hive Sum IO Time  The cumulative time taken in microseconds by the RS to 
respond, summed across all ESPs.

HBase/Hive Max IO Time  The maximum of the cumulative time taken in microseconds 
by the RS to respond for any ESP. This gives an indication of how much of the 
elapsed time is spent in HBase, because the messages to the RS are blocking.


The Sum and Max IO time are the elapsed time measured as wall clock time in 
microseconds.



The max IO time should be less than the elapsed time or response time of the 
query. If the max IO time is closer to the elapsed time, then most of the time 
is spent in HBase.

The sum IO time should be less than the DOP * elapsed time.

The Operator time is the CPU time.
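
As a rough worked example (the numbers below are invented for illustration, not 
taken from a real run): suppose a query ran with DOP = 8 and an elapsed time of 
10 seconds, i.e. 10,000,000 microseconds.

Max IO Time near 10,000,000  ->  the run was dominated by waiting on HBase
Max IO Time near  1,000,000  ->  only about 10% of the elapsed time was HBase waits
Sum IO Time upper bound       =  DOP * elapsed time = 8 * 10,000,000 = 80,000,000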

I sincerely hope you will find the above information useful to digest the 
output from RMS.  I would say reading, analyzing and interpreting the output 
from RMS is an art that you would develop over time and it is always difficult 
to document every usage scenario. If you find something that needs to be added 
or isn’t correct, please let us know.



Selva


From: Liu, Ming (Ming) [mailto:ming@esgyn.cn]
Sent: Tuesday, March 8, 2016 5:41 PM
To: user@trafodion.incubator.apache.org
Subject: how to tell the most time-consuming part for a given Trafodion query 
plan?

Hi, all,

We have been running some complex queries using Trafodion and need to analyze 
their performance. One question: if we want to know which part of the plan takes 
the longest time, are there any good tools or techniques to answer this?

I can use ‘get statistics for qid  default’ to get runtime stats, but it is 
rather hard to interpret the output. I assume the “Oper CPU Time” is the best 
one we can trust? But I am not sure whether it is pure CPU time or whether it 
also includes ‘waiting time’. If I want to know the whole time an operation 
takes from start to end, is there any way?
And if i

Re: perl-DBD-SQLite*

2016-03-09 Thread Zhang, Yi (Eason)
Hi,

In my VM environment only perl-DBD-SQLite is installed, and both ‘sqgen’ and 
Trafodion work well, so I think the perl-DBD-SQLite2 package is not needed.

Thanks,
Eason


From: "D. Markt"
Reply-To: "user@trafodion.incubator.apache.org"
Date: Wednesday, March 9, 2016 at 16:03
To: "user@trafodion.incubator.apache.org"
Subject: RE: perl-DBD-SQLite*

Hi,

  I’m not a yum expert, but could the ‘*’ actually be part of a glob 
expression?  For example on a CentOS 6.5 node I did:

yum list installed "perl-DBD-SQLite*"
…
perl-DBD-SQLite.x86_64    1.27-3.el6     @base
perl-DBD-SQLite2.x86_64   0.33-12.el6    @epel

Then again without the ‘*’:

yum list installed "perl-DBD-SQLite"
…
perl-DBD-SQLite.x86_64    1.27-3.el6     @base

  So the ‘*’ was likely used to install both packages.  From an Internet search 
it looks like there is a SQLite2, but one site indicates it has been deprecated 
and that SQLite is actually version 3 of the software (and that was circa 2009).  
I did find several files under my somewhat dated source tree that have the 
common Perl suffix of “.pl”.  Many of those files are under the local_hadoop 
directory, so at a quick look they aren’t Trafodion files.  Looking for the 
SQLite term that one site mentioned, I see:

./core/sqf/sql/scripts/gensq.pl:use DBI;
./core/sqf/sql/scripts/gensq.pl:$DBH = 
DBI->connect("dbi:SQLite:dbname=sqconfig.db","","",$dbargs);

  So it appears this is used by sqgen, and it doesn’t specifically ask for 
SQLite2.  Looking at the sqconfig.db from today’s install I see:

more sqconfig.db
SQLite format 3
…

So if you install SQLite on RHEL 7.1 and see that the generated sqconfig.db file 
has the same “SQLite format 3” (or higher), which I’m sure it will, then that 
should be enough.  Of course it’s always possible some other component is using 
SQLite, but my recollection is that sqgen was the one component using the 
package.  If other Perl scripts are using SQLite2 then they should probably be 
enhanced to use the newer version anyway.

Regards,
Dennis

From: Amanda Moran [mailto:amanda.mo...@esgyn.com]
Sent: Tuesday, March 08, 2016 6:03 PM
To: user@trafodion.incubator.apache.org
Subject: Re: perl-DBD-SQLite*

Yes, I totally agree with the way to test.

I am waiting on some changes to be merged and I will do just that.

Just was hoping, if anyone had any words of wisdom before then!

Thanks!

On Tue, Mar 8, 2016 at 3:59 PM, Carol Pearson <carol.pearson...@gmail.com> wrote:
Hi Amanda,

At one point, I know we used SQLite for some internal configuration 
information, but I've lost track of whether or not we still do.  Otherwise, 
SQLite would be needed for a dependency, and at that point we'd have to 
track that one down to see what's really needed.

If we don't install the full set, does the install complete and does Trafodion 
start? No guarantees that we don't have a problem if it installs and starts 
because the dependency could be later in the execution path.  But if 
install/start fails, at least that tells us that the dependency matters and 
points us to at least one place *where* something cares.

-Carol P.

---
Email:carol.pearson...@gmail.com
Twitter:  @CarolP222
---

On Tue, Mar 8, 2016 at 2:55 PM, Amanda Moran <amanda.mo...@esgyn.com> wrote:
Hi there All-

In the current installer we try to install this package: perl-DBD-SQLite* (note 
the *); on RHEL 6 and CentOS 6 this has worked fine.

I am testing the installer on RHEL 7.1 and it is not able to install 
perl-DBD-SQLite*, only perl-DBD-SQLite.

Is just installing perl-DBD-SQLite going to be an issue?

Thanks!

--
Thanks,

Amanda Moran




--
Thanks,

Amanda Moran


Re: Request for information about the installer's new management nodes prompt.

2016-03-09 Thread Gunnar Tapper
Hi,

Amanda can correct me if I'm wrong but I think that this refers to
configurations where the distribution manager (Ambari or Cloudera) runs on
separate nodes in the cluster. That is, the nodes are part of the cluster
but are not used as edge nodes or storage nodes and, therefore, should not
have Trafodion services installed on them.

Gunnar

On Tue, Mar 8, 2016 at 2:35 PM, D. Markt  wrote:

> Hi,
>
>   Some time back I noticed a new prompt as I was installing a Trafodion
> build:
>
> Do you have a set of management nodes (Y/N), default is N:
>
>   I was wondering how to appropriately answer this prompt to use the new
> feature as it was intended.  For example, one cluster is using the HA
> configuration and has only the HBase Master and HDFS Name Node processes
> running on two nodes.  Several questions come to mind:
>
>   1) Is the expectation those nodes will be entered at this prompt?
>
>   2) Will entering a node on this line preclude certain processes from
> running on that node, for example:
>a) Are mxosrvrs still started on those nodes?
>b) Are other Trafodion processes started on those nodes?
>
>   3) Are there other considerations as to which nodes should or should not
> be listed as management nodes?
>
>   Any insights will be helpful and appreciated.
>
> Thanks,
> Dennis
>
>
>


-- 
Thanks,

Gunnar
*If you think you can you can, if you think you can't you're right.*


RE: perl-DBD-SQLite*

2016-03-09 Thread D. Markt
Hi,

 

  I’m not a yum expert, but could the ‘*’ actually be part of a glob 
expression?  For example on a CentOS 6.5 node I did:

 

yum list installed "perl-DBD-SQLite*"

…

perl-DBD-SQLite.x86_64    1.27-3.el6     @base

perl-DBD-SQLite2.x86_64   0.33-12.el6    @epel

 

Then again without the ‘*’:

 

yum list installed "perl-DBD-SQLite"

…

perl-DBD-SQLite.x86_64    1.27-3.el6     @base

 

  So the ‘*’ was likely used to install both packages.  From an Internet search 
it looks like there is a SQLite2, but one site indicates it has been deprecated 
and that SQLite is actually version 3 of the software (and that was circa 2009).  
I did find several files under my somewhat dated source tree that have the 
common Perl suffix of “.pl”.  Many of those files are under the local_hadoop 
directory, so at a quick look they aren’t Trafodion files.  Looking for the 
SQLite term that one site mentioned, I see:

 

./core/sqf/sql/scripts/gensq.pl:use DBI;

./core/sqf/sql/scripts/gensq.pl:$DBH = 
DBI->connect("dbi:SQLite:dbname=sqconfig.db","","",$dbargs);

 

  So it appears this is used by sqgen, and it doesn’t specifically ask for 
SQLite2.  Looking at the sqconfig.db from today’s install I see:

 

more sqconfig.db

SQLite format 3

…

 

So if you install SQLite on RHEL 7.1 and see that the generated sqconfig.db file 
has the same “SQLite format 3” (or higher), which I’m sure it will, then that 
should be enough.  Of course it’s always possible some other component is using 
SQLite, but my recollection is that sqgen was the one component using the 
package.  If other Perl scripts are using SQLite2 then they should probably be 
enhanced to use the newer version anyway.
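
(A quick way to check both points on the new platform; this is only a sketch, 
run from whatever directory holds the generated sqconfig.db:)

head -c 16 sqconfig.db
perl -MDBD::SQLite -e 'print "DBD::SQLite $DBD::SQLite::VERSION\n"'

The first command should print the string “SQLite format 3”, and the second 
should print a version rather than a missing-module error, which would confirm 
that the single perl-DBD-SQLite package is enough for what sqgen needs.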

 

Regards,

Dennis

 

From: Amanda Moran [mailto:amanda.mo...@esgyn.com] 
Sent: Tuesday, March 08, 2016 6:03 PM
To: user@trafodion.incubator.apache.org
Subject: Re: perl-DBD-SQLite*

 

Yes, I totally agree with the way to test.

 

I am waiting on some changes to be merged and I will do just that. 

 

Just was hoping, if anyone had any words of wisdom before then! 

 

Thanks! 

 

On Tue, Mar 8, 2016 at 3:59 PM, Carol Pearson <carol.pearson...@gmail.com> wrote:

Hi Amanda,

 

At one point, I know we used SQLite for some internal configuration 
information, but I've lost track of whether or not we still do.  Otherwise, 
SQLite would be needed for a dependency, and at that point we'd have to 
track that one down to see what's really needed.  

 

If we don't install the full set, does the install complete and does Trafodion 
start? No guarantees that we don't have a problem if it installs and starts 
because the dependency could be later in the execution path.  But if 
install/start fails, at least that tells us that the dependency matters and 
points us to at least one place *where* something cares.

 

-Carol P.




---

Email:carol.pearson...@gmail.com  

Twitter:  @CarolP222

---

 

On Tue, Mar 8, 2016 at 2:55 PM, Amanda Moran <amanda.mo...@esgyn.com> wrote:

Hi there All-

 

In the current installer we try to install this package: perl-DBD-SQLite* (note 
the *); on RHEL 6 and CentOS 6 this has worked fine.

 

I am testing the installer on RHEL 7.1 and it is not able to install 
perl-DBD-SQLite*, only perl-DBD-SQLite.

 

Is just installing perl-DBD-SQLite going to be an issue?

 

Thanks! 

 

-- 

Thanks, 

 

Amanda Moran

 





 

-- 

Thanks, 

 

Amanda Moran