Hi Mike & Wally,

I tried the install on SLES 11 SP3, and it worked end to end with AMS.
Here are some details.

branch-2.0.0
ambari-server --hash: 3f1dad99bbaa7c0d58a09186b1f33637816ae7a7


ip-172-31-46-193:~ # cat /etc/*-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3

That said, the issue mentioned in this thread should not be OS related,
unless the pre-compiled binaries do not work for some reason.
Once HBase finds native libs of the correct version, it should work as desired.
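
As a quick check that the native libs are usable (assuming the stock Hadoop
2.x client is on the PATH; adjust the command if your layout differs):

  hadoop checknative -a

It should report snappy: true along with the path to libsnappy.so.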

BR,
Sid
________________________________________
From: Benbenek, Waldyn J <waldyn.benbe...@unisys.com>
Sent: Wednesday, March 04, 2015 10:57 AM
To: dev@ambari.apache.org
Subject: RE: AMS issue with SNAPPY

Thanks,

I'll update and try again.

Wally

-----Original Message-----
From: Siddharth Wagle [mailto:swa...@hortonworks.com]
Sent: Tuesday, March 03, 2015 3:01 PM
To: dev@ambari.apache.org
Subject: Re: AMS issue with SNAPPY

Hi Wally,

I see that you do not have the changes made to fix this issue (commit 
fdd80ec81d93b80899bef593420abb726884e3fc), since it went in on Feb 26 2015.
You should be able to port these changes over and re-test.
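
For example, if your branch predates the fix, something along these lines
should work (just a sketch; resolve any conflicts by hand):

  git cherry-pick fdd80ec81d93b80899bef593420abb726884e3fc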

BR,
Sid

________________________________________
From: Benbenek, Waldyn J <waldyn.benbe...@unisys.com>
Sent: Tuesday, March 03, 2015 12:13 PM
To: dev@ambari.apache.org
Subject: RE: AMS issue with SNAPPY

Sid,

The Hadoop version is 2.6.  SLES is 11 SP3.

The libraries are there.  I don't know what AMS is using for its HMaster process.
I will set up the cluster and see what I can find out about the call when it
fails.

Wally



-----Original Message-----
From: Siddharth Wagle [mailto:swa...@hortonworks.com]
Sent: Tuesday, March 03, 2015 10:33 AM
To: dev@ambari.apache.org
Subject: Re: AMS issue with SNAPPY

Hi,

The native Hadoop libs, including SNAPPY, are packaged with AMS at
/usr/lib/ams-hbase/lib/hadoop-native. This path should be set as
-Djava.library.path=/usr/lib/ams-hbase/lib/hadoop-native/ for the HMaster
process.
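
A quick way to verify this on a node running the collector (a rough sketch,
assuming a standard Linux ps and that the HMaster shows up as its own java
process):

  ps -ef | grep HMaster | grep -o 'java.library.path=[^ ]*'
  ls /usr/lib/ams-hbase/lib/hadoop-native/

The first command should print the path above, and the second should list
libsnappy.so* and libhadoop.so* built for your architecture.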

1. What is the version of the Hadoop stack that is deployed?
2. What is the result of this operation on SLES: "which hadoop"?

Would one of you like to open a Jira for this?

Best Regards,
Sid
________________________________________
From: Benbenek, Waldyn J <waldyn.benbe...@unisys.com>
Sent: Tuesday, March 03, 2015 7:51 AM
To: dev@ambari.apache.org
Subject: RE: AMS issue with SNAPPY

I am having this issue as well.  Is this a build or an environment problem?

-----Original Message-----
From: Harp, Michael [mailto:michael.h...@teradata.com]
Sent: Tuesday, February 24, 2015 4:05 PM
To: dev@ambari.apache.org
Subject: AMS issue with SNAPPY

I am only able to get AMS up and running by changing
timeline.metrics.hbase.compression.scheme=SNAPPY to
timeline.metrics.hbase.compression.scheme=None.
Any additional steps needed for snappy beyond installing the snappy and
snappy-devel packages?
The environment is SLES 11 SP1, using trunk with AMS in embedded mode.
Thanks,
Mike

06:43:39,634 ERROR [main] PhoenixHBaseAccessor:306 - Error creating
Metrics Schema in HBase using Phoenix.
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: Compression algorithm
'snappy' previously failed test.
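
The error above comes from HBase's compression self-test; it can be
reproduced outside of AMS with HBase's CompressionTest utility (a hedged
example; the ams-hbase path below is a guess and may differ in your
install):

  /usr/lib/ams-hbase/bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy

If that fails with an UnsatisfiedLinkError or a "native snappy library not
available" message, the HMaster cannot load libsnappy from its
java.library.path.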


