Are you running against a secure cluster? If so, you'd need to compile
Phoenix yourself, as the jars in our distribution are built for a non-secure
cluster.
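
If you're not sure whether the cluster is secure, a quick check is to read the
authentication setting from the cluster config ("kerberos" means secure,
"simple" means non-secure). A rough sketch, assuming you run it through
bin/hbase so the HBase jars and config directory are on the classpath (the
class name here is just an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckSecurity {
    public static void main(String[] args) {
        // HBaseConfiguration.create() picks up hbase-default.xml and
        // hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.security.authentication = "
                + conf.get("hbase.security.authentication", "simple"));
    }
}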

On Mon, Aug 11, 2014 at 10:29 AM, Jesse Yates <jesse.k.ya...@gmail.com> wrote:
> That seems correct. I'm not sure where the issue is either. It seems like
> the property isn't in the correct config files (also, you don't need it on
> the master configs, but it won't hurt).
>
> Is the property there when you dump the config from the RS's UI page?
>
> -------------------
> Jesse Yates
> @jesse_yates
> jyates.github.com
>
>
> On Mon, Aug 11, 2014 at 10:27 AM, Saravanan A <asarava...@alphaworkz.com>
> wrote:
>>
>> No, I'm not sure where the issue is either...
>>
>> This is the procedure I followed for the Phoenix installation:
>>
>> 1. Extracted Phoenix 3.0.
>> 2. Added the phoenix-core jar on all region servers and on the master.
>> 3. Added the following property to hbase-site.xml on all region servers,
>> on the master, and in the Phoenix bin dir:
>>
>> <property>
>>   <name>hbase.regionserver.wal.codec</name>
>>   <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>> </property>
>>
>> 4. Restarted HBase.
>>
>> Is this right, or am I missing anything?
>>
>>
>>
>>
>> On Mon, Aug 11, 2014 at 10:38 PM, Jesse Yates <jesse.k.ya...@gmail.com>
>> wrote:
>>>
>>> Well now, that is strange. Maybe it's something to do with CDH? Have you
>>> talked to those fellas? Or maybe someone from Cloudera has an insight?
>>>
>>> Seems like it should work
>>>
>>> On Aug 11, 2014 9:55 AM, "Saravanan A" <asarava...@alphaworkz.com> wrote:
>>>>
>>>> bin/hbase classpath:
>>>>
>>>>
>>>> /opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../conf:/usr/java/default/lib/tools.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/..:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../hbase-0.94.15-cdh4.7.0-security.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../hbase-0.94.15-cdh4.7.0-security-tests.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../hbase.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/avro-1.7.4.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/cloudera-jets3t-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-daemon-1.0.3.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-lang-2.5.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/core-3.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/gmbal-api-only-3.0.0-b023.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-framework-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-framework-2.1.1-tests.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-http-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-http-server-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-http-servlet-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/grizzly-rcm-2.1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/high-scale-lib-1.1.1.jar
:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/javax.servlet-3.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-client-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-grizzly2-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-guice-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-json-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-test-framework-core-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jersey-test-framework-grizzly2-1.8.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jets3t-0.6.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jetty-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jetty-util-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/kfs-0.3.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/libthrift-0.9.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/management-api-3.0.0-b012.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/metrics-core-2.1.2.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.
p0.40/lib/hbase/bin/../lib/netty-3.6.6.Final.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/phoenix-core-3.0.0-incubating.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/slf4j-api-1.6.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hbase/bin/../lib/zookeeper.jar:/etc/hadoop/conf/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/hadoop/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/hadoop/lib/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/zookeeper/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/zookeeper/lib/*::/etc/hadoop/conf:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-0.20-mapreduce/./:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-0.20-mapreduce/lib/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop/libexec/../../hadoop-0.20-mapreduce/.//*:/etc/hadoop/conf/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/hadoop/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/hadoop/lib/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/zookeeper/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/zookeeper/lib/*:
>>>>
>>>> This is the result I got from the "hbase classpath" command, and
>>>> "/opt/cloudera/parcels/CDH/lib/hbase/lib/" is the path from which I ran the code.
>>>>
>>>>
>>>> On Mon, Aug 11, 2014 at 9:29 PM, Jesse Yates <jesse.k.ya...@gmail.com>
>>>> wrote:
>>>>>
>>>>> The classpath you are using above doesn't include the HBase config
>>>>> files, so the code you executed will correctly tell you that the class
>>>>> exists but is not configured.
>>>>>
>>>>> Have you tried running
>>>>> "bin/hbase classpath"
>>>>> to see what your classpath is at RS startup? If it's the same as the
>>>>> -cp argument, it's missing the config files.
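>>>>>
>>>>> If juggling the -cp argument is a pain, you can also point the
>>>>> Configuration directly at the region server's hbase-site.xml so the
>>>>> check doesn't depend on what happens to be on the classpath. A rough
>>>>> sketch (the class name is just a placeholder, and the config path below
>>>>> is only an assumption for a typical CDH install; adjust it to wherever
>>>>> your RS config actually lives):
>>>>>
>>>>> import org.apache.hadoop.conf.Configuration;
>>>>> import org.apache.hadoop.fs.Path;
>>>>>
>>>>> public class CheckCodecConfig {
>>>>>     public static void main(String[] args) {
>>>>>         Configuration conf = new Configuration();
>>>>>         // Load the RS config file explicitly (example path, adjust as needed).
>>>>>         conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
>>>>>         // Should print the IndexedWALEditCodec class name if the property took.
>>>>>         System.out.println("hbase.regionserver.wal.codec = "
>>>>>                 + conf.get("hbase.regionserver.wal.codec"));
>>>>>     }
>>>>> }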
>>>>>
>>>>> On Aug 11, 2014 6:10 AM, "Saravanan A" <asarava...@alphaworkz.com>
>>>>> wrote:
>>>>>>
>>>>>> This is the command I ran via the hbase script (test1.jar is my jar file):
>>>>>> hbase -cp
>>>>>> .:hadoop-common-2.0.0-cdh4.7.0.jar:commons-logging-1.1.1.jar:hbase-0.94.15-cdh4.7.0-security.jar:com.google.collections.jar:commons-collections-3.2.1.jar:phoenix-core-3.0.0-incubating.jar:com.google.guava_1.6.0.jar:test1.jar
>>>>>> FixConfigFile
>>>>>>
>>>>>> The Output:
>>>>>> Found
>>>>>> Not Found
>>>>>>
>>>>>> This is my full code:
>>>>>>
>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>>
>>>>>> public class FixConfigFile {
>>>>>>
>>>>>> public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
>>>>>>     "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
>>>>>> public static final String WAL_EDIT_CODEC_CLASS_KEY =
>>>>>>     "hbase.regionserver.wal.codec";
>>>>>> public static void main(String[] args) {
>>>>>>     Configuration config = new Configuration();
>>>>>>     isWALEditCodecSet(config);
>>>>>> }
>>>>>>
>>>>>> public static boolean isWALEditCodecSet(Configuration conf) {
>>>>>>     // check to see if the WALEditCodec is installed
>>>>>>     try {
>>>>>>         // Use reflection to load the IndexedWALEditCodec, since it may
>>>>>>         // not load with an older version of HBase
>>>>>>         Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
>>>>>>         System.out.println("Found");
>>>>>>     } catch (Throwable t) {
>>>>>>         System.out.println("Error");
>>>>>>         return false;
>>>>>>     }
>>>>>>     if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(
>>>>>>             conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>>>>>         // it's installed, and it can handle compression and
>>>>>>         // non-compression cases
>>>>>>         System.out.println("True");
>>>>>>         return true;
>>>>>>     }
>>>>>>     System.out.println("Not Found");
>>>>>>     return false;
>>>>>> }
>>>>>>
>>>>>> }
>>>>>> ************
>>>>>>
>>>>>> I'm not sure this is how you wanted me to execute the code... if I got it
>>>>>> wrong, please guide me...
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Aug 9, 2014 at 8:32 PM, Jesse Yates <jesse.k.ya...@gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> When you run
>>>>>>>    $ bin/hbase classpath
>>>>>>> what do you get? That should help illuminate whether everything is set up right.
>>>>>>>
>>>>>>> If the phoenix jar is there, then check the contents of the jar (
>>>>>>> http://docs.oracle.com/javase/tutorial/deployment/jar/view.html) and 
>>>>>>> make
>>>>>>> sure the classes are present.
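>>>>>>>
>>>>>>> If the jar tool isn't handy, here's a tiny sketch of the same check in
>>>>>>> Java (pass it the path of the phoenix-core jar you dropped into
>>>>>>> hbase/lib; the class name is just an example):
>>>>>>>
>>>>>>> import java.util.jar.JarFile;
>>>>>>>
>>>>>>> public class CheckJarForCodec {
>>>>>>>     public static void main(String[] args) throws Exception {
>>>>>>>         String entry =
>>>>>>>             "org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec.class";
>>>>>>>         JarFile jar = new JarFile(args[0]);
>>>>>>>         // getJarEntry() returns null when that class file isn't in the jar.
>>>>>>>         System.out.println(entry + ": "
>>>>>>>                 + (jar.getJarEntry(entry) != null ? "present" : "missing"));
>>>>>>>         jar.close();
>>>>>>>     }
>>>>>>> }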
>>>>>>>
>>>>>>> On Aug 9, 2014 1:03 AM, "Saravanan A" <asarava...@alphaworkz.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Hi Jesse,
>>>>>>>>
>>>>>>>> I ran the following code to test the existence of the classes you
>>>>>>>> asked me to check. I initialized the two constants to the following 
>>>>>>>> values.
>>>>>>>>
>>>>>>>> =======
>>>>>>>> public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
>>>>>>>> "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
>>>>>>>>
>>>>>>>> public static final String WAL_EDIT_CODEC_CLASS_KEY =
>>>>>>>> "hbase.regionserver.wal.codec";
>>>>>>>> ======
>>>>>>>>
>>>>>>>> Then I ran the following code and got "Not Found" from the equality test.
>>>>>>>>
>>>>>>>> ====
>>>>>>>>         if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(
>>>>>>>>                 conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>>>>>>>             // it's installed, and it can handle compression and
>>>>>>>>             // non-compression cases
>>>>>>>>             System.out.println("True");
>>>>>>>>             return true;
>>>>>>>>         }
>>>>>>>>         System.out.println("Not Found");
>>>>>>>> ====
>>>>>>>>
>>>>>>>> I am not sure if I initialized the constants correctly. If I did, then I
>>>>>>>> think some jars are missing or I have an incorrect version.
>>>>>>>> We use CDH 4.7, which ships HBase 0.94.15, and we use Phoenix 3.0.
>>>>>>>>
>>>>>>>> Can you tell me how to make this work? Your assistance is greatly
>>>>>>>> appreciated.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Saravanan.A
>>>>>>>>
>>>>>>>> Full code
>>>>>>>> ==========
>>>>>>>> public static void main(String[] args) {
>>>>>>>>         Configuration config=new Configuration();
>>>>>>>>         isWALEditCodecSet(config);
>>>>>>>>
>>>>>>>>     }
>>>>>>>>     public static boolean isWALEditCodecSet(Configuration conf) {
>>>>>>>>         // check to see if the WALEditCodec is installed
>>>>>>>>         try {
>>>>>>>>             // Use reflection to load the IndexedWALEditCodec, since it
>>>>>>>>             // may not load with an older version of HBase
>>>>>>>>             Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
>>>>>>>>             System.out.println("Found");
>>>>>>>>         } catch (Throwable t) {
>>>>>>>>             System.out.println("Error");
>>>>>>>>             return false;
>>>>>>>>         }
>>>>>>>>         if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(
>>>>>>>>                 conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>>>>>>>             // it's installed, and it can handle compression and
>>>>>>>>             // non-compression cases
>>>>>>>>             System.out.println("True");
>>>>>>>>             return true;
>>>>>>>>         }
>>>>>>>>         System.out.println("Not Found");
>>>>>>>>         return false;
>>>>>>>>     }
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, Aug 9, 2014 at 12:02 AM, Jesse Yates
>>>>>>>> <jesse.k.ya...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> This error is thrown when, on the server side, the following code
>>>>>>>>> returns false (IndexManagementUtil#isWALEditCodecSet):
>>>>>>>>>
>>>>>>>>>>     public static boolean isWALEditCodecSet(Configuration conf) {
>>>>>>>>>>         // check to see if the WALEditCodec is installed
>>>>>>>>>>         try {
>>>>>>>>>>             // Use reflection to load the IndexedWALEditCodec, since
>>>>>>>>>>             // it may not load with an older version of HBase
>>>>>>>>>>             Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
>>>>>>>>>>         } catch (Throwable t) {
>>>>>>>>>>             return false;
>>>>>>>>>>         }
>>>>>>>>>>         if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(
>>>>>>>>>>                 conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>>>>>>>>>             // it's installed, and it can handle compression and
>>>>>>>>>>             // non-compression cases
>>>>>>>>>>             return true;
>>>>>>>>>>         }
>>>>>>>>>>         return false;
>>>>>>>>>>     }
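>>>>>>>>>
>>>>>>>>> For reference, the two constants used above should be (going by the
>>>>>>>>> error message and the docs; double-check against the Phoenix 3.0 source
>>>>>>>>> if in doubt):
>>>>>>>>>
>>>>>>>>> public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
>>>>>>>>>     "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
>>>>>>>>> public static final String WAL_EDIT_CODEC_CLASS_KEY =
>>>>>>>>>     "hbase.regionserver.wal.codec";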
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  You could just put this into a main method in a Java class, put that
>>>>>>>>> on the classpath of your HBase install on one of the machines in your
>>>>>>>>> cluster, and run it from the HBase command line to make sure that it
>>>>>>>>> passes. Otherwise, you might not have the right configs (copy-paste
>>>>>>>>> error?) or might be missing the right jars.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Also, FWIW, this property:
>>>>>>>>>
>>>>>>>>>>  <property>
>>>>>>>>>>      <name>hbase.region.server.rpc.scheduler.factory.class</name>
>>>>>>>>>>
>>>>>>>>>> <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
>>>>>>>>>>      <description>Factory to create the Phoenix RPC Scheduler that
>>>>>>>>>> knows to put index updates into index queues</description>
>>>>>>>>>>
>>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  is only valid in HBase 0.98.4+ (as pointed out in the section
>>>>>>>>> "Advanced Setup - Removing Index Deadlocks (0.98.4+)"). However, it 
>>>>>>>>> should
>>>>>>>>> still be fine to have in older versions.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -------------------
>>>>>>>>> Jesse Yates
>>>>>>>>> @jesse_yates
>>>>>>>>> jyates.github.com
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Aug 8, 2014 at 2:18 AM, Saravanan A
>>>>>>>>> <asarava...@alphaworkz.com> wrote:
>>>>>>>>>>
>>>>>>>>>> This is my hbase-site.xml file...
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> <?xml version="1.0" encoding="UTF-8"?>
>>>>>>>>>> <!--Autogenerated by Cloudera CM on 2014-06-16T11:10:16.319Z-->
>>>>>>>>>> <configuration>
>>>>>>>>>>
>>>>>>>>>>  <property>
>>>>>>>>>>      <name>hbase.regionserver.wal.codec</name>
>>>>>>>>>>
>>>>>>>>>> <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>>>>>>>>>>  </property>
>>>>>>>>>>  <property>
>>>>>>>>>>      <name>hbase.region.server.rpc.scheduler.factory.class</name>
>>>>>>>>>>
>>>>>>>>>> <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
>>>>>>>>>>      <description>Factory to create the Phoenix RPC Scheduler that
>>>>>>>>>> knows to put index updates into index queues</description>
>>>>>>>>>>  </property>
>>>>>>>>>>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.rootdir</name>
>>>>>>>>>>     <value>hdfs://alpmas.alp.com:8020/hbase</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.client.write.buffer</name>
>>>>>>>>>>     <value>2097152</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.client.pause</name>
>>>>>>>>>>     <value>1000</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.client.retries.number</name>
>>>>>>>>>>     <value>10</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.client.scanner.caching</name>
>>>>>>>>>>     <value>1000</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.client.keyvalue.maxsize</name>
>>>>>>>>>>     <value>20971520</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.rpc.timeout</name>
>>>>>>>>>>     <value>1200000</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.security.authentication</name>
>>>>>>>>>>     <value>simple</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>zookeeper.session.timeout</name>
>>>>>>>>>>     <value>240000</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>zookeeper.retries</name>
>>>>>>>>>>     <value>5</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>zookeeper.pause</name>
>>>>>>>>>>     <value>5000</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>zookeeper.znode.parent</name>
>>>>>>>>>>     <value>/hbase</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>zookeeper.znode.rootserver</name>
>>>>>>>>>>     <value>root-region-server</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.zookeeper.quorum</name>
>>>>>>>>>>     <value>zk3.alp.com,zk2.alp.com,zk1.alp.com</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hbase.zookeeper.property.clientPort</name>
>>>>>>>>>>     <value>2181</value>
>>>>>>>>>>   </property>
>>>>>>>>>> </configuration>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Aug 8, 2014 at 2:46 PM, Saravanan A
>>>>>>>>>> <asarava...@alphaworkz.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I already included this property in hbase-site.xml on all region
>>>>>>>>>>> servers, but I am still getting that error... If I define my view with
>>>>>>>>>>> IMMUTABLE_ROWS = true, then I am able to create the index, but I want
>>>>>>>>>>> to create an index on mutable rows.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Aug 8, 2014 at 2:10 PM, Abhilash L L
>>>>>>>>>>> <abhil...@capillarytech.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Really sorry, I shared the wrong config. This is the correct one:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> <property>
>>>>>>>>>>>>   <name>hbase.regionserver.wal.codec</name>
>>>>>>>>>>>>
>>>>>>>>>>>> <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>>>>>>>>>>>> </property>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Abhilash L L
>>>>>>>>>>>> Capillary Technologies
>>>>>>>>>>>> M:919886208262
>>>>>>>>>>>> abhil...@capillarytech.com | www.capillarytech.com
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Aug 8, 2014 at 1:07 PM, Saravanan A
>>>>>>>>>>>> <asarava...@alphaworkz.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Abhilash,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for the reply... I included the above property and restarted
>>>>>>>>>>>>> the region servers, but I am still getting the same error...
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Aug 8, 2014 at 12:39 PM, Abhilash L L
>>>>>>>>>>>>> <abhil...@capillarytech.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Saravanan,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     Please check the Setup section here
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> http://phoenix.apache.org/secondary_indexing.html
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>    You will need to add this config to hbase-site.xml on all region
>>>>>>>>>>>>>> servers, as the error message says (you will need to restart the
>>>>>>>>>>>>>> servers after the change).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>   <name>hbase.region.server.rpc.scheduler.factory.class</name>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
>>>>>>>>>>>>>>   <description>Factory to create the Phoenix RPC Scheduler
>>>>>>>>>>>>>> that knows to put index updates into index queues</description>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Abhilash L L
>>>>>>>>>>>>>> Capillary Technologies
>>>>>>>>>>>>>> M:919886208262
>>>>>>>>>>>>>> abhil...@capillarytech.com | www.capillarytech.com
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Aug 8, 2014 at 12:22 PM, Saravanan A
>>>>>>>>>>>>>> <asarava...@alphaworkz.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>     I have a table in HBase and created a view on it in Phoenix. When
>>>>>>>>>>>>>>> I try to create an index on a column of the view, I get the
>>>>>>>>>>>>>>> following error:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Error: ERROR 1029 (42Y88): Mutable secondary indexes must
>>>>>>>>>>>>>>> have the hbase.regionserver.wal.codec property set to
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in 
>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>> hbase-sites.xml of every region server tableName=tab2_col4
>>>>>>>>>>>>>>> (state=42Y88,code=1029)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But I have added the hbase.regionserver.wal.codec property on all my
>>>>>>>>>>>>>>> region servers... and I am able to create an IMMUTABLE index on that
>>>>>>>>>>>>>>> view...
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am using HBase 0.94.15-cdh4.7.0 and Phoenix 3.0.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Am I missing something?
>>>>>>>>>>>>>>> Thanks in advance...
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>> Saravanan
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
>
