*This is the command I run, with the needed jars on the HBase classpath (test1.jar is my jar file):*

hbase -cp .:hadoop-common-2.0.0-cdh4.7.0.jar:commons-logging-1.1.1.jar:hbase-0.94.15-cdh4.7.0-security.jar:com.google.collections.jar:commons-collections-3.2.1.jar:phoenix-core-3.0.0-incubating.jar:com.google.guava_1.6.0.jar:test1.jar FixConfigFile

*The Output:*
Found
Not Found
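
Since it still says "Not Found", I could also double-check which jars the JVM
actually ends up with when launched through hbase -cp. Below is a small sketch
I have not run yet (PrintClasspath is just a name I made up):

import java.io.File;

public class PrintClasspath {
    public static void main(String[] args) {
        // Print every entry of the JVM classpath, one per line, so I can
        // confirm phoenix-core-3.0.0-incubating.jar and test1.jar are there.
        String classpath = System.getProperty("java.class.path");
        for (String entry : classpath.split(File.pathSeparator)) {
            System.out.println(entry);
        }
    }
}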

*This is my full code:*

import org.apache.hadoop.conf.Configuration;

public class FixConfigFile {

    public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
            "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
    public static final String WAL_EDIT_CODEC_CLASS_KEY =
            "org.apache.hadoop.hbase.regionserver.wal.codec";

    public static void main(String[] args) {
        Configuration config = new Configuration();
        isWALEditCodecSet(config);
    }

    public static boolean isWALEditCodecSet(Configuration conf) {
        // check to see if the WALEditCodec is installed
        try {
            // Use reflection to load the IndexedWALEditCodec, since it may
            // not load with an older version of HBase
            Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
            System.out.println("Found");
        } catch (Throwable t) {
            System.out.println("Error");
            return false;
        }
        if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
            // its installed, and it can handle compression and non-compression cases
            System.out.println("True");
            return true;
        }
        System.out.println("Not Found");
        return false;
    }
}
************

I am not sure this is how you wanted me to execute the code. If I am wrong,
please guide me.
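
One more thing I am wondering about (please correct me if this is wrong): I
created the config with new Configuration(), which as far as I know does not
read hbase-site.xml at all, and in this version of the test I set
WAL_EDIT_CODEC_CLASS_KEY to "org.apache.hadoop.hbase.regionserver.wal.codec"
instead of "hbase.regionserver.wal.codec" as in my earlier mail and in
hbase-site.xml. Here is a rough sketch of the same check that I have not run
yet (FixConfigFile2 is just a made-up name), using HBaseConfiguration.create()
so that hbase-site.xml on the classpath is actually loaded:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FixConfigFile2 {

    public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
            "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
    // The key as it is spelled in hbase-site.xml (and in my earlier mail)
    public static final String WAL_EDIT_CODEC_CLASS_KEY =
            "hbase.regionserver.wal.codec";

    public static void main(String[] args) {
        // Unlike new Configuration(), this also loads hbase-default.xml and
        // hbase-site.xml from the classpath
        Configuration conf = HBaseConfiguration.create();
        System.out.println("codec class loadable: " + isCodecClassLoadable());
        System.out.println(WAL_EDIT_CODEC_CLASS_KEY + " = "
                + conf.get(WAL_EDIT_CODEC_CLASS_KEY));
    }

    private static boolean isCodecClassLoadable() {
        try {
            Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
            return true;
        } catch (Throwable t) {
            return false;
        }
    }
}

If I understand it correctly, running that on one of the region server machines
should show whether the IndexedWALEditCodec value from hbase-site.xml is really
being picked up.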



On Sat, Aug 9, 2014 at 8:32 PM, Jesse Yates <jesse.k.ya...@gmail.com> wrote:

> When you run
>    $ bin/hbase classpath
> What do you get? That should help illuminate whether everything is set up right.
>
> If the phoenix jar is there, then check the contents of the jar (
> http://docs.oracle.com/javase/tutorial/deployment/jar/view.html) and make
> sure the classes are present.
>  On Aug 9, 2014 1:03 AM, "Saravanan A" <asarava...@alphaworkz.com> wrote:
>
>> Hi Jesse,
>>
>> I ran the following code to test the existence of the classes you asked
>> me to check. I initialized the two constants to the following values.
>>
>> =======
>> public static final String INDEX_WAL_EDIT_CODEC_CLASS_NAME =
>> "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
>>
>> public static final String WAL_EDIT_CODEC_CLASS_KEY =
>> "hbase.regionserver.wal.codec";
>> ======
>>
>> Then I ran the following code and got the error "Not found" in the
>> equality test.
>>
>> ====
>>         if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>             // its installed, and it can handle compression and non-compression cases
>>             System.out.println("True");
>>             return true;
>>         }
>>         System.out.println("Not Found");
>> ====
>>
>> I am not sure if I initialized the values for the constants correctly. If
>> I did, then I think some jars are missing or I have an incorrect version.
>> We use CDH 4.7, which has HBase 0.94.15 and Phoenix 3.0.
>>
>> Can you tell me how to make this work? Your assistance is greatly
>> appreciated.
>>
>> Regards,
>> Saravanan.A
>>
>> Full code
>> ==========
>> public static void main(String[] args) {
>>         Configuration config=new Configuration();
>>         isWALEditCodecSet(config);
>>
>>     }
>>     public static boolean isWALEditCodecSet(Configuration conf) {
>>         // check to see if the WALEditCodec is installed
>>         try {
>>             // Use reflection to load the IndexedWALEditCodec, since it may
>>             // not load with an older version of HBase
>>             Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
>>             System.out.println("Found");
>>         } catch (Throwable t) {
>>             System.out.println("Error");
>>             return false;
>>         }
>>         if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>             // its installed, and it can handle compression and non-compression cases
>>             System.out.println("True");
>>             return true;
>>         }
>>         System.out.println("Not Found");
>>         return false;
>>     }
>>
>>
>>
>> On Sat, Aug 9, 2014 at 12:02 AM, Jesse Yates <jesse.k.ya...@gmail.com>
>> wrote:
>>
>>> This error is thrown when, on the server side, the following code returns
>>> false (IndexManagementUtil#isWALEditCodecSet):
>>>
>>>     public static boolean isWALEditCodecSet(Configuration conf) {
>>>>         // check to see if the WALEditCodec is installed
>>>>         try {
>>>>             // Use reflection to load the IndexedWALEditCodec, since it may
>>>>             // not load with an older version of HBase
>>>>             Class.forName(INDEX_WAL_EDIT_CODEC_CLASS_NAME);
>>>>         } catch (Throwable t) {
>>>>             return false;
>>>>         }
>>>>         if (INDEX_WAL_EDIT_CODEC_CLASS_NAME.equals(conf.get(WAL_EDIT_CODEC_CLASS_KEY, null))) {
>>>>             // its installed, and it can handle compression and non-compression cases
>>>>             return true;
>>>>         }
>>>>         return false;
>>>>     }
>>>>
>>>
>>> You could just put this into a main method in a Java class, put that on
>>> the classpath of your HBase install on one of the machines in your cluster,
>>> and run it from the HBase command line to make sure that it passes.
>>> Otherwise, you might not have the right configs (copy-paste error?) or
>>> might be missing the right jars.
>>>
>>>
>>> Also, FWIW, this property:
>>>
>>>> <property>
>>>>     <name>hbase.region.server.rpc.scheduler.factory.class</name>
>>>>     <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
>>>>     <description>Factory to create the Phoenix RPC Scheduler that
>>>>     knows to put index updates into index queues</description>
>>>> </property>
>>>>
>>>
>>>  is only valid in HBase 0.98.4+ (as pointed out in the section "Advanced
>>> Setup - Removing Index Deadlocks (0.98.4+)"). However, it should still be
>>> fine to have in older versions.
>>>
>>>
>>>
>>>
>>> -------------------
>>> Jesse Yates
>>> @jesse_yates
>>> jyates.github.com
>>>
>>>
>>> On Fri, Aug 8, 2014 at 2:18 AM, Saravanan A <asarava...@alphaworkz.com>
>>> wrote:
>>>
>>>> This is my Hbase-site.xml file...
>>>>
>>>>
>>>> <?xml version="1.0" encoding="UTF-8"?>
>>>> <!--Autogenerated by Cloudera CM on 2014-06-16T11:10:16.319Z-->
>>>> <configuration>
>>>>
>>>>  <property>
>>>>      <name>hbase.regionserver.wal.codec</name>
>>>>      <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>>>>  </property>
>>>>  <property>
>>>>      <name>hbase.region.server.rpc.scheduler.factory.class</name>
>>>>      <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
>>>>      <description>Factory to create the Phoenix RPC Scheduler that
>>>>      knows to put index updates into index queues</description>
>>>>  </property>
>>>>
>>>>   <property>
>>>>     <name>hbase.rootdir</name>
>>>>     <value>hdfs://alpmas.alp.com:8020/hbase</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.client.write.buffer</name>
>>>>     <value>2097152</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.client.pause</name>
>>>>     <value>1000</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.client.retries.number</name>
>>>>     <value>10</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.client.scanner.caching</name>
>>>>     <value>1000</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.client.keyvalue.maxsize</name>
>>>>     <value>20971520</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.rpc.timeout</name>
>>>>     <value>1200000</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.security.authentication</name>
>>>>     <value>simple</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>zookeeper.session.timeout</name>
>>>>     <value>240000</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>zookeeper.retries</name>
>>>>     <value>5</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>zookeeper.pause</name>
>>>>     <value>5000</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>zookeeper.znode.parent</name>
>>>>     <value>/hbase</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>zookeeper.znode.rootserver</name>
>>>>     <value>root-region-server</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.zookeeper.quorum</name>
>>>>     <value>zk3.alp.com,zk2.alp.com,zk1.alp.com</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hbase.zookeeper.property.clientPort</name>
>>>>     <value>2181</value>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> On Fri, Aug 8, 2014 at 2:46 PM, Saravanan A <asarava...@alphaworkz.com>
>>>> wrote:
>>>>
>>>>> I already included this property in hbase-site.xml on all region
>>>>> servers, but I am still getting that error. If I define my view with
>>>>> IMMUTABLE_ROWS = true, then I am able to create the index, but I want to
>>>>> create an index on a mutable view.
>>>>>
>>>>>
>>>>> On Fri, Aug 8, 2014 at 2:10 PM, Abhilash L L <
>>>>> abhil...@capillarytech.com> wrote:
>>>>>
>>>>>> Really sorry, shared the wrong config
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> <property>
>>>>>>   <name>hbase.regionserver.wal.codec</name>
>>>>>>   <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
>>>>>> </property>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Abhilash L L
>>>>>> Capillary Technologies
>>>>>> M:919886208262
>>>>>> abhil...@capillarytech.com | www.capillarytech.com
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Aug 8, 2014 at 1:07 PM, Saravanan A <
>>>>>> asarava...@alphaworkz.com> wrote:
>>>>>>
>>>>>>> Hi Abhilash,
>>>>>>>
>>>>>>> Thanks for the reply. I included the above property and restarted the
>>>>>>> region servers, but I am still getting the same error.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Aug 8, 2014 at 12:39 PM, Abhilash L L <
>>>>>>> abhil...@capillarytech.com> wrote:
>>>>>>>
>>>>>>>> Hi Saravanan,
>>>>>>>>
>>>>>>>>     Please check the Setup section here
>>>>>>>>
>>>>>>>> http://phoenix.apache.org/secondary_indexing.html
>>>>>>>>
>>>>>>>>    You will need to add this config to all Region Servers in
>>>>>>>> hbase-site.xml, as the error says as well (you will need to restart the
>>>>>>>> servers after the change).
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>>   <name>hbase.region.server.rpc.scheduler.factory.class</name>
>>>>>>>>   <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
>>>>>>>>   <description>Factory to create the Phoenix RPC Scheduler that knows
>>>>>>>>   to put index updates into index queues</description>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Abhilash L L
>>>>>>>> Capillary Technologies
>>>>>>>> M:919886208262
>>>>>>>> abhil...@capillarytech.com | www.capillarytech.com
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Aug 8, 2014 at 12:22 PM, Saravanan A <
>>>>>>>> asarava...@alphaworkz.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>     I have a table in HBase and created a view on it in Phoenix. When I
>>>>>>>>> try to create an index on a column of the view, I get the following error:
>>>>>>>>>
>>>>>>>>> Error: ERROR 1029 (42Y88): Mutable secondary indexes must have the
>>>>>>>>> hbase.regionserver.wal.codec property set to
>>>>>>>>> org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec in the
>>>>>>>>> hbase-sites.xml of every region server tableName=tab2_col4
>>>>>>>>> (state=42Y88,code=1029)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> But I have added the hbase.regionserver.wal.codec property on all of
>>>>>>>>> my region servers, and I am able to create an IMMUTABLE index on that table.
>>>>>>>>>
>>>>>>>>> I am using HBase 0.94.15-cdh4.7.0 and Phoenix 3.0.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Am I missing something?
>>>>>>>>> Thanks in advance.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Saravanan
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
