[ https://issues.apache.org/jira/browse/PHOENIX-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

alex kamil updated PHOENIX-1041:
--------------------------------

    Description: 
Please add instructions for building Phoenix with the Hadoop/HBase versions shipped in CDH. CDH deployments are very common, and version mismatches are a nightmare to debug, e.g. when the Hadoop version Phoenix was built against doesn't match the one deployed on the cluster (see PHOENIX-56 - Phoenix/JDBC becomes unresponsive).

This example is for CDH4.6.0: hadoop-2.0.0-cdh4.6.0 and hbase-0.94.15-cdh4.6.0:

1. phoenix/pom.xml changes:
add the Cloudera repository (inside the existing <repositories> section):

    <repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>

2. update the hadoop-two.version property (used by the hadoop-2 build profile)

<!-- Hadoop Versions -->
    <hadoop-one.version>1.0.4</hadoop-one.version>
    <!--hadoop-two.version>2.0.4-alpha</hadoop-two.version-->
    <hadoop-two.version>2.0.0-cdh4.6.0</hadoop-two.version>

3. update dependency versions

<!-- Dependency versions -->
    <!--hbase.version>0.94.15</hbase.version-->
    <hbase.version>0.94.15-cdh4.6.0</hbase.version>
    <!--hadoop.version>2.0.0</hadoop.version-->   
    <hadoop.version>2.0.0-cdh4.6.0</hadoop.version>

4. build phoenix with hadoop2 profile

 mvn clean install -DskipTests -Dhadoop.profile=2
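
To sanity-check that the build resolved the CDH artifacts rather than the stock Apache ones, one option (not part of the original steps) is to grep the Maven dependency tree:

mvn dependency:tree -Dhadoop.profile=2 | grep cdh4.6.0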

5. stop hadoop and hbase cluster if running
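
On a CDH4 package install this would typically be done with the init scripts, for example as below (service names are assumptions based on standard CDH packaging; adjust to however your cluster is managed, e.g. Cloudera Manager):

# assumed CDH4 init-script names; skip services you don't run
sudo service hbase-regionserver stop
sudo service hbase-master stop
sudo service hadoop-hdfs-datanode stop
sudo service hadoop-hdfs-namenode stop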

6. copy phoenix/phoenix-core/target/phoenix-core-3.1.0-SNAPSHOT.jar to each 
hadoop/hbase node in the cluster (e.g. to the /opt/myapps/ directory)
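
A minimal sketch for pushing the jar out, assuming passwordless ssh and placeholder hostnames:

# node1..node3 are placeholders for your actual cluster hosts
for host in node1 node2 node3; do
  scp phoenix/phoenix-core/target/phoenix-core-3.1.0-SNAPSHOT.jar ${host}:/opt/myapps/
done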

7. add phoenix-core jar to hbase classpath
update /etc/hbase/conf/hbase-env.sh:

export HBASE_CLASSPATH_PREFIX=/opt/myapps/phoenix-core-3.1.0-SNAPSHOT.jar

(Note: with HBASE_CLASSPATH I was getting an SLF4J multiple-bindings error; loading the phoenix jar before the rest of the hbase dependencies with HBASE_CLASSPATH_PREFIX avoids it)
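
To verify the prefix took effect, the hbase launcher can print the classpath it will use; the phoenix jar should be the first entry (this assumes the hbase script is on the PATH):

hbase classpath | cut -d: -f1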

8. add phoenix-core jar to hadoop classpath
update /etc/hadoop/conf/hadoop-env.sh (create the file in the hadoop conf directory 
if it doesn't exist)

export HADOOP_CLASSPATH=/opt/myapps/phoenix-core-3.1.0-SNAPSHOT.jar
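
Note that this overwrites any HADOOP_CLASSPATH already set in hadoop-env.sh; if that matters for your install, a prepending variant (my suggestion, not from the original steps) is:

# keep any classpath entries the environment already defines
export HADOOP_CLASSPATH=/opt/myapps/phoenix-core-3.1.0-SNAPSHOT.jar:${HADOOP_CLASSPATH}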


9. start hadoop/hbase cluster

To get sqlline running:

10. enable DEBUG logging in the sqlline log4j.properties
(otherwise, on a hadoop version mismatch, sqlline just "hangs" and hides the underlying error, which only surfaces at DEBUG level: NoSuchMethodError: org.apache.hadoop.net.NetUtils.getInputStream)

replace WARN and ERROR with DEBUG in phoenix/bin/log4j.properties
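
A one-liner sketch of that edit (GNU sed; it assumes the levels appear as literal WARN/ERROR tokens in the file, so eyeball the result afterwards):

# on BSD/macOS use: sed -i '' ...
sed -i 's/WARN/DEBUG/g; s/ERROR/DEBUG/g' phoenix/bin/log4j.properties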

11. add the commons-collections, hadoop-common and hadoop-auth jars to the sqlline 
classpath
(otherwise you get java.lang.NoClassDefFoundError: 
org/apache/commons/collections/map/UnmodifiableMap)

update phoenix/bin/sqlline.py:

extrajars = "/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/hadoop-common-2.0.0-cdh4.6.0.jar:/usr/lib/hadoop/hadoop-auth-2.0.0-cdh4.6.0.jar"

java_cmd = 'java -classpath ".' + os.pathsep + extrajars + os.pathsep + \
    phoenix_utils.phoenix_client_jar + \

(the trailing backslash continues the original java_cmd line from the stock script; only the extrajars insertion is new)

12. add /etc/hbase/conf/hbase-site.xml from one of the hbase nodes to 
phoenix/bin/
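
For example (the hostname is a placeholder):

scp my-hbase-node:/etc/hbase/conf/hbase-site.xml phoenix/bin/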

13. test
phoenix/bin/sqlline.py <my_zookeeper_host>
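
If the connection comes up, sqlline's !tables meta-command makes a quick smoke test; it should list the Phoenix SYSTEM tables even on a fresh install:

0: jdbc:phoenix:my_zookeeper_host> !tables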



> Building phoenix with hadoop/hbase from CDH
> -------------------------------------------
>
>                 Key: PHOENIX-1041
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1041
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: alex kamil
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)
