Re: how to get hadoop HDFS path?

2013-07-12 Thread deepak rosario tharigopla
Configuration conf = new Configuration();
conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/core-site.xml"));
conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/hdfs-site.xml"));
FileSystem fs = FileSystem.get(conf);
Path path = new Path("/alex/test.jar"); // use a relative path here
System.out.println(": " + path.toString() + "|" + TestMyCo.class.getCanonicalName() + "|" + Coprocessor.PRIORITY_USER);

htd.setValue("COPROCESSOR$1", path.toString() + "|" + TestMyCo.class.getCanonicalName() + "|" + Coprocessor.PRIORITY_USER);
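The underlying issue in the original code is the URI scheme: "hdfs:192.168.10.22:9000/alex/test.jar" lacks the "//" before the host, so the host:port is never parsed as an authority. A minimal, Hadoop-free sketch with java.net.URI illustrating the difference (the address is taken from this thread; no Hadoop classes are assumed):

```java
import java.net.URI;

public class HdfsUriCheck {
    public static void main(String[] args) {
        // Malformed: without "//" the URI is opaque and has no authority (host:port).
        URI bad = URI.create("hdfs:192.168.10.22:9000/alex/test.jar");
        // Well-formed: "//" introduces the authority; the path starts at "/alex".
        URI good = URI.create("hdfs://192.168.10.22:9000/alex/test.jar");

        System.out.println("bad authority:  " + bad.getAuthority());  // null
        System.out.println("good authority: " + good.getAuthority()); // 192.168.10.22:9000
        System.out.println("good path:      " + good.getPath());      // /alex/test.jar
    }
}
```

The same parsing rules apply when Hadoop's Path turns the string into a URI, which is why the well-formed "hdfs://host:port/..." form (or a relative path resolved against fs.default.name) works.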


On Fri, Jul 12, 2013 at 2:00 PM, deepak rosario tharigopla <
rozartharigo...@gmail.com> wrote:

> You can get the HDFS file system as follows:
> Configuration conf = new Configuration();
> conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/core-site.xml"));
> conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/hdfs-site.xml"));
> FileSystem fs = FileSystem.get(conf);
>
>
> On Fri, Jul 12, 2013 at 4:40 AM, ch huang  wrote:
>
>> I want to set an HDFS path and add the path into HBase. Here is my code:
>>
>>  Path path = new Path("hdfs:192.168.10.22:9000/alex/test.jar");
>>  System.out.println(": " + path.toString() + "|" + TestMyCo.class.getCanonicalName() + "|" + Coprocessor.PRIORITY_USER);
>>
>>  htd.setValue("COPROCESSOR$1", path.toString() + "|" + TestMyCo.class.getCanonicalName() + "|" + Coprocessor.PRIORITY_USER);
>>
>> and the actual value I find is:
>>
>> hbase(main):012:0> describe 'mytest'
>> DESCRIPTION                                                                  ENABLED
>>  {NAME => 'mytest', COPROCESSOR$1 => 'hdfs:/192.168.10.22:9000/alex/test.jar|TestMyCo|1073741823', FAMILIES => [{NAME => 'myfl', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]} true
>> 1 row(s) in 0.0930 seconds
>>
>
>
>



-- 
Thanks & Regards
Deepak Rosario Pancras
*Achiever/Responsibility/Arranger/Maximizer/Harmony*
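As a side note, the COPROCESSOR$1 attribute shown in the describe output above is just the jar path, the coprocessor class name, and the priority joined with '|' (1073741823 is Coprocessor.PRIORITY_USER, i.e. Integer.MAX_VALUE / 2). A minimal sketch of assembling that string without any HBase dependency (the class and method names here are illustrative only):

```java
public class CoprocessorSpec {
    /** Joins jar path, class name, and priority with '|', mirroring the htd.setValue call above. */
    static String build(String jarPath, String className, int priority) {
        return jarPath + "|" + className + "|" + priority;
    }

    public static void main(String[] args) {
        // Integer.MAX_VALUE / 2 == 1073741823, the value of Coprocessor.PRIORITY_USER
        System.out.println(build("hdfs://192.168.10.22:9000/alex/test.jar",
                                 "TestMyCo", Integer.MAX_VALUE / 2));
    }
}
```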




Re: stop-dfs.sh does not work

2013-07-09 Thread deepak rosario tharigopla
Also,
you can browse to the JDK bin directory,
/usr/lib/jvm/jdk1.6.0_43/bin/
and check for jps there (the Sun JDK 1.6 ships with jps, but OpenJDK does not, and the Sun JDK 6 is preferable for Hadoop). Simply run jps and it will list all the Java processes running on the machine.

jps is a good, handy command. You can add the following to the .bashrc file in /home//
unalias jps &> /dev/null
alias jps="/usr/lib/jvm/jdk1.6.0_43/bin/jps"

then log out and log back in, so that you can run jps from anywhere without having to change into the JDK bin directory.





On Wed, Jul 10, 2013 at 12:30 AM, YouPeng Yang wrote:

> Hi users,
>
> I start my HDFS using start-dfs.sh, and the nodes start successfully.
> However, stop-dfs.sh does not work when I want to stop HDFS.
> It shows: no namenode to stop
>           no datanode to stop.
>
> I have to stop it with the command: kill -9 pid.
>
> So I am wondering why stop-dfs.sh no longer works.
>
>
> Best regards
>



-- 
Thanks & Regards
Deepak Rosario Pancras
*Achiever/Responsibility/Arranger/Maximizer/Harmony*