Re: HBase Support

2013-05-11 Thread Raju muddana
Hey Shashwat,

I have already seen some of your posts related to HBase. They were
very helpful to me at the time, and thank you once again for this reply.

Shashwat, I did not find any "addresources("",value)" method. I found
"addResource()", and it takes only a single parameter. So I set the values
using the set(key,value) method. Is that OK?

Thanks & Regards
Raju.
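A note for readers of the archive: yes, this is OK. On Hadoop's Configuration (the class behind HBaseConfiguration.create()), addResource() loads a whole XML resource while set(key, value) overrides a single key, and a later set() wins over a value loaded from a resource (unless the resource marks it final). A toy sketch of that precedence follows, using java.util.Properties as a stand-in since the Hadoop classes are not assumed available here; the timeout values are invented for illustration.

```java
import java.util.Properties;

public class ConfigPrecedence {
    public static void main(String[] args) {
        // Stand-in for values loaded via conf.addResource("hbase-site.xml");
        // the numbers are made up for illustration.
        Properties fromResource = new Properties();
        fromResource.setProperty("zookeeper.session.timeout", "40000");

        // A config layered on those defaults; an explicit set() overrides
        // the loaded value, mirroring conf.set(key, value).
        Properties conf = new Properties(fromResource);
        System.out.println(conf.getProperty("zookeeper.session.timeout")); // 40000

        conf.setProperty("zookeeper.session.timeout", "180000");
        System.out.println(conf.getProperty("zookeeper.session.timeout")); // 180000
    }
}
```

The same layering applies however many resources are added: the last explicit set() for a key is what the client sees.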


On Sat, May 11, 2013 at 11:56 PM, shashwat shriparv <
dwivedishash...@gmail.com> wrote:

> In your code, add these parameters:
> zookeeper.session.timeout
> zookeeper tick time
> conf.addresources("",value)
>
>
>
> *Thanks & Regards*
>
> ∞
> Shashwat Shriparv
>
>
>
> On Sat, May 11, 2013 at 9:22 PM, Mohammad Tariq 
> wrote:
>
> > Hello Raju,
> >
> > Please add the scheme i.e. "file://" in your URI so that it looks
> > like "file:///home/imadas/24H-DB". And yes, there is no need to
> > specify hbase.master
> > in hbase-site.xml, as Shahab has said.
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Sat, May 11, 2013 at 9:10 PM, Shahab Yunus wrote:
> >
> > > @naga raju. You should not provide hbase.master property. The
> information
> > > is retrieved from the zookeeper.quorum property instead.
> > >
> > >
> > > On Sat, May 11, 2013 at 8:39 AM, naga raju 
> > wrote:
> > >
> > > > Thanks Tariq,
> > > >
> > > > my hbase-site.xml:
> > > >
> > > > <configuration>
> > > >   <property>
> > > >     <name>hbase.rootdir</name>
> > > >     <value>/home/imadas/24H-DB</value>
> > > >   </property>
> > > >
> > > >   <property>
> > > >     <name>hbase.rpc.timeout</name>
> > > >     <value>90</value>
> > > >   </property>
> > > >
> > > >   <property>
> > > >     <name>hbase.zookeeper.quorum</name>
> > > >     <value>127.0.0.1</value>
> > > >   </property>
> > > >
> > > >   <property>
> > > >     <name>hbase.master</name>
> > > >     <value>127.0.0.1:6</value>
> > > >     <description>The host and port that the HBase master runs at.</description>
> > > >   </property>
> > > >
> > > >   <property>
> > > >     <name>zookeeper.session.timeout</name>
> > > >     <value>18</value>
> > > >   </property>
> > > >
> > > >   <property>
> > > >     <name>hbase.zookeeper.property.dataDir</name>
> > > >     <value>/home/imadas/zookeeperfolder</value>
> > > >   </property>
> > > > </configuration>
> > > >
> > > >
> > > > Thanks & Regards
> > > > Raju.
> > > >
> > > >
> > > > On Fri, May 10, 2013 at 8:20 PM, Mohammad Tariq 
> > > > wrote:
> > > >
> > > > > You are welcome Raju,
> > > > >
> > > > >   There seems to be some connection related issue. Could you
> > > > > please show me your hbase-site.xml? Have you added
> > > > > "hbase.zookeeper.quorum" in it? Which version are you using?
> > > > >
> > > > > Warm Regards,
> > > > > Tariq
> > > > > cloudfront.blogspot.com
> > > > >
> > > > >
> > > > > On Fri, May 10, 2013 at 8:01 PM, naga raju 
> > > > wrote:
> > > > >
> > > > > > Thank you Tariq,
> > > > > >
> > > > > > Have you checked the HBase Master logs (which I have provided to
> > > > > > you)? Did you find any clue to identify what the problem is?
> > > > > >
> > > > > > Thanks & Regards
> > > > > > Raju.
> > > > > >
> > > > > >
> > > > > > On Fri, May 10, 2013 at 7:01 PM, Mohammad Tariq <
> > donta...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hello Raju,
> > > > > > >
> > > > > > >   ZK logs should be there at the same place as the HMaster
> > > > > > > logs.
> > > > > > >
> > > > > > > Warm Regards,
> > > > > > > Tariq
> > > > > > > cloudfront.blogspot.com
> > > > > > >
> > > > > > >
> > > > > > > On Fri, May 10, 2013 at 5:24 PM, naga raju <
> > rajumudd...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Thanks Tariq,
> > > > > > > >
> > > > > > > > I am Using hbase-0.92.1,
> > > > > > > >
> > > > > > > > This is some part of Hbase Master log file:
> > > > > > > >
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/imadas/softs/hbase-0.92.1/bin/../lib/native/Linux-i386-32
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-220.el6.i686
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:user.name=imadas
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/imadas
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/imadas/softs/hbase-0.92.1
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=18 watcher=hconnection
> > > > > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Opening socket connection to

Re: EC2 Elastic MapReduce HBase install recommendations

2013-05-11 Thread Asaf Mesika
We ran into that as well.
You need to make sure, when sending a List of Puts, that all rowkeys in it
are unique; otherwise, as Ted said, the for loop acquiring locks will run
multiple times for a rowkey that repeats itself.
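A self-contained sketch of that de-duplication step, using plain Java collections with String rowkeys standing in for real Put objects (an assumption made to keep the example runnable without the HBase client on the classpath): collapse the batch so each rowkey appears only once, keeping the last entry, before sending the list.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupeByRowkey {
    // Collapse a batch so each rowkey appears only once; the last entry
    // for a repeated rowkey wins, mirroring "last write" semantics.
    static List<String[]> dedupe(List<String[]> batch) {
        Map<String, String[]> byKey = new LinkedHashMap<>();
        for (String[] kv : batch) {
            byKey.put(kv[0], kv); // kv[0] is the rowkey, kv[1] the value
        }
        return new ArrayList<>(byKey.values());
    }

    public static void main(String[] args) {
        List<String[]> batch = new ArrayList<>();
        batch.add(new String[]{"row1", "a"});
        batch.add(new String[]{"row2", "b"});
        batch.add(new String[]{"row1", "c"}); // repeated rowkey
        List<String[]> unique = dedupe(batch);
        System.out.println(unique.size());    // 2
        System.out.println(unique.get(0)[1]); // c
    }
}
```

With duplicates merged up front, the server-side lock-acquisition loop touches each row lock only once per batch.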

On Sunday, May 12, 2013, Ted Yu wrote:

> High collision rate means high contention when taking the row locks.
> This results in poor write performance.
>
> Cheers
>
> On May 11, 2013, at 7:14 PM, Pal Konyves  wrote:
>
> > Hi,
> >
> > I decided not to do any tuning, because my whole project is about
> > experimenting with HBase (it's a school project). However, it turned out
> > that my sample data generated lots of rowkey collisions: 4 million
> > inserts resulted in only about 5000 rows. The data differed in the
> > columns, though. When I changed my sample dataset to have no collisions
> > in the rowkey, the performance increased by an order of magnitude. Why
> > is that?
> >
> > Thanks,
> > Pal
> >
> >
> > On Thu, May 9, 2013 at 2:32 PM, Michel Segel  >wrote:
> >
> >> What I am saying is that by default, you get two mappers per node.
> >> x4large can run HBase w more mapred slots, so you will want to tune the
> >> defaults based on machine size. Not just mapred, but also HBase stuff
> too.
> >> You need to do this on startup of EMR cluster though...
> >>
> >> Sent from a remote device. Please excuse any typos...
> >>
> >> Mike Segel
> >>
> >> On May 9, 2013, at 2:39 AM, Pal Konyves  wrote:
> >>
> >>> Principally I chose to use Amazon, because they are supposedly high
> >>> performance, and what is more important: HBase is already set up if I
> >> chose
> >>> it as an EMR Workflow. I wanted to save up the time setting up the
> >> cluster
> >>> manually on EC2 instances.
> >>>
> >>> Are you saying I will reach higher performance when I set up the HBase
> on
> >>> the cluster manually, instead of the default Amazon HBase distribution?
> >> Or
> >>> is it worth to tune the Amazon distribution with a bootstrap action?
> How
> >>> long does it take, to set up the cluster with HDFS manually?
> >>>
> >>> I will also try larger instance types.
> >>>
> >>>
> >>> On Thu, May 9, 2013 at 6:47 AM, Michel Segel <
> michael_se...@hotmail.com
> >>> wrote:
> >>>
>  With respect to EMR, you can run HBase fairly easily.
>  You can't run MapR w HBase on EMR; stick w Amazon's release.
> 
>  And you can run it but you will want to know your tuning parameters up
>  front when you instantiate it.
> 
> 
> 
>  Sent from a remote device. Please excuse any typos...
> 
>  Mike Segel
> 
>  On May 8, 2013, at 9:04 PM, Andrew Purtell 
> wrote:
> 
> > M7 is not Apache HBase, or any HBase. It is a proprietary NoSQL
> >> datastore
> > with (I gather) an Apache HBase compatible Java API.
> >
> > As for running HBase on EC2, we recently discussed some particulars,
> >> see
> > the latter part of this thread:
> >> http://search-hadoop.com/m/rI1HpK90gu where
> > I hijack it. I wouldn't recommend launching HBase as part of an EMR
> >> flow
> > unless you want to use it only for temporary random access storage,
> and
>  in
> > which case use m2.2xlarge/m2.4xlarge instance types. Otherwise, set
> up
> >> a
> > dedicated HBase backed storage service on high I/O instance types.
> The
> > fundamental issue is IO performance on the EC2 platform is fair to
> >> poor.
> >
> > I have also noticed a large difference in baseline block device
> latency
>  if
> > using an old Amazon Linux AMI (< 2013) or the latest AMIs from this
> >> year.
> > Use the new ones, they cut the latency long tail in half. There were
> >> some
> > significant kernel level improvements I gather.
> >
> >
> > On Wed, May 8, 2013 a


Re: EC2 Elastic MapReduce HBase install recommendations

2013-05-11 Thread Ted Yu
High collision rate means high contention when taking the row locks.
This results in poor write performance.

Cheers

On May 11, 2013, at 7:14 PM, Pal Konyves  wrote:

> Hi,
> 
> I decided not to do any tuning, because my whole project is about
> experimenting with HBase (it's a school project). However, it turned out
> that my sample data generated lots of rowkey collisions: 4 million inserts
> resulted in only about 5000 rows. The data differed in the columns, though.
> When I changed my sample dataset to have no collisions in the rowkey, the
> performance increased by an order of magnitude. Why is that?
> 
> Thanks,
> Pal
> 
> 
> On Thu, May 9, 2013 at 2:32 PM, Michel Segel wrote:
> 
>> What I am saying is that by default, you get two mappers per node.
>> x4large can run HBase w more mapred slots, so you will want to tune the
>> defaults based on machine size. Not just mapred, but also HBase stuff too.
>> You need to do this on startup of EMR cluster though...
>> 
>> Sent from a remote device. Please excuse any typos...
>> 
>> Mike Segel
>> 
>> On May 9, 2013, at 2:39 AM, Pal Konyves  wrote:
>> 
>>> Principally I chose to use Amazon, because they are supposedly high
>>> performance, and what is more important: HBase is already set up if I
>> chose
>>> it as an EMR Workflow. I wanted to save up the time setting up the
>> cluster
>>> manually on EC2 instances.
>>> 
>>> Are you saying I will reach higher performance when I set up the HBase on
>>> the cluster manually, instead of the default Amazon HBase distribution?
>> Or
>>> is it worth to tune the Amazon distribution with a bootstrap action? How
>>> long does it take, to set up the cluster with HDFS manually?
>>> 
>>> I will also try larger instance types.
>>> 
>>> 
>>> On Thu, May 9, 2013 at 6:47 AM, Michel Segel wrote:
>>> 
 With respect to EMR, you can run HBase fairly easily.
 You can't run MapR w HBase on EMR; stick w Amazon's release.
 
 And you can run it but you will want to know your tuning parameters up
 front when you instantiate it.
 
 
 
 Sent from a remote device. Please excuse any typos...
 
 Mike Segel
 
 On May 8, 2013, at 9:04 PM, Andrew Purtell  wrote:
 
> M7 is not Apache HBase, or any HBase. It is a proprietary NoSQL
>> datastore
> with (I gather) an Apache HBase compatible Java API.
> 
> As for running HBase on EC2, we recently discussed some particulars,
>> see
> the latter part of this thread:
>> http://search-hadoop.com/m/rI1HpK90gu where
> I hijack it. I wouldn't recommend launching HBase as part of an EMR
>> flow
> unless you want to use it only for temporary random access storage, and
 in
> which case use m2.2xlarge/m2.4xlarge instance types. Otherwise, set up
>> a
> dedicated HBase backed storage service on high I/O instance types. The
> fundamental issue is IO performance on the EC2 platform is fair to
>> poor.
> 
> I have also noticed a large difference in baseline block device latency
 if
> using an old Amazon Linux AMI (< 2013) or the latest AMIs from this
>> year.
> Use the new ones, they cut the latency long tail in half. There were
>> some
> significant kernel level improvements I gather.
> 
> 
> On Wed, May 8, 2013 at 10:42 AM, Marcos Luis Ortiz Valmaseda <
> marcosluis2...@gmail.com> wrote:
> 
> >> I think that when you are talking about RMap, you are referring to
>> MapR´s distribution.
>> I think that MapR´s team released a very good version of its Hadoop
>> distribution focused on HBase called M7. You can see its overview
>> here:
>> http://www.mapr.com/products/mapr-editions/m7-edition
>> 
>> But this release was under beta testing, and I see that it´s not
 included
>> in the Amazon Marketplace yet:
>> https://aws.amazon.com/marketplace/seller-profile?id=802b0a25-877e-4b57-9007-a3fd284815a5
>> 
>> 
>> 
>> 
>> 2013/5/7 Pal Konyves 
>> 
>>> Hi,
>>> 
>>> Has anyone got some recommendations about running HBase on EC2? I am
>>> testing it, and so far I am very disappointed with it. I did not
>> change
>>> anything about the default 'Amazon distribution' installation. It has
 one
>>> MasterNode and two slave nodes, and write performance is around 2500
>> small
>>> rows per sec at most, but I expected it to be way  better. Oh, and
>> this
>> is
>>> with batch put operations with autocommit turned off, where each
>> batch
> >>> contains about 500-1000 rows... When I do it with autocommit, it
>> does
>> not
>>> even reach the 1000 rows per sec.
>>> 
> >>> All nodes were m1.large ones.
>>> 
>>> Any experiences, suggestions? Is it worth to try the RMap
>> distribution
>>> instead of the amazon one?
>>> 
>>> Thanks,
>>> Pal
>> 
>> 
>> 
>> --
>> Marcos Ortiz Valmaseda
>> Product Manager at PDVSA
>> http://

Re: EC2 Elastic MapReduce HBase install recommendations

2013-05-11 Thread Pal Konyves
Hi,

I decided not to do any tuning, because my whole project is about
experimenting with HBase (it's a school project). However, it turned out
that my sample data generated lots of rowkey collisions: 4 million inserts
resulted in only about 5000 rows. The data differed in the columns, though.
When I changed my sample dataset to have no collisions in the rowkey, the
performance increased by an order of magnitude. Why is that?

Thanks,
Pal


On Thu, May 9, 2013 at 2:32 PM, Michel Segel wrote:

> What I am saying is that by default, you get two mappers per node.
> x4large can run HBase w more mapred slots, so you will want to tune the
> defaults based on machine size. Not just mapred, but also HBase stuff too.
> You need to do this on startup of EMR cluster though...
>
> Sent from a remote device. Please excuse any typos...
>
> Mike Segel
>
> On May 9, 2013, at 2:39 AM, Pal Konyves  wrote:
>
> > Principally I chose to use Amazon, because they are supposedly high
> > performance, and what is more important: HBase is already set up if I
> chose
> > it as an EMR Workflow. I wanted to save up the time setting up the
> cluster
> > manually on EC2 instances.
> >
> > Are you saying I will reach higher performance when I set up the HBase on
> > the cluster manually, instead of the default Amazon HBase distribution?
> Or
> > is it worth to tune the Amazon distribution with a bootstrap action? How
> > long does it take, to set up the cluster with HDFS manually?
> >
> > I will also try larger instance types.
> >
> >
> > On Thu, May 9, 2013 at 6:47 AM, Michel Segel wrote:
> >
> >> With respect to EMR, you can run HBase fairly easily.
> >> You can't run MapR w HBase on EMR; stick w Amazon's release.
> >>
> >> And you can run it but you will want to know your tuning parameters up
> >> front when you instantiate it.
> >>
> >>
> >>
> >> Sent from a remote device. Please excuse any typos...
> >>
> >> Mike Segel
> >>
> >> On May 8, 2013, at 9:04 PM, Andrew Purtell  wrote:
> >>
> >>> M7 is not Apache HBase, or any HBase. It is a proprietary NoSQL
> datastore
> >>> with (I gather) an Apache HBase compatible Java API.
> >>>
> >>> As for running HBase on EC2, we recently discussed some particulars,
> see
> >>> the latter part of this thread:
> http://search-hadoop.com/m/rI1HpK90gu where
> >>> I hijack it. I wouldn't recommend launching HBase as part of an EMR
> flow
> >>> unless you want to use it only for temporary random access storage, and
> >> in
> >>> which case use m2.2xlarge/m2.4xlarge instance types. Otherwise, set up
> a
> >>> dedicated HBase backed storage service on high I/O instance types. The
> >>> fundamental issue is IO performance on the EC2 platform is fair to
> poor.
> >>>
> >>> I have also noticed a large difference in baseline block device latency
> >> if
> >>> using an old Amazon Linux AMI (< 2013) or the latest AMIs from this
> year.
> >>> Use the new ones, they cut the latency long tail in half. There were
> some
> >>> significant kernel level improvements I gather.
> >>>
> >>>
> >>> On Wed, May 8, 2013 at 10:42 AM, Marcos Luis Ortiz Valmaseda <
> >>> marcosluis2...@gmail.com> wrote:
> >>>
>  I think that when you are talking about RMap, you are referring to
>  MapR´s distribution.
>  I think that MapR´s team released a very good version of its Hadoop
>  distribution focused on HBase called M7. You can see its overview
> here:
>  http://www.mapr.com/products/mapr-editions/m7-edition
> 
>  But this release was under beta testing, and I see that it´s not
> >> included
>  in the Amazon Marketplace yet:
> >>
> https://aws.amazon.com/marketplace/seller-profile?id=802b0a25-877e-4b57-9007-a3fd284815a5
> 
> 
> 
> 
>  2013/5/7 Pal Konyves 
> 
> > Hi,
> >
> > Has anyone got some recommendations about running HBase on EC2? I am
> > testing it, and so far I am very disappointed with it. I did not
> change
> > anything about the default 'Amazon distribution' installation. It has
> >> one
> > MasterNode and two slave nodes, and write performance is around 2500
>  small
> > rows per sec at most, but I expected it to be way  better. Oh, and
> this
>  is
> > with batch put operations with autocommit turned off, where each
> batch
> > contains about 500-1000 rows... When I do it with autocommit, it
> does
>  not
> > even reach the 1000 rows per sec.
> >
> > All nodes were m1.large ones.
> >
> > Any experiences, suggestions? Is it worth to try the RMap
> distribution
> > instead of the amazon one?
> >
> > Thanks,
> > Pal
> 
> 
> 
>  --
>  Marcos Ortiz Valmaseda
>  Product Manager at PDVSA
>  http://about.me/marcosortiz
> >>>
> >>>
> >>>
> >>> --
> >>> Best regards,
> >>>
> >>>  - Andy
> >>>
> >>> Problems worthy of attack prove their worth by hitting back. - Piet
> Hein
> >>> (via Tom White)
> >>
>


Re: Does Hbase read .bashrc file??

2013-05-11 Thread Mohammad Tariq
I see. It was actually a Linux "thing". Thank you so much for the
clarification.

Warm Regards,
Tariq
cloudfront.blogspot.com


On Sun, May 12, 2013 at 4:51 AM, Amandeep Khurana  wrote:

> The #!/usr/bin/bash line in the script tells the shell to execute the
> script as a bash script. The execution is done by forking off a new shell
> instance, inside of which the bashrc is sourced automatically. It's not an
> HBase-script-specific phenomenon.
>
> Like you said - removing the HADOOP_HOME env variable from the bashrc would
> solve the problem.
>
>
> On Sat, May 11, 2013 at 1:02 PM, Mohammad Tariq 
> wrote:
>
> > Hello Aman,
> >
> > Thank you so much for the quick response. But why would that happen?
> I
> > mean the required env variables are present in hbase-env.sh already. What
> > is the need to source bashrc?
> >
> > Consider a scenario wherein you want to run Hbase in standalone mode. You
> > have a Hadoop setup on the same machine and HADOOP_HOME is set in bashrc,
> > but you don't want to use hadoop for some reason. In that case Hbase will
> > face connection issues as it'll try to contact the Hadoop host(as
> > HADOOP_HOME is present in bashrc) which is not running because it is
> > standalone setup. On the other hand if you are running Hbase in pseudo
> > distributed mode and if you haven't set HADOOP_HOME in bashrc, it would
> > still work.
> >
> > I'm sorry to be a pest of questions. I am actually not able to understand
> > this. Pardon my ignorance.
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Sun, May 12, 2013 at 1:14 AM, Amandeep Khurana 
> > wrote:
> >
> > > The start script is a shell script and it forks a new shell when the
> > > script is executed. That'll source the bashrc file.
> > >
> > > On May 11, 2013, at 12:39 PM, Mohammad Tariq 
> wrote:
> > >
> > > > Hello list,
> > > >
> > > >   Does Hbase read the environment variables set in
> > > > *~/.bashrc*file everytime I issue
> > > > *bin/start-hbase.sh*??What could be the possible reasons for
> > > that?Specially
> > > > if I have a standalone setup on my local FS.
> > > >
> > > > Thank you so much for your time.
> > > >
> > > > Warm Regards,
> > > > Tariq
> > > > cloudfront.blogspot.com
> > >
> >
>


Re: Does Hbase read .bashrc file??

2013-05-11 Thread Amandeep Khurana
The #!/usr/bin/bash line in the script tells the shell to execute the
script as a bash script. The execution is done by forking off a new shell
instance, inside of which the bashrc is sourced automatically. It's not an
HBase-script-specific phenomenon.

Like you said - removing the HADOOP_HOME env variable from the bashrc would
solve the problem.
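A related effect is easy to demonstrate stand-alone: a variable exported in the parent environment, as a .bashrc export would be, is visible inside a freshly forked bash instance. The sketch below is hypothetical (the /opt/hadoop path is made up) and uses ProcessBuilder to play the role of the shell that launches bin/start-hbase.sh.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class EnvInheritance {
    public static void main(String[] args) throws Exception {
        // Fork a child bash instance, the way start-hbase.sh is forked,
        // with HADOOP_HOME exported as if it were set in ~/.bashrc.
        ProcessBuilder pb =
                new ProcessBuilder("bash", "-c", "echo ${HADOOP_HOME:-unset}");
        pb.environment().put("HADOOP_HOME", "/opt/hadoop"); // hypothetical path
        Process p = pb.start();
        try (BufferedReader r =
                new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            System.out.println(r.readLine()); // /opt/hadoop
        }
        p.waitFor();
    }
}
```

Removing the export (or commenting out the bashrc line) makes the child print "unset", which matches the fix discussed in this thread.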


On Sat, May 11, 2013 at 1:02 PM, Mohammad Tariq  wrote:

> Hello Aman,
>
> Thank you so much for the quick response. But why would that happen? I
> mean the required env variables are present in hbase-env.sh already. What
> is the need to source bashrc?
>
> Consider a scenario wherein you want to run Hbase in standalone mode. You
> have a Hadoop setup on the same machine and HADOOP_HOME is set in bashrc,
> but you don't want to use hadoop for some reason. In that case Hbase will
> face connection issues as it'll try to contact the Hadoop host(as
> HADOOP_HOME is present in bashrc) which is not running because it is
> standalone setup. On the other hand if you are running Hbase in pseudo
> distributed mode and if you haven't set HADOOP_HOME in bashrc, it would
> still work.
>
> I'm sorry to be a pest of questions. I am actually not able to understand
> this. Pardon my ignorance.
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Sun, May 12, 2013 at 1:14 AM, Amandeep Khurana 
> wrote:
>
> > The start script is a shell script and it forks a new shell when the
> > script is executed. That'll source the bashrc file.
> >
> > On May 11, 2013, at 12:39 PM, Mohammad Tariq  wrote:
> >
> > > Hello list,
> > >
> > >   Does Hbase read the environment variables set in
> > > *~/.bashrc*file everytime I issue
> > > *bin/start-hbase.sh*??What could be the possible reasons for
> > that?Specially
> > > if I have a standalone setup on my local FS.
> > >
> > > Thank you so much for your time.
> > >
> > > Warm Regards,
> > > Tariq
> > > cloudfront.blogspot.com
> >
>


Re: Does Hbase read .bashrc file??

2013-05-11 Thread Mohammad Tariq
Hello sir,

  Long time :)

Yeah, my understanding was the same as what you have written. But today I
noticed that Hbase was picking up HADOOP_HOME from bashrc when I was trying
to run it in standalone mode (on my local FS). Although all the Hadoop
daemons were stopped and Hbase was configured to run in standalone mode, it
was still trying to reach "hdfs://localhost:9000" and hence getting:

java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on
connection exception: .

So I arrived at the conclusion that if I have HADOOP_HOME set in bashrc,
Hbase will try to search for the NN, neglecting the properties set in
hbase-site.xml. And once I commented out HADOOP_HOME in bashrc, everything
started behaving just perfectly.

Thank you.

Warm Regards,
Tariq
cloudfront.blogspot.com


On Sun, May 12, 2013 at 2:02 AM, Stack  wrote:

> On Sat, May 11, 2013 at 1:02 PM, Mohammad Tariq 
> wrote:
>
> > Hello Aman,
> >
> > Thank you so much for the quick response. But why would that happen?
> I
> > mean the required env variables are present in hbase-env.sh already. What
> > is the need to source bashrc?
> >
> > Consider a scenario wherein you want to run Hbase in standalone mode. You
> > have a Hadoop setup on the same machine and HADOOP_HOME is set in bashrc,
> > but you don't want to use hadoop for some reason. In that case Hbase will
> > face connection issues as it'll try to contact the Hadoop host(as
> > HADOOP_HOME is present in bashrc) which is not running because it is
> > standalone setup. On the other hand if you are running Hbase in pseudo
> > distributed mode and if you haven't set HADOOP_HOME in bashrc, it would
> > still work.
> >
> > I'm sorry to be a pest of questions. I am actually not able to understand
> > this. Pardon my ignorance.
> >
>
>
> Hey Tariq:
>
> We generally try to avoid picking up anything from the environment.  This
> is why you have to define where you want to get your JAVA, etc., from in the
> hbase configuration.  Is there something in particular that we are finding
> in .bashrc that you notice?
>
> Thanks,
> St.Ack
>


Re: Does Hbase read .bashrc file??

2013-05-11 Thread Stack
On Sat, May 11, 2013 at 1:02 PM, Mohammad Tariq  wrote:

> Hello Aman,
>
> Thank you so much for the quick response. But why would that happen? I
> mean the required env variables are present in hbase-env.sh already. What
> is the need to source bashrc?
>
> Consider a scenario wherein you want to run Hbase in standalone mode. You
> have a Hadoop setup on the same machine and HADOOP_HOME is set in bashrc,
> but you don't want to use hadoop for some reason. In that case Hbase will
> face connection issues as it'll try to contact the Hadoop host(as
> HADOOP_HOME is present in bashrc) which is not running because it is
> standalone setup. On the other hand if you are running Hbase in pseudo
> distributed mode and if you haven't set HADOOP_HOME in bashrc, it would
> still work.
>
> I'm sorry to be a pest of questions. I am actually not able to understand
> this. Pardon my ignorance.
>


Hey Tariq:

We generally try to avoid picking up anything from the environment.  This
is why you have to define where you want to get your JAVA, etc., from in the
hbase configuration.  Is there something in particular that we are finding
in .bashrc that you notice?

Thanks,
St.Ack


Re: Does Hbase read .bashrc file??

2013-05-11 Thread Mohammad Tariq
Hello Aman,

Thank you so much for the quick response. But why would that happen? I
mean the required env variables are present in hbase-env.sh already. What
is the need to source bashrc?

Consider a scenario wherein you want to run Hbase in standalone mode. You
have a Hadoop setup on the same machine and HADOOP_HOME is set in bashrc,
but you don't want to use Hadoop for some reason. In that case Hbase will
face connection issues, as it'll try to contact the Hadoop host (since
HADOOP_HOME is present in bashrc), which is not running because it is a
standalone setup. On the other hand, if you are running Hbase in
pseudo-distributed mode and you haven't set HADOOP_HOME in bashrc, it
would still work.

I'm sorry to be a pest with questions. I am actually not able to
understand this. Pardon my ignorance.

Warm Regards,
Tariq
cloudfront.blogspot.com


On Sun, May 12, 2013 at 1:14 AM, Amandeep Khurana  wrote:

> The start script is a shell script and it forks a new shell when the
> script is executed. That'll source the bashrc file.
>
> On May 11, 2013, at 12:39 PM, Mohammad Tariq  wrote:
>
> > Hello list,
> >
> >   Does Hbase read the environment variables set in
> > *~/.bashrc*file everytime I issue
> > *bin/start-hbase.sh*??What could be the possible reasons for
> that?Specially
> > if I have a standalone setup on my local FS.
> >
> > Thank you so much for your time.
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
>


Re: Does Hbase read .bashrc file??

2013-05-11 Thread Amandeep Khurana
The start script is a shell script and it forks a new shell when the
script is executed. That'll source the bashrc file.

On May 11, 2013, at 12:39 PM, Mohammad Tariq  wrote:

> Hello list,
>
>   Does Hbase read the environment variables set in *~/.bashrc* file
> every time I issue *bin/start-hbase.sh*?? What could be the possible
> reasons for that? Especially if I have a standalone setup on my local FS.
>
> Thank you so much for your time.
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com


Does Hbase read .bashrc file??

2013-05-11 Thread Mohammad Tariq
Hello list,

   Does Hbase read the environment variables set in *~/.bashrc* file
every time I issue *bin/start-hbase.sh*?? What could be the possible
reasons for that? Especially if I have a standalone setup on my local FS.

Thank you so much for your time.

Warm Regards,
Tariq
cloudfront.blogspot.com


Re: HBase Support

2013-05-11 Thread shashwat shriparv
In your code, add these parameters:
zookeeper.session.timeout
zookeeper tick time
conf.addresources("",value)



*Thanks & Regards*

∞
Shashwat Shriparv



On Sat, May 11, 2013 at 9:22 PM, Mohammad Tariq  wrote:

> Hello Raju,
>
> Please add the scheme i.e. "file://" in your URI so that it looks
> like "file:///home/imadas/24H-DB". And yes, there is no need to
> specify hbase.master
> in hbase-site.xml, as Shahab has said.
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Sat, May 11, 2013 at 9:10 PM, Shahab Yunus wrote:
>
> > @naga raju. You should not provide hbase.master property. The information
> > is retrieved from the zookeeper.quorum property instead.
> >
> >
> > On Sat, May 11, 2013 at 8:39 AM, naga raju 
> wrote:
> >
> > > Thanks Tariq,
> > >
> > > my hbase-site.xml:
> > >
> > > <configuration>
> > >   <property>
> > >     <name>hbase.rootdir</name>
> > >     <value>/home/imadas/24H-DB</value>
> > >   </property>
> > >
> > >   <property>
> > >     <name>hbase.rpc.timeout</name>
> > >     <value>90</value>
> > >   </property>
> > >
> > >   <property>
> > >     <name>hbase.zookeeper.quorum</name>
> > >     <value>127.0.0.1</value>
> > >   </property>
> > >
> > >   <property>
> > >     <name>hbase.master</name>
> > >     <value>127.0.0.1:6</value>
> > >     <description>The host and port that the HBase master runs at.</description>
> > >   </property>
> > >
> > >   <property>
> > >     <name>zookeeper.session.timeout</name>
> > >     <value>18</value>
> > >   </property>
> > >
> > >   <property>
> > >     <name>hbase.zookeeper.property.dataDir</name>
> > >     <value>/home/imadas/zookeeperfolder</value>
> > >   </property>
> > > </configuration>
> > >
> > >
> > > Thanks & Regards
> > > Raju.
> > >
> > >
> > > On Fri, May 10, 2013 at 8:20 PM, Mohammad Tariq 
> > > wrote:
> > >
> > > > You are welcome Raju,
> > > >
> > > >   There seems to be some connection related issue. Could you
> > > > please show me your hbase-site.xml? Have you added
> > > > "hbase.zookeeper.quorum" in it? Which version are you using?
> > > >
> > > > Warm Regards,
> > > > Tariq
> > > > cloudfront.blogspot.com
> > > >
> > > >
> > > > On Fri, May 10, 2013 at 8:01 PM, naga raju 
> > > wrote:
> > > >
> > > > > Thank you Tariq,
> > > > >
> > > > > Have you checked the HBase Master logs (which I have provided to
> > > > > you)? Did you find any clue to identify what the problem is?
> > > > >
> > > > > Thanks & Regards
> > > > > Raju.
> > > > >
> > > > >
> > > > > On Fri, May 10, 2013 at 7:01 PM, Mohammad Tariq <
> donta...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hello Raju,
> > > > > >
> > > > > >   ZK logs should be there  at the same place as HMaster logs.
> > > > > >
> > > > > > Warm Regards,
> > > > > > Tariq
> > > > > > cloudfront.blogspot.com
> > > > > >
> > > > > >
> > > > > > On Fri, May 10, 2013 at 5:24 PM, naga raju <
> rajumudd...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > Thanks Tariq,
> > > > > > >
> > > > > > > I am Using hbase-0.92.1,
> > > > > > >
> > > > > > > This is some part of Hbase Master log file:
> > > > > > >
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> environment:java.library.path=/home/imadas/softs/hbase-0.92.1/bin/../lib/native/Linux-i386-32
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > > environment:java.io.tmpdir=/tmp
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > > environment:java.compiler=
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > > > os.name
> > > > > > > =Linux
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > environment:os.arch=i386
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > > environment:os.version=2.6.32-220.el6.i686
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > > > > user.name
> > > > > > > =imadas
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > > environment:user.home=/home/imadas
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > > environment:user.dir=/home/imadas/softs/hbase-0.92.1
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Initiating client
> > > > > connection,
> > > > > > > connectString=127.0.0.1:2181 sessionTimeout=18
> > > > watcher=hconnection
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Opening socket
> > > > connection
> > > > > to
> > > > > > > server /127.0.0.1:2181
> > > > > > > 13/05/10 14:22:22 INFO zookeeper.RecoverableZooKeeper: The
> > > identifier
> > > > > of
> > > > > > > this process is 3145@localhost.localdomain
> > > > > > > 13/05/10 14:22:22 WARN client.ZooKeeperSaslClient:
> > > SecurityException:
> > > > > > > java.lang.SecurityException: Unable to locate a login
> > configuration
> > > > > > > occurred when trying to find JAAS configuration.
> > > > > > > 13/05/10 14:22:22 INFO client.ZooKeeperSaslClient: Client will
> > not
> > > > > > > SASL-authenticate because the default JAAS configuration
> section
> > > > > 'Client'
> > > > > > > could not be found. If you are not using SASL, you may ignore
> > this.
> > > > On
> > > > > > the
> > > > > > > other hand, if you expected SASL to work, please fix your JAAS configuration.

Re: HBase Support

2013-05-11 Thread Mohammad Tariq
Hello Raju,

Please add the scheme, i.e. "file://", to your URI so that it looks
like "file:///home/imadas/24H-DB". And yes, there is no need to
specify hbase.master in hbase-site.xml, as Shahab has said.
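
A sketch of what the corrected property would look like in hbase-site.xml (the path is taken from the config quoted below; a local-filesystem rootdir needs an explicit file:// scheme):

```xml
<property>
  <name>hbase.rootdir</name>
  <!-- explicit file:// scheme for a local-filesystem root directory -->
  <value>file:///home/imadas/24H-DB</value>
</property>
```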

Warm Regards,
Tariq
cloudfront.blogspot.com


On Sat, May 11, 2013 at 9:10 PM, Shahab Yunus wrote:

> @naga raju. You should not provide hbase.master property. The information
> is retrieved from the zookeeper.quorum property instead.
>
>
> On Sat, May 11, 2013 at 8:39 AM, naga raju  wrote:
>
> > Thanks Tariq,
> >
> > my hbase-site.xml:
> >
> >  
> > 
> > hbase.rootdir
> > /home/imadas/24H-DB
> > 
> >
> > 
> >hbase.rpc.timeout
> >90 
> > 
> >
> > 
> >hbase.zookeeper.quorum
> >127.0.0.1
> > 
> >
> > 
> >hbase.master
> >127.0.0.1:6
> >The host and port that the HBase master runs
> > at.
> > 
> >
> > 
> >zookeeper.session.timeout
> >18
> > 
> >
> > 
> > hbase.zookeeper.property.dataDir
> > /home/imadas/zookeeperfolder
> > 
> >
> > 
> >
> >
> > Thanks & Regards
> > Raju.
> >
> >
> > On Fri, May 10, 2013 at 8:20 PM, Mohammad Tariq 
> > wrote:
> >
> > > You are welcome Raju,
> > >
> > >   There seems to be some connection related issue. Could you
> > please
> > > show me you hbase-site.xml?Have you added "hbase.zookeeper.quorum" in
> > > it?Which version are you using?
> > >
> > > Warm Regards,
> > > Tariq
> > > cloudfront.blogspot.com
> > >
> > >
> > > On Fri, May 10, 2013 at 8:01 PM, naga raju 
> > wrote:
> > >
> > > > Thank you Tariq,
> > > >
> > > > Have checked Hbase Master Logs(Which i have Provided to you).
> > > >  Did you find any clue to identify what is the Problem.
> > > >
> > > > Thanks & Regards
> > > > Raju.
> > > >
> > > >
> > > > On Fri, May 10, 2013 at 7:01 PM, Mohammad Tariq 
> > > > wrote:
> > > >
> > > > > Hello Raju,
> > > > >
> > > > >   ZK logs should be there  at the same place as HMaster logs.
> > > > >
> > > > > Warm Regards,
> > > > > Tariq
> > > > > cloudfront.blogspot.com
> > > > >
> > > > >
> > > > > On Fri, May 10, 2013 at 5:24 PM, naga raju 
> > > > wrote:
> > > > >
> > > > > > Thanks Tariq,
> > > > > >
> > > > > > I am Using hbase-0.92.1,
> > > > > >
> > > > > > This is some part of Hbase Master log file:
> > > > > >
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> environment:java.library.path=/home/imadas/softs/hbase-0.92.1/bin/../lib/native/Linux-i386-32
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > environment:java.io.tmpdir=/tmp
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > environment:java.compiler=
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > > os.name
> > > > > > =Linux
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > environment:os.arch=i386
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > environment:os.version=2.6.32-220.el6.i686
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > > > user.name
> > > > > > =imadas
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > environment:user.home=/home/imadas
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > > environment:user.dir=/home/imadas/softs/hbase-0.92.1
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Initiating client
> > > > connection,
> > > > > > connectString=127.0.0.1:2181 sessionTimeout=18
> > > watcher=hconnection
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Opening socket
> > > connection
> > > > to
> > > > > > server /127.0.0.1:2181
> > > > > > 13/05/10 14:22:22 INFO zookeeper.RecoverableZooKeeper: The
> > identifier
> > > > of
> > > > > > this process is 3145@localhost.localdomain
> > > > > > 13/05/10 14:22:22 WARN client.ZooKeeperSaslClient:
> > SecurityException:
> > > > > > java.lang.SecurityException: Unable to locate a login
> configuration
> > > > > > occurred when trying to find JAAS configuration.
> > > > > > 13/05/10 14:22:22 INFO client.ZooKeeperSaslClient: Client will
> not
> > > > > > SASL-authenticate because the default JAAS configuration section
> > > > 'Client'
> > > > > > could not be found. If you are not using SASL, you may ignore
> this.
> > > On
> > > > > the
> > > > > > other hand, if you expected SASL to work, please fix your JAAS
> > > > > > configuration.
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Socket connection
> > > > > established
> > > > > > to localhost.localdomain/127.0.0.1:2181, initiating session
> > > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Session
> establishment
> > > > > complete
> > > > > > on server localhost.localdomain/127.0.0.1:2181, sessionid =
> > > > > > 0x13e8d9972b4, negotiated timeout = 4
> > > > > >
> > > > > >
> > > > > >
> > > > > > and kindly Tell me Where I can Found Zookeeper logs.
> > > > > >
> > > > > > Thanks & Regards.
> > > > > > 

Re: HBase Support

2013-05-11 Thread Shahab Yunus
@naga raju. You should not provide hbase.master property. The information
is retrieved from the zookeeper.quorum property instead.


On Sat, May 11, 2013 at 8:39 AM, naga raju  wrote:

> Thanks Tariq,
>
> my hbase-site.xml:
>
>  
> 
> hbase.rootdir
> /home/imadas/24H-DB
> 
>
> 
>hbase.rpc.timeout
>90 
> 
>
> 
>hbase.zookeeper.quorum
>127.0.0.1
> 
>
> 
>hbase.master
>127.0.0.1:6
>The host and port that the HBase master runs
> at.
> 
>
> 
>zookeeper.session.timeout
>18
> 
>
> 
> hbase.zookeeper.property.dataDir
> /home/imadas/zookeeperfolder
> 
>
> 
>
>
> Thanks & Regards
> Raju.
>
>
> On Fri, May 10, 2013 at 8:20 PM, Mohammad Tariq 
> wrote:
>
> > You are welcome Raju,
> >
> >   There seems to be some connection related issue. Could you
> please
> > show me you hbase-site.xml?Have you added "hbase.zookeeper.quorum" in
> > it?Which version are you using?
> >
> > Warm Regards,
> > Tariq
> > cloudfront.blogspot.com
> >
> >
> > On Fri, May 10, 2013 at 8:01 PM, naga raju 
> wrote:
> >
> > > Thank you Tariq,
> > >
> > > Have checked Hbase Master Logs(Which i have Provided to you).
> > >  Did you find any clue to identify what is the Problem.
> > >
> > > Thanks & Regards
> > > Raju.
> > >
> > >
> > > On Fri, May 10, 2013 at 7:01 PM, Mohammad Tariq 
> > > wrote:
> > >
> > > > Hello Raju,
> > > >
> > > >   ZK logs should be there  at the same place as HMaster logs.
> > > >
> > > > Warm Regards,
> > > > Tariq
> > > > cloudfront.blogspot.com
> > > >
> > > >
> > > > On Fri, May 10, 2013 at 5:24 PM, naga raju 
> > > wrote:
> > > >
> > > > > Thanks Tariq,
> > > > >
> > > > > I am Using hbase-0.92.1,
> > > > >
> > > > > This is some part of Hbase Master log file:
> > > > >
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > >
> > > > >
> > > >
> > >
> >
> environment:java.library.path=/home/imadas/softs/hbase-0.92.1/bin/../lib/native/Linux-i386-32
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > environment:java.io.tmpdir=/tmp
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > environment:java.compiler=
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > os.name
> > > > > =Linux
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > environment:os.arch=i386
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > environment:os.version=2.6.32-220.el6.i686
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > > user.name
> > > > > =imadas
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > environment:user.home=/home/imadas
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > > environment:user.dir=/home/imadas/softs/hbase-0.92.1
> > > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Initiating client
> > > connection,
> > > > > connectString=127.0.0.1:2181 sessionTimeout=18
> > watcher=hconnection
> > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Opening socket
> > connection
> > > to
> > > > > server /127.0.0.1:2181
> > > > > 13/05/10 14:22:22 INFO zookeeper.RecoverableZooKeeper: The
> identifier
> > > of
> > > > > this process is 3145@localhost.localdomain
> > > > > 13/05/10 14:22:22 WARN client.ZooKeeperSaslClient:
> SecurityException:
> > > > > java.lang.SecurityException: Unable to locate a login configuration
> > > > > occurred when trying to find JAAS configuration.
> > > > > 13/05/10 14:22:22 INFO client.ZooKeeperSaslClient: Client will not
> > > > > SASL-authenticate because the default JAAS configuration section
> > > 'Client'
> > > > > could not be found. If you are not using SASL, you may ignore this.
> > On
> > > > the
> > > > > other hand, if you expected SASL to work, please fix your JAAS
> > > > > configuration.
> > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Socket connection
> > > > established
> > > > > to localhost.localdomain/127.0.0.1:2181, initiating session
> > > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Session establishment
> > > > complete
> > > > > on server localhost.localdomain/127.0.0.1:2181, sessionid =
> > > > > 0x13e8d9972b4, negotiated timeout = 4
> > > > >
> > > > >
> > > > >
> > > > > and kindly Tell me Where I can Found Zookeeper logs.
> > > > >
> > > > > Thanks & Regards.
> > > > > Raju.
> > > > >
> > > > >
> > > > > On Fri, May 10, 2013 at 4:14 PM, Mohammad Tariq <
> donta...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hello Raju,
> > > > > >
> > > > > >   This means that there is some problem with the connection
> > > between
> > > > > > HMaster and ZK. Make sure all your Hbase daemons are running
> fine.
> > > > Which
> > > > > > version are you using?Also showing complete HMaster and ZK logs
> > would
> > > > be
> > > > > > helpful.
> > > > > >
> > > > > > Warm Regards,
> > > > > > Tariq
> > > > > > cloudfront.blogspot.com
> > > > > >
> > > > > >
> > > > > > On Fri, May 10, 2013 at 11:54 AM, naga 

Re: HBase Support

2013-05-11 Thread naga raju
Thanks Tariq,

my hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>/home/imadas/24H-DB</value>
  </property>

  <property>
    <name>hbase.rpc.timeout</name>
    <value>90</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>127.0.0.1</value>
  </property>

  <property>
    <name>hbase.master</name>
    <value>127.0.0.1:6</value>
    <description>The host and port that the HBase master runs at.</description>
  </property>

  <property>
    <name>zookeeper.session.timeout</name>
    <value>18</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/imadas/zookeeperfolder</value>
  </property>
</configuration>


Thanks & Regards
Raju.


On Fri, May 10, 2013 at 8:20 PM, Mohammad Tariq  wrote:

> You are welcome Raju,
>
>   There seems to be some connection related issue. Could you please
> show me you hbase-site.xml?Have you added "hbase.zookeeper.quorum" in
> it?Which version are you using?
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Fri, May 10, 2013 at 8:01 PM, naga raju  wrote:
>
> > Thank you Tariq,
> >
> > Have checked Hbase Master Logs(Which i have Provided to you).
> >  Did you find any clue to identify what is the Problem.
> >
> > Thanks & Regards
> > Raju.
> >
> >
> > On Fri, May 10, 2013 at 7:01 PM, Mohammad Tariq 
> > wrote:
> >
> > > Hello Raju,
> > >
> > >   ZK logs should be there  at the same place as HMaster logs.
> > >
> > > Warm Regards,
> > > Tariq
> > > cloudfront.blogspot.com
> > >
> > >
> > > On Fri, May 10, 2013 at 5:24 PM, naga raju 
> > wrote:
> > >
> > > > Thanks Tariq,
> > > >
> > > > I am Using hbase-0.92.1,
> > > >
> > > > This is some part of Hbase Master log file:
> > > >
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > >
> > > >
> > >
> >
> environment:java.library.path=/home/imadas/softs/hbase-0.92.1/bin/../lib/native/Linux-i386-32
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > environment:java.io.tmpdir=/tmp
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > environment:java.compiler=
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> os.name
> > > > =Linux
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > environment:os.arch=i386
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > environment:os.version=2.6.32-220.el6.i686
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client environment:
> > user.name
> > > > =imadas
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > environment:user.home=/home/imadas
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Client
> > > > environment:user.dir=/home/imadas/softs/hbase-0.92.1
> > > > 13/05/10 14:22:22 INFO zookeeper.ZooKeeper: Initiating client
> > connection,
> > > > connectString=127.0.0.1:2181 sessionTimeout=18
> watcher=hconnection
> > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Opening socket
> connection
> > to
> > > > server /127.0.0.1:2181
> > > > 13/05/10 14:22:22 INFO zookeeper.RecoverableZooKeeper: The identifier
> > of
> > > > this process is 3145@localhost.localdomain
> > > > 13/05/10 14:22:22 WARN client.ZooKeeperSaslClient: SecurityException:
> > > > java.lang.SecurityException: Unable to locate a login configuration
> > > > occurred when trying to find JAAS configuration.
> > > > 13/05/10 14:22:22 INFO client.ZooKeeperSaslClient: Client will not
> > > > SASL-authenticate because the default JAAS configuration section
> > 'Client'
> > > > could not be found. If you are not using SASL, you may ignore this.
> On
> > > the
> > > > other hand, if you expected SASL to work, please fix your JAAS
> > > > configuration.
> > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Socket connection
> > > established
> > > > to localhost.localdomain/127.0.0.1:2181, initiating session
> > > > 13/05/10 14:22:22 INFO zookeeper.ClientCnxn: Session establishment
> > > complete
> > > > on server localhost.localdomain/127.0.0.1:2181, sessionid =
> > > > 0x13e8d9972b4, negotiated timeout = 4
> > > >
> > > >
> > > >
> > > > and kindly Tell me Where I can Found Zookeeper logs.
> > > >
> > > > Thanks & Regards.
> > > > Raju.
> > > >
> > > >
> > > > On Fri, May 10, 2013 at 4:14 PM, Mohammad Tariq 
> > > > wrote:
> > > >
> > > > > Hello Raju,
> > > > >
> > > > >   This means that there is some problem with the connection
> > between
> > > > > HMaster and ZK. Make sure all your Hbase daemons are running fine.
> > > Which
> > > > > version are you using?Also showing complete HMaster and ZK logs
> would
> > > be
> > > > > helpful.
> > > > >
> > > > > Warm Regards,
> > > > > Tariq
> > > > > cloudfront.blogspot.com
> > > > >
> > > > >
> > > > > On Fri, May 10, 2013 at 11:54 AM, naga raju  >
> > > > wrote:
> > > > >
> > > > > > ya I have checked Logs ,in those logs i found ..
> > > > > >
> > > > > >
> > > > > > WARN client.ZooKeeperSaslClient: SecurityException:
> > > > > > java.lang.SecurityException: Unable to locate a login
> configuration
> > > > > > occurred when trying to find JAAS configuration.
> > > > > >
> > > > > > client.ZooKeeperSaslClient: Client will not SASL-authenticate
> > because
> > > > the
> > > > > > default JAAS configuration section 'Client' could not be found.
> If
> > > you
> > > > > are
> > > > > > 

Re: How to remove dependency of jruby from HBase

2013-05-11 Thread Asaf Mesika
If JRuby were GPL, that would make HBase GPL as well, but HBase is under the Apache license.

On May 11, 2013, at 12:32 AM, xia_y...@dell.com wrote:

> Hi,
> 
> We are using Hbase 0.94.1. There is dependency from Hbase to Jruby. I found 
> below in hbase-0.94.1.pom.
> 
> <dependency>
>   <groupId>org.jruby</groupId>
>   <artifactId>jruby-complete</artifactId>
>   <version>${jruby.version}</version>
> </dependency>
> 
> Our issue is that our project is not open source, and JRuby is under the GPL
> license, so we cannot use it. Do you have suggestions on how to remove the
> dependency on JRuby, and what the impact on HBase functionality would be?
> 
> Thanks,
> 
> Jane
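
If the project builds with Maven, one way to keep JRuby out is an exclusion on the HBase dependency — a sketch, assuming a project depending on the hbase 0.94.1 artifact; the main part of HBase that uses JRuby is the HBase shell, so a client application that never invokes the shell should be largely unaffected:

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.1</version>
  <exclusions>
    <!-- pulled in for the JRuby-based HBase shell -->
    <exclusion>
      <groupId>org.jruby</groupId>
      <artifactId>jruby-complete</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```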



Re: How to implement this check put and then update something logic?

2013-05-11 Thread Asaf Mesika
Maybe this problem is more in the graph domain? I know there are projects aimed at representing large-scale graphs better. I'm saying this since you have one ID referencing another ID (via the target ID).



On May 10, 2013, at 11:47 AM, "Liu, Raymond"  wrote:

> Thanks, seems there are no other better solution?
> 
> Really need a "GetAndPut" atomic op here ...
> 
>> 
>> You can do this by looping over a checkAndPut operation until it succeeds.
>> 
>> -Mike
>> 
>> On Thu, May 9, 2013 at 8:52 PM, Liu, Raymond 
>> wrote:
>>> Any suggestion?
>>> 
 
 Hi
 
  Say, I have four field for one record :id, status, targetid, and 
 count.
  Status is on and off, target could reference other id, and
 count will record the number of "on" status for all targetid from same id.
 
  The record could be add / delete, or updated to change the status.
 
  I could put count in another table, or put it in the same
 table, it doesn't matter. As long as it can work.
 
  My question is how can I ensure its correctness of the "count"
 field when run with multiple client update the table concurrently?
 
 The closest thing I can think of is checkAndPut, but I will need
 two steps to find out the change of count, since checkAndPut etc can
 only test a single value and with EQUAL comparator, thus I can only
 check upon null firstly, then on or off. Thus when thing change
 during this two step, I need to retry from first step until it succeed. 
 This
>> could be bad when a lot of concurrent op is on going.
 
  And then, I need to update count by checkAndIncrement, though
 if the above problem could be solved, the order of -1 +1 might not be
 important for the final result, but in some intermediate time, it
 might not reflect the real count of that time.
 
  I know this kind of transaction is not the target of HBASE, APP
 should take care of it, then , what's the best practice on this? Any
 quick simple solution for my problem? Client RowLock could solve this
 issue, But it seems to me that it is not safe and is not recommended and
>> deprecated?
 
  Btw. Is that possible or practice to implement something like
 PutAndGet which put in new row and return the old row back to client been
>> implemented?
 That would help a lot for my case.
 
 Best Regards,
 Raymond Liu
>>>
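
Mike's "loop over a checkAndPut until it succeeds" pattern is ordinary optimistic concurrency: read the current value, compute the update, and write it only if the cell still holds what was read, retrying on failure. A minimal runnable sketch of that loop, using a ConcurrentHashMap as a stand-in for the table (its putIfAbsent/replace play the role of HBase's checkAndPut; all names here are illustrative, not from the thread):

```java
import java.util.concurrent.ConcurrentHashMap;

public class CheckAndPutRetry {

    // Stand-in for the HBase table: row id -> "count" cell.
    static final ConcurrentHashMap<String, Integer> table = new ConcurrentHashMap<>();

    // Atomically add delta to the count for id, looping until the
    // compare-and-set ("checkAndPut") succeeds.
    static int addToCount(String id, int delta) {
        while (true) {
            Integer old = table.get(id);
            if (old == null) {
                // checkAndPut against a null expected value == create-if-absent
                if (table.putIfAbsent(id, delta) == null) return delta;
            } else if (table.replace(id, old, old + delta)) {
                return old + delta;
            }
            // another client won the race: re-read and retry
        }
    }

    // Four concurrent writers, 1000 increments each; no lost updates.
    static int demo() {
        table.clear();
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) addToCount("id-1", 1);
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            try { t.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return table.get("id-1");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 4000
    }
}
```

Against HBase itself, the read would be a Get and the compare-and-set a checkAndPut on the same row, with the same retry-on-false loop; under heavy contention, those retries are exactly the cost Raymond is worried about.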