Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread syed kather
Hello Sir, for solving RS (region server) hotspotting you can also try the approach below: http://blog.sematext.com/2012/04/09/hbasewd-avoid-regionserver-hotspotting-despite-writing-records-with-sequential-keys/ It works fine. Regarding column families, you can also try to group similar columns into one family,
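The linked HBaseWD approach spreads sequential writes by prepending a salt bucket to each rowkey. A minimal sketch of the idea (the function name, bucket count, and hash are illustrative assumptions, not the HBaseWD library's actual API):

```python
def salted_key(original_key: str, num_buckets: int = 8) -> str:
    """Prefix a sequential key with a deterministic salt bucket so that
    writes with monotonically increasing keys land on different regions.
    Sketch only; HBaseWD's real API and hashing differ."""
    bucket = sum(original_key.encode()) % num_buckets  # stable, library-independent hash
    return f"{bucket:02d}_{original_key}"

# Sequential timestamps now scatter across buckets instead of one region:
keys = [salted_key(f"2012-11-26T10:00:{s:02d}") for s in range(5)]
```

The trade-off is on the read side: a scan over the original key range must fan out into one scan per bucket and merge the results, which is exactly what the HBaseWD helper classes handle.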

Re: ANN: HBase 0.94.2 is available for download

2012-10-18 Thread syed kather
+1 pushing to the Maven repo. Thanks and Regards, S SYED ABDUL KATHER

On Thu, Oct 18, 2012 at 10:32 PM, lars hofhansl wrote:
> I'm on it. :)
> ----- Original Message -----
> From: Amit Sela
> To: user@hbase.apache.org
> Sent: Thursday, October 18, 2012 8:25 AM
> Su

Re: Encryption in HBase

2012-08-08 Thread syed kather
Tariq, as we know, HBase is an in-memory database: when we need to scan or get, it looks up the meta file and then hits the specified HFile. So if we encrypt the HFile, internally we are encrypting the whole serialized cell: Key Length + Value Length + Row Length + CF Length + Timestamp + Key + Value as well. We are interest
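The point about what gets encrypted can be seen by packing a simplified cell. This is a hedged sketch of the KeyValue idea only; the field order and widths here are illustrative, not HBase's exact on-disk format:

```python
import struct

def pack_cell(row: bytes, family: bytes, qualifier: bytes,
              value: bytes, ts: int) -> bytes:
    """Simplified KeyValue-style layout: length fields, row, column
    family, qualifier, timestamp, then the value. Encrypting the file
    that holds these bytes encrypts the lengths and timestamp too,
    not just the user value."""
    key = (struct.pack(">H", len(row)) + row
           + struct.pack("B", len(family)) + family
           + qualifier
           + struct.pack(">Q", ts))
    return struct.pack(">II", len(key), len(value)) + key + value

cell = pack_cell(b"row1", b"cf", b"q", b"v", 1344384000000)
```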

Re: CheckAndAppend Feature

2012-08-07 Thread syed kather
Team, sorry, my mailbox had not yet updated when I replied; I only noticed after other people had already answered. Thanks and Regards, S SYED ABDUL KATHER

On Tue, Aug 7, 2012 at 9:39 PM, syed kather wrote:
> Hi Jerry
> I also had a similar use case.

Re: CheckAndAppend Feature

2012-08-07 Thread syed kather
Hi Jerry, I also had a similar use case, and I have not found anything similar to that append :( Thanks and Regards, S SYED ABDUL KATHER

On Tue, Aug 7, 2012 at 8:52 PM, Jerry Lam wrote:
> Hi HBase community:
> I checked the HTable API, it has checkAndPut and checkAndDelete

Re: an empty region

2012-08-07 Thread syed kather
samar kumar, "If all the rows of a region are deleted, what happens to the region? Would it still exist?" Yes, it will still exist. Whenever you create a family, HBase will create a separate store for that column family. If there is data available, then that data will be written to an HFile which is c

Auto Splitting in hbase

2012-08-06 Thread syed kather
Team, I understand how pre-splitting works. Can I know how auto-splitting works? Is there any threshold for splitting other than HFile size? Thanks and Regards, S SYED ABDUL KATHER
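For context on the question: in the HBase of this era, automatic splitting is driven by region size against a configurable threshold, and in 0.94 the default split policy (IncreasingToUpperBoundRegionSplitPolicy) also factors in how many regions of the table live on the server. A sketch of the relevant setting (the value shown is illustrative, not a recommendation):

```
<!-- hbase-site.xml: store-file size at which a region becomes
     eligible for an automatic split. -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value> <!-- 10 GB, illustrative -->
</property>
```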

Re: Where to run Thrift

2012-07-31 Thread syed kather
Eric, why are you trying to run Thrift on all the servers? Why don't you run it only on the master machine? Really, after seeing your post I also had this doubt: do we need a separate Thrift setup, or is it enough to run Thrift on a single machine? Thanks and Regards, S SYED

Re: Recommend protocol connecting to HBASE

2012-07-29 Thread syed kather
Hi Trung, if you are using a different language then go for the Thrift API. If you saw the previous mail, they also suggested the Thrift API. I have worked with Thrift and it's nice; I used it to push my data from .NET and it helped me. Syed Abdul Kather, sent from Samsung S3

On Jul 29, 2012 10:06 PM, "Trung Pham

Re: Cluster load

2012-07-27 Thread syed kather
> ore flush): 1566523617482885717, size: 1993369 bytes.
> btw, 2MB looks weird: very small flush size (in this case; in other cases
> this may happen - long story). May be compression does very well :)
> Alex Baranau
> Sematext :: http://blog.sematext.

Re: Cluster load

2012-07-27 Thread syed kather
> eed to query on .META. table (column: info:server)
> --K
> On Fri, Jul 27, 2012 at 9:07 AM, Mohit Anchlia wrote:
> > Is there a way to see how much data does each node have per Hbase table?

Re: Cluster load

2012-07-26 Thread syed kather
First check whether the data in HBase is consistent; check this by running hbck (bin/hbase hbck). If all the regions are consistent, then check the number of splits at localhost:60010 for the table mentioned.

On Jul 27, 2012 4:02 AM, "Mohit Anchlia" wrote:
> I added new regions and the performance didn'

Re: importtsv example

2012-07-26 Thread syed kather
Can you try this? http://hbase.apache.org/book/ops_mgt.html#import — $ bin/hbase org.apache.hadoop.hbase.mapreduce.Import Thanks and Regards, S SYED ABDUL KATHER

On Thu, Jul 26, 2012 at 8:57 PM, Kevin O'dell wrote:
> Please let me know if this helps you out:
> http://ccp

Re: Basic Question on Partitioner,Combiner and Co-Processor

2012-07-23 Thread syed kather
Tue, Jul 24, 2012 at 12:03 AM, syed kather wrote:
> Thanks Shashwat Shriparv.
> Is there any interface or abstract class partitioner available for HBase specifically?
> Thanks and Regards,
> S SYED ABDUL KATHER
> On Mon, Jul 23, 2012 at

Re: Basic Question on Partitioner,Combiner and Co-Processor

2012-07-23 Thread syed kather
> nk may be will help you..
> http://developer.yahoo.com/hadoop/tutorial/module5.html#partitioning
> http://www.ashishpaliwal.com/blog/2012/05/hadoop-recipe-implementing-custom-partitioner/
> Regards
> ∞ Shashwat Shriparv
> On Mon,

Basic Question on Partitioner,Combiner and Co-Processor

2012-07-23 Thread syed kather
Hi, I am very much interested to know how to implement a custom Partitioner. Is there any blog? Let me know. As I understand it, the number of reducers depends upon the partitioner — correct me if I am wrong. How do I implement a co-processor (Min, Max)? Is there any tutorial available on implementing
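On the relationship asked about above: in Hadoop it is actually the configured reducer count that fixes the number of partitions, and the partitioner maps each key into one of them via getPartition(key, value, numPartitions). The default HashPartitioner computes (hashCode & Integer.MAX_VALUE) % numPartitions. A Python sketch of that logic for illustration (a real custom partitioner subclasses org.apache.hadoop.mapreduce.Partitioner in Java):

```python
def get_partition(key: str, num_reduce_tasks: int) -> int:
    """Mimics Hadoop's default HashPartitioner:
    (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks.
    Reproduces Java's String.hashCode over ASCII keys."""
    h = 0
    for ch in key:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    if h >= 0x80000000:          # reinterpret as signed 32-bit int
        h -= 0x100000000
    return (h & 0x7FFFFFFF) % num_reduce_tasks

# A custom partitioner would replace the hash with domain logic,
# e.g. routing all keys with the same prefix to one reducer:
#   get_partition(key.split("_")[0], num_reduce_tasks)
```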

Re: HConnectionManager get closed

2012-07-20 Thread syed kather
to talk to HBase correctly?
> J-D
> On Wed, Jul 18, 2012 at 9:07 AM, syed kather wrote:
> > Team,
> > While running the MapReduce program on a 3 node cluster, I noticed
> > that one of the nodes throws this error most often:
> > ja

Re: Rowkey hashing to avoid hotspotting

2012-07-19 Thread syed kather
Anand, I had a case where I combined 4 fields into one row key. The serial number can be the first part of the rowkey and the model number the second part, so that the binary search on the row key will be faster, because we reduce a lot of jumps while doing the binary search. Note: if the serial number changes freq
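The composite-key idea above can be sketched as follows. The field names and the fixed 12-character width are illustrative assumptions, not from the thread; the point is that fixed-width encoding makes lexicographic byte order match field order, so rows sharing a serial number sort contiguously:

```python
def composite_rowkey(serial: str, model: str, width: int = 12) -> bytes:
    """Build a composite rowkey: zero-padded serial number first,
    model number second. Padding keeps byte order == numeric order,
    so a prefix scan over one serial touches one narrow key range."""
    return serial.encode().rjust(width, b"0") + b"_" + model.encode()

k1 = composite_rowkey("12345", "MX-7")
k2 = composite_rowkey("12346", "AA-1")
```

Without the padding, "12345" would sort after "1234500" and the range for one serial number would be scattered, defeating the point the message makes about reducing jumps.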

Re: hbase map reduce is taking a lot of time

2012-07-18 Thread syed kather
> diate result on the local disk; just check if you have enough disk space, and
> also make sure that you have cleared the tmp directory and it is writable.
> Just provide more space and try, or else try with a small number of users and
> check if it's working

hbase map reduce is taking a lot of time

2012-07-16 Thread syed kather
Team, I wrote a MapReduce program. The scenario of my program is to emit . Total no of users: 825. Total no of seqids: 6,583,100. The number of map outputs the program will emit is: 825 * 6,583,100. I have an HBase table called ObjectSequence which consists of 6,583,100 rows. I used TableMapper and
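The figures in the message already explain the slowness; multiplying them out shows the job emits billions of map output records:

```python
users = 825
seq_ids = 6_583_100

# every user is emitted against every sequence id
emitted = users * seq_ids
print(f"{emitted:,}")  # 5,431,057,500 map outputs
```

Over 5.4 billion emitted records means the shuffle alone dominates the job; restructuring the emit (e.g. aggregating per user inside the mapper) is usually the fix for this shape of workload.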

Region server dies ..

2012-07-09 Thread syed kather
Hi all, when the MapReduce program is executed, the regionserver gets closed automatically. See the error below: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 15 actions: NotServingRegionException: 15 times, servers with issues: alok:60020, at org.apache.ha

Re: hbase-site.xml Content is not allowed in prolog.

2012-07-02 Thread syed kather
Thanks Marcin Cylke, now it is working. Thanks and Regards, S SYED ABDUL KATHER

On Mon, Jul 2, 2012 at 6:05 PM, Marcin Cylke wrote:
> On 02/07/12 10:34, syed kather wrote:
> > java.lang.RuntimeException: org.xml.sax.SAXParseException: Content is not
> >

hbase-site.xml Content is not allowed in prolog.

2012-07-02 Thread syed kather
Team, while I am trying to import the data from an exported backup, I am getting this "Content is not allowed in prolog". Please help me. *Error:* java.lang.RuntimeException: org.xml.sax.SAXParseException: Content is not allowed in prolog. at org.apache.hadoop.conf.Configuration.loadResou
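For readers hitting the same error: "Content is not allowed in prolog" from a SAX parser almost always means there are bytes before the `<?xml` declaration in a config file such as hbase-site.xml, most commonly a UTF-8 byte-order mark left by an editor. A small hedged sketch for diagnosing it (the helper name is illustrative):

```python
def bytes_before_prolog(raw: bytes) -> bytes:
    """Return whatever precedes the XML declaration (empty if clean).
    A UTF-8 BOM (b'\xef\xbb\xbf') or stray characters here trigger
    SAX's "Content is not allowed in prolog" error."""
    idx = raw.find(b"<?xml")
    return raw[:idx] if idx >= 0 else raw

clean = bytes_before_prolog(b'<?xml version="1.0"?><configuration/>')
bom = bytes_before_prolog(b"\xef\xbb\xbf" + b'<?xml version="1.0"?>')
```

If the returned prefix is non-empty, re-saving the file as UTF-8 without a BOM (or stripping the leading bytes) resolves the parse error.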