Please take a look at:
>
>
> http://hbase.apache.org/book.html#_server_side_configuration_for_simple_user_access_operation
>
> On Fri, May 6, 2016 at 10:51 AM, Mohit Anchlia <mohitanch...@gmail.com>
> wrote:
>
> > Is there a way to implement a simple user/pass aut
Is there a way to implement simple user/password authentication in HBase
instead of using Kerberos? Are coprocessors the right way of
implementing such authentication?
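The book section linked above boils down to a few hbase-site.xml properties. A sketch of what that server-side configuration looks like (check the linked section for your HBase release, since property names have shifted between versions):

```xml
<!-- hbase-site.xml: simple user access via the built-in AccessController
     coprocessor, no Kerberos involved. Verify property names against the
     book section linked above for your release. -->
<property>
  <name>hbase.security.authentication</name>
  <value>simple</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
```

So the answer to the coprocessor question is essentially yes, but you don't write one yourself: simple authorization is implemented by the shipped AccessController coprocessor.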
A better approach would be to break the data into chunks and create behaviour
similar to indirect blocks.
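A minimal sketch of that chunking idea (the class name and chunk size are illustrative, not from the thread): each chunk would be written as its own cell with the chunk index in the qualifier, plus a small metadata row pointing at the chunks, much like an indirect block.

```java
import java.util.ArrayList;
import java.util.List;

public class BlobChunker {
    // Hypothetical chunk size; cells of a few hundred KB to ~1 MB are a
    // common rule of thumb for HBase, but tune this for your cluster.
    static final int CHUNK_SIZE = 1024 * 1024;

    /** Split a blob into fixed-size chunks; each would become one cell. */
    static List<byte[]> chunk(byte[] blob) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < blob.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, blob.length - off);
            byte[] c = new byte[len];
            System.arraycopy(blob, off, c, 0, len);
            chunks.add(c);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] blob = new byte[(int) (2.5 * CHUNK_SIZE)];
        List<byte[]> chunks = chunk(blob);
        System.out.println(chunks.size()); // 3: two full chunks, one partial
    }
}
```

Reassembly is the reverse: read the chunk qualifiers in index order and concatenate.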
On Mon, Jun 3, 2013 at 9:12 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
I guess one can hack opening a socket from a Coprocessor Endpoint and push
its scanned data, thus achieving a stream.
Thanks, that's a good point about last byte being max :)
When I query 1234555..1234556 do I also get row for 1234556 if one exist?
On Sat, Mar 30, 2013 at 6:55 AM, Asaf Mesika asaf.mes...@gmail.com wrote:
Yes.
Watch out for last byte being max
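The caveat above ("last byte being max") comes up when you derive a scan stop row by incrementing the last byte of a key: 0xFF cannot be incremented in place. And since HBase's stop row is exclusive, including the boundary row itself takes the append-a-zero-byte trick. A sketch of both helpers (the names are mine, not from the thread):

```java
import java.util.Arrays;

public class ScanRange {
    /**
     * Smallest key strictly greater than every key starting with 'prefix':
     * increment the last byte; if it is already 0xFF, drop it and carry
     * into the previous byte. Returns null when every byte is 0xFF
     * (scan to end of table instead).
     */
    static byte[] nextAfterPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1); // drop trailing 0xFFs
            }
        }
        return null; // all 0xFF: no finite upper bound
    }

    /** Exclusive stop row that still includes 'row' itself: append 0x00. */
    static byte[] inclusiveStop(byte[] row) {
        return Arrays.copyOf(row, row.length + 1); // pads with 0x00
    }
}
```

So for the 1234555..1234556 question: with a plain exclusive stop row of 1234556 you would not get 1234556 back; use `inclusiveStop` on it if you want that row included.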
On Fri, Mar 29, 2013 at 7:31 PM, Mohit Anchlia
: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Friday, March 29, 2013 1:18 AM
To: user@hbase.apache.org
Subject: Re: Understanding scan behaviour
Could the prefix filter lead to a full table scan? In other words, is
PrefixFilter applied after fetching the rows?
Another question I
some other row? Or is it giving you a row
that does not exist?
Or you mean it is doing a full table scan?
Which version of HBase and what type of filters are you using?
Regards
Ram
On Thu, Mar 28, 2013 at 9:45 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I have key
My understanding is that the row key would start with + for instance.
On Thu, Mar 28, 2013 at 7:53 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Mohit,
I see nothing wrong with the results below. What would you have expected?
JM
'))) AND (TimestampsFilter ( 123,
456))}
Cheers
On Thu, Mar 28, 2013 at 9:02 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I see then I misunderstood the behaviour. My keys are id + timestamp so
that I can do a range type search. So what I really want is to return a
row
where id matches
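For an id + timestamp key like the one described, the range scan for a single id is usually built as below (a sketch; the fixed-width layout and helper names are my assumptions, not from the thread):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class CompositeKey {
    /** Row key: id bytes followed by an 8-byte big-endian timestamp, so
     *  rows for one id sort together, ordered by time within the id. */
    static byte[] rowKey(byte[] id, long timestamp) {
        return ByteBuffer.allocate(id.length + Long.BYTES)
                .put(id).putLong(timestamp).array();
    }

    /** Inclusive scan start: the smallest possible key for this id. */
    static byte[] startFor(byte[] id) {
        return rowKey(id, 0L);
    }

    /** Exclusive scan stop: the id with its last byte incremented,
     *  which sorts after every id||timestamp key
     *  (assumes the last id byte is not 0xFF). */
    static byte[] stopFor(byte[] id) {
        byte[] stop = Arrays.copyOf(id, id.length);
        stop[stop.length - 1]++;
        return stop;
    }
}
```

A scan with `startFor(id)` / `stopFor(id)` then returns exactly the rows whose id matches, without a full table scan.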
I am seeing a weird issue where zk is returning primarymaster (hostname)
as the ROOT region. This host doesn't exist. Everything was working ok until
I ran truncate on a few tables. Does anyone know what might be the issue?
is lost then IMO that's a Critical bug in HBase.
Thanks,
Anil Gupta
On Thu, Jan 10, 2013 at 7:37 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Data also gets written in WAL. See:
http://hbase.apache.org/book/perf.writing.html
On Thu, Jan 10, 2013 at 7:36 AM
-12TB per server of disks. Inserting 600,000 images per day. We
have relatively little compaction activity as we made our write
cache much larger than the read cache, so we don't experience region file
fragmentation as much.
-Jack
On Fri, Jan 11, 2013 at 9:40 AM, Mohit Anchlia mohitanch
for close to 2 years without issues and serves
delivery of images for Yfrog and ImageShack. If you have any
questions about the setup, I would be glad to answer them.
-Jack
On Sun, Jan 6, 2013 at 1:09 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I have done extensive testing
Data also gets written in WAL. See:
http://hbase.apache.org/book/perf.writing.html
On Thu, Jan 10, 2013 at 7:36 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Yes definitely you will get back the data.
Please read the HBase Book that explains things in detail.
-
From: Mohit Anchlia [mohitanch...@gmail.com]
Sent: Monday, January 07, 2013 9:47 AM
To: user@hbase.apache.org
Subject: Re: HBase - Secondary Index
Hi Anoop,
Am I correct in understanding that this indexing mechanism is only
applicable
.
Thanks for the explanation.
Shengjie
On 28 December 2012 04:14, Anoop Sam John anoo...@huawei.com wrote:
Do you have link to that presentation?
http://hbtc2012.hadooper.cn/subject/track4TedYu4.pdf
-Anoop-
From: Mohit
I have done extensive testing and have found that blobs don't belong in the
database but are rather best left out on the file system. Andrew outlined
issues that you'll face, not to mention IO issues when compaction occurs
over large files.
On Sun, Jan 6, 2013 at 12:52 PM, Andrew Purtell
-
From: Mohit Anchlia [mohitanch...@gmail.com]
Sent: Friday, December 28, 2012 9:12 AM
To: user@hbase.apache.org
Subject: Re: HBase - Secondary Index
On Thu, Dec 27, 2012 at 7:33 PM, Anoop Sam John anoo...@huawei.com
wrote:
Yes as you say when the no of rows to be returned
IMHO use DFS instead for blobs and use HBase for the metadata
Sent from my iPhone
On Jan 5, 2013, at 7:58 PM, 谢良 xieli...@xiaomi.com wrote:
Just out of curiosity, why not consider a blob storage system?
Best Regards,
Liang
From: kavishahuja
On Thu, Dec 27, 2012 at 7:33 PM, Anoop Sam John anoo...@huawei.com wrote:
Yes, as you say, when the number of rows to be returned becomes more and
more, the latency will grow as well. Seeks within an HFile block are a
somewhat expensive op now (not much, but still). The new encoding prefix
On Mon, Dec 24, 2012 at 8:27 AM, Ivan Balashov ibalas...@gmail.com wrote:
Vincent Barat vbarat@... writes:
Hi,
Balancing regions between RS is correctly handled by HBase : I mean
that your RSs always manage the same number of regions (the balancer
takes care of it).
Also, check how balanced your region servers are across all the nodes
On Sat, Dec 22, 2012 at 8:50 AM, Varun Sharma va...@pinterest.com wrote:
Note that adding nodes will improve throughput and not latency. So, if your
client application for benchmarking is single threaded, do not expect an
,
You might find this link useful:
http://hbase.apache.org/book/ops.monitoring.html
Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/
On Sat, Dec 22, 2012 at 2:09 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Could someone help me understand what this really means
Is swapping too high at RS side? Anything odd in your RS logs?
Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/
On Sat, Dec 22, 2012 at 4:36 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I looked at that link, but couldn't find anything useful. How do I check
if
it was client who
On Nov 28, 2012, at 9:07 AM, Adrien Mogenet adrien.moge...@gmail.com wrote:
Does HBase really benefit from 64 GB of RAM, since allocating too large a heap
might increase GC time?
Benefit you get is from OS cache
Another question : why not RAID 0, in order to aggregate disk bandwidth ?
On Mon, Nov 26, 2012 at 2:16 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I have a need to move hbase-site.xml to an external location. So in order
to
do that I changed my configuration as shown below. But this doesn't seem
to
be working. It picks up the file but I get an error, seems
) for client to
drive workload into hbase/hdfs... one thread is used on the client side. For
this workload, it looks like the client should not be the bottleneck... Btw, is
there any way to verify this?
Thanks,
Yun
On Sat, Nov 3, 2012 at 1:04 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
What load do
What load do you see on the system? I am wondering if the bottleneck is on the
client side.
On Fri, Nov 2, 2012 at 9:07 PM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
In my HBase cluster, I observed that Put() executes faster than Get(). Since
HBase is optimized towards writes, I wonder what may
What's the best way to see if all handlers are occupied? I am probably
running into similar issue but would like to check.
On Wed, Oct 10, 2012 at 8:24 PM, Stack st...@duboce.net wrote:
On Wed, Oct 10, 2012 at 5:51 AM, Ricardo Vilaça rmvil...@di.uminho.pt
wrote:
However, when adding an
/hadoop/hbase/client/HTable.html#getRegionLocation%28byte[],%20boolean%29
… to know if you are continually hitting the same RS or spreading the load.
On 10/9/12 1:27 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I just have 5 stress client threads writing timeseries data. What I see is
after
:
So you're running on a single regionserver?
On 10/9/12 1:44 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I am using HTableInterface via a pool but I don't see any setAutoFlush
method. I am using the 0.92.1 jar.
Also, how can I see if RS is getting overloaded? I looked at the UI and I
don't
in terms of hardware, we don't have a good starting point.
On Oct 5, 2012, at 7:47 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Do most people start out with default values and then tune HBase? Or
are
there some important configuration parameters that should always be
changed
Are these 120K rows from a single region server?
On Mon, Oct 1, 2012 at 4:01 PM, Juan P. gordoslo...@gmail.com wrote:
Hi guys,
I'm trying to get familiar with HBase and one thing I noticed is that
reads seem to be very slow. I just tried doing a scan 'my_table' to get 120K
records and it
.
Regards
Ram
-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Thursday, September 27, 2012 5:09 AM
To: user@hbase.apache.org
Subject: Re: disable table
I did /hbase/table/SESSIONID_TIMELINE and that seemed to work. I'll
restart
hbase
discussion is going on this issue. You can follow jira associated
with this issue at
https://issues.apache.org/jira/browse/HBASE-6469
On Thu, Sep 27, 2012 at 8:11 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Thanks everyone for the input, it's helpful. I did remove the znode from
/hbase
. Having a look at the
logs could also be useful.
Regards,
Mohammad Tariq
On Thu, Sep 27, 2012 at 2:24 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
When I try to disable table I get:
hbase(main):011:0> disable 'SESSIONID_TIMELINE'
ERROR
at 2:55 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Which node should I look at for logs? Is this the master node? I'll try
hbck.
On Wed, Sep 26, 2012 at 2:19 PM, Mohammad Tariq donta...@gmail.com
wrote:
Hello Mohit,
Try hbck once and see if it shows any
quorum means a problem with the HBase cluster.
Regards,
Mohammad Tariq
On Thu, Sep 27, 2012 at 3:39 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Thanks! I do see Inconsistency. How do I remove the znode. And also could
you please help me understand how this might have happened?
ERROR
, Mohit Anchlia mohitanch...@gmail.com wrote:
I don't see path like /hbase/SESSIONID_TIMELINE
This is what I see
[zk: pprfdaaha303:5181(CONNECTED) 5] ls /hbase/table
[SESSIONID_TIMELINE]
[zk: pprfdaaha303:5181(CONNECTED) 6] get /hbase/table
cZxid = 0x100fe
ctime = Mon Sep 10 15:31:45 PDT 2012
.
On 9/12/12 6:50 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I am using client 0.90.5 jar
Is there a way to limit how many rows can be fetched in one scan call?
Similarly, is there something for columns?
scanner caching is set to 1.
Thanks! If caching is set to 1, then is there a way to limit the number of
rows fetched from the server?
From: Mohit Anchlia mohitanch...@gmail.com
To: user@hbase.apache.org
Sent: Wednesday, September 12, 2012 4:29 PM
Subject: Re
to
read? In timeseries we might be interested in only the most recent data point.
On Mon, Sep 10, 2012 at 10:56 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Is there any recommendation on how many columns one should have per row.
My
columns are 200 bytes. This will help me to decide if I should
,b? This way I can
just get the most recent qualifier, or for timeseries the most recent data point.
On Mon, Sep 10, 2012 at 11:04 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Mon, Sep 10, 2012 at 10:30 AM, Harsh J ha...@cloudera.com wrote:
Hey Mohit,
See http://hbase.apache.org
You can also look at pre-splitting the regions for timeseries type data.
On Mon, Sep 3, 2012 at 1:11 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
Initially your table will contain only one region.
When you reach its maximum size, it will split into 2 regions,
which are going to
On Thu, Aug 30, 2012 at 11:52 PM, Stack st...@duboce.net wrote:
On Thu, Aug 30, 2012 at 5:04 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
In general isn't it better to split the regions so that the load can be
spread across the cluster to avoid hotspots?
Time series data
On Wed, Aug 29, 2012 at 10:50 PM, Stack st...@duboce.net wrote:
On Wed, Aug 29, 2012 at 9:38 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Wed, Aug 29, 2012 at 9:19 PM, Stack st...@duboce.net wrote:
On Wed, Aug 29, 2012 at 3:56 PM, Mohit Anchlia mohitanch...@gmail.com
wrote
predictability of spreading load, so much as
predictability of uptime latency (they don't want an automated split to
happen at a random busy time). Maybe that's what you mean, Mohit?
Ian
On Aug 30, 2012, at 5:45 PM, Stack wrote:
On Thu, Aug 30, 2012 at 7:35 AM, Mohit Anchlia mohitanch
On Wed, Aug 29, 2012 at 9:19 PM, Stack st...@duboce.net wrote:
On Wed, Aug 29, 2012 at 3:56 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
If I use an md5 hash + timestamp rowkey, would HBase automatically detect the
difference in ranges and perform splits? How does splitting work in such cases
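HBase splits regions on size, not on hash ranges, so a hashed key mainly buys you an even spread; pre-splitting the table on the hash-prefix ranges is the usual companion step. A sketch of such a salted key (the 4-byte prefix width and the layout are my assumptions for illustration):

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SaltedKey {
    /**
     * Prefix the key with a few bytes of MD5(id) so keys with sequential
     * timestamps scatter across the key space. With a fixed-width prefix
     * you can pre-split the table on prefix ranges up front.
     */
    static byte[] rowKey(byte[] id, long timestamp) {
        try {
            byte[] hash = MessageDigest.getInstance("MD5").digest(id);
            return ByteBuffer.allocate(4 + id.length + Long.BYTES)
                    .put(hash, 0, 4)   // 4-byte salt (illustrative width)
                    .put(id)
                    .putLong(timestamp)
                    .array();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("MD5 is always available", e);
        }
    }
}
```

The trade-off, as the thread notes elsewhere, is that a plain time-range scan now has to fan out across every prefix bucket.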
with HBase) = performance ++
- Make writes idempotent and independent
before: start rows at arbitrary points in time
after: align rows on 10m (then 1h) boundaries
- Store more data per Key/Value
- Compact your data
- Use short family names
Best wishes
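Aligning rows on fixed time boundaries, as the list above suggests, is plain integer arithmetic: every event lands in the row for its window, with the offset kept in the column qualifier (a sketch; the names are mine):

```java
import java.util.concurrent.TimeUnit;

public class TimeBucket {
    /** Align a millisecond timestamp down to its bucket boundary,
     *  e.g. 12:37:41 -> 12:30:00 with a 10-minute bucket. */
    static long align(long tsMillis, long bucketMillis) {
        return tsMillis - (tsMillis % bucketMillis);
    }

    public static void main(String[] args) {
        long tenMin = TimeUnit.MINUTES.toMillis(10);
        // Row key time component for "now"; the qualifier would hold
        // the offset (now - alignedNow) within the window.
        System.out.println(align(System.currentTimeMillis(), tenMin));
    }
}
```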
El 28/08/2012 20:21, Mohit Anchlia
You mean timestamp as in version? Can you describe your scenario with a more
concrete example?
On Mon, Aug 27, 2012 at 5:01 PM, Ioakim Perros imper...@gmail.com wrote:
Hi,
Is there any way of retrieving two values with totally different
timestamps from a table?
I am using timestamps as iteration
not sure if this is possible and the
solution I described at my previous message (by storing columns 0 and 1 at
all timestamps up to 40 for example) seems inefficient.
Any ideas?
Thanks and regards,
IP
On 08/28/2012 03:33 AM, Mohit Anchlia wrote:
You mean timestamp as in version? Can you describe
On Wed, Aug 22, 2012 at 10:20 AM, Pamecha, Abhishek apame...@x.com wrote:
So then a GET query means one needs to look in every HFile where key falls
within the min/max range of the file.
From another parallel thread, I gather HFiles comprise blocks which, I
think, are an atomic unit of
It's possible that there is a bad or slower disk on Gurjeet's machine. I
think details of iostat and cpu would clear things up.
On Tue, Aug 21, 2012 at 4:33 PM, lars hofhansl lhofha...@yahoo.com wrote:
I get roughly the same (~1.8s) - 100 rows, 200.000 columns, segment size
100
On Mon, Aug 20, 2012 at 3:06 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
On Sat, Aug 18, 2012 at 1:30 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Is it also possible to set up bi-directional replication? In other words,
is
it possible to write to the same table in both HBase
On Sat, Aug 18, 2012 at 12:35 PM, Stack st...@duboce.net wrote:
On Fri, Aug 17, 2012 at 5:36 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Are clients local to slave DC able to read data from HBase slave when
replicating data from one DC to remote DC?
Yes.
Is it also possible to setup
I think availability is sacrificed in the sense that if a region server fails,
clients will find data inaccessible for the time the region takes to come up on
some other server; not to be confused with data loss.
Sent from my iPad
On Aug 7, 2012, at 11:56 PM, Lin Ma lin...@gmail.com wrote:
Thank you Wei!
pattern of HBase
And consistency is not sacrificed? i.e. all distributed clients' updates
will result in sequential / real-time updates? Once an update is done by one
client, all other clients can see the results immediately?
regards,
Lin
On Wed, Aug 8, 2012 at 11:17 PM, Mohit Anchlia
column family.
regards,
Lin
On Mon, Aug 6, 2012 at 12:08 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
On Sun, Aug 5, 2012 at 6:04 AM, Lin Ma lin...@gmail.com wrote:
Hi guys,
I am wondering whether HBase is using column based storage or row based
storage?
- I read some technical
On Sun, Aug 5, 2012 at 6:04 AM, Lin Ma lin...@gmail.com wrote:
Hi guys,
I am wondering whether HBase is using column based storage or row based
storage?
- I read some technical documents and mentioned advantages of HBase is
using column based storage to store similar data together to
From: Mohit Anchlia mohitanch...@gmail.com
To: user@hbase.apache.org
Sent: Tuesday, July 31, 2012 6:09 PM
Subject: sync on writes
In the HBase book it is mentioned that the default behaviour of a write is to
call sync on each node before sending replica copies to the nodes in the
pipeline
and bring the regions up.
Regards,
Mohammad Tariq
On Thu, Aug 2, 2012 at 12:11 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I was reading the blog
http://www.cloudera.com/blog/2012/07/hbase-log-splitting/ and
it looks like if a region server fails then all the regions on that region
HBase 90.4
On Tue, Jul 31, 2012 at 4:18 PM, Michael Segel michael_se...@hotmail.com wrote:
Which release?
On Jul 31, 2012, at 5:13 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I am seeing null row key and I am wondering how I got the nulls in there.
Is it possible when using
Not sure how, but I am getting one null row per 9 writes when I do a GET and
call result.getRow(). Is it even possible to write null rows?
On Tue, Jul 31, 2012 at 4:49 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
HBase 90.4
On Tue, Jul 31, 2012 at 4:18 PM, Michael Segel
michael_se
\xFF\xFE\xC7'\x05\x11
column=S_T_MTX:\x00\x00?\xB8, timestamp=1343670017892, value=1343670136312
\xBF
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
On Fri, Jul 27, 2012 at 8:43 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Fri, Jul
));
}
}
} catch (UnsupportedEncodingException e) {
LOG.error("ISO-8859-1 not supported?", e);
}
return result.toString();
}
On Mon, Jul 30, 2012 at 1:56 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Fri, Jul 27, 2012 at 6:03 PM, Alex Baranau alex.barano...@gmail.com
to alter splits, or is the only way to re-create the tables?
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
On Fri, Jul 27, 2012 at 8:43 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Fri, Jul 27, 2012 at 4:51 PM, Alex Baranau
guide)?
Thanks this is helpful
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
On Thu, Jul 26, 2012 at 8:38 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Thu, Jul 26, 2012 at 1:52 PM, Minh Duc Nguyen mdngu...@gmail.com
wrote
/master.jsp to see load for each
regionserver.
That's
the overall load. If you want to see load per node per table, you
will
need
to query on .META. table (column: info:server)
--K
On Fri, Jul 27, 2012 at 9:07 AM, Mohit Anchlia
mohitanch
=1343350646458
F
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
On Fri, Jul 27, 2012 at 7:24 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Fri, Jul 27, 2012 at 11:48 AM, Alex Baranau alex.barano...@gmail.com
wrote:
You can read
://github.com/sematext/HBaseWD
http://blog.sematext.com/2012/04/09/hbasewd-avoid-regionserver-hotspotting-despite-writing-records-with-sequential-keys/
On Wed, Jul 25, 2012 at 7:54 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Wed, Jul 25, 2012 at 6:53 AM, Alex Baranau alex.barano
://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
On Thu, Jul 26, 2012 at 11:43 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Thu, Jul 26, 2012 at 7:16 AM, Alex Baranau alex.barano...@gmail.com
wrote:
Looks like you have only one region in your table. Right?
If you want your
-internals-and-schema-desig/
or any other intro to hbase presentations over the web.
On Thu, Jul 26, 2012 at 3:50 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
On Thu, Jul 26, 2012 at 10:34 AM, Alex Baranau alex.barano...@gmail.com
wrote:
Is there any specific best practice on how
will require more storage, you need to do the math to
determine whether it is worth the extra resources.
Thanks! I have timeseries data, so I am thinking I should enable bloom
filters for only rows
~ Minh
On Thu, Jul 26, 2012 at 4:30 PM, Mohit Anchlia mohitanch...@gmail.com
wrote
no of splits in localhost:60010 for the table mention ..
On Jul 27, 2012 4:02 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
I added new regions and the performance didn't improve. I think it still
is
the load balancing issue. I want to ensure that my rows are getting
distributed across
your own balance viewer through the HBase API
(list
of RS, regions, storeFiles, their size, etc.)
On Wed, Jul 25, 2012 at 7:32 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Is there an easy way to tell how my nodes are balanced and how the rows
are
distributed in the cluster
On Tue, Jul 24, 2012 at 3:09 AM, Lyska Anton ant...@wildec.com wrote:
Hi,
after the first insert you are closing your table in the finally block. That's
why the thread hangs
I thought I need to close HTableInterface to return it back to the pool. Is
that not the case?
24.07.2012 3:41, Mohit Anchlia
I removed the close call and it works. So it looks like close call should
be called only at the end. But then how does the pool know that the object
is available if it's not returned to the pool explicitly?
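To the question above: the pooled handle's close() is exactly how the object gets back to the pool. The pool hands out a wrapper whose close() returns the underlying resource instead of destroying it (later HTablePool versions wrap the table this way). A generic toy sketch of that pattern, not HBase code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

/** Toy pool: "closing" a borrowed resource means handing it back. */
public class Pool<T> {
    private final Deque<T> idle = new ArrayDeque<>();

    /** Reuse an idle resource if there is one, else create a new one. */
    public synchronized T borrow(Supplier<T> factory) {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    /** What a pooled wrapper's close() would call: return, don't destroy. */
    public synchronized void release(T resource) {
        idle.push(resource);
    }

    public synchronized int idleCount() {
        return idle.size();
    }
}
```

So closing after every operation is fine with a pool that works this way; the hang described above came from closing a handle the code was still relying on.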
On Tue, Jul 24, 2012 at 10:00 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
On Tue
multiple threads.[2]
1
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html
2
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html
On Mon, Jul 23, 2012 at 3:48 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I am writing
Thanks! I was trying it out and I see this message when I use COMPRESSION,
but it works when I don't use it. Am I doing something wrong?
hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
=> 'LZO'}
ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
with hadoop just fine and HBase is running on the same
cluster. Is there something special I need to do for HBase?
Regards,
Dhaval
- Original Message -
From: Mohit Anchlia mohitanch...@gmail.com
To: user@hbase.apache.org
Cc:
Sent: Tuesday, 24 July 2012 4:39 PM
Subject: Re
Is there an easy way to tell how my nodes are balanced and how the rows are
distributed in the cluster?
I am trying to drop one of the tables but on the shell I am told to run
major_compact. I have a couple of questions:
1. How to see if this table has more than one region?
2. And why do I need to run major compact
hbase(main):010:0* drop 'SESSION_TIMELINE'
ERROR: Table SESSION_TIMELINE is enabled.
It must be disabled first in order to get deleted.
Regards,
Mohammad Tariq
On Tue, Jul 24, 2012 at 1:38 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I am trying to drop one of the tables but on the shell I am told to run
major_compact. I have a couple of questions:
1. How to see if this table
} is enabled. Disable it
first.' if enabled?(table_name)
@admin.deleteTable(table_name)
flush(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
major_compact(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
end
On Mon, Jul 23, 2012 at 1:22 PM, Mohit Anchlia
I am writing a stress tool to test my specific use case. In my current
implementation HTable is a global static variable that I initialize just
once and use across multiple threads. Is this ok?
My row key consists of (timestamp - (timestamp % 1000)) and cols are
counters. What I am seeing is
://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html
Thanks! I'll change my code to use HtablePool
On Mon, Jul 23, 2012 at 3:48 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
I am writing a stress tool to test my specific use case. In my current
implementation HTable is a global
DataStoreException(e);
} finally{
cleanUp();
}
}
private void cleanUp() {
if(null != tableInt){
try {
tableInt.close();
} catch (IOException e) {
log.error("Failed while closing table interface", e);
}
}
}
On Mon, Jul 23, 2012 at 4:15 PM, Mohit Anchlia mohitanch
org.apache.hadoop.hbase.util.Bytes.toString(\x48\x65\x6c\x6c\x6f\x20\x48\x42\x61\x73\x65.to_java_bytes)
= Hello HBase
Thanks for the pointers. I'll try it out.
--
Alex K
On Fri, Jul 20, 2012 at 5:39 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:
Is there a command on the shell that convert byte
Ah I see, you mean I change the HBase shell code?
Regards,
Dhaval
From: Mohit Anchlia mohitanch...@gmail.com
To: user@hbase.apache.org
Sent: Friday, 20 July 2012 8:39 PM
Subject: HBase shell
Is there a command on the shell that converts bytes into a char array
I am designing an HBase schema as a timeseries model. Taking advice from the
definitive guide and tsdb, I am planning to use my row key as
metricname:Long.MAX_VALUE - basetimestamp. And the column names would be
timestamp - basetimestamp. My col names would then look like 1,2,3,4,5
.. for instance. I
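The two tricks in that design, a reverse-ordered row time and delta qualifiers, are just arithmetic (a sketch; the method names are mine, not from the guide):

```java
public class TimeSeriesKey {
    /** Row key time component: Long.MAX_VALUE - baseTimestamp. Newer base
     *  timestamps produce smaller values, so the most recent rows sort
     *  first and a scan from the start of a metric hits fresh data without
     *  walking through history. */
    static long reversedBase(long baseTs) {
        return Long.MAX_VALUE - baseTs;
    }

    /** Column qualifier: the event's offset from the row's base timestamp,
     *  which yields the small 1, 2, 3, ... names described above. */
    static long qualifier(long eventTs, long baseTs) {
        return eventTs - baseTs;
    }
}
```

Note the ordering only carries over to byte-lexicographic row keys if the longs are serialized big-endian, which is what HBase's Bytes.toBytes(long) does.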
I just wanted to check if most people copy hbase-site.xml into the classpath
or use some properties file as a resource and then set it in the Configuration
object returned by HBaseConfiguration.create();
in
defining your keys and columns.
-Amandeep
On Tue, Jun 26, 2012 at 1:34 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I am starting out with a new application where I need to store users
clickstream data. I'll have Visitor Id, session id along with other page
related data. I am wondering if I
I am starting out with a new application where I need to store users
clickstream data. I'll have Visitor Id, session id along with other page
related data. I am wondering if I should just key off randomly generated
session id and store all the page related data as columns inside that row
assuming
Why is HBase considered high in consistency, and that it gives up
partition tolerance? My understanding is that the failure of one data node
still doesn't impact clients, as they would re-adjust the list of
available data nodes.
by one region server (even if it resides on
multiple data nodes). If it dies, clients need to wait for the log
replay and region reassignment.
J-D
On Fri, Dec 2, 2011 at 11:57 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
Why is HBase considered high in consistency and that it gives up
-Guide-Lars-George/dp/1449396100
And/Or read the Bigtable paper.
J-D
On Fri, Dec 2, 2011 at 12:01 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
Where can I read more on this specific subject?
Based on your answer I have more questions, but I want to read more
specific information about
: single-row put
atomicity, atomic check-and-set operations, atomic increment operations,
etc.--things that are only possible if you know for sure that exactly one
machine is in control of the row.
Ian
On Dec 2, 2011, at 2:54 PM, Mohit Anchlia wrote:
Thanks for the overview. It's helpful
I have some questions about ACID after reading this page,
http://hbase.apache.org/acid-semantics.html
- Atomicity point 5 : a row must either be a=1,b=1,c=1 or
a=2,b=2,c=2 and must not be something like a=1,b=2,c=1.
How is this internally handled in HBase such that the above is possible?