Look at:
>
>
> http://hbase.apache.org/book.html#_server_side_configuration_for_simple_user_access_operation
>
> On Fri, May 6, 2016 at 10:51 AM, Mohit Anchlia
> wrote:
>
> > Is there a way to implement a simple user/pass authentication in HBase
> > instead of using Kerberos?
Is there a way to implement a simple user/pass authentication in HBase
instead of using Kerberos? Are coprocessors the right way of
implementing such authentication?
A better approach would be to break the data into chunks and create behaviour
similar to indirect blocks.
On Mon, Jun 3, 2013 at 9:12 PM, Asaf Mesika wrote:
> I guess one can hack opening a socket from a Coprocessor Endpoint and push
> its scanned data, thus achieving a stream.
>
>
> On Sun, Jun 2
Thanks, that's a good point about the last byte being max :)
When I query 1234555..1234556, do I also get the row for 1234556 if one exists?
On Sat, Mar 30, 2013 at 6:55 AM, Asaf Mesika wrote:
> Yes.
> Watch out for last byte being max
>
>
> On Fri, Mar 29, 2013 at 7:31 PM, M
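For reference, a minimal sketch of the range semantics discussed above, against the old HTable client API; the table name is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class RangeScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // hypothetical table name

    // A scan covers [startRow, stopRow): the stop row is exclusive.
    byte[] start = Bytes.toBytes("1234555");
    byte[] stop = Bytes.toBytes("1234556");

    // Appending a trailing 0x00 byte makes the scan effectively include
    // the stop row itself (nothing sorts between "1234556" and "1234556\x00").
    // If you instead derive a stop row by incrementing the last byte of a
    // prefix, watch out for the last byte already being max (0xFF), as
    // the reply above warns.
    byte[] stopInclusive = Bytes.add(stop, new byte[] { 0x00 });

    ResultScanner scanner = table.getScanner(new Scan(start, stopInclusive));
    for (Result r : scanner) {
      System.out.println(Bytes.toString(r.getRow()));
    }
    scanner.close();
    table.close();
  }
}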
HBase scans.
> Regards
> Ram
>
> On Fri, Mar 29, 2013 at 11:18 AM, Li, Min wrote:
>
> > Hi, Mohit,
> >
> > Try using ENDROW. STARTROW&ENDROW is much faster than PrefixFilter.
> >
> > "+" ascii code is 43
> > ","
') AND
> (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123,
> 456))"}
>
> Cheers
>
> On Thu, Mar 28, 2013 at 9:02 AM, Mohit Anchlia wrote:
>
> > I see then I misunderstood the behaviour. My keys are id + timestamp so
> > that I can
Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Mohit,
>
> "+" ascii code is 43
> "9" ascii code is 57.
>
> So "+9" is coming after "++". If you don't have any row with the exact
> key "+", HBase will look for the f
My understanding is that the row key would start with "+", for instance.
On Thu, Mar 28, 2013 at 7:53 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Mohit,
>
> I see nothing wrong with the results below. What would you have expected?
>
> JM
>
> 2013/3/2
wrote:
>
> > Could you give us some more insights on this?
> > So you mean when you set the row key as 'azzzaaa', though this row does
> > not exist, the scanner returns some other row? Or is it giving you a row
> > that does not exist?
> >
> > Or you mea
I am seeing a weird issue where zk is pointing to "primarymaster" (hostname)
as the ROOT region. This host doesn't exist. Everything was working ok until
I ran truncate on a few tables. Does anyone know what might be the issue?
> > Hi Mohammad,
> >
> > If the Write Ahead Log (WAL) is "turned on" then in **NO** case should
> > data be lost. HBase is strongly consistent. If you know of any case when
> > the WAL is turned on and data is lost, then IMO that's a Critical bug in HB
10-12TB of disk per server. Inserting 600,000 images per day. We
> have relatively little compaction activity as we made our write
> cache much larger than the read cache, so we don't experience region file
> fragmentation as much.
>
> -Jack
>
> On Fri, Jan 11, 2013 at
w Purtell
> > >> Sent: Thursday, January 10, 2013 9:24 AM
> > >> Subject: Re: Storing images in Hbase
> > >>
> > >> We stored about 1 billion images into hbase with file size up to 10MB.
> > >> It's been running for close to 2 years without
Data also gets written to the WAL. See:
http://hbase.apache.org/book/perf.writing.html
On Thu, Jan 10, 2013 at 7:36 AM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
> Yes, definitely you will get the data back.
>
> Please read the HBase Book, which explains things in detail.
> http:/
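For context, a minimal sketch of the WAL behaviour being referenced, using the client API of that era; table, family, and qualifier names are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WalPut {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // hypothetical table
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    // By default the edit is appended to the WAL before hitting the
    // memstore. Skipping the WAL (API of that era) trades durability
    // for write speed:
    // put.setWriteToWAL(false);
    table.put(put);
    table.close();
  }
}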
order so that the
> > intersections would be fairly straightforward to find.
> >
> > Doing this at the region level isn't so simple.
> >
> > So I have to again ask: why go through and over-complicate things?
> >
> > Just saying...
> >
Hi Anoop,
Am I correct in understanding that this indexing mechanism is only
applicable when you know the row key? It's not truly an inverted index
based on the column value.
Mohit
On Sun, Jan 6, 2013 at 7:48 PM, Anoop Sam John wrote:
> Hi Adrien
> We are
I have done extensive testing and have found that blobs don't belong in the
database but are best left out on the file system. Andrew outlined the
issues that you'll face, not to mention the IO issues when compaction occurs
over large files.
On Sun, Jan 6, 2013 at 12:52 PM, Andrew Purtell wrot
>
> > On 28 December 2012 04:14, Anoop Sam John wrote:
> >
> > > > Do you have link to that presentation?
> > >
> > > http://hbtc2012.hadooper.cn/subject/track4TedYu4.pdf
> > >
> > > -Anoop-
> > >
> > >
> > > From: Mohit Anchl
IMHO use HDFS instead for blobs and use HBase for the meta data
Sent from my iPhone
On Jan 5, 2013, at 7:58 PM, 谢良 wrote:
> Just out of curiosity, why not consider a blob storage system?
>
> Best Regards,
> Liang
>
> From: kavishahuja [kavishah...@yahoo.c
On Thu, Dec 27, 2012 at 7:33 PM, Anoop Sam John wrote:
> Yes, as you say, as the number of rows to be returned grows, the latency
> grows too. A seek within an HFile block is a somewhat expensive op now.
> (Not much, but still.) The new encoding prefix
> trie will be a hu
On Mon, Dec 24, 2012 at 8:27 AM, Ivan Balashov wrote:
>
> Vincent Barat writes:
>
> >
> > Hi,
> >
> > Balancing regions between RS is correctly handled by HBase : I mean
> > that your RSs always manage the same number of regions (the balancer
> > takes care of it).
> >
> > Unfortunately, balanci
Also, check how balanced your region servers are across all the nodes
On Sat, Dec 22, 2012 at 8:50 AM, Varun Sharma wrote:
> Note that adding nodes will improve throughput and not latency. So, if your
> client application for benchmarking is single threaded, do not expect an
> improvement in nu
Increasing the Xceivers could solve this problem if they are in short supply.
>
> Regards
> Ram
>
> On Sat, Dec 22, 2012 at 5:42 PM, Mohammad Tariq wrote:
>
>> yeah
>>
>> Best Regards,
>> Tariq
>> +91-9741563634
>> https://mtariq.jux.com/
>>
>
o high cpu consumption
>
> Best Regards,
> Tariq
> +91-9741563634
> https://mtariq.jux.com/
>
>
> On Sat, Dec 22, 2012 at 5:23 AM, Mohit Anchlia wrote:
>
> > I am just doing a put. This operation generally takes 10ms but in this
> case
> > it took more than 1
ng too high at RS side? Anything odd in your RS logs?
>
> Best Regards,
> Tariq
> +91-9741563634
> https://mtariq.jux.com/
>
>
> On Sat, Dec 22, 2012 at 4:36 AM, Mohit Anchlia wrote:
>
> > I looked at that link, but couldn't find anything useful. How do
ich your client is communicating is getting closed
> before the operation could finish. Maybe it is taking longer than
> usual or something.
>
> Best Regards,
> Tariq
> +91-9741563634
> https://mtariq.jux.com/
>
>
> On Sat, Dec 22, 2012 at 4:08 AM, Mohammad Tari
s hbase-site.xml to be in the classpath.
>
> If your file is located in some subdirectory of your classpath base,
> you would have to give the full path. Or use getResourceAsStream() to
> get it as an InputStream and then use your
> Configuration.addResource(InputStream) approach to loa
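For reference, a sketch of both loading options mentioned above; the paths are hypothetical:

import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ExternalConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Option 1: an absolute path outside the classpath.
    conf.addResource(new Path("/etc/myapp/hbase-site.xml")); // hypothetical path

    // Option 2: a file under the classpath base, loaded as a stream.
    InputStream in = ExternalConfig.class
        .getResourceAsStream("/conf/hbase-site.xml"); // hypothetical location
    conf.addResource(in);

    System.out.println(conf.get("hbase.zookeeper.quorum"));
  }
}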
rmances are still correct with it. I will most
>>> probably give it a try and bench that too... I have one new hard drive
>>> which should arrive tomorrow. Perfect timing ;)
>>>
>>>
>>>
>>> JM
>>>
>>> 2012/11/28, Mohit Anch
On Nov 28, 2012, at 9:07 AM, Adrien Mogenet wrote:
> Does HBase really benefit from 64 GB of RAM, since allocating too large a
> heap might increase GC time?
>
The benefit you get is from the OS cache
> Another question : why not RAID 0, in order to aggregate disk bandwidth ?
> (and thus keep 3x repli
> On Mon, Nov 26, 2012 at 2:16 PM, Mohit Anchlia
> wrote:
> > I have a need to move hbase-site.xml to an external location. So in order
> to
> > do that I changed my configuration as shown below. But this doesn't seem
> to
> > be working. It picks up the file but I
ent to
> drive workload into hbase/hdfs... one thread is used on the client side. For
> this workload, it looks like the client should not be the bottleneck... Btw,
> is there any way to verify this?
> Thanks,
> Yun
>
> On Sat, Nov 3, 2012 at 1:04 AM, Mohit Anchlia wrote:
>
> > W
What load do you see on the system? I am wondering if the bottleneck is on the
client side.
On Fri, Nov 2, 2012 at 9:07 PM, yun peng wrote:
> Hi, All,
> In my HBase cluster, I observed that a Put() executes faster than a Get().
> Since HBase is optimized towards writes, I wonder what may affect Put performa
What's the best way to see if all handlers are occupied? I am probably
running into a similar issue but would like to check.
On Wed, Oct 10, 2012 at 8:24 PM, Stack wrote:
> On Wed, Oct 10, 2012 at 5:51 AM, Ricardo Vilaça
> wrote:
> > However, when adding an additional client node, with also 400 c
It looks as if the RS is able to take the load, but at some point the memory
buffer on the server fills up and slows everything down.
Some interesting points I am seeing: memstore size of 50MB,
fssynclatency_num_ops = 300k, fswritelatency = 180k
On Tue, Oct 9, 2012 at 11:03 AM, Mohit Anchlia wrote
unning on a single regionserver?
>
>
>
>
> On 10/9/12 1:44 PM, "Mohit Anchlia" wrote:
>
> >I am using HTableInterface as a pool but I don't see any setAutoFlush
> >method. I am using the 0.92.1 jar.
> >
> >Also, how can I see if RS is gettin
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#
> getRegionLocation%28byte[],%20boolean%29
>
>
> … to know if you are continually hitting the same RS or spreading the load.
>
>
>
> On 10/9/12 1:27 PM, "Mohit Anchlia" wrote:
>
> >
wrote:
> Mohit,
>
> Michael is right; most parameters usually go one way or the other depending
> on what you are trying to accomplish.
>
> Memstore - raise for high write
>
> Blockcache - raise for high reads
>
> hbase blocksize - higher for sequential workloads, lower for ra
Are these 120K rows from a single region server?
On Mon, Oct 1, 2012 at 4:01 PM, Juan P. wrote:
> Hi guys,
> I'm trying to get familiarized with HBase and one thing I noticed is that
> reads seem to be very slow. I just tried doing a "scan 'my_table'" to get 120K
> records and it took about 50 seco
I did restart the entire cluster and still that didn't help. Looks like once I
get into this race condition there is no way to come out of it?
On Thu, Sep 27, 2012 at 8:00 AM, rajesh babu chintaguntla <
chrajeshbab...@gmail.com> wrote:
> Hi Mohit,
>
> We should not delete znode
0 AM, Mohammad Tariq wrote:
> Hello Mohit,
>
> It should be /hbase/hbase/table/SESSIONID_TIMELINE..Apologies for the
> typo. For rest of the things, I feel Ramkrishna sir has provided a good and
> proper explanation. Please let us know if you still have any doubt or
> question
2012 at 4:27 PM, Mohit Anchlia wrote:
> I don't see path like /hbase/SESSIONID_TIMELINE
> This is what I see
>
> [zk: pprfdaaha303:5181(CONNECTED) 5] ls /hbase/table
> [SESSIONID_TIMELINE]
> [zk: pprfdaaha303:5181(CONNECTED) 6] get /hbase/table
>
> cZxid = 0x100fe
>
thing behind the scenes. As a result, any problem with the
> ZK quorum means a problem with the HBase cluster.
>
> Regards,
> Mohammad Tariq
>
>
>
> On Thu, Sep 27, 2012 at 3:39 AM, Mohit Anchlia wrote:
>
> > Thanks! I do see an inconsistency. How do I remove the znode?
.
> >
> > Regards,
> > Mohammad Tariq
> >
> >
> >
> > On Thu, Sep 27, 2012 at 2:55 AM, Mohit Anchlia wrote:
> >
> >> Which node should I look at for logs? Is this the master node? I'll try
> >> hbck.
> >>
>
Which node should I look at for logs? Is this the master node? I'll try
hbck.
On Wed, Sep 26, 2012 at 2:19 PM, Mohammad Tariq wrote:
> Hello Mohit,
>
> Try hbck once and see if it shows any inconsistency. Also, you can try
> restarting your cluster and deleting the tabl
r caching is set to 1.
>
Thanks! If caching is set > 1, then is there a way to limit the number of rows
fetched from the server?
>
>
> ____
> From: Mohit Anchlia
> To: user@hbase.apache.org
> Sent: Wednesday, September 12, 2012 4:29
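For reference: caching only sizes the per-RPC batch, so a hard cap on the number of rows has to be enforced in the client loop. A sketch, with a hypothetical table name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class CappedScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // hypothetical table
    Scan scan = new Scan();
    scan.setCaching(100);  // rows shipped per RPC, not a result cap
    ResultScanner scanner = table.getScanner(scan);
    int limit = 500;       // hard cap, enforced client-side
    int seen = 0;
    Result r;
    while (seen < limit && (r = scanner.next()) != null) {
      seen++;
      // process r ...
    }
    scanner.close();
    table.close();
  }
}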
cf:column") to a scan.
>
>
>
>
> On 9/12/12 6:50 PM, "Mohit Anchlia" wrote:
>
> >I am using client 0.90.5 jar
> >
> >Is there a way to limit how many rows can be fetched in one scan call?
> >
> >Similarly is there something for colums?
>
>
>
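For reference, the addColumn call mentioned above in full; family and qualifier names are made up for illustration:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class NarrowScan {
  public static void main(String[] args) {
    Scan scan = new Scan();
    // Fetch only one column instead of whole rows.
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("column"));
    // scan.addFamily(Bytes.toBytes("cf")); // or: every column of one family
  }
}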
b? This way I can
just get the most recent qualifier, or for timeseries the most recent data.
>
> On Mon, Sep 10, 2012 at 11:04 PM, Mohit Anchlia
> wrote:
> > On Mon, Sep 10, 2012 at 10:30 AM, Harsh J wrote:
> >
> >> Hey Mohit,
> >>
> >> S
On Mon, Sep 10, 2012 at 10:30 AM, Harsh J wrote:
> Hey Mohit,
>
> See http://hbase.apache.org/book.html#schema.smackdown.rowscols
Thanks! Is there a way in HBase to get the most recently inserted column? Or
a way to sort columns such that I can manage how many columns I want to
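One common trick for newest-first columns (a sketch, not something confirmed in this thread): encode qualifiers as Long.MAX_VALUE minus the timestamp so newer entries sort ahead of older ones.

import org.apache.hadoop.hbase.util.Bytes;

public class NewestFirstQualifier {
  public static void main(String[] args) {
    // Qualifiers come back sorted by their bytes. Encoding them as
    // (Long.MAX_VALUE - timestamp) makes the newest write sort first,
    // so the first KeyValue of the row is the most recent column.
    long now = System.currentTimeMillis();
    byte[] newer = Bytes.toBytes(Long.MAX_VALUE - now);
    byte[] older = Bytes.toBytes(Long.MAX_VALUE - (now - 1000));
    System.out.println(Bytes.compareTo(newer, older) < 0); // true: newer first
  }
}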
You can also look at pre-splitting the regions for timeseries type data.
On Mon, Sep 3, 2012 at 1:11 PM, Jean-Marc Spaggiari wrote:
> Initially your table will contain only one region.
>
> When you reach its maximum size, it will split into 2 regions
> which are going to be distributed over
On Thu, Aug 30, 2012 at 11:52 PM, Stack wrote:
> On Thu, Aug 30, 2012 at 5:04 PM, Mohit Anchlia
> wrote:
> > In general isn't it better to split the regions so that the load can be
> > spread across the cluster to avoid hotspots?
> >
>
> Time series da
As far as I
> > remember, the reason for that isn't predictability of spreading load, so
> > much as predictability of uptime & latency (they don't want an automated
> > split to happen at a random busy time). Maybe that's what you mean, Mohit?
> >
On Wed, Aug 29, 2012 at 10:50 PM, Stack wrote:
> On Wed, Aug 29, 2012 at 9:38 PM, Mohit Anchlia
> wrote:
> > On Wed, Aug 29, 2012 at 9:19 PM, Stack wrote:
> >
> >> On Wed, Aug 29, 2012 at 3:56 PM, Mohit Anchlia
> >> wrote:
> >> &g
On Wed, Aug 29, 2012 at 9:19 PM, Stack wrote:
> On Wed, Aug 29, 2012 at 3:56 PM, Mohit Anchlia
> wrote:
> > If I use an md5 hash + timestamp rowkey, would hbase automatically detect the
> > difference in ranges and perform splits? How does a split work in such cases
> > or
at tool created by Twitter
> engineers to work with HBase) = performance ++
> - Make writes idempotent and independent
>before: start rows at arbitrary points in time
>after: align rows on 10m (then 1h) boundaries
> - Store more data per Key/Value
> - Compact your data
>
mns 0 and 1 with timestamp 0.
>
> With the current HBase's API, I am not sure if this is possible and the
> solution I described at my previous message (by storing columns 0 and 1 at
> all timestamps up to 40 for example) seems inefficient.
>
> Any ideas?
>
> Thanks and r
You mean timestamp as in version? Can you describe your scenario with a more
concrete example?
On Mon, Aug 27, 2012 at 5:01 PM, Ioakim Perros wrote:
> Hi,
>
> Is there any way of retrieving two values with totally different
> timestamps from a table?
>
> I am using timestamps as iteration counts, and I
On Wed, Aug 22, 2012 at 10:20 AM, Pamecha, Abhishek wrote:
> So then a GET query means one needs to look in every HFile where the key falls
> within the min/max range of the file.
>
> From another parallel thread, I gather that HFiles comprise blocks, which, I
> think, are the atomic unit of persisted dat
It's possible that there is a bad or slow disk on Gurjeet's machine. I
think details of iostat and CPU would clear things up.
On Tue, Aug 21, 2012 at 4:33 PM, lars hofhansl wrote:
> I get roughly the same (~1.8s) - 100 rows, 200.000 columns, segment size
> 100
>
>
>
> _
On Mon, Aug 20, 2012 at 3:06 PM, Jean-Daniel Cryans wrote:
> On Sat, Aug 18, 2012 at 1:30 PM, Mohit Anchlia
> wrote:
> > Is it also possible to setup bi-directional replication? In other words
> is
> > it possible to write to the same table to both HBase instances local
On Sat, Aug 18, 2012 at 12:35 PM, Stack wrote:
> On Fri, Aug 17, 2012 at 5:36 PM, Mohit Anchlia
> wrote:
> > Are clients local to slave DC able to read data from HBase slave when
> > replicating data from one DC to remote DC?
>
> Yes.
>
> Is it also possible to se
d (regardless of the location it is
> > issued from) will see the latest value.
> > This is because at any given time exactly one RegionServer is responsible
> > for a specific key (through assignment of key ranges to regions and
> > regions to RegionServers).
> >
I think availability is sacrificed in the sense that if a region server fails,
clients will find data inaccessible until the region comes up on some other
server; this is not to be confused with data loss.
Sent from my iPad
On Aug 7, 2012, at 11:56 PM, Lin Ma wrote:
> Thank you Wei!
>
> Two more comment
On Sun, Aug 5, 2012 at 8:03 PM, Lin Ma wrote:
> Thank you for the informative reply, Mohit!
>
> Some more comments,
>
> 1. actually my confusion about column based storage is from the book
> "HBase The Definitive Guide", chapter 1, section "the Dawn of Big Data
On Sun, Aug 5, 2012 at 6:04 AM, Lin Ma wrote:
> Hi guys,
>
> I am wondering whether HBase is using column based storage or row based
> storage?
>
>- I read some technical documents and mentioned advantages of HBase is
>using column based storage to store similar data together to foster
>
org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:87)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
On Fri, Aug 3, 2012 at 11:44 AM, Jerry Lam wrote:
> Hi Mohit:
>
> You might need to install Cygwin if the tool has dependency on Linux
>
On Wed, Aug 1, 2012 at 12:52 PM, Mohammad Tariq wrote:
> Hello Mohit,
>
> If replication factor is set to some value > 1, then the data is
> still present on some other node (perhaps within the same rack or a
> different one). And, as far as this post is concerned, it tell
com/2012/05/hbase-hdfs-and-durable-sync.html
>
> -- Lars
>
>
>
Thanks, this post is very helpful.
>
>
> From: Mohit Anchlia
> To: user@hbase.apache.org
> Sent: Tuesday, July 31, 2012 6:09 PM
> Subject: sync on writes
>
> In the HBase book it mentioned tha
Not sure how, but I am getting one null row per 9 writes when I do a GET and
check result.getRow(). Is it even possible to write null rows?
On Tue, Jul 31, 2012 at 4:49 PM, Mohit Anchlia wrote:
> HBase 90.4
>
>
> On Tue, Jul 31, 2012 at 4:18 PM, Michael Segel
> wrote:
>
>> Wh
HBase 90.4
On Tue, Jul 31, 2012 at 4:18 PM, Michael Segel wrote:
> Which release?
>
>
> On Jul 31, 2012, at 5:13 PM, Mohit Anchlia wrote:
>
> > I am seeing a null row key and I am wondering how the nulls got in there.
> > Is it possible when using HBaseClient tha
> if ( (ch >= '0' && ch <= '9')
>   || (ch >= 'A' && ch <= 'Z')
>   || (ch >= 'a' && ch <= 'z')
>   || " `~!@#$%^&*()-_=+[]{}\\|;:'\",.<
F\xFF\xFE\xC7'\x05\x11
column=S_T_MTX:\x00\x00?\xB8, timestamp=1343670017892, value=1343670136312
\xBF
> Alex Baranau
> --
> Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
> Solr
>
>
> On Fri, Jul 27, 2012 at 8:43 PM, Mohit Anchlia wrote:
possible to alter splits, or is the only way to re-create the tables?
> Alex Baranau
> --
> Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
> Solr
>
>
> On Fri, Jul 27, 2012 at 8:43 PM, Mohit Anchlia wrote:
>
> > On Fri, Jul 27, 2012 at
\xFF\xFE\xC7:\x10@\x9 column=S_T_MTX:\x00\x00gZ,
timestamp=1343350528880, value=1343350646458
F
> Alex Baranau
> --
> Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
> Solr
>
> On Fri, Jul 27, 2012 at 7:24 PM, Mohit Anchlia wrote:
>
> > On Fri,
; >
> > > > > You may want to check the START/END keys of this region (via master
> > web
> > > > ui
> > > > > or in .META.). Then you can compare with the keys generated by your
> > > app.
> > > > > This should give yo
ide)?
>
Thanks, this is helpful.
>
> Alex Baranau
> --
> Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
> Solr
>
> On Thu, Jul 26, 2012 at 8:38 PM, Mohit Anchlia wrote:
>
> > On Thu, Jul 26, 2012 at 1:52 PM, Minh Duc Nguyen
> &g
splits in localhost:60010 for the table mentioned ..
> On Jul 27, 2012 4:02 AM, "Mohit Anchlia" wrote:
>
> > I added new regions and the performance didn't improve. I think it still
> is
> > the load balancing issue. I want to ensure that my rows are getting
>
On Thu, Jul 26, 2012 at 1:52 PM, Minh Duc Nguyen wrote:
> Mohit,
>
> According to HBase: The Definitive Guide,
>
> The row+column Bloom filter is useful when you cannot batch updates for a
> specific row, and end up with store files which all contain parts of the
> row.
> Alex Baranau
> --
> Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
> Solr
>
> [1]
> http://blog.sematext.com/2012/07/09/introduction-to-hbase/
>
> http://blog.sematext.com/2012/07/09/intro-to-hbase-internals-and-schema-desig/
> or
string md5 split. Just trying to understand how different the key ranges
are.
> Alex Baranau
> --
> Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
> Solr
>
> On Thu, Jul 26, 2012 at 11:43 AM, Mohit Anchlia wrote:
>
> >
Bytes.toBytes(String.valueOf(i));
> }
>
> HBaseAdmin admin = new HBaseAdmin(conf);
> admin.createTable(tableDescriptor, splitKeys);
>
> [2]
> https://github.com/sematext/HBaseWD
>
> http://blog.sematext.com/2012/04/09/hbasewd-avoid-regionserver-hotspotting-despite-writing-
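Filling out the quoted fragment into a complete sketch; the table and family names are hypothetical, and nine split keys yield ten initial regions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplit {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTableDescriptor tableDescriptor = new HTableDescriptor("mytable"); // hypothetical
    tableDescriptor.addFamily(new HColumnDescriptor("cf"));

    // One split key per bucket prefix, mirroring the loop quoted above.
    byte[][] splitKeys = new byte[9][];
    for (int i = 1; i <= 9; i++) {
      splitKeys[i - 1] = Bytes.toBytes(String.valueOf(i));
    }

    HBaseAdmin admin = new HBaseAdmin(conf);
    admin.createTable(tableDescriptor, splitKeys);
    admin.close();
  }
}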
On Wed, Jul 25, 2012 at 6:53 AM, Alex Baranau wrote:
> Hi Mohit,
>
> 1. When talking about particular table:
>
> For viewing rows distribution you can check out how regions are
> distributed. And each region defined by the start/stop key, so depending on
> your key format, e
Is there an easy way to tell how my nodes are balanced and how the rows are
distributed in the cluster?
op just fine and HBase is running on the same
cluster. Is there something special I need to do for HBase?
>
> Regards,
> Dhaval
>
>
> - Original Message -
> From: Mohit Anchlia
> To: user@hbase.apache.org
> Cc:
> Sent: Tuesday, 24 July 2012 4:39 PM
> Subjec
Thanks! I was trying it out and I see this message when I use COMPRESSION,
but it works when I don't use it. Am I doing something wrong?
hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
=> 'LZO'}
ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
r
> > Subject: Re: Insert blocked
> >
> > HTable is not thread safe[1]. It's better to use HTablePool if you want
> to
> > share things across multiple threads.[2]
> >
> > 1
> >
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable
I removed the close call and it works. So it looks like close should be
called only at the end. But then how does the pool know that the object
is available if it's not returned to the pool explicitly?
On Tue, Jul 24, 2012 at 10:00 AM, Mohit Anchlia wrote:
>
>
> On Tue, Jul
On Tue, Jul 24, 2012 at 3:09 AM, Lyska Anton wrote:
> Hi,
>
> after the first insert you are closing your table in the finally block. That's
> why the thread hangs
>
I thought I needed to close the HTableInterface to return it to the pool. Is
that not the case?
>
> 24.07.2012 3:41
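For reference, a sketch of the pooling pattern under discussion. Note the version difference: on 0.90.x the table should be handed back with pool.putTable(table), while in 0.92+ calling close() on the pooled HTableInterface returns it to the pool instead of destroying it. Table, family, and value names are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PooledWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTablePool pool = new HTablePool(conf, 10);       // at most 10 cached tables
    HTableInterface table = pool.getTable("mytable"); // hypothetical table
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);
    } finally {
      // 0.92+: close() hands the table back to the pool for reuse.
      // On 0.90.x, call pool.putTable(table) here instead.
      table.close();
    }
  }
}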
} catch (Exception e) { // (catch/finally reconstructed from the truncated snippet)
  log.error("Error writing ", e);
  throw new DataStoreException(e);
} finally {
  cleanUp();
}
}

private void cleanUp() {
  if (null != tableInt) {
    try {
      tableInt.close();
    } catch (IOException e) {
      log.error("Failed while closing table interface", e);
    }
  }
}
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html
>
> Thanks! I'll change my code to use HtablePool
> On Mon, Jul 23, 2012 at 3:48 PM, Mohit Anchlia wrote:
>
> > I am writing a stress tool to test my specific use case. In my current
>
I am writing a stress tool to test my specific use case. In my current
implementation HTable is a global static variable that I initialize just
once and use it accross multiple threads. Is this ok?
My row key consists of (timestamp - (timestamp % 1000)) and cols are
counters. What I am seeing is t
> >> first.'" if enabled?(table_name)
> >>
> >> @admin.deleteTable(table_name)
> >> flush(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
> >> major_compact(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
> >>
Thanks! But I am still trying to understand these two questions:
1. How to see if this table has more than one region?
2. And why do I need to run major compact if I have more than one region?
On Mon, Jul 23, 2012 at 1:14 PM, Mohammad Tariq wrote:
> Hi Mohit,
>
> A table must be
I am trying to drop one of the tables, but the shell tells me to run
major_compact. I have a couple of questions:
1. How to see if this table has more than one region?
2. And why do I need to run major compact
hbase(main):010:0* drop 'SESSION_TIMELINE'
ERROR: Table SESSION_TIMELINE is enabled. Disabl
quot;Hello HBase"
>
> hbase(main):006:0>
>
> org.apache.hadoop.hbase.util.Bytes.toString("\x48\x65\x6c\x6c\x6f\x20\x48\x42\x61\x73\x65".to_java_bytes)
>
> => "Hello HBase"
>
Thanks for the pointers. I'll try it out.
>
> --
> Alex K
>
>
> On Fri,
On Fri, Jul 20, 2012 at 6:18 PM, Dhaval Shah wrote:
> Mohit, HBase shell is a JRuby wrapper and as such has all the functions
> available that are available through the Java API. So you can import the Bytes
> class and then do a Bytes.toString(), similar to what you'd do in Java
>
>
I just wanted to check whether most people copy hbase-site.xml into the classpath
or use some properties file as a resource and then set it in the Configuration
object returned by HBaseConfiguration.create();
I am designing a HBase schema as a timeseries model. Taking advice from the
definitive guide and tsdb I am planning to use my row key as
"metricname:Long.MAX_VALUE - basetimestamp". And the column names would be
"timestamp-base timestamp". My col names would then look like 1,2,3,4,5
.. for instance
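For reference, a sketch of that key layout; the metric name and the 1-second bucket width are illustrative:

import org.apache.hadoop.hbase.util.Bytes;

public class TimeseriesKey {
  public static void main(String[] args) {
    String metric = "pageviews";        // hypothetical metric name
    long now = System.currentTimeMillis();
    long base = now - (now % 1000);     // bucket-aligned base timestamp

    // Row key: metricname + reversed base timestamp, so newer buckets
    // sort first; column qualifier: small offset from the base (1, 2, 3...).
    byte[] rowKey = Bytes.add(Bytes.toBytes(metric + ":"),
                              Bytes.toBytes(Long.MAX_VALUE - base));
    byte[] qualifier = Bytes.toBytes(now - base);

    System.out.println(Bytes.toStringBinary(rowKey));
    System.out.println(Bytes.toStringBinary(qualifier));
  }
}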
On Jun 27, 2012, at 2:01 PM, Amandeep Khurana wrote:
> Mohit,
>
> What would be your read patterns later on? Are you going to read per
> session, or for a time period, or for a set of users, or process through
> the entire dataset every time? That would play an important role in
I am starting out with a new application where I need to store users
clickstream data. I'll have Visitor Id, session id along with other page
related data. I am wondering if I should just key off a randomly generated
session id and store all the page related data as columns inside that row
assuming t