cluster.
Thanks,
Gautam
Thanks Vladimir. We will try this out soon.
Regards,
Gautam
On Mon, Jun 1, 2015 at 12:22 AM, Vladimir Rodionov
wrote:
> InternalScan has ctor from Scan object
>
> See https://issues.apache.org/jira/browse/HBASE-12720
>
> You can instantiate InternalScan from Scan, set checkOnl
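Following Vladimir's pointer, a memstore-only scan from inside an endpoint coprocessor might look roughly like the sketch below. It assumes the 0.98-era server-side API that HBASE-12720 touches (InternalScan's copy constructor, checkOnlyMemStore(), RegionCoprocessorEnvironment.getRegion()); it is untested, and the endpoint/RPC wiring is omitted:

```java
// Sketch only: memstore-only scan inside an endpoint coprocessor, using the
// InternalScan(Scan) constructor added by HBASE-12720. Names are from the
// 0.98-era org.apache.hadoop.hbase.regionserver API; treat as illustration.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScan;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public class MemstoreOnlyScan {
  static List<Cell> scanMemstoreOnly(RegionCoprocessorEnvironment env, Scan scan)
      throws IOException {
    InternalScan iscan = new InternalScan(scan); // copy settings from the client Scan
    iscan.checkOnlyMemStore();                   // skip HFiles; read the memstore only
    List<Cell> results = new ArrayList<Cell>();
    RegionScanner scanner = env.getRegion().getScanner(iscan);
    try {
      boolean more;
      do {
        more = scanner.next(results); // accumulate cells row by row
      } while (more);
    } finally {
      scanner.close();
    }
    return results;
  }
}
```

One caveat for the "last 10-15 minutes" use case: data written recently may already have been flushed to HFiles, so a memstore-only view is best-effort, not a guaranteed window.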
Hi all,
Here is our use case,
We have a very write-heavy cluster. We also run periodic endpoint
coprocessor-based jobs, every 10 minutes, that operate on the data written in
the last 10-15 minutes.
Is there a way to query only the MemStore from the endpoint
co-processor? The periodic job scans
and checks the status.
Thanks again.
Gautam
On Wed, May 27, 2015 at 2:15 PM, Esteban Gutierrez
wrote:
> Gautam,
>
> Yes, you can increase the size of the memstore to values larger than 128MB
> but usually you go by increasing hbase.hregion.memstore.block.multiplier
> only. D
memstore flushes.
With hbase.hregion.memstore.flush.size=512MB, we are able to increase the
heap utilization by the memstore to 35%.
It would be very helpful for us to understand the implications of a higher
hbase.hregion.memstore.flush.size for a long-running cluster.
Thanks,
Gautam
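For context, both knobs discussed above live in hbase-site.xml, and hbase.hregion.memstore.flush.size takes a value in bytes, not MB. A sketch of the settings (values illustrative, not a recommendation):

```xml
<!-- Illustrative hbase-site.xml fragment; values are examples, not advice. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <!-- 512 MB, expressed in bytes -->
  <value>536870912</value>
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <!-- block updates when a memstore reaches multiplier x flush.size -->
  <value>4</value>
</property>
```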
would really speed up that process.
Thanks again guys. All of this helps.
-Gautam.
On Thu, Apr 30, 2015 at 7:35 AM, James Estes wrote:
> Gautam,
>
> Michael makes a lot of good points. Especially the importance of analyzing
> your use case for determining the row key design. We (J
ld be looking at for
this. Would like to learn about the memory/network footprint of write calls.
thank you,
-Gautam.
On Wed, Apr 29, 2015 at 5:48 PM, Esteban Gutierrez
wrote:
> Hi Gautam,
>
> Your reasoning is correct and that will improve the write performance,
> especially if you alw
.. I'd like to add that we have a very fat rowkey.
- Thanks.
On Wed, Apr 29, 2015 at 5:30 PM, Gautam wrote:
> Hello,
>We've been fighting some ingestion perf issues on HBase and I have
> been looking at the write path in particular. Trying to optimize on write
>
efits?
Cheers,
-Gautam.
per node: ~100
The table is constantly being major and minor compacted although I've
turned off major compactions (hbase.hregion.majorcompaction = 0).
What should I look at next?
Cheers,
-Gautam.
On Thu, Nov 6, 2014 at 1:45 AM, Qiang Tian wrote:
> just in case...did you set memstore flush
*0.4/100 = 192M? I still consistently see the memstore flushing
at ~128M... it barely ever goes above that number. Also uploaded the last
1000 lines of the RS log after the above settings + restart [3]
Here's the verbatim hbase-site.xml [4]
Cheers,
-Gautam.
[1] - postimg.org/image/t2cxb18sh
[2]
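The "*0.4/100 = 192M" figure above appears to divide the global memstore limit (0.4 of heap) by the per-node region count. A back-of-envelope check, assuming a region server heap of 48,000 MB (the actual heap size is truncated in the message; this value is chosen only to make the arithmetic concrete) and the 100 regions per node mentioned earlier:

```java
// Back-of-envelope check of the "0.4/100 = 192M" figure quoted above.
// The heap size is an assumption for illustration; 0.4 corresponds to
// hbase.regionserver.global.memstore.upperLimit in this era of HBase.
public class MemstorePressure {
    /** Per-region memstore share in MB under the global memstore limit. */
    static double perRegionShareMb(double heapMb, double globalFraction, int regions) {
        return heapMb * globalFraction / regions;
    }

    public static void main(String[] args) {
        double heapMb = 48000;   // assumed region server heap (MB)
        double fraction = 0.4;   // global memstore limit as a fraction of heap
        int regions = 100;       // regions per node, from the message above

        double share = perRegionShareMb(heapMb, fraction, regions);
        System.out.println("per-region memstore share: " + share + " MB");
        // ~192 MB: with all regions taking writes, global memstore pressure
        // caps how large each memstore can actually grow before a flush,
        // regardless of a 512 MB hbase.hregion.memstore.flush.size.
    }
}
```

If writes are spread evenly, this per-region ceiling sits well below the configured 512 MB flush size, which would be consistent with flushes observed far under the threshold.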
i can tell.
One of my main concerns was why, even after setting the memstore flush size to
512M, it is still flushing at 128M. Is there a setting I've missed? I'll try to
get more details as I find them.
Thanks and Cheers,
-Gautam.
On Oct 31, 2014, at 10:47 AM, Stack wrote:
> What version
size vs. flushQueueLen. The block
caches are utilizing the extra heap space but not the memstore. The flush
queue lengths have increased, which leads me to believe that it's flushing
way too often without any increase in throughput.
Please let me know where I should dig further. That's a long email, thanks
for reading through :-)
Cheers,
-Gautam.
you post the full exception and the file path ?
> maybe there is a bug in looking up the reference file.
> It seems to not be able to find enough data in the file...
>
> Matteo
>
>
> On Mon, Sep 15, 2014 at 10:08 PM, Gautam wrote:
>
> > Thanks for the reply Matteo.
>
m now left with the option of downgrading my dest cluster to
94, copying data and then upgrading using the upgrade migration tool.
Wanted to know if others have tried this or there are other things I can
do. If not, I'll just go ahead and do this :-)
Cheers,
-Gautam.
On Mon, Sep 15, 201
t snapshot
should I just downgrade to 94, export snapshot and then upgrade to 98? Is
the upgrade migration path different from what export snapshot does (I'd
imagine yes)?
Cheers,
-Gautam.
On Mon, Sep 15, 2014 at 5:14 PM, Ted Yu wrote:
> bq. 98.1 on dest cluster
>
> Lo
To be clear, I am running this command on the dest cluster.
-G.
On Mon, Sep 15, 2014 at 4:58 PM, Gautam wrote:
> Hello,
> I'm trying to copy data between HBase clusters on different
> versions. I am using :
>
> /usr/bin/hbase org.apache.hadoop.hbase.sna
is without
having to upgrade my source cluster or downgrade my dest cluster.
I'm using 94.6 on source cluster and 98.1 on dest cluster.
Cheers,
-Gautam.
end to use for running
the MR jobs over snapshots. Just wanted to know how easy/lightweight
snapshotting can be before we set our eyes on moving the whole thing over.
Cheers,
-Gautam.
On Tue, Aug 12, 2014 at 3:24 PM, Ted Yu wrote:
> Gautam:
> Please take a look at this:
> HBASE-
-use the snapshot based
on freshness. At the least, we need the snapshot to be fresh until the last
hour.
Also, from what I understand, in HBase scans are not consistent at the table
level but are at the row level. Are there other ways I can query the online
table without hurting the write throughput?
Cheers,
-Gautam.
An earlier thread[1] talks about a similar problem. If the 0.96
cluster is fresh you can copy files across and upgrade
1.
http://mail-archives.apache.org/mod_mbox/hbase-user/201311.mbox/%3ccaflnt_ofhg1xgvwygpauymt-m3ncujr9rdqopdi-ad0pzca...@mail.gmail.com%3E
On Wed, Jul 9, 2014 at 1:52 PM, ch h
Thanks Ted for your response, and clarifying the behavior for using HTable
interface.
What would be the behavior when inserting data using a MapReduce job? Would
the recently added records be in the memstore, or do I need to load them for
read queries after the insert is done?
Thanks,
Gautam
On Fri
forever, I could purge older records periodically.
Thanks,
Gautam
On Fri, Aug 23, 2013 at 3:20 AM, Ted Yu wrote:
> Can you tell us the average size of your records and how much heap is
> given to the region servers ?
>
> Thanks
>
> On Aug 23, 2013, at 12:11 AM, Gautam Borah wrot
read
operations?
Thanks,
Gautam