ill be any data locality. If not
> please explain
>
> Thanks
>
--
Kevin O'Dell
Field Engineer
850-496-1298 | ke...@rocana.com
@kevinrodell
<http://www.rocana.com>
ple, I have difficulties answering the following questions:
> * can I shorten my off-peak hours range?
> * can I afford to do compactions more often? or more aggressively?
> * how much does my performance degrade if the region size becomes too large?
>
> HBase version I'm using is 1
ing something terribly wrong?
>
> Thanks in advance!
> Best regards,
> Lydia
1M/s input data will result in only 70MByte/s write
> > throughput to the cluster, which is quite a small amount compared to the 6
> > region servers. The performance should not be bad like this.
> >
> > Does anybody have an idea why the performance stops at 600K/s?
> > Is there anything I have to tune to increase the HBase write throughput?
> >
>
>
> If you double the clients writing, do you see an uptick in throughput?
>
> If you thread dump the servers, can you tell where they are held up? Or
> whether they are doing any work at all, relatively speaking?
>
> St.Ack
>
Fri, Mar 17, 2017 at 1:55 PM, Kevin O'Dell wrote:
> Hi Jeff,
>
> You can definitely lower the memstore; the last time I looked, the lowest
> it could be set to was .1. I would never recommend disabling
> compactions ever, bad things will occur and it can end up i
g some regular operations to save CPU time. I think
> Compaction is one of those we'd like to stop.
>
> thanks
>
> Jeff
>
case a bit more.
> > >>>>>>>>>>
> > >>>>>>>>>> Yes, it's a pretty big row and it's "close" to worst case.
> > >> Normally
> > >>>>>>>>> there
> > >>>>>>>>>> would be fewer qualifiers and the largest qualifiers would be
> > >>>> smaller.
> > >>>>>>>>>>
> > >>>>>>>>>> The reason why these rows get big is because they store
> > >>>>>>>>>> aggregated data in indexed compressed form. This format
> > >>>>>>>>>> allows for extremely fast
> > >>>>>> queries
> > >>>>>>>>>> (on local disk format) over billions of rows (not rows in
> HBase
> > >>>>>> speak),
> > >>>>>>>>>> when touching smaller areas of the data. If I would store the
> data
> > >> as
> > >>>>>>>>> regular
> > >>>>>>>>>> HBase rows things would get very slow unless I had many many
> > >> region
> > >>>>>>>>>> servers.
> > >>>>>>>>>>
> > >>>>>>>>>> The coprocessor is used for doing custom queries on the
> indexed
> > >> data
> > >>>>>>>>> inside
> > >>>>>>>>>> the region servers. These queries are not like a regular row
> > scan,
> > >>>> but
> > >>>>>>>>> very
> > >>>>>>>>>> specific as to how the data is formatted within each column
> > >>>>>> qualifier.
> > >>>>>>>>>>
> > >>>>>>>>>> Yes, this is not possible if HBase loads the whole 500MB each
> > >> time I
> > >>>>>>>>> want
> > >>>>>>>>>> to perform this custom query on a row. Hence my question :-)
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> On Tue, Apr 7, 2015 at 11:03 PM, Michael Segel <
> > >>>>>>>>> michael_se...@hotmail.com>
> > >>>>>>>>>> wrote:
> > >>>>>>>>>>
> > >>>>>>>>>>> Sorry, but your initial problem statement doesn’t seem to
> > parse …
> > >>>>>>>>>>>
> > >>>>>>>>>>> Are you saying that you have a single row with approximately
> 100,000
> > >>>>>>>>> elements
> > >>>>>>>>>>> where each element is roughly 1-5KB in size and in addition
> > there
> > >>>> are
> > >>>>>>>>> ~5
> > >>>>>>>>>>> elements which will be between one and five MB in size?
> > >>>>>>>>>>>
> > >>>>>>>>>>> And you then mention a coprocessor?
> > >>>>>>>>>>>
> > >>>>>>>>>>> Just looking at the numbers… 100K * 5KB means that each row
> > would
> > >>>> end
> > >>>>>>>>> up
> > >>>>>>>>>>> being 500MB in size.
> > >>>>>>>>>>>
> > >>>>>>>>>>> That’s a pretty fat row.
> > >>>>>>>>>>>
> > >>>>>>>>>>> I would suggest rethinking your strategy.
> > >>>>>>>>>>>
> > >>>>>>>>>>>> On Apr 7, 2015, at 11:13 AM, Kristoffer Sjögren <
> > >> sto...@gmail.com
> > >>>>>
> > >>>>>>>>>>> wrote:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Hi
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> I have a row with around 100,000 qualifiers with mostly
> small
> > >>>> values
> > >>>>>>>>>>> around
> > >>>>>>>>>>>> 1-5KB and maybe 5 larger ones around 1-5 MB. A coprocessor
> does
> > >>>>>>>>> random
> > >>>>>>>>>>>> access of 1-10 qualifiers per row.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> I would like to understand how HBase loads the data into
> > memory.
> > >>>>>>>>> Will
> > >>>>>>>>>> the
> > >>>>>>>>>>>> entire row be loaded or only the qualifiers I ask for (like
> > >>>> pointer
> > >>>>>>>>>>> access
> > >>>>>>>>>>>> into a direct ByteBuffer) ?
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Cheers,
> > >>>>>>>>>>>> -Kristoffer
> > >>>>>>>>>>>
> > >>>>>>>>>>> The opinions expressed here are mine, while they may reflect
> a
> > >>>>>>>>> cognitive
> > >>>>>>>>>>> thought, that is purely accidental.
> > >>>>>>>>>>> Use at your own risk.
> > >>>>>>>>>>> Michael Segel
> > >>>>>>>>>>> michael_segel (AT) hotmail.com
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>
> > >>>>
> > >>
> >
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>
--
Kevin O'Dell
Field Enablement, Cloudera
unning the load
> balancer. (HDFS)
> But the point I am trying to make is that with respect to HBase, you still
> need to think about the cluster as a whole.
>
>
> > On Apr 2, 2015, at 7:41 AM, Kevin O'dell
> wrote:
> >
> > Hi Mike,
> >
> > Sorry f
limit for how much each node can
> utilize.
> >>>
> >>> My question this time around has to do with nodes w/ unequal numbers of
> >>> volumes: Does HBase allocate regions based on nodes or volumes on the
> >>> nodes? I am hoping I can add a node with 8 volumes totaling 8X TB and
> >> all
> >>> the volumes will be filled. This even though legacy nodes have 5
> volumes
> >>> and total storage of 5X TB.
> >>>
> >>> Fact or fantasy?
> >>>
> >>> Thanks,
> >>> Ted
> >>>
> >>>
> >>
> >
>
until you kill all the cache right? Or was this an old JIRA I was thinking
of?
On Thu, Nov 20, 2014 at 3:37 PM, Ted Yu wrote:
> The indices are always cached.
>
> Cheers
>
> On Nov 20, 2014, at 12:33 PM, "Kevin O'dell"
> wrote:
>
> > I am also un
erstand what is block cache used
> for?
> >
> > Another question: an HBase write will first go to the WAL and then to the
> > memstore. Will the write to the WAL go to disk directly (a sync operation)
> > before HBase writes to the memstore, or is it possible that the write to
> > the WAL is still buffered somewhere when HBase puts the data into the
> > memstore?
> >
> > Reading the src code may cost me months, so a kind reply will help me a
> > lot...
> > Thanks very much!
> >
> > Best Regards,
> > Ming
> >
>
>
>
>
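On the WAL ordering being asked about: by default (SYNC_WAL durability), HBase appends a put to the WAL and syncs it to disk before the write is acknowledged to the client. A toy Python sketch of that ordering — the class and file layout are invented for illustration, not HBase's actual implementation:

```python
import os
import tempfile

class ToyRegionServer:
    """Toy write path: WAL append + fsync first, then memstore."""
    def __init__(self, wal_path):
        self.wal = open(wal_path, "ab")
        self.memstore = {}

    def put(self, row, value):
        # 1. Append the edit to the write-ahead log.
        self.wal.write(f"{row}={value}\n".encode())
        # 2. Sync the WAL to durable storage before the write is
        #    acknowledged (the default SYNC_WAL behavior).
        self.wal.flush()
        os.fsync(self.wal.fileno())
        # 3. Only now apply the edit to the in-memory memstore.
        self.memstore[row] = value

path = os.path.join(tempfile.mkdtemp(), "wal.log")
rs = ToyRegionServer(path)
rs.put("row1", "v1")
print(rs.memstore["row1"])        # v1
print(open(path).read().strip())  # row1=v1
```

Weaker durability levels (ASYNC_WAL, SKIP_WAL) relax step 2 and trade durability for throughput.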
--
Kevin O'Dell
Systems Engineer, Cloudera
se.apache.org/book.html#trouble.rs.runtime.zkexpired
>
> We are using hbase.client.scanner.caching=1000. I suspect this may be a
> block cache issue. My question is if/how to disable the block cache for the
> scan queries? This is taking out writes and causing instability on the
> cluster.
>
> Thanks,
> Pere
>
size doesn't matter. If the memstore flush size is 128 MB, does Java take
> some memory for each memstore on region startup, or does it only take
> memory while you are using it to insert data?
> Thanks a lot
> On Aug 3, 2014 at 21:27, "Kevin O'dell [via Apache HBase]" <
> ml-node+s
Hi Ozhang,
If you are only bulk loading into HBase, then the memstore flush size should
not matter. You are most likely looking to lower the upper/global memstore
limits.
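For reference, the global memstore limits mentioned here live in hbase-site.xml. A sketch using the pre-1.0 property names (values are illustrative, not recommendations; newer releases rename these to hbase.regionserver.global.memstore.size and .size.lower.limit):

```xml
<!-- Fraction of RS heap all memstores together may use before writes block -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.25</value>
</property>
<!-- Once the upper limit is hit, flushing is forced down to this fraction -->
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.2</value>
</property>
```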
On Aug 3, 2014 2:23 PM, "ozhang" wrote:
> Hello,
> In our hbase cluster the memstore flush size is 128 MB. And to insert data to
>
Hi Jeremy,
I always recommend turning on Snappy compression; I have seen ~20%
performance increases.
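For reference, compression is set per column family. A sketch of enabling Snappy from the HBase shell — the table and family names are placeholders, the native Snappy libraries must be installed on every region server, and on older versions the table must be disabled for the alter:

```
hbase> disable 'mytable'
hbase> alter 'mytable', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
hbase> enable 'mytable'
hbase> major_compact 'mytable'   # rewrite existing HFiles compressed
```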
On Jun 14, 2014 10:25 AM, "Ted Yu" wrote:
> You may have read Doug Meil's writeup where he tried out different
> ColumnFamily
> compressions :
>
> https://blogs.apache.org/hbase/
>
> Cheers
>
>
> O
> > >> Sent: March 3, 2014 19:20
> > >> To: user@hbase.apache.org
> > >> Subject: what is the default size of each Column family memstore
> > >>
> > >> Hi
> > >>
> > >> what is the default size of each Column family memstore
> > >>
> > >
> > >
> >
>
ondering if it's possible to do the 0.92 ==> 0.96
> > jump
> > > without making two jumps: 0.92 ==> 0.94 and then 0.94 ==> 0.96?
> > >
> > > Thanks,
> > > Otis
> > > --
> > > Performance Monitoring * Log Analytics * Search Analytics
> > > Solr & Elasticsearch Support * http://sematext.com/
> > >
> >
>
Rohit,
A 64GB heap is not ideal; you will run into some weird issues. How many
regions are you running per server, how many drives in each node, any other
settings you changed from default?
On Jan 24, 2014 6:22 PM, "Rohit Dev" wrote:
> Hi,
>
> We are running Opentsdb on CDH 4.3 hbase cluster, wi
completely.
>
>
>
> On Sun, Jan 5, 2014 at 2:19 AM, Kevin O'dell >wrote:
>
> > Have you tried writing out an hfile and then bulk loading the data?
> > On Jan 4, 2014 4:01 PM, "Ted Yu" wrote:
> >
> > > bq. Output is written to either Hba
Have you tried writing out an hfile and then bulk loading the data?
On Jan 4, 2014 4:01 PM, "Ted Yu" wrote:
> bq. Output is written to either Hbase
>
> Looks like Akhtar wants to boost write performance to HBase.
> MapReduce over snapshot files targets higher read throughput.
>
> Cheers
>
>
> On
; > > >
> >> > > > > > On Fri, Dec 13, 2013 at 5:09 PM, Ted Yu
> >> > wrote:
> >> > > > > >
> >> > > > > > > Hi,
> >> > > > > > > See http://hbase.apache.org/book.html#client
> >> > > > > > > and http://hbase.apache.org/book.html#rest
> >> > > > > > >
> >> > > > > > > Cheers
> >> > > > > > >
> >> > > > > > >
> >> > > > > > > On Fri, Dec 13, 2013 at 2:06 PM, ados1...@gmail.com <
> >> > > > > ados1...@gmail.com
> >> > > > > > > >wrote:
> >> > > > > > >
> >> > > > > > > > Hello All,
> >> > > > > > > >
> >> > > > > > > > I am a newbie in hbase and wanted to see if there are any
> >> > > > > > > > good hbase clients that I can use to query the underlying
> >> > > > > > > > hbase datastore, or what is the best tool to use?
> >> > > > > > > >
> >> > > > > > > > I am using the command line but am looking for a better
> >> > > > > > > > alternative.
> >> > > > > > > >
> >> > > > > > > > Regards,
> >> > > > > > > > Andy.
> >> > > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
>
>
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/Get-all-columns-in-a-column-family-tp4053696.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
o look in the RS logs to see why this region
> cannot come back online...
>
> JM
>
>
> 2013/12/10 Kevin O'dell
>
> > Hey Raheem,
> >
> > You can sideline the table into tmp (mv /hbase/table /tmp/table), then
> > bring HBase back online. Once HBas
to OFFLINE, (assuming it is
> possible), and try bringing up the cluster again. hbck will not work as
> none of the region servers are up. Anyone have any other ideas?
> Thanks,
> Raheem
>
>
>
>
>
Latency(us)=9738.99] [CLEANUP
> > AverageLatency(us)=27089976]
> > 210 sec: 893742 operations; 502.7 current ops/sec; [UPDATE
> > AverageLatency(us)=14887298.5] [INSERT AverageLatency(us)=6937.27]
> [CLEANUP
> > AverageLatency(us)=14887312.5]
> > 221 sec: 928277 operat
e that grows very fast so the
> region keeps splitting, is it possible that the table could have as many
> regions as it could until all the resources run out?
>
> Thanks.
>
> Kim
>
tore.lowerLimit.
> >
> > So, my questions are:
> >
> > Does it make sense to touch these options in our case?
> > Is this memory reserved or other processes inside regionserver can use
> it?
> >
> > Thanks in advance!
> >
> > --
> > Best Regards
> >
ge the batch size in the
> hbase shell? What's OOME?
>
> @Dhaval: there is only the *.out file in /var/log/hbase. Is the .log file
> located in another directory?
>
>
> 2013/9/11 Kevin O'dell
>
> > You can also check the messages file in /var/log. The OOME m
te:
> >>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> thanks for your fast answer! with size becoming too big I mean I
> >>> have
> >>>> one
> >>>>>> row with thousands of columns. For example:
> >>>>>>
> >>>>>> myrowkey1 -> column1, column2, column3 ... columnN
> >>>>>>
> >>>>>> What do you mean by "change the batch size"? I'll try to create a
> >>> little
> >>>>>> java test code to reproduce the problem. It will take a moment
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> 2013/9/11 Jean-Marc Spaggiari
> >>>>>>
> >>>>>>> Hi John,
> >>>>>>>
> >>>>>>> Just to be sure. What is " the size become too big"? The size of a
> >>>>> single
> >>>>>>> column within this row? Or the number of columns?
> >>>>>>>
> >>>>>>> If it's the number of columns, you can change the batch size to get
> >>>> fewer
> >>>>>>> columns in a single call? Can you share the relevant piece of code
> >>>> doing
> >>>>>>> the call?
> >>>>>>>
> >>>>>>> JM
> >>>>>>>
> >>>>>>>
> >>>>>>> 2013/9/11 John
> >>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> I store a lot of columns for one row key, and if the size becomes
> >>>>>>>> too big the relevant Region Server crashes if I try to get or scan
> >>>>>>>> the row. For example, if I try to get the relevant row I get this
> >>>>>>>> error:
> >>>>>>>>
> >>>>>>>> 2013-09-11 12:46:43,696 WARN org.apache.hadoop.ipc.HBaseServer:
> >>>>>>>> (operationTooLarge): {"processingtimems":3091,"client":"
> >>>>>>> 192.168.0.34:52488
> >>>>>>>> ","ti$
> >>>>>>>>
> >>>>>>>> If I try to load the relevant row via Apache Pig and the
> >>>>>>>> HBaseStorage Loader (using the scan operation), I get this message
> >>>>>>>> and after that the Region Server crashes:
> >>>>>>>>
> >>>>>>>> 2013-09-11 10:30:23,542 WARN org.apache.hadoop.ipc.HBaseServer:
> >>>>>>>> (responseTooLarge):
> >>>>>>>> {"processingtimems":1851,"call":"next(-588368116791418695,
> >>>>>>>> 1), rpc version=1, client version=29,$
> >>>>>>>>
> >>>>>>>> I'm using Cloudera 4.4.0 with 0.94.6-cdh4.4.0
> >>>>>>>>
> >>>>>>>> Any clues?
> >>>>>>>>
> >>>>>>>> regards
> >>>>>>>
> >>>>>
> >>>>
> >>>
> >>
> >>
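The batch-size suggestion in the thread above — Scan.setBatch limits how many columns of a wide row come back per next() call — can be sketched in Python. The chunking below is an illustration of the idea, not HBase's implementation; numbers are illustrative:

```python
def scan_wide_row(columns, batch=1000):
    """Yield a wide row's columns in batches, the way Scan.setBatch
    splits one huge row across multiple next() calls instead of
    materializing every column in a single oversized RPC response."""
    for start in range(0, len(columns), batch):
        yield columns[start:start + batch]  # one RPC-sized chunk

row = [("col%06d" % i, b"v") for i in range(100_000)]
chunks = list(scan_wide_row(row, batch=1000))
print(len(chunks))     # 100 calls instead of one 100k-column response
print(len(chunks[0]))  # 1000
```

Each chunk stays well under the responseTooLarge threshold that was crashing the region server above.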
>
1503836,"responsesize":17766,"class":"HRegionServer","table":"mytestTable","cacheBlocks":true,"families":{"mycf":["ALL"]},"row":"sampleRowKey","queuetimems":0,"method&q
>
>
Shengjie,
Looks like you are binding to localhost on your services. Please make
sure you correct it so you bind to the right interface for ZK.
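The localhost-binding problem usually traces back to /etc/hosts mapping the machine's real hostname to 127.0.0.1, so services resolve themselves to loopback. A sketch of the fix — the hostname and IP here are examples:

```
# /etc/hosts on the VM -- do NOT map the real hostname to 127.0.0.1
127.0.0.1       localhost.localdomain localhost
192.168.56.101  cloudera-vm.example.com cloudera-vm
```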
On Aug 25, 2013 10:32 AM, "Shengjie Min" wrote:
> Sure, Kevin,
>
> http://imgur.com/SQ3Zao9
>
> Shengjie
>
>
> On 25 Augu
access your VM from outside of your VM? Or your
> > > client application is into the VM too?
> > >
> > > If you are outside of your VM, are you able to access the VM from
> > outside?
> > >
> > > Like, are you able to access the WebUI from outside
Can you attach a screen shot of the HMaster UI? It appears ZK is connecting
fine, but can't find .META.
On Aug 25, 2013 8:57 AM, "Shengjie Min" wrote:
> Hi Jean-Marc,
>
> You meant my cloudera vm or my client? Here is my /etc/hosts
>
> cloudera vm:
>
> 127.0.0.1 localhost.localdomain localhos
QQ what is your caching set to?
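For context on the caching question: scanner caching (Scan.setCaching) controls how many rows each next() RPC returns, which dominates full-scan latency over large tables. A Python sketch of the round-trip arithmetic (the row count echoes the 19m-record table mentioned below; everything else is illustrative):

```python
import math

def scan_rpcs(total_rows, caching):
    """Number of client<->regionserver round trips a full scan needs
    when each next() RPC returns `caching` rows."""
    return math.ceil(total_rows / caching)

rows = 19_000_000
print(scan_rpcs(rows, 1))     # 19000000 RPCs with caching=1
print(scan_rpcs(rows, 1000))  # 19000 RPCs with caching=1000
```

The tradeoff: higher caching means fewer round trips but bigger responses and more memory held per RPC on both sides.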
On Aug 22, 2013 11:25 AM, "Pavan Sudheendra" wrote:
> Hi all,
>
> A serious question.. I know this isn't one of the best hbase practices but
> I really want to know..
>
> I am doing a join across 3 tables in hbase.. One table contains 19m records,
> one contains 2m a
> > > example what is the length of the byte array ? Also for java
> primitive,
> > > is
> > > > it 8-byte long ? 4-byte int ?
> > > > In addition to that, what is in the row key ? How long is that in
> > bytes ?
> > > > Same for column family, can you share the names of the column family
> ?
> > > How
> > > > about qualifiers ?
> > > >
> > > > If you have disabled major compactions, you should run one once every
> > few days
> > > > (if not once a day) to consolidate the # of files that each scan will
> > > have
> > > > to open.
> > > >
> > > > 2) I had run the scan keeping in mind the CPU, IO, and other system
> > > > > related parameters. I found them to be normal, with the system load
> being 0.1-0.3.
> > > > >
> > > >
> > > > How many disks do you have in your box ? Have you ever benchmarked
> the
> > > > hardware ?
> > > >
> > > > Thanks,
> > > > Viral
> > > >
> > >
> > >
> > >
> > > --
> > > Thanks and Regards,
> > > Vimal Jain
> >
>
>
> inder
> "you are the average of 5 people you spend the most time with"
> On Aug 4, 2013 8:15 PM, "Kevin O'dell" wrote:
>
> > Hi Vimal,
> >
> > It really depends on your usage pattern but HBase != Bigtable.
> > On Aug 4, 20
Hi Vimal,
It really depends on your usage pattern but HBase != Bigtable.
On Aug 4, 2013 2:29 AM, "Vimal Jain" wrote:
> Hi,
> I have tested read performance after reducing number of column families
> from 14 to 3 and yes there is improvement.
> Meanwhile I was going through the paper published
My questions are:
1) How is this thing working? It is working because Java can overallocate
memory. You will know you are using too much memory when the kernel starts
killing processes.
2) I just have one table whose size at present is about 10-15 GB, so what
should be the ideal memory distribution
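To make the memory-distribution question concrete, here is the usual arithmetic for how a region server heap divides between memstores and the block cache. The 0.4/0.4 fractions are typical values for the relevant settings (hbase.regionserver.global.memstore.* and hfile.block.cache.size; exact defaults vary by version), and the heap size is illustrative:

```python
heap_gb = 8.0
global_memstore_fraction = 0.4  # typical global memstore upper limit
block_cache_fraction = 0.4      # typical hfile.block.cache.size

# HBase enforces that these two fractions together stay at or
# below 0.8 of the heap, leaving the rest for indexes, RPC
# buffers, and general working space.
memstore_gb = heap_gb * global_memstore_fraction
block_cache_gb = heap_gb * block_cache_fraction
other_gb = heap_gb - memstore_gb - block_cache_gb

print(round(memstore_gb, 1), round(block_cache_gb, 1), round(other_gb, 1))
# 3.2 3.2 1.6
```

For a write-heavy cluster you shift heap toward the memstore fraction; for a read-heavy one, toward the block cache.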
not CM managed, correct?
On Aug 1, 2013 1:49 PM, "Jean-Marc Spaggiari"
wrote:
> So I had to remove few reference files and run few hbck to get everything
> back online.
>
> Summary: don't stop your cluster while it's major compacting huge tables ;)
>
> Thanks
If that doesn't work, you probably have an invalid reference file, and you
will find it in the RS logs for the HLog split that never finishes.
On Aug 1, 2013 1:38 PM, "Kevin O'dell" wrote:
> JM,
>
> Stop HBase
> rmr /hbase from zkcli
> Sideline META
> Run
Aug 1, 2013 at 7:08 AM, Jean-Marc Spaggiari <
> > jean-m...@spaggiari.org
> > > wrote:
> >
> > > I tried to remove the znodes but got the same result. So I shutted down
> > all
> > > the RS and restarted HBase, and now I have 0 regions for this table.
>
2405345b58470,
> b7ebfeb63b10997736fd12920fde2bb8, d95bb27cc026511c2a8c8ad155e79bf6,
> 270a9c371fcbe9cd9a04986e0b77d16b, aff4d1d8bf470458bb19525e8aef0759]
>
> Can I just delete those zknodes? Worst case hbck will find them back from
> HDFS if required?
>
> JM
>
> 2013/8/1 Kevin O'dell
>
Does it exist in META or HDFS?
On Aug 1, 2013 8:24 AM, "Jean-Marc Spaggiari"
wrote:
> My master keep logging that:
>
> 2013-07-31 21:52:59,201 WARN
> org.apache.hadoop.hbase.master.AssignmentManager: Region
> 270a9c371fcbe9cd9a04986e0b77d16b not found on server
> node7,60020,1375319044055; failed
>> >
> >> > --
> >> > Best regards,
> >> >
> >> >- Andy
> >> >
> >> > Problems worthy of attack prove their worth by hitting back. - Piet
> Hein
> >> > (via Tom White)
> >> >
> >>
> >
> >
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
>
tice: The information contained in this message,
> including any attachments hereto, may be confidential and is intended to be
> read only by the individual or entity to whom this message is addressed. If
> the reader of this message is not the intended recipient or an agent or
> designee of the intended recipient, please note that any review, use,
> disclosure or distribution of this message or its attachments, in any form,
> is strictly prohibited. If you have received this message in error, please
> immediately notify the sender and/or notificati...@carrieriq.com and
> delete or destroy any copy of this message and its attachments.
>
ay ... but just trying to get a feel of what sort of tuning
> options had to be used to have a stable HBase cluster with 16 or 24GB RS
> heaps).
>
> Thanks in advance,
> --Suraj
>
> > delete all copies of this message.
> >
> >
> > On Sat, Jun 22, 2013 at 10:05 AM, Mohammad Tariq
> > wrote:
> >
> > > Yeah, I forgot to mention that no. of ZKs should be odd. Perhaps those
> > > parentheses made that statement look like an opti
If you run ZK with a DN/TT/RS please make sure to dedicate a hard drive and
a core to the ZK process. I have seen many strange occurrences.
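Giving ZK its own spindle, as suggested above, mostly means putting its transaction log on a drive nothing else writes to, since ZK fsyncs that log on every write. A zoo.cfg sketch — the paths are examples:

```
# zoo.cfg -- keep the txn log on a dedicated, otherwise-idle drive
dataDir=/data1/zookeeper              # snapshots
dataLogDir=/data-zk/zookeeper-txlog   # transaction log: latency-sensitive
tickTime=2000
```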
On Jun 22, 2013 12:10 PM, "Jean-Marc Spaggiari"
wrote:
> You HAVE TO run a ZK3, or else you don't need to have ZK2 and any ZK
> failure will be an issue. You
t; Hi all,
> >>
> >> I am curious if it is possible to have one version (0.94.7) deployed on
> >> the cluster and use the 0.94.1 version on the client to access it.
> >>
> >> Thanks in advance!
> >
>
and offpeak hours (when
> > blocking is tolerable). I am wondering whether any more intelligent
> > solution (say, a clever scheduling policy that blocks only at offpeak
> > hours) exists in the latest HBase version that could minimize the effect
> > of write stream blocking.
> >
> > Regards
> > Yun*
> > *
> >
>
t; > http://www.nosql.se/2011/09/activating-lzo-compression-in-hbase/
> > >
> > > I am getting error at the 7th step in that link (ant compile-native)
> > > you can see the error here* http://paste.ubuntu.com/5745079/*
> > >
> >
> > correct e
reDeadException: Server REPORT rejected;
> currently processing smartdeals-hbase14-snc1.snc1,60020,1370373396890 as
> dead server
>
> (Not sure why it says 3000ms when we have timeout at 30ms)
>
> We have done some GC tuning as well. Wondering what I can tune to keep the
> RS from going down? Any ideas?
> This is a batch-heavy cluster, and we care less about read latency. We can
> increase RAM a bit more but not much (the RS already has 20GB of memory).
>
> Thanks in advance.
>
> Ameya
>
; > > > > > > hbase.rummycircle.com
> >> %2C60020%2C1369877672964.1370382721642
> >> > :
> >> > > > > > > > > java.io.IOException: All datanodes 192.168.20.30:50010
> >>are
> >> > > bad.
> >> > > > > > > > > Aborting...
> >> > > > > > > > > java.io.IOException: All datanodes 192.168.20.30:50010
> >>are
> >> > > bad.
> >> > > > > > > > > Aborting...
> >> > > > > > > > > at
> >> > > > > > > > >
> >> > > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >>org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFS
> >>Client.java:3096)
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > > > *HMaster logs:*
> >> > > > > > > > > 2013-06-05 05:12:50,701 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4702394ms instead of 1ms, this is likely due
> >>to a
> >> > > long
> >> > > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:50,701 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4988731ms instead of 30ms, this is likely due
> >>to
> >> a
> >> > > long
> >> > > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:50,701 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4988726ms instead of 30ms, this is likely due
> >>to
> >> a
> >> > > long
> >> > > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:50,701 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4698291ms instead of 1ms, this is likely due
> >>to a
> >> > > long
> >> > > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:50,711 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4694502ms instead of 1000ms, this is likely due
> >>to a
> >> > long
> >> > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:50,714 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4694492ms instead of 1000ms, this is likely due
> >>to a
> >> > long
> >> > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:50,715 WARN
> >> > > > org.apache.hadoop.hbase.util.Sleeper:
> >> > > > > We
> >> > > > > > > > > slept 4695589ms instead of 6ms, this is likely due
> >>to a
> >> > > long
> >> > > > > > > garbage
> >> > > > > > > > > collecting pause and it's usually bad, see
> >> > > > > > > > >
> >> > http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> >> > > > > > > > > 2013-06-05 05:12:52,263 FATAL
> >> > > > > org.apache.hadoop.hbase.master.HMaster:
> >> > > > > > > > > Master server abort: loaded coprocessors are: []
> >> > > > > > > > > 2013-06-05 05:12:52,465 INFO
> >> > > > > > > > org.apache.hadoop.hbase.master.ServerManager:
> >> > > > > > > > > Waiting for region servers count to settle; currently
> >> checked
> >> > > in
> >> > > > 1,
> >> > > > > > > slept
> >> > > > > > > > > for 0 ms, expecting minimum of 1, maximum of 2147483647,
> >> > > timeout
> >> > > > of
> >> > > > > > > 4500
> >> > > > > > > > > ms, interval of 1500 ms.
> >> > > > > > > > > 2013-06-05 05:12:52,561 ERROR
> >> > > > > org.apache.hadoop.hbase.master.HMaster:
> >> > > > > > > > > Region server hbase.rummycircle.com,60020,1369877672964
> >> > > > reported a
> >> > > > > > > fatal
> >> > > > > > > > > error:
> >> > > > > > > > >
> >> org.apache.zookeeper.KeeperException$SessionExpiredException:
> >> > > > > > > > > KeeperErrorCode = Session expired
> >> > > > > > > > > 2013-06-05 05:12:53,970 INFO
> >> > > > > > > > org.apache.hadoop.hbase.master.ServerManager:
> >> > > > > > > > > Waiting for region servers count to settle; currently
> >> checked
> >> > > in
> >> > > > 1,
> >> > > > > > > slept
> >> > > > > > > > > for 1506 ms, expecting minimum of 1, maximum of
> >>2147483647,
> >> > > > timeout
> >> > > > > > of
> >> > > > > > > > 4500
> >> > > > > > > > > ms, interval of 1500 ms.
> >> > > > > > > > > 2013-06-05 05:12:55,476 INFO
> >> > > > > > > > org.apache.hadoop.hbase.master.ServerManager:
> >> > > > > > > > > Waiting for region servers count to settle; currently
> >> checked
> >> > > in
> >> > > > 1,
> >> > > > > > > slept
> >> > > > > > > > > for 3012 ms, expecting minimum of 1, maximum of
> >>2147483647,
> >> > > > timeout
> >> > > > > > of
> >> > > > > > > > 4500
> >> > > > > > > > > ms, interval of 1500 ms.
> >> > > > > > > > > 2013-06-05 05:12:56,981 INFO
> >> > > > > > > > org.apache.hadoop.hbase.master.ServerManager:
> >> > > > > > > > > Finished waiting for region servers count to settle;
> >> checked
> >> > in
> >> > > > 1,
> >> > > > > > > slept
> >> > > > > > > > > for 4517 ms, expecting minimum of 1, maximum of
> >>2147483647,
> >> > > > master
> >> > > > > is
> >> > > > > > > > > running.
> >> > > > > > > > > 2013-06-05 05:12:57,019 INFO
> >> > > > > > > > > org.apache.hadoop.hbase.catalog.CatalogTracker: Failed
> >> > > > verification
> >> > > > > > of
> >> > > > > > > > > -ROOT-,,0 at address=hbase.rummycircle.com
> >> > > ,60020,1369877672964;
> >> > > > > > > > > java.io.EOFException
> >> > > > > > > > > 2013-06-05 05:17:52,302 WARN
> >> > > > > > > > > org.apache.hadoop.hbase.master.SplitLogManager: error
> >>while
> >> > > > > splitting
> >> > > > > > > > logs
> >> > > > > > > > > in [hdfs://
> >> > > > > > > > >
> >> > > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >>
> 192.168.20.30:9000/hbase/.logs/hbase.rummycircle.com,60020,1369877672964-
> >>splitting
> >> > > > > > > > ]
> >> > > > > > > > > installed = 19 but only 0 done
> >> > > > > > > > > 2013-06-05 05:17:52,321 FATAL
> >> > > > > org.apache.hadoop.hbase.master.HMaster:
> >> > > > > > > > > master:6-0x13ef31264d0
> >> master:6-0x13ef31264d0
> >> > > > > > received
> >> > > > > > > > > expired from ZooKeeper, aborting
> >> > > > > > > > >
> >> org.apache.zookeeper.KeeperException$SessionExpiredException:
> >> > > > > > > > > KeeperErrorCode = Session expired
> >> > > > > > > > > java.io.IOException: Giving up after tries=1
> >> > > > > > > > > Caused by: java.lang.InterruptedException: sleep
> >> interrupted
> >> > > > > > > > > 2013-06-05 05:17:52,381 ERROR
> >> > > > > > > > > org.apache.hadoop.hbase.master.HMasterCommandLine:
> >>Failed
> >> to
> >> > > > start
> >> > > > > > > master
> >> > > > > > > > > java.lang.RuntimeException: HMaster Aborted
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > > > --
> >> > > > > > > > > Thanks and Regards,
> >> > > > > > > > > Vimal Jain
> >> > > > > > > > >
--
Kevin O'Dell
Systems Engineer, Cloudera
r:: 1,3 replyHeader:: 1,51539607605,0
> > > request::
> > > > > > > '/hbase-master/hbaseid,F response::
> s{17179869216
93735633736333966343233,s{17179869216,51539607572,1368812077009,1369166263798,12,0,0,0,59,0,17179869216}
> > > > >
> > > > > 13/05/21 15:59:08 DEBUG zookeeper.ClientCnxn: Reading reply
> > > > > sessionid:0x33ec8aac0f40002, packet:: clientPath:null
> serverPath:null
> > > > > finished:false header:: 3,3 replyHeader:: 3,51539607605,0
> request::
> > > > > '/hbase-master/master,T response::
> s{51539607570,51539607570,1369166263477,1369166263477,0,0,0,17672084227293186,54,0,51539607570}
> > > > >
> > > > > 13/05/21 15:59:08 DEBUG zookeeper.ClientCnxn: Reading reply
> > > > > sessionid:0x33ec8aac0f40002, packet:: clientPath:null
> serverPath:null
> > > > > finished:false header:: 4,4 replyHeader:: 4,51539607605,0
> request::
> > > > > '/hbase-master/master,T response::
> #0001231323237394068626173652d6d6173746572006c6f63616c686f73742c36303030302c31333639313636323633313032,s{51539607570,51539607570,1369166263477,1369166263477,0,0,0,17672084227293186,54,0,51539607570}
> > > > >
> > > > > 13/05/21 15:59:08 DEBUG zookeeper.ClientCnxn: Reading reply
> > > > > sessionid:0x33ec8aac0f40002, packet:: clientPath:null
> serverPath:null
> > > > > finished:false header:: 5,3 replyHeader:: 5,51539607605,0
> request::
> > > > > '/hbase-master/root-region-server,T response::
> s{51539607579,51539607579,1369166269726,1369166269726,0,0,0,0,52,0,51539607579}
> > > > >
> > > > > 13/05/21 15:59:08 DEBUG zookeeper.ClientCnxn: Reading reply
> > > > > sessionid:0x33ec8aac0f40002, packet:: clientPath:null
> serverPath:null
> > > > > finished:false header:: 6,4 replyHeader:: 6,51539607605,0
> request::
> > > > > '/hbase-master/root-region-server,T response::
> #0001231323035384068626173652d6d61737465726c6f63616c686f73742c36303032302c31333639313636323335353932,s{51539607579,51539607579,1369166269726,1369166269726,0,0,0,0,52,0,51539607579}
> > > > >
> > > > > 13/05/21 15:59:08 DEBUG zookeeper.ClientCnxn: Reading reply
> > > > > sessionid:0x33ec8aac0f40002, packet:: clientPath:null
> serverPath:null
> > > > > finished:false header:: 7,3 replyHeader:: 7,51539607605,0
> request::
> > > > > '/hbase-master,F response::
> s{17179869186,17179869186,1368811379390,1368811379390,0,88,0,0,0,12,51539607579}
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Jay Vyas
> > > > > http://jayunit100.blogspot.com
HBCK should pick this up as an orphan if you
run it.
-Kevin
On Fri, May 3, 2013 at 10:34 AM, Dimitri Goldin
wrote:
> Hi Kevin,
>
>
> On 05/03/2013 02:57 PM, Kevin O'dell wrote:
> > That is interesting. I have seen this before, can you please send a
> > hadoop fs -lsr /
the format)
>
>
> Thanks in advance,
> Dimitry
>
> --
> --
> Dimitry Goldin
> Software Developer
>
> Neofonie GmbH
> Robert-Koch-Platz 4
> 10115 Berlin
>
> T: +49 30 246 27 413
>
> gol...@neofonie.de <mailto:gol...@neofonie.de>
> http://www.neofonie.de
>
> Handelsregister
> Berlin-Charlottenburg: HRB 67460
>
> Geschäftsführung:
> Thomas Kitlitschko
>
t 8:39 PM, Kiran >
> > wrote:
> > > >
> > > > > But in HBase, data can be said to be in a denormalised state, as the
> > > > > methodology used for storage is a (column family:column) based
> > > > > flexible schema. Also,
> > >
David,
I have only seen this once before, and I actually had to drop the META
table and rebuild it with HBCK. After that the import worked. I am pretty
sure I cleaned up ZK as well. It was very strange indeed. If you can
reproduce this, can you open a JIRA, as this is no longer a one-off scen
ncing the region server).
>
> BTW: There's nothing relevant in the region server log and the garbage
> collector log is normal.
>
>
> --
> Ron Buckley
at
>>>> org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:81)
>>>> 2013-04-22 16:47:56,830 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>>> aborted compaction on region
>>>> t1_webpage,com.pandora.www:http/shaggy,1366670139658.9f565d5da3468c0725e590dc232abc23.
>>>> after 5mins, 58sec
>>>> 2013-04-22 16:47:56,830 INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread:
>>>> regionserver60020.compactor exiting
>>>> 2013-04-22 16:47:56,832 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>>> Closed
>>>> t1_webpage,com.pandora.www:http/shaggy,1366670139658.9f565d5da3468c0725e590dc232abc23.
>>>> 2013-04-22 16:47:57,363 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
>>>> regionserver60020.logSyncer exiting
>>>> 2013-04-22 16:47:57,366 INFO org.apache.hadoop.hbase.regionserver.Leases:
>>>> regionserver60020 closing leases
>>>> 2013-04-22 16:47:57,366 INFO org.apache.hadoop.hbase.regionserver.Leases:
>>>> regionserver60020 closed leases
>>>> 2013-04-22 16:47:57,366 INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> regionserver60020 exiting
>>>> 2013-04-22 16:47:57,497 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook:
>>>> Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-15,5,main]
>>>> 2013-04-22 16:47:57,497 INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> STOPPED: Shutdown hook
>>>> 2013-04-22 16:47:57,497 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook:
>>>> Starting fs shutdown hook thread.
>>>> 2013-04-22 16:47:57,504 INFO org.apache.hadoop.hbase.regionserver.Leases:
>>>> regionserver60020.leaseChecker closing leases
>>>> 2013-04-22 16:47:57,504 INFO org.apache.hadoop.hbase.regionserver.Leases:
>>>> regionserver60020.leaseChecker closed leases
>>>> 2013-04-22 16:47:57,598 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook:
>>>> Shutdown hook finished.
>>>>
>>>> I would appreciate it very much if someone could explain to me what just
>>>> happened here.
>>>>
>>>> thanks,
>>>>
>>>
>
elete it/ remove all the files in the hbase
> > folder. But the regions are still in transition.
> >
> > Do you have an idea why ?
> >
> > Regards
> >
> > --
> > *CHUNG Fabien
> >
> > *
> >
>
>
>
> --
> Chung Fabien
>
>
>
> EFREI Promo 2013
> Tel : 06 48 03 54 92
>
I just have 1 column family.
>>> The number of columns per row is variable (1~ few thousands)
>>>
>>> Currently i don't use compression or the data_block_encoding.
>>>
>>> Should i?
>>> I want to have faster reads.
>>>
>>> Please suggest.
>>>
>>>
>>> Sincerely,
>>> Prakash Kadel
>
okeeper.out.
>
>
>
>
> On Sat, Mar 30, 2013 at 10:31 PM, Kevin O'dell >wrote:
>
> > Hi Hua,
> >
> > I believe (don't quote me) that you can use the rolling file appender to
> > cap the files at a max size. I know HBase does this, but I am not
eeper.out is too large, how to reduce this file? it is a online
> > system.
> >
> >Best R.
> >
> > beatls
>
--
Kevin O'Dell
Customer Operations Engineer, Cloudera
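For reference, the rolling-file-appender change being suggested in this thread would look roughly like the fragment below. This is a hedged sketch: the property names follow stock ZooKeeper log4j.properties files, so verify them against your version before relying on it.

```properties
# Illustrative log4j.properties fragment: route ZooKeeper logging through a
# size-capped RollingFileAppender instead of letting logs grow unbounded.
zookeeper.root.logger=INFO, ROLLINGFILE

log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=INFO
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/zookeeper.log
# Cap each file at 10MB and keep at most 10 rotated files.
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
```

Note that zookeeper.out itself is stdout/stderr captured by zkServer.sh, so fully taming it may also require redirecting console output; treat the above as a starting point rather than a complete fix.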
hat data size from requests couldn't be loaded at once on memories.
>
> What situations could be expected inside of hbase?
>
> flush memstores?
> Will HBase first accept all requests and then respond with the data in
> order, even if the responses are very late?
> Or will it deny requests?
>
>
tables got into a state that
>>> was not disabled nor enabled. We found that the root cause was the linux
>>> clock skewed more than 5 hours. I googled and understood that hbase can
>>> only handle about a couple of seconds time skew. We were wondering if
>>&
og into the zookeeper server I can do:
>
> -bash-3.2$ hostname
> master
> -bash-3.2$ hbase shell
> HBase Shell; enter 'help' for list of supported commands.
> Type "exit" to leave the HBase Shell
> Version 0.90.4-cdh3u3, r, Thu Jan 26 10:13:36 PST 2012
&g
Ted,
Yes, that is correct; t3 is newer than t1 when speaking of timestamps.
Sorry for the confusion :)
On Thu, Mar 7, 2013 at 1:48 PM, Ted Yu wrote:
> I think there was typo in Kevin's email: t3 should be t1
>
> On Thu, Mar 7, 2013 at 10:42 AM, Kevin O'dell >wrote:
>
> &
n Thu, Mar 7, 2013 at 1:44 PM, Jean-Marc Spaggiari wrote:
> Today are you not going to delete t1 and t2 and keep t3? The marker
> will delete everything older than t2 only, right?
>
> To today we will keep only t3 but the idea is to keep also t1 if I
> understand correctly.
>
be fantastic. If you follow the Kiji/Wibi model
> > of using many versioned cells, being able to delete a specific cell
> without
> > deleting all cells prior to it would be very useful.
> >
> > Jeff
> >
> >
> > On Thu, Mar 7, 2013 at 10:26 AM, Kevin O'd
lls prior to it would be very useful.
>
> Jeff
>
>
> On Thu, Mar 7, 2013 at 10:26 AM, Kevin O'dell >wrote:
>
> > The problem is it kills all older cells. We should probably file a JIRA
> > for this, as this behavior would be nice. Thoughts?:
> >
> >
me that there might be a way to delete a cell in
> a
> > > > column for a particular timestamp, without masking all older values.
> Is
> > > > this true? Or have I been fed lies?
> > > >
> > > > Thanks!
> > > > Natty
> > > >
> > > > --
> > > > http://www.wibidata.com
> > > > office: 1.415.496.9424 x208
> > > > cell: 1.609.577.1600
> > > > twitter: @nattyice <http://www.twitter.com/nattyice>
> > >
> >
> >
> >
> > --
> > http://www.wibidata.com
> > office: 1.415.496.9424 x208
> > cell: 1.609.577.1600
> > twitter: @nattyice <http://www.twitter.com/nattyice>
> >
>
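The delete-marker behavior debated in this thread can be modeled in a few lines. This is an illustrative toy sketch of the semantics under discussion, not HBase code: a column delete marker at timestamp t2 masks every version at or below t2, which is why t1 is lost and only t3 survives.

```python
# Toy model of an HBase column tombstone (illustrative only): a delete
# marker at timestamp td masks all cell versions with timestamp <= td.

def visible_versions(cells, delete_marker_ts):
    """cells maps timestamp -> value; return the versions that survive."""
    return {ts: val for ts, val in cells.items() if ts > delete_marker_ts}

cells = {1: "v@t1", 2: "v@t2", 3: "v@t3"}

# Deleting "at t2" also masks t1 -- there is no way to remove only the
# t2 cell while keeping t1, which is the behavior the JIRA would change:
print(visible_versions(cells, 2))  # {3: 'v@t3'}
```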
at value. The file does not seem to
> exist when I try ls'ing it myself. I'm not sure where it comes from or how
> it should be created.
>
>
> On Tue, Mar 5, 2013 at 4:35 PM, Kevin O'dell >wrote:
>
> > Bryan,
> >
> &
> some problem. Has anyone ever seen this and know what it is or how to fix
> it?
>
> http://pastebin.com/PA6Y9pJN
>
> Thanks!
>
x27;m getting java.io.IOException: java.io.IOException:
> > java.lang.IllegalArgumentException: No 44 in
> > <2e40c841-af5b-4a5e-be0f-e06a953f05cc,1359958540596>, length=13,
> offset=37
> > Caused my master to die, can't restart it.
> >
> > Is there any way
and distcp has different application. Use discp when you need to
> move data across clusters. Do you want to export table data outside your
> cluster? If not then export table is better.
>
> Sent from HTC via Rocket! excuse typo.
>
>
Sorry for the silly question JM, but I have to ask :)
On Mon, Feb 25, 2013 at 10:28 AM, Kevin O'dell wrote:
> If you look at META do you have anything that starts with z?
>
>
> On Mon, Feb 25, 2013 at 10:24 AM, Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
ws starting with z. That's fine.
>
> But when I run this:
> scan '.META.' , {STARTROW => 'z', LIMIT => 10}
> it's scanning the .META. from the beginning. Like if startrow was not
> considered.
>
> Is there a special character at the beginning of the .META. keys? It seems
> not.
>
> JM
>
e them? Or it will be to risky?
>
> JM
>
>
>
>
>
> 2013/2/23 Kevin O'dell
>
> > JM,
> >
> > Here is what I am seeing:
> >
> > 2013-02-23 15:46:14,630 ERROR
> > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed
es. Now it's fixed, but I still have the logs.
>
> I'm redoing all the steps I did. Maybe I will face the issue again. If I'm
> able to reproduce, we might be able to figure where the issue is...
>
> JM
>
> 2013/2/23 Kevin O'dell
>
> > JM,
> >
>
.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1927)
>
> I'm running with 0.94.5 +
> HBASE-7824<https://issues.apache.org/jira/browse/HBASE-7824>+
> HBASE-7865 <https://issues.apache.org/jira/browse/HBASE-7865>. I don't
> think the 2 patchs are related to this issue.
>
> Hadoop fsck reports "The filesystem under path '/' is HEALTHY" without any
> issue.
>
> /hbase/entry/2ebfef593a3d715b59b85670909182c9/a/62b0aae45d59408dbcfc513954efabc7
> does exist in the FS.
>
> What I don't understand is why is the master going down? And how can I fix
> that?
>
> I will try to create the missing directory and see the results...
>
> Thanks,
>
> JM
>
nt installed HBase version is 0.92.1+156.
> I want to upgrade it to the latest stable version.
>
> Can anyone please let me know what is the latest stable version and how can
> it be upgraded to it from Cloudera Manager only.
>
> Thanks,
> Vidosh
t;multi"}
> >
> > It's strange because this is a new hbase setup with almost no traffic on
> > it. I am running a perf test and would not expect this to happen. The
> > regionservers have 12GB heap space and are only using 1GB when that error
> > happens. I just pushed close to 33K rows via a batch and I see the
> > responseTooSlow.
> >
> > I enabled GC logging, but I don't see any GC lockups, and each GC attempt
> > is only taking a few 100 ms.
> >
> > What else could be happening here, any pointers on debugging ? My setup
> is
> > 1 Master running with 1 NN (on the same server) with 3 regionservers
> > running alongside the datanodes.
> >
> > Thanks,
> > Viral
> >
>
ta center or within WAN?
>6. Is hotspoting in HBase cluster is really a issue (big!) nowadays for
>OLAP workloads and real-time analytics?
>
>
> Further directions to more information about region/table hotspotting is
> most welcome.
>
> Many thanks in advance.
>
> Regards,
> Joarder Kamal
>
re tables in hbase with identical schemas, i.e. if table A has 100M
> records put 50M into table B, 50M into table C and delete table A.
> Currently, I use hbase-0.92.1 and hadoop-1.4.0
>
> Thanks.
> Alex.
>
to have? 100? Less? It will all
> be SATA3 drives and I will configure all in RAID0.
>
> It doesn't seem to me to be an issue to lose one node, since data
> will be replicated everywhere else. I will "simply" have to replace
> the failing disk and restart the node, no?
211,45 3,09 44753,09333,07
> >
> > But I'm not sure what it means. I will wait for tomorrow to get more
> > results, but my job will be done over night, so I'm not sure the
> > average will be accurate...
> >
> > JM
> >
> >
&
h is looking like a comb.
>
> I just retried sar and some data is coming.. I will need to let it run
> for few more minutes to get some more data ...
>
> JM
>
>
> 2013/2/7, Kevin O'dell :
> > JM,
> >
> > Okay, I think I see what was happening. Yo
last tests
> I did, it was slower.
>
> Since I will have to re-format the disks anyway, I can re-run the
> tests just in case I did not configured something properly
>
> JM
>
> 2013/2/7, Kevin O'dell :
> > Hey JM,
> >
> > Why RAID0? That has a lot o
em in RAID0, but I'm wondering how low should I
> go?
>
> JM
>
regions? Truncate
> > will remove everything even the splitting. But I want to keep the
> > regions the way they are. I just want to clean them. Is there a simple
> > way to do that with the shell or something like that?
> >
> > Thanks,
> >
> > JM
> >
>
count look like as that can affect your flush
> size?
> Initial split is 37 regions on 6 RegionServers, but at the moment there are
> 71 regions.
>
>
>
> Kevin O'dell wrote
> > Kzurek,
> >
> > Just because you turn off time-based major compactions, it does not
ler 10 on 60020'
> 2013-02-01 15:44:55,087 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Finished memstore flush of ~1.5g/1624094576, currentsize=0.0/0 for region
>
> test,\x00\x00\x00\x00~i\x91\x00\x00\x00\x0D,1359115210217.3b710693d6314c2a987b07dd82451158.
> in 5635ms, sequenceid=58642, compaction requested=false
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/MemStoreFlusher-region-has-too-many-store-files-client-timeout-tp4037887.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
DFS, but not listed in META or deployed on any region server
> >
> >
> > I'm absolutely stumped. I've done some poking around and I can't find
any
> > sort of data surrounding this issue with the exception of similar
symptoms
> > in an exchange on this list of March this year (though the inquirer had
> > different questions). Any help would be appreciated, though I suspect I
will
> > be told 'Upgrade to CDH4' or 'Drop and re-create the table'.
> >
> > Thanks in advance.
> >
> > --
> > Brandon Peskin
> > Senior Systems Administrator
> > Adobe Systems
> > bpes...@adobe.com
> >
> >
> >
> >
> >
, but
anything above 4GB is not recommended and can *greatly* impact performance.
On Mon, Jan 28, 2013 at 7:14 PM, Lsshiu wrote:
>
> 3gb
> More than one thousand.
>
>
> "Kevin O'dell"
>
> > What are you currently using? Also, what is your current region
What are you currently using? Also, what is your current region per node
count?
On Jan 28, 2013 6:50 PM, "Lashing" wrote:
> Hi
> I'm running high in region number, can someone tell me what's the max
> storefile size in CDH3u4, thanks.
>
>
> <property>
>   <name>hbase.hregion.max.filesize</name>
>   <value>1073741824</value>
> </property>
>
>
> Best Regards.
> James Chang
>
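As a back-of-the-envelope illustration of the trade-off in this last thread, raising hbase.hregion.max.filesize directly shrinks the region count for a fixed amount of data. The data volume below is hypothetical, not taken from the thread.

```python
# Illustrative arithmetic: regions needed per server for a fixed data volume
# at different hbase.hregion.max.filesize settings (sizes in bytes).

GB = 1024 ** 3

def regions_needed(data_bytes, max_filesize_bytes):
    # Ceiling division: every started region counts.
    return -(-data_bytes // max_filesize_bytes)

data_per_server = 1024 * GB                      # hypothetical 1 TB per server
print(regions_needed(data_per_server, 1 * GB))   # 1024 regions at 1GB
print(regions_needed(data_per_server, 4 * GB))   # 256 regions at 4GB
```

Going from the thread's 1GB setting toward the upper recommended sizes cuts the region count by the same factor, which is why raising max filesize is the usual first answer to "running high in region number."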