Yeah, thanks. I think that works for me.
Thanks to all of you for all the responses.
2009/8/15 stack
> Or just add below to cron:
>
> echo "flush TABLENAME" |./bin/hbase shell
>
> Or adjust the configuration in hbase so it flushes once a day (see
> hbase-default.xml for all options).
>
> St.Ack
>
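The cron suggestion quoted above could look like the following crontab entry. This is only a sketch: the 03:00 schedule, the /opt/hbase install path, and TABLENAME are assumptions, not details from the thread.

```shell
# Hypothetical crontab line: flush TABLENAME once a day at 03:00.
# /opt/hbase is an assumed install path; substitute your own.
0 3 * * * echo "flush TABLENAME" | /opt/hbase/bin/hbase shell
```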
I'm game, only you're arriving at a bad time, Tim. That's Labor Day w/e.
Fellas won't be around (generally). They'll be out in the desert covered in
dust dancing around giant dolls. So the 8th is probably the day you'll catch
most of the SF crew. Just FYI. Can help you w/ places to go, np.
St.Ack
On
Thanks for trying. Looks like that region is now gone (split is my guess).
Check the master log for mentions of this region to see its history. Can
you correlate the client failure with an event on this region in master
log? It looks like client was being pig-headed fixated on the parent of a
log? It looks like the client was pig-headedly fixated on the parent of a
sp
Absolutely not. VM = low performance, no good.
While it seems that 16 GB of RAM is a lot, it really isn't. I'd rather
have twice that, since Java sucks up RAM like there's no tomorrow, and we
also want a really, really effective OS buffer cache. This improves random
reads quite a bit.
In fact my newer machines
hbase(main):003:0> get '.META.', 'TestTable,0001749889,1250092414985',
{COLUMNS =>'info'}
09/08/14 12:28:10 DEBUG client.HConnectionManager$TableServers: Cache hit
for row <> in tableName .META.: location server 192.168.0.196:60020,
location region name .META.,,1
NativeException: java.lang.NullPoi
Yes, you can definitely do that.
We have tables that we put constraints on in that way. Flushing the
table ensures all data is written to HDFS and then you will not have any
data loss under HBase fault scenarios.
Chen Xinli wrote:
Thanks for your suggestion.
As our insertion is daily, that'
Is that region offline?
Do a:
hbase> get ".META.", "TestTable,0001749889,1250092414985", {COLUMNS =>
"info"}
If so, can you get its history so we can figure how it went offline? (See
region history in UI or grep it in master logs?)
St.Ack
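Grepping the master log for the region name, as suggested, is usually enough to pull out its history (opens, splits, offlining). A minimal sketch, demonstrated against a fabricated sample line since the real log path (assumed to be /opt/hbase/logs here) varies by install:

```shell
# Region name taken from the thread; the log path below is an assumption.
region='TestTable,0001749889,1250092414985'
# On a live install you would run something like:
#   grep "$region" /opt/hbase/logs/hbase-*-master-*.log
# Demonstrated here against a fabricated sample line (not real log output):
printf 'INFO master.ServerManager: received region split report: %s\n' "$region" |
  grep -c "$region"   # → 1
```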
On Fri, Aug 14, 2009 at 9:55 AM, llpind wrote:
>
>
Hey Stack, I tried the following command:
hadoop-0.20.0/bin/hadoop jar hbase-0.20.0/hbase-0.20.0-test.jar randomWrite
10
running a map/reduce job, it failed with the following exceptions in each
node:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact
region server So
Hi All,
I'm going to be in the San Fran area the 6th, 7th, and 8th of September and
would love the chance to meet with some of the HBase users and developers,
if anyone is interested.
I work with a Global Biodiversity Information network (GBIF) that has
several thousand databases publishing data using well defined
Hello,
I am working on a project involving monitoring a large number of
rss/atom feeds. I want to use hbase for data storage and I have some
problems designing the schema. For the first iteration I want to be
able to generate an aggregated feed (last 100 posts from all feeds in
reverse chronologic
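For the "last 100 posts, newest first" case, one common HBase row-key trick is to key each post by (Long.MAX_VALUE - timestamp), so a plain scan returns posts in reverse chronological order. A sketch of the key construction only; the <reversed-millis>:<feed-id> layout and the feed42 id are illustrative assumptions, not something from this thread:

```shell
# HBase sorts row keys lexicographically. Storing the post timestamp as
# (Long.MAX_VALUE - millis), zero-padded to a fixed 19 digits, makes the
# newest post sort first; scanning the first 100 rows then yields the
# aggregated feed. The key layout <reversed-millis>:<feed-id> is an assumption.
reverse_ts_key() {
  local ts_millis="$1" feed_id="$2"
  printf '%019d:%s\n' "$((9223372036854775807 - ts_millis))" "$feed_id"
}
reverse_ts_key 1250092414985 feed42   # → 9223370786762360822:feed42
```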
Hey Ryan,
Do you mean you run multiple VMs on the 1950s using xen or something?
Isn't 16GB a lot for a single box?
Ryan Rawson wrote:
>
> we are using dell 1950s, 8cpu 16gb ram, dual 1tb disk. you can get
> machines in this range for in the $2k range. I run hbase on 1tb of
> data on 20 of t
I want to get PIG running against hadoop/hbase v 0.20.0.
I am having errors compiling after applying all the shim-660 patches.
Does anyone know of an already patched set of PIG src code sitting anywhere
that can be downloaded and built?
THx
_mc
Thanks for the response, JD.
I modified the code according to your suggestion,
and I want to display the number of rows:
for example
09/08/14 15:40:01 INFO mapred.JobClient: RowCounter
09/08/14 15:40:01 INFO mapred.JobClient: Rows=171
So ideally the code below should print 171, right?
Counters c = Jo
I will be out of the office starting 08/14/2009 and will not return until
09/02/2009.
I will be in Europe and plan to check email.
Thanks for your suggestion.
As our insertion is daily, that is, we insert lots of records at a fixed
time, can we just call HBaseAdmin.flush to avoid loss?
I have done some experiments and found that it works. I wonder if it will
cause some other problems?
2009/8/14 Ryan Rawson
> HDFS doesn't allow you to