You could use something like the RowLog library from the Lilly project to
queue the processing from one table to the other every time you do a put in
HBase. As the queue is stored in HBase itself, it's all atomic, and I'm sure
you could get it working with minimal lag between the two tables.
http://ww
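The flow described above can be sketched with an in-memory stand-in. This is only an illustration of the pattern, not the Lilly RowLog API; the table names and functions below are invented for the example:

```python
# In-memory stand-in for the pattern: every put writes the data row and a
# queue entry; a consumer drains the queue into the second table. With the
# real RowLog both the data and the queue live in HBase itself.
main_table = {}     # stands in for the primary HBase table
queue_table = {}    # stands in for the queue (also an HBase table)
derived_table = {}  # the table kept in sync with main_table

def put(row_key, value):
    """Write the data row and enqueue a pending work item."""
    main_table[row_key] = value
    queue_table[row_key] = value

def process_queue():
    """Drain the queue, applying each pending entry to the derived table."""
    while queue_table:
        row_key, value = queue_table.popitem()
        derived_table[row_key] = value

put("doc-1", "payload")
process_queue()
```

The lag between the two tables is then just a question of how often the queue consumer runs.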
Ah ok, most of the time we were using the default Hadoop configuration object
and not the HBase one.
I guess that's a change between 0.20 and 0.90? Would it not make sense for the
TableMapReduceUtil class to do that for you, as you'll need it in every HBase
MapReduce job?
Anyway, I guess we s
doop/Hbase/PoweredBy
> >
> > Thanks. and happy holidays!
> > -Todd
> > --
> > Todd Lipcon
> > Software Engineer, Cloudera
> >
>
--
Dan Harvey | Datamining Engineer
www.mendeley.com/profiles/dan-harvey
Mendeley Limited | London, UK | www.mendeley.com
Registered in England and Wales | Company Number 6419015
e found when I
get some time.
Thanks,
On 20 October 2010 15:29, Stack wrote:
> Hey Dan:
>
> On Wed, Oct 20, 2010 at 2:09 AM, Dan Harvey
> wrote:
> > Hey,
> >
> > We're just looking into ways to run multiple instances/versions of HBase
> for
> >
to either of the
above?
I'm tempted to go towards the single cluster for more efficient use of
hardware but I'm not sure if that's a good idea or not.
Thanks,
gion
canonical_documents,aaebeb30-b624-11df-a52e-0024e8453de6,1284984906477 'IPC
Server handler 99 on 60020'
Is there a reason for HBase blocking this long while flushing, or does it
seem to be a bug?
If no one else is seeing this, is there maybe a way to reduce the chance of
this happening to a region?
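If the blocking turns out to be the store-file or memstore limits being hit, there are a couple of knobs in hbase-site.xml that control when a region starts blocking updates. The values below are only examples, not recommendations, and defaults vary by version:

```xml
<!-- hbase-site.xml: example values only -->
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <!-- block updates when a store has more than this many store files -->
  <value>15</value>
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <!-- block updates when the memstore grows to N x its flush size -->
  <value>4</value>
</property>
```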
Thanks,
Also, we have put up some of our own scripts based on the Ruby scripts here:
http://github.com/Mendeley/hbase-scripts - they might work a little
better.
Thanks to Fred our sysadmin!
On 18 August 2010 17:39, Dan Harvey wrote:
> We've fixed issues with the META being out of sync with th
> >
>
> I believe this jira almost exactly describes our issue:
>
> https://issues.apache.org/jira/browse/HBASE-869
>
> -Luke
>
>
Just looked into HDFS-630 and it looks like it was added in
CDH2 0.20.1+169.89, and we're currently on 0.20.1+169.68. So would updating to
that, so we have the patch, help prevent some of these issues?
Thanks,
On 4 July 2010 18:12, Dan Harvey wrote:
> Hey,
>
> We're using
hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2542)
> >
> > 2010-06-29 11:22:10,344 ERROR
> > org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to close log
> in
> > abort
> > java.io.IOException: All datanodes 10.0.11.4:500
d how long do you wait before moving to the next
node?
Just so you know we currently have 5 nodes and are getting another 10 to add
soon.
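One common shape for the "retry, then move on" behaviour being asked about is a few retries per node with exponential backoff before failing over to the next replica. A minimal sketch, where the function and parameter names are invented for illustration rather than taken from the DFSClient:

```python
import time

def read_with_failover(replicas, fetch, tries_per_node=3, base_pause=1.0):
    """Try each replica in turn, retrying a node with exponential backoff
    before moving on to the next one. `fetch(node)` should raise IOError
    on failure and return the data on success."""
    last_error = None
    for node in replicas:
        for attempt in range(tries_per_node):
            try:
                return fetch(node)
            except IOError as err:
                last_error = err
                # wait base_pause, 2*base_pause, 4*base_pause, ... seconds
                time.sleep(base_pause * (2 ** attempt))
    raise last_error
```

With 3 tries per node and a 1-second base pause, a dead node costs roughly 1 + 2 + 4 = 7 seconds before the client moves on, which is the kind of trade-off the question is about.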
Thanks,
very page a
> single aggregated version.
> Maybe someone has an idea how to design the database? Just like a
> typical non-normalized SQL database?
> Hope you have some ideas :)
> Johannes
>
e the JVM from 1.6.0_18 to 1.6.0_16. What about JVM
>> version 1.6.0_20? Is it safe to run HBase with this one?
>>
>
> That one is fine as far as I know. It prints a warning about
> EscapeAnalysis not being enabled any more but otherwise it seems to
> work for me -- as oppose
e, Jun 1, 2010 at 2:57 AM, Dan Harvey wrote:
>> In what cases would a datanode failure (for example running out of
>> memory in our case) cause HBase data loss?
>
> We should just move past the damaged DN on to the other replicas but
> there are probably places where we ca
ho had the same issue if something doesn't
> seem clear.
>
> I'd also like to point out that those edits were lost because HDFS
> won't support fsSync until 0.21, so data loss is likely in the face of
> machine and process failure.
>
> J-D
>
> On Mon, May 24, 201
this missing split, is there a way to force the master to
rescan the META table? Will it fix problems like this given time?
Thanks,
Hi,
Whilst loading data via a MapReduce job into HBase I started getting
this error:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
contact region server Some server, retryOnlyOne=true, index=0,
islastrow=false, tries=9, numtries=10, i=0, listsize=19,
region=source_document
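The tries=9, numtries=10 in the message come from the client-side retry settings, which can be tuned in the client's hbase-site.xml if the region is just slow to come back. Example values only, not recommendations:

```xml
<!-- client-side hbase-site.xml: example values only -->
<property>
  <name>hbase.client.retries.number</name>
  <!-- how many times to retry before RetriesExhaustedException -->
  <value>20</value>
</property>
<property>
  <name>hbase.client.pause</name>
  <!-- milliseconds to wait between retries -->
  <value>2000</value>
</property>
```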