Cool!
Thanks a lot. That is exactly what I was looking for.
Cheers and have a nice weekend
Julian
On 21.11.2014 18:29, "Ted Yu" wrote:
> Take a look at slide #4 in this talk:
> http://www.slideshare.net/ddlatham/hbase-at-flurry
>
> Cheers
>
> On Fri, Nov 21, 2014
>
> Regards,
>
> Dave
>
> -----Original Message-----
> From: Julian Wissmann [mailto:julianwissm...@gmail.com]
> Sent: Friday, November 21, 2014 7:43 AM
> To: user@hbase.apache.org
> Subject: Re: Current Deployment Sizes
>
> Hi,
>
> thank you! The meetup link comes in handy:
> HBase Sizing Notes
> <http://files.meetup.com/1350427/HBase%20Sizing%20Notes.pdf>
>
> On Fri, Nov 21, 2014 at 6:19 AM, Julian Wissmann
> wrote:
>
> > Hi,
> >
> > I'm currently writing my thesis, in part it is about HBase. I was
> wondering
> >
Hi,
I'm currently writing my thesis, in part it is about HBase. I was wondering
if there are some current numbers for large deployments, e.g. Facebook or
Yahoo. I'm particularly interested in things like number of nodes, amount
of data managed and (if available) query throughput.
The most recent
Hi,
I am pleased to announce the TAggregator [
https://github.com/juwi/HBase-TAggregator].
It is a coprocessor capable of returning an interval-based map of
aggregates.
So far it supports max, min, avg and sum.
It can handle timestamps embedded in the key (as integers) or, as an
alternative timestam
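Since the announcement doesn't show the interface itself, here is a minimal, HBase-free sketch of the interval-based aggregation idea it describes: bucket (timestamp, value) samples into fixed-width intervals and compute one aggregate per bucket. All names below are illustrative assumptions, not the TAggregator's actual API.

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal, HBase-free sketch of interval-based aggregation: group
// (timestamp, value) samples into fixed-width buckets and sum per bucket.
// Names are illustrative, not the coprocessor's real interface.
public class IntervalAggregator {

    /** Sums values per interval of the given width; keys are interval starts. */
    public static Map<Long, Double> sumByInterval(long[] timestamps,
                                                  double[] values, long width) {
        Map<Long, Double> buckets = new TreeMap<>();
        for (int i = 0; i < timestamps.length; i++) {
            long bucketStart = (timestamps[i] / width) * width;
            buckets.merge(bucketStart, values[i], Double::sum);
        }
        return buckets;
    }

    public static void main(String[] args) {
        long[] ts = {0, 30, 70, 110};
        double[] v = {1.0, 2.0, 3.0, 4.0};
        // Two 60-unit buckets: [0,60) -> 1.0+2.0, [60,120) -> 3.0+4.0
        System.out.println(sumByInterval(ts, v, 60)); // {0=3.0, 60=7.0}
    }
}
```

The same shape extends to max, min and avg by swapping the merge function.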
Hi,
I need to grab the row key from a coprocessor's scan, and I am
wondering what the best way to do this would be.
I just came across the fact that Cell.getRow() has been deprecated and
replaced with CellUtil.getRowByte(). However, looking at the code gives me
the impression that
Yup, that would be the question.
Having played around with endpoint coprocessors to implement real-time
timeseries aggregation in 0.92, it would seem to be worth the effort to
implement it if it didn't already exist, now that the coprocessor API has
become somewhat stable.
On 29.01.2014 18:59 wrote
then you will need to restart the cluster
> once
> > so that the regionserver will load the jar containing BDCI.
> >
> > Basically you just need to put the utility class (which I sent a couple of
> > weeks ago) into a jar, copy the jar into the lib folder, restart the cluster
aster and
> restarted the cluster. If yes, can you share the logs of regionserver
> hosting the region test table?
>
> Best Regards,
> Anil
>
> On Sep 21, 2012, at 4:14 AM, Julian Wissmann
> wrote:
>
> > Hi Anil,
> >
> > found some time to test it,
Hi, can you give some more information on this?
What did you put into the jar file? Just DoubleColumnInterpreter, or also
an AggregateImplementation?
What exactly did you put in hbase-site.xml?
Julian
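For reference, one common way to register a region endpoint statically in that era was via hbase-site.xml. The property name below is the real one; the class value is just an example (HBase's own AggregateImplementation), so substitute the class from your jar as needed.

```xml
<!-- Example hbase-site.xml fragment: statically load an endpoint coprocessor
     on every region. The class shown is HBase's bundled
     AggregateImplementation; replace it with the class from your jar. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
```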
2012/9/22 J Mohamed Zahoor
> Ok. I tried two things.
>
> 1) Loading the coproc HTD and loadin
? Did it run
> successfully?
>
> I will try to have a look at your unit test over weekend.
> Thanks,
> Anil Gupta
>
> On Thu, Sep 20, 2012 at 10:29 AM, Julian Wissmann
> wrote:
>
> > Hi,
> >
> > as I've also mentioned in the JIRA Issue:
> >
>
Hi,
as I've also mentioned in the JIRA Issue:
I've written a test, but have problems with Medium Tests requiring a
MiniDFSCluster:
---
Test set: org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
imalColumnInterpreter, it would
> become much easier for anyone to debug this issue.
>
> On Wed, Sep 12, 2012 at 9:27 AM, Julian Wissmann
> wrote:
>
> > Hi,
> >
> > so I'm slowly getting an overview of the code, here. I haven't really
> > understo
works for you, then I will try to write a utility
> to test BigDecimalColumnInterpreter on your setup also.
>
> Thanks,
> Anil
>
> On Mon, Sep 10, 2012 at 9:36 AM, Julian Wissmann
> wrote:
>
> > Hi,
> >
> > I haven't really gotten to working on this, si
>
> return Bytes.toBigDecimal(kv.getValue());
> }
>
> Thanks,
> Anil
>
>
> On Thu, Sep 6, 2012 at 11:43 AM, Julian Wissmann
> wrote:
>
> > 0.92.1 from CDH4. I assume we use the same thing.
> >
> > 2012/9/6 anil gupta
scanner. You have
> > only added the column family in the scanner.
> > I am also assuming that you are writing a byte array of a BigDecimal object
> > as the value of these cells in HBase. Is that right?
> >
> > Thanks,
> > Anil
> >
> >
> > On Thu,
,
> > Scan object and Byte Array of TableName. It should work. Let me know if
> it
> > doesn't work this way.
> > Thanks,
> > Anil Gupta
> >
> > On Wed, Sep 5, 2012 at 1:30 PM, Julian Wissmann <
> jul
They do the exact same thing. In fact the non-KV add looks like this:

public Put add(byte [] family, byte [] qualifier, long ts, byte [] value) {
  List<KeyValue> list = getKeyValueList(family);
  KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
  list.add(kv);
  familyMap.put(kv.getFamily(), list);
  return this;
}
What problem are you trying to solve? Do you want encryption between server
and client, between servers, or encryption of data within HBase? You need to
be more specific.
If one of the first two is what you want: this kind of stuff can easily be
achieved with stunnel or OpenVPN and can probably be
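As a sketch of the stunnel option (hostnames and ports below are made-up examples, not from this thread): run stunnel in client mode next to the application and point the HBase client at the local plaintext port, which stunnel wraps in TLS toward a remote stunnel endpoint.

```ini
; Illustrative stunnel client config (hosts/ports are assumptions):
; plaintext connections to 127.0.0.1:8080 are wrapped in TLS and
; forwarded to a remote stunnel in front of the HBase gateway.
client = yes

[hbase-gateway]
accept = 127.0.0.1:8080
connect = hbase-gw.example.com:8443
```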
>
> On Wed, Sep 5, 2012 at 12:49 PM, Julian Wissmann
> wrote:
>
> > I get supplied with doubles from sensors, but in the end I lose too much
> > precision if I do my aggregations on double; otherwise I'd go for it.
> > I use 0.92.1, from Cloudera CDH4.
> > I
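Julian's precision concern can be seen with a short, self-contained example (illustrative only, not code from this thread): repeatedly adding a decimal fraction drifts in binary double arithmetic, while BigDecimal stays exact, which is why a BigDecimal-based interpreter is attractive here.

```java
import java.math.BigDecimal;

// Illustrative only (not code from this thread): why aggregating sensor
// readings as double loses precision while BigDecimal does not.
public class PrecisionDemo {

    /** Sums `step` n times in binary double arithmetic (accumulates error). */
    public static double sumAsDouble(int n, double step) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += step;
        return s;
    }

    /** Sums the same decimal value exactly using BigDecimal. */
    public static BigDecimal sumAsBigDecimal(int n, String step) {
        BigDecimal s = BigDecimal.ZERO;
        BigDecimal d = new BigDecimal(step); // exact decimal representation
        for (int i = 0; i < n; i++) s = s.add(d);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sumAsDouble(1000, 0.1));       // slightly off from 100
        System.out.println(sumAsBigDecimal(1000, "0.1")); // exactly 100.0
    }
}
```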
is ?
>
> Since you use Double.parseDouble(), looks like it would be more efficient
> to develop DoubleColumnInterpreter.
>
> On Wed, Sep 5, 2012 at 12:07 PM, Julian Wissmann
> wrote:
>
> > Hi,
> > the schema looks like this:
> > RowKey: id,timerange_timestamp,offset (Strin
/9/5 Ted Yu
> You haven't told us the schema of your table yet.
> Your table should have column whose value can be interpreted by
> BigDecimalColumnInterpreter.
>
> Cheers
>
> On Wed, Sep 5, 2012 at 9:17 AM, Julian Wissmann wrote:
>
> > Hi,
>
Hi,
I am currently experimenting with the BigDecimalColumnInterpreter from
https://issues.apache.org/jira/browse/HBASE-6669.
I was thinking the best way for me to work with it would be to use the Java
class and just use that as is.
Imported it into my project and tried to work with it as is, by
Hi,
I'm pretty new to HBase and am currently evaluating it for use in a project
I'm working on.
I use HBase from Cloudera CDH4, which is 0.92.1.
I'm trying to calculate an average via a coprocessor with this code:
Scan scan = new Scan((metricID + "," +
basetime_begin).getBytes(), (metricI