...know if I should use SKIP_WAL to get the same semantic of writeToWAL
(true). I'm doubting it because the name SKIP_WAL implies writeToWAL
false. :)

Best Regards,

Jerry
On Tue, Jun 9, 2015 at 12:03 PM, Ted Yu wrote:

> I see code in this formation in 0.98 branch.
>
> Looking at the unit tests which exercise incrementColumnValue(), they all
> call:
> public long incrementColumnValue(final byte [] row, final byte [] family,
> final byte [] qualifier, final long amount)
> Possibly because the one mentioned by Jerry is deprecated.
>
> FYI
Hi, Jerry

Which version of HBase is it?

-Vlad

On Tue, Jun 9, 2015 at 8:05 AM, Jerry Lam wrote:

> Hi HBase community,
>
> Can anyone confirm that the method incrementColumnValue is implemented
> correctly?
>
> I'm talking about mainly the deprecated method:
>
> @Deprecated
> @Override
> public long incrementColumnValue(final byte [] row, final byte [] family,
> final byte [] qualifier, final ...
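Since the thread never shows the non-deprecated replacement, here is a minimal sketch of both spellings against the HBase 1.0 `Table` API. The table, row, and column names are made up. The mapping assumed here is the one Jerry is asking about: the deprecated variant's `writeToWAL = true` corresponds to the default durability (the edit goes through the WAL), while `writeToWAL = false` is the case `SKIP_WAL` is named after, so to keep the `writeToWAL (true)` semantics you should not pass `SKIP_WAL`.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementDurabilitySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Table table = conn.getTable(TableName.valueOf("counts"))) { // hypothetical table

            // Old writeToWAL = true: let the edit go through the WAL as usual.
            table.incrementColumnValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
                    Bytes.toBytes("q"), 1L, Durability.SYNC_WAL);

            // Old writeToWAL = false: this is what SKIP_WAL expresses.
            table.incrementColumnValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
                    Bytes.toBytes("q"), 1L, Durability.SKIP_WAL);
        }
    }
}
```

This is a sketch, not something runnable outside a cluster: it assumes a reachable HBase instance and a `counts` table with family `f`.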
...multi-threading in an MR job it would be very helpful. We are testing
both implementations (with threading and without), but the threaded
solution is causing the problem.

We are processing log files with PUTs in the Map and a followup
incrementColumnValue() to a separate "counts" table in the Reducer. The
reduce phase uses multi-threading. ...
thanks


On Wed, Jul 20, 2011 at 6:28 PM, Doug Meil wrote:

> Hi there-
>
> ...
>
> 1) the fact that HTable isn't thread-safe
>
> 2) how counters work
>
> Even if you are incrementing counters, you shouldn't be sharing HTable
> instances across threads.
>
> Counters get updated atomically on the RS, not on the client.
>
> Counter behavior isn't in the HBase book and it needs to be. I'll add it
> to the list.
>
>
> On 7/20/11 7:44 PM, "large data" wrote:
>
>> I have an HTable instance instantiated as part of a singleton service.
>> This singleton service is called from different threads from different
>> parts of the app. Reading through the HTable docs suggests not to use a
>> single HTable instance for updates; if that's true, how can
>> incrementColumnValue provide thread safety?
>>
>> thanks
I have an HTable instance instantiated as part of a singleton service. This
singleton service is called from different threads from different parts of
the app. Reading through the HTable docs suggests not to use a single HTable
instance for updates; if that's true, how can incrementColumnValue provide
thread safety?

thanks
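Doug's two points can be sketched against the current client API, where the split is explicit: `Connection` is the heavyweight, thread-safe object to hold in a singleton, and `Table` (the successor of `HTable`) is the lightweight, non-thread-safe handle each caller gets for itself. The table and column names below are made up.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CounterService {
    private final Connection conn; // thread-safe; share one per process

    public CounterService(Connection conn) {
        this.conn = conn;
    }

    // Safe to call from many threads: each call gets its own Table handle.
    public long bump(byte[] row) throws IOException {
        try (Table table = conn.getTable(TableName.valueOf("counts"))) { // hypothetical table
            // The increment itself is applied atomically on the region server,
            // which is why concurrent callers still see correct totals even
            // though the client-side handle is not shared.
            return table.incrementColumnValue(row,
                    Bytes.toBytes("f"), Bytes.toBytes("hits"), 1L);
        }
    }
}
```

So the atomicity of counters and the non-thread-safety of the client handle are orthogonal: the server serializes the increments; the client object still must not be shared.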
Thanks Jesse. Changing the 10 to 10L made it work.

On Tue, Mar 29, 2011 at 8:59 AM, Jesse Hutton wrote:

> Hi,
>
> It looks like the problem is that the initial value you're inserting in
> the column is an int, while HTable#incrementColumnValue() expects a long.
> Instead of:
>
>> I enter data by :-
>> theput.add(Bytes.toBytes("uid"),Bytes.toBytes("1"), 130108782L + ...
>>
>> ...ngLengthOrOffset(Bytes.java:502)
>> at org.apache.hadoop.hbase.util.Bytes.toLong(Bytes.java:480)
>> at org.apache.hadoop.hbase.regionserver.HRegion.incrementColumnValue(HRegion.java:3134)
>> at org.apache.hadoop.hbase.regionserver.HRegionServer.incrementColumnValue(HRegionServer.java:2486)
>> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
>> at sun.reflec...
>>
>> I guess I have tried all possible combinations of datatypes... I could
>> not even find a decent example of incrementColumnValue()
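The type mismatch Jesse diagnosed is easy to reproduce without a cluster: `Bytes.toBytes(10)` writes 4 bytes while `Bytes.toBytes(10L)` writes 8, and the server-side `Bytes.toLong` in the stack trace insists on exactly 8. Here is a rough stand-in using only `java.nio` (the real HBase `Bytes` class is not used, just mimicked):

```java
import java.nio.ByteBuffer;

public class IntVsLongSketch {
    // Stand-ins for Bytes.toBytes(int) / Bytes.toBytes(long): big-endian, 4 vs 8 bytes.
    static byte[] toBytes(int v)  { return ByteBuffer.allocate(4).putInt(v).array(); }
    static byte[] toBytes(long v) { return ByteBuffer.allocate(8).putLong(v).array(); }

    // Stand-in for Bytes.toLong: rejects anything that is not exactly 8 bytes,
    // which is the IllegalArgumentException surfacing in the stack trace above.
    static long toLong(byte[] b) {
        if (b.length != 8) {
            throw new IllegalArgumentException("expected 8 bytes, got " + b.length);
        }
        return ByteBuffer.wrap(b).getLong();
    }

    public static void main(String[] args) {
        System.out.println(toBytes(10).length);    // the literal 10 stores 4 bytes
        System.out.println(toBytes(10L).length);   // 10L stores 8 bytes
        System.out.println(toLong(toBytes(10L)));  // an 8-byte cell reads back fine
        try {
            toLong(toBytes(10)); // the int-width cell: this is what blew up server-side
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Seeding the column with `10L` (or any long) makes the cell 8 bytes wide, which is why the one-character fix worked.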
...ut again, not recommended.

You can also summarize your data and use a secondary process to
execute a roll up of ICVs... if the number isn't too massive this might
be acceptable.

On Tue, Jan 11, 2011 at 4:07 PM, Billy Pearson wrote:

> Is there a way to make a mapreduce job and use incrementColumnValue in
> place of Put?
>
> I am trying to move a job over from thrift and have to be able to use
> incrementColumnValue as an output, but I cannot seem to work it out
> without calling HTable every map.
>
> A small example would be nice if anyone uses it now.
>
> Billy
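Since Billy asked for a small example: one shape the "roll up of ICVs" advice can take is sketched below, using the newer `mapreduce` and `Connection` APIs rather than the 0.20-era ones from the thread, with made-up table and column names. As far as I recall, `TableOutputFormat` only accepted `Put`/`Delete`, which is why the reducer here talks to the table directly: sum per key locally, open the table once per task in `setup()`, and issue a single increment per key.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IcvRollupReducer
        extends Reducer<Text, LongWritable, NullWritable, NullWritable> {
    private Connection conn;
    private Table counts;

    @Override
    protected void setup(Context ctx) throws IOException {
        conn = ConnectionFactory.createConnection();          // one per task, not per record
        counts = conn.getTable(TableName.valueOf("counts"));  // hypothetical table
    }

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
            throws IOException {
        long sum = 0;                         // roll up locally first...
        for (LongWritable v : values) {
            sum += v.get();
        }
        counts.incrementColumnValue(Bytes.toBytes(key.toString()),
                Bytes.toBytes("f"), Bytes.toBytes("count"), sum); // ...then one ICV per key
    }

    @Override
    protected void cleanup(Context ctx) throws IOException {
        counts.close();
        conn.close();
    }
}
```

This avoids both problems from the thread: no `HTable` construction per map call, and far fewer RPCs than one increment per input record.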