On Mon, Mar 27, 2017 at 10:47 PM, Dave Latham <lat...@davelink.net> wrote:
> Do you have compression enabled, and is your data highly compressible?
>
> On Mon, Mar 27, 2017 at 6:26 AM, Hef <hef.onl...@gmail.com> wrote:
>
> > Hi,
EAR\xD7,1490340797381.82f4e7d6931a57edf9654f32625f9fb7. in 1052ms, sequenceid=13042496, compaction requested=false
A 128MB memstore flushed to only a 7.3MB hfile, is that normal?
In what case could this happen?
Thanks
Hef
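(For what it's worth, a flush much smaller than the memstore is normal when the column family has block compression enabled and the values are repetitive. A quick self-contained illustration with Python's gzip, standing in for the HFile codec, which this sketch only approximates; the sample row data is made up:)

```python
import gzip

# Repetitive data, e.g. time-series rows sharing long common prefixes,
# compresses very well -- a 128MB memstore can easily become a few-MB hfile.
raw = b"metric.cpu.user,host-0001,value=42\n" * 100000
compressed = gzip.compress(raw)

ratio = len(raw) / len(compressed)
print(f"raw={len(raw)} bytes, compressed={len(compressed)} bytes, "
      f"ratio={ratio:.1f}x")
```

If the ratio here surprises you, check `describe 'your_table'` in the HBase shell for the COMPRESSION setting on the column family.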
Hi Allan,
I didn't see any improvement either after decreasing the compaction thread
count or increasing the memstore flush size. :(
How much write tps can your cluster handle per region server?
Thanks
Hef
On Wed, Mar 22, 2017 at 10:07 AM, Allan Yang <allan...@gmail.com>
6186 be/4 hdfs 1379.58 K/s 0.00 B/s 0.00 % 90.78 % du -sk
/data/11/dfs/dn/curre~632-10.1.1.100-1457937043486
What was all this reading for? And what are those du -sk processes? Could
this be a reason for the slow write throughput?
On Tue, Mar 21, 2017 at 7:48 PM, Hef <hef.
I changed my application to use ProtocolBuffers 2.5 and then this issue was
resolved.
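(For anyone hitting the same conflict: HBase 1.x client code is compiled against protobuf-java 2.5.0, so one way to apply this fix is to pin that version explicitly. A sketch of the dependency, assuming a Maven build:)

```xml
<!-- Pin protobuf to the version HBase 1.x was built against -->
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>
</dependency>
```

If your application also needs protobuf 3 for other code, the usual workaround is to shade/relocate one of the two copies so they don't collide on the classpath.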
On Fri, Mar 17, 2017 at 12:36 PM, Hef <hef.onl...@gmail.com> wrote:
> Hi group,
> I have a problem using ProtocolBuffers 3 in my application with CDH5.6
> HBase 1.0.
>
> When creating BufferedM
Hi group,
I have a problem using ProtocolBuffers 3 in my application with CDH5.6
HBase 1.0.
When creating a BufferedMutator and flushing data into HBase, it shows an
error as below:
java.lang.NoClassDefFoundError: Could not initialize class
org.apache.hadoop.hbase.util.ByteStringer
at
bed by HBASE-15378.
>
> I would suggest you upgrade to a release with HBASE-15378.
>
> On Thu, Mar 2, 2017 at 7:59 PM, Hef <hef.onl...@gmail.com> wrote:
>
> > Thanks for the hint, which led me to investigate from the client side and
> > finally had this problem re
ra.com/cdh5/cdh/5/hbase-1.2.0-cdh5.9.1.CHANGES.txt?_ga=1.10311413.1914112506.1454459553
> >
> > On Wed, Mar 1, 2017 at 5:46 AM, Hef <hef.onl...@gmail.com> wrote:
> >
> >> I'm using CDH 5.9, the document shows its HBase version is
> >> hbase-1.2.0+cdh5.9.1+222. (
> >
hih...@gmail.com> wrote:
> Which hbase version are you using ?
>
> Does it include HBASE-15378 ?
>
> > On Mar 1, 2017, at 5:02 AM, Hef <hef.onl...@gmail.com> wrote:
> >
> > Hi,
> > I'm encountering a strange behavior on MapReduce when using HBase as
> input
Hi,
I'm encountering a strange behavior in MapReduce when using HBase as the
input format. I run my MR tasks on the same table, same dataset, with the
same FuzzyRowFilter pattern, multiple times. The Input Records counters
shown are not consistent; the smallest number can be 40% less than the
largest.
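(For readers debugging something similar, the matching rule FuzzyRowFilter applies is simple to restate: for each (pattern, mask) pair, a mask byte of 0 pins that row-key position to the pattern byte, while 1 leaves it fuzzy. A small Python sketch of that rule; the key layout and names here are hypothetical, not from the thread:)

```python
def fuzzy_match(row: bytes, pattern: bytes, mask: bytes) -> bool:
    """Return True if row matches pattern at every fixed position.

    mask byte 0 = position must equal the pattern byte,
    mask byte 1 = position may be any byte (fuzzy),
    mirroring the convention of HBase's FuzzyRowFilter.
    """
    if len(row) < len(pattern):
        return False
    return all(m == 1 or r == p
               for r, p, m in zip(row, pattern, mask))

# Hypothetical keys shaped <4-byte user id><8-char date>:
# wildcard the id, fix the date.
pattern = b"????20170321"
mask = bytes([1, 1, 1, 1] + [0] * 8)
print(fuzzy_match(b"u00120170321", pattern, mask))  # True
print(fuzzy_match(b"u00120170320", pattern, mask))  # False
```

Since the filter itself is deterministic for a fixed dataset, varying Input Records counters across identical runs usually point at the scan setup (e.g. timestamps or flushes changing what's visible) rather than the mask logic.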