It does not bother just you. :)
Can we bound where all this shows up? In most cases we can address
compatibility concerns by making a new API and transitioning to it as the
guidelines allow for breakage.
That generally means sooner is better than later; our next big effort to
make sure obsolete t
+1
I found the mix of ASCII/hex very painful when I needed to compare two binary
keys/values.
On Tue, Apr 14, 2015 at 6:16 AM, Dave Latham wrote:
Wish I had started this conversation 5 years ago...
When we're using binary data, especially in row keys (and therefore
region boundaries) the output created by toStringBinary is very
painful to use:
- the mix of ASCII/hex representation is troublesome
- it's quite long (4 characters per binary byte)
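To make the complaint concrete, here is a minimal self-contained sketch contrasting a toStringBinary-like rendering (printable ASCII passes through, everything else becomes \xNN — only an approximation of HBase's Bytes.toStringBinary, not the real implementation) with a plain fixed-width hex dump; the class and helper names here are mine, not HBase APIs:

```java
// Sketch: why mixed ASCII/hex output is hard to compare, and a pure-hex
// alternative. toStringBinaryLike only approximates HBase's
// Bytes.toStringBinary; toHexString is a hypothetical helper.
public class BinaryKeyRendering {
    // Approximation of Bytes.toStringBinary: printable ASCII as-is, rest as \xNN.
    static String toStringBinaryLike(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) {
            int ch = x & 0xff;
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append(String.format("\\x%02X", ch)); // 4 chars per binary byte
            }
        }
        return sb.toString();
    }

    // Pure hex: always two characters per byte, so two keys line up column
    // by column when diffing region boundaries.
    static String toHexString(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02x", x & 0xff));
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] key = {0x00, 'u', 's', 'e', 'r', 0x01, (byte) 0xff};
        System.out.println(toStringBinaryLike(key)); // \x00user\x01\xFF
        System.out.println(toHexString(key));        // 007573657201ff
    }
}
```

The pure-hex form loses readability for ASCII-heavy keys but keeps a fixed width, which is what makes byte-by-byte comparison of two keys tolerable.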
Thanks Dejan,
Please keep us posted!
cheers,
esteban.
--
Cloudera, Inc.
On Mon, Apr 13, 2015 at 11:08 AM, Dejan Menges
wrote:
Hi Esteban,
Thanks for pointing to that; I will try to collect all the logs tomorrow,
take a deeper look, and post the specific errors here. Yes, the good news is
that all logs are preserved.
Thanks a lot,
Dejan
On Mon, Apr 13, 2015 at 8:01 PM Esteban Gutierrez
wrote:
Hi Dejan,
Do you have the logs from any of those failed region servers? Usually, in
case of a critical failure, the RS will shut itself down; or, if the RS
"hangs" for a long time, the master will start processing the expiration of
that RS and reject it with a YouAreDeadException if it tries to reconnect.
There was a hadoop version issue. Thanks.
On 13 Apr 2015 at 15:13, "Shahab Yunus" wrote:
Silvio, did your problem get resolved or not? I am assuming you have already
seen example 7.2.4 from here:
http://hbase.apache.org/0.94/book/mapreduce.example.html
There seems to be a type mismatch in the job setup, along with a Hadoop
version issue.
If you still have an issue, then can you paste you
The signature of the write method is
write(ImmutableBytesWritable arg0, Writable arg1);
arg0 doesn't accept NullWritable.get(), but "null" instead.
2015-04-13 14:13 GMT+02:00 Silvio Di gregorio :
In the documentation (http://hbase.apache.org/0.94/book/mapreduce.example.html),
when the Reducer extends the TableReducer class, the write method
puts the key as null, or rather NullWritable, as Shahab says.
However, the error disappeared when I removed the
"hbase-client-0.96.0-hadoop1.jar" and inserted
Oh, Shahab is right! That's what happens when you write emails before your
coffee ;) I confused it with your "Put" key ;) Looked too quickly...
JM
2015-04-13 7:46 GMT-04:00 Shahab Yunus :
For the null key you should use NullWritable class, as discussed here:
http://stackoverflow.com/questions/16198752/advantages-of-using-nullwritable-in-hadoop
Regards,
Shahab
On Mon, Apr 13, 2015 at 7:01 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
Hi Silvio,
What is the key you are trying to write into your HBase table? From your
code, it sounds like you want your key to be null for all your values, which
is not possible in HBase.
JM
2015-04-13 6:37 GMT-04:00 Silvio Di gregorio :
Hi,
In the Reduce phase, when I write to the HBase table "PFTableNa":
context.write(null, put);
Eclipse tells me:
*"The method write(ImmutableBytesWritable, Writable) in the type
TaskInputOutputContext
is not applicable for the arguments (null, Put)"*
*put* is org.apache.hadoop.hbase.client.Put
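For reference, a minimal sketch of a summary reducer in the shape the 0.94 book's example uses. This assumes the HBase 0.94 / Hadoop 1 jars are on the classpath — with the 0.96 client jar that was later removed, Put no longer implements Writable, which is one way to hit the compile error above. The class, table, and column names here are hypothetical, not from the thread:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Hypothetical summary reducer writing into an HBase table (0.94-era API).
public class MyTableReducer
        extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // The row key goes into the Put itself.
        Put put = new Put(Bytes.toBytes(key.toString()));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(sum));
        // TableOutputFormat ignores the output key, so null is accepted here
        // at runtime; this compiles because 0.94's Put implements Writable.
        context.write(null, put);
    }
}
```

The design point is that the output key is irrelevant to TableOutputFormat: the destination row is carried by the Put, which is why a literal null works where NullWritable.get() does not type-check against ImmutableBytesWritable.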
Hi,
We had some issues recently with HDFS: a hardware issue with one of the
nodes, the node died, HDFS recovered, but we figured out that something was
wrong with HBase. Checking the HMaster log, we saw that a bunch of our
region servers got onto the famous failed servers list, and it was going on
and on until