Thanks for your response, Ted. I've solved this. I just added the following
setting:
*conf.setStrings("io.serializations", conf.get("io.serializations"),
ResultSerialization.class.getName());*
and the problem is gone.
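The reason the fix above works is worth spelling out. Here is a plain-Java sketch (no Hadoop dependency; `SerializationsConfig` is an invented stand-in for `org.apache.hadoop.conf.Configuration`) of the pattern: `setStrings` joins its arguments into one comma-separated value, so passing `conf.get("io.serializations")` first appends `ResultSerialization` to the existing serializers instead of clobbering Hadoop's defaults.

```java
// Mimics the append-don't-replace pattern from the fix above.
import java.util.HashMap;
import java.util.Map;

public class SerializationsConfig {
    private final Map<String, String> props = new HashMap<>();

    public String get(String key) {
        return props.get(key);
    }

    // Mimics Configuration#setStrings: joins the values with commas.
    public void setStrings(String key, String... values) {
        props.put(key, String.join(",", values));
    }

    public static void main(String[] args) {
        SerializationsConfig conf = new SerializationsConfig();
        conf.setStrings("io.serializations",
            "org.apache.hadoop.io.serializer.WritableSerialization");
        // The fix from the thread: keep what is already configured, add one more.
        conf.setStrings("io.serializations",
            conf.get("io.serializations"),
            "org.apache.hadoop.hbase.mapreduce.ResultSerialization");
        // Both serializers are now present, defaults intact.
        System.out.println(conf.get("io.serializations"));
    }
}
```

If `conf.get("io.serializations")` were omitted, Hadoop's default `WritableSerialization` would be dropped and ordinary `Writable` keys/values would stop serializing.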
On Tue, Apr 12, 2016 at 8:54 AM, Ted Yu wrote:
> It seems your code
From region server log:
2016-04-11 03:11:51,589 WARN org.apache.zookeeper.ClientCnxnSocket:
Connected to an old server; r-o mode will be unavailable
2016-04-11 03:11:51,589 INFO org.apache.zookeeper.ClientCnxn: Unable to
reconnect to ZooKeeper service, session 0x52ee1452fec5ac has expired,
It seems your code didn't go through.
Please take a look at ResultSerialization and related classes.
Cheers
On Mon, Apr 11, 2016 at 5:29 PM, 乔彦克 wrote:
> Hi, all
> recently we upgraded our HBase cluster from cdh-0.94 to cdh-1.0. In 0.94
> we used Result.java (implement
Hi, all
recently we upgraded our HBase cluster from cdh-0.94 to cdh-1.0. In 0.94 we
used Result.java (which implements Writable) as the map output value.
[image: pasted1]
but in the newer HBase version Result.java has changed and it can't be
serialized any more. Are there any alternative methods to use Result
Hello -
We've started experiencing regular failures of our HBase cluster. For the last
week we've had nightly failures about 1hr after a heavy batch process starts.
In the logs below we see the failure starting at 2016-04-11 03:11 in zookeeper,
master and region server logs:
zookeeper:
Hi Zheng,
Your intuition is correct. If the client does not specify a timestamp for
writes, then the region server will use the system clock to do so. If you
send a Put to a region hosted by a server with a clock that is 50 seconds
slow, and that region has existing Cell(s) with the same row &
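The clock-skew effect described above can be shown with a toy model (not HBase code): reads return the cell with the largest timestamp for a given row/column, so a Put stamped by a 50-seconds-slow server clock can end up invisible behind an existing cell.

```java
// Toy model of HBase's last-write-wins-by-timestamp read behavior.
public class ClockSkewDemo {
    // Returns the timestamp a read would see: the newest of the two cells.
    public static long visibleTimestamp(long existingTs, long newPutTs) {
        return Math.max(existingTs, newPutTs);
    }

    public static void main(String[] args) {
        long existingTs = 1_460_000_050_000L; // written via a well-synced server
        long newPutTs   = 1_460_000_000_000L; // stamped by a server 50s behind
        if (visibleTimestamp(existingTs, newPutTs) == existingTs) {
            // The later Put loses: its server-assigned timestamp is older.
            System.out.println("the newer Put is shadowed by the older cell");
        }
    }
}
```

This is why keeping region-server clocks in sync (e.g. with NTP) matters whenever clients rely on server-assigned timestamps.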
The HBase team is happy to announce the immediate availability of HBase 1.2.1.
Apache HBase is an open-source, distributed, versioned, non-relational database.
Apache HBase gives you low latency random access to billions of rows with
millions of columns atop non-specialized hardware. To learn
that should be fixed in 1.2.1 with HBASE-15422
Matteo
On Mon, Apr 11, 2016 at 5:46 AM, Ted Yu wrote:
> Can you look at master log during this period to see what procedure was
> retried ?
>
> Turning on DEBUG logging if necessary and pastebin relevant portion of
> master
Yes, sorry, I meant 1.x
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Monday, April 11, 2016 9:51 AM
To: user@hbase.apache.org
Subject: Re: Question about open table
bq. I am using hbase 2.x
2.0 has not been released yet.
Probably you meant 1.x ?
On Mon, Apr 11,
bq. I am using hbase 2.x
2.0 has not been released yet.
Probably you meant 1.x ?
On Mon, Apr 11, 2016 at 6:48 AM, Yi Jiang wrote:
> Thanks
> I am using hbase 2.x, so I create the connection only once in my project.
> According to Ted, getTable is not expensive, then
Thanks
I am using hbase 2.x, so I create the connection only once in my project.
According to Ted, getTable is not expensive, so I am able to get and
close the table in each request.
Jacky
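The pattern agreed on above (one heavyweight, shared Connection; a cheap Table obtained and closed per request) can be sketched in plain Java. The class names below only mimic the HBase client API; the counters are invented for illustration.

```java
// Sketch: Connection is expensive and shared; Table is cheap and per-request.
import java.util.concurrent.atomic.AtomicInteger;

public class TablePerRequest {
    static final AtomicInteger connectionsOpened = new AtomicInteger();
    static final AtomicInteger tablesOpened = new AtomicInteger();

    static class Connection {
        Connection() { connectionsOpened.incrementAndGet(); } // costly setup
        Table getTable(String name) { return new Table(name); } // lightweight
    }

    static class Table implements AutoCloseable {
        Table(String name) { tablesOpened.incrementAndGet(); }
        @Override public void close() { }
    }

    static final Connection CONN = new Connection(); // created once per app

    static void handleRequest() {
        // try-with-resources: get the table, use it, close it.
        try (Table t = CONN.getTable("my_table")) {
            // ... do a Get/Put here ...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) handleRequest();
        System.out.println(connectionsOpened + " connection, "
            + tablesOpened + " tables");
    }
}
```

Three requests reuse the single connection while opening and closing three short-lived tables.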
-Original Message-
From: Yu Li [mailto:car...@gmail.com]
Sent: Monday, April 11, 2016 12:05 AM
bq. if they are located in the same split?
Probably you meant same region.
Can you show the getSplits() for the InputFormat of your MapReduce job ?
Thanks
On Mon, Apr 11, 2016 at 5:48 AM, Ivan Cores gonzalez
wrote:
> Hi all,
>
> I have a small question regarding the
Have you looked at :
http://hbase.apache.org/book.html#ttl
Please describe your use case.
Thanks
On Mon, Apr 11, 2016 at 2:11 AM, hsdcl...@163.com wrote:
> hi,
>
> I want to know the principle behind HBase TTL; I would like to use the
> same principle to implement a TTL for the Rowkey,
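One hypothetical reading of the question above is a "row-key TTL": encode the write time into the key and treat rows whose encoded time is older than the TTL as expired at read time, imitating what HBase's built-in cell TTL does during reads and compactions. All names below are invented for illustration, not HBase API.

```java
// Hypothetical row-key TTL sketch: keys carry their own write time.
public class RowKeyTtl {
    // Build a key like "user123|1460000000000".
    public static String keyFor(String id, long writeTimeMs) {
        return id + "|" + writeTimeMs;
    }

    // A row is expired when its encoded write time is older than ttlMs.
    public static boolean isExpired(String rowKey, long nowMs, long ttlMs) {
        long writeTime =
            Long.parseLong(rowKey.substring(rowKey.indexOf('|') + 1));
        return nowMs - writeTime > ttlMs;
    }

    public static void main(String[] args) {
        String key = keyFor("user123", 1_460_000_000_000L);
        // 10 minutes later with a 5-minute TTL -> expired.
        System.out.println(isExpired(key, 1_460_000_600_000L, 300_000L));
    }
}
```

Unlike the built-in TTL (see the book link below), this scheme never reclaims space by itself; a scan-and-delete job would still have to remove expired rows.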
Please take a look at:
http://hbase.apache.org/book.html#disable.splitting
especially the section titled:
Determine the Optimal Number of Pre-Split Regions
For writing data evenly across the cluster, can you tell us some more about
your use case(s) ?
Thanks
On Tue, Apr 5, 2016 at 11:48 PM,
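One common technique for writing evenly across pre-split regions (a general sketch, not something prescribed in this thread) is salting: prefix each key with a bucket derived from its hash, so monotonically increasing keys don't all land in one region.

```java
// Salted-key sketch: spread sequential row keys over N pre-split buckets.
public class SaltedKey {
    // Prefix the key with a two-digit bucket in [0, numRegions).
    public static String salt(String rowKey, int numRegions) {
        int bucket = Math.floorMod(rowKey.hashCode(), numRegions);
        return String.format("%02d-%s", bucket, rowKey);
    }

    public static void main(String[] args) {
        for (String k : new String[]{"row-0001", "row-0002", "row-0003"}) {
            System.out.println(salt(k, 16));
        }
    }
}
```

The trade-off: scans over the original key order now require one scan per bucket, which is why the book asks about the use case before recommending a split strategy.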
Hi all,
I have a small question regarding the MapReduce jobs behaviour with HBase.
I have an HBase test table with only 8 rows. I split the table into 2 splits
with the hbase shell split command, so now there are 4 rows in every split.
I created a MapReduce job that only prints the row
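What Ted's getSplits() question is probing can be sketched conceptually: a TableInputFormat-style job gets one input split per region, each covering a [startKey, endKey) range, so a table split once gives 2 regions and hence 2 map tasks of 4 rows each. The routing logic below is a simplified illustration, not HBase code.

```java
// Conceptual sketch: route row keys to regions by start-key ranges.
import java.util.List;

public class RegionSplits {
    // Assign a row key to the last region whose start key is <= the row key.
    public static int regionFor(String rowKey, List<String> startKeys) {
        int region = 0;
        for (int i = 0; i < startKeys.size(); i++) {
            if (rowKey.compareTo(startKeys.get(i)) >= 0) region = i;
        }
        return region;
    }

    public static void main(String[] args) {
        List<String> startKeys = List.of("", "row5"); // 2 regions after 1 split
        int[] counts = new int[2];
        for (int i = 1; i <= 8; i++) {
            counts[regionFor("row" + i, startKeys)]++;
        }
        // 8 rows land 4 + 4 across the two regions.
        System.out.println(counts[0] + " + " + counts[1] + " rows");
    }
}
```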
Can you look at master log during this period to see what procedure was retried
?
Turning on DEBUG logging if necessary and pastebin relevant portion of master
log.
Thanks
> On Apr 11, 2016, at 1:11 AM, Kevin Bowling wrote:
>
> Hi,
>
> I'm running HBase 1.2.0 on
Hi,
I'm wondering which service on HBase service family is responsible for setting
timestamp of data when saving it to HBase?
Recently I found one of my server (has region server and some HDFS services) in
HBase cluser has 50 seconds behinds to others on system clock (same time
region). For
Hi,
I'm running HBase 1.2.0 on FreeBSD via the ports system (
http://www.freshports.org/databases/hbase/), and it is generally working
well. However, in an HA setup, the HBase master spins at 200% CPU usage
while it is active; the load follows the active master and disappears on the
standby. Since
Hi, there!
I'm using hbase-client 1.1.2 and I found that Get#setMaxVersions throws
an IOException if the parameter maxVersions is less than 0. I'm just
wondering what the point is of throwing an IOException instead of an
IllegalArgumentException.
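The API choice being questioned can be made concrete with a sketch (an invented class, not the actual hbase-client source): a setter that rejects a non-positive maxVersions with a checked IOException, which forces every caller to catch or declare it even though the error is a plain programming mistake.

```java
// Sketch of checked-vs-unchecked validation for a maxVersions setter.
import java.io.IOException;

public class MaxVersionsCheck {
    private int maxVersions = 1;

    // Mirrors the reported hbase-client 1.1.2 behavior: checked exception.
    public MaxVersionsCheck setMaxVersions(int maxVersions) throws IOException {
        if (maxVersions <= 0) {
            throw new IOException("maxVersions must be positive");
        }
        this.maxVersions = maxVersions;
        return this;
    }

    public int getMaxVersions() { return maxVersions; }

    public static void main(String[] args) {
        try {
            new MaxVersionsCheck().setMaxVersions(-1);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

An unchecked IllegalArgumentException would signal the same bug without polluting caller signatures, which is presumably the point of the question.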