The exception originated from the Web UI, corresponding
to HBaseAdmin.listTables().
At that moment, the master was unable to process the request - it needed some
more time.
Cheers
On Sun, May 17, 2015 at 11:03 PM, Louis Hust louis.h...@gmail.com wrote:
Yes, Ted, can you tell me what the following
Xiaobo:
Can you download the source tarball for the release you're using ?
You can find all the API information from the source code.
Cheers
On Mon, May 18, 2015 at 1:33 AM, guxiaobo1982 guxiaobo1...@qq.com wrote:
Hi,
http://hbase.apache.org/apidocs/ shows the latest version, but where I
Same for me, I had faced similar issues especially on my virtual machines
since I would restart them more often than my host machine.
Moving ZK from /tmp which could get cleared on reboots fixed the issue for
me.
Thanks,
Viral
On Sun, May 17, 2015 at 10:39 PM, Lars George lars.geo...@gmail.com
Hi,
http://hbase.apache.org/apidocs/ shows the latest version, but where I find
the document for a specific version such as 0.98.5?
Thanks,
hbase.client.operation.timeout is used by HBaseAdmin operations, by
RegionReplicaFlushHandler
and by various HTable operations (including Get).
hbase.rpc.timeout is for the RPC layer, defining how long an HBase client
application waits for a remote call before timing out. It uses pings to check
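Putting the two settings from the explanation above into a client-side hbase-site.xml might look like the sketch below; the values are illustrative examples, not recommendations from the thread:

```xml
<!-- Illustrative values only; tune for your workload. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>10000</value> <!-- ms a single remote call may take -->
</property>
<property>
  <name>hbase.client.operation.timeout</name>
  <value>30000</value> <!-- ms for a whole client operation, retries included -->
</property>
<property>
  <name>hbase.client.retries.number</name>
  <value>3</value>
</property>
```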
On Mon, May 18, 2015 at 11:47 AM, Andrew Purtell apurt...@apache.org
wrote:
You need to not overcommit memory on servers running JVMs for HDFS and
HBase (and YARN, and containers, if colocating Hadoop MR). Sum the -Xmx
parameter, the maximum heap size, for all JVMs that will be concurrently
Thanks for pinging us on this. There's currently an open jira for properly
providing access to 0.98, 1.0, and 1.1 specific javadocs[1].
Unfortunately, no one has had the time to take care of things yet. You can
follow that ticket if you'd like to know when there's movement.
For now, your only
You need to not overcommit memory on servers running JVMs for HDFS and
HBase (and YARN, and containers, if colocating Hadoop MR). Sum the -Xmx
parameter, the maximum heap size, for all JVMs that will be concurrently
executing on the server. The total should be less than the total amount of
RAM
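A quick way to sanity-check this is to sum the -Xmx values by hand. The sketch below assumes heaps are given with m/g suffixes; the three heap sizes are hypothetical examples (e.g. RegionServer, DataNode, NodeManager), not values from the thread:

```shell
#!/bin/sh
# Convert a JVM -Xmx value like "8g" or "512m" to megabytes.
to_mb() {
  v=${1%[mMgG]}
  case $1 in
    *[gG]) echo $((v * 1024)) ;;
    *)     echo "$v" ;;
  esac
}

# Hypothetical heaps for colocated daemons on one server.
total=0
for heap in 8g 4g 1024m; do
  total=$((total + $(to_mb "$heap")))
done
echo "total heap: ${total} MB"   # keep this below physical RAM, leaving room for the OS
```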
You don't need to build from the src tgz, the bin tgz contains a docs
directory, wherein you'll find both public-facing (@Public annotated
classes) and full javadocs in apidocs and devapidocs respectively. The
whole site and book are there too, but our release policy is to copy site
and book from
How should I go about creating and loading a bunch of lookup tables in HBase?
These are the typical RDBMS kind of data, where the data is row-oriented. All
the data comes from a flat file that is, again, row-oriented.
How best can I load this data into HBase? I first created the table in Hive,
Lots of options, depending on the specifics of your use case:
In addition to Hive...
You can use Sqoop
http://www.dummies.com/how-to/content/importing-data-into-hbase-with-sqoop.html
You can use Pig
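As a concrete sketch of the Sqoop route (the connect string, credentials, table, and column names below are placeholders, not details from the thread):

```shell
sqoop import \
  --connect jdbc:mysql://dbhost/appdb \
  --username app --password-file /user/app/.db.pw \
  --table LOOKUP_CODES \
  --hbase-table lookup_codes \
  --column-family d \
  --hbase-row-key CODE_ID \
  --hbase-create-table
```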
bq. Caused by: java.io.IOException: Invalid HFile block magic:
\x00\x00\x00\x00\x00\x00\x00\x00
Looks like you have some corrupted HFile(s) in your cluster - which should
be fixed first.
Which hbase release are you using ?
Do you use data block encoding ?
You can use
hello list,
is there a way to load the existing data (HFiles) from CDH4.3.0 into CDH5.4.0?
we used the completebulkload utility, following this link:
http://hbase.apache.org/0.94/book/ops_mgt.html#completebulkload
the command: hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
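For reference, a typical invocation looks like the sketch below; the HFile directory and table name are placeholders, since the thread's actual arguments were cut off:

```shell
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  hdfs:///user/hbase/hfiles/mytable mytable
```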
Hi Benoit,
I think you need to move the directory out of /tmp and give it a shot.
/tmp/hbase-${user.name}/zk will get cleaned up during restart.
~Anil
On Mon, May 18, 2015 at 9:45 PM, tsuna tsuna...@gmail.com wrote:
I added this to hbase-site.xml:
&lt;property&gt;
I added this to hbase-site.xml:
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/tmp/hbase-${user.name}/zk</value>
</property>
Didn’t change anything. Once I kill/shutdown HBase, it won’t come back up.
On Mon, May 18, 2015 at 1:14 AM, Viral Bajaria viral.baja...@gmail.com wrote:
Same
Wait. Benoit, you mean restart the laptop or stop/start HBase? I agree that
contents of /tmp are not stable across system reboot, across stop/start of
HBase process there should be no problems. Should.
For what it's worth, on the Mac and local mode testing, I usually use
$HBASE_HOME/data. This is
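Putting the suggestions in this thread together, a local-mode hbase-site.xml fragment that keeps state out of /tmp might look like this; the paths are illustrative, and any location that survives reboots works:

```xml
<!-- Illustrative paths; anywhere outside /tmp that survives reboots works. -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///home/me/hbase/data</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/me/hbase/zk</value>
</property>
```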
Sorry if I'm asking a silly question... Are you sure your RSs and Datanodes
are all up and running? Are you sure they are collocated?
Datanode on l-hbase[26-31].data.cn8 and regionserver on
l-hbase[25-31].data.cn8,
Could be that your only live RS is on l-hbase25.data.cn8, which would cause
Hi, we are using extremely cheap HW:
2 HDD 7200 RPM
4*2 core (Hyperthreading)
32GB RAM
We met serious IO performance issues.
We have a more or less even distribution of read/write requests. The same for
data size.
ServerName | Requests Per Second | Read Request Count | Write Request Count
Hi Ted,
Thanks.
Hbase version is: HBase 0.98.0.2.1.2.0-402-hadoop2
Data block encoding: DATA_BLOCK_ENCODING = 'DIFF'
I tried to run the hfile tool to scan, and it looks good though:
hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f
Hi Ted,
Thanks for your information.
My application queries HBase, and some of the queries just hang
there and throw an exception after several minutes (5-8 minutes). As a workaround,
I tried to set the timeout to a shorter value, so my app won't hang for minutes
but only for several seconds.
But from the log:
2015-05-15 12:16:40,522 INFO
[MASTER_SERVER_OPERATIONS-l-namenode1:6-0]
handler.ServerShutdownHandler: Finished processing of shutdown of
l-hbase31.data.cn8.qunar.com,60020,1427789773001
2015-05-15 12:17:11,301 WARN [686544788@qtp-660252776-212]
Hi, Alex,
Maybe the block locality is displayed wrong? Because I checked some region files
and found some replicas on the same machine!
2015-05-19 7:18 GMT+08:00 Alex Baranau alex.barano...@gmail.com:
Sorry if I'm asking a silly question... Are you sure your RSs and Datanodes
are all up and running?
Yes, Ted, can you tell me what the following exception means in
l-namenode1.log?
2015-05-15 12:16:40,522 INFO
[MASTER_SERVER_OPERATIONS-l-namenode1:6-0]
handler.ServerShutdownHandler: Finished processing of shutdown of
l-hbase31.data.cn8.qunar.com,60020,1427789773001
2015-05-15 12:17:11,301
Hi,
I need to set a tight timeout for get/scan operations, and I think the HBase
client already supports it.
I found three related keys:
- hbase.client.operation.timeout
- hbase.rpc.timeout
- hbase.client.retries.number
What's the difference between hbase.client.operation.timeout and
When I start a new cluster (package: hbase-1.0.1-bin.tar.gz), this error occurs:
2015-05-18 17:21:09,514 ERROR [main] regionserver.HRegionServerCommandLine:
Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class
org.apache.hadoop.hbase.regionserver.HRegionServer