This seems to be an issue with your web application and not HBase. Reading
over
http://stackoverflow.com/questions/1858463/java-error-only-a-type-can-be-imported-xyz-resolves-to-a-package
may help you.
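For reference, that Stack Overflow thread usually boils down to a JSP page directive importing a package name instead of a class. A minimal sketch with hypothetical package/class names:

```jsp
<%-- Wrong: imports a package, which triggers "Only a Type can be imported" --%>
<%@ page import="com.example.dao" %>

<%-- Right: import the class itself (or use a wildcard) --%>
<%@ page import="com.example.dao.UserDao" %>
<%@ page import="com.example.dao.*" %>
```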
On Mon, Jan 7, 2013 at 12:12 PM, gopi.l hbigdata.g...@gmail.com wrote:
An error occurred at line: 6
Sorry. I should have sent it to the hadoop list.
We have resolved the issue.
The issue was: earlier, Hadoop was picking up dfs.tmp.dir/dfs/data as the
dfs dir. Later, when we specified the dfs.data.dir property in the config,
Hadoop did not append /dfs/data to the path and the datanode was
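For anyone hitting the same thing: the default data directory is derived from the tmp dir, but once you set dfs.data.dir explicitly you must give the full path yourself; nothing is appended. A sketch of the relevant hdfs-site.xml entry (the path is an example):

```xml
<!-- hdfs-site.xml: dfs.data.dir replaces the tmp-dir-based default verbatim -->
<property>
  <name>dfs.data.dir</name>
  <!-- Example path; Hadoop will NOT append /dfs/data to it -->
  <value>/data/1/hdfs/dfs/data</value>
</property>
```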
Hi there,
The HBase RefGuide has a comprehensive case study on such a case. This
might not be the exact problem, but the diagnostic approach should help.
http://hbase.apache.org/book.html#casestudies.slownode
On 1/4/13 10:37 PM, Liu, Raymond raymond@intel.com wrote:
Hi
I encounter
Hi,
It is an inverted index based on column value(s).
It will be region-wise indexing. It can work whether or not someone knows the
rowkey range.
-Anoop-
From: Mohit Anchlia [mohitanch...@gmail.com]
Sent: Monday, January 07, 2013 9:47 AM
To:
Hi,
I'm running a query on the cluster and the result list can be too large. When
running the query (for example on the 5th node), the 5th node stopped.
I looked at the log files and got this error message:
java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError=kill -9 %p
Executing /bin/sh -c kill -9 28321
Hey Nurettin,
It would be good if you could give us some details on the configuration. What
is the heap size of the regionserver set to?
Devaraj
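The heap size Devaraj asks about and the OnOutOfMemoryError handler visible in the log above both live in conf/hbase-env.sh. A hedged sketch (the values are examples, not recommendations):

```sh
# conf/hbase-env.sh -- example values only
# Heap given to each HBase daemon (region server included), in MB
export HBASE_HEAPSIZE=4000

# Kill the process on OOME so the cluster notices the dead server quickly;
# this is where the "kill -9 %p" line in the log comes from
export HBASE_OPTS="$HBASE_OPTS -XX:OnOutOfMemoryError=\"kill -9 %p\""
```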
On Jan 7, 2013 7:39 AM, Nurettin Şimşek nurettinsim...@gmail.com wrote:
Hi,
I'm running a query on cluster and result list can be too large. When
Where did he mention he was attempting to bond the ports?
Sorry if I missed it.
On Jan 7, 2013, at 7:37 AM, Doug Meil doug.m...@explorysmedical.com wrote:
Hi there,
The HBase RefGuide has a comprehensive case study on such a case. This
might not be the exact problem, but the diagnostic
Have you read through http://hbase.apache.org/book.html#performance ?
What version of HBase are you using ?
Cheers
On Mon, Jan 7, 2013 at 9:05 PM, Farrokh Shahriari
mohandes.zebeleh...@gmail.com wrote:
Hi there
I have a cluster with 12 nodes, each of which has 2 CPU cores. Now, I
want
Hi again,
I'm using HBase 0.92.1-cdh4.0.0.
I have two server machines, each with 48 GB RAM and 12 physical cores (24
logical cores), that host 12 nodes (6 nodes on each server). Each node has
8 GB RAM and 2 vCPUs.
I've set some parameters that get better results, like setting WAL=off on
puts, but some parameters like
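On 0.92.x, turning the WAL off is done per-mutation through the client API. A minimal Java sketch (the table, family, and qualifier names are hypothetical), with the usual caveat that unsynced writes are lost if the region server dies:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WalOffPut {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "test_table");   // hypothetical table name
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        put.setWriteToWAL(false);  // skip the WAL: faster puts, data loss on crash
        table.put(put);
        table.close();
    }
}
```

This trades durability for write throughput, which is why it shows up in benchmark tuning but is rarely advisable for production data you cannot re-load.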
Have you tuned the JVM parameters of HBase?
If you have Ganglia, did you observe high variation in network latency on
the 6 nodes ?
HBase 0.92.2 has been released. Do you plan to upgrade to 0.92.2 or 0.94.3 ?
Cheers
On Mon, Jan 7, 2013 at 9:38 PM, Farrokh Shahriari
Please take a look at http://hbase.apache.org/book.html#jvm
Section 12.2.3, "JVM Garbage Collection Logs"
(http://hbase.apache.org/book.html#trouble.log.gc) should be read
as well.
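To produce the GC logs that section discusses, the usual HotSpot flags of that era can be added in conf/hbase-env.sh; a hedged sketch (log path is an example):

```sh
# conf/hbase-env.sh -- enable GC logging for later analysis (Java 6/7 era flags)
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"
```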
There is a more recent effort to reduce GC activity, namely HBASE-7404,
Bucket Cache: A solution about CMS, Heap Fragment