I think you can refer to the following link...
http://rajeev1982.blogspot.com/2009/06/hbase-setup-0193.html
If you still have problem, let me know...
Hello. We plan to run a set of queries on tables with multiple
columns. What is the most efficient method to, say, insert 1000 rows
and/or read 1000 rows?
We are considering just using REST. But what about Jython? Will it be
faster? Another way would be to have our apps talk to nginx and some sort of
a
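Whatever the transport, one knob that usually matters for "insert 1000 rows at a time" is client-side batching, so each call carries many mutations instead of one. Below is a plain-Python sketch of the batching helper; the happybase usage in the trailing comment is an assumption (it presumes the Thrift gateway is running) and is untested here.

```python
from itertools import islice

def chunked(rows, size=1000):
    """Yield lists of at most `size` items from an iterable of rows."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage over the Thrift gateway with happybase (names are
# illustrative, not from this thread):
#   import happybase
#   table = happybase.Connection('thrift-host').table('mytable')
#   for batch in chunked(row_iter, 1000):
#       with table.batch() as b:
#           for key, data in batch:
#               b.put(key, data)
```

The same idea applies to reads: one scan (or one multi-get) per batch rather than 1000 point gets.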
Just noticed that too. My bad. I still love IntelliJ
-Pete
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, December 10, 2010 4:42 PM
To: user@hbase.apache.org
Subject: Re: Result different between remote Client and HBase Shell
final String startingRow = (af
Never mind, after going over my code it appears that IntelliJ's 'auto-complete'
got me. ;-)
Thanks
-Pete
-Original Message-
From: Peter Haidinyak [mailto:phaidin...@local.com]
Sent: Friday, December 10, 2010 4:32 PM
To: user@hbase.apache.org
Subject: Result different between remote Cli
final String startingRow = (affiliate + SPACE_CHARACTER + m_startDate +
STARTING_INDEX + STARTING_INDEX);
What's the reason for including the starting index twice?
On Fri, Dec 10, 2010 at 4:31 PM, Peter Haidinyak wrote:
> Hi all,
> I've run across an interesting problem. I have imported a few th
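HBase scans return rows in lexicographic (byte) order over the half-open interval [startRow, stopRow), so anything extra appended to the start row (such as an index constant concatenated twice) can silently exclude rows. A toy Python model of that ordering, with made-up row keys:

```python
import bisect

def scan(sorted_rows, start_row, stop_row):
    """Return rows in [start_row, stop_row), mimicking HBase's byte ordering."""
    lo = bisect.bisect_left(sorted_rows, start_row)
    hi = bisect.bisect_left(sorted_rows, stop_row)
    return sorted_rows[lo:hi]

rows = sorted([b"aff 2010-12-09", b"aff 2010-12-10",
               b"aff 2010-12-10\x00x", b"aff 2010-12-11"])

# Starting at the bare prefix picks up every row sharing it:
full = scan(rows, b"aff 2010-12-10", b"aff 2010-12-11")
# Starting at prefix + an extra byte skips rows that sort before the suffix:
partial = scan(rows, b"aff 2010-12-10\x01", b"aff 2010-12-11")
```

Here `full` contains both rows with the `2010-12-10` prefix, while `partial` is empty, which is the kind of mismatch a stray suffix on the start row produces.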
Hi all,
I've run across an interesting problem. I have imported a few thousand rows
into HBase, and when I do a 'scan' from the shell tool I get back a different
number of rows than when I run the same query with a remote Java client.
scan tool command...
hbase(main):030:0> scan 'TrafficLog', {ST
I was looking through the Thrift API again and noticed that it said if a
transaction - comprised of updates to one or more rows - throws an exception,
then the whole transaction is aborted. Does this mean that it is atomic and
none of the updates will be executed, or could some subset of them be e
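For context, HBase's usual guarantee is atomicity within a single row only; a batch spanning several rows is not one transaction, so an "abort" typically means remaining mutations are skipped, not that earlier per-row writes are rolled back (whether the Thrift layer of a given version differs is worth verifying). A plain-Python model of that partial-application behavior, no HBase involved:

```python
def apply_batch(store, mutations):
    """Apply (row, value) mutations one row at a time, like a non-atomic batch."""
    for row, value in mutations:
        if value is None:                 # simulate a mutation that fails
            raise ValueError("bad mutation for row %r" % row)
        store[row] = value                # each single-row write is atomic

store = {}
try:
    apply_batch(store, [(b"r1", b"v1"), (b"r2", None), (b"r3", b"v3")])
except ValueError:
    pass
# r1 was applied even though the batch as a whole failed; r3 never ran.
```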
It seems your RS was under relatively high read and write load, as you
can tell from both the LRU evictions and the fact that the global memstore
size limit was reached. With so much heap used and multiple clients coming
in, that could put a lot of pressure on your JVM's memory.
That, and the fact that you are on E
So - in your original logs, I see entries like:
2010-12-08 21:13:54,055 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=6 watcher=org.apache.hadoop.hbase.client.HConnectionManager$clientzkwatc...@1687e7c
2010-12-08 21:13:54,106
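That `connectString=localhost:2181` is often the tell: a client that never picked up the real quorum address falls back to localhost. A hypothetical client-side hbase-site.xml pointing at the actual ZooKeeper quorum (the hostname is illustrative, not from this thread):

```xml
<configuration>
  <!-- Where the client should look for the ZooKeeper ensemble -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com</value>
  </property>
  <!-- Default client port; shown for completeness -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```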
Yes, that's what I'm saying.
To my understanding you wanted to avoid a scan, so I thought you might
denormalize a bit more with another table. If you want to keep just one
of the two, your usage pattern will help you with your choice.
If you stick with a scan on your actual table, you'll have a big
number of
Claudio,
You say that I can flip the data, right?
If I understand your suggestion correctly, then getting the products that a
cluster includes will be the problem.
Actually, the example table in Google's BigTable paper is similar, as is
their example code to retrieve data:
Scanner scanner(T);
ScanSt
What about a thin table? rowkey:productid columname:clusterid?
On 12/10/10 10:52 AM, Gökhan Çapan wrote:
> Hi,
>
> We have the output of a clustering algorithm in an hbase table which has the
> following structure:
>
> {NAME => 'clusters', FAMILIES => [{NAME => 'products', COMPRESSION
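The thin-table idea above could look roughly like this in the hbase shell; the table name, family, and ids are all made-up for illustration:

```
# One row per product, cluster membership stored in a column
create 'product_clusters', 'c'
put 'product_clusters', 'product-123', 'c:cluster', 'cluster-42'
# A point get then answers "which cluster is this product in" without a scan
get 'product_clusters', 'product-123'
```

Answering the reverse question ("which products are in cluster X") would then need either a scan with a filter or the second, denormalized table mentioned earlier in the thread.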
Hi J-D,
Thank you for the response. Right, I see the pause in the logs; I see it in
both the ZK log and the regionserver log (I pasted the respective parts of
the logs with my previous message). I'm just not sure about the cause of
the pause.
Anyway, I restarted things with GC being logged this time. Will inspect the
issue if I f
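For reference, GC logging of the sort mentioned above is typically switched on via hbase-env.sh; the flags are standard HotSpot options of that era, and the log path is an example:

```
# Illustrative hbase-env.sh addition to log GC activity on the region server
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"
```

Long stop-the-world pauses in that log lining up with the ZK session timeouts would point at GC as the cause of the pause.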
Hi,
We have the output of a clustering algorithm in an hbase table which has the
following structure:
{NAME => 'clusters', FAMILIES => [{NAME => 'products', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
r
Hey,
Need some more info.
Can you paste the logs from the MR tasks that fail? What's going on in the
cluster while the MR job is running (CPU, I/O wait, memory, etc.)?
And what is the setup of your cluster: how many nodes, the specs of the
nodes (cores, memory, RS heap), and how many concurrent map