Hi,
We are using the Cloudera CDH3u5 distribution of HBase (0.90.6). The RS goes
down suddenly, and from the logs we see the following exception in the region
server:
2013-08-07 20:36:58,008 INFO org.apache.hadoop.hbase.regionserver.Store:
Completed compaction of 18 file(s), new file=hdfs://192.168.0.
Lars checked in HBASE-6580 today, which deprecates HTablePool.
Please take a look.
On Wed, Aug 7, 2013 at 6:08 PM, ch huang wrote:
> table.close() does not close the table; it just returns the connection to
> the pool, because
>
> *the putTable method in HTablePool is deprecated; see*
>
> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html#putTable(org.apache.hadoop.hbase.client.HTableInterface)
table.close() does not close the table; it just returns the connection to
the pool, because
*the putTable method in HTablePool is deprecated; see*
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html#putTable(org.apache.hadoop.hbase.client.HTableInterface)
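To illustrate, here is a minimal sketch of the pooled-close semantics
(assuming a 0.92+ client, where the pool hands out a proxy; the table,
family, and qualifier names are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PoolCloseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTablePool pool = new HTablePool(conf, 10); // cache at most 10 tables

    HTableInterface table = pool.getTable("mytable");
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);
    } finally {
      table.close(); // returns the table to the pool; does not destroy it
    }
    pool.closeTablePool("mytable"); // actually releases the pooled tables
  }
}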
On Wed, Aug 7, 2013
First, why do you want to move away from MongoDB to HBase? If it were an RDBMS
we were talking about here, Sqoop sounds like the tool you should be using.
But since it is MongoDB, you will have to look into Flume/Scribe and see
whether it's feasible.
Shengjie
On 7 August 2013 18:58, JC wrote:
> I have j
Hi Scott,
What do you mean by "Running a major compaction does not significantly
improve the locality."? If there are no other writes on your table
while/after the major compaction, it should be at 100%.
hdfsBlocksLocalityIndex is for the entire node, not just for a specific
table. If you have oth
I'd like to improve block locality on a system where nearly 100% of data
ingest is via bulkloading. Presently, I measure block locality by
monitoring the hdfsBlocksLocalityIndex metric. On a 10 node cluster with
block replication of 3, the block locality index is about 30%, which is
what I'd expe
Vimal:
For your question #2, see:
http://hbase.apache.org/book.html#d2617e2382
http://hbase.apache.org/book.html#recommended_configurations
Cheers
On Wed, Aug 7, 2013 at 10:00 AM, Dhaval Shah wrote:
> You are way underpowered. I don't think you are going to get reasonable
> performance out of t
You are way underpowered. I don't think you are going to get reasonable
performance out of this hardware with so many processes running on it
(especially memory-heavy processes like HBase); obviously the severity depends
on your use case.
I would say you can decrease memory allocation to namenode/dat
Hi Ted,
I am using centOS.
I could not get the output of "ps aux | grep pid" as currently hbase/hadoop
is down in production due to some internal reasons.
Can you please help me figure out the memory distribution for my single-node
cluster (pseudo-distributed mode)?
Currently it's just 4GB RAM.
I found this thread:
http://stackoverflow.com/questions/16847319/cassandra-on-solaris-10-64-bit-crashing-with-unsafe-getlong
Maybe you found this already. It seems like a bug in the JVM?
Regards
Ram
On Wed, Aug 7, 2013 at 1:19 PM, Yuri Levinsky wrote:
> Dear HBase Users/Developers,
>
> Plea
Seems to be the same issue as here:
http://stackoverflow.com/questions/16847319/cassandra-on-solaris-10-64-bit-crashing-with-unsafe-getlong
It says it's a JVM bug, and that seems right.
You may want to try the very latest JVM on your platform (it's unlikely to
work), as well as JDK 1.6 (it could work
I have just started on a real/near-real-time data-loading project working with
HBase as the data store. The primary, but not only, source of the data that
I need to load is stored in MongoDB. I do expect volumes to be high,
although with the near-to-real-time loads this may be minimized. I am
relatively
Dear HBase Users/Developers,
Please help with issue below:
The JVM crashes on HTable initialization. We tested it with a 32-bit JVM on
Sun and it works. It works for us on Linux as well. On Sun Solaris 9/10 the
JVM crashes. Any ideas?
Sincerely yours,
Yuri Levinsky, DBA
Please take a look at:
http://hbase.apache.org/book.html#d2617e2382
On Wed, Aug 7, 2013 at 6:51 AM, manish dunani wrote:
> No, I didn't. Is there any need to change something in hbase-site.xml?
> Could you please tell me what I need to do?
>
>
> On Wed, Aug 7, 2013 at 7:12 PM, Ted Yu wrote:
>
> > I don't see
No, I didn't. Is there any need to change something in hbase-site.xml?
Could you please tell me what I need to do?
On Wed, Aug 7, 2013 at 7:12 PM, Ted Yu wrote:
> I don't see hbase.zookeeper.quorum in your config.
>
> Where do you run your zookeeper ?
>
> Cheers
>
> On Wed, Aug 7, 2013 at 6:26 AM, manish dunan
The table is closed for each Put in writeRow(); this is not efficient.
Take a look at http://hbase.apache.org/book.html#client, section 9.3.1,
Connections.
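For example, a minimal sketch of creating the table once and reusing it
across all puts (the table, family, and qualifier names here are
hypothetical, and this assumes the old HTable client API):

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchWriter {
  public static void writeRows(Configuration conf, String tableName,
      List<String> rowkeys) throws Exception {
    HTable table = new HTable(conf, tableName); // create once, reuse for all puts
    try {
      table.setAutoFlush(false); // buffer puts client-side
      List<Put> puts = new ArrayList<Put>();
      for (String rowkey : rowkeys) {
        Put put = new Put(Bytes.toBytes(rowkey));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        puts.add(put);
      }
      table.put(puts);       // one batched round trip instead of one per row
      table.flushCommits();  // flush the client-side write buffer
    } finally {
      table.close();         // close once, after all writes are done
    }
  }
}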
On Wed, Aug 7, 2013 at 5:11 AM, Lu, Wei wrote:
> decrease cache size (say, 1000) and increase batch or just set it as
> default if #qualifiers in a row
I don't see hbase.zookeeper.quorum in your config.
Where do you run your ZooKeeper?
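For a pseudo-distributed setup, hbase-site.xml would point at it with
something like this (a sketch; adjust the host to wherever ZooKeeper actually
runs):

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>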
Cheers
On Wed, Aug 7, 2013 at 6:26 AM, manish dunani wrote:
> 13/08/07 06:20:21 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
> 13/08/07 06:20:21
13/08/07 06:20:21 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/08/07 06:20:21 INFO zookeeper.ZooKeeper: Client environment:host.name
=localhost
13/08/07 06:20:21 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.7.0_21
13/0
Thanx a lot!!
On Wed, Aug 7, 2013 at 6:35 PM, Subroto wrote:
> Hi Manish,
>
> Please include protobuf-java-*.jar in the dependencies.
>
> Cheers,
> Subroto Sanyal
> On Aug 7, 2013, at 3:02 PM, manish dunani wrote:
>
> > Message
>
>
--
MANISH DUNANI
-THANX
+91 9426881954,+91 8460656443
manish
Hi Manish,
Please include protobuf-java-*.jar in the dependencies.
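If the project builds with Maven, the dependency would look something like
this (the version shown is only an example; match it to your HBase build):

<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.4.1</version>
</dependency>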
Cheers,
Subroto Sanyal
On Aug 7, 2013, at 3:02 PM, manish dunani wrote:
> Message
Hello,
I wrote a program to insert data into an HBase table.
*code:*
package maddy.test;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hba
Decrease the cache size (say, to 1000) and increase the batch size, or just
leave it at the default if the number of qualifiers in a row is not too large.
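Concretely, a sketch in client code (the table name is hypothetical and the
numbers are only examples):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanTuning {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    scan.setCaching(1000); // rows fetched per RPC; lower this to cut memory per call
    scan.setBatch(100);    // max columns per Result; raise or leave unset for narrow rows
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        // process each (possibly partial, when batching) row here
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}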
-Original Message-
From: ch huang [mailto:justlo...@gmail.com]
Sent: Wednesday, August 07, 2013 5:18 PM
To: user@hbase.apache.org
Subject: issue about search speed and
Please see the link below:
http://stackoverflow.com/questions/17725645/hbase-shell-scan-bytes-to-string-conversion
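In short, the shell prints non-printable bytes as \xNN escapes via
Bytes.toStringBinary, and you can decode them with the same Bytes utility,
either from Java or from the shell's JRuby. A sketch (assuming the first 8
bytes of the rowkey encode a long, e.g. a timestamp; the sample bytes are
taken from your output):

import org.apache.hadoop.hbase.util.Bytes;

public class DecodeKeys {
  public static void main(String[] args) {
    byte[] rowkey = new byte[] { 0x00, 0x00, 0x01, 0x51, (byte) 0xED,
        (byte) 0xFF, (byte) 0xC0, 0x00 };
    // Prints the key the way the shell does, with \xNN escapes
    System.out.println(Bytes.toStringBinary(rowkey));
    // If the 8 bytes encode a long, decode it directly
    System.out.println(Bytes.toLong(rowkey));
  }
}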
Sincerely yours,
Yuri Levinsky, DBA
Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel
Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222
-Ori
I use the hbase shell, and I always get results from a scan operation like:
rowkey
somethingelse
\x00\x00\x01Q\xED\xFF\xC0\x00\x00\x01\x00\x0
column=t:\x01w\x057\x08\xF7\x0C\xB7\x10w\x137\x147\x16\xF7\x17\xF7\x1A\xB7\x1B\xB7
What tools can I use under the hbase shell command line to translate these
c
Hi all,
I have a problem running the following code. The MoveData method is used to
get data from a source table, modify each row's rowkey, and insert it into a
destination table, and I always get an error. Can anyone help?
public static void writeRow(HTablePool htp,String tablename, String
rowkey,String