20, 2013 at 1:06 PM, Jason Huang jason.hu...@icare.com
wrote:
Hello,
The hbase book at apache website has the following statement about
controlling major compaction: A common administrative technique is to
manage major compactions manually, rather than letting HBase do it. By
default, HConstants.MAJOR_COMPACTION_PERIOD is one day and major compactions
may
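The manual approach quoted above usually amounts to two pieces: disabling the time-based trigger in hbase-site.xml, then firing major compactions from a cron job via the shell. A sketch (the table name is hypothetical):

```xml
<!-- hbase-site.xml: a value of 0 disables time-based major compactions,
     so they only run when triggered manually -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```

An admin script can then run `echo "major_compact 'mytable'" | hbase shell` at an off-peak hour of its choosing.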
2013/9/18 Ted Yu yuzhih...@gmail.com
The fix is in 0.94.4
It would be easier for you to upgrade to a newer release, since rolling
restart is supported.
Cheers
On Wed, Sep 18, 2013 at 12:24 PM, Jason Huang jason.hu...@icare.com
wrote:
Hello,
We are using hadoop 1.1.2 and HBase 0.94.3 and we found the following
entries appear every minute in namenode's log:
2013-09-17 14:00:25,710 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 54310, call delete(/hbase/.archive/mytable, false)
from **.**.**.**:42912 error:
Hello,
I am trying to get some advice on the pros and cons of using a
separator/delimiter as part of an HBase row key.
Currently one of our user activity tables has a rowkey design of
UserID^TimeStamp with a separator of ^. (UserID is a string that won't
include '^').
This is designed for the two common use
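A minimal sketch of composing and splitting such a key. The class and method names are invented, and the reversed-timestamp twist is a common variant for newest-first ordering, not something stated in the thread:

```java
// Compose/split a UserID^TimeStamp row key, assuming (as stated in the
// thread) that UserID never contains '^'.
public class UserActivityKey {
    private static final char SEP = '^';

    // Reverse the timestamp so the newest activity sorts first
    // within a user's row-key range.
    public static String makeKey(String userId, long timestampMs) {
        long reversed = Long.MAX_VALUE - timestampMs;
        // Zero-pad to a fixed width so lexicographic order matches numeric order.
        return userId + SEP + String.format("%019d", reversed);
    }

    public static String userOf(String rowKey) {
        return rowKey.substring(0, rowKey.indexOf(SEP));
    }

    public static long timestampOf(String rowKey) {
        String part = rowKey.substring(rowKey.indexOf(SEP) + 1);
        return Long.MAX_VALUE - Long.parseLong(part);
    }
}
```

With this layout a prefix scan on `userId + '^'` returns one user's activity, newest first.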
thanks for all these valuable comments.
Jason
On Mon, Jul 8, 2013 at 12:25 PM, Michael Segel michael_se...@hotmail.com wrote:
Where is murmur?
In your app?
So then every app that wants to fetch that row must now use murmur.
Added to Hadoop/HBase?
Then when you do upgrades you have to make
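The objection above can be made concrete with a small sketch: once a key is hashed or salted client-side, every reader must reapply the exact same function to find the row. Murmur is not in the JDK, so `String.hashCode()` stands in here purely for illustration:

```java
// Client-side salting: a bucket prefix derived from the key spreads
// writes across regions, but every client must agree on the function.
public class SaltedKey {
    static final int BUCKETS = 16;

    public static String salt(String key) {
        // floorMod keeps the bucket non-negative even for negative hash codes.
        int bucket = Math.floorMod(key.hashCode(), BUCKETS);
        return String.format("%02d", bucket) + "-" + key;
    }

    // A reader that wants row "user42" must recompute the same prefix;
    // stripping it back off only works if both sides share this code.
    public static String unsalt(String saltedKey) {
        return saltedKey.substring(saltedKey.indexOf('-') + 1);
    }
}
```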
Hello,
I am a bit confused about how the configurations of HBase replication and DFS
replication work together.
My application deploys on an HBase cluster (0.94.3) with two Region
servers. The two Hadoop datanodes run on the same two Region servers.
Because we only have two datanodes, dfs.replication was
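For what it's worth, the two settings live in different layers. A sketch of the HDFS side (values illustrative):

```xml
<!-- hdfs-site.xml: with only two datanodes, a replication factor
     above 2 cannot actually be satisfied -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```

HBase replication is unrelated to this: it ships WAL edits to a *peer* cluster (enabled per column family, with the peer registered via `add_peer` in the shell), while `dfs.replication` controls how many copies of each HDFS block are kept within one cluster.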
are running the client from.
You are not replicating HBase tables within the same cluster - you're
just accessing the same table from different clients.
Hope this helps,
- Dave
On Thu, Jun 27, 2013 at 7:04 AM, Jason Huang jason.hu...@icare.com
wrote:
Hello,
I am a bit confused how
Not sure if I should post this to the zookeeper list or here, but I will try
here first and hope for some luck.
The application that I am working on runs a small HBase cluster (0.94.3)
with an external zookeeper(3.4.4). Within the java client, when we invoke
the first call to fetch data from HBase table, a
cool.
thanks Stack.
On Wed, Jun 26, 2013 at 10:52 AM, Stack st...@duboce.net wrote:
On Wed, Jun 26, 2013 at 6:57 AM, Jason Huang jason.hu...@icare.com
wrote:
My question is - is this kind of heartbeat expected and useful? Our
normal use case involves fetching data to HBase table every 60
in background with
phonetic keys as row keys.
http://en.wikipedia.org/wiki/Soundex
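A simplified sketch of the Soundex idea linked above. The H/W adjacency rule of full American Soundex is omitted, so treat this as an illustration rather than a reference implementation:

```java
// Simplified American Soundex: keep the first letter, map the remaining
// consonants to digits, drop vowels, collapse adjacent equal codes,
// and pad/truncate to four characters.
public class Soundex {
    private static int code(char c) {
        switch (Character.toUpperCase(c)) {
            case 'B': case 'F': case 'P': case 'V': return 1;
            case 'C': case 'G': case 'J': case 'K':
            case 'Q': case 'S': case 'X': case 'Z': return 2;
            case 'D': case 'T': return 3;
            case 'L': return 4;
            case 'M': case 'N': return 5;
            case 'R': return 6;
            default:  return 0;   // vowels, H, W, Y
        }
    }

    public static String encode(String name) {
        StringBuilder out = new StringBuilder();
        out.append(Character.toUpperCase(name.charAt(0)));
        int prev = code(name.charAt(0));
        for (int i = 1; i < name.length() && out.length() < 4; i++) {
            int c = code(name.charAt(i));
            if (c != 0 && c != prev) out.append(c);
            prev = c;
        }
        while (out.length() < 4) out.append('0');
        return out.toString();
    }
}
```

Using such codes as row-key prefixes clusters similar-sounding names together for prefix scans.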
hth,
Abhishek
-Original Message-
From: Jason Huang [mailto:jason.hu...@icare.com]
Sent: Tuesday, October 02, 2012 2:38 PM
To: user@hbase.apache.org
Subject: Re: HBase table row key design question
Hello,
I am designing an HBase table for users and hope to get some
suggestions for my row key design. Thanks...
This user table will have columns which include user information such
as names, birthday, gender, address, phone number, etc... The first
time user comes to us we will ask all these
, Oct 2, 2012 at 7:58 PM, Jason Huang jason.hu...@icare.com wrote:
Hello,
I am designing an HBase table for users and hope to get some
suggestions for my row key design. Thanks...
This user table will have columns which include user information such
as names, birthday, gender, address, phone
Thanks all for the responses!
Now I have a much better idea.
thanks!
Jason
On Fri, Sep 28, 2012 at 5:34 AM, Bruno Dumon br...@ngdata.com wrote:
Hi,
On Thu, Sep 27, 2012 at 10:58 PM, Jason Huang jason.hu...@icare.com wrote:
Hello,
I am exploring HBase Lily and I have a few starter questions hoping
to get some help from users in this group who had tried that before:
(1) Do I need to post all the HBase table contents to Lily (treat Lily
as another DataStore) in order to enable the index and search
functionality? If
like
BigDecimal, HBase has a utility class named
org.apache.hadoop.hbase.util.Bytes. You might take advantage of this class
in your code if you can.
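For readers without the HBase jar at hand, a pure-JDK round trip along the same lines (scale, then unscaled value) might look like the sketch below; the actual wire format of `Bytes` is not guaranteed to match this:

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.nio.ByteBuffer;

// Sketch of serializing a BigDecimal to bytes and back: a 4-byte scale
// followed by the unscaled value's two's-complement representation.
public class DecimalBytes {
    public static byte[] toBytes(BigDecimal v) {
        byte[] unscaled = v.unscaledValue().toByteArray();
        return ByteBuffer.allocate(4 + unscaled.length)
                .putInt(v.scale())
                .put(unscaled)
                .array();
    }

    public static BigDecimal fromBytes(byte[] b) {
        ByteBuffer buf = ByteBuffer.wrap(b);
        int scale = buf.getInt();
        byte[] unscaled = new byte[buf.remaining()];
        buf.get(unscaled);
        return new BigDecimal(new BigInteger(unscaled), scale);
    }
}
```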
HTH,
Anil
On Wed, Sep 19, 2012 at 10:11 AM, Jason Huang jason.hu...@icare.com wrote:
I think I should clarify my question - is there any
Hello,
I am interested in learning about the possibility of integrating Lucene
with HBase. A Google search points me to HBASE-3529 (Add search to HBase).
This project is currently listed as patch available but
unresolved. Does that mean there are reported bugs in the patch that
haven't been resolved yet?
/browse/HDFS-2004. The proposal was
vetoed. Therefore, further progress on HBASE-3529 as currently
implemented is not possible.
On Thu, Sep 20, 2012 at 12:38 PM, Jason Huang jason.hu...@icare.com wrote:
Hello,
I am interested in learning about the possibility of integrating Lucene
with HBase. Google
that will work.
thanks again for all your time,
Jason
On Tue, Sep 18, 2012 at 1:34 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang jason.hu...@icare.com wrote:
I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
but that had already been
Hello,
I am designing an HBase table for user activity and I want to use a
nested entity to store the login information as following:
(1) Define a column qualifier named 'user login' and use this as a nested entity;
(2) This nested entity will have the following attributes:
total # of login
I think I should clarify my question - is there any existing tool to
help convert these types of nested entities to byte arrays so I can
easily write them to the column, or do I need to write my own
serialization/deserialization?
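One possible answer, sketched with plain `DataOutput`/`DataInput`; the field names are invented for illustration, and heavier-weight options (Avro, protobuf) exist if the schema needs to evolve:

```java
import java.io.*;

// Flatten a nested login-info entity to a byte[] for a single cell,
// and read it back in the same field order.
public class LoginInfo {
    final int totalLogins;     // hypothetical attribute from the thread
    final long lastLoginMs;    // hypothetical attribute

    LoginInfo(int totalLogins, long lastLoginMs) {
        this.totalLogins = totalLogins;
        this.lastLoginMs = lastLoginMs;
    }

    byte[] toBytes() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(totalLogins);
            out.writeLong(lastLoginMs);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {   // in-memory streams do not actually throw
            throw new UncheckedIOException(e);
        }
    }

    static LoginInfo fromBytes(byte[] b) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(b));
            return new LoginInfo(in.readInt(), in.readLong());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```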
thanks!
Jason
On Wed, Sep 19, 2012 at 12:53 PM, Jason Huang
at 11:41 AM, Jason Huang jason.hu...@icare.com wrote:
Thanks JD and Shumin for the responses.
I realized that this chain is getting longer and longer and I've tried
many different things in between. I will clean out all previous
installs and start fresh with the newest version
email. Any help will be greatly appreciated!
Jason
On Thu, Sep 13, 2012 at 6:42 PM, Jason Huang jason.hu...@icare.com wrote:
Hello,
I am trying to set up HBase in pseudo-distributed mode on my MacBook.
I was able to install Hadoop and HBase and start the nodes.
$JPS
5417 TaskTracker
5083
dfs.datanode.data.dir
J-D
On Tue, Sep 18, 2012 at 9:21 AM, Jason Huang jason.hu...@icare.com wrote:
I've done some more research but still can't start the HMaster node
(with similar error). Here is what I found in the Master Server log:
Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
where to go next.. Any suggestions?
thanks!
Jason
On Fri, Sep 14, 2012 at 4:25 PM, Jason Huang jason.hu...@icare.com wrote:
Thanks Marcos.
I applied the change you mentioned but it still gave me an error. I then stopped
everything, restarted Hadoop, and tried to run a simple Map-Reduce job
there is something wrong with my Hadoop setup. I will do more
research and see if I can find out why.
thanks,
Jason
On Thu, Sep 13, 2012 at 7:56 PM, Marcos Ortiz mlor...@uci.cu wrote:
Regards, Jason.
Answers in line
On 09/13/2012 06:42 PM, Jason Huang wrote:
Hello,
I am trying to set up HBase in pseudo-distributed mode on my MacBook.
I was able to install Hadoop and HBase and start the nodes.
$JPS
5417 TaskTracker
5083 NameNode
5761 HRegionServer
5658 HMaster
6015 Jps
5613 HQuorumPeer
5171 DataNode
5327 JobTracker
5262 SecondaryNameNode
that data in Memstore?
thanks,
Jason
On Wed, Sep 12, 2012 at 5:19 PM, Adrien Mogenet
adrien.moge...@gmail.com wrote:
The WAL is just there for recovery. Reads will meet the Memstore on their read
path - that's how LSM trees work.
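The read path described above can be sketched as a toy LSM store: a get() consults the in-memory store first, then flushed snapshots from newest to oldest, so the freshest value always wins. (HBase's real read path is a timestamp-aware merge across these sources; this is only the shape of the idea.)

```java
import java.util.*;

public class LsmSketch {
    // The "Memstore": most recent writes, always consulted first.
    private final NavigableMap<String, String> memstore = new TreeMap<>();
    // Flushed "store files", newest at the head of the deque.
    private final Deque<Map<String, String>> storeFiles = new ArrayDeque<>();

    public void put(String key, String value) { memstore.put(key, value); }

    // Simulate a flush: the memstore snapshot becomes the newest store file.
    public void flush() {
        storeFiles.addFirst(new TreeMap<>(memstore));
        memstore.clear();
    }

    public String get(String key) {
        String v = memstore.get(key);
        if (v != null) return v;
        for (Map<String, String> sf : storeFiles) {  // newest first
            v = sf.get(key);
            if (v != null) return v;
        }
        return null;
    }
}
```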
On Wed, Sep 12, 2012 at 11:15 PM, Jason Huang jason.hu...@icare.com
, and thus
it's located on the regionserver itself.
On Wed, Sep 12, 2012 at 11:24 PM, Jason Huang jason.hu...@icare.com wrote:
So - I guess at the time of the query we don't know if the data is in
Memstore or in the RegionServer. In order to ensure we get the most
recent version of data, every HBase
Hello,
I am trying to set up HBase in pseudo-distributed mode on my MacBook.
I've installed Hadoop 1.0.3 in pseudo-distributed mode and was able to
successfully start the nodes:
$ bin/start-all.sh
$ jps
1002 NameNode
1246 JobTracker
1453 Jps
1181 SecondaryNameNode
1335 TaskTracker
1091 DataNode
Yes! This is it!
Thanks Shrijeet!
On Tue, Sep 11, 2012 at 2:47 PM, Shrijeet Paliwal
shrij...@rocketfuel.com wrote:
Your HDFS server is listening on a different port than the one you
configured in hbase-site (9000 != 8020).
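Concretely, the port embedded in `hbase.rootdir` must match the one the NameNode actually listens on (`fs.default.name` in Hadoop 1.x). An illustrative pair of snippets, with the port values from this thread:

```xml
<!-- core-site.xml (Hadoop): the NameNode's RPC address -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- hbase-site.xml: the host:port here must match the NameNode's -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>
```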
On Tue, Sep 11, 2012 at 11:44 AM, Jason Huang jason.hu...@icare.com
31 matches