Jean-Daniel,
I changed dfs.data.dir and dfs.name.dir to new paths in the hdfs-site.xml.
I really cannot figure out why HBase/Hadoop runs into a problem after being
shut down for a couple of days. If I use it frequently, no such master
problem happens.
Each time, I have to reinstall not only HBase/H
Dear Manish and Jean-Daniel,
After starting DFS (/opt/hadoop/bin/start-dfs.sh), I got the following
daemons after typing "jps".
5212 Jps
5150 SecondaryNameNode
4932 DataNode
4737 NameNode
Then, I started the HBase (/opt/hbase/bin/start-hbase.sh). The following
daemons were available.
5797 Jps
55
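For comparison, on a healthy pseudo-distributed setup one would typically expect jps to list the HBase daemons alongside the HDFS ones, roughly like the listing below (the PIDs are arbitrary, and HQuorumPeer only appears when HBase manages its own ZooKeeper):

```
5797 Jps
5610 HMaster
5701 HRegionServer
5540 HQuorumPeer
```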
Neetu,
In my opinion it is a bad idea to "copy-paste" a normalized schema
from an RDBMS to a NoSQL database like HBase. HBase encourages
denormalization. HBase does not support indexing out of the box like
MySQL/Postgres/etc., so your retrieval time would be affected as the
data size grows. For inde
Hi Jean,
Thanks a lot for the reply, I got your point about HBase. Let me give a
slightly clearer picture of what I want from HBase: if anybody is
willing to migrate their application to Hadoop and at the same time migrate
a MySQL database to HBase to get the advantages of HBase, in that case i
It says you have not started the HBase master. Once you restarted the system,
did you confirm whether all Hadoop daemons are running?
sudo jps
If you are using the CDH package then you can automatically start the Hadoop
daemons on boot using the reconfig package.
Hi,
I'm using YCSB to evaluate HBase. It worked well when loading data,
but when running the update operations, the exception below came out.
Could you give any advice?
Thanks a lot
Hadoop 1.0.1
HBase 0.90.2
2012-03-27 22:04:06,605 WARN org.apache.hadoop.ipc.HBaseServer:
(responseTooSlow):
{"processi
Hi Bing,
Two questions:
- Can you look at the master log and see what's preventing the master
from starting?
- Did you change dfs.data.dir and dfs.name.dir in hdfs-site.xml? By
default it writes to /tmp which can get cleaned up.
J-D
On Tue, Mar 27, 2012 at 12:52 PM, Bing Li wrote:
> Dear all,
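As a sketch of J-D's second point, moving both directories off /tmp in hdfs-site.xml might look like the fragment below (the paths are placeholders, not recommendations; the properties go inside the existing &lt;configuration&gt; element):

```xml
<!-- hdfs-site.xml: keep NameNode and DataNode storage out of /tmp,
     which many distributions clean up on reboot.
     The paths below are examples only. -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/lib/hadoop/data</value>
</property>
```

After changing these properties the NameNode must be re-formatted (or the old data copied over), since the existing metadata still lives at the old paths.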
1. It is a good idea to manage the region splits manually. For best practice, read
http://hbase.apache.org/book.html - 2.8.2.7. Managed Splitting
2. The default HBase MapReduce splitter creates one map task for each of the regions;
read more details at http://hbase.apache.org/book.html#splitter
Saurabh
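One common way to take manual control of splitting, along the lines of the "Managed Splitting" section cited above, is to raise the split threshold so high that automatic splits effectively never fire, and then split from the shell yourself. A hedged hbase-site.xml sketch (the value is an example, not a recommendation):

```xml
<!-- hbase-site.xml: effectively disable automatic region splitting by
     raising the per-region file-size threshold to 100 GB.
     Splits are then triggered manually from the shell. -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>107374182400</value>
</property>
```

With this in place, regions are split on demand with the shell's `split` command when you decide a region has grown large enough.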
Dear all,
I ran into a weird problem when programming against the pseudo-distributed mode of
HBase/Hadoop.
HBase/Hadoop was installed correctly. It also ran well with my Java
code.
However, after shutting the server down for some time, for example four
or five days, I noticed that HBase/Hadoop go
Hi,
HBase is not a relational database, it doesn't have foreign keys or
constraints. I'd suggest you familiarize yourself with HBase by
reading the reference manual[1] or buying the book[2].
Regards,
J-D
1. http://hbase.apache.org/book/book.html
2. http://www.amazon.com/dp/1449396100
On Tue, Mar
Yes, this is quite confusing. Delete.deleteColumn should really be called
deleteVersion or deleteColumnVersion.
I wonder whether we could make a change like this in the 0.96 "singularity"
release.
-- Lars
From: Shawn Quinn
To: user@hbase.apache.org; lars hof
Hi Shawn,
My HBase version is 0.92.0. I have to mention that recently I
noticed that the delete semantics between the shell and the Java API are
different. In the shell, if you delete one version, it will mask the
versions whose timestamps are older than that version, meaning that a
scan will not return the
Hi Lars,
Thanks for the quick reply! In this case we were doing a column delete
like so:
Delete delete = new Delete(rowKey);
delete.deleteColumn(Bytes.toBytes("thing"), Bytes.toBytes(value));
table.delete(delete);
However, your response caused me to notice t
Hey Shawn,
how exactly did you delete the column?
There are three types of delete markers: family, column, version.
Your observation would be consistent with having used a version delete marker,
which just marks a specific version (the latest by default) for delete.
Check out the HBase Refer
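The three marker types map onto three different Delete methods in the 0.92-era client API. A minimal sketch, assuming that API (the row, family, and qualifier names are made up for illustration, and this needs a live table to actually run against):

```java
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteMarkerSketch {
    public static Delete buildDelete() {
        Delete d = new Delete(Bytes.toBytes("row1"));

        // Family marker: all columns of the family, all versions.
        d.deleteFamily(Bytes.toBytes("cf"));

        // Column marker: all versions of one column.
        d.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("qual1"));

        // Version marker: one version only -- the latest, since no
        // explicit timestamp is passed. This is the case that can
        // surprisingly "resurrect" older versions on the next scan.
        d.deleteColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual2"));

        return d;
    }
}
```

Note that the singular/plural naming (deleteColumn vs. deleteColumns) is exactly the confusion Lars describes above.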
Hello,
In a couple of situations we were noticing some odd problems with old data
appearing in the application, and I finally found a reproducible scenario.
Here's what we're seeing in one basic case:
1. Using a scan in hbase shell one of our column cells (both the column
name and value are simpl
Roberto,
That error is not quite what I expected.
$HBASE_CONF_DIR is what the hbase scripts use as an alternate to the
default location of the conf files in $HBASE_HOME/conf.
How do you know that ZooKeeper is not starting?
Also, did you define HBASE_MANAGES_ZK to be "true" (it is "true" by
defa
Index 20 corresponds to RS_ZK_REGION_FAILED_OPEN which was added by:
HBASE-5490 Move the enum RS_ZK_REGION_FAILED_OPEN to the last of the enum
list in 0.90 EventHandler
(Ram)
As of now, is there any server that is still running 0.90.4 ? Such
server(s) wouldn't be able to interpret
Hello,
We recently migrated to 0.90.7-SNAPSHOT, and are encountering the above
exception, which seems to fail various HBase operations.
How it came to be:
* We upgraded from 0.90.4 to 0.90.7, however not all slaves were restarted,
i.e. we ran slaves from different versions for a couple of days.
On Tue, Mar 27, 2012 at 5:10 AM, Rita wrote:
> Hello,
>
> I was wondering if there is an easy way to serialize hbase data so I can
> store it on a Unix filesystem. Since the data is unstructured I was
> thinking of creating an XML file which would represent it for each key and
> value. Any thoughts
Hello,
I was wondering if there is an easy way to serialize hbase data so I can
store it on a Unix filesystem. Since the data is unstructured I was
thinking of creating an XML file which would represent it for each key and
value. Any thoughts or ideas about this?
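One possible shape for such a file, purely as an illustration (all element and attribute names here are invented, not any standard HBase format):

```xml
<!-- Hypothetical layout: one <row> element per key,
     one <cell> element per column/value pair. -->
<table name="mytable">
  <row key="row1">
    <cell family="cf" qualifier="col1" timestamp="1332886800000">value1</cell>
    <cell family="cf" qualifier="col2" timestamp="1332886800000">value2</cell>
  </row>
</table>
```

If a human-readable format is not a hard requirement, HBase also ships an Export MapReduce job (org.apache.hadoop.hbase.mapreduce.Export) that dumps a table to SequenceFiles on a filesystem, which may be simpler than rolling your own serializer.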
Thanks for the detailed answer.
We are running 0.90.2, and the problem was resolved after running a major
compaction manually.
It seems that the problem was with client requests waiting in the queue (so
I don't understand why major compaction solved it...).
Anyhow, I will try to apply the configurations
Hello All,
I have some doubts about HBase that hopefully you can help me with.
My architecture is the following:
I have 4 servers (server_{1,2,3,4}) with 6GB RAM and 2 cores. I installed
Hadoop on all of them; this is the configuration:
- server_1 is the namenode, a datanode, the secondarynamenode, and the jobtracker
- ser
Hello,
I put it in my .bashrc and re-sourced it, and the problem is still there. I
haven't seen many documents on the web about this HBASE_CONF_DIR, only
something about Pig, but I am not using it.
I have 4 servers; after setting HBASE_CONF_DIR, only the first node
starts, and on the others there is n