Hello,
We are trying to achieve the same using Hadoop and Hive (sometimes Pig) by
using the power of clustering. We use Netezza for data analysis and cache
building currently. I feel HBase may not be the right choice in this situation.
I will also be interested in the use cases where HBase can help re
By no means am I a Netezza expert, but my manager seems to believe that our
existing Netezza-based system can be replaced with a NoSQL (key/value) type
of database. If anyone has done a Netezza-to-HBase migration, please share
your experiences.
As always, greatly appreciate the help.
This is GREAT information, folks. This is why I like open source communities
-:) I will present this to management, but in the meantime, the management
has thrown another *monkey* wrench. They want me to check the possibility
of replacing Netezza with *something*. Of course, I want to propose
r
> While generalizations are dangerous, the one place where C++ code could
> shine over Java (the JVM, really) is that one does not have to fight the GC.
Yes.
> That being said, the folks working on hbase
> have been actively addressing this problem to the extent possible
> in pure java by using unmanaged
2011/9/7 bijieshan :
> Yes, I'm not very used to it. Maybe I'll feel better after a few days:)
> A large empty area takes up the top of the site.
>
Hopefully a new site design will show soon. Meantime if anyone wants
to have a go at it, feel free. It wouldn't be hard to improve it.
St.Ack
Yes, I'm not very used to it. Maybe I'll feel better after a few days:)
A large empty area takes up the top of the site.
-----Original Message-----
From: Gaojinchao [mailto:gaojinc...@huawei.com]
Sent: September 8, 2011 9:50
To: user@hbase.apache.org
Subject: RE: Site and Book updated
Everything will be fine when you get used to it.
Inline.
J-D
On Wed, Sep 7, 2011 at 8:02 PM, Tom Goren wrote:
> It completed successfully on server A as destination and as source, however
> only after I created the table with all the correlating column families
> (specified by "--new.name=new_table_name"). Without that step being done
> manually it failed as well.
Thanks, I did now.
However it failed miserably.
It completed successfully on server A as destination and as source, however
only after I created the table with all the correlating column families
(specified by "--new.name=new_table_name"). Without that step being done
manually it failed as well.
Everything will be fine when you get used to it.
My company logo changed as well. :)
-----Original Message-----
From: saint@gmail.com [mailto:saint@gmail.com] on behalf of Stack
Sent: September 8, 2011 7:10
To: user@hbase.apache.org
Subject: Re: Site and Book updated
the logo doesn't look too bad, does it?
On Wed, Sep 7,
A little Blade Runner action on the typography. I like it.
Sent from my iPhone
On Sep 7, 2011, at 4:10 PM, "Stack" wrote:
> the logo doesn't look too bad, does it?
>
>
> On Wed, Sep 7, 2011 at 3:29 PM, Doug Meil
> wrote:
> >
> > Hi folks-
> >
> > Stack deployed the book update last night and t
Hi there-
Have you tried this?
http://hbase.apache.org/book.html#copytable
It's the Java invocation of the copy-table function (without the Ruby).
On 9/7/11 8:29 PM, "Tom Goren" wrote:
>So I have read http://blog.sematext.com/2011/03/11/hbase-backup-options/
>
>The built-in solution, namely
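For the record, a CopyTable invocation along the lines discussed in this thread might look like this (the cluster address and table names below are made-up placeholders):

```shell
# Copy 'source_table' into 'new_table_name' on the peer cluster.
# --peer.adr is the peer's ZooKeeper quorum, client port, and znode parent.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --new.name=new_table_name \
  --peer.adr=zk1.example.com,zk2.example.com:2181:/hbase \
  source_table
```

Note that, as Tom found, the destination table and its column families must already exist on the target cluster.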
So I have read http://blog.sematext.com/2011/03/11/hbase-backup-options/
The built-in solution, namely the 'Export' and 'Import' classes, is
relatively straightforward to use; however, the exports include the data
alone, and nothing in regard to the table description (column families,
versioning
the logo doesn't look too bad, does it?
On Wed, Sep 7, 2011 at 3:29 PM, Doug Meil wrote:
>
> Hi folks-
>
> Stack deployed the book update last night and this contains some new material
> (more MapReduce examples).
>
> http://hbase.apache.org/book.html
>
> Plus, the book and website contain the ne
Hi folks-
Stack deployed the book update last night and this contains some new material
(more MapReduce examples).
http://hbase.apache.org/book.html
Plus, the book and website contain the new logo!
Doug Meil
Chief Software Architect, Explorys
doug.m...@explorys.com
No problem :)
Actually you could even use the slave cluster for writes (those changes would
not get replicated back, though).
The timestamping of HBase makes that all possible and easy.
I am working on Master-Master replication (see HBASE-2195).
-- Lars
Hello Lars,
thanks for your response. I had a discussion with my tutor about the
cluster-replication feature. He thought this is only for backup
purposes, and I didn't find a decisive hint that the slave can be
used for read-only queries.
Jens
On 07.09.2011 18:51, lars hofhansl wrote:
On Sep 07, lars hofhansl wrote:
>Hi Arvind,
>
>This is interesting:
>
>> * Multiple machines can concurrently/actively handle requests for the
>> same key, so the loss of one server does not mean that a range of keys
>> is temporarily unavailable. An HBase cluster does have a partial,
>> temporary o
Hi Arvind,
This is interesting:
> * Multiple machines can concurrently/actively handle requests for the
> same key, so the loss of one server does not mean that a range of keys
> is temporarily unavailable. An HBase cluster does have a partial,
> temporary outage when a region server dies. Thing
Not in front of my laptop now, but I'll email our hole fixing code to the list
this PM.
Sent from my iPhone
On Sep 7, 2011, at 11:57 AM, "Jonathan Hsieh" wrote:
> Hey Geoff,
>
> I've been working on some code that should help show (HBASE-4321, HBASE-
> 43222) where holes are (and overlaps and
Hey Geoff,
I've been working on some code that should help show (HBASE-4321, HBASE-
43222) where holes are (and overlaps and other kinds of meta problems).
Can you point me to the jiras/code with fixup routines?
Thanks,
Jon.
On Sun, Sep 4, 2011 at 8:15 PM, Geoff Hendrey wrote:
> a "hole" mea
On Sep 06, Something Something wrote:
>Anyway, before I spent a lot of time on it, I thought I should check if
>anyone has compared HBase against CitrusLeaf. If you have, I would greatly
>appreciate it if you would share your experiences.
Disclaimer: I was an early evaluator/tester of citrusleaf ab
(Branching this discussion since it's not directly relevant to the other thread)
I think if we ever come up with a formula, it needs to come with a big
"your mileage may vary" sign. The reasons being:
- If only a subset of the regions are getting written to, then only
those regions need to be ac
St.Ack you are always helping us!
Thank you very much!!!
The cluster has an NFS where the default directory of all users is saved (when
I log in, my working directory is on the NFS).
I have Hadoop and HBase in the local filesystem of each node. However, is there
any possibility t
Hello Jens,
yes, you can use the slave cluster for read-only queries (but be aware that the
replication is asynchronous, which means the slave can be behind).
Beyond setting up replication there is no other setup needed for this.
We (Salesforce.com) might be adding code to support multiple sl
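For reference, the minimal setup Lars describes might be sketched like this (hostnames and table/family names are made up; this follows the 0.90-era replication docs, so check the syntax against your version):

```shell
# In hbase-site.xml on both clusters: set hbase.replication to true.

# In the master cluster's hbase shell, point a peer at the slave's ZK quorum:
#   add_peer '1', 'slave-zk1.example.com:2181:/hbase'

# Enable replication on the column families you care about:
#   disable 'mytable'
#   alter 'mytable', {NAME => 'cf', REPLICATION_SCOPE => '1'}
#   enable 'mytable'
```

Reads against the slave then need nothing extra; read-only clients just point at the slave cluster's quorum.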
Same answer as last time this was asked:
http://search-hadoop.com/m/z1aDB4my9g2
J-D
On Wed, Sep 7, 2011 at 6:15 AM, Arsalan Bilal wrote:
> Hi Dear
>
> Platform: Ubuntu
> Hadoop & Hbase From CDH3
>
> Tool: NetBeans 6.0.9
>
> I am trying to write an HBase batch import/insertion method; I am new to
On Wed, Sep 7, 2011 at 2:04 AM, Stuti Awasthi wrote:
> Now I want my different Ruby applications to use the same cluster to store
> data.
> Currently all the tables are created under /hbase/. I want
> different projects to have different directories inside which they can create
> their own specif
2011/9/7 Panagiotis Antonopoulos :
> Although the map tasks which run first complete fast (in 2 minutes for
> example) then the next map tasks need much more time to complete (4mins) and
> even later the following map tasks need more than 15 mins to complete.
>
Are all maps in flight when some c
Seems like an issue with the paths you are using:
- java.lang.IllegalArgumentException: Can't read partitions file
- Caused by: java.io.FileNotFoundException: File _partition.lst does not
exist.
Perhaps it's using the local filesystem when the partitions are up in HDFS?
Change the file spec?
Well said, Stack. :) Maybe HBase needs more celebrity endorsements? ;)
Another important point you should mention to your manager is that (as far as I
can see) CitrusLeaf is a closed-source, proprietary product. While there's no
harm in this, it does introduce a dependency on Citrusleaf to fix i
Where is it slow? You've seen http://hbase.apache.org/book.html#performance?
St.Ack
On Wed, Sep 7, 2011 at 7:03 AM, Steinmaurer Thomas
wrote:
> Hello,
>
> are there any guidelines on how to improve the execution time of the
> Export/Import MR-jobs?
>
> Thanks,
> Thomas
On Tue, Sep 6, 2011 at 10:24 PM, Something Something
wrote:
> I am a HUGE fan of HBase, but our management team wants us to evaluate
> CitrusLeaf (http://citrusleaf.net/index.php). I have NO idea why!
Their website features Lucille Ball!
> Our
> management claims that CitrusLeaf is (got to be
Hello,
are there any guidelines on how to improve the execution time of the
Export/Import MR-jobs?
Thanks,
Thomas
Hi Dear
Platform: Ubuntu
Hadoop & HBase from CDH3
Tool: NetBeans 6.0.9
I am trying to write an HBase batch import/insertion method; I am new to HBase
& Hadoop.
Can anyone show me an example, or give me an info link?
I have tried this one; please see it and indicate any error. I will be thankful
to you.
Ah well, I just answered looking at your snippet and result - sorry.
You need to set this property for proper ZK host discovery:
hbase.zookeeper.quorum
Note: You can also place a directory containing configured hadoop
*-site.xmls, zoo.cfg and hbase-site.xml onto your classpath to have
HBaseCo
Harsh-
I can access HDFS directly from Eclipse using "fs.default.name" but I want to
create a connection with HBase so that I can perform HBase-specific tasks, e.g.
create table, directly from my Eclipse.
I tried copying hbase-site.xml to my Eclipse directory but am still getting the
error:
Error
Try setting "fs.default.name" instead. That's the one FileSystem would
utilize. The 'hbase.rootdir' prop is HBase-specific and wouldn't apply
to hadoop common/hdfs elements directly.
On Wed, Sep 7, 2011 at 5:41 PM, Stuti Awasthi wrote:
> Hi,
>
> I have Hbase server on distant machine. I want to t
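As a sketch of Harsh's suggestion, the client-side core-site.xml (or the equivalent conf.set call) would carry something like this (host and port are placeholders):

```xml
<configuration>
  <!-- Default filesystem URI; FileSystem resolves paths against this -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:54310</value>
  </property>
</configuration>
```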
Hi,
I have an HBase server on a distant machine. I want to test my Java code through
my Eclipse. To achieve this I created an HBase connection. The code is:
public static HBaseConfiguration conf = new HBaseConfiguration();
conf.set("hbase.rootdir", "hdfs://:54310/hbase");
Hello everyone,
I have to evaluate HBase for a university project. Therefore I read the
cluster replication document on the HBase main site (
http://hbase.apache.org/replication.html ).
According to the missing-features section, there can be only one slave cluster
currently. Can the slave cluster be used
Hello everyone,
I am running a MapReduce job where the map task executes one GET for each
key/value pair it processes.
Although the map tasks which run first complete fast (in 2 minutes for example)
then the next map tasks need much more time to complete (4mins) and even later
the fol
Hi Dmitry.
Looks like high network latency. Do you run this test with client and server
on the same machine, or do you test from another machine? Maybe over wireless?
2011/9/6 Дмитрий
> Hello everyone!
> We started using the HBase (Hadoop) system and faced some performance issues.
> Actually we are
Hi Friends,
I have a Hadoop and HBase cluster in distributed mode. I am using the Thrift
interface to access HBase from Ruby.
The HBase root dir URL is:
hdfs://:54310/hbase
Now I want my different Ruby applications to use the same cluster to store data.
Currently all the tables are created under /hbase/
Something Something writes:
>
> I am a HUGE fan of HBase, but our management team wants us to evaluate
> CitrusLeaf (http://citrusleaf.net/index.php). I have NO idea why! Our
> management claims that CitrusLeaf is (got to be) faster because it's written
> in C++. Trying to find if there's any
Hi,
I have indexed a group of documents and inserted them into HBase. Now I am
curious to know the maximum number of GET requests it can handle at a time,
e.g. for a search application.
How well does it perform for searching?