On Sat, Oct 29, 2011 at 1:34 PM, lars hofhansl wrote:
> This is more of a "theoretical problem", really.
> Yahoo and others claim they lost far more data due to human error than any
> HDFS problems (including Namenode failures).
>
Actually it is not theoretical at all.
SPOF != data-loss.
Data-l
On Mon, Dec 12, 2011 at 11:52 PM, Yves Langisch wrote:
> Hi,
>
> from time to time I get an NPE on my regionserver. Apparently it's occurring
> while obtaining a row lock:
>
> ---
> 2011-12-13 02:00:19,582 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache:
> Block cache LRU eviction completed;
On Tue, Dec 13, 2011 at 5:15 AM, Lord Khan Han wrote:
> Hi Again,
>
> One more symptom: when I look at one of the HBase table columns, i.e.
> PureText (which we configured to be LZO), from the Hadoop DFS, I can read
> the file. Shouldn't it look like an LZO'ed file? Does this show that LZO is
> not working?
On Tue, Dec 13, 2011 at 11:44 AM, Otis Gospodnetic
wrote:
> Are there some obvious flaws that would really cause operational or
> performance pains?
> Would such a cluster have major performance issues because of data that needs
> to be transferred between DNs that are on all nodes and RSs runni
Thank you, thank you, thank you! I would never have gotten HBase on EC2 working
without this advice.
Mark :)
The shell only lets you do so much.
HBase does not support a % wildcard. It just happens to work in your case
because % has a low ASCII code.
You set the startRow of the scan. It does not need to exist, but the value must
sort before the rows you are looking for and after all rows before it.
S
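A small helper along these lines can compute a matching stop row for a prefix (a sketch; `prefix_stop_row` is a hypothetical name, not part of HBase — and since the shell is JRuby, you could paste it straight into a shell session):

```ruby
# Compute the exclusive stop row for a prefix scan: the prefix with its
# last byte incremented, so the scan covers exactly the rows that start
# with the prefix.
def prefix_stop_row(prefix)
  bytes = prefix.bytes.to_a
  # Trailing 0xFF bytes cannot be incremented; drop them first.
  bytes.pop while !bytes.empty? && bytes.last == 0xFF
  return nil if bytes.empty?  # prefix was all 0xFF: scan to end of table
  bytes[-1] += 1
  bytes.pack('C*')
end
```

In the shell that would look like `scan 'SAMPLE_TABLE', {STARTROW => '565HGOUO', STOPROW => prefix_stop_row('565HGOUO')}` — i.e. a STOPROW of '565HGOUP'.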
Thanks Doug. I am looking more from HBase shell for this.
- Original Message -
From: Doug Meil
To: "user@hbase.apache.org" ; Sreeram K
; lars hofhansl
Cc:
Sent: Tuesday, December 13, 2011 2:01 PM
Subject: Re: HBase- Scan with wildcard character
Hi there-
At some point you're probab
Hi there-
At some point you're probably going to want to get out of the shell, take
a look at this...
http://hbase.apache.org/book.html#scan
On 12/13/11 4:43 PM, "Sreeram K" wrote:
>Thanks Lars. I am looking into that.
>
>Is there a way we can search all the entries starting with 565HGO
Thanks Lars. I am looking into that.
Is there a way we can search all the entries starting with 565HGOUO and print
all the rows?
Example:
scan 'SAMPLE_TABLE' ,{COLUMNS
=>['sample_info:FILENAME','event_info:FILENAME'],STARTROW=>'sample1%'}
I am seeing all the Rows and information after that sa
Hi,
I was wondering if I could get some feedback on the craziness (or not) of
setting up a hybrid HBase-Hadoop cluster that has the following primary uses:
1) continuous writes to HBase
2) disk and CPU intensive reads from HBase by MR jobs and writes of aggregated
data back to HBase by those jo
info:regioninfo is actually a serialized Java object (HRegionInfo). What you
see in the shell is the result of HRegionInfo.toString(), which looks like a
Ruby object but is really just a string (see HRegionInfo.toString()).
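Since it is only a string, you can pull fields out with a regular expression rather than trying to evaluate it as Ruby. A sketch (the sample string is a hypothetical, abbreviated rendering of the toString() output, not copied from a real cluster):

```ruby
# Hypothetical, abbreviated example of what the shell shows for info:regioninfo.
sample = "REGION => {NAME => 't1,,1323789000000.cafebabe', STARTKEY => '', ENDKEY => 'zzz'}"

# It only *looks* like a Ruby hash; treat it as plain text and extract fields.
name    = sample[/NAME => '([^']*)'/, 1]
end_key = sample[/ENDKEY => '([^']*)'/, 1]
```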
From: Sreeram K
To: "user@hbase.a
Royston,
Hadoop 1.0 was/is also known as the 0.20.20x branch. It's confusing, I
apologize.
The current release of that branch is
0.20.205: http://hadoop.apache.org/common/releases.html#17+Oct%2C+2011%3A+release+0.20.205.0+available .
A "1.0" release from this lineage should be happening soon.
Hi Andy,
Thanks for your reply. I had not realised there is a Hadoop 1.0. Can you tell
me if there is a release for it or do I have to build my own from the source?
Royston
On 13 Dec 2011, at 16:45, Andrew Purtell wrote:
>> Does hbase-0.92.0-candidate-0 run with hadoop 0.20.2?
>
> This is n
Thanks, I will go through it once again.
On Tue, Dec 13, 2011 at 11:56 PM, Doug Meil
wrote:
>
> I would recommend reading this...
>
> http://hbase.apache.org/book.html
>
> ... and per what JD already suggested downloading and trying HBase.
>
>
>
>
>
> On 12/13/11 12:40 PM, "shashwat shriparv"
> w
I would recommend reading this...
http://hbase.apache.org/book.html
... and per what JD already suggested downloading and trying HBase.
On 12/13/11 12:40 PM, "shashwat shriparv"
wrote:
>Till now I was trying to use Nutch to crawl from HTTP addresses and index
>into Solr, and I got a require
Till now I was trying to use Nutch to crawl from HTTP addresses and index
into Solr, and I got a requirement to crawl from HBase, meaning the data will
be stored in HBase and I need to crawl it and index it into Solr. So far, in
researching the internet, I didn't find anything to crawl or collect data from
Do you have a more specific question? Have you tried anything yet?
Thanks for helping us helping you,
J-D
On Tue, Dec 13, 2011 at 5:24 AM, shashwat shriparv
wrote:
> We are putting data into HBase in a specific format; since the data in
> HBase will be very large, we need to crawl data from
Arsalan,
> Can you provide example of coprocessors using 0.92
The unit tests provide some limited
examples: http://svn.apache.org/viewvc/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/coprocessor/
A complete example implementation is the AccessController in
0.92: http://svn.apache.o
> Does hbase-0.92.0-candidate-0 run with hadoop 0.20.2?
This is not recommended or supported, due to lack of append support in HDFS in
that release. Consider Hadoop 1.0 or CDH3.
Best regards,
- Andy
Problems worthy of attack prove their worth by hitting back. - Piet Hein (via
Tom White)
Hi,
Does hbase-0.92.0-candidate-0 run with hadoop 0.20.2?
Thanks,
Royston
On 13 Dec 2011, at 09:39, Harsh J wrote:
> Akhtar,
>
> You may find them under
> http://people.apache.org/~stack/hbase-0.92.0-candidate-0/
>
> On 13-Dec-2011, at 3:04 PM, Akhtar Muhammad Din wrote:
>
>> Hi,
>> From w
@Harsh: How can we implement coprocessors in HBase? Can you provide an example
of coprocessors using 0.92, or any other informative link on how to use them?
We are putting data into HBase in a specific format. Since the data in
HBase will be very large, we need to crawl the data from HBase and index
it into Solr. Which tool is available for this requirement, or how should
we approach it?
Regards
Shashwat
Hi Again,
One more symptom: when I look at one of the HBase table columns, i.e.
PureText (which we configured to be LZO), from the Hadoop DFS, I can read
the file. Shouldn't it look like an LZO'ed file? Does this show that LZO is
not working? Or when I look at the HBase LZO'ed file from DFS, it's automatica
If you can chmod a+w the directory /user/dorner/bulkload/output/Tsp, HBase
should be able to do what it needs to do (I am assuming the error is coming
from completebulkload). It is trying to rename the files.
-Original Message-
From: Christopher Dorner [mailto:christopher.dor...@gmail.co
Hi,
I stumbled upon an error which was not present in pseudo-distributed mode.
When I try to run a bulkload, it fails after creating the HFiles with the
following error:
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.securi
Akhtar,
You may find them under
http://people.apache.org/~stack/hbase-0.92.0-candidate-0/
On 13-Dec-2011, at 3:04 PM, Akhtar Muhammad Din wrote:
> Hi,
> From where I can download release candidate for 0.92, I can only find
> 0.90.4 version available as the latest release
>
>
>
>
>
> On Tue
Hi,
From where can I download the release candidate for 0.92? I can only find
the 0.90.4 version available as the latest release.
On Tue, Dec 13, 2011 at 1:18 PM, Andrew Purtell wrote:
> Arsalan,
>
> For the Coprocessor feature, you will need to upgrade to the 0.92 release
> or later.
>
> Best regar
Arsalan,
For the Coprocessor feature, you will need to upgrade to the 0.92 release or
later.
Best regards,
- Andy
On Dec 13, 2011, at 12:04 AM, Arsalan Bilal wrote:
> I am currently using HBase 0.90.4.
> I want to implement Coprocessors. I know that coprocessors were part of
> 0.92 and
Hi,
The current release candidate for 0.92 has all the coprocessor goodness
included. Try it if you like and let us know if it worked out for you.
Cheers,
Lars
On Dec 13, 2011, at 8:04 AM, Arsalan Bilal wrote:
> I am currently using HBase 0.90.4.
> I want to implement Coprocessors. I know that
Thanks Lars, I will look into that.
One more question, on the HBase shell.
If I have:
hbase> scan 't1.', {COLUMNS => 'info:regioninfo'}, it prints
all the columns of regioninfo.
Can I have a condition like: if column info:regioninfo = 2 (value), then print
all the associated columns?
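The shell has no built-in conditional syntax for this (server-side you would reach for SingleColumnValueFilter from the Java client API), but since the shell is JRuby you can filter results client-side. A sketch over hypothetical scanner output (the row data here is made up for illustration):

```ruby
# Hypothetical rows in the shape [row_key, {column => value}] as a scan
# might yield them; keep only rows where info:regioninfo equals '2'.
rows = [
  ['r1', { 'info:regioninfo' => '2', 'info:server' => 'hostA' }],
  ['r2', { 'info:regioninfo' => '3', 'info:server' => 'hostB' }],
]
matches = rows.select { |_key, cols| cols['info:regioninfo'] == '2' }
matches.each { |key, cols| puts "#{key}: #{cols.inspect}" }
```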
Forgot to mention the versions: I'm running HBase 0.90.4 with Hadoop 0.20.205.0.
On Dec 13, 2011, at 8:52 AM, Yves Langisch wrote:
> Hi,
>
> from time to time I get an NPE on my regionserver. Apparently it's occurring
> while obtaining a row lock:
>
> ---
> 2011-12-13 02:00:19,582 DEBUG org.ap
I am currently using HBase 0.90.4.
I want to implement coprocessors. I know that coprocessors were part of
0.92, and in the HBase book it is clearly mentioned that coprocessors are
currently on TRUNK.
Is there any way to implement coprocessors in 0.90.4, or will it be
available in the next release?
--
Best