2012/1/24 Andrey Stepachev :
> 2012/1/24 Praveen Sripati :
>
> a) As in 1), add something to key. For example each 5 minutes. Later you
> can issue 16 queries and merge them (for realtime)
Yeah... 3 minutes :)
--
Andrey.
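The "add something to key" idea above can be sketched as time-bucketed row keys. A minimal illustration only, not HBase API: the bucket count, the 3-minute slot width, and the key layout are all assumptions made for the sketch.

```java
// Hypothetical sketch of time-bucketed row keys: prefix each key with a
// bucket derived from the event time, spreading sequential writes across
// regions. All names and the layout are illustrative, not an HBase API.
public class BucketedKeys {
    static final int BUCKETS = 16;
    static final long SLOT_MS = 3 * 60 * 1000L; // one bucket per 3-minute slot

    // Pick a bucket 0..15 from the event timestamp.
    static int bucketFor(long epochMillis) {
        return (int) ((epochMillis / SLOT_MS) % BUCKETS);
    }

    // Row key = zero-padded bucket prefix + timestamp + id.
    static String rowKey(long epochMillis, String id) {
        return String.format("%02d-%d-%s", bucketFor(epochMillis), epochMillis, id);
    }

    public static void main(String[] args) {
        System.out.println(rowKey(System.currentTimeMillis(), "event42"));
    }
}
```

To read the data back in near-realtime, issue one scan per bucket prefix (16 here) and merge the result streams client-side, as the email suggests.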
2012/1/24 Praveen Sripati :
> Thanks for the response. I am just getting started with HBase. And before
> getting into the code/api level details, I am trying to understand the
> problem area HBase is trying to address through its architecture/design.
>
> 1) So, what are the recommendations for ha
Thanks for the response. I am just getting started with HBase. And before
getting into the code/api level details, I am trying to understand the
problem area HBase is trying to address through its architecture/design.
1) So, what are the recommendations for having many columns with dense data
On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong wrote:
> Hello guys,
>
> I set up Hadoop and HBase in EC2. My settings are as follows:
> Apache Official Version
> Hadoop 0.20.203.0
HBase won't work on this version of Hadoop. See
http://hbase.apache.org/book.html#hadoop
> export HADOOP_CLASSPATH="$HADOO
2012/1/23 Gaojinchao :
> 0.92.0 is released at Chinese New Year (Year of the Dragon)!
I like that Jinchao!
St.Ack
HBase uses the Java libraries for DNS lookups, which should normally
use the default lookup mechanisms on your machine; it looks like
/etc/hosts isn't consulted first in your OS configuration.
J-D
On Mon, Jan 23, 2012 at 10:07 AM, Ben Cuthbert wrote:
> All
>
> Is there a way to have hardcoded /et
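J-D's point above can be demonstrated directly. A small sketch, assuming nothing beyond the standard `java.net.InetAddress` API HBase relies on; whether /etc/hosts wins depends on the OS resolver order (e.g. the `hosts:` line in /etc/nsswitch.conf):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Resolve a name the way HBase does: via Java, which delegates to the
    // OS resolver. /etc/hosts is only consulted if the OS lookup order
    // puts "files" before "dns".
    static String resolve(String name) {
        try {
            return InetAddress.getByName(name).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println("localhost -> " + resolve("localhost"));
        // Reverse lookup: this is the name a region server reports to the
        // master, which is why an /etc/hosts alias may not take effect.
        System.out.println("local hostname -> "
                + InetAddress.getLocalHost().getHostName());
    }
}
```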
Hi All
Yeeeah!!! Great job to everyone who contributed to the release.
Applause to the whole HBase community for bringing out this release...
Regards
Ram
-Original Message-
From: Gaojinchao [mailto:gaojinc...@huawei.com]
Sent: Tuesday, January 24, 2012 9:06 AM
Good job! Good luck! :)
0.92.0 is released at Chinese New Year (Year of the Dragon)!
May this new year bring more success to our Community!
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: January 24, 2012 7:57
To: Hbase-User; gene...@hadoop.apache.org
Subject: ANN: HBase 0
Hello guys,
I set up Hadoop and HBase in EC2. My settings are as follows:
Apache Official Version
Hadoop 0.20.203.0
HBase 0.90.4
1 master node for Hadoop and HBase, 1 tasktracker/regionserver for
Hadoop/HBase.
I already set the HADOOP_CLASSPATH in hadoop-env.sh
export HADOOP_CLASSPATH="$HADOOP_CLA
On Mon, Jan 23, 2012 at 4:09 PM, Dave Latham wrote:
> Woohoo! Many thanks to everyone who contributed to this big release. One
> of HBase's biggest strengths is its community.
>
> Stack, the link to the upgrade guide doesn't seem to be working, and I
> don't see any information on the page about
Woohoo! Many thanks to everyone who contributed to this big release. One
of HBase's biggest strengths is its community.
Stack, the link to the upgrade guide doesn't seem to be working, and I
don't see any information on the page about upgrading to 0.92.
Dave
On Mon, Jan 23, 2012 at 3:57 PM, St
Your HBase crew are pleased to announce the release of HBase 0.92.0.
Download it from your favorite Apache mirror [1].
HBase 0.92.0 includes a wagon-load of new features including coprocessors,
security, a new (self-migrating) file format, distributed log
splitting, etc. For a
complete list of ch
It flushes when it reaches the memstore flush size, not the flush size
plus the global max memstore size.
J-D
On Mon, Jan 23, 2012 at 2:50 PM, Yves Langisch wrote:
> Hi,
>
> I'm currently looking through all the metrics hbase provides and I don't
> understand the memstore flushing behavior I see. I thought the mems
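The two limits being contrasted here can be sketched as configuration. A hedged fragment assuming 0.90-era property names; the values are illustrative, matching the 128MB flush size from the question:

```xml
<!-- Per-region trigger: a memstore flushes as soon as it reaches this
     size (128 MB here), independent of the global heap limit below. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value>
</property>
<!-- Global safety valve: fraction of heap across all memstores that
     forces flushes regardless of individual memstore sizes. -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>
```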
Hi,
I'm currently looking through all the metrics hbase provides and I don't
understand the memstore flushing behavior I see. I thought the memstore is not
flushed until it reaches the maximum memstore size plus the flush size. In my
case this would be 1.6GB+128MB. But I see the following graph
Can you please elaborate more on this metric? I'm using Ganglia to collect the
metrics (poll interval is 15s) and I see a constant value for
hbase.regionserver.requests, which is around 6700 requests. What does that mean
exactly? 6700 unprocessed requests at this point in time?
-
Yves
On Dec 1,
You could always try going with a little smaller heap and see how it works
for your particular workload, maybe 4G. 1G block cache, 1G memstores, ~1G
GC overhead(?), leaving 1G for active program data.
If trying to squeeze memory, you should be aware there is a limitation in
0.90 where storefile i
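The 4G budget above can be expressed as configuration. A sketch assuming 0.90-era property names; the 0.25 fractions are illustrative, corresponding to the ~1G block cache and ~1G memstore split of a 4G heap:

```xml
<!-- ~1G of a 4G heap for the block cache (fraction of heap). -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.25</value>
</property>
<!-- ~1G of a 4G heap for all memstores combined. -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.25</value>
</property>
```

The remaining ~2G is then left for GC overhead and active program data, per the estimate above.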
Hi folks-
The book/refGuide has been updated on the website.
http://hbase.apache.org/book.html
Doug Meil
Chief Software Architect, Explorys
doug.m...@explorys.com
All
Is there a way to have hardcoded /etc/hosts entries for the region and master
address in hbase.
When we try
10.10.10.1 master01
10.10.10.2 slave01
in the /etc/hosts
HBase starts but the region server attempts to connect back to the master on its
DNS name, not master01 as per the
Royston / Tom:
I would encourage you to explore other aggregations where
AggregationProtocol is of help.
Feel free to discuss any limitation in current implementation, propose
suggestions, etc.
Thanks
On Mon, Jan 23, 2012 at 9:03 AM, Ted Yu wrote:
> Thanks Tom for the investigation.
> I will a
Thanks Tom for the investigation.
I will apply the null check in an addendum to HBASE-5139.
Operation.toJSON() does bring in the Jackson JSON processor.
Please confirm that the Jackson jars are in your CLASSPATH:
$ ls lib/*jackson*
lib/jackson-core-asl-1.5.5.jar lib/jackson-jaxrs-1.5.5.jar
lib/jackson
Hi Ted,
Following from what you have said, we have edited AggregateClient.java with the
following modification to the median() method:
...
// scan the region with median and find it
Scan scan2 = new Scan(scan);
// inherit stop row from method parameter
if (startRow != null)
Royston:
The exception came from this line:
ResultScanner scanner = table.getScanner(scan2);
Can you help me review the logic starting with:
// scan the region with median and find it
Scan scan2 = new Scan(scan);
You can log the String form of scan and scan2 before the table.getScanner(
Our memory problems might be as simple as not closing a scanner every time
one is opened, but I know we had to implement Nagios-based restarts of
Thrift, as our 4G Thrift memory gets eaten up and it eventually freezes and
stops responding to requests after less than a week of running. We are
running
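The leak pattern described above is commonly avoided by closing every scanner in a finally block. A minimal self-contained illustration: `FakeScanner` stands in for HBase's `ResultScanner` (an assumption made so the sketch runs on its own; the real fix is the same try/finally shape around `table.getScanner()`).

```java
// Self-contained illustration of the scanner-leak fix: every opened
// scanner is closed even if iteration throws, so none accumulate.
public class ScannerLeak {
    static int open = 0; // stands in for server-side resources held open

    static class FakeScanner implements AutoCloseable {
        FakeScanner() { open++; }
        @Override public void close() { open--; }
    }

    // The finally block guarantees close() runs on every code path.
    static void scanOnce() {
        FakeScanner scanner = new FakeScanner();
        try {
            // ... iterate results here ...
        } finally {
            scanner.close();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) scanOnce();
        System.out.println("scanners still open: " + open); // prints 0
    }
}
```

With HBase's real client the same shape applies: open the `ResultScanner`, iterate inside try, and close in finally (or use try-with-resources), so long-running Thrift servers don't accumulate open scanners.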
Hi Ted,
Finally rebuilt branch/0.92 and applied your patch and rebuilt my code.
Using AggregationClient.sum() on my test table I get the correct result.
Just swapping to AggregationClient.median() I get the following error:
[sshexec] org.apache.hadoop.hbase.client.RetriesExhaustedException: Fa
Thanks again Matt! I will try out this instance type, but I'm concerned
about the MapReduce cluster running apart from HBase in my case, since we
have some MapReduce jobs running and plan to run more. It feels like losing
the great strength of MapReduce by running it far from the data.
2012/1/21 Matt Co
Check it: http://hbase.apache.org/book.html#hadoop
-Original Message-
From: Stuti Awasthi
Sent: Monday, January 23, 2012 2:25 PM
To: user@hbase.apache.org
Subject: RE: Fresh setup hadoop 1.0 and hbase 0.90.5 unable to start master
Ya it does, but I read it somewhere that those warning ne
Ya it does, but I read somewhere that those warnings can be ignored
-Original Message-
From: kim young ill [mailto:khi...@googlemail.com]
Sent: Monday, January 23, 2012 2:01 PM
To: user@hbase.apache.org
Subject: Re: Fresh setup hadoop 1.0 and hbase 0.90.5 unable to start master
y
You don't see any exceptions/warnings in the log files complaining about
append...???
On Mon, Jan 23, 2012 at 7:27 AM, Stuti Awasthi wrote:
> Hi,
> You will find common-*.jar in the $HADOOP_HOME/lib directory. I have set up
> Hadoop-1.0.0 with HBase-0.90.5 and it is working fine for me.
> I hav