Tom,
I was way too curious to resist a reply here.
If you want to store a byte array estimating the unique count for a
particular OLAP cell, won't you see a lot of updates to the same
cell and create a hotspot?
Another option comes to mind. I assume that you get all the
user activit
On Tue, Apr 10, 2012 at 4:35 PM, Alan Chaney wrote:
> I installed maven (maven2.2), downloaded the
> /home/ajc/Downloads/hbase-0.90.5.tar.gz, decompressed and within the root
>
Try mvn3. I had the same issue, and using mvn3 it downloaded asm fine.
St.Ack
Thank you for your reply, Alex.
In my business case, it is unnecessary to store or access more than
one version of the data.
I will set the MAX_VERSIONS => 1 for every table.
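The effect of keeping only one version per cell can be sketched in plain Java with a timestamp-ordered map that is trimmed on write (a toy model of the behavior only, not HBase's implementation; the class and method names are hypothetical):

```java
import java.util.TreeMap;

/** Toy model of a versioned cell that retains at most maxVersions entries. */
class VersionedCell {
    private final int maxVersions;
    private final TreeMap<Long, byte[]> versions = new TreeMap<>(); // timestamp -> value

    VersionedCell(int maxVersions) { this.maxVersions = maxVersions; }

    void put(long timestamp, byte[] value) {
        versions.put(timestamp, value);
        while (versions.size() > maxVersions) {
            versions.pollFirstEntry(); // evict the oldest version
        }
    }

    byte[] get() {
        return versions.isEmpty() ? null : versions.lastEntry().getValue(); // newest wins
    }

    int size() { return versions.size(); }
}
```

With maxVersions = 1, every put effectively replaces the previous value, which is the behavior the MAX_VERSIONS => 1 setting above asks for.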
On Tue, Apr 10, 2012 at 8:54 PM, Alex Baranau wrote:
> Compression applies to the files stored on disks. All versions of a co
> Even my implementation of an atomic increment
> (using a coprocessor) is two orders of magnitude slower than the
> provided implementation. Are there properties inherent to
> coprocessors or Incrementors that would force this kind of performance
> difference?
No.
You may be seeing a performa
Hi
I installed maven (maven2.2), downloaded the
/home/ajc/Downloads/hbase-0.90.5.tar.gz, decompressed and within the
root dir ran:
mvn -DskipTests package
... and got:
[INFO] Scanning for projects...
[INFO]
[INFO] Bu
Andy,
I have attempted to use coprocessors to achieve a passable performance
but have failed so far. Even my implementation of an atomic increment
(using a coprocessor) is two orders of magnitude slower than the
provided implementation. Are there properties inherent to
coprocessors or Incrementors that would force this kind of performance
difference?
On Tue, Apr 10, 2012 at 9:19 AM, Tom Brown wrote:
> Jacques,
>
> The technique I've been trying to use is similar to a bloom filter
> (except that it's more space efficient).
Got it. I didn't realize.
> It's my understanding that
> bloom filters in HBase are only implemented in the context o
Replace 127.0.1.1 with 127.0.0.1 in /etc/hosts. Also add the
hadoop-core*.jar from your HADOOP_HOME and the commons-configuration jar from
HADOOP_HOME/lib to your HBASE_HOME/lib folder.
Regards,
Mohammad Tariq
On Tue, Apr 10, 2012 at 8:18 PM, shashwat shriparv
wrote:
> Comment out 127.0.1.1, if present in the /etc/hosts file.
Here are two upcoming meetups for those interested.
First up, on the day after HBaseCon, we are going to do an all-day bug
bashing session down at Cloudera's Palo Alto office:
http://www.meetup.com/hackathon/events/58953522/ All who are up for bug
squashing are welcome, especially the out-of-towners!
On t
What AMI are you using as your base?
I recently started using the new Linux AMI (2012.03.1) and noticed what looks
like significant improvement over what I had been using before (2011.02 IIRC).
I ran four simple tests repeated three times with FIO: a read bandwidth test, a
write bandwidth test,
Do you have bloom filters enabled? And compression? Both of those can help reduce the disk I/O load,
which seems to be the main issue you are having on the EC2 cluster.
~Jeff
On 4/9/2012 8:28 AM, Jack Levin wrote:
Yes, from %util you can see that your disks are pretty much working at
100%. Which
Tom,
> I am a big fan of the Increment class. Unfortunately, I'm not doing
> simple increments for the viewer count. I will be receiving duplicate
> messages from a particular client for a specific cube cell, and don't
> want them to be counted twice
Gotcha.
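The duplicate-suppressing count Tom describes can be sketched in plain Java with a BitSet standing in for the bloom-filter-like structure (a toy with a single hash function; a real filter would use several hashes and a sized bit array, so treat the names here as hypothetical):

```java
import java.util.BitSet;

/** Toy duplicate-suppressing counter: an id is counted only the first time
 *  it is seen. A single-hash BitSet approximates the bloom-filter-style
 *  membership test; collisions can cause undercounting, never double counting. */
class DedupCounter {
    private final BitSet seen;
    private final int bits;
    private long count;

    DedupCounter(int bits) { this.bits = bits; this.seen = new BitSet(bits); }

    void add(String id) {
        int slot = Math.floorMod(id.hashCode(), bits);
        if (!seen.get(slot)) {   // only increment on first sighting
            seen.set(slot);
            count++;
        }
    }

    long count() { return count; }
}
```

Duplicate messages for the same client id leave the count unchanged, which is the "don't count twice" behavior above; the byte array backing the BitSet is the kind of per-cell state being discussed.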
> I created an RPC endpoint coprocesso
The CLASSPATH(S) are here: http://pastebin.com/wbwEL9Li
Looks to me like the client is 0.95-SNAPSHOT, as is our HBase server.
However, I just noticed the client is built with ZK 3.4.3 but our ZK server is
3.3.3. Is there any incompatibility between those versions of ZK? (I'm going to
make them the
Thanks for your explanation. Now it's clear to me.
Regards!
Yong
On Tue, Apr 10, 2012 at 6:13 PM, Gary Helmling wrote:
> Each and every HRegion on a given region server will have its own
> distinct instance of your configured RegionObserver class.
> RegionCoprocessorEnvironment.getRegion() re
Jacques,
The technique I've been trying to use is similar to a bloom filter
(except that it's more space efficient). It's my understanding that
bloom filters in HBase are only implemented in the context of finding
individual columns (for improving read performance). Are there
specific bloom operat
Each and every HRegion on a given region server will have its own
distinct instance of your configured RegionObserver class.
RegionCoprocessorEnvironment.getRegion() returns a reference to the
HRegion containing the current coprocessor instance.
The hierarchy is essentially:
HRegionServer
\_ HR
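The one-observer-instance-per-region layout described above can be mimicked in plain Java (a toy illustration only, not HBase code; all class names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy model: a region server instantiates the configured observer class
 *  once per region it hosts, so each region holds its own distinct instance. */
class ToyRegionServer {
    static class ToyObserver {
        final String regionName; // the "environment" handed to this instance
        ToyObserver(String regionName) { this.regionName = regionName; }
        String getRegion() { return regionName; } // cf. RegionCoprocessorEnvironment.getRegion()
    }

    final List<ToyObserver> observers = new ArrayList<>();

    void openRegion(String regionName) {
        observers.add(new ToyObserver(regionName)); // a fresh instance per region
    }
}
```

Opening two regions yields two distinct observer instances, each of which sees only its own region, matching Gary's description.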
On Tue, Apr 10, 2012 at 2:58 AM, Royston Sellman
wrote:
> [sshexec] java.lang.IllegalArgumentException: Not a host:port pair: �[][][]
>
We changed how we persist names to ZooKeeper in 0.92.x. It used to be
host:port but is now a ServerName, which is host comma port comma
startcode, and all is p
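The host comma port comma startcode encoding can be illustrated with a small parser (a hypothetical helper for illustration only; HBase's own ServerName class does this for real):

```java
/** Minimal parser for the "host,port,startcode" znode format described above. */
class ParsedServerName {
    final String host;
    final int port;
    final long startcode;

    ParsedServerName(String encoded) {
        String[] parts = encoded.split(",");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Not a host,port,startcode triple: " + encoded);
        }
        this.host = parts[0];
        this.port = Integer.parseInt(parts[1]);
        this.startcode = Long.parseLong(parts[2]);
    }
}
```

An old client that still expects a plain host:port string will choke on these extra bytes, which is consistent with the "Not a host:port pair" error quoted above.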
Hi Todd,
we don't see any problems in dmesg, neither disk controllers nor any other
problem. We have checked the controller status and it reports no failures
of any kind on the disks.
We really don't have a clue as to what might be happening.
On Mon, Apr 9, 2012 at 7:15 PM, Todd Lipcon wrote:
Comment out 127.0.1.1, if present in the /etc/hosts file. Check if you can
ssh to localhost.
On Tue, Apr 10, 2012 at 6:38 PM, Dave Wang wrote:
> Shaharyar,
>
> Did you format the namenode ("hadoop namenode -format")?
>
> What do the namenode logs say?
>
> - Dave
>
> On Tue, Apr 10, 2012 at 6:00
Shaharyar,
Did you format the namenode ("hadoop namenode -format")?
What do the namenode logs say?
- Dave
On Tue, Apr 10, 2012 at 6:00 AM, shaharyar khan wrote:
>
> when i try to start hadoop then its all services like TaskTracker,
> JobTracker, DataNode ,SecondaryNameNode are running except N
Please post your log files from the /var/log directory. You are either
missing something in the hdfs conf file or it is a problem with your hosts
file and name resolution for the namenode.
On 4/10/12 8:00 AM, "shaharyar khan" wrote:
>
>when i try to start hadoop then its all services like TaskT
When I try to start Hadoop, all of its services (TaskTracker,
JobTracker, DataNode, SecondaryNameNode) are running except the NameNode. So
HBase is unable to find Hadoop, as the namenode is not up. Please guide me on
why this is happening. I have checked all configuration files as well as
iptables / fire
Compression applies to the files stored on disks. All versions of a column
are stored the same way (HBase doesn't differentiate them at the time of
writing and they are not placed "near" each other in the file). Given that,
yes you are likely to get the same level of compression (compr. ratio) if
y
Use this version: http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1
Add this code in LzoDecompressor:

@Override
public int getRemaining()
{
    return uncompressedDirectBuf.remaining();
}

Change the return type in LzopCodec:

protected int getCompressedData()
{
.
Sounds like coprocessor is what you need
https://blogs.apache.org/hbase/entry/coprocessor_introduction
Mikael
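The per-row trigger Mohammad asks about maps naturally onto a post-put hook. A plain-Java observer-pattern sketch of that shape (toy code only, not the HBase coprocessor API; class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

/** Toy table that fires registered hooks after every put, mimicking the
 *  postPut() callback a RegionObserver coprocessor would implement. */
class ObservableTable {
    private final Map<String, String> rows = new HashMap<>();
    private final List<BiConsumer<String, String>> postPutHooks = new ArrayList<>();

    void addPostPutHook(BiConsumer<String, String> hook) {
        postPutHooks.add(hook);
    }

    void put(String rowKey, String value) {
        rows.put(rowKey, value);
        for (BiConsumer<String, String> hook : postPutHooks) {
            hook.accept(rowKey, value); // run the monitoring task on the new row
        }
    }
}
```

Each new row invokes the registered hook exactly once, which is the "repeat the task for the newly created row" behavior the question describes.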
On Apr 10, 2012 2:07 PM, "Mohammad Tariq" wrote:
> Hello list,
>
> Is there any feature provided by HBase API using which we can write
> monitoring jobs.For example, I have to perform
Hello list,
Is there any feature provided by the HBase API with which we can write
monitoring jobs? For example, I have to perform some analytics on the
data stored in individual rows, and every time a new row is added I
have to repeat the task for the newly created row. For this I need to
write a co
I replaced hadoop-core-0.20.2-cdh3u1.jar with hadoop-core-1.0.1.jar, but the
build failed.
compile-java:
[javac] Compiling 24 source files to
/app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/build/classes
[javac]
/app/setup/cloud/toddlipcon-hadoop-lzo-c7d54ff/src/java/com/hadoop/compression/l
Why can't i use this version:
http://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/?redir=1
On 17 March 2011 17:29, Ferdy Galema wrote:
> Updating the lzo libraries resolved the problem. Thanks for pointing it
> out and thanks to Todd Lipcon for his hadoop-lzo-packager.
>
>
> On
We have been running M-R jobs successfully on Hadoop v1 and HBase 0.93 SNAPSHOT
(built from trunk) using the HBase Java API. We recently updated our Hadoop and
HBase installations to the latest versions of the code from the source
repositories.
We now have a working Hadoop 1.0.2 cluster wit
Hello,
The description of this method is "/** @return the region associated
with this coprocessor */" and the return value is an HRegion instance.
If I configure the region-coprocessor class in hbase-site.xml, it
means that this coprocessor will be applied to every HRegion which
resides on this
+1, this kind of tool would be very nice.
On 09/04/2012 21:39, Ian Varley wrote:
Thanks, Andy. Yeah, a tool that compares a schema definition with a running
cluster, and gives you a way to apply changes (without offlining, where
possible), would be pretty sweet.
Anybody else think so? Or, do yo