Looks like the IPv6 address is not being parsed correctly. Maybe related
to: https://bugs.openjdk.java.net/browse/JDK-6991580
Alok
On Wed, Mar 18, 2015 at 3:13 PM, Serega Sheypak
serega.shey...@gmail.com wrote:
Hi, I'm trying to use HBaseStorage to read data from HBase
1. I persist something to
Assuming the cluster is not manually balanced, HBase will try to
maintain a roughly equal number of regions on each region server. So,
when you pre-split a table, the regions should get evenly spread out
to all of the region servers. That said, if you are pre-splitting a
new table on a cluster that
I meant, in the normal course of operation, rebalancing will not
affect writes in flight. This is never an issue when pre-splitting
because, by definition, splits occurred before data was written to the
regions.
If I choose to automatically split rows, but choose a row key like the
one we described in
You can use a key like (user_id + timestamp + alert_id) to get
clustering of rows related to a user. To get better write throughput
and distribution over the cluster, you could pre-split the table and
use a consistent hash of the user_id as a row key prefix.
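That scheme can be sketched in plain Java (the bucket count, delimiter, and helper name are my assumptions, not from the thread; with the HBase client you would convert the result via Bytes.toBytes):

```java
public class RowKeys {
    // Assumption: 16 pre-split buckets; match this to your table's split points.
    static final int BUCKETS = 16;

    // Prefix with a stable hash bucket of user_id so writes spread across the
    // pre-split regions, while all rows for one user still sort together.
    static String rowKey(String userId, long timestamp, String alertId) {
        int bucket = Math.floorMod(userId.hashCode(), BUCKETS);
        return String.format("%02d:%s:%013d:%s", bucket, userId, timestamp, alertId);
    }

    public static void main(String[] args) {
        System.out.println(rowKey("user42", 1426706000000L, "alert-1"));
    }
}
```

Zero-padding the timestamp keeps lexicographic order equal to chronological order within a user.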
Have you looked at the rowkey design
You don't want a lot of columns in a write-heavy table. HBase stores
the row key along with each cell/column. (Though old, I find this
still useful:
http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html)
Having a lot of columns will amplify the amount of data being stored.
That
Have you considered placing something like a Kafka queue between the
data stream and the hbase consumer/writer? I have used Kafka in the past
to consume a very high volume of event data and write it to hbase.
Problems we ran into when writing large amounts of data continuously
to hbase are
How are you going to access the results? Do you first lookup the order
and then the results? If so, you could do something like this:
Table 1: Order
row_key = order_id
Column Family = order { columns: order.prop1, order.prop2}
Table 2: order_result
row_key = order_id:result_id
Column Family
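Under that layout, all results of an order share the row-key prefix "order_id:", so one prefix scan fetches them. A plain-Java sketch of the key logic (helper names are mine):

```java
public class OrderKeys {
    // row_key for order_result = order_id + ":" + result_id, as in the layout above.
    static String resultKey(String orderId, String resultId) {
        return orderId + ":" + resultId;
    }

    // The trailing ":" stops "o100" from matching keys of order "o1000".
    static boolean belongsToOrder(String rowKey, String orderId) {
        return rowKey.startsWith(orderId + ":");
    }

    public static void main(String[] args) {
        System.out.println(resultKey("o100", "r7"));
        System.out.println(belongsToOrder("o100:r7", "o100"));
    }
}
```

In HBase you would express the same thing with a Scan bounded by start/stop rows (or a PrefixFilter) rather than filtering client-side.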
interchangeably creates confusion.
On Tue, Feb 10, 2015 at 9:57 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Oh, you're right? I read the question too quickly and skipped the column
information... FuzzyRowFilter is only for the key.
2015-02-11 0:53 GMT-05:00 Alok Singh aloksi...@gmail.com
You could use a QualifierFilter with a RegexStringComparator to do the same.
Alok
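RegexStringComparator takes a Java regex, so a suffix match like "column ends with xyz" is just an anchored pattern. A plain-Java sketch of the match itself (the QualifierFilter wiring is left out):

```java
import java.util.regex.Pattern;

public class SuffixMatch {
    public static void main(String[] args) {
        // The same pattern string you would hand to RegexStringComparator
        // inside a QualifierFilter: qualifiers that end with "xyz".
        Pattern p = Pattern.compile(".*xyz$");
        for (String q : new String[] {"abcxyz", "xyzabc", "xyz"}) {
            System.out.println(q + " -> " + p.matcher(q).matches());
        }
    }
}
```

Note that the server still has to examine every column of the row; the regex only decides which cells are returned.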
On Tue, Feb 10, 2015 at 7:23 PM, anil gupta anilgupt...@gmail.com wrote:
Hi,
I want to get all the columns of a row that end with xyz. I know there
is ColumnPrefixFilter. Is there any other column filter that provides
an efficient
way to implement this kind of filter?
On Wed, Feb 11, 2015 at 1:39 PM, Alok Singh aloksi...@gmail.com wrote:
You could use a QualifierFilter with a RegexStringComparator to do the
same.
Alok
On Tue, Feb 10, 2015 at 7:23 PM, anil gupta anilgupt...@gmail.com
wrote:
Hi
Have you looked at the phoenix project? http://phoenix.apache.org/
It is an SQL layer on top of hbase, an alternative to map-reduce jobs
when ad hoc/realtime queries are needed.
Your use case seems like it would work fairly well with phoenix.
Alok
On Mon, Jan 12, 2015 at 7:06 PM, Wilm Schumacher
One way to model the data would be to use a composite key that is made
up of the RDBMS primary_key + . + field_name. Then just have a single
column that contains the value of the field.
Individual field lookups will be a simple get, and to get all the fields
of a record, you would do a scan with
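A plain-Java sketch of that composite key (the "." delimiter is from the message; helper names are mine, and this assumes field names contain no dot):

```java
public class FieldKeys {
    // Composite row key: RDBMS primary key + "." + field name, one cell per field.
    static String fieldKey(String primaryKey, String fieldName) {
        return primaryKey + "." + fieldName;
    }

    // All fields of one record share the prefix "primaryKey.", so a prefix
    // scan over the row keys returns the whole record.
    static boolean sameRecord(String rowKey, String primaryKey) {
        return rowKey.startsWith(primaryKey + ".");
    }

    // Split a key back into {primaryKey, fieldName}; assumes the field name
    // itself contains no ".".
    static String[] parts(String rowKey) {
        int i = rowKey.lastIndexOf('.');
        return new String[] { rowKey.substring(0, i), rowKey.substring(i + 1) };
    }

    public static void main(String[] args) {
        System.out.println(fieldKey("1001", "email"));
        System.out.println(parts("1001.email")[1]);
    }
}
```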
We ran into this a few weeks ago while adding new nodes to an
existing cluster. Due to a misconfiguration, the new nodes were assigned a
wrong zookeeper quorum, and ended up forming a new cluster.
We saw a similar error in our logs:
2014-01-30 16:47:19,196 ERROR
Hello everyone,
could anyone help me with a small query?
Does HBase decompress data before executing a query, or does it execute
queries on compressed data? And how do Snappy and LZO actually behave?
thanks
Does it reply to your question?
JM
2013/7/11 Alok Singh Mahor alokma...@gmail.com
Hello everyone,
could anyone help me with a small query?
Does HBase decompress data before executing a query, or does it execute
queries on compressed data? And how do Snappy and LZO actually behave?
thanks
On Sat, Jun 8, 2013 at 7:04 PM, priyanka raichand
raichand.priya...@gmail.com wrote:
Hello everyone,
I am trying to activate LZO compression in HBase with the help of the following link:
http://www.nosql.se/2011/09/activating-lzo-compression-in-hbase/
I am getting an error at the 7th step in that link
-m...@spaggiari.org
wrote:
What's the output of ~/packages/hbase-0.94.6/bin/hbase classpath?
2013/4/3 Alok Singh Mahor alokma...@gmail.com:
thank you Harsha
with your advice I used this command to compile that example code
alok@alok:~/exp/hbase/exp$ javac -classpath
`~/packages/hbase-0.94.6
2013/4/4 Alok Singh Mahor alokma...@gmail.com:
thanks Jean, output of ~/packages/hbase-0.94.6/bin/hbase classpath is
alok@alok:~/exp/hbase/exp$ ~/packages/hbase-0.94.6/bin/hbase classpath
/home/alok/packages/hbase-0.94.6/bin/../conf:/usr/lib/jvm/default-java/lib/tools.jar:/home/alok
Spaggiari jean-m...@spaggiari.org
wrote:
org.apache.hadoop.conf.Configuration is missing from the imports...
2013/4/4 Alok Singh Mahor alokma...@gmail.com:
thanks again JM :)
you gave very important clue.
now I am trying the example code in
http://hbase.apache.org/0.94/apidocs/org/apache
. That
will help you a lot.
Again, here, you are simply missing the classpath for your java command.
JM
2013/4/4 Alok Singh Mahor alokma...@gmail.com:
wow, I am not getting any error now while compiling using (javac
-classpath
`~/packages/hbase-0.94.6/bin/hbase classpath` MyLittleHBaseClient.java)
I will recommend that you read some HBase-related books, where you
will learn that column family names need to be as small as
possible... one byte is best.
JM
2013/4/4 Alok Singh Mahor alokma...@gmail.com:
yes thank you so much jean, I will switch to eclipse.
now I tried with (java -classpath
Hi all,
today I started afresh with the example code on
http://blog.rajeevsharma.in/2009/06/using-hbase-in-java-0193.html
but I guess luck is not with me.
I run
javac -classpath
:
Have you gone over the related section in the HBase book which I mentioned?
From the screenshot, you were missing the HBase project, which you can import
through File -> Import, then Existing Project.
Cheers
On Sat, Mar 30, 2013 at 8:00 PM, Alok Singh Mahor alokma...@gmail.com
wrote:
thank you
Eclipse the path of 0.94
workspace root
Establish dependency on HBase project for your Java app
Cheers
On Sun, Mar 31, 2013 at 3:48 AM, Alok Singh Mahor alokma...@gmail.com
wrote:
Hello Tez,
I went through http://hbase.apache.org/book.html#developing
but I didn't find the relevance, or I could
, 2013 at 9:46 AM, Alok Singh Mahor alokma...@gmail.com
wrote:
you can see which external jars I imported in the project tree at
http://i.troll.ws/209459a4.png
your method looks nicer and more sophisticated, so I also tried to use
mvn,
but I am behind a proxy so I set the proxy in ~/.m2
Hi all,
I have set up HBase in pseudo-distributed mode.
I am using hadoop-1.1.2 and hbase-0.94.6, and the setup files are in my home
directory.
content of ~/hbase-0.94.6/conf/hbase-site.xml is
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
Hi all,
should I strictly use only Oracle's Java 6?
will I really face problems if I am using OpenJDK 7?
if so, what type of problems will I face?
is anyone using OpenJDK, or has anyone tried it?
and why is HBase built on top of Sun's Java? I guess OpenJDK has existed
for a long time. should hbase
Boesch java...@gmail.com wrote:
have you included the jar files under:
CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$HBASE_HOME/*:$HBASE_HOME/lib/*:$CLASSPATH
cd $HBASE_HOME/src
javac
examples/mapreduce/org/apache/hadoop/hbase/mapreduce/IndexBuilder.java
2013/3/30 Alok Singh Mahor
Dan,
One of the ways we get around the scanner timeouts is to keep track of
the last row that was read and restart the scan from that row.
--
boolean scanComplete = false;
byte[] lastRow = null;
while (!scanComplete) {
    if (lastRow != null) scan.setStartRow(lastRow); // resume from the last row seen (re-reads it)
    try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) { lastRow = r.getRow(); /* process r */ }
        scanComplete = true; // reached the end without a timeout
    } catch (ScannerTimeoutException e) { /* loop again from lastRow */ }
}
PM, Alok Singh Mahor alokma...@gmail.com
wrote:
Hi all,
I want to set up HBase in standalone mode on the local filesystem.
I want to use the local file system, so I guess there is no need to install
Hadoop and ZooKeeper.
I followed the instructions from
http://hbase.apache.org/book/quickstart.html
i
-in-pseudo.html
I have outlined the whole process there.
HTH
Regards,
Mohammad Tariq
On Mon, Nov 26, 2012 at 4:24 PM, Alok Singh Mahor alokma...@gmail.com
wrote:
wow :)
thanks a lot, my HBase shell commands are working now :)
I will try to set up pseudo-distributed mode
please tell me
Web UI)?
On Mon, Nov 26, 2012 at 9:59 PM, Alok Singh Mahor alokma...@gmail.com
wrote:
Hi all,
I have set up standalone HBase on my laptop. The HBase shell is working fine,
and I am not using Hadoop or ZooKeeper.
I found one frontend for HBase
https://sourceforge.net/projects/hbasemanagergui
mean it is completely useless. The HBase guys have done really great
work. You can even perform some operations from the web UI as well.
HTH
Regards,
Mohammad Tariq
On Tue, Nov 27, 2012 at 12:55 AM, Alok Singh Mahor alokma...@gmail.com
wrote:
I need a frontend for the HBase shell like we
Do your PUTs and GETs have small amounts of data? If yes, then you can
increase the number of handlers.
We have an 8-node cluster on 0.92.1, and these are some of the settings
we changed from 0.90.4:
hbase.regionserver.handler.count = 150
hbase.hregion.max.filesize=2147483648 (2GB)
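In hbase-site.xml form, those two overrides would look like this (values taken from the message; tune them to your own workload):

```xml
<configuration>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>150</value>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <!-- 2 GB -->
    <value>2147483648</value>
  </property>
</configuration>
```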
The regions
Sorry for jumping on this thread late, but I have seen very similar
behavior in our cluster with hadoop 0.23.2 (CDH4B2 snapshot) and hbase
0.23.1. We have a small, 7-node cluster (48GB/16Core/6x10Kdisk/GigE
network) with about 500M rows/4Tb of data. The random read performance
is excellent, but,
I built a distributed FS on top of cassandra a while ago, pretty sure
the same approach will work on hbase too.
1. Create two tables: file_info, file_data
2. Break each file into chunks of size N (I used 256K).
3. In the file_info table, store the metadata, SHA1 hash of the data,
number of
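Steps 2-3 can be sketched in plain Java (the 256 KB chunk size is from the message; class and method names are mine):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Chunker {
    static final int CHUNK = 256 * 1024; // chunk size N from step 2

    // Step 2: break the file bytes into fixed-size chunks for the file_data table.
    static List<byte[]> chunks(byte[] data) {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK) {
            out.add(Arrays.copyOfRange(data, off, Math.min(off + CHUNK, data.length)));
        }
        return out;
    }

    // Step 3: SHA-1 of the whole file, stored in file_info as an integrity check.
    static String sha1Hex(byte[] data) {
        try {
            StringBuilder sb = new StringBuilder();
            for (byte b : MessageDigest.getInstance("SHA-1").digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-1 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        byte[] file = new byte[600 * 1024];
        System.out.println(chunks(file).size() + " chunks, sha1=" + sha1Hex(file));
    }
}
```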
?
- Look in the Namenode log for that file, is it being asked to delete
it or something before it sends a NotReplicatedYetException?
Hope this helps,
J-D
On Thu, Dec 29, 2011 at 3:11 PM, Alok Singh a...@urbanairship.com wrote:
When attempting to gracefully shutdown a regionserver, I saw
When attempting to gracefully shutdown a regionserver, I saw a couple
of NotReplicatedYet exceptions in the logs (below). I can't find the
file that is causing this exception on the HDFS filesystem. Have we
potentially lost the data, or is this exception benign?
Alok
hbase: 0.90.3
hadoop: