Thanks!
java org.apache.hadoop.util.PlatformName
Linux-amd64-64
On 27 Aug, 2014, at 8:31 am, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
This command will give you the exact name:
java org.apache.hadoop.util.PlatformName | sed -e 's/ /_/g'
Can you try to run it?
But it's most
Hello,
This is the first time I am sending a query on the hbase mailing list. Hopefully
this is the correct group to ask HBase/Hadoop-related questions.
I am running HBase 0.92, Hadoop 2.0 (CDH 4.1.3). Recently, there was some
instability in my DNS service and host lookup requests failed.
I tried to create a table but it is giving me the error below. Kindly check:
hbase(main):003:0> create 't1', 'f1'
ERROR: java.io.IOException: Table Namespace Manager not ready yet, try
again later
at
org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3205)
at
Hi Arun,
My 2cents.
I've seen this some time in the past and, after doing some research, the
issue seems to be related to
https://issues.apache.org/jira/browse/HADOOP-6356 . HLog (SequenceFile)
internally uses FileContext (unlike other HBase components, which use
FileSystem), which doesn't cache
Hi, Praveen.
Can you share more details on your HBase version?
You can look at the hbase:namespace table to find out whether it is enabled and
deployed somewhere.
(typically you can find this on master
http://master:60010/table.jsp?name=hbase:namespace
Andrey.
On Wed, Aug 27, 2014 at 10:56 AM, Praveen G
Hi,
Many thanks for your advice!
Finally, I managed to make it work.
I needed to add:
export JAVA_LIBRARY_PATH=$HBASE_HOME/lib/native/Linux-amd64-64
then run:
bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test
snappy
2014-08-27 15:51:39,459 INFO [main]
Hi tobe,
yes we are replicating during verify.
So as I understand it, the problem is that during the verify job a key is
updated (with a new timestamp)
while that key is being verified. So on one side the key's timestamp is in the
verify timerange, but on the
other side it no longer is. So the key is
Hi All,
As we know, all rows are always sorted lexicographically by their row key.
In lexicographical order, each key is compared at the binary level, byte by
byte and from left to right.
See the example below, where the row key is some integer value and the output of
a scan shows the lexicographical order of
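Since the scan output itself is truncated above, here is a minimal, self-contained sketch (plain Java, hypothetical keys, not HBase's own comparator class) of the same comparison HBase performs: unsigned, byte by byte, left to right.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class LexOrderDemo {
    // Compare two row keys the way HBase does: unsigned bytes,
    // left to right, shorter key first on a tie.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        String[] keys = {"1", "3", "15", "20", "7"};
        byte[][] rows = new byte[keys.length][];
        for (int i = 0; i < keys.length; i++) {
            rows[i] = keys[i].getBytes(StandardCharsets.UTF_8);
        }
        Arrays.sort(rows, LexOrderDemo::compare);
        // Lexicographic, not numeric: prints 1, 15, 20, 3, 7
        for (byte[] r : rows) {
            System.out.println(new String(r, StandardCharsets.UTF_8));
        }
    }
}
```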
Hi Arthur,
Glad to hear you got it!
Regarding #2, was JAVA_LIBRARY_PATH already set before? If so, that might
have been the issue. HBase will append to this path all that it needs (if
required), so I don't think there is anything else you need to add.
Regarding #1 I don't think it's an error.
Hi JM,
Thank you so much!
I had not set JAVA_LIBRARY_PATH before.
Now I added [export JAVA_LIBRARY_PATH=$HBASE_HOME/lib/native/Linux-amd64-64]
to hbase-env.sh
and also added [export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64]
to hadoop-env.sh
I hope this is the correct way.
Can you
Hi Sanjiv ;)
If you want your keys to be ordered as integers, why do you not simply
store them as integers and not as strings? HBase orders the rows
lexicographically, and you cannot change that. Yes, you can implement a key
comparator if you want, but I don't think it's going to change anything
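A rough sketch of this suggestion (plain Java, a stand-in for HBase's `Bytes.toBytes(int)`, assuming non-negative values): store the key as a fixed-width, big-endian byte[], and byte order then coincides with numeric order.

```java
public class IntKeyDemo {
    // Fixed-width big-endian encoding: for non-negative ints, unsigned
    // byte-by-byte comparison gives the same result as numeric comparison,
    // so rows keyed this way come back from a scan in numeric order.
    static byte[] toBytes(int v) {
        return new byte[] {
            (byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v
        };
    }

    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < 4; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }

    public static void main(String[] args) {
        // 3 < 15 < 20 holds in byte order too, unlike the strings "15", "20", "3".
        System.out.println(compare(toBytes(3), toBytes(15)) < 0);  // true
        System.out.println(compare(toBytes(15), toBytes(20)) < 0); // true
    }
}
```

Note that negative values would break this (the sign bit makes them compare larger than any positive value in unsigned byte order), so they need an order-preserving encoding.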
Hi Arthur,
Welcome to our world ;)
For JAVA_LIBRARY_PATH I don't even set it anywhere.
hbase@node3:~/hbase-0.94.3$ echo $JAVA_LIBRARY_PATH
hbase@node3:~/hbase-0.94.3$ grep JAVA_LIBRARY_PATH conf/hbase-env.sh
hbase@node3:~/hbase-0.94.3$ grep JAVA_LIBRARY_PATH bin/*
bin/hbase:#
Are you sure your HBase is running? Can you paste the last 200 lines of
your master logs in pastebin?
JM
2014-08-27 3:32 GMT-04:00 Andrey Stepachev oct...@gmail.com:
Hi, Praveen.
Can you share more details on your HBase version?
You can look at the hbase:namespace table to find out whether it is enabled
Check your pom.xml, some artifacts changed from earlier releases.
Artem Ervits
Data Analyst
New York Presbyterian Hospital
- Original Message -
From: Malte Otten [mailto:malte.maltesm...@gmail.com]
Sent: Monday, August 25, 2014 02:12 PM
To: user@hbase.apache.org
Hi JM,
Thanks for the link... I agree with you that it can be done when the key is an
integer.
The reason why I am asking for a custom KeyComparator is that sometimes the key
is not just an integer or some single value; it can be a composition of multiple
values like COUNTRYCITY, where the key is made up of two values, one
Sanjiv:
Is the country code of fixed width?
If so, as long as the country is the prefix, it would be sorted first.
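For illustration, a hedged sketch of that composite-key layout (plain Java; the three-byte width and field names are assumptions, not from the thread): pad the country code to a fixed width so it sorts as a clean prefix, then append the city.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CompositeKeyDemo {
    static final int COUNTRY_WIDTH = 3; // hypothetical fixed width, e.g. ISO alpha-3

    // COUNTRY + CITY row key: the country occupies a fixed-width prefix
    // (zero-padded), so rows group by country first, then sort by city.
    static byte[] rowKey(String country, String city) {
        byte[] c = Arrays.copyOf(country.getBytes(StandardCharsets.UTF_8), COUNTRY_WIDTH);
        byte[] t = city.getBytes(StandardCharsets.UTF_8);
        byte[] key = Arrays.copyOf(c, COUNTRY_WIDTH + t.length);
        System.arraycopy(t, 0, key, COUNTRY_WIDTH, t.length);
        return key;
    }

    // Unsigned lexicographic comparison, as HBase applies to row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // All AUS rows sort before all IND rows, regardless of city.
        System.out.println(compare(rowKey("AUS", "Sydney"), rowKey("IND", "Agra")) < 0); // true
    }
}
```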
Cheers
On Wed, Aug 27, 2014 at 8:00 AM, @Sanjiv Singh sanjiv.is...@gmail.com
wrote:
Hi JM,
Thanks for the link... I agree with you that it can be done when the key is an
integer.
The reason
Also, you might want to take a look at Apache Phoenix if you want to play
with composite keys without having to do all the coding...
2014-08-27 11:04 GMT-04:00 Ted Yu yuzhih...@gmail.com:
Sanjiv:
Is country code of fixed width ?
If so, as long as country is the prefix, it would be sorted
Hi Ted,
Yes, it would work for country codes like IND for India, AUS for Australia.
But in my use case, it's the full country name (not just a three-letter
country code).
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Wed, Aug 27, 2014 at 8:34 PM, Ted Yu yuzhih...@gmail.com wrote:
Sanjiv:
Hi JM,
I am just exploring whether it is possible or not in HBase.
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Wed, Aug 27, 2014 at 8:39 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Also, you might want to take a look at Apache Phoenix if you want to play
with
Sanjiv:
Is there a reason for you to choose the full country name?
The row key is stored with every KeyValue in the same row, so choosing an
abbreviation would reduce storage cost.
Cheers
On Wed, Aug 27, 2014 at 8:38 AM, @Sanjiv Singh sanjiv.is...@gmail.com
wrote:
Hi Ted,
Yes it would work for
Hi Ted,
Yes, definitely, I can make it a fixed country code.
The example I chose is just one use case with a specific ordering need.
I am wondering whether we can use any user object as a row key, with the
ordering of rows within HBase defined explicitly by a custom KeyComparator.
Regards
A brief search for KeyComparator using http://search-hadoop.com/ didn't
turn up any previous discussion on using a custom KeyComparator.
I would suggest conforming to best practices of row key design and
leaving a custom KeyComparator as a last resort.
Cheers
On Wed, Aug 27, 2014 at 9:24 AM, @Sanjiv
Hi,
The reason we cannot close the ResultScanner (or issue a multi-get) is
that we have wide rows with many columns, and we want to iterate over them
rather than get all the columns at once.
There's a special but common case where for each row we only need the first
column. Is there a better way
On Thu, Aug 28, 2014 at 1:20 AM, Jianshi Huang jianshi.hu...@gmail.com
wrote:
There's a special but common case where for each row we only need the first
column. Is there a better way to do this than multiple scans + take(1)?
We still need to set a column range; is there a way to get the
So you want to specify several columns, e.g. c2, c3, and c4, and the GET is
supposed to return the first one of them (it doesn't have to be c2; it can be c3
if c2 is absent)?
To my knowledge there is no such capability now.
Cheers
On Wed, Aug 27, 2014 at 10:28 AM, Jianshi Huang jianshi.hu...@gmail.com
You might also have a look at using OrderedBytes [0] instead of Bytes for
encoding your values to byte[]. This is the kind of use-case those encoders
are intended to support.
Thanks,
Nick
[0]:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/OrderedBytes.html
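OrderedBytes itself lives in hbase-common, but the core idea for signed integers can be sketched in plain Java (this only illustrates the sign-bit trick behind order-preserving encodings, not OrderedBytes' actual wire format):

```java
public class OrderPreservingInt {
    // Flipping the sign bit maps Integer.MIN_VALUE..MAX_VALUE onto the
    // unsigned range 0x00000000..0xFFFFFFFF, so lexicographic byte order
    // matches numeric order even for negative values.
    static byte[] encode(int v) {
        int u = v ^ 0x80000000; // flip sign bit
        return new byte[] {
            (byte) (u >>> 24), (byte) (u >>> 16), (byte) (u >>> 8), (byte) u
        };
    }

    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < 4; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compare(encode(-5), encode(3)) < 0);   // true
        System.out.println(compare(encode(-10), encode(-2)) < 0); // true
    }
}
```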
On Wed, Aug 27, 2014
On Wed, Aug 27, 2014 at 10:20 AM, Ted Yu yuzhih...@gmail.com wrote:
A brief search for KeyComparator using http://search-hadoop.com/ didn't
turn up previous discussion on using custom KeyComparator.
You cannot change key comparator: http://search-hadoop.com/m/8XeC02JW6xV
St.Ack
Hi,
I would like to find out how I can reduce the number of mappers to fewer
than the number of regions in the HBase table.
Could someone please let me know how to do that in Pig while using the load
command as:
LOAD 'hbase://$HBASE_TABLE' USING
org.apache.pig.backend.hadoop.hbase.HBaseStorage
Hi Fahri,
It will be one mapper per region, but if you want to have fewer mappers
running at the same time, you can reduce the capacity of your queue. That way
you will still have X mappers in total, but only Y mappers will run in
parallel. You cannot configure X, but you can configure Y.
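For example, with the MR1 CapacityScheduler the per-queue capacity is what bounds Y. A hedged fragment (the queue name and value here are hypothetical, not from the thread):

```xml
<!-- capacity-scheduler.xml: cap the share of cluster slots for the queue
     the Pig job is submitted to; total mappers stay one per region, but
     fewer run concurrently. -->
<property>
  <name>mapred.capacity-scheduler.queue.pigjobs.capacity</name>
  <value>10</value>
</property>
```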
JM
Yes, you're right. After applying that patch, a little inconsistency is
acceptable, but not like what you saw before.
Actually now we count ONLY_IN_SOURCE_TABLE_ROWS, ONLY_IN_PEER_TABLE_ROWS
and CONTENT_DIFFERENT_ROWS rather than just BADROWS.
On Wed, Aug 27, 2014 at 4:25 PM, Hansi Klose
Very similar. We set up a column range (we're using ColumnRangeFilter right
now), and we want the first column in the range.
The problem is we have a lot of rows.
If there's no such capability, then we need to control the parallelism
ourselves.
Shall I sort the rows first before scanning? Will a
You can enhance ColumnRangeFilter to return the first column in the range.
In its filterKeyValue(Cell kv) method:
int cmpMax = Bytes.compareTo(buffer, qualifierOffset, qualifierLength,
    this.maxColumn, 0, this.maxColumn.length);
if (this.maxColumnInclusive && cmpMax <= 0 ||
Thanks Stack/Ted,
I got the logic: not to have a key comparator for rows or column families.
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Thu, Aug 28, 2014 at 12:36 AM, Stack st...@duboce.net wrote:
On Wed, Aug 27, 2014 at 10:20 AM, Ted Yu yuzhih...@gmail.com wrote:
A brief search for