Did you specify hbase.rootdir ?
Cheers
On Sat, Dec 7, 2013 at 12:27 AM, Rural Hunter wrote:
> Hi,
>
> I set up a hbase cluster(hbase-0.96.0-hadoop2 + hadoop-2.2.0) with 1
> master and 3 slaves. Everything seems fine when I use hbase shell on the
> master. I could create/scan tables. But when I
What version of Hadoop / HBase are you using ?
Cheers
On Sat, Dec 7, 2013 at 9:38 AM, iain wright wrote:
> Hi folks,
>
> One of our RS/DN/TT nodes went down dirty (kernel panic). Users contacted
> us about reports failing, and I saw weird logs in the jobtracker for "pending
> shutdown" etc.
>
> P
Hi everyone,
I have a question about the hbase thrift server and running scans in
particular. The thrift server maintains a map of int -> ResultScanner(s).
These integers are passed back to the client. Now in a typical setting
people run many thrift servers and round robin rpc(s) to them.
It seems t
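The bookkeeping described above can be sketched as a small registry. This is a hypothetical illustration only: `ScannerRegistry` and its method names are made up, and a generic payload stands in for the real ResultScanner the thrift server keeps.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the id -> scanner map a thrift server keeps.
// The integer id is handed back to the client; a later call with that
// id only works if it reaches the same server process.
class ScannerRegistry<T> {
    private final AtomicInteger nextId = new AtomicInteger(0);
    private final Map<Integer, T> scanners = new ConcurrentHashMap<>();

    int register(T scanner) {
        int id = nextId.incrementAndGet();
        scanners.put(id, scanner);
        return id;
    }

    T lookup(int id) {
        return scanners.get(id);
    }

    T remove(int id) {
        return scanners.remove(id);
    }
}
```

Because the map lives in one process, round-robining subsequent calls for the same scanner id across many thrift servers breaks the lookup, which is the issue the message above is raising.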
Filed https://issues.apache.org/jira/browse/HBASE-10102
From: lars hofhansl
To: "user@hbase.apache.org" ; hbase-dev
Sent: Friday, December 6, 2013 5:31 PM
Subject: Re: HBase returns old values even with max versions = 1
+ dev list
Specifically:
Currentl
Hi folks,
One of our RS/DN/TT nodes went down dirty (kernel panic). Users contacted
about reports failing, and i saw wierd logs in the jobtracker for "pending
shutdown" etc.
Proceeded to stop jobtracker/tt nodes, hbase, and hdfs.
On attempting to turn back on the NN + DN's our namenode is failin
+ dev list
Specifically:
Currently the workflow in ScanQueryMatcher is something like this:
1. max versions = min(max versions from the Scan, max versions of the column family)
2. filter by timerange
3. filter out columns (i.e. columns not specified in the scan)
4. apply custom filters
5. filter by
Every KV is passed through this filtering process.
What we sho
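The ordering problem in the workflow above can be sketched with plain longs standing in for KeyValues; the class and method names here are made up for illustration, not HBase API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: versions of one cell, newest first, as (timestamp, value) pairs.
// Mirrors the workflow above: the timerange filter runs BEFORE version
// counting, so with max versions = 1 an old value can still be returned
// when the newest version falls outside the requested timerange.
class ScanSketch {
    static List<long[]> scan(List<long[]> versionsNewestFirst,
                             long minTs, long maxTs, int maxVersions) {
        List<long[]> out = new ArrayList<>();
        for (long[] kv : versionsNewestFirst) {
            long ts = kv[0];
            if (ts < minTs || ts >= maxTs) continue; // step 2: timerange
            if (out.size() >= maxVersions) break;    // last: version count
            out.add(kv);
        }
        return out;
    }
}
```

With versions at ts=200 (value 2) and ts=100 (value 1), a scan over [0, 150) with max versions = 1 returns the value at ts=100, even though it has logically been superseded.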
The old versions can still be around until a flush and/or compaction.
During a user-level scan, HBase first filters by timerange and then counts the
versions.
I agree, this is counterintuitive in this case. In other cases people want to
first limit by timerange, and then get x number of versions
To add some color... HBase will store the versions of a KeyValue next to each other
(at least after a compaction).
If your queries typically request most of the versions of a KV that works out
nicely.
If, however, you typically query only the latest version or a specific version
then HBase will load
hello all,
I was just taking a look at the HTable source code to get a bit more
understanding of hbase from a client perspective.
I noticed that puts are put into a buffer (writeAsyncBuffer) that gets
flushed if it gets to a certain size.
writeAsyncBuffer can take objects of type Row, which includes
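The size-triggered flushing being described can be sketched like this; `WriteBuffer` and its callback are hypothetical stand-ins for HTable's writeAsyncBuffer and its flush, not the real client API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of a client-side write buffer: operations accumulate until an
// estimated size threshold is crossed, then the whole batch is flushed.
class WriteBuffer<T> {
    private final List<T> buffer = new ArrayList<>();
    private final long flushSize;
    private long currentSize = 0;
    private final Consumer<List<T>> flusher;

    WriteBuffer(long flushSize, Consumer<List<T>> flusher) {
        this.flushSize = flushSize;
        this.flusher = flusher;
    }

    void add(T op, long estimatedSize) {
        buffer.add(op);
        currentSize += estimatedSize;
        if (currentSize >= flushSize) flush();
    }

    void flush() {
        if (buffer.isEmpty()) return;
        flusher.accept(new ArrayList<>(buffer));
        buffer.clear();
        currentSize = 0;
    }
}
```

The upside is fewer round trips; the downside, as with the real buffer, is that unflushed writes sit only in client memory until the threshold is hit or flush is called explicitly.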
Hi Igor,
Have you looked at this constructor?
/**
* Constructs KeyValue structure filled with null value.
* @param row - row key (arbitrary byte array)
* @param family family name
* @param qualifier column qualifier
*/
public KeyValue(final byte [] row, final byte [] family,
    final byte [] qualifier)
Before the second get command was executed, was there compaction on server
side ?
You can find out by going to region server hosting row 'r1' and check
server log.
Cheers
On Sat, Dec 7, 2013 at 12:05 AM, Niels Basjes wrote:
> Hi,
>
> I have the desire to find the columns that have not been up
bq. HDP2(HBase 0.95.2.2.0.5.0-64
HDP2 goes with 0.96.0
bq. java.lang.ClassNotFoundException: org.apache.hadoop.hbase.filter.
WritableByteArrayComparable.
Can you show us the stack trace ?
WritableByteArrayComparable doesn't exist in 0.96 and later branches.
Cheers
On Sat, Dec 7, 2013 at 4:22
Sounds like hbase's HFileOutputFormat depends on KeyValue's "family" field.
I don't want that.
All I want is to keep keys and values in an indexed file. TFile would work
as well. But it seems there is no TFileOutputFormat available.
On Fri, Dec 6, 2013 at 4:47 PM, Igor Gatis wrote:
> That's t
Register your hbase and zookeeper jars in the pig script.
-Rohini
On Fri, Dec 6, 2013 at 1:56 AM, Kyle Lin wrote:
> Hey there
>
> First, my Environment: Hortonworks HDP2(HBase 0.95.2.2.0.5.0-64, Pig
> 0.11.1).
>
> I use pig to load data from hbase, then got Exception Message of
That's the kind of solution I'm looking for.
Here is what I have:
String jobName = "Seq2HFile";
Job job = new Job(getConf(), jobName);
job.setJarByClass(Seq2HFile.class);
job.setMapperClass(MyIdentityMapper.class);
job.setMapOutputKeyClass(BytesWritable.class);
job.setM
Michael,
Both columns and timestamps are valid choices. Events have sources, and in my
approach the source is in the rowkey and the time is in the timestamp.
In your approach you embed the time into the column qualifier.
It's easy to get the last N events in my approach using a "Give first N
key-values"-type of Filter in y
Hi,
I have the desire to find the columns that have not been updated for more
than a specific time period.
So I want to do a scan against the columns with a timerange.
The normal behavior of HBase is that you then get the latest value in that
time range (which is not what I want).
As far as I un
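Outside of HBase, the selection itself is simple. A sketch with a plain map of column name -> latest timestamp; `StaleColumns` and its method name are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: given each column's latest timestamp, keep the columns whose
// newest version is older than the cutoff, i.e. "not updated since".
class StaleColumns {
    static List<String> staleSince(Map<String, Long> latestTs, long cutoff) {
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, Long> e : latestTs.entrySet()) {
            if (e.getValue() < cutoff) stale.add(e.getKey());
        }
        return stale;
    }
}
```

The difficulty raised in the message is that a vanilla timerange scan returns the latest value *within* the range, so old versions reappear; what is needed is the newest timestamp per column, not a ranged read.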
Hi,
I set up an hbase cluster (hbase-0.96.0-hadoop2 + hadoop-2.2.0) with 1
master and 3 slaves. Everything seems fine when I use hbase shell on the
master. I could create/scan tables. But when I tried to run a test Java
class on the remote server, it just hung up. This is the simple program:
We are evaluating a few different options at the moment. One of them is an
option where we refactor the column identifiers as you suggested. The
use of HBase versions is another potential option. Rest assured we will end
up doing a more thorough analysis and take the overall design into accou
Hi Igor,
I will say, MapReduce.
SequenceFileInputFormat
HFileOutputFormat
JM
2013/12/5 Igor Gatis
> I have SequenceFiles I'd like to convert to HFile. How do I do that?
>
unsubscribe
On Dec 5, 2013, at 7:58 AM, oc tsdb wrote:
> Hi,
>
>
> While exporting HBase snapshots we need to specify the number of mappers to use
> as mentioned below. To get better performance, how many mappers can be used,
> and please let us know based on which parameters we need to decide on
> nu
Look,
Just because you can do something doesn't mean it's a good idea.
From a design perspective it's not a good idea.
Ask yourself why does versioning exist? What purpose does versioning serve in
HBase?
From a design perspective you have to ask yourself what are you attempting to
do.
H
Hi,
That means if I have a cluster with 5 data nodes, I can have up to 5 mappers.
More mappers means more throughput.
Am I correct?
Thanks
-OC
On Thu, Dec 5, 2013 at 7:31 PM, Matteo Bertozzi wrote:
> to make it simple, the number of mappers is the number of "machines" that
> you want to use.
> each
Hey there
First, my Environment: Hortonworks HDP2(HBase 0.95.2.2.0.5.0-64, Pig
0.11.1).
I used pig to load data from hbase, then got an exception message of
java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.filter.WritableByteArrayComparable.
My script is like below:
samples = LO