Hansi:
HBASE-10395 is fixed in 0.98.0
Have you considered upgrading to 0.98.x ?
Cheers
On Tue, May 13, 2014 at 4:59 AM, john guthrie wrote:
> can't you just pick an old date - January 1, 1970 maybe?
>
>
> On Tue, May 13, 2014 at 4:58 AM, Hansi Klose wrote:
>
> > Hi because of the Issue
> >
>
I use two HBase tables as mapper input:
one is a URL table, the other holds links between URLs.
Sample rows of the URL table: http://abc.com/index.htm, content1
http://abc.com/news/123.htm, content
Sample row of the links table:
http://abc.com/index.htm++http://abc.com/news/123.htm anchor1
The mapper will aggregate URLs
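A minimal sketch of how the mapper might split a links-table row key, assuming the `++` separator shown in the sample row above (class and method names here are illustrative, not from the original post):

```java
// Parse a links-table row key of the form "<source>++<target>", as in the
// sample row above, so the mapper can emit one record per URL side.
public class LinkKeyParser {
    public static String[] parse(String rowKey) {
        int sep = rowKey.indexOf("++");
        if (sep < 0) {
            throw new IllegalArgumentException("no '++' separator in: " + rowKey);
        }
        return new String[] { rowKey.substring(0, sep), rowKey.substring(sep + 2) };
    }

    public static void main(String[] args) {
        String[] parts = parse("http://abc.com/index.htm++http://abc.com/news/123.htm");
        System.out.println(parts[0] + " -> " + parts[1]);
    }
}
```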
You can also create a table via the HBase shell with pre-split regions, like
this...
Here is a 32-byte key split into 16 different regions, using base16 (i.e. an
MD5 hash) for the key type.
create 't1', {NAME => 'f1'},
{SPLITS=> ['1000',
'2000',
'300
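If you would rather compute the split points than type them into the shell, a small sketch of generating evenly spaced 4-character hex split keys (matching the '1000', '2000', ... pattern in the shell example above; the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Generate numRegions-1 evenly spaced 4-character hex split keys for a
// table keyed by hex strings (e.g. MD5 hashes), mirroring the shell example.
public class HexSplits {
    public static List<String> splitKeys(int numRegions) {
        List<String> keys = new ArrayList<>();
        int step = 0x10000 / numRegions;       // key space "0000".."ffff"
        for (int i = 1; i < numRegions; i++) {
            keys.add(String.format("%04x", i * step));
        }
        return keys;
    }

    public static void main(String[] args) {
        // 15 split points produce 16 regions
        System.out.println(splitKeys(16));
    }
}
```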
Hi HBasers!
Subash and I are organizing the HBase Birds of a Feather (BoF) session at
Hadoop Summit San Jose this year. We're looking for 4-5 brave souls willing
to stand up for 15 minutes and tell the community what's working for them
and what isn't. Have a story about how this particular feature
Hi,
I have sent several subscribe messages to the address (
user-subscr...@hbase.apache.org), but have received no response. Is the mailing
list closed to subscription?
Can you see this message?
A patch for the refguide would be great, perhaps in the troubleshooting
mapreduce section here http://hbase.apache.org/book.html#trouble.mapreduce?
St.Ack
On Tue, May 13, 2014 at 7:07 AM, Geovanie Marquez <
geovanie.marq...@gmail.com> wrote:
> The following property does exactly what I wanted ou
Sorry I could not get on this sooner. I do typically run a list of tests
from my checklist to verify the release, which is why I sometimes do not
vote on releases. Let me do the dutiful for this.
Agreed with the general sentiment. Please reconsider.
Enis
On Mon, May 12, 2014 at 3:59 PM, Stack
Here is my belated +1:
checked signature
ran test suite - passed, see bottom of email
pointed Phoenix at 0.98.2-hadoop2 and ran tests - passed
-
[INFO] HBase . SUCCESS [1.565s]
[INFO] HBase - Common SUCCESS [
The following property does exactly what I wanted our environment to do. I
had a 4GiB heap and ran the job and no jobs failed. Then I dropped our
cluster heap to 1GiB and reran the same resource-intensive task.
This property must be added to the "HBase Service Advanced Configuration
Snippet (Safet
You can pre-split a table using hex character strings for the start key and
end key, along with the number of regions to split into:
HTableDescriptor tableDes = new HTableDescriptor(tableName);
tableDes.addFamily(new HColumnDescriptor("f1"));
// create the table with pre-computed split keys (admin is an HBaseAdmin)
admin.createTable(tableDes, splitKeys);
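To pass hex split points to `HBaseAdmin.createTable(HTableDescriptor, byte[][])`, the strings must be converted to a `byte[][]`; a small helper sketch (class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Convert hex split-key strings into the byte[][] form that
// HBaseAdmin.createTable(HTableDescriptor, byte[][]) expects.
public class SplitBytes {
    public static byte[][] toSplitKeys(List<String> hexKeys) {
        byte[][] out = new byte[hexKeys.size()][];
        for (int i = 0; i < hexKeys.size(); i++) {
            out[i] = hexKeys.get(i).getBytes(StandardCharsets.UTF_8);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[][] splits = toSplitKeys(Arrays.asList("1000", "2000", "3000"));
        System.out.println(splits.length + " split keys");
    }
}
```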
Possibly this was due to HBASE-7186 or HBASE-7188. It's especially odd
since I don't see usages outside the mapreduce package (at least for the
classes that were of interest to me), so there shouldn't be any issue with
changing the artifact the package is deployed in.
Is this more a question for t
Have you looked at http://hbase.apache.org/book/performance.html ?
Cheers
On May 13, 2014, at 3:14 AM, Flavio Pompermaier wrote:
> So just to summarize the result of this discussion..
> do you confirm me that the last version of HBase should (in theory) support
> mapreduce jobs on tables that i
Dear all,
I am using HBase 0.96.1.1-hadoop2 with CDH-5.0.0.
I have an application where I have registered a coprocessor on my table to
collect a few statistics on the read/write/delete requests.
I have implemented preGetOp, prePut and preDelete accordingly and it is working
as expected in case of read
can't you just pick an old date - January 1, 1970 maybe?
On Tue, May 13, 2014 at 4:58 AM, Hansi Klose wrote:
> Hi because of the Issue
>
> https://issues.apache.org/jira/browse/HBASE-10395
>
> i want to start my verification job with a starttime.
>
> To verify the whole time range i need the ol
Hi, because of the issue
https://issues.apache.org/jira/browse/HBASE-10395
I want to start my verification job with a starttime.
To verify the whole time range I need the oldest timestamp in that table.
Is it possible with the HBase shell to get the key with the oldest timestamp?
So that I can
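As far as I know the shell has no command that returns the oldest timestamp directly; one client-side workaround is to scan the table with all versions and keep a running minimum over the cell timestamps. The scan itself is omitted here; this only sketches the reduction step, with illustrative names:

```java
// Keep a running minimum over cell timestamps seen during a scan: call
// observe() with each Cell's timestamp, then read result() at the end.
public class OldestTimestamp {
    private long min = Long.MAX_VALUE;

    public void observe(long cellTimestamp) {
        if (cellTimestamp < min) {
            min = cellTimestamp;
        }
    }

    public long result() {
        return min;
    }

    public static void main(String[] args) {
        OldestTimestamp ot = new OldestTimestamp();
        for (long ts : new long[] { 1399968000000L, 1262304000000L, 1300000000000L }) {
            ot.observe(ts);
        }
        System.out.println(ot.result());
    }
}
```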
So just to summarize the result of this discussion:
can you confirm that the latest version of HBase should (in theory) support
mapreduce jobs on tables that in the meantime could be updated by external
processes (i.e. not by the mapred job)?
One of the answer about this was saying: "Poorly tuned
Downloaded the srcs, built the tarball, ran tests, and did some testing
from that, mainly with visibility.
Compaction, scanning, etc. all seemed fine to me.
Very belated +1 from me too.
Regards
Ram
On Tue, May 13, 2014 at 8:17 AM, Anoop John wrote:
> Thanks Andy. Will help you better with
Getting folks to test releases is like pulling teeth, or herding cats, or getting
three kids to agree on which song from Frozen is the best :)
All jokes aside, this *is* a problem. With the goal of more frequent releases
release verification needs to be a smooth process.
I do not know how to fix it,
Yes, it only occurs occasionally. It's OK now.
On Thu, May 8, 2014 at 4:11 AM, Ted Yu wrote:
> The warning came from:
>
> try {
> // pre-fetch certain number of regions info at region cache.
> MetaScanner.metaScan(conf, this, visitor, tableName, row,
> this.prefet
Yes Ted, I am using hbase-0.94.19 and I got '-Dcompile-protobuf' from
http://hbase.apache.org/book.html#build.protobuf.
As of now, my HBase is working fine. I will update you if I face any issues.
Regards,
Raja
On Monday, 12 May 2014 10:05 PM, Ted Yu wrote:
Did you get '-Dcompile-protobuf'