Mikhail Bautin created HBASE-7946:
-
Summary: Support VLong counters
Key: HBASE-7946
URL: https://issues.apache.org/jira/browse/HBASE-7946
Project: HBase
Issue Type: Improvement
Hello,
Has anyone seen the following error building HBase trunk? I must admit I
haven't built the official trunk for a while, certainly not after the split
into hbase-common and hbase-server happened.
mvn -DskipTests package
[INFO] Scanning for projects...
[INFO] Reactor build order:
[INFO]
Hello HBase Developers,
I have noticed that TestThriftServerCmdLine is consistently failing in
trunk. That was even true before I committed Yongqiang's compression/data
block encoding refactoring patch this morning. Does anyone know why this
test has been failing and when this started?
Thanks,
Hello,
I was looking at HBase versions at
https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Aversions-panel
and wondering how to add another version to that list. Specifically, it
would be nice to add 0.89-fb there so we could tag JIRAs with it.
to the HBase JIRA. I've added you to the
group so you can handle it.
Jon.
On Fri, Mar 9, 2012 at 12:21 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
Hello,
I was looking at HBase versions at
https://issues.apache.org/jira/browse/HBASE#selectedTab
@Stack, Jonathan: thank you for your replies.
After some more internal discussion, we decided it might not be too hard
for us to implement stubs in our version of HDFS to accommodate the new API
requirements on the HBase side.
Putting some of the HDFS multi-version support plumbing in
+1 on avoiding using wildcard imports. Eclipse's Organize Imports
command gets rid of them pretty well.
Thanks,
--Mikhail
On Wed, 22 Feb 2012 12:46:47 -0800, Ted Yu yuzhih...@gmail.com wrote:
Hi,
If you wonder why occasionally we have broken Jenkins build, I can give
you
one reason.
Thanks,
--Mikhail
On Sun, Feb 12, 2012 at 10:02 PM, Roman Shaposhnik r...@apache.org wrote:
On Sun, Feb 12, 2012 at 12:36 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
One difficulty with simply importing libhadoop.so into HBase codebase is
that the dynamic library is probably a bit different
we should just get a copy of _one_ of these versions on the
hudson build boxes, and have a new hudson job which runs whichever
tests depend on the native code there?
-Todd
On Mon, Feb 13, 2012 at 10:52 AM, Roman Shaposhnik r...@apache.org wrote:
On Mon, Feb 13, 2012 at 1:58 AM, Mikhail
Hello,
Does anyone know how to increase heap allocation for Hadoop QA runs, or at
least check the available amount of memory?
Thanks,
--Mikhail
</argLine>
+<argLine>-d32 -enableassertions -Xmx2300m -Djava.security.egd=file:/dev/./urandom</argLine>
<redirectTestOutputToFile>true</redirectTestOutputToFile>
</configuration>
</plugin>
On Fri, Feb 10, 2012 at 12:48 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com
Hello,
We currently do not enable native Hadoop libraries in unit tests (at least
when running on Hadoop QA), but we do use them in production. Should we
try to close this discrepancy between tests and production? Some possible
approaches would be:
- Enable native libraries by default (e.g.
Hello Everyone,
Some of you have probably been wondering about what these [89-fb] patches
that our team submits for review are, so I would like to clarify that a
little bit. We run a custom version of HBase based on 0.89 at Facebook,
codenamed 0.89-fb, but we do our best to submit all of
6, 2012 at 11:36 AM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
Hello Everyone,
Some of you have probably been wondering about what these [89-fb]
patches
that our team submits for review are, so I would like to clarify that a
little bit. We run a custom version of HBase based
Hi Stack,
Sorry about that. We meant to tag this commit as 89-fb-only, but we are
still figuring out our tagging system. I guess [master] would have been
the right tag in this case, since this changes log-splitting functionality.
Thanks,
--Mikhail
On Fri, Feb 3, 2012 at 10:52 AM, Nicolas
/37d6e996-cba6-4a12-85bc-dbcf2e91d297
Cheers
On Tue, Dec 6, 2011 at 5:07 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
Hello,
I've been running into the following issue when running HBase tests. A
lot of them would fail with an exception similar to the one shown
below
Hello,
I've been running into the following issue when running HBase tests. A lot
of them would fail with an exception similar to the one shown below (I
added more information to the exception messages). Exit code 141 seems to
correspond to SIGPIPE, but I did not find anything obvious in
(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Thanks,
--Mikhail
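Exit code 141 decodes mechanically: POSIX shells report a process killed by signal N as 128 + N, and SIGPIPE is signal 13 on Linux and macOS. A minimal sketch of the arithmetic:

```java
// Minimal sketch: shells encode death-by-signal as 128 + the signal number.
// SIGPIPE is 13 on Linux/macOS, so a SIGPIPE'd process shows exit code 141.
public class ExitCodeDemo {
    public static void main(String[] args) {
        int SIGPIPE = 13;                   // POSIX signal number
        System.out.println(128 + SIGPIPE);  // prints 141
    }
}
```

So any exit status above 128 from a forked test JVM usually means it died from a signal rather than calling exit() itself.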
On Thu, Dec 1, 2011 at 11:20 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
Dhruba:
It's 0.20.205.0, the default one
Hello,
The following reflection hack is from SequenceFileLogReader.java.
try {
Field fIn = FilterInputStream.class.getDeclaredField("in");
fIn.setAccessible(true);
Object realIn = fIn.get(this.in);
Method getFileLength = realIn.getClass().
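For readers unfamiliar with the pattern, here is a self-contained sketch of the same trick: reflectively reading a private field, then invoking a method on the object it holds. The Wrapper/Inner classes are hypothetical stand-ins; the real code targets FilterInputStream's private `in` field.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Illustrative sketch only (not HBase code): reach a private field via
// reflection and call a method on the wrapped object, as the snippet
// above does against FilterInputStream.in.
public class ReflectionHackDemo {
    static class Inner {
        long getFileLength() { return 42L; }
    }
    static class Wrapper {
        private Inner in = new Inner();
    }

    public static void main(String[] args) throws Exception {
        Wrapper wrapper = new Wrapper();

        // Grab the private "in" field and make it readable.
        Field fIn = Wrapper.class.getDeclaredField("in");
        fIn.setAccessible(true);
        Object realIn = fIn.get(wrapper);

        // Invoke a method on the recovered inner object.
        Method getFileLength = realIn.getClass().getDeclaredMethod("getFileLength");
        getFileLength.setAccessible(true);
        System.out.println(getFileLength.invoke(realIn));  // prints 42
    }
}
```

The obvious caveat, which is why it lives behind a try block in the original: any rename of the private field or method in a new Hadoop release breaks this silently at runtime, not at compile time.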
in the data loading part, not
because of killing the regionserver.
Thanks,
--Mikhail
On Thu, Dec 1, 2011 at 10:33 PM, Stack st...@duboce.net wrote:
On Thu, Dec 1, 2011 at 9:59 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
11/12/01 21:40:07 WARN wal.SequenceFileLogReader: Error while
(SplitLogWorker.java:197)
at
org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
Thanks,
--Mikhail
On Thu, Dec 1, 2011 at 10:58 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
@Stack: I am using hadoop-0.20.205.0 (the default Hadoop version from
pom.xml
, Dec 1, 2011 at 11:12 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
After fixing the getFileLength() method access bug, the error I'm seeing
in
my local multi-process cluster load test is different. Do we ever expect
to
see checksum errors on the local filesystem?
11/12/01
Hello,
I am getting the following when trying to create a table from the
load-tester tool ported from 0.89-fb (https://reviews.facebook.net/D549).
It is weird that configuration instantiation fails given that it succeeded
earlier in the tool's workflow. Does anyone know why we are instantiating a
Hello,
I saw this error when testing my patch on Jenkins:
Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.initPipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
at
Hello,
I just saw this in my five-node, three-regionserver cluster test. The
regionserver crashed with this error. Could this be related to some recent
changes involving ZK? Alternatively, this could be a concurrency issue of
its own.
2011-11-21 01:30:15,188 FATAL
-fb for now, though. I guess HBASE-4746 would be
good enough for parallelizing 89-fb tests at the moment.
Thanks,
--Mikhail
From: Ted Yu yuzhih...@gmail.com
Date: Sat, 5 Nov 2011 19:08:58 -0700
To: Mikhail Bautin mbau...@fb.com
Subject: Re: HBASE-4746
Hello,
While working on a new unit test for multi-column scanning I noticed that
deletes are apparently handled differently when the timestamp is 0. I am adding
a delete using deleteColumns(family, qualifier, timestamp), and the column is
not being deleted. Does a zero timestamp have some