OK, I've switched to RawLocalFileSystem and it seems to have fixed the log
splitting issues. However, I'm still seeing the following when loading random
data (not killing the regionserver yet). Any idea what this could be?
java.util.concurrent.ExecutionException: java.io.IOException:
On Thu, Dec 1, 2011 at 10:58 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
@Stack: I am using hadoop-0.20.205.0 (the default Hadoop version from
pom.xml). There is a private getFileLength() method, but getMethod() does
not allow retrieving it. We should use getDeclaredMethod() --
I think it's good to remove the reflection when we can, more because it's
easier to catch compile-time errors than run-time ones. The perf cost is
negligible when you cache. As I recall, the problem here is the function was
private in older versions. We just need to make sure that we don't support
those older versions anymore.
Hello,
The following reflection hack is from SequenceFileLogReader.java.
try {
  Field fIn = FilterInputStream.class.getDeclaredField("in");
  fIn.setAccessible(true);
  Object realIn = fIn.get(this.in);
  Method getFileLength = realIn.getClass().getMethod("getFileLength");
This reflection should occur only once, not on every write to the HLog, so
the performance impact should be minimal, shouldn't it?
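(Editor's note: the one-time-cost pattern being described can be sketched as follows. The classes below are hypothetical stand-ins for illustration, not the actual HBase or HDFS classes: the reflective lookup happens once in a static initializer and the resolved Method is cached for every subsequent call.)

```java
import java.lang.reflect.Method;

// Stand-in for the wrapped HDFS stream; only the private modifier matters.
class WrappedStream {
  private long getFileLength() { return 42L; }  // private, as in 0.20.205
}

class CachedLengthReader {
  private static final Method GET_FILE_LENGTH;

  static {
    try {
      // Reflective lookup runs exactly once, when this class is loaded.
      Method m = WrappedStream.class.getDeclaredMethod("getFileLength");
      m.setAccessible(true);  // required because the target method is private
      GET_FILE_LENGTH = m;
    } catch (NoSuchMethodException e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  // Per-call cost is just Method.invoke(); the lookup is never repeated.
  static long length(WrappedStream s) throws Exception {
    return (Long) GET_FILE_LENGTH.invoke(s);
  }
}
```

If the lookup itself fails, the cost is still paid only once, at class load time, rather than on every HLog write.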
why are you seeing the exception now? are u using a new unit test or a new
hdfs jar?
On Thu, Dec 1, 2011 at 9:59 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
11/12/01 21:40:07 WARN wal.SequenceFileLogReader: Error while trying to get
accurate file length. Truncation / data loss may occur if RegionServers
die.
java.lang.NoSuchMethodException:
@Stack: I am using hadoop-0.20.205.0 (the default Hadoop version from
pom.xml). There is a private getFileLength() method, but getMethod() does
not allow retrieving it. We should use getDeclaredMethod() -- this appears
to work in my testing. I will include that fix in the HBaseClusterTest
diff.
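(Editor's note: the getMethod()/getDeclaredMethod() difference is easy to reproduce with plain JDK reflection; Probe below is a hypothetical stand-in for the HDFS stream class.)

```java
import java.lang.reflect.Method;

// Stand-in class; only the access modifier of the method matters here.
class Probe {
  private long getFileLength() { return 0L; }
}

class GetMethodDemo {
  // getMethod() only sees public members (including inherited ones),
  // so it cannot find a private method.
  static boolean getMethodFinds() {
    try {
      Probe.class.getMethod("getFileLength");
      return true;
    } catch (NoSuchMethodException e) {
      return false;
    }
  }

  // getDeclaredMethod() sees all members declared on the class itself,
  // regardless of access modifier.
  static boolean getDeclaredMethodFinds() {
    try {
      Method m = Probe.class.getDeclaredMethod("getFileLength");
      m.setAccessible(true);  // still needed before invoking it
      return true;
    } catch (NoSuchMethodException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println("getMethod finds private: " + getMethodFinds());
    System.out.println("getDeclaredMethod finds private: " + getDeclaredMethodFinds());
  }
}
```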
After fixing the getFileLength() method access bug, the error I'm seeing in
my local multi-process cluster load test is different. Do we ever expect to
see checksum errors on the local filesystem?
11/12/01 22:52:52 INFO wal.HLogSplitter: Splitting hlog:
what hadoop version are you using?
On Thu, Dec 1, 2011 at 11:12 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
After fixing the getFileLength() method access bug, the error I'm seeing in
my local multi-process cluster load test is different. Do we ever expect to
see checksum
ChecksumFileSystem doesn't support hflush/sync()/etc -- so I can
imagine if you kill -9 it while writing you'd get a truncated commit
log, or even one where the last checksum chunk is incorrect.
Maybe best to run this test against a pseudo-distributed HDFS? Or
RawLocalFileSystem?
-Todd
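(Editor's note: one way to force the raw, checksum-free local filesystem for such a test is an hbase-site.xml override like the one below. The fs.file.impl property name follows Hadoop's fs.&lt;scheme&gt;.impl convention and is an assumption here; verify it against your Hadoop version.)

```xml
<!-- hbase-site.xml: map file:// to the raw local FS (no ChecksumFileSystem) -->
<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.RawLocalFileSystem</value>
</property>
```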
Dhruba:
It's 0.20.205.0, the default one for the open-source HBase trunk. I'll try
to follow Todd's advice and run the test against a different filesystem.
Thanks,
--Mikhail
On Thu, Dec 1, 2011 at 11:16 PM, Dhruba Borthakur dhr...@gmail.com wrote:
what hadoop version are you using?