On Thu, Aug 4, 2011 at 9:01 PM, Todd Lipcon t...@cloudera.com wrote:
On Thu, Aug 4, 2011 at 8:36 PM, lohit lohit.vijayar...@gmail.com wrote:
2011/8/4 Ryan Rawson ryano...@gmail.com
The normal behavior would be for the HMaster to make the hlog read-only before processing it. Very simple fencing, and it works on all POSIX or close-to-POSIX systems. Does that not work on HDFS?
On Fri, Aug 5, 2011 at 7:07 AM, M. C. Srivas mcsri...@gmail.com wrote:
On Fri, Aug 5, 2011 at 8:52 AM, M. C. Srivas mcsri...@gmail.com wrote:
The IO fencing was an accidental byproduct of how HDFS-200 was
implemented, so in fact, HBase won't run correctly on HDFS-265 which
does NOT have that IO fencing, right?
On Fri, Aug 5, 2011 at 9:42 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
HDFS-1520 was forward ported to trunk by Stack:
https://issues.apache.org/jira/browse/HDFS-1948
J-D
On Fri, Aug 5, 2011 at 9:45 AM, Ryan Rawson ryano...@gmail.com wrote:
On Fri, Aug 5, 2011 at 10:21 AM, M. C. Srivas mcsri...@gmail.com wrote:
On Fri, Aug 5, 2011 at 11:28 AM, Todd Lipcon t...@cloudera.com wrote:
Thanks for the feedback. So you're inclined to think it would be at the dfs layer?

Is it accurate to say the most likely places where the data could have been lost were:
1. wal writes didn't actually get written to disk (no log entries to suggest any issues)
2. wal corrupted (no log entries …
Another possibility is the logs were not replayed correctly during the region startup. We put in a lot of tests to cover this case, so it should not be so.

Essentially the WAL replay looks at the current HFiles state, then decides which log entries to replay or skip. This is because a log might …
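The replay-or-skip decision described above can be sketched roughly like this. This is a toy model, not actual HBase code: real HBase tracks flushed sequence ids per store rather than one number per region, and all names here are invented for illustration.

```python
# Toy model of WAL replay: edits whose sequence id is at or below the
# highest sequence id already flushed into the HFiles were persisted by
# a memstore flush, so they can be skipped; newer edits must be replayed.

def edits_to_replay(wal_entries, max_flushed_seqid):
    """Return only the WAL edits newer than what the HFiles already contain."""
    return [e for e in wal_entries if e["seqid"] > max_flushed_seqid]

wal = [
    {"seqid": 5, "row": "a"},   # already flushed into an HFile
    {"seqid": 9, "row": "b"},   # only in the WAL, must be replayed
]

print(edits_to_replay(wal, max_flushed_seqid=7))
# -> [{'seqid': 9, 'row': 'b'}]
```

This is why a stale or wrong "flushed up to" marker would make replay silently drop edits that were in fact never flushed, one of the failure modes worth ruling out.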
Do you have any suggestions of things I should look at to confirm/deny these
possibilities?
The tables are very small and inactive (probably only 50-100 rows changing
per day).
Thanks,
Jacques
On Thu, Aug 4, 2011 at 9:09 AM, Ryan Rawson ryano...@gmail.com wrote:
The regionserver logs that talk about the hlog replay might shed some light; they should tell you what entries were skipped, etc. Having a look at the hfile structure of the regions to see if there are holes can also help; the HFile.main tool can come in handy here. You can run it as: hbase …
Thanks for the feedback. So you're inclined to think it would be at the dfs layer?

That's where the evidence seems to point.

Is it accurate to say the most likely places where the data could have been lost were:
1. wal writes didn't actually get written to disk (no log entries to suggest …

I will take a look and see what I can figure out.
Thanks for your help.
Jacques
On Thu, Aug 4, 2011 at 9:52 AM, Ryan Rawson ryano...@gmail.com wrote:
On Thu, Aug 4, 2011 at 10:34 AM, Jean-Daniel Cryans jdcry...@apache.orgwrote:
Thanks for the feedback. So you're inclined to think it would be at the dfs layer?

That's where the evidence seems to point.
Yes, that is what JD is referring to, the so-called IO fence.

It works like so:
- regionserver is appending to an HLog, and continues to do so; it hasn't gotten the ZK "kill yourself" signal yet
- hmaster splits the logs
- the hmaster yanks the writer from under the regionserver, and the RS then starts to …
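The steps above can be sketched as a small simulation. This is only a model of the fencing idea, not HDFS or HBase code (real HDFS does this through lease recovery on the namenode; every class and name below is invented): the master revokes the old writer's hold on the log before splitting it, so a straggling append from the partitioned regionserver is rejected instead of landing after the split.

```python
# Toy model of the IO fence: whoever holds the "lease" may append; the
# master steals the lease before reading/splitting the log, which fences
# out a regionserver that hasn't yet noticed its ZK session is gone.

class LeaseRevoked(Exception):
    pass

class HLogFile:
    def __init__(self):
        self.entries = []
        self.lease_holder = None

    def open_for_append(self, writer):
        self.lease_holder = writer
        return self

    def append(self, writer, entry):
        if self.lease_holder != writer:
            raise LeaseRevoked(writer + " no longer holds the lease")
        self.entries.append(entry)

    def recover_lease(self, new_holder):
        # Master-side fencing: take the lease before splitting the log.
        self.lease_holder = new_holder

log = HLogFile().open_for_append("regionserver-1")
log.append("regionserver-1", "put row1")

# regionserver-1 is partitioned from ZK but still thinks it is alive...
log.recover_lease("hmaster")   # master fences, then splits the log

try:
    log.append("regionserver-1", "put row2")   # straggling late write
except LeaseRevoked:
    print("fenced")            # prints "fenced": the late write is rejected

print(log.entries)             # -> ['put row1']
```

Without this step, the late "put row2" would land in a log the master had already split, and the edit would be silently lost.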
on 90.4 rc2 after partial zookeeper network partition (on MapR)
Hi Jacques,

Sorry to hear about that.

Regarding MapR, I personally don't have hands-on experience so it's a little bit hard for me to help you. You might want to ping them and ask their opinion (and I know they are watching, Ted? Srivas?)

What I can do is tell you if things look normal from …
Given the hardy reviews and timing, we recently shifted from 90.3 (apache) to 90.4rc2 (the July 24th one that Stack posted -- 0.90.4, r1150278).

We had a network switch go down last night which caused an apparent network partition between two of our region servers and one or more zk nodes.