Thanks, J-D. I think I made a silly mistake and compared the old HTable.java against an equally old copy of the same file.
On Fri, May 28, 2010 at 12:57 AM, Jean-Daniel Cryans <[email protected]> wrote:
> I see HBASE-2481's code in the official 0.20.4 release, so I'm not
> sure what you are referring to. One thing we did forget was setting
> the "Fix Version" field in JIRA, and I just changed that.
>
> 0.20.4 is the latest release at the moment, although 0.20.5 is due
> soon with one important fix for a regression that was introduced in 0.20.4.
>
> J-D
>
> On Thu, May 27, 2010 at 4:21 AM, steven zhuang <[email protected]> wrote:
>> Thanks, J-D.
>> I have solved this problem brutally: since we are still only using
>> HBase to develop some prototypes, I dropped the table and created
>> another one.
>> Thanks for your help anyway. :)
>>
>> There is one more question. I noticed that the release notes of
>> HBase 0.20.4 list no bug fix for HBASE-2481, which is said to be
>> fixed in 0.20.4. I checked the code, and it is still unchanged. Will
>> this bug be fixed in the next release?
>>
>> I just downloaded the release from hadoop.apache.org; the newest one
>> there is HBase 0.20.4. Is this the newest official release?
>>
>> On Wed, May 26, 2010 at 1:38 AM, Jean-Daniel Cryans <[email protected]> wrote:
>>> Probably missing updates in .META.: if the region server that was
>>> serving it failed, or you had to kill -9 it, then it lost the last
>>> edits (unless you patched your HDFS to support fsSync, which I guess
>>> you have not).
>>>
>>> Currently, fixing .META. requires a manual intervention, which is
>>> running bin/add_table.rb. Disable your table before running it. Look
>>> in this mailing list's archives for stories of other users who have
>>> been through it.
>>>
>>> J-D
>>>
>>> On Tue, May 25, 2010 at 2:03 AM, steven zhuang <[email protected]> wrote:
>>>> I have checked HDFS, and it seems that the data from "da_2010/01/09"
>>>> to "r2_2010/01/10" is there, so it is strange that HBase cannot bring
>>>> the corresponding region online.
>>>> I have enabled the table in the shell several times, but it still
>>>> doesn't work.
>>>>
>>>> On Tue, May 25, 2010 at 4:55 PM, steven zhuang <[email protected]> wrote:
>>>>> Hi, all,
>>>>> I have a table with some data already imported, but I failed to
>>>>> import more data into it (still investigating why). For some reason
>>>>> I restarted the cluster, and in the web interface I found that some
>>>>> regions are missing. Below is what it shows, while I am sure there
>>>>> are rows like "ee_2010/01/09" or "ff_2010/01/07" in the table.
>>>>> When I use the ruby shell and scan the table starting from a
>>>>> missing row key, the command just hangs.
>>>>> I am using HBase 0.20.3.
>>>>> If we shut down the cluster while it is compacting, can data be
>>>>> lost? I might have shut down the cluster while a region server was
>>>>> in the middle of a compaction.
>>>>>
>>>>> The table on the web UI:
>>>>>
>>>>> Name                                       Region Server            Encoded Name  Start Key        End Key
>>>>> hbt2table33,,1274761089803                 dx-9j50d07.off.tn:60030  1484851145                     bi_2010/01/19_3
>>>>> hbt2table33,bi_2010/01/19_3,1274775708656  dx-9j50d07.off:60030     2038135176    bi_2010/01/19_3  bp_2010/01/05
>>>>> hbt2table33,bp_2010/01/05,1274775708656    dx-9j50d08.off.tn:60030  1165242562    bp_2010/01/05    da_2010/01/09
>>>>> hbt2table33,r2_2010/01/10,1274760679583    dd-9c34d07.off.tn:60030  1829006811    r2_2010/01/10
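[Editor's note: for readers landing on this thread, the manual .META. repair that J-D describes usually looks something like the sketch below. This assumes HBase 0.20.x with the default /hbase root directory and the table name from this thread; the exact arguments accepted by add_table.rb vary between releases, so read the comments at the top of the script in your own installation before running it.]

```shell
# Sketch of the manual .META. repair for HBase 0.20.x (assumptions: the
# default /hbase root dir in HDFS and the table name from this thread).

# 1. Confirm the region directories actually exist in HDFS.
hadoop fs -ls /hbase/hbt2table33

# 2. Disable the table first, as J-D advises (from the HBase shell).
echo "disable 'hbt2table33'" | bin/hbase shell

# 3. Rebuild the table's rows in .META. from the regions found in HDFS.
#    add_table.rb is a JRuby script shipped in HBase's bin/ directory;
#    it is run through the JRuby interpreter bundled with HBase.
bin/hbase org.jruby.Main bin/add_table.rb /hbase/hbt2table33

# 4. Re-enable the table and verify the region list in the web UI.
echo "enable 'hbt2table33'" | bin/hbase shell
```

Note that this only repairs the catalog; edits lost because a region server was killed before its log was flushed (no fsSync support in HDFS, as mentioned above) cannot be recovered this way.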
