On Tue, Dec 6, 2016 at 10:19 AM, Ted Yu <[email protected]> wrote:

> Looking at http://hbase.apache.org/book.html#executing.the.0.96.upgrade ,
> there is a step of running "bin/hbase upgrade -check".
>
> How about adding a sample hfile which contains
> CellComparator$MetaCellComparator, so that the upgrade check can read and
> verify it?
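For illustration, a check along the lines suggested above might bundle a
small sample hfile whose trailer names the 2.0 meta comparator, and have
"bin/hbase upgrade -check" try to open it. A minimal sketch, assuming the
1.x-era HFile.createReader(fs, path, cacheConf, conf) signature; the helper
class and method name here are hypothetical, not anything in HBase:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.io.hfile.CacheConfig;
    import org.apache.hadoop.hbase.io.hfile.CorruptHFileException;
    import org.apache.hadoop.hbase.io.hfile.HFile;

    public class UpgradeComparatorCheck {

      // Returns true if this JVM's HBase classpath can parse the trailer
      // of the given sample hfile, i.e. can resolve the comparator class
      // the trailer names.
      public static boolean canReadMetaHFile(Path sampleHFile) {
        Configuration conf = HBaseConfiguration.create();
        try {
          FileSystem fs = sampleHFile.getFileSystem(conf);
          // Opening a reader parses the FixedFileTrailer, which is where
          // the comparator class is resolved reflectively.
          HFile.createReader(fs, sampleHFile, new CacheConfig(conf), conf).close();
          return true;
        } catch (CorruptHFileException e) {
          // On a classpath lacking CellComparator$MetaCellComparator this
          // wraps the ClassNotFoundException shown in the trace below.
          return false;
        } catch (Exception e) {
          return false;
        }
      }
    }

Run against a sample file shipped in the distribution, a failure here would
flag the incompatibility before any real hbase:meta hfile is at risk.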
We don't support downgrade. Never have.

St.Ack

> On Tue, Dec 6, 2016 at 8:50 AM, Ted Yu <[email protected]> wrote:
>
>> The build I used yesterday didn't include HBASE-16189
>> <https://issues.apache.org/jira/browse/HBASE-16189>.
>>
>> Once it is included, the cluster can be downgraded fine.
>>
>> I wonder how users would know that their existing deployment has
>> HBASE-16189 before upgrading to the 2.0 release.
>>
>> On Tue, Dec 6, 2016 at 2:29 AM, ramkrishna vasudevan
>> <[email protected]> wrote:
>>
>>> @Ted
>>> Does your version have this fix?
>>> https://issues.apache.org/jira/browse/HBASE-16189
>>>
>>> Regards
>>> Ram
>>>
>>> On Tue, Dec 6, 2016 at 3:56 PM, Ted Yu <[email protected]> wrote:
>>>
>>>> Is the assumption that hbase:meta would not split?
>>>>
>>>> In another thread, Francis Liu was proposing supporting splittable
>>>> hbase:meta in the 2.0 release.
>>>>
>>>> Cheers
>>>>
>>>> On Dec 6, 2016, at 2:20 AM, 张铎 <[email protected]> wrote:
>>>>
>>>>> Could this be solved by hosting meta only on master?
>>>>>
>>>>> BTW, MetaCellComparator was introduced in HBASE-10800.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> 2016-12-06 17:44 GMT+08:00 Ted Yu <[email protected]>:
>>>>>
>>>>>> Hi,
>>>>>> When I restarted a cluster with 1.1, I found that the hbase:meta
>>>>>> region (written to by the previously deployed 2.0) couldn't be
>>>>>> opened:
>>>>>>
>>>>>> Caused by: java.io.IOException: org.apache.hadoop.hbase.io.hfile.CorruptHFileException:
>>>>>> Problem reading HFile Trailer from file
>>>>>> hdfs://yz1.xx.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/599fc8a37311414e876803312009a986
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:579)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:534)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:275)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5150)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:912)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:909)
>>>>>>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>>>>>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>>   ... 3 more
>>>>>> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException:
>>>>>> Problem reading HFile Trailer from file
>>>>>> hdfs://yz1.xx.com:8020/apps/hbase/data/data/hbase/meta/1588230740/info/599fc8a37311414e876803312009a986
>>>>>>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:483)
>>>>>>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:511)
>>>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1128)
>>>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:267)
>>>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:409)
>>>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:517)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:687)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:130)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:554)
>>>>>>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:551)
>>>>>>   ... 6 more
>>>>>> Caused by: java.io.IOException: java.lang.ClassNotFoundException:
>>>>>> org.apache.hadoop.hbase.CellComparator$MetaCellComparator
>>>>>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:581)
>>>>>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserializeFromPB(FixedFileTrailer.java:300)
>>>>>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.deserialize(FixedFileTrailer.java:242)
>>>>>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:407)
>>>>>>   at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:468)
>>>>>>   ... 15 more
>>>>>> Caused by: java.lang.ClassNotFoundException:
>>>>>> org.apache.hadoop.hbase.CellComparator$MetaCellComparator
>>>>>>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>>>>>>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>>>>>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>>>>>>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>>>>>   at java.lang.Class.forName0(Native Method)
>>>>>>   at java.lang.Class.forName(Class.java:264)
>>>>>>   at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.getComparatorClass(FixedFileTrailer.java:579)
>>>>>>
>>>>>> When a user does a rolling upgrade from 1.1 to 2.0, the above may
>>>>>> cause a problem if the hbase:meta region is updated by a server
>>>>>> running 2.0 but is later assigned to a region server which still
>>>>>> runs 1.1 (e.g., due to a crash of the server running 2.0).
>>>>>>
>>>>>> I want to get community feedback on the severity of this issue.
>>>>>>
>>>>>> Thanks
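To make the bottom of that trace concrete: the hfile trailer records the
comparator's fully-qualified class name, and FixedFileTrailer.getComparatorClass
resolves it via Class.forName, so a classpath without
CellComparator$MetaCellComparator fails before any of the file's data is
read. A self-contained sketch of just that failure mode (the class name is
taken from the trace above; the demo class itself is hypothetical):

    public class ComparatorLoadDemo {
      public static void main(String[] args) {
        // The comparator class name a 2.0 writer records in the trailer
        // of hbase:meta hfiles, per the trace above.
        String name = "org.apache.hadoop.hbase.CellComparator$MetaCellComparator";
        try {
          Class.forName(name);
          System.out.println("Comparator resolvable; the hfile would open.");
        } catch (ClassNotFoundException e) {
          // What a 1.1 regionserver hits, surfaced as CorruptHFileException.
          System.out.println("The hfile open would fail: " + e);
        }
      }
    }

Resolution depends only on the reading JVM's classpath, which squares with
Ted's observation that a deployment including HBASE-16189 handles the same
file fine.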
