Do you not have a pseudo cluster for testing anywhere?

On Tue, Aug 14, 2012 at 4:46 PM, anil gupta <anilgupt...@gmail.com> wrote:

> Hi Jerry,
>
> I am willing to do that, but the problem is that I wiped off the HBase 0.90
> cluster. Is there a way to store a table in HFile v1 in HBase 0.92? If I can
> store a file in HFile v1 in 0.92, then I can do the comparison.
>
> Thanks,
> Anil Gupta
>
> On Tue, Aug 14, 2012 at 1:28 PM, Jerry Lam <chiling...@gmail.com> wrote:
>
> > Hi Anil:
> >
> > Maybe you can try to compare the two HFile implementations directly? Say,
> > write 1000 rows into HFile v1 format and then into HFile v2 format; you
> > can then compare the sizes of the two directly.
> >
> > HTH,
> >
> > Jerry
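
A minimal shell sketch of Jerry's comparison, assuming a flushed test table on each cluster (the table name `testtable`, the default `/hbase` root dir, and the region/store-file path components are placeholders):

```shell
# Flush in-memory data so all rows are persisted as HFiles on HDFS.
echo "flush 'testtable'" | hbase shell

# Total on-disk size of the table's files (pre-replication bytes):
hadoop fs -dus /hbase/testtable

# Optionally inspect one store file's metadata (format version, index sizes):
hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f /hbase/testtable/<region>/<cf>/<hfile>
```

Running the same load against a v1-writing and a v2-writing cluster and comparing the two totals would give the size difference directly.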
> >
> > On Tue, Aug 14, 2012 at 3:36 PM, anil gupta <anilgupt...@gmail.com>
> wrote:
> >
> > > Hi Zahoor,
> > >
> > > Then it seems like I might have missed something when doing HDFS usage
> > > estimation of HBase. I usually run hadoop fs -dus /hbase/$TABLE_NAME to
> > > get the HDFS usage of a table. Is this the right way? Since I wiped off
> > > the HBase 0.90 cluster, I can no longer look at its HDFS usage. Is it
> > > possible to store a table in HFile v1 instead of HFile v2 in HBase 0.92?
> > > That way I could do a fair comparison.
> > >
> > > Thanks,
> > > Anil Gupta
> > >
> > > On Tue, Aug 14, 2012 at 12:13 PM, jmozah <jmo...@gmail.com> wrote:
> > >
> > > > Hi Anil,
> > > >
> > > > I really doubt that there is a 50% drop in file sizes... As far as I
> > > > know, there is no drastic space-conserving feature in v2. Just as an
> > > > afterthought: do a major compaction and check the sizes.
> > > >
> > > > ./Zahoor
> > > > http://blog.zahoor.in
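
Zahoor's suggestion as HBase shell commands (the table name `testtable` is a placeholder; a major compaction rewrites the store files, so the size should be re-checked only after it finishes):

```shell
# Trigger a major compaction of the table:
echo "major_compact 'testtable'" | hbase shell

# After compaction completes, re-check the table's HDFS footprint:
hadoop fs -dus /hbase/testtable
```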
> > > >
> > > >
> > > > On 15-Aug-2012, at 12:31 AM, anil gupta <anilgupt...@gmail.com>
> wrote:
> > > >
> > > > > l
> > > >
> > > >
> > >
> > >
> > > --
> > > Thanks & Regards,
> > > Anil Gupta
> > >
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>



-- 
Kevin O'Dell
Customer Operations Engineer, Cloudera
