Yes, the major compaction issue mentioned in HBASE-10118 is not my problem.
My problem is that the row survives the delete, even logically, and the row
is still present in the scanner result.
I think the problem relates to the timestamp of the Delete object. When I set
it to Long.MAX_VALUE (if you set no timestamp in the Delete constructor,
Long.MAX_VALUE is used as the default value), the problem happens, and
according to the raw scan output, the timestamp of the delete marker is the
time the delete occurred instead of the Long.MAX_VALUE value.
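
For completeness, this is roughly how I produce the raw scan shown below; a
minimal sketch, assuming the same "table" handle as in the code from my
original mail (it uses org.apache.hadoop.hbase.client.Scan/ResultScanner/Result,
org.apache.hadoop.hbase.Cell and org.apache.hadoop.hbase.util.Bytes):

// Raw scan: return delete markers and all cell versions instead of applying them
Scan scan = new Scan(Bytes.toBytes("key1"));
scan.setRaw(true);        // include delete markers in the results
scan.setMaxVersions();    // return every version, not only the newest
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {
    for (Cell cell : r.rawCells()) {
        System.out.println(cell);   // prints row/column/timestamp/type
    }
}
scanner.close();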

Raw scan result:
ROW     COLUMN+CELL
key1    column=C:, timestamp=*1466931500501*, type=*DeleteFamily*
key1    column=C:q1, timestamp=2000000000000, value=test-val
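
The only workaround I have found so far is to stamp the delete explicitly, as I
mention at the end of my original mail below. A minimal sketch of the two
variants, using the same "table" and row key as before; only the first one is
the Long.MAX_VALUE-1 delete I actually verified, the second is just an
assumption based on the Delete API:

// Variant 1 (the one that works for me): stamp the delete below Long.MAX_VALUE
Delete d1 = new Delete(Bytes.toBytes("key1"), Long.MAX_VALUE - 1);
table.delete(d1);

// Variant 2 (untested assumption): place a column delete marker exactly at the
// cell's own future timestamp so it masks that specific version
Delete d2 = new Delete(Bytes.toBytes("key1"));
d2.deleteColumn(Bytes.toBytes("C"), Bytes.toBytes("q1"), 2000000000000L);
table.delete(d2);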

On Mon, Jun 27, 2016 at 5:57 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> HBASE-10118 was integrated into 0.98.2
>
> The user was running 0.98.9
>
> Hmm
>
> On Sun, Jun 26, 2016 at 12:05 PM, Dima Spivak <dspi...@cloudera.com>
> wrote:
>
> > Hey M.,
> >
> > Just to follow up on what JMS said, this was fixed in April 2014 (details
> > at https://issues.apache.org/jira/browse/HBASE-10118), so running a
> > version
> > of HBase in which the patch went in is probably your best option.
> >
> > -Dima
> >
> > On Sunday, June 26, 2016, Jean-Marc Spaggiari <jean-m...@spaggiari.org>
> > wrote:
> >
> > > Hi,
> > >
> > > This is a known issue and I think it is solved in more recent versions.
> > > Do you have the option to upgrade?
> > >
> > > JMS
> > > On 2016-06-26 07:00, "M. BagherEsmaeily" <mbesmae...@gmail.com> wrote:
> > >
> > > > This problem is not solved by a major compaction! And even assuming it
> > > > were solved by a major compaction, it would still be a bug.
> > > >
> > > > > On Sun, Jun 26, 2016 at 3:08 PM, Lise Regnier <lise.regn...@gmail.com> wrote:
> > > >
> > > > > you need to run a major compact after deletion
> > > > > lise
> > > > >
> > > > > > On 26 Jun 2016, at 11:20, M. BagherEsmaeily <mbesmae...@gmail.com> wrote:
> > > > > >
> > > > > > Hello
> > > > > > I use HBase version 0.98.9-hadoop1 with Hadoop version 1.2.1. When I
> > > > > > delete a row that has columns with a future timestamp, the delete does
> > > > > > not take effect and the row survives.
> > > > > >
> > > > > > For example, when I put a row with a future timestamp:
> > > > > > Put p = new Put(Bytes.toBytes("key1"));
> > > > > > p.add(Bytes.toBytes("C"), Bytes.toBytes("q1"), 2000000000000L,
> > > > > > Bytes.toBytes("test-val"));
> > > > > > table.put(p);
> > > > > >
> > > > > > After the put, when I scan my table, the result is:
> > > > > > ROW     COLUMN+CELL
> > > > > > key1    column=C:q1, timestamp=2000000000000, value=test-val
> > > > > >
> > > > > > When I delete this row with the following code:
> > > > > > Delete d = new Delete(Bytes.toBytes("key1"));
> > > > > > table.delete(d);
> > > > > >
> > > > > > OR with this code:
> > > > > > Delete d = new Delete(Bytes.toBytes("key1"), Long.MAX_VALUE);
> > > > > > table.delete(d);
> > > > > >
> > > > > > After either of these two deletes, the scan result is:
> > > > > > ROW     COLUMN+CELL
> > > > > > key1    column=C:q1, timestamp=2000000000000, value=test-val
> > > > > >
> > > > > > And raw scan result is:
> > > > > > ROW     COLUMN+CELL
> > > > > > key1    column=C:, timestamp=1466931500501, type=DeleteFamily
> > > > > > key1    column=C:q1, timestamp=2000000000000, value=test-val
> > > > > >
> > > > > >
> > > > > > But when I change the timestamp of the delete to Long.MAX_VALUE-1,
> > > > > > this delete works. Can anyone help me with this?
> > > > >
> > > > >
> > > >
> > >
> >
>
