Re: [Maria-developers] Sachin weekly report

2016-08-28 Thread Sachin Setia
Hi Sergei!

Actually, I changed the code as I suggested in my previous email.
Duplicate scanning, update, etc. all work fine using this new approach.
Currently I am working on the optimizer part with this new approach;
the normal WHERE case already works, and now I am working on cases
involving joins and on delete and update optimization.
I also applied most of the changes you suggested, but some are left.

Regards
sachin

___
Mailing list: https://launchpad.net/~maria-developers
Post to : maria-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help   : https://help.launchpad.net/ListHelp


Re: [Maria-developers] Sachin weekly report

2016-08-26 Thread Sergei Golubchik
Hi, Vicențiu!

On Aug 26, Vicențiu Ciorbaru wrote:
> Hi Sachin, Sergei!
> 
> One quick thing I wanted to point out. I did not specifically look at
> how things get called, but, when defining constants, I don't agree
> with:
> 
> > > +#define HA_HASH_STR_LEN strlen(HA_HASH_STR)
> >
> Or:
> 
> > > +#define HA_HASH_STR_INDEX_LEN   strlen(HA_HASH_STR_INDEX)
> 
> This hides an underlying strlen. Better make it a real constant value.
> Perhaps the compiler is smart enough to optimize it away, but why risk it?

Right, we usually use sizeof() for this, with the pattern

  const LEX_CSTRING ha_hash_str= { STRING_WITH_LEN("HASH") };

although I'm pretty sure that any modern compiler will replace strlen("string")
with a constant.

> Another one is why not define them as const char * and const int? This also
> helps during debugging, as you can do:
> 
> (gdb) print HA_HASH_STR_INDEX_LEN

if you compile with -ggdb3, gdb will show macro values too :)

> I know that a lot of the code makes use of defines with #define, but
> why not enforce a bit of type safety while we're at it?

I don't disagree.
As far as this patch is concerned, I hope that in the final version there
will be no define for "HASH" or "HASH_INDEX" at all.
As a general rule, I agree that typed constants and sizeof() are
preferable.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-08-26 Thread Vicențiu Ciorbaru
Hi Sachin,

Sergei's suggestion of the STRING_WITH_LEN macro or sizeof("string") should
fix the problem you're raising.

Regards,
Vicentiu

On Fri, 26 Aug 2016 at 18:11 Sachin Setia  wrote:

> Hi Vicențiu
>
> Thanks, Vicențiu, for your comment. I agree with you, but defining
> #define HA_HASH_STR_LEN 4
> or
> const int HA_HASH_STR_LEN = 4;
>
> decouples the constant from the actual length of "hash". Still, we will
> rarely change "hash", so I think it is a good idea. What do you think,
> Sergei?
> Regards
> sachin
>
> On Fri, Aug 26, 2016 at 7:26 PM, Vicențiu Ciorbaru 
> wrote:
> > Hi Sachin, Sergei!
> >
> > One quick thing I wanted to point out. I did not specifically look at how
> > things get called, but,
> > when defining constants, I don't agree with:
> >
> >> > +#define HA_HASH_STR_LEN strlen(HA_HASH_STR)
> >
> > Or:
> >>
> >> > +#define HA_HASH_STR_INDEX_LEN   strlen(HA_HASH_STR_INDEX)
> >
> >
> > This hides an underlying strlen. Better make it a real constant value.
> > Perhaps the compiler is smart enough to optimize it away, but why risk
> it?
> >
> > Another one is why not define them as const char * and const int? This
> also
> > helps during debugging, as you can do:
> >
> > (gdb) print HA_HASH_STR_INDEX_LEN
> >
> > I know that a lot of the code makes use of defines with #define, but why
> not
> > enforce a bit of type safety while we're at it?
> >
> > Just my 2 cents, feel free to disagree. :)
> > Vicentiu
> >
> >
>


Re: [Maria-developers] Sachin weekly report

2016-08-26 Thread Sachin Setia
Hi Vicențiu

Thanks, Vicențiu, for your comment. I agree with you, but defining
#define HA_HASH_STR_LEN 4
or
const int HA_HASH_STR_LEN = 4;

decouples the constant from the actual length of "hash". Still, we will
rarely change "hash", so I think it is a good idea. What do you think,
Sergei?
Regards
sachin

On Fri, Aug 26, 2016 at 7:26 PM, Vicențiu Ciorbaru  wrote:
> Hi Sachin, Sergei!
>
> One quick thing I wanted to point out. I did not specifically look at how
> things get called, but,
> when defining constants, I don't agree with:
>
>> > +#define HA_HASH_STR_LEN strlen(HA_HASH_STR)
>
> Or:
>>
>> > +#define HA_HASH_STR_INDEX_LEN   strlen(HA_HASH_STR_INDEX)
>
>
> This hides an underlying strlen. Better make it a real constant value.
> Perhaps the compiler is smart enough to optimize it away, but why risk it?
>
> Another one is why not define them as const char * and const int? This also
> helps during debugging, as you can do:
>
> (gdb) print HA_HASH_STR_INDEX_LEN
>
> I know that a lot of the code makes use of defines with #define, but why not
> enforce a bit of type safety while we're at it?
>
> Just my 2 cents, feel free to disagree. :)
> Vicentiu
>
>



Re: [Maria-developers] Sachin weekly report

2016-08-26 Thread Vicențiu Ciorbaru
Hi Sachin, Sergei!

One quick thing I wanted to point out. I did not specifically look at how
things get called, but, when defining constants, I don't agree with:

> +#define HA_HASH_STR_LEN strlen(HA_HASH_STR)
>
Or:

> > +#define HA_HASH_STR_INDEX_LEN   strlen(HA_HASH_STR_INDEX)
>

This hides an underlying strlen. Better make it a real constant value.
Perhaps the compiler is smart enough to optimize it away, but why risk it?

Another point: why not define them as const char * and const int? This also
helps during debugging, as you can do:

(gdb) print HA_HASH_STR_INDEX_LEN

I know that a lot of the code makes use of defines with #define, but why
not enforce a bit of type safety while we're at it?

Just my 2 cents, feel free to disagree. :)
Vicentiu


Re: [Maria-developers] Sachin weekly report

2016-08-24 Thread Sergei Golubchik
Hi, Sachin!

On Aug 23, Sachin Setia wrote:
> >
> > Looks simpler, agree. The length of the keypart should not matter,
> > because it should never be used. May be it would be good to set it to -1
> > as it might help to catch errors (where it is erroneously used).
> 
> I think it should, because we size the buffer according to key_part->length.
> For example, this code in test_quick_select:
>
>   param.min_key= (uchar*)alloc_root(&alloc, max_key_len);
>
> here max_key_len is the sum of all key_part->store_length values,
> and also in the function get_mm_leaf we use
>
>   field->get_key_image(str+maybe_null, key_part->length,
>                        key_part->image_type);
>
> So I think length will matter.

For normal keys, yes. But for your HA_UNIQUE_HASH keys the buffer size
is not the sum of key_part lengths. So using key_part->length to
calculate the buffer size for HA_UNIQUE_HASH is wrong, I'd say.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-08-23 Thread Sachin Setia
Hi Sergei!

On Tue, Aug 23, 2016 at 4:35 PM, Sergei Golubchik  wrote:
>
> Hi, Sachin!
>
> On Aug 22, Sachin Setia wrote:
> > Hi Sergei!
> >
> > Actually I completed the work on update and delete. Now they will use
> > index for looking up records.
> >
> > But I am thinking I have done a lot of changes in optimizer which may
> > break it, and also there are lots of queries where my code does not
> > work, fixing this might take a long amount of time.  I am thinking of
> > a change in my existing code :-
> > Suppose a table t1
> > create table t1 (a blob, b blob, c blob, unique(a,b,c));
> > In current code, for query like there will a KEY with only one
> > keypart which points to field DB_ROW_HASH_1.
> > It was okay for normal updates, insert and delete, but in the case
> > of where optimization  I have do a lot of stuff, first to match field
> > (like in add_key_part), then see whether all the fields in hash_str
> > are present in where or not, then create keys by calculating hash. I
> > do this by checking  the HA_UNIQUE_HASH flag in KEY, but this also
> > makes (I think) optimizer code bad because of too much dependence.
> > Also  I need to patch get_mm_parts and get_mm_leaf function, which I
> > think should not be patched.
>
> Later today I'll know exactly what you mean, when I'll finish
> reviewing your optimizer changes.
>
> But for now, let's say I agree on a general principle :)
> Optimizer is kinda complex and fragile, so it's good to avoid doing many
> changes in it - the effect might be difficult to predict.
>
> > I am thinking of a another approach to this problem at server level
> > instead of having just one keypart we can have 1+3 keypart. Last three
> > keyparts will be for field a, b, c and first one for
> > DB_ROW_HASH_1.These will be only at server level not at storage level.
> > key_info->key_part will point at keypart containing field a, while
> > key_part having field DB_ROW_HASH_1 will -1 index. By this way I do
> > not have to patch more of optimizer code. But there is one problem,
> > what should be the length of key_part? I am thinking of it equal to
> > field->pack_length(), this would not work because while creating keys
> > optimizer calls get_key_image() (which is real data so can exceed
> > pack_lenght() in case of blob), so to get this work I have to patch
> > optimizer  where it calls  get_key_image() and see if key is
> > HA_UNIQUE_HASH. If yes then instead of get_key_image just use
> >memcpy(key, field->ptr(), field->pack_length());
> > this wont copy the actual data, but we do not need actual data. I will
> > patch handler methods like ha_index_read, ha_index_idx_read,
> > multi_range_read_info_const basically handler methods which are
> > related to index or range search.  In these methods i  need to
> > calculate hash, which I can calculate from key_ptr but key_ptr doe
> > not have actual data(in case of blobs etc).So to get the data for
> > hash, I will make a field clone of (a,b,c etc) but there ptr will
> > point in key_ptr. Then field->val_str() method will work simply and i
> > can calculate hash. And also I can compare returned  result with
> > actual key in handler method itself.
> > What do you think of this approach ?
>
> Looks simpler, agree. The length of the keypart should not matter,
> because it should never be used. Maybe it would be good to set it to -1,
> as it might help to catch errors (where it is erroneously used).

I think it should, because we size the buffer according to key_part->length.
For example, this code in test_quick_select:

  param.min_key= (uchar*)alloc_root(&alloc, max_key_len);

here max_key_len is the sum of all key_part->store_length values,
and also in the function get_mm_leaf we use

  field->get_key_image(str+maybe_null, key_part->length,
                       key_part->image_type);

So I think length will matter.

>
> I didn't understand why you need to clone fields though :(

Ohh, this is actually just for reading data from key_ptr: because key_ptr
does not hold the data directly, I thought I would have to make a clone of
the field and then set its ptr. But I guess this function would work:

  inline void move_field(uchar *ptr_arg, uchar *null_ptr_arg, uchar null_bit_arg)
  {
    ptr= ptr_arg; null_ptr= null_ptr_arg; null_bit= null_bit_arg;
  }

>
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org
Regards
sachin



Re: [Maria-developers] Sachin weekly report

2016-08-23 Thread Sergei Golubchik
Hi, Sachin!

On Aug 22, Sachin Setia wrote:
> Hi Sergei!
> 
> Actually I completed the work on update and delete. Now they will use
> index for looking up records.
> 
> But I am thinking I have done a lot of changes in optimizer which may
> break it, and also there are lots of queries where my code does not
> work, fixing this might take a long amount of time.  I am thinking of
> a change in my existing code :-
> Suppose a table t1
> create table t1 (a blob, b blob, c blob, unique(a,b,c));
> In current code, for query like there will a KEY with only one
> keypart which points to field DB_ROW_HASH_1.
> It was okay for normal updates, insert and delete, but in the case
> of where optimization  I have do a lot of stuff, first to match field
> (like in add_key_part), then see whether all the fields in hash_str
> are present in where or not, then create keys by calculating hash. I
> do this by checking  the HA_UNIQUE_HASH flag in KEY, but this also
> makes (I think) optimizer code bad because of too much dependence.
> Also  I need to patch get_mm_parts and get_mm_leaf function, which I
> think should not be patched.

Later today I'll know exactly what you mean, when I'll finish
reviewing your optimizer changes.

But for now, let's say I agree on a general principle :)
Optimizer is kinda complex and fragile, so it's good to avoid doing many
changes in it - the effect might be difficult to predict.

> I am thinking of a another approach to this problem at server level
> instead of having just one keypart we can have 1+3 keypart. Last three
> keyparts will be for field a, b, c and first one for
> DB_ROW_HASH_1.These will be only at server level not at storage level.
> key_info->key_part will point at keypart containing field a, while
> key_part having field DB_ROW_HASH_1 will -1 index. By this way I do
> not have to patch more of optimizer code. But there is one problem,
> what should be the length of key_part? I am thinking of it equal to
> field->pack_length(), this would not work because while creating keys
> optimizer calls get_key_image() (which is real data so can exceed
> pack_lenght() in case of blob), so to get this work I have to patch
> optimizer  where it calls  get_key_image() and see if key is
> HA_UNIQUE_HASH. If yes then instead of get_key_image just use
>memcpy(key, field->ptr(), field->pack_length());
> this wont copy the actual data, but we do not need actual data. I will
> patch handler methods like ha_index_read, ha_index_idx_read,
> multi_range_read_info_const basically handler methods which are
> related to index or range search.  In these methods i  need to
> calculate hash, which I can calculate from key_ptr but key_ptr doe
> not have actual data(in case of blobs etc).So to get the data for
> hash, I will make a field clone of (a,b,c etc) but there ptr will
> point in key_ptr. Then field->val_str() method will work simply and i
> can calculate hash. And also I can compare returned  result with
> actual key in handler method itself.
> What do you think of this approach ?

Looks simpler, agree. The length of the keypart should not matter,
because it should never be used. Maybe it would be good to set it to -1,
as it might help to catch errors (where it is erroneously used).

I didn't understand why you need to clone fields though :(

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-08-22 Thread Sachin Setia
Hi Sergei!

Actually, I completed the work on update and delete. Now they will use the
index for looking up records.

But I think I have made a lot of changes in the optimizer which may break
it, and there are also lots of queries where my code does not work; fixing
this might take a long time. So I am thinking of a change in my existing
code.

Suppose a table t1:
create table t1 (a blob, b blob, c blob, unique(a,b,c));
In the current code, for a table like this there will be a KEY with only one
keypart, which points to the field DB_ROW_HASH_1. That was okay for normal
updates, inserts and deletes, but for WHERE optimization I have to do a lot
of work: first match the field (as in add_key_part), then see whether all
the fields in hash_str are present in the WHERE clause or not, then create
keys by calculating the hash. I do this by checking the HA_UNIQUE_HASH flag
in KEY, but this also makes (I think) the optimizer code bad because of too
much dependence. I also need to patch the get_mm_parts and get_mm_leaf
functions, which I think should not be patched.

I am thinking of another approach to this problem at the server level:
instead of having just one keypart we can have 1+3 keyparts. The last three
keyparts will be for fields a, b, c, and the first one for DB_ROW_HASH_1.
These will exist only at the server level, not at the storage level.
key_info->key_part will point at the keypart containing field a, while the
key_part holding field DB_ROW_HASH_1 will be at index -1. This way I do not
have to patch more of the optimizer code. But there is one problem: what
should the length of the key_part be? I am thinking of making it equal to
field->pack_length(). That alone would not work, because while creating
keys the optimizer calls get_key_image() (which copies the real data, so it
can exceed pack_length() in the case of a blob). So to make this work I
have to patch the optimizer where it calls get_key_image() and check
whether the key is HA_UNIQUE_HASH. If yes, then instead of get_key_image()
just use

  memcpy(key, field->ptr(), field->pack_length());

This won't copy the actual data, but we do not need the actual data. I will
patch handler methods like ha_index_read, ha_index_idx_read and
multi_range_read_info_const, basically the handler methods related to index
or range search. In these methods I need to calculate the hash, which I can
calculate from key_ptr, but key_ptr does not contain the actual data (in
the case of blobs etc.). So to get the data for the hash, I will make
clones of the fields (a, b, c etc.) whose ptr points into key_ptr. Then the
field->val_str() method will simply work and I can calculate the hash. I
can also compare the returned result with the actual key in the handler
method itself.
What do you think of this approach?

Regards
sachin

On Sat, Aug 20, 2016 at 11:16 PM, Sergei Golubchik  wrote:

> Hi, Sachin!
>
> On Aug 19, Sachin Setia wrote:
> > On Fri, Aug 19, 2016 at 2:42 PM, Sergei Golubchik 
> wrote:
> >
> > > First. I believe you'll need to do your final evaluation soon, and
> > > it will need to have a link to the code. Did you check google
> > > guidelines about it? Is everything clear there? Do you need help
> > > publishing your work in a format that google requires?
> > >
> > > They don't accept delays for any reasons, so even if your code is
> > > not 100% complete and ready, you'd better still publish it and
> > > submit the evaluation, because otherwise google will fail you and
> > > that'd be too sad.
> > >
> > > If you'd like you can publish the google-way only the
> > > unique-constraint part without further optimizer work. Or at least
> > > please mention that you'd completed the original project and went
> > > working on extensions.  I mean, it's better than saying "the code is
> > > not 100% complete" :)
> > >
> > Okay I am thinking of writing a blog post with a link to my github
> > repository.
> > Blog Link 
> > Please check this.
>
> I think that'll do, yes.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org
>


Re: [Maria-developers] Sachin weekly report

2016-08-19 Thread Sergei Golubchik
Hi, Sachin!

First. I believe you'll need to do your final evaluation soon, and it
will need to have a link to the code. Did you check google guidelines
about it? Is everything clear there? Do you need help publishing your
work in a format that google requires?

They don't accept delays for any reasons, so even if your code is not
100% complete and ready, you'd better still publish it and submit the
evaluation, because otherwise google will fail you and that'd be too sad.

If you'd like you can publish the google-way only the unique-constraint
part without further optimizer work. Or at least please mention that
you'd completed the original project and went working on extensions.
I mean, it's better than saying "the code is not 100% complete" :)

On Aug 18, Sachin Setia wrote:
> Hello Sergei!
> I am stuck at one problem
> consider table t1(a blob , b blob , unique(a,b));
> although select * from t1 where a= 12 and b= 23 works
> but consider the case like select * from t1 where ( a= 12  or a=45 ) and
> (b= 23 or b=45 ) does not works
> and also update and delete using long index does not work
> simple query like
> delete/ update table t1 where a=1 and b=3;
> does not works

"does not work" means "return wrong results"? Or "does not use the index
but results are correct" ?

> The reason is that because these query uses test_quick_select function
> which does not recognize hash index basically in get_mm_parts it always
> return false because it compare db_row_hash_1 to a and b i solved this
> problem
> but now the problems
> 1. first how to assure we have both column a, and b is in where currently i
> do this as crawl through tree->key[i]->next_key_part untill
> next_key_part is null this works for simple case like
> a= 1 and b=2 but i am not sure how will i do this in
> conditions like
> ((a =  1 or a =34) and (b=34 or b=33)) or (b=34)
> ^^^
> in this condition before or past it is okay to use hash but for b=34 we
> should not use hash index but do not know how to do

The range optimizer can do it. With normal indexes, for example, it needs to
convert that query into four "ranges" (I use quotes because these are
degenerate ranges of one value only, but they're still ranges
internally):

  (1,33), (1,34), (34,33), (34,34)

For every range the optimizer will call handler::records_in_range() to
do its cost estimations. If you'd run a test, like

  create table t1 (a int, b int, index(a,b), c int);
  insert t1 values (1,2,3),(3,4,5),(5,6,7);
  explain select * from t1 force index (a) where (a=1 or a=3) and (b=2 or b=4);

and put a breakpoint on ha_myisam::records_in_range(), you'll be able to
find where the SEL_ARG tree is converted to ranges.

check_quick_select() sets

  RANGE_SEQ_IF seq_if= {NULL, sel_arg_range_seq_init, sel_arg_range_seq_next,
                        0, 0};

and later in handler::multi_range_read_info_const() these two functions
(seq_if.init() and seq_if.next()) are used to iterate the tree and
create ranges. The main work is done in sel_arg_range_seq_next().

But please try not to copy-paste it.

[ Side note: could you please try to use more punctuation in your
emails?  :) It's a bit difficult to understand what you mean when
sentences get long. ]

> 2. when SEL_TREE is evaluated ?

I believe I've answered that above :)
handler::multi_range_read_info_const() converts the tree into two (min
and max) key images.

> 3. where should i evaluate hash i can do that but main problem is that i
> need to preserve original data  like a=1 and b=3
> etc  because hash does not guaranty same.

Hmm... You can compute the hash as above, where the key image is
created. Original data... SQL layer does not always trust the engine,
in some cases it evaluates the WHERE expression after getting the row
from the engine. I think (not sure) that if you keep the a=1 and b=3
part in the COND then the upper layer will evaluate it normally, like

  if (!cond || cond->val_int() != 0)

so you just need to make sure that
 1. a=1 and b=3 is not removed from COND
 2. when cond->val_int() is 0, upper layer does *not* treat it
as reaching the end of the index range and does *not* stop
the loop of index_next() calls. (for a normal index, if you search
for a=5 and do index_next, as soon as you found a!=5 you can stop
searching).

> 4 there is one more problem  in test_quick_select
> 
> there is one code
> 
> if ((range_trp= get_key_scans_params(&param, tree, FALSE, TRUE,
>                                      best_read_time)))
> 
> in the case of hash key before reaching this function we must have to
> calculate hash after this it will work but i guess upto this point our
> tree is not evaluated , i do not know

Sorry, I didn't quite understand that :(

> please review branch
> https://github.com/SachinSetiya/server/tree/unique_index_where_up_de
> and tell me what should i do and also review the orignal branch
> unique_index_sachin  i added more test cases and reinsert also works

I won't be able to do that until nex

Re: [Maria-developers] Sachin weekly report

2016-08-18 Thread Sachin Setia
Hello Sergei!
I am stuck at one problem.
Consider table t1 (a blob, b blob, unique(a,b)).
Although
  select * from t1 where a=12 and b=23
works, a case like
  select * from t1 where (a=12 or a=45) and (b=23 or b=45)
does not work, and update and delete using the long unique index do not
work either. Even a simple query like
  delete from t1 where a=1 and b=3;
(and the analogous update) does not work.

The reason is that these queries use the test_quick_select function, which
does not recognize the hash index: in get_mm_parts it always returns false,
because it compares db_row_hash_1 to a and b. I solved this problem, but
now the remaining problems are:

1. First, how to ensure that both columns a and b appear in the WHERE
clause. Currently I crawl through tree->key[i]->next_key_part until
next_key_part is null. This works for a simple case like a=1 and b=2, but I
am not sure how to do it for conditions like
  ((a=1 or a=34) and (b=34 or b=33)) or (b=34)
In this condition, before the last OR it is okay to use the hash, but for
the final b=34 we should not use the hash index, and I do not know how to
detect that.

2. When is the SEL_TREE evaluated?

3. Where should I evaluate the hash? I can do that, but the main problem is
that I need to preserve the original data (like a=1 and b=3), because the
hash does not guarantee uniqueness.

4. There is one more problem in test_quick_select. There is this code:

  if ((range_trp= get_key_scans_params(&param, tree, FALSE, TRUE,
                                       best_read_time)))

In the case of a hash key, before reaching this function we must calculate
the hash; after that it will work. But I guess up to this point our tree is
not evaluated; I do not know.

Please review the branch
https://github.com/SachinSetiya/server/tree/unique_index_where_up_de
and tell me what I should do. Please also review the original branch
unique_index_sachin; I added more test cases and reinsert also works.

On Sun, Aug 14, 2016 at 1:46 PM, Sergei Golubchik  wrote:
>
> Hi, Sachin!
>
> I'm reviewing.
> Here I only comment on your reply to the previous review.
>
> > >> +longlong  Item_func_hash::val_int()
> > >> +{
> > >> +  unsigned_flag= true;
> > >> +  ulong nr1= 1,nr2= 4;
> > >> +  CHARSET_INFO *cs;
> > >> +  for (uint i= 0; i < arg_count; i++)
> > >> +  {
> > >> +String * str = args[i]->val_str();
> > >> +if(args[i]->null_value)
> > >> +{
> > >> +  null_value= 1;
> > >> +  return 0;
> > >> +}
> > >> +cs= str->charset();
> > >> +uchar l[4];
> > >> +int4store(l,str->length());
> > >> +cs->coll->hash_sort(cs,l,sizeof(l), &nr1, &nr2);
> > > looks good, but use my_charset_binary for the length.
> > did not get it :(
>
> You use
>
>   cs->coll->hash_sort(cs,l,sizeof(l), &nr1, &nr2);
>
> to sort the length, the byte value of str->length().
> This is binary data, you should have
>
>   cs= my_charset_binary;
I do not find any variable named my_charset_binary, but I used
my_charset_utf8_bin; is that OK?

>
> but you have
>
>   cs= str->charset();
>
> for example, it can be latin1_general_ci, and then 'a' and 'A' will be
> considered equal. But 'a' is the string length 97, while 'A' is string
> length 65. String lengths are binary data, they do not consist of
> "letters" or "characters", so you need to use binary charset for
> lengths, but str->charset() for actual string data.
>
> > >> +cs->coll->hash_sort(cs, (uchar *)str->ptr(), str->length(), &nr1, &nr2);
>
> ^^^ here cs= str->charset() is correct.
>
> > >> +  }
> > >> +  return   (longlong)nr1;
> > >> +}
> > >> +
> > >> +
> > >> diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
> > >> index e614692..4872d20 100644
> > >> --- a/sql/sql_yacc.yy
> > >> +++ b/sql/sql_yacc.yy
> > >>   | COMMENT_SYM TEXT_STRING_sys { Lex->last_field->comment= $2; }
> > >> +| HIDDEN
> > >> +  {
> > >> +  LEX *lex =Lex;
> > > no need to do that ^^^ you can simply write Lex->last_field->field_visibility=...
> > change but in sql_yacc.yy this type of code is use everywhere
>
> Yes :) Many years ago Lex was defined to be current_thd->lex and on some
> platforms current_thd (which is pthread_getspecific) is expensive, so we
> were trying to avoid using it when possible. That's why Lex was saved in
> a local variable.
>
> But now (also, for many years) Lex is YYTHD->Lex, where YYTHD is the
> argument of the MYSQLyyparse function, not pthread_getspecific.
> So there is no need to avoid Lex anymore. Nobody bothered to fix the old
> code, though.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org


Re: [Maria-developers] Sachin weekly report

2016-08-14 Thread Sergei Golubchik
Hi, Sachin!

I'm reviewing.
Here I only comment on your reply to the previous review.

> >> +longlong  Item_func_hash::val_int()
> >> +{
> >> +  unsigned_flag= true;
> >> +  ulong nr1= 1,nr2= 4;
> >> +  CHARSET_INFO *cs;
> >> +  for (uint i= 0; i < arg_count; i++)
> >> +  {
> >> +String * str = args[i]->val_str();
> >> +if(args[i]->null_value)
> >> +{
> >> +  null_value= 1;
> >> +  return 0;
> >> +}
> >> +cs= str->charset();
> >> +uchar l[4];
> >> +int4store(l,str->length());
> >> +cs->coll->hash_sort(cs,l,sizeof(l), &nr1, &nr2);
> > looks good, but use my_charset_binary for the length.
> did not get it :(

You use

  cs->coll->hash_sort(cs,l,sizeof(l), &nr1, &nr2);

to sort the length, the byte value of str->length().
This is binary data, you should have

  cs= my_charset_binary;

but you have

  cs= str->charset();

for example, it can be latin1_general_ci, and then 'a' and 'A' will be
considered equal. But 'a' is the string length 97, while 'A' is string
length 65. String lengths are binary data, they do not consist of
"letters" or "characters", so you need to use binary charset for
lengths, but str->charset() for actual string data.

> >> +cs->coll->hash_sort(cs, (uchar *)str->ptr(), str->length(), &nr1, 
> >> &nr2);

^^^ here cs= str->charset() is correct.

> >> +  }
> >> +  return   (longlong)nr1;
> >> +}
> >> +
> >> +
> >> diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
> >> index e614692..4872d20 100644
> >> --- a/sql/sql_yacc.yy
> >> +++ b/sql/sql_yacc.yy
> >>   | COMMENT_SYM TEXT_STRING_sys { Lex->last_field->comment= $2; }
> >> +| HIDDEN
> >> +  {
> >> +  LEX *lex =Lex;
> > no need to do that ^^^ you can simply write 
> > Lex->last_field->field_visibility=...
> change but in sql_yacc.yy this type of code is use everywhere

Yes :) Many years ago Lex was defined to be current_thd->lex and on some
platforms current_thd (which is pthread_getspecific) is expensive, so we
were trying to avoid using it when possible. That's why Lex was saved in
a local variable.

But now (also, for many years) Lex is YYTHD->Lex, where YYTHD is the
argument of the MYSQLyyparse function, not pthread_getspecific.
So there is no need to avoid Lex anymore. Nobody bothered to fix the old
code, though.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org

___
Mailing list: https://launchpad.net/~maria-developers
Post to : maria-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help   : https://help.launchpad.net/ListHelp


Re: [Maria-developers] Sachin weekly report

2016-08-13 Thread sachin setiya

Hello sergei

Please review the code

Work left


1. Where works on complex queries including join, in, and subquery, but fails on this query, and I do not know why:


CREATE table t1 (a blob unique not null , b blob unique not null );
select * from t1 where a=b;

2. Alter works, but I am thinking of changing the code to make it less complex.


3. Delete and update using where are optimized in the complex cases involving joins and subqueries, but the normal case won't work, because those paths use functions like sql_quick_select which I have not optimized.

4. Need to write test cases for update.

5. I tried to make a prototype for the update problem.
In this prototype, when an insert hits a duplicate-key error, the record is cached on a stack, and this continues until some record insert succeeds; then the succeeded record's old data is compared to the last element of the key stack. If it matches, we pop the stack and do the update. The update fails when some element is left on the stack.

The prototype is on https://github.com/SachinSetiya/server/tree/up_proto_2
It works on a table like
create table t1 (a int unique , b int );
insert into t1 values(1,1),(2,2),(3,3),(4,4);
update t1 set a= a+1;
Currently this works only for a sorted key, but it can be made to work for an unsorted key by sorting the records in the stack with respect to the key.

To extend this to multiple keys, we can use one key stack for each key.

On 07/11/2016 11:41 PM, Sergei Golubchik wrote:

Hi, Sachin!

Here's a review of your commits from
b69e141a32 to 674eb4c4277.

Last time I've reviewed up to b69e141a32,
so next time I'll review from 674eb4c4277 and up.

Thanks for your work!


diff --git a/mysql-test/r/long_unique.result b/mysql-test/r/long_unique.result
new file mode 100644
index 000..fc6ff12
--- /dev/null
+++ b/mysql-test/r/long_unique.result
@@ -0,0 +1,160 @@
+create table z_1(abc blob unique);
+insert into z_1 values(112);
+insert into z_1 values('5666');
+insert into z_1 values('sachin');
+insert into z_1 values('sachin');
+ERROR 23000: Can't write; duplicate key in table 'z_1'
+select * from z_1;
+abc
+112
+5666
+sachin
+select db_row_hash_1 from z_1;
+ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
+desc z_1;
+Field  Type  Null  Key  Default  Extra
+abc    blob  YES         NULL
+select * from information_schema.columns where table_schema='mtr' and table_name='z_1';
+TABLE_CATALOG  TABLE_SCHEMA  TABLE_NAME  COLUMN_NAME  ORDINAL_POSITION  COLUMN_DEFAULT  IS_NULLABLE  DATA_TYPE  CHARACTER_MAXIMUM_LENGTH  CHARACTER_OCTET_LENGTH  NUMERIC_PRECISION  NUMERIC_SCALE  DATETIME_PRECISION  CHARACTER_SET_NAME  COLLATION_NAME  COLUMN_TYPE  COLUMN_KEY  EXTRA  PRIVILEGES  COLUMN_COMMENT
+create table tst_1(xyz blob unique , x2 blob unique);
+insert into tst_1 values(1,22);
+insert into tst_1 values(2,22);
+ERROR 23000: Can't write; duplicate key in table 'tst_1'
+select * from tst_1;
+xyzx2
+1  22
+select db_row_hash_1 from tst_1;
+ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
+select db_row_hash_2 from tst_1;
+ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
+select db_row_hash_1,db_row_hash_2 from tst_1;
+ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
+desc tst_1;
+Field  Type  Null  Key  Default  Extra
+xyz    blob  YES         NULL
+x2     blob  YES         NULL
+select * from information_schema.columns where table_schema='mtr' and table_name='tst_1';
+TABLE_CATALOG  TABLE_SCHEMA  TABLE_NAME  COLUMN_NAME  ORDINAL_POSITION  COLUMN_DEFAULT  IS_NULLABLE  DATA_TYPE  CHARACTER_MAXIMUM_LENGTH  CHARACTER_OCTET_LENGTH  NUMERIC_PRECISION  NUMERIC_SCALE  DATETIME_PRECISION  CHARACTER_SET_NAME  COLLATION_NAME  COLUMN_TYPE  COLUMN_KEY  EXTRA  PRIVILEGES  COLUMN_COMMENT
+create table t1 (empnum smallint, grp int);
+create table t2 (empnum int, name char(5));
+insert into t1 values(1,1);
+insert into t2 values(1,'bob');
+create view v1 as select * from t2 inner join t1 using (empnum);
+select * from v1;
+empnum namegrp
+1  bob 1

what is this test for? (with t1, t2, v1)

Removed. This was one of the InnoDB tests that was failing, so I had added it here,
but now it is removed.



+create table c_1(abc blob unique, db_row_hash_1 int unique);
+desc c_1;
+Field  Type  Null  Key  Default  Extra
+abc    blob  YES

Re: [Maria-developers] Sachin weekly report

2016-08-02 Thread Sergei Golubchik
Hi, Sachin!

On Aug 02, Sachin Setia wrote:
> Hello Sergei!
> 
> Sir I am stuck at one problem.
> Consider the case
> create table t1 (abc blob unique, xyz int unique);
> /* Insert data */
> update t1 set abc = 33 where xyz =23;
> This is not working because xyz = 23 will require index scan and at the time 
> of
> ha_update_row the inited != NONE and if inited != NONE then
> ha_index_read_idx_map would fail so how can I update row in case of
> blob unique ? In this case I think the only option left is sequential scan
> but that will defeat the purpose of blob unique.

inited!=NONE, I see. Right, when there's an ongoing table scan (with
rnd_next) or index scan (with index_next) there's kind of a cursor
inside the storage engine. And ha_index_read_idx_map will disrupt it, so
you cannot do ha_index_read_idx_map during a table or index scan...

This is a problem only for updates, not for inserts.

I could see two solutions for that. A simpler and slower solution is not
to allow that at all. Consider a test case (there's an index on idx):

  UPDATE t1 SET idx=idx+10 WHERE idx > 5;

in this case MariaDB cannot scan the index idx and update all rows that
it finds, because after idx=idx+10 the row moves further in the same
index, and the index scan might see the same row again when it does
index_next(), so it would keep adding 10 to it indefinitely :)

For cases like this (when the index that is used for searching is also
updated), UPDATE has a special mode, it first performs the index search,
collects all row ids, then uses this list of row ids to do the update.

You can enable this mode when your unique columns are updated (take care
not to enable it if unique hash columns are present, but not updated).

Another, maybe more complex, but faster solution is to use the
handler::clone method. It is used in selects to do two index scans in
parallel - just what we need. It could be stored in a TABLE, like this:

  // early in mysql_update
  if (hash indexes are updated)
  {
// prepare handler clone
if (table->x == NULL)
      table->x= table->file->clone(table->s->normalized_path.str, table->mem_root);
  }

later, you use table->x->ha_index_read_idx_map() instead of
table->file->ha_index_read_idx_map(). But these both handlers use the
same TABLE and the same TABLE::record[], so you still need to take care
not to overwrite TABLE::record[], even if you use handler::clone.

I would try to use handler::clone(), it doesn't look that complex
actually. And if that wouldn't work, use the first approach.

Btw, don't forget to ha_close the clone when TABLE closes its main
handler.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-08-01 Thread Sachin Setia
Hello Sergei!

Sir, I am stuck at one problem.
Consider the case:
create table t1 (abc blob unique, xyz int unique);
/* Insert data */
update t1 set abc = 33 where xyz =23;
This is not working, because xyz = 23 will require an index scan, and at the time of
ha_update_row, inited != NONE; and if inited != NONE, then
ha_index_read_idx_map fails. So how can I update a row in the case of
blob unique? I think the only option left is a sequential scan,
but that would defeat the purpose of blob unique.
Regards
sachin

On Thu, Jul 28, 2016 at 1:10 PM, Sergei Golubchik  wrote:
> Hi, Sachin!
>
> On Jul 27, Sergei Golubchik wrote:
>> >
>> > Please review branch
>> > https://github.com/SachinSetiya/server/tree/unique_index_where
>>
>> Sure. Will do.
>
> Sorry. Correction :(
>
> I'm on vacations two weeks in August, 9th to 22nd. And I have an awful
> lot to do for 10.2. Naturally I won't work on 10.2 on vacations, but I
> will take my laptop with me and I will answer your questions and review
> your code. Which also means that I'd better use every minute before
> vacations to work on 10.2, you see...
>
> So, I'll review you patch in a couple of weeks.
> Until then I can only answer questions, sorry :(
> And when on vacations, mind it, I will be only rarely on irc, so if I'm
> not there, do not wait for me, send an email, please. I will reply to
> your emails and I will do my reviews.
>
> By the way, did you do a full mysql-test run? Like this:
>
>   ./mtr --force --parallel 5
>
> In fact, what I do is:
>
>   cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_EMBEDDED_SERVER=ON
>   make -j5
>   cd mysql-test-run
>   script -c './mtr --force --parallel 5; ./mtr --force --parallel 5 --ps; ./mtr --force --parallel 3 --embed'
>
> and that last command takes 4-5 hours on my laptop. I don't do that very
> often :) but it's thorough. You can run it overnight. Because of
> 'script' the complete log will be in the typescript file, so there's no
> need to monitor manually anything.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-07-27 Thread Sergei Golubchik
Hi, Sachin!

On Jul 27, Sachin Setia wrote:
> Hello Sergei!
> Weekly Report for 9 week of gsoc
> 
> Unique Long
> 
> 1. Changed mysql_prepare_create function as suggested by you , now addition
> of hash column will not be added in function start.
> 2. Sorted out problem of full_hidden detection now it is detected as soon
> as it is found.

I hope you've added a test for it, this issue is not trivial

> Where Optimization
> 1. In case of unique(a) if hash collides then it fetches the next record
> and compares it and so on.

good! with a test case? :)

> 2. Now unique(a,b,c ..) also works and also in case of hash collision  it
> fetches the next record and compares it and so on.
> 
> Please review branch
> https://github.com/SachinSetiya/server/tree/unique_index_where

Sure. Will do.

> The only problem i have is explain query fails , trying to solve it let you
> know if something happens.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-07-26 Thread Sachin Setia
Hello Sergei!
Weekly Report for 9 week of gsoc

Unique Long

1. Changed the mysql_prepare_create function as suggested by you; now the hash
column will not be added at the start of the function.
2. Sorted out the problem of full_hidden detection; now it is detected as soon
as it is found.

Where Optimization
1. In the case of unique(a), if the hash collides, it fetches the next record
and compares it, and so on.
2. Now unique(a,b,c ...) also works, and in the case of a hash collision it
likewise fetches the next record and compares it, and so on.

Please review branch
https://github.com/SachinSetiya/server/tree/unique_index_where
The only problem I have is that the explain query fails; I am trying to solve
it and will let you know when something happens.
Regards
sachin

On Mon, Jul 25, 2016 at 1:34 AM, Sachin Setia 
wrote:

> Actually  i find that my_strnncoll  wil work :)
> Regards
> sachin
>
> On Mon, Jul 25, 2016 at 1:17 AM, Sachin Setia 
> wrote:
>
>> Hello Sergei,
>> I am getting one problem related to my_strcasecmp() function currently
>> this function does not allow
>> string comparison upto length l, is there any functon which can do
>> comparison upto length l, or should i
>> write mine.
>> Regards
>> sachin
>>
>> On Fri, Jul 22, 2016 at 9:56 PM, Sachin Setia 
>> wrote:
>>
>>> Hello Sergei,
>>> I have one problem my where optimization works for  query like
>>> select * from t1 where abc=1;
>>>
>>> but in query like
>>> select * from t1 where abc=(select xyz from t2 where xyz=1);
>>> does not work because in these query the charset is different from what
>>> used in
>>> t1 for inserting data and hence generation different hash for same data
>>> how i solve this problem.
>>> Regards
>>> sachin
>>>
>>> On Tue, Jul 19, 2016 at 5:52 PM, Sachin Setia >> > wrote:
>>>
 Just give me 3 days after you can review. yes I already merged with
 10.2.1 days ago.
 regards
 Sachin

 On Jul 19, 2016 17:28, "Sergei Golubchik"  wrote:

> Hi, Sachin!
>
> On Jul 19, Sachin Setia wrote:
> > Weekly Report for 8 week of gsoc
> >
> > 1 Changed the key flags as suggested by you.
>
> okay
>
> > 2 Now update will use only one buffer as suggested by you but there
> was one
> > problem some time offset can be
> > negative so i changed the field cmp_offset parameter from uint to
> long
>
> sure
>
> > 3 Still working on coding conventions.
> >
> > 4 I have made prototype for optimizing where for keys like unique(a)
> , it
> > is on branch
> > https://github.com/SachinSetiya/server/tree/unique_index_where
> >
> > Currently I am working on muliple keys like unique(a,b,c) i think
> this
> > should take 2-3 days  and edits suggested by you.
>
> sounds good.
> did you merge with 10.2.1?
>
> tell me when you'd want me to do another review.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org
>

>>>
>>
>


Re: [Maria-developers] Sachin weekly report

2016-07-24 Thread Sachin Setia
Actually I find that my_strnncoll will work :)
Regards
sachin

On Mon, Jul 25, 2016 at 1:17 AM, Sachin Setia 
wrote:

> Hello Sergei,
> I am getting one problem related to my_strcasecmp() function currently
> this function does not allow
> string comparison upto length l, is there any functon which can do
> comparison upto length l, or should i
> write mine.
> Regards
> sachin
>
> On Fri, Jul 22, 2016 at 9:56 PM, Sachin Setia 
> wrote:
>
>> Hello Sergei,
>> I have one problem my where optimization works for  query like
>> select * from t1 where abc=1;
>>
>> but in query like
>> select * from t1 where abc=(select xyz from t2 where xyz=1);
>> does not work because in these query the charset is different from what
>> used in
>> t1 for inserting data and hence generation different hash for same data
>> how i solve this problem.
>> Regards
>> sachin
>>
>> On Tue, Jul 19, 2016 at 5:52 PM, Sachin Setia 
>> wrote:
>>
>>> Just give me 3 days after you can review. yes I already merged with
>>> 10.2.1 days ago.
>>> regards
>>> Sachin
>>>
>>> On Jul 19, 2016 17:28, "Sergei Golubchik"  wrote:
>>>
 Hi, Sachin!

 On Jul 19, Sachin Setia wrote:
 > Weekly Report for 8 week of gsoc
 >
 > 1 Changed the key flags as suggested by you.

 okay

 > 2 Now update will use only one buffer as suggested by you but there
 was one
 > problem some time offset can be
 > negative so i changed the field cmp_offset parameter from uint to long

 sure

 > 3 Still working on coding conventions.
 >
 > 4 I have made prototype for optimizing where for keys like unique(a)
 , it
 > is on branch
 > https://github.com/SachinSetiya/server/tree/unique_index_where
 >
 > Currently I am working on muliple keys like unique(a,b,c) i think this
 > should take 2-3 days  and edits suggested by you.

 sounds good.
 did you merge with 10.2.1?

 tell me when you'd want me to do another review.

 Regards,
 Sergei
 Chief Architect MariaDB
 and secur...@mariadb.org

>>>
>>
>


Re: [Maria-developers] Sachin weekly report

2016-07-24 Thread Sachin Setia
Hello Sergei,
I am facing one problem related to the my_strcasecmp() function: currently
this function does not allow string comparison up to length l. Is there any
function which can do comparison up to length l, or should I write my own?
Regards
sachin

On Fri, Jul 22, 2016 at 9:56 PM, Sachin Setia 
wrote:

> Hello Sergei,
> I have one problem my where optimization works for  query like
> select * from t1 where abc=1;
>
> but in query like
> select * from t1 where abc=(select xyz from t2 where xyz=1);
> does not work because in these query the charset is different from what
> used in
> t1 for inserting data and hence generation different hash for same data
> how i solve this problem.
> Regards
> sachin
>
> On Tue, Jul 19, 2016 at 5:52 PM, Sachin Setia 
> wrote:
>
>> Just give me 3 days after you can review. yes I already merged with
>> 10.2.1 days ago.
>> regards
>> Sachin
>>
>> On Jul 19, 2016 17:28, "Sergei Golubchik"  wrote:
>>
>>> Hi, Sachin!
>>>
>>> On Jul 19, Sachin Setia wrote:
>>> > Weekly Report for 8 week of gsoc
>>> >
>>> > 1 Changed the key flags as suggested by you.
>>>
>>> okay
>>>
>>> > 2 Now update will use only one buffer as suggested by you but there
>>> was one
>>> > problem some time offset can be
>>> > negative so i changed the field cmp_offset parameter from uint to long
>>>
>>> sure
>>>
>>> > 3 Still working on coding conventions.
>>> >
>>> > 4 I have made prototype for optimizing where for keys like unique(a) ,
>>> it
>>> > is on branch
>>> > https://github.com/SachinSetiya/server/tree/unique_index_where
>>> >
>>> > Currently I am working on muliple keys like unique(a,b,c) i think this
>>> > should take 2-3 days  and edits suggested by you.
>>>
>>> sounds good.
>>> did you merge with 10.2.1?
>>>
>>> tell me when you'd want me to do another review.
>>>
>>> Regards,
>>> Sergei
>>> Chief Architect MariaDB
>>> and secur...@mariadb.org
>>>
>>
>


Re: [Maria-developers] Sachin weekly report

2016-07-22 Thread Sachin Setia
Hello Sergei,
I have one problem: my where optimization works for a query like
select * from t1 where abc=1;

but a query like
select * from t1 where abc=(select xyz from t2 where xyz=1);
does not work, because in this query the charset differs from the one used in
t1 when inserting the data, and hence a different hash is generated for the
same data. How do I solve this problem?
Regards
sachin

On Tue, Jul 19, 2016 at 5:52 PM, Sachin Setia 
wrote:

> Just give me 3 days after you can review. yes I already merged with 10.2.1
> days ago.
> regards
> Sachin
>
> On Jul 19, 2016 17:28, "Sergei Golubchik"  wrote:
>
>> Hi, Sachin!
>>
>> On Jul 19, Sachin Setia wrote:
>> > Weekly Report for 8 week of gsoc
>> >
>> > 1 Changed the key flags as suggested by you.
>>
>> okay
>>
>> > 2 Now update will use only one buffer as suggested by you but there was
>> one
>> > problem some time offset can be
>> > negative so i changed the field cmp_offset parameter from uint to long
>>
>> sure
>>
>> > 3 Still working on coding conventions.
>> >
>> > 4 I have made prototype for optimizing where for keys like unique(a) ,
>> it
>> > is on branch
>> > https://github.com/SachinSetiya/server/tree/unique_index_where
>> >
>> > Currently I am working on muliple keys like unique(a,b,c) i think this
>> > should take 2-3 days  and edits suggested by you.
>>
>> sounds good.
>> did you merge with 10.2.1?
>>
>> tell me when you'd want me to do another review.
>>
>> Regards,
>> Sergei
>> Chief Architect MariaDB
>> and secur...@mariadb.org
>>
>


Re: [Maria-developers] Sachin weekly report

2016-07-19 Thread Sachin Setia
Just give me 3 days, after which you can review. Yes, I already merged with
10.2.1 days ago.
regards
Sachin

On Jul 19, 2016 17:28, "Sergei Golubchik"  wrote:

> Hi, Sachin!
>
> On Jul 19, Sachin Setia wrote:
> > Weekly Report for 8 week of gsoc
> >
> > 1 Changed the key flags as suggested by you.
>
> okay
>
> > 2 Now update will use only one buffer as suggested by you but there was
> one
> > problem some time offset can be
> > negative so i changed the field cmp_offset parameter from uint to long
>
> sure
>
> > 3 Still working on coding conventions.
> >
> > 4 I have made prototype for optimizing where for keys like unique(a) , it
> > is on branch
> > https://github.com/SachinSetiya/server/tree/unique_index_where
> >
> > Currently I am working on muliple keys like unique(a,b,c) i think this
> > should take 2-3 days  and edits suggested by you.
>
> sounds good.
> did you merge with 10.2.1?
>
> tell me when you'd want me to do another review.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org
>


Re: [Maria-developers] Sachin weekly report

2016-07-19 Thread Sergei Golubchik
Hi, Sachin!

On Jul 19, Sachin Setia wrote:
> Weekly Report for 8 week of gsoc
> 
> 1 Changed the key flags as suggested by you.

okay

> 2 Now update will use only one buffer as suggested by you but there was one
> problem some time offset can be
> negative so i changed the field cmp_offset parameter from uint to long

sure

> 3 Still working on coding conventions.
> 
> 4 I have made prototype for optimizing where for keys like unique(a) , it
> is on branch
> https://github.com/SachinSetiya/server/tree/unique_index_where
> 
> Currently I am working on muliple keys like unique(a,b,c) i think this
> should take 2-3 days  and edits suggested by you.

sounds good.
did you merge with 10.2.1?

tell me when you'd want me to do another review.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-07-19 Thread Sachin Setia
Weekly Report for week 8 of GSoC

1. Changed the key flags as suggested by you.
2. Now update will use only one buffer as suggested by you, but there was one
problem: sometimes the offset can be negative, so I changed the field
cmp_offset parameter from uint to long.
3. Still working on coding conventions.

4. I have made a prototype for optimizing where for keys like unique(a); it
is on branch
https://github.com/SachinSetiya/server/tree/unique_index_where

Currently I am working on multiple keys like unique(a,b,c); I think this
should take 2-3 days, plus the edits suggested by you.
Regards
sachin

On Fri, Jul 15, 2016 at 5:29 PM, Sachin Setia 
wrote:

> okay sir, i will just store the is_hash flag and on
> init_from_binary_frm_image
> add flag to key. thanks
> Regards
> sachin
>
> On Fri, Jul 15, 2016 at 3:58 PM, Sergei Golubchik 
> wrote:
> > Hi, Sachin!
> >
> > On Jul 14, Sachin Setia wrote:
> >> Hello Sergei
> >> Actually i have one doubt  there is two options
> >> 1 add is_hash flag to field
> >> 2 add ha_unique_hash flag to field ,but i think all 16 bits of key flag
> is used
> >> so to do this i need to add another ex_flag variable in key struct and
> >> store it and retrieve it from frm
> >> currently i have done both ,but as you pointed out in review that  it
> >> is better to have it in key only but
> >> my question is that, whether is this approach is right ?
> >
> > From the storage point of view, it's much easier to put a special flag
> > on the field. You can store it in EXTRA2 or you can store it in the
> > Field::unireg_check enum.
> >
> > And then you recognize your HA_UNIQUE_HASH keys by having is_hash
> > property on the field of the first keypart. You can even set the
> > HA_UNIQUE_HASH flag in the key, if HA_UNIQUE_HASH=65536, for example.
> > That is, it won't be stored in the frm, and you set the flag in
> > init_from_binary_frm_image - the flag can be used to simplify run-time
> > checks, but it won't be stored on disk.
> >
> > Regards,
> > Sergei
> > Chief Architect MariaDB
> > and secur...@mariadb.org
>


Re: [Maria-developers] Sachin weekly report

2016-07-15 Thread Sachin Setia
Okay sir, I will just store the is_hash flag, and in init_from_binary_frm_image
add the flag to the key. Thanks.
Regards
sachin

On Fri, Jul 15, 2016 at 3:58 PM, Sergei Golubchik  wrote:
> Hi, Sachin!
>
> On Jul 14, Sachin Setia wrote:
>> Hello Sergei
>> Actually i have one doubt  there is two options
>> 1 add is_hash flag to field
>> 2 add ha_unique_hash flag to field ,but i think all 16 bits of key flag is 
>> used
>> so to do this i need to add another ex_flag variable in key struct and
>> store it and retrieve it from frm
>> currently i have done both ,but as you pointed out in review that  it
>> is better to have it in key only but
>> my question is that, whether is this approach is right ?
>
> From the storage point of view, it's much easier to put a special flag
> on the field. You can store it in EXTRA2 or you can store it in the
> Field::unireg_check enum.
>
> And then you recognize your HA_UNIQUE_HASH keys by having is_hash
> property on the field of the first keypart. You can even set the
> HA_UNIQUE_HASH flag in the key, if HA_UNIQUE_HASH=65536, for example.
> That is, it won't be stored in the frm, and you set the flag in
> init_from_binary_frm_image - the flag can be used to simplify run-time
> checks, but it won't be stored on disk.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-07-15 Thread Sergei Golubchik
Hi, Sachin!

On Jul 14, Sachin Setia wrote:
> Hello Sergei
> Actually i have one doubt  there is two options
> 1 add is_hash flag to field
> 2 add ha_unique_hash flag to field ,but i think all 16 bits of key flag is 
> used
> so to do this i need to add another ex_flag variable in key struct and
> store it and retrieve it from frm
> currently i have done both ,but as you pointed out in review that  it
> is better to have it in key only but
> my question is that, whether is this approach is right ?

From the storage point of view, it's much easier to put a special flag
on the field. You can store it in EXTRA2 or you can store it in the
Field::unireg_check enum.

And then you recognize your HA_UNIQUE_HASH keys by having is_hash
property on the field of the first keypart. You can even set the
HA_UNIQUE_HASH flag in the key, if HA_UNIQUE_HASH=65536, for example.
That is, it won't be stored in the frm, and you set the flag in
init_from_binary_frm_image - the flag can be used to simplify run-time
checks, but it won't be stored on disk.

Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org



Re: [Maria-developers] Sachin weekly report

2016-07-14 Thread Sachin Setia
Hello Sergei
Actually i have one doubt  there is two options
1 add is_hash flag to field
2 add ha_unique_hash flag to field ,but i think all 16 bits of key flag is used
so to do this i need to add another ex_flag variable in key struct and
store it and retrieve it from frm
currently i have done both ,but as you pointed out in review that  it
is better to have it in key only but
my question is that , whether is this approach is right ?
Regards
sachin

On Tue, Jul 12, 2016 at 2:43 PM, Sachin Setia  wrote:
> Hello everyone,
>
> Weekly Report for 7th week of gsoc
>
> 1Now we can alter blob columns query like
> create table t1(abc blob unique);
> alter table t1 change column abc a blob;
>   Even we can some multiple changes in one alter
> create table t1(abc blob unique, xyz blob unique);
> alter table t1 change column abc a blob , change xyz x blob;
>   Works.
>
> 2.Now we can delete blob columns if only one blob unique in
>   key then db_row_hash_ will be removed other wise hash_str
>   will be modified.If the query fails then there will be no
>   side effect
>
> 3.chaning of delete operations
>   create table t1(abc blob , xyz blob , pqr blob, unique(abc,xyz,pqr));
>   alter table t1 drop column abc, drop column xyz;
> 4. we will get right error message instead of duplicate hash
> 5. these was an glich in code when we try to add db_row_hash_ column to
> table with first key work mysql_prepare_alter_table select this
> instead of real hash
> now solved.
> 6. Added some test case.Will add more soon.
> @Sergei will reply to you soon.
> Regards
> sachin



Re: [Maria-developers] Sachin weekly report

2016-07-12 Thread Sachin Setia
Hello everyone,

Weekly Report for the 7th week of GSoC

1. Now we can alter blob columns with a query like
create table t1(abc blob unique);
alter table t1 change column abc a blob;
   We can even make multiple changes in one alter:
create table t1(abc blob unique, xyz blob unique);
alter table t1 change column abc a blob , change xyz x blob;
   also works.

2. Now we can delete blob columns: if there is only one unique blob in the
key, then db_row_hash_ will be removed; otherwise hash_str will be modified.
If the query fails, there will be no side effect.

3. Chaining of delete operations works:
   create table t1(abc blob , xyz blob , pqr blob, unique(abc,xyz,pqr));
   alter table t1 drop column abc, drop column xyz;
4. We will get the right error message instead of a duplicate-hash one.
5. There was a glitch in the code: when we tried to add a db_row_hash_ column
to a table, mysql_prepare_alter_table selected the first such keyword instead
of the real hash column; now solved.
6. Added some test cases. Will add more soon.
@Sergei will reply to you soon.
Regards
sachin



Re: [Maria-developers] Sachin weekly report

2016-07-11 Thread Sergei Golubchik
Hi, Sachin!

Here's a review of your commits from
b69e141a32 to 674eb4c4277.

Last time I've reviewed up to b69e141a32,
so next time I'll review from 674eb4c4277 and up.

Thanks for your work!

> diff --git a/mysql-test/r/long_unique.result b/mysql-test/r/long_unique.result
> new file mode 100644
> index 000..fc6ff12
> --- /dev/null
> +++ b/mysql-test/r/long_unique.result
> @@ -0,0 +1,160 @@
> +create table z_1(abc blob unique);
> +insert into z_1 values(112);
> +insert into z_1 
> values('5666');
> +insert into z_1 values('sachin');
> +insert into z_1 values('sachin');
> +ERROR 23000: Can't write; duplicate key in table 'z_1'
> +select * from z_1;
> +abc
> +112
> +5666
> +sachin
> +select db_row_hash_1 from z_1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +desc z_1;
> +FieldTypeNullKey Default Extra
> +abc  blobYES NULL
> +select * from information_schema.columns where table_schema='mtr' and 
> table_name='z_1';
> +TABLE_CATALOGTABLE_SCHEMATABLE_NAME  COLUMN_NAME 
> ORDINAL_POSITIONCOLUMN_DEFAULT  IS_NULLABLE DATA_TYPE   
> CHARACTER_MAXIMUM_LENGTHCHARACTER_OCTET_LENGTH  NUMERIC_PRECISION 
>   NUMERIC_SCALE   DATETIME_PRECISION  CHARACTER_SET_NAME  
> COLLATION_NAME  COLUMN_TYPE COLUMN_KEY  EXTRA   PRIVILEGES  
> COLUMN_COMMENT
> +create table tst_1(xyz blob unique , x2 blob unique);
> +insert into tst_1 values(1,22);
> +insert into tst_1 values(2,22);
> +ERROR 23000: Can't write; duplicate key in table 'tst_1'
> +select * from tst_1;
> +xyz  x2
> +122
> +select db_row_hash_1 from tst_1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +select db_row_hash_2 from tst_1;
> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
> +select db_row_hash_1,db_row_hash_2 from tst_1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +desc tst_1;
> +FieldTypeNullKey Default Extra
> +xyz  blobYES NULL
> +x2   blobYES NULL
> +select * from information_schema.columns where table_schema='mtr' and 
> table_name='tst_1';
> +TABLE_CATALOGTABLE_SCHEMATABLE_NAME  COLUMN_NAME 
> ORDINAL_POSITIONCOLUMN_DEFAULT  IS_NULLABLE DATA_TYPE   
> CHARACTER_MAXIMUM_LENGTHCHARACTER_OCTET_LENGTH  NUMERIC_PRECISION 
>   NUMERIC_SCALE   DATETIME_PRECISION  CHARACTER_SET_NAME  
> COLLATION_NAME  COLUMN_TYPE COLUMN_KEY  EXTRA   PRIVILEGES  
> COLUMN_COMMENT
> +create table t1 (empnum smallint, grp int);
> +create table t2 (empnum int, name char(5));
> +insert into t1 values(1,1);
> +insert into t2 values(1,'bob');
> +create view v1 as select * from t2 inner join t1 using (empnum);
> +select * from v1;
> +empnum   namegrp
> +1bob 1

what is this test for? (with t1, t2, v1)

> +create table c_1(abc blob unique, db_row_hash_1 int unique);
> +desc c_1;
> +FieldTypeNullKey Default Extra
> +abc  blobYES NULL
> +db_row_hash_1int(11) YES UNI NULL
> +insert into c_1 values(1,1);
> +insert into c_1 values(1,2);
> +ERROR 23000: Can't write; duplicate key in table 'c_1'
> +create table c_2(abc blob unique,xyz blob unique, db_row_hash_2
> +int,db_row_hash_1 int unique);
> +desc c_2;
> +FieldTypeNullKey Default Extra
> +abc  blobYES NULL
> +xyz  blobYES NULL
> +db_row_hash_2int(11) YES NULL
> +db_row_hash_1int(11) YES UNI NULL
> +insert into c_2 values(1,1,1,1);
> +insert into c_2 values(1,23,4,5);
> +ERROR 23000: Can't write; duplicate key in table 'c_2'
> +create table u_1(abc int primary key , xyz blob unique);
> +insert into u_1 values(1,2);
> +insert into u_1 values(2,3);
> +update u_1 set xyz=2 where abc=1;
> +alter table z_1 drop db_row_hash_1;
> +ERROR 42000: Can't DROP 'db_row_hash_1'; check that column/key exists
> +alter table c_1 drop db_row_hash_2;
> +ERROR 42000: Can't DROP 'db_row_hash_2'; check that column/key exists
> +alter table c_1 drop db_row_hash_1;
> +alter table z_1 add column  db_row_hash_1 int unique;
> +show create table z_1;
> +TableCreate Table
> +z_1  CREATE TABLE `z_1` (
> +  `abc` blob,
> +  `db_row_hash_1` int(11) DEFAULT NULL,
> +  UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),

Re: [Maria-developers] Sachin weekly report

2016-07-04 Thread Sachin Setia
Hello everyone,

Weekly Report for 6th week of gsoc
1. Hidden fields are fully supported; they also show in the Extra column.
2. If we declare a key as unique(abc,df), the key name will be abc_df
instead of abc.
3. Prototyped how to store information about the hash and the type of hash
in Field and KEY. Finally decided to add a bool flag (is_row_hash) in
Field and a new long flag in KEY named ex_flags; it is stored in and
retrieved from the extra2 region.
4. Studying the optimizer code for WHERE conditions on a unique key.
5. It now gives an error for these statements:
   create table t1(abc blob unique,index(db_row_hash_1));
   alter table t2 add column abc blob unique,add index(db_row_hash_1);
6. We can now delete unique blob columns; db_row_hash will be removed
automatically. Currently working on cases like
create table t1(a blob,b blob ,unique(a,b));
alter table t1 drop column a;
This works after the alter, but the records stored in the table can then
be duplicates, which violates the unique key rule.
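The unique(a,b) case hinges on how the combined hash is computed. A rough
sketch of one way to hash multiple key parts unambiguously (the encoding
here is hypothetical, not the server's real one):

```python
import hashlib

def composite_row_hash(*parts):
    """Hash several key parts into one digest, as for unique(a,b).

    A length prefix per part keeps ('ab','c') distinct from ('a','bc');
    NULL parts get their own marker so NULL != empty string.
    """
    h = hashlib.sha1()
    for part in parts:
        if part is None:
            h.update(b"\x00NULL")          # marker for a NULL key part
            continue
        h.update(len(part).to_bytes(8, "little"))
        h.update(part)
    return h.hexdigest()

# After 'alter table t1 drop column a', every stored hash must be
# recomputed over the remaining columns, otherwise rows that differed
# only in 'a' keep distinct hashes and duplicates slip through.
assert composite_row_hash(b"ab", b"c") != composite_row_hash(b"a", b"bc")
```

This is why dropping one column of a composite unique is not just a
metadata change: without rebuilding the hashes, the stored digests no
longer reflect the key the constraint is supposed to enforce.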
Regards
sachin



Re: [Maria-developers] Sachin weekly report

2016-06-26 Thread Sachin Setia
Hello everyone,

Weekly Report for 5th week of gsoc

1. SELECT works without showing db_row_hash.
2. The db_row_hash column is put in first place rather than last.
3. SHOW CREATE TABLE will not show db_row_hash_*.
4. SELECT will not show hidden level 3 columns.
5. User-defined hidden fields work.
6. We can also create a definition like create table t1(abc blob
unique, db_row_hash_1 int unique); // this will automatically create
column db_row_hash_2 for storing the hash.
7. db_row_hash_1 is effectively invisible to the ALTER command: if we add
a column db_row_hash_1 using ALTER, the internal hidden column
db_row_hash_1 will automatically be renamed to db_row_hash_2.
8. A unique constraint can also be added to a blob via ALTER, and it can
also be deleted.
9. SHOW CREATE TABLE will not show db_row_hash_*; if it finds a hash
column it will show it as unique(column names).
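The renaming in items 6 and 7 boils down to picking the first free
db_row_hash_N suffix. A toy sketch of that allocation (function name and
shape are hypothetical, not the server code):

```python
def allocate_hash_column_name(existing_columns):
    """Pick the first db_row_hash_N name not taken by an existing column.

    Models the behaviour above: when the user already owns
    db_row_hash_1, the internal hidden column moves to the next suffix.
    """
    n = 1
    while f"db_row_hash_{n}" in existing_columns:
        n += 1
    return f"db_row_hash_{n}"

# User-defined db_row_hash_1 forces the hidden column to _2, matching the
# create table t1(abc blob unique, db_row_hash_1 int unique) example.
print(allocate_hash_column_name({"abc", "db_row_hash_1"}))  # db_row_hash_2
```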

Regards
sachin


On Mon, Jun 20, 2016 at 11:50 AM, Sachin Setia
 wrote:
> Hi Sergei,
>
> Weekly Report for 4th week of gsoc
>
> 1. Field property is_row_hash, field_visibility successfully saved to and
> retrieved from the frm, using the extra2 space.
> 2. Some tests added.
> 3. Solved the error when there is another primary key (it used to accept
> duplicates in this case).
> 4. Added hidden in the parser.
> 5. Identified memory leak 1: the malloc'd db_row_hash string was never
> freed. Still searching for the second memory leak.
> Work for this week:
> 1. First solve the memory leak problem.
> 2. Work on FULL_HIDDEN_FIELDS.
> 3. In mysql_prepare_create_table I am using an iterator; it would be
> better if I could add the custom field when an error is reported, so I
> would not have to use the iterator, as you suggested.
> 4. Rename the hash field automatically in case of a clash.
> On Thu, Jun 16, 2016 at 11:46 PM, Sergei Golubchik  wrote:
>>
>> Hi, Sachin!
>>
>> On Jun 15, Sachin Setia wrote:
>> >
>> > But the major problem is:-
>> >  Consider this case
>> >
>> > create table tbl(abc int primary key,xyz blob unique);
>> >
>> > In this case , second key_info will have one user_defined_key_parts but
>> > two
>> > ext_key_parts
>> > second key_part refers to primary key.
>> > because of this ha_index_read_idx_map always return HA_ERR_KEY_NOT_FOUND
>> > I am trying to solve this problem.
>>
>> I've seen you solved this, but I do not understand the problem (and so I
>> cannot understand the fix either).
>
>
> Problem was
> consider this
> create table tbl(abc int primary key , xyz blob unique);
> insert into tbl value(1,12);
> insert into tbl value(2,12); # no error , details in commit comment
> https://github.com/MariaDB/server/commit/baecc73380084c61b9323a30f3e25977176e98b0
>>
>> Please, try to add a test case for
> the problem you're fixing. In the same commit, preferably.
>>
>> Now you can still commit a test case for this problem and your fix,
>> then, I hope, I'll be able to understand better what the problem was.
>>
>> Regards,
>> Sergei
>> Chief Architect MariaDB
>> and secur...@mariadb.org
>
>



Re: [Maria-developers] Sachin weekly report

2016-06-19 Thread Sachin Setia
Hi Sergei,

Weekly Report for 4th week of gsoc

1. Field property is_row_hash, field_visibility successfully saved to and
retrieved from the frm, using the extra2 space.
2. Some tests added.
3. Solved the error when there is another primary key (it used to accept
duplicates in this case).
4. Added hidden in the parser.
5. Identified memory leak 1: the malloc'd db_row_hash string was never
freed. Still searching for the second memory leak.
Work for this week:
1. First solve the memory leak problem.
2. Work on FULL_HIDDEN_FIELDS.
3. In mysql_prepare_create_table I am using an iterator; it would be
better if I could add the custom field when an error is reported, so I
would not have to use the iterator, as you suggested.
4. Rename the hash field automatically in case of a clash.
On Thu, Jun 16, 2016 at 11:46 PM, Sergei Golubchik  wrote:

> Hi, Sachin!
>
> On Jun 15, Sachin Setia wrote:
> >
> > But the major problem is:-
> >  Consider this case
> >
> > create table tbl(abc int primary key,xyz blob unique);
> >
> > In this case , second key_info will have one user_defined_key_parts but
> two
> > ext_key_parts
> > second key_part refers to primary key.
> > because of this ha_index_read_idx_map always return HA_ERR_KEY_NOT_FOUND
> > I am trying to solve this problem.
>
> I've seen you solved this, but I do not understand the problem (and so I
> cannot understand the fix either).


Problem was
consider this
create table tbl(abc int primary key , xyz blob unique);
insert into tbl value(1,12);
insert into tbl value(2,12); # no error , details in commit comment
https://github.com/MariaDB/server/commit/baecc73380084c61b9323a30f3e25977176e98b0

> Please, try to add a test case for
> the problem you're fixing. In the same commit, preferably.
>
Now you can still commit a test case for this problem and your fix,
> then, I hope, I'll be able to understand better what the problem was.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and secur...@mariadb.org
>


Re: [Maria-developers] Sachin weekly report

2016-06-14 Thread Sergei Golubchik
Hi, Sachin!

Sounds good. Please continue with tests. It's difficult to see what is
working and what isn't without them...

What do you plan to do this week? Row comparison? Hidden columns?

On Jun 14, Sachin Setia wrote:
> Hello Sergei,
> Weekly Report for third week of gsoc
> 
> 1. Improved code as suggested by you.
> 2. UPDATE works.
> 3. Tried running the tests, but ./mtr failed; after some bug fixes it is
> now working.
> 4. Tried a unique index for TEXT, and am trying varchar with
> length > file->max_key_part_length().
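The varchar case in item 4 uses the same fallback as blob/text: a key part
too long for the engine becomes a hash key. A toy decision check (the
limit constant and function are hypothetical; the real check lives in the
storage engine):

```python
MAX_KEY_PART_LENGTH = 1000  # hypothetical engine limit, in bytes

def needs_hash_key(type_name, length=0):
    """Blob/text always need the hash fallback; varchar only when it
    exceeds the engine's key-part length limit."""
    return type_name in ("blob", "text") or length > MAX_KEY_PART_LENGTH

print(needs_hash_key("varchar", 2000))  # True
print(needs_hash_key("varchar", 100))   # False
```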
> 
> 
> Regards
> sachin
> 
Regards,
Sergei
Chief Architect MariaDB
and secur...@mariadb.org
