Re: purge thread

2014-09-04 Thread Akshay Suryavanshi
Hi Geetanjali,

A word of caution with this setting: it can block the whole server if the
purge thread is delayed too much. Also look into other causes, such as I/O
saturation or disk problems, for why the purge thread is not able to keep
up with the backlog. If I/O is not the issue, then a very old read view may
be blocking the purge thread, which is why long-running transactions are
not recommended.

We have seen situations where a bad disk caused a huge history list length.
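For reference, the backlog in question shows up as the "History list length" line of the status output; a quick way to watch it is below (the second query assumes MySQL 5.6, where the counter is also exposed as a metrics table):

```sql
SHOW ENGINE INNODB STATUS\G
-- in the TRANSACTIONS section, look for a line like:
--   History list length 1234567

-- MySQL 5.6+ exposes the same counter in a table:
SELECT name, `count`
FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';
```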

Cheers!!!
Akshay


On Tue, Sep 2, 2014 at 12:27 PM, geetanjali mehra <
mailtogeetanj...@gmail.com> wrote:

> Could you please shed some more light on it?
>
> Best Regards,
> Geetanjali Mehra
> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> Specialist
>
>
>
> On Thu, Aug 28, 2014 at 2:09 AM, shawn l.green 
> wrote:
>
> > Hello Geetanjali,
> >
> >
> > On 8/26/2014 1:16 AM, geetanjali mehra wrote:
> >
> >> I want to understand how to tune the innodb_max_purge_lag
> >> <http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_max_purge_lag>
> >> parameter when history list length is high.
> >>
> >>
> >> Could anyone explain this to me?
> >>
> >>
> >> Best Regards,
> >> Geetanjali Mehra
> >> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> >> Specialist
> >>
> >>
> > This setting is not related to the history value. It is there to help
> > control the gap between these two values of the InnoDB status report:
> >
> > Trx id counter 0 290328385
> > Purge done for trx's n:o < 0 290315608 undo n:o < 0 17
> >
> > It compares the current transaction with the oldest transaction that
> > still has aged copies of data left in the data area of the tablespace.
> > The difference between those two values is the "purge backlog".
> >
> > Yours,
> > --
> > Shawn Green
> > MySQL Senior Principal Technical Support Engineer
> > Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
> > Office: Blountville, TN
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe: http://lists.mysql.com/mysql
> >
> >
>


Re: next-key lock

2014-09-04 Thread Akshay Suryavanshi
Hi Geetanjali,

I retried the scenario you mentioned, but I am getting consistent locking
results with both unique and non-unique indexes: the preceding records get
updated, and only the next record is locked ("next-key locking"). If I try
to insert a new record after the "next key", it is inserted successfully.

I wonder how your situation differs. All I can say for now is that InnoDB
does not store records in key order within a page, so I am unsure whether
20 really falls just after the gap. Also, did you delete the records from
7 through 19, or were they simply never inserted into your test table? In
the latter case there is no gap to speak of: to the index, 6 and 20 are
just adjacent values, which may sit next to each other in the InnoDB page.
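For what it's worth, the kind of two-session test I ran looks like this, as a sketch (the table `t` and its values are made up for illustration):

```sql
CREATE TABLE t (id INT NOT NULL, KEY idx_id (id)) ENGINE=InnoDB;
INSERT INTO t VALUES (6), (20), (21);

-- session 1: with idx_id used, this takes a next-key lock covering
-- the gap between 6 and 20, plus the index record 20 itself
BEGIN;
SELECT * FROM t WHERE id BETWEEN 9 AND 15 FOR UPDATE;

-- session 2:
INSERT INTO t VALUES (19);  -- blocks: 19 falls inside the locked gap
INSERT INTO t VALUES (21);  -- succeeds: past the "next key" record 20
```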

There are multiple blogs by experts on how the locking internals work:

http://dom.as/2011/07/03/innodb-index-lock/

https://www.facebook.com/note.php?note_id=479123255932

https://blogs.oracle.com/mysqlinnodb/entry/introduction_to_transaction_locks_in

http://dev.mysql.com/doc/internals/en/innodb-user-records.html

"In the User Records part of a page, you'll find all the records that the
user inserted.

There are two ways to navigate through the user records, depending whether
you want to think of their organization as an unordered or an ordered list.

An unordered list is often called a "heap". If you make a pile of stones by
saying "whichever one I happen to pick up next will go on top" — rather
than organizing them according to size and colour — then you end up with a
heap. Similarly, InnoDB does not want to insert new rows according to the
B-tree's key order (that would involve expensive shifting of large amounts
of data), so it inserts new rows right after the end of the existing rows
(at the top of the Free Space part) or wherever there's space left by a
deleted row.
But by definition the records of a B-tree must be accessible in order by
key value, so there is a record pointer in each record (the "next" field in
the Extra Bytes) which points to the next record in key order. In other
words, the records are a one-way linked list. So InnoDB can access rows in
key order when searching."

Hope this helps.

Cheers!!!
Akshay


On Thu, Sep 4, 2014 at 9:48 AM, geetanjali mehra  wrote:

> Dear Akshay,
>
>
> Thanks for your reply, you are really great. Now, one more confusion.
>
>
> mysql> create table test.new as select id,name from City;
>
> Query OK, 4079 rows affected (0.18 sec)
>
> Records: 4079  Duplicates: 0  Warnings: 0
>
>
>
> mysql> select * from new limit 15;
>
> ++---+
>
> | id | name  |
>
> ++---+
>
> |  1 | Kabul |
>
> |  2 | Qandahar  |
>
> |  3 | Herat |
>
> |  4 | Mazar-e-Sharif|
>
> |  5 | Amsterdam |
>
> |  6 | Rotterdam |
>
> | 11 | hhh   |
>
> | 20 | ´s-Hertogenbosch  |
>
> | 21 | Amersfoort|
>
> | 22 | Maastricht|
>
> | 23 | Dordrecht |
>
> | 24 | Leiden|
>
> | 25 | Haarlemmermeer|
>
> | 26 | Zoetermeer|
>
> | 27 | Emmen |
>
> | 28 | Zwolle|
>
> Now,
>
> *Session 1*
>
> *Session 2*
>
>
>
> mysql> begin;
>
> Query OK, 0 rows affected (0.01 sec)
>
>
>
> mysql> select * from new where id between 9 and 15 for update;
>
> ++--+
>
> | id | name |
>
> ++--+
>
> | 11 | hhh  |
>
> ++--+
>
> 1 row in set (0.00 sec)
>
>
>
>
>
>
>
>
>
> mysql> begin;
>
> Query OK, 0 rows affected (0.00 sec)
>
>
>
> mysql> insert into new values(17,'fff');
>
> (session is hanging)
>
> mysql> insert into new values(18,'fff');
>
> (session is hanging)
>
> mysql> insert into new values(19,'fff');
>
> (session is hanging).
>
>
>
>
>
> mysql> insert into new values(20,'fff');
>
> Query OK, 1 row affected (0.00 sec)
>
>
>
> mysql> insert into new values(21,'fff');
>
> Query OK, 1 row affected (0.01 sec)
>
>
>
> mysql> insert into new values(8,'fff');
>
> (session hang)
>
> mysql>
>
> mysql> insert into new values(7,'fff');
>
> (session hang)
>
> mysql> insert into new values(6,'fff');
>
> (session hang)
>
> mysql> insert into new values(5,'ggg');
>
> Query OK, 1 row affected (0.00 sec)
>
>
>
>
>
> I tried the above scenario with an index and without an index. Without an
> index it shows the same behaviour as before. Using non-unique inde

Re: next-key lock

2014-09-03 Thread Akshay Suryavanshi
Can you try the same on a big table? I think the optimizer is choosing a
full table scan over an index lookup.


On Wed, Sep 3, 2014 at 8:25 PM, geetanjali mehra  wrote:

> This is what I am doing.
>
> mysql> select * from new;
> +--+--+
> | c1   | c2   |
> +--+--+
> |5 |5 |
> |   10 |   10 |
> |   15 |   15 |
> |   20 |   20 |
> |   30 |   30 |
> +--+--+
>
>
> Now,
>
> Session 1
>
>
> mysql> begin;
>
> Query OK, 0 rows affected (0.00 sec)
>
>
>
> mysql> select * from new where c1 between 10 and 25 for update;
>
> +--+--+
>
> | c1   | c2   |
>
> +--+--+
>
> |   10 |   10 |
>
> |   15 |   15 |
>
> |   20 |   20 |
>
> +--+--+
>
> 3 rows in set (0.00 sec)
>
>
>
>
>
>
>
>
>
> mysql>begin;
>
> mysql> insert into new values(29,29);
>
> (session is hanging)
>
> mysql> insert into new values(31,31);
>
> (session is hanging
>
>
> The last value in my table is 30. Still, it is not allowing me to insert
> 31.
>
> I tried the scenario without an index on column c1, then with a non-unique
> index on column c1, and then with a unique index. I am getting the same
> result. It seems that it is taking a lock on the complete table.
>
> The scenario works fine only when I make c1 the primary key. After making
> c1 the primary key, I am able to insert values higher than 30.
>
> Can you please try the same scenario at your end?
>
>
>
>
> Best Regards,
> Geetanjali Mehra
> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> Specialist
>
>
>
> On Tue, Sep 2, 2014 at 1:53 PM, Akshay Suryavanshi <
> akshay.suryavansh...@gmail.com> wrote:
>
>> Hello Geetanjali,
>>
>> Apologies if I have confused you with the "normal Select" notation. I
>> meant to write with repeatable-read mode in mind, but looks like that is
>> not an issue, since you already tested this scenario with that isolation
>> mode.
>>
>> Moving further to the original issue.
>>
>> Do you have an index on column c1. Is the query "Select * from new where
>> c1 between 12 and 17 for update;" using index ? If there is no index on
>> the particular column, then InnoDB locks out whole table from other
>> transactions. Which is the case you mentioned. Also this can be dangerous.
>> Once you have indexed the column checkout the innodb status, you will see
>> the necessary locking. Also try updating values beyond the boundary values.
>>
>> So most important fact to know here is the involvement of secondary
>> indexes to introduce record locking, gap locking, and how their absence
>> will affect the transaction. As to "Why" this is happening ? It should be
>> understood that in InnoDB secondary keys are appended to PRIMARY index, so
>> if there is no index to search the records PRIMARY index values cannot be
>> filtered. In absence of secondary indexes a full scan is needed. And
>> finally Innodb table is one big Index (Clustered table).
>>
>> *By default, InnoDB operates in REPEATABLE READ
>> <http://dev.mysql.com/doc/refman/5.6/en/set-transaction.html#isolevel_repeatable-read>
>>  transaction
>> isolation level and with theinnodb_locks_unsafe_for_binlog
>> <http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_locks_unsafe_for_binlog>
>>  system
>> variable disabled. In this case, InnoDB uses next-key locks for searches
>> and index scans, which prevents phantom rows.*
>>
>> So Index scan above is infact a Full-table-scan (full index scan)
>>
>> Please try it out and let me know if you observe any difference.
>>
>> Cheers!!!
>>  Akshay
>>
>>
>> On Tue, Sep 2, 2014 at 11:59 AM, geetanjali mehra <
>> mailtogeetanj...@gmail.com> wrote:
>>
>>> Dear Akshay,
>>>
>>> ASFIK, normal selects are always non-locking read and they do not put any
>>> locks.
>>> Select..., Select..where..,Select where..between
>>>
>>> Does above select statement will use next-key locking and/or gap locking?
>>> I dont think so.
>>>
>>> Please correct me if I am wrong.
>>>
>>>
>>> Best Regards,
>>> Geetanjali Mehra
>>> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
>>> Specialist
>>>
>>>
>>>
>>> On Fri, Aug 29, 2014 at 11:46 AM, Akshay Suryavanshi <
>>> akshay.suryavansh...@gmail.com> wrote:
>>>
>>> > G

Re: next-key lock

2014-09-02 Thread Akshay Suryavanshi
Hello Geetanjali,

Apologies if I have confused you with the "normal Select" notation. I meant
to write with repeatable-read mode in mind, but it looks like that is not an
issue, since you already tested this scenario under that isolation mode.

Moving further to the original issue.

Do you have an index on column c1? Is the query "Select * from new where c1
between 12 and 17 for update;" using an index? If there is no index on that
column, then InnoDB locks the whole table against other transactions, which
is the case you mentioned; it can also be dangerous. Once you have indexed
the column, check the InnoDB status and you will see the expected locking.
Also try updating values beyond the boundary values.

The most important fact to understand here is how secondary indexes
introduce record locking and gap locking, and how their absence affects the
transaction. As to why this happens: in InnoDB, the PRIMARY key values are
appended to every secondary index record, so if there is no secondary index
to search on, the PRIMARY index values cannot be filtered and a full scan is
needed. And finally, an InnoDB table is itself one big index (a clustered
table).

*By default, InnoDB operates in REPEATABLE READ
<http://dev.mysql.com/doc/refman/5.6/en/set-transaction.html#isolevel_repeatable-read>
transaction isolation level and with the innodb_locks_unsafe_for_binlog
<http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_locks_unsafe_for_binlog>
system variable disabled. In this case, InnoDB uses next-key locks for
searches and index scans, which prevents phantom rows.*

So the "index scan" above is in fact a full table scan (a full scan of the
clustered index).
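To verify which plan the locking SELECT gets, you can EXPLAIN it before and after adding the index (a sketch; the index name idx_c1 is made up):

```sql
EXPLAIN SELECT * FROM new WHERE c1 BETWEEN 12 AND 17 FOR UPDATE;
-- type: ALL  ->  full table scan: every record gets next-key locked

ALTER TABLE new ADD INDEX idx_c1 (c1);

EXPLAIN SELECT * FROM new WHERE c1 BETWEEN 12 AND 17 FOR UPDATE;
-- type: range, key: idx_c1  ->  only the scanned range is locked
```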

Please try it out and let me know if you observe any difference.

Cheers!!!
Akshay


On Tue, Sep 2, 2014 at 11:59 AM, geetanjali mehra <
mailtogeetanj...@gmail.com> wrote:

> Dear Akshay,
>
> AFAIK, normal selects are always non-locking reads and they do not take
> any locks:
> Select..., Select..where.., Select..where..between
>
> Will the above select statements use next-key locking and/or gap locking?
> I don't think so.
>
> Please correct me if I am wrong.
>
>
> Best Regards,
> Geetanjali Mehra
> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> Specialist
>
>
>
> On Fri, Aug 29, 2014 at 11:46 AM, Akshay Suryavanshi <
> akshay.suryavansh...@gmail.com> wrote:
>
> > Geetanjali,
> >
> > There is a difference between next-key locking, gap locking and locking
> > reads.
> >
> > Next-key locking and gap-locking are used with normal Selects statement
> in
> > Innodb, whereas locking reads wont release a lock on the whole column
> until
> > transaction completed, and not just selected values.
> >
> > May be you can try your example with SELECT... LOCK IN SHARE MODE;
> >
> > Cheers!!!
> > Akshay Suryawanshi
> >
> >
> > On Fri, Aug 29, 2014 at 11:22 AM, geetanjali mehra <
> > mailtogeetanj...@gmail.com> wrote:
> >
> >> Thanks for your reply.
> >>
> >> I read those docs. Still my doubt is at the same stage.  Please clarify
> >> the
> >> same to me.
> >>  Should not other sessions be allowed to insert the rows beyond that
> >> range.?
> >>
> >> As far as I understand, Innodb brought the concept of next-key locks so
> as
> >> to prevent phantom problem.   So, it is clear to me that issuing the
> below
> >> query
> >>
> >> Select * from new where c1 between 12 and 17 for update;
> >>
> >> will not allow other sessions to insert any value between 12 and 17.
> >>
> >> But if i am trying to insert 20 from other session, it is not allowed.
> Why
> >> this is so? The session is hanging.
> >>
> >> Best Regards,
> >> Geetanjali Mehra
> >> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> >> Specialist
> >>
> >>
> >>
> >> On Thu, Aug 28, 2014 at 2:26 AM, shawn l.green <
> shawn.l.gr...@oracle.com>
> >> wrote:
> >>
> >> >
> >> >
> >> > On 8/26/2014 1:12 AM, geetanjali mehra wrote:
> >> >
> >> >> Hello to all,
> >> >> In repeatable read isolation level, when we issue:
> >> >>
> >> >> Select * from new where c1 between 12 and 17 for update;
> >> >>
> >> >> this range will be locked by innodb by using next-key locks.
> >> >>
> >> >> But, why is it preventing any other session to insert any value
>

Re: next-key lock

2014-08-28 Thread Akshay Suryavanshi
Geetanjali,

There is a difference between next-key locking, gap locking and locking
reads.

Next-key locks and gap locks describe what gets locked (an index record
plus the gap before it), while a locking read will not release its locks
until the transaction completes, and those locks cover the whole scanned
index range, not just the selected values.

Maybe you can try your example with SELECT... LOCK IN SHARE MODE;
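That is, a sketch of the shared-lock variant:

```sql
BEGIN;
-- shared (S) next-key locks: other sessions may still read the range,
-- but cannot insert into it or modify the matched rows
SELECT * FROM new WHERE c1 BETWEEN 12 AND 17 LOCK IN SHARE MODE;
COMMIT;  -- the locks are held until here
```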

Cheers!!!
Akshay Suryawanshi


On Fri, Aug 29, 2014 at 11:22 AM, geetanjali mehra <
mailtogeetanj...@gmail.com> wrote:

> Thanks for your reply.
>
> I read those docs, but my doubt remains at the same stage. Please clarify
> it for me:
> should not other sessions be allowed to insert rows beyond that
> range?
>
> As far as I understand, Innodb brought in the concept of next-key locks so
> as to prevent the phantom problem. So, it is clear to me that issuing the below
> query
>
> Select * from new where c1 between 12 and 17 for update;
>
> will not allow other sessions to insert any value between 12 and 17.
>
> But if I try to insert 20 from another session, it is not allowed. Why is
> this so? The session just hangs.
>
> Best Regards,
> Geetanjali Mehra
> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> Specialist
>
>
>
> On Thu, Aug 28, 2014 at 2:26 AM, shawn l.green 
> wrote:
>
> >
> >
> > On 8/26/2014 1:12 AM, geetanjali mehra wrote:
> >
> >> Hello to all,
> >> In repeatable read isolation level, when we issue:
> >>
> >> Select * from new where c1 between 12 and 17 for update;
> >>
> >> this range will be locked by innodb by using next-key locks.
> >>
> >> But, why is it preventing any other session to insert any value beyond
> >> that
> >> range; any value above the range and any value below the range. I am
> >> unable
> >> to understand this.
> >>
> >
> > I believe you are confusing gap locking (the space between the values)
> and
> > next-key locking (the space after the range).
> >
> > http://dev.mysql.com/doc/refman/5.6/en/innodb-record-level-locks.html
> >
> > See also:
> > http://dev.mysql.com/doc/refman/5.6/en/innodb-next-key-locking.html
> > http://dev.mysql.com/doc/refman/5.6/en/innodb-locks-set.html
> >
> >
> >
> >> Best Regards,
> >> Geetanjali Mehra
> >> Senior Oracle and MySQL DBA Corporate Trainer and Database Security
> >> Specialist
> >>
> >>
> > Yours,
> > --
> > Shawn Green
> > MySQL Senior Principal Technical Support Engineer
> > Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
> > Office: Blountville, TN
> >
> >
> >
>


Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-15 Thread Akshay Suryavanshi
Harald,

I somewhat disagree with your statement that a mysqldump backup is way
bigger than the actual dataset just because it is plain-text SQL. What I
can tell you is that mysqldump files can be significantly smaller than the
total dataset size, because they do not contain "index data". So, if out of
400G, 100G is index data, then the dump file should be about 300G.

I hope you agree...

Cheers!

On Sat, Feb 16, 2013 at 4:24 AM, Reindl Harald wrote:

> "our database is 400 GB, mysqldump is 600MB" was not a typo and you
> honestly believed that you could import this dump to something?
>
> WTF - as admin you should be able to see whether the things in front
> of you are theoretically possible before you start any action,
> and 1:400 is impossible, especially because mysql dumps are
> ALWAYS WAY LARGER than the databases because they contain
> sql statements and not only data
>
> Am 15.02.2013 23:37, schrieb Mike Franon:
> > Your right I am going to run another mysqldump, maybe something
> > happened and pick this up next week..
> >
> > Thanks all.
> >
> > On Fri, Feb 15, 2013 at 5:03 PM, Keith Murphy 
> wrote:
> >> Something doesn't add up. If the data set is 400 GB then your dump has
> >> to be bigger than 600 MB. That is better than a 400:1 ratio. Maybe the
> >> dump isn't working correctly, or your data set is much smaller? If the
> >> dump output is less than a gig, I would just edit it with something like
> >> vi and look at the offending line.
> >>
> >> Keith
> >>
> >> On Feb 15, 2013 3:55 PM, "Mike Franon"  wrote:
> >>>
> >>> I am having a real hard time upgrading just from 5.0.96 to 5.1
> >>>
> >>> I did a full mysqldump and then restore the database, keep in mind our
> >>> database is 400 GB, mysqldump is 600MB file, about 30 minutes into the
> >>> restore get this error on one table on an insert:
> >>>
> >>> ERROR 1064 (42000) at line 1388: You have an error in your SQL syntax;
> >>> check the manual that corresponds to your MySQL server version for the
> >>> right syntax to use near ''2010-04-10 20' at line 1
> >>>
> >>> It's weird because if I upgrade 5.1 right over 5.0 without doing a
> >>> mysqldump, and then do a mysqlcheck, it works except for 5 tables and
> >>> triggers, so I am trying to think of the best way to get to 5.1
> >>>
> >>> On Fri, Feb 15, 2013 at 12:39 PM, Keith Murphy  >
> >>> wrote:
> >>>> While it might be GA I would not recommend that you deploy it for a
> >>>> while.
> >>>> ... at least several point releases. There will be new bugs uncovered
> as
> >>>> it
> >>>> moves out to a wider audience.
> >>>>
> >>>> Upgrade to 5.5 (through 5.1) first as it is quite proven. Slave 5.6
> off
> >>>> it
> >>>> and test. Be patient. Save yourself some heartache. Just my two cents.
> >>>>
> >>>> Keith
> >>>>
> >>>> On Feb 15, 2013 9:27 AM, "Mike Franon"  wrote:
> >>>>>
> >>>>> Thanks everyone for suggestions.
> >>>>>
> >>>>> I am doing this on a test box  with a copy of our db before doing
> this
> >>>>> on production db servers.
> >>>>>
> >>>>> I just upgraded from 5.0 to 5.1, and ran mysql_upgrade
> >>>>>
> >>>>> and see I have a few tables with the following error:
> >>>>>
> >>>>> error: Table upgrade required. Please do "REPAIR TABLE
> >>>>> `tablename`" or dump/reload to fix it!
> >>>>>
> >>>>> I got this on 4 tables so far, but it still checking, my database is
> >>>>> huge so might be a while.
> >>>>>
> >>>>> The question I have what is the best way to fix this?
> >>>>>
> >>>>> To install all I did was remove all of the 5.0, and then did a yum
> >>>>> install 5.1 on my AWS machine.  and then just started mysql.
> >>>>>
> >>>>> Should I instead do a complete mysqldump, and use that instead?
> >>>>>
> >>>>> On Thu, Feb 14, 2013 at 7:40 PM, Rick James 
> >>>>> wrote:
> >>>>>> Sounds like something that, once discovered, can be fixed in the old
> >>>>>> version
> &

Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Akshay Suryavanshi
Mike,

5.6 is GA now, so it is a stable release. Still, you should not jump to 5.6
directly, at least not from 5.0.

There are many bug fixes and changes in 5.1, so you should upgrade in
stages:

5.0 --> 5.1 --> 5.5 (all slaves first, and then the master)

And further, 5.5 --> 5.6 (again, all slaves first and then the master)
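Per hop, the procedure is roughly the following (a sketch; package and installation steps vary by platform):

```
# on each slave, then finally the master, once per hop
# (5.0 -> 5.1, then 5.1 -> 5.5, and later 5.5 -> 5.6):
#   1. stop mysqld, install the next version's binaries, start mysqld
#   2. repair the system tables for the new version:
mysql_upgrade -u root -p
#   3. recheck the user tables:
mysqlcheck -u root -p --all-databases --check-upgrade
```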

Hope this helps.

Cheers!

On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon  wrote:

> I have 1 master with many slaves, using the master only for inserts
> and the rest are readers.
>
>
> Is 5.6 stable?  Or better off to go to 5.5?
>
> If so do I need to make a few steps or can go straight from 5.0 to 5.6?
>
>
> Any best practices and recommendations?
>
> Thanks
>
>
>


Re: Relay log Question

2013-01-08 Thread Akshay Suryavanshi
Also, you may want to check whether a new file really gets created exactly
every hour, and whether any cron'd script runs "flush logs" on the slave
server. That will also rotate the relay log.

Cheers

On Wed, Jan 9, 2013 at 1:35 AM, Akshay Suryavanshi <
akshay.suryavansh...@gmail.com> wrote:

> Hi,
>
> Please re-phrase your question. The relay logs are created as and when
> required by the Slave_SQL thread. Once all the events in the relay logs are
> executed the relay log would be purged by the Slave_SQL thread.
>
> By setting relay_log_purge=0 you are disabling this automatic purge
> option. So the new relay log files will be created however the older ones
> would not be deleted.
>
> Creation or rotation of relay or binary logs is not time-based, for some
> situations it might create in one hour, however for many others it might
> create after 1 day, week, or even months.
>
> Usually the relay logs in your case should be of definite size, check out
> this setting "max_relay_log_size" and "max_binlog_size", the latter would
> come into picture if the prior one is disabled.
>
> Thanks,
> Akshay S
>
>
> On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler 
> wrote:
>
>> Hello List,
>>
>> Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
>> my slave is creating a new relay log. I added this:
>>
>> relay_log_purge = 0
>>
>> to the my.cnf and it did not solve the issue.
>>
>> What am I missing? Never seen this issue before.
>>
>> Thanks,
>>
>> Walter
>>
>> "Courage is being scared to death, but saddling up anyway." --John Wayne
>>
>>
>


Re: Relay log Question

2013-01-08 Thread Akshay Suryavanshi
Hi,

Please re-phrase your question. The relay logs are created as and when
required by the Slave_SQL thread. Once all the events in the relay logs are
executed the relay log would be purged by the Slave_SQL thread.

By setting relay_log_purge=0 you are disabling this automatic purge option.
So the new relay log files will be created however the older ones would not
be deleted.

Creation or rotation of relay or binary logs is not time-based, for some
situations it might create in one hour, however for many others it might
create after 1 day, week, or even months.

Usually the relay logs in your case should be of a definite size; check the
settings "max_relay_log_size" and "max_binlog_size". The latter comes into
play only if the former is disabled (set to 0).
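For example, the relevant my.cnf lines might look like this (the values are illustrative, not recommendations):

```ini
[mysqld]
# cap each relay log at 256M; if 0 or unset, max_binlog_size applies instead
max_relay_log_size = 256M
# keep relay logs after the SQL thread has executed them (manual cleanup!)
relay_log_purge = 0
```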

Thanks,
Akshay S

On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler wrote:

> Hello List,
>
> Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
> my slave is creating a new relay log. I added this:
>
> relay_log_purge = 0
>
> to the my.cnf and it did not solve the issue.
>
> What am I missing? Never seen this issue before.
>
> Thanks,
>
> Walter
>
> "Courage is being scared to death, but saddling up anyway." --John Wayne
>
>


Re: using LIMIT without ORDER BY

2012-12-13 Thread Akshay Suryavanshi
Well Johan,

I was referring to a condition when there is no index on the tables, not
even primary keys. Your explanation makes complete sense about the
optimizer and the pagination queries.

Thanks,
Akshay S
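Johan's closing suggestion below (keyset pagination) would look like this in practice, assuming `id` is the primary key of `t`:

```sql
-- first page
SELECT * FROM t ORDER BY id LIMIT 10;
-- remember the last id of that page (say 42), then resume from it;
-- unlike LIMIT 40,10 this never rescans and discards the skipped rows
SELECT * FROM t WHERE id > 42 ORDER BY id LIMIT 10;
```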

On Thu, Dec 13, 2012 at 2:34 PM, Johan De Meersman wrote:

>
>
> - Original Message -
> > From: "Akshay Suryavanshi" 
> >
> > I am not sure, but if its a MyISAM table, it should be ordered by the
> > records insertion order, and in case of InnoDB it should be ordered
> > by the clustered index, not necessarily it should be a defined one.
>
> No.
>
> The optimizer may choose to do a full table scan, or it may choose to use
> an index scan. That decision may change due to changes in the data, or
> because the next version of mysql you upgrade to has different (and
> hopefully better...) alghorithms, et cetera.
>
> The ONLY way to ensure consecutive queries return your data in the same
> order, is specifying an order by clause.
>
> Apart from that, I personally prefer to avoid the limit 0,10 /limit 11/20
> technique, because a) rows might have gotten inserted and/or deleted, and
> b) limit is applied to the full resultset.
>
> Instead, order by the PK (or another unique index or combination of
> indices), remember the last record's value(s) and use that as starting
> point for your next query.
>
>
>
> --
> Unhappiness is discouraged and will be corrected with kitten pictures.
>


Re: using LIMIT without ORDER BY

2012-12-12 Thread Akshay Suryavanshi
I am not sure, but if it is a MyISAM table, it should be ordered by the
records' insertion order, and in the case of InnoDB it should be ordered by
the clustered index, which is not necessarily an explicitly defined one.


On Thu, Dec 13, 2012 at 12:58 PM, jiangwen jiang wrote:

> hi, all:
>
> There's a confusion. I want to get all the data in table t by pages, using
> Limit SQL without ORDER BY:
> SELECT * FROM t Limit 0,10
> SELECT * FROM t Limit 10, 10
> ...
>
> Is it right without ORDER BY?
> Is there any default order in table t, to make sure I can get all data in
> the table?
>
> Thanks
> Regards!
>
> White
>


Re: query running very slow, need a little help

2012-12-04 Thread Akshay Suryavanshi
Hi,

A subquery inside an IN clause is not a good idea. If you want to tune this
query, try adding an index on the table accessed in the inner query,
"credits". A composite index on (success, promoter_id) would be sufficient:
the optimizer will use it for the WHERE clause and as a covering index for
the column "promoter_id".

This should improve performance by orders of magnitude.

Still, we would recommend turning this subquery into a join, which will
perform faster even when many records are retrieved from the inner query.
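The join rewrite would look roughly like this, against the promoters/credits schema quoted below (the index name is made up):

```sql
ALTER TABLE credits ADD INDEX idx_success_promoter (success, promoter_id);

SELECT p.email
FROM promoters AS p
LEFT JOIN credits AS c
       ON c.promoter_id = p.id AND c.success = 1
WHERE p.active = 1
  AND c.promoter_id IS NULL;   -- i.e. no successful credit exists
```

Note that NOT IN also returns an empty set whenever the subquery yields a NULL promoter_id (the column is nullable here), which the LEFT JOIN form avoids; that alone may explain the empty result.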

Hope this helps, let us know the results.

Regards,
Akshay S

On Wed, Dec 5, 2012 at 10:24 AM, Paul Nowosielski  wrote:

> Hi,
>
> I'm running this query:
>
> mysql> SELECT email FROM promoters where id NOT IN (SELECT promoter_id
> FROM credits WHERE success = 1 ) and active = 1;
> Empty set (31.89 sec)
>
> its returning an empty set and take over 30 seconds to return.
>
>  mysql> describe promoters;
> +---+-**-+--+-**
> +---+-**---+
> | Field | Type | Null | Key | Default
>   | Extra  |
> +---+-**-+--+-**
> +---+-**---+
> | id| int(11) unsigned | NO   | PRI | NULL
>  | auto_increment |
> | company_name  | varchar(40)  | YES  | | NULL
>  ||
> | first_name| varchar(40)  | YES  | | NULL
>  ||
> | last_name | varchar(40)  | YES  | | NULL
>  ||
> | address   | varchar(40)  | YES  | | NULL
>  ||
> | zip   | varchar(10)  | YES  | | NULL
>  ||
> | city  | varchar(40)  | YES  | | NULL
>  ||
> | country   | varchar(40)  | YES  | | NULL
>  ||
> | phone | varchar(20)  | YES  | | NULL
>  ||
> | email | varchar(100) | YES  | UNI | NULL
>  ||
> | website   | varchar(100) | YES  | | NULL
>  ||
> | payments_id   | varchar(10)  | YES  | MUL | NULL
>  ||
> | password  | varchar(100) | YES  | | NULL
>  ||
> | active| tinyint(1)   | YES  | MUL | NULL
>  ||
> | activation_key| varchar(50)  | YES  | | NULL
>  ||
> | new_email | varchar(100) | YES  | | NULL
>  ||
> | new_email_activation_key  | varchar(50)  | YES  | | NULL
>  ||
> | registered| timestamp| YES  | |
> CURRENT_TIMESTAMP ||
> | referral  | int(10) unsigned | YES  | | NULL
>  ||
> | whitelabel_beginner_modus | tinyint(1)   | YES  | | 1
>   ||
> +---+-**-+--+-**
> +---+-**---+
> 20 rows in set (0.00 sec)
>
> mysql> describe credits;
> ++**-+--+-+---**
> ++
> | Field  | Type| Null | Key | Default   |
> Extra  |
> ++**-+--+-+---**
> ++
> | id | int(11) unsigned| NO   | PRI | NULL  |
> auto_increment |
> | type   | tinyint(1) unsigned | NO   | | NULL  |
>|
> | credits| int(11) | YES  | | NULL  |
>|
> | success| tinyint(1)  | YES  | MUL | NULL  |
>|
> | profit | float   | NO   | | NULL  |
>|
> | price  | float   | NO   | | NULL  |
>|
> | date   | timestamp   | NO   | MUL | CURRENT_TIMESTAMP |
>|
> | user_id| int(11) unsigned| NO   | | NULL  |
>|
> | promoter_id| int(10) unsigned| YES  | MUL | NULL  |
>|
> | referrer   | varchar(10) | YES  | | NULL  |
>|
> | domain_id  | int(11) unsigned| NO   | | NULL  |
>|
> | string | varchar(100)| YES  | | NULL  |
>|
> | client_info| varchar(200)| YES  | | NULL  |
>|
> | promoter_paid  | tinyint(1)  | YES  | | NULL  |
>|
> | status | tinyint(4)  | 

Re: Table crashed error

2012-10-18 Thread Akshay Suryavanshi
Hi,

This table can be repaired by running REPAIR TABLE `tbl_name`; from the
mysql client (substitute the table name from the error message).

This should fix the corrupted index file. Alternatively, if mysqld is shut
down and the table uses MyISAM, you can run myisamchk on it.
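A minimal sketch of the online repair path, with `x`/`y` from the error message standing in for the real database and table names:

```sql
-- while mysqld is running, from the mysql client:
CHECK TABLE x.y;    -- confirm the corruption first
REPAIR TABLE x.y;   -- rebuilds the MyISAM index file
```

If mysqld is stopped, the offline equivalent for a MyISAM table is `myisamchk --recover /path/to/datadir/x/y.MYI` (the path is an assumption; adjust to your datadir), ideally after backing up the .MYD/.MYI files.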

Hope this helps.

Regards,
Akshay S

On Fri, Oct 19, 2012 at 11:52 AM, a bv  wrote:

> Hi on a log management server from the web interface i get the below
> error . What can be the reason and how can i repair it ?
>
> Regards
>
>
> Mysql_error: Table ./x/y ' is marked as crashed and should be repaired
>
>
>


Re: innodb_lock_wait_timeout

2012-10-11 Thread Akshay Suryavanshi
Hi,

Check the transactions that are causing the locks. Use SHOW ENGINE INNODB
STATUS \G to find out which transactions have been holding locks for so
long. In the scenario you mentioned (using InnoDB at a simple level), you
might be in a situation where SELECTs are causing the issue.

It may seem strange that SELECTs would hold such locks, but a SELECT inside
a transaction with tx_isolation set to "Repeatable-read" (the default)
performs consistent reads, and the locks acquired by those SELECTs inside
transactions are not released until the transaction ends.

Find out whether such SELECTs exist; if so, switch to the less strict
isolation level "read-committed". This will make the SELECTs release their
locks as soon as they finish.

Increasing the timeout value only makes the session wait longer before it
dies, so it is not recommended. Retrying transactions should not be needed,
since this can be solved at the DB level. Also, if at all possible or
desired, set tx_isolation at the session level before starting any
transaction.
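
As a sketch of the per-session approach suggested above (the variable name
is the MySQL 5.x-era one; table and column names are hypothetical):

```sql
-- Lower the isolation level for this session only:
SET SESSION tx_isolation = 'READ-COMMITTED';

START TRANSACTION;
SELECT balance FROM accounts WHERE id = 42;
-- ... application work ...
COMMIT;  -- read view and any locks are released here
```

Setting it per session avoids changing the server-wide default for
transactions that genuinely need REPEATABLE READ.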

Hope this helps.

Thanks,
Akshay S

On Thu, Oct 11, 2012 at 7:35 PM, Andrés Tello  wrote:

> Are you managing transactions with mysql + innodb?
>
> I had a similar issue, and I need to rework the application.
>
> innodb does a row level transaction lock. Read locks aren't exclusive,
> update locks are exclusive.
>
> What I was doing was something like this:
>
> Thread P1:
> begin;
> innodb: update field set field = field + K where filter = "A";
> (other transactions)
>
> In a parallel thread P2, the same execution:
> begin;
> innodb: update field set field = field + K where filter = "A";
> (other transactions)
>
>
> P2-> commit;
> P1-> commit;
> Since I was trying to update the same field, with a self reference, from
> two separate threads, I had a lock issue, because in the end it was a
> non-deterministic situation: what was the field value at the end that I
> wanted to update? Which version?
>
> The command
> show engine innodb status;
>
> will provide information about the last wait lock / dead lock, and that
> will give you an Idea of what sql is making the fuss...
>
>
> My current lock wait ratio after code rework:
>
> Current Lock Wait ratio = 1 : 1110458921
>
> (using tuning-primer.sh)
>
> Why is it better to retry? Because, since "things happen almost at
> random", your next retry may not encounter the issue.
>
> But neither retrying nor increasing the lock wait time will solve the
> issue, and the more load you have, the more often this trouble will
> arise...
>
>
>
>
>
>
> On Thu, Oct 11, 2012 at 7:43 AM, Johan De Meersman  >wrote:
>
> >
> > - Original Message -
> > > From: "Markus Falb" 
> > >
> > > But why is retrying better than raising the value?
> > >
> > > So what is better, adjusting the timeout or retrying application side
> > > and why?
> >
> > Well, raising the timeout would probably help, but may cause more
> > concurrent connections to stay open until they block the server.
> >
> > The idea of retrying is that everything you've done before that
> particular
> > transaction has already been committed, so you (probably) can't easily
> undo
> > it. A retry gives you a chance to still finish what you were doing with
> no
> > other loss than some time waiting. Regardless of how many retries fail,
> you
> > will still be in the same position as you were when the first attempt
> > failed.
> >
> >
> >
> > --
> > Linux Bier Wanderung 2012, now also available in Belgium!
> > August, 12 to 19, Diksmuide, Belgium - http://lbw2012.tuxera.be
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/mysql
> >
> >
>


Re: checking progress of alter table on an InnoDB table (Was: Re: checking progress of alter table on a MyISAM table)

2012-09-27 Thread Akshay Suryavanshi
Hi,

22 indexes are simply too many, assuming they are not composite, which also
means you have a very large table. Secondly, the most important bottleneck
is shown by the inserts/sec figure: only 405 inserts/s is very, very slow,
and could take ages to complete. Note that the 405 inserts/s is an average
calculated over a short window (usually under a minute), not over the
uptime; you can see the window length at the top (initial lines) of the
SHOW ENGINE INNODB STATUS \G output.

Depending on the machine footprint, inserts/sec should be well above this
even on a busy server.

The indexes are slowing this down. Your calculation of 79 hours should be
roughly correct, but only if there are no unique indexes; otherwise the
alter will slow down further as the data grows.
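
A quick sanity check of the ~79-hour figure from this thread, using the row
count and insert rate quoted below (Python used purely as a calculator):

```python
# Estimate ALTER TABLE duration from the InnoDB insert rate.
rows = 115_096_205   # rows in the table being altered (from the thread)
rate = 405.80        # inserts/s reported by SHOW ENGINE INNODB STATUS

hours = rows / rate / 3600
print(f"{hours:.1f} hours")  # → 78.8 hours, matching the ~79h estimate
```

This is a best case: as noted above, unique indexes make the effective rate
drop as the table grows, so the real time can only be longer.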

Regards,
Akshay Surayavanshi

On Fri, Sep 28, 2012 at 2:22 AM, Larry Martell wrote:

> On Thu, Sep 27, 2012 at 2:37 PM, Akshay Suryavanshi
>  wrote:
> > Hi,
> >
> > The alter taking such a long time, could be due to composite indexes on
> the
> > table.
>
> There are 22 indexes on the table, but none are composites.
>
> > we understand the table is big but not so big to take such a long
> > time. Also we can get a hold of the process looking at the disk space
> > consumed. Usually a tmp table created in data directory would also give a
> > good understanding of the process, remember sizes need not be exact since
> > there might be some defragmentation at file level.
>
> Ok, I will look for a temp file.
>
> > Next you can check inserts/sec in Show engine innodb status \G and
> calculate
> > the time it should take for the number of rows in the table.
>
> The ROW OPERATIONS section has this:
>
> 1 queries inside InnoDB, 0 queries in queue
> 1 read views open inside InnoDB
> Main thread process no. 7913, id 14020684432, state: sleeping
> Number of rows inserted 75910241, updated 15602, deleted 70, read
> 9130481311
> 405.80 inserts/s, 0.00 updates/s, 0.00 deletes/s, 405.80 reads/s
>
> This appears to be statistics over the life of this invocation of the
> server, correct? But using 405.80 inserts/s means the alter will take
> almost 79 hours.
>
> >
> > Usually, you carry this operation by adding the secondary indexes after
> the
> > data import or such alters are complete.
> >
> > Regards,
> > Akshay Suryavanshi
> >
> >
> > On Fri, Sep 28, 2012 at 1:56 AM, Rick James 
> wrote:
> >>
> >> Isn't ALTER a DDL, not DML?  So I don't think you would find anything in
> >> undo logs.
> >>
> >> > -Original Message-
> >> > From: Larry Martell [mailto:larry.mart...@gmail.com]
> >> > Sent: Thursday, September 27, 2012 1:20 PM
> >> > To: Rick James
> >> > Cc: mysql mailing list
> >> > Subject: checking progress of alter table on an InnoDB table (Was: Re:
> >> > checking progress of alter table on a MyISAM table)
> >> >
> >> > So we changed the table from MyISAM to InnoDB. I read that the 'undo
> >> > log entries' shown in 'show engine innodb status' would correspond to
> >> > the number of rows that have been operated on throughout the process
> of
> >> > the ALTER. The table we're altering has 115,096,205 rows, and the
> >> > alter's been running for 28 hours, and the undo log entries is 9309.
> >> > Also that number seems to go up and down. So clearly, it's not what I
> >> > think.
> >> >
> >> > So anyone know a way to monitor the status of the alter now that it's
> >> > an InnoDB table?
> >> >
> >> >
> >> > On Wed, Sep 26, 2012 at 10:31 AM, Rick James 
> >> > wrote:
> >> > > Not really.
> >> > > You could look at the .TYD and .TYI file sizes and compare to the
> >> > .MYD and .MYI, but that can be deceptive.  If the table is really big,
> >> > and has lots of indexes, the generation of the indexes might go slower
> >> > and slower -- hence any math on the sizes would be optimistic.
> >> > >
> >> > >> -Original Message-
> >> > >> From: Larry Martell [mailto:larry.mart...@gmail.com]
> >> > >> Sent: Wednesday, September 26, 2012 8:52 AM
> >> > >> To: mysql mailing list
> >> > >> Subject: checking progress of alter table on a MyISAM table
> >> > >>
> >> > >> Is there any way to check on the progress of a long running alter
> >> > >> table on a MyISAM table? I know it can be done with an InnoDB
> table,
> >> > >> but I haven't found a way to do it on with a MyISAM table.
> >> >
> >> > --
> >> > MySQL General Mailing List
> >> > For list archives: http://lists.mysql.com/mysql
> >> > To unsubscribe:http://lists.mysql.com/mysql
> >>
> >>
> >> --
> >> MySQL General Mailing List
> >> For list archives: http://lists.mysql.com/mysql
> >> To unsubscribe:http://lists.mysql.com/mysql
> >>
> >
>


Re: checking progress of alter table on an InnoDB table (Was: Re: checking progress of alter table on a MyISAM table)

2012-09-27 Thread Akshay Suryavanshi
Hi,

The alter taking such a long time could be due to composite indexes on the
table. We understand the table is big, but not so big that it should take
this long. You can also gauge progress from the disk space consumed: the
tmp table created in the data directory gives a good indication of the
process, though the sizes need not match exactly, since there may be some
fragmentation at the file level.

Next you can check inserts/sec in Show engine innodb status \G and
calculate the time it should take for the number of rows in the table.

Usually, you carry out this kind of operation by adding the secondary
indexes after the data import, or after such alters are complete.

Regards,
Akshay Suryavanshi

On Fri, Sep 28, 2012 at 1:56 AM, Rick James  wrote:

> Isn't ALTER a DDL, not DML?  So I don't think you would find anything in
> undo logs.
>
> > -Original Message-
> > From: Larry Martell [mailto:larry.mart...@gmail.com]
> > Sent: Thursday, September 27, 2012 1:20 PM
> > To: Rick James
> > Cc: mysql mailing list
> > Subject: checking progress of alter table on an InnoDB table (Was: Re:
> > checking progress of alter table on a MyISAM table)
> >
> > So we changed the table from MyISAM to InnoDB. I read that the 'undo
> > log entries' shown in 'show engine innodb status' would correspond to
> > the number of rows that have been operated on throughout the process of
> > the ALTER. The table we're altering has 115,096,205 rows, and the
> > alter's been running for 28 hours, and the undo log entries is 9309.
> > Also that number seems to go up and down. So clearly, it's not what I
> > think.
> >
> > So anyone know a way to monitor the status of the alter now that it's
> > an InnoDB table?
> >
> >
> > On Wed, Sep 26, 2012 at 10:31 AM, Rick James 
> > wrote:
> > > Not really.
> > > You could look at the .TYD and .TYI file sizes and compare to the
> > .MYD and .MYI, but that can be deceptive.  If the table is really big,
> > and has lots of indexes, the generation of the indexes might go slower
> > and slower -- hence any math on the sizes would be optimistic.
> > >
> > >> -Original Message-
> > >> From: Larry Martell [mailto:larry.mart...@gmail.com]
> > >> Sent: Wednesday, September 26, 2012 8:52 AM
> > >> To: mysql mailing list
> > >> Subject: checking progress of alter table on a MyISAM table
> > >>
> > >> Is there any way to check on the progress of a long running alter
> > >> table on a MyISAM table? I know it can be done with an InnoDB table,
> > >> but I haven't found a way to do it on with a MyISAM table.
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/mysql
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


Re: Temporary table creation fails

2012-09-10 Thread Akshay Suryavanshi
Hi,

If you can afford it, try changing tmpdir for MySQL. This is a static
variable, so changing it requires a mysqld restart.
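
A minimal my.cnf sketch; `/mnt/bigtmp` is an assumed path and must be an
existing directory, writable by the mysql user, on a filesystem with enough
free space:

```ini
[mysqld]
tmpdir = /mnt/bigtmp
```

After restarting mysqld, `SHOW VARIABLES LIKE 'tmpdir';` confirms the new
location is in effect.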

thanks

On Mon, Sep 10, 2012 at 6:08 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:

> Hi
>
>
> at the moment this does not really matter to us.
>
> we have even tried to create a temp table with only one field in order
> to insert one row for testing, but we are currently not able to create any
> temporary tables whatsoever as even the simplest form of table still gives
> the same error.
>
> Regards
>
>
>
> On 09/10/2012 02:33 PM, Ananda Kumar wrote:
>
>> this temp table will hold how many rows, what would be its size.
>>
>> On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail <
>> machiel.richa...@gmail.com 
>> >
>> wrote:
>>
>> Hi,
>> We confirmed that the /tmp directory permissions is set to
>> rwxrwxrwxt and is owned by root , the same as all our other servers.
>>
>> There is also about 60Gb of free space on the filesystem where
>> /tmp resides.
>>
>> Regards
>>
>>
>>
>>
>>
>> On 09/10/2012 01:11 PM, Rik Wasmus wrote:
>>
>> the message "ERROR 1005 (HY000): Can't create table
>> '/tmp/#sql4a27_68eed1_0.frm' (errno: -1)"
>>
>> Basics first:
>> 1) Is the /tmp directory write & readable for the user mysql
>> runs as?
>> 2) Has the /tmp directory enough free space?
>>
>>
>>
>> -- MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe: http://lists.mysql.com/mysql
>>
>>
>>
>


Re: Temporary table creation fails

2012-09-10 Thread Akshay Suryavanshi
Hi,

If you don't have data on the server, please reinitialize the data
directory.

Use mysql_install_db, giving the proper data directory and, if you are
using one, the proper cnf file.

Also specify the user (e.g. root) if you have root access.
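
A sketch of the initialization command for MySQL 5.0/5.1-era servers; the
paths here are assumptions, so adjust them to your layout:

```shell
mysql_install_db --user=mysql \
    --datadir=/var/lib/mysql \
    --defaults-file=/etc/my.cnf
```

Back up anything in the data directory first: reinitializing creates fresh
system tables and must not be run over a directory that still holds data
you need.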

Thanks

On Mon, Sep 10, 2012 at 3:34 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:

> Hi,
>  still no luck, same error being given immediately after pressing
> enter.
>
>
>
>
> On 09/10/2012 12:02 PM, Ananda Kumar wrote:
>
>> start with 500MB and try
>>
>> On Mon, Sep 10, 2012 at 3:31 PM, Machiel Richards - Gmail <
>> machiel.richa...@gmail.com 
>> >
>> wrote:
>>
>> Hi, the sort_buffer_size was set to 8Mb as well as 32M  for the
>> session (currently 1M) and retried with same result.
>>
>>
>>
>>
>>
>> On 09/10/2012 11:55 AM, Ananda Kumar wrote:
>>
>>> can you trying setting sort_buffer_size to big value at your
>>> session level and create the table
>>>
>>> On Mon, Sep 10, 2012 at 2:54 PM, Machiel Richards - Gmail
>>> >> >> >>
>>>
>>> wrote:
>>>
>>> Hi
>>>
>>> We tried that as well, however the databases are quite
>>> busy and either other transactions overwrite the info, or
>>> there is nothing logged.
>>>
>>> We even tried running the create statement and
>>> immediately running Show innodb status, but nothing for that
>>> statement.
>>>
>>> Regards
>>>
>>>
>>>
>>>
>>>
>>> On 09/10/2012 11:05 AM, Ananda Kumar wrote:
>>>
 try this command and see if you can get more info about the
 error

 show innodb status\G

 On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail
 >>> >
 wrote:

 Hi All

 I am hoping someone can point me in the right direction.

 We have a mysql 5.0 database which is giving us
 hassles when trying to create a temporary table.

 The creation works perfectly on the slave machine as
 well as all our other db's, however on the one specific
 main server we are getting the message "ERROR 1005
 (HY000): Can't create table '/tmp/#sql4a27_68eed1_0.frm'
 (errno: -1)"


  We have been trying to figure this out for more
 than a week now with no luck.

  When looking in the mysql error.log file, the
 innodb states that there is either another mysqld
 running (confirmed and not the case) or it is a
 filesystem permissions problem.


   We have checked several times and the permissions
 on the /tmp filesystem is the same as on all servers.


 Any suggestions would be greatly appreciated as
 google didnt help much either.


 Regards


 -- MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql



>>>
>>>
>>
>>
>


Re: mysql failed login attempts

2012-07-15 Thread Akshay Suryavanshi
Hi Tim,

The information you provided is not enough; do you have the exact error you
encountered? The configs you mentioned are not related to connections.
Please provide the full error (perhaps a trace from the application logs),
the way you are trying to connect, and the configs associated with
connecting.

Thanks
Akshay Suryavanshi

On Mon, Jul 16, 2012 at 8:51 AM, Tim Dunphy  wrote:

> hello,
>
>  currently my php app is failing logins to it's mysql database. My config
> file is set like this:
>
> [mysqld_safe]
> general-log=1
> general-log-file=/var/log/mysqld-general.log
> log-output=/var/log/mysqld-general.log
> log=/var/log/mysqld.log
> log-error=/var/log/mysqld-error.log
> pid-file=/var/run/mysqld/mysqld.pid
>
> Yet if I tail the /var/log/mysqld-general.log I cannot see the login
> attempts.
>
> This is how my environment variables are set:
>
> mysql> show variables like 'log%';
> +-+---+
> | Variable_name   | Value |
> +-+---+
> | log | ON|
> | log_bin | OFF   |
> | log_bin_trust_function_creators | OFF   |
> | log_error   | /var/log/mysqld-error.log |
> | log_output  | FILE  |
> | log_queries_not_using_indexes   | OFF   |
> | log_slave_updates   | OFF   |
> | log_slow_queries| OFF   |
> | log_warnings| 1 |
> +-+---+
>
> Can someone please tell me what I am doing wrong and how I can see failed
> login attempts?
>
> Tim
>
> --
> GPG me!!
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>


Re: why does "select * from table oder by indexed_field" not use key?

2012-07-10 Thread Akshay Suryavanshi
Hi,

The statement does a full table scan for the following reasons: there is no
WHERE clause, and the query selects all columns (*). Filesort is used
because no index is used; add a WHERE condition on an indexed column and
look at the EXPLAIN plan again. You can also retrieve only columns on which
indexes are created, to take advantage of a "covering index".
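
To illustrate with the table from this thread (the key `qbq_key` on
`qg_sort` already exists in the CREATE TABLE below), selecting only the
indexed column lets the query be served from the index alone:

```sql
EXPLAIN SELECT qg_sort FROM cms1_quickbar_groups ORDER BY qg_sort ASC;
-- Extra should now show "Using index" rather than "Using filesort".
```

The trade-off is that any column outside the index forces a row lookup,
which is why `SELECT *` cannot use the covering-index path.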

On Wed, Jul 11, 2012 at 3:19 AM, Reindl Harald wrote:

> my reason for create a key on "qg_sort" was primary
> for this query - but why is here 'filesort' used?
>
> mysql> EXPLAIN SELECT * FROM cms1_quickbar_groups ORDER BY qg_sort ASC;
>
> ++-+--+--+---+--+-+--+--++
> | id | select_type | table| type | possible_keys | key  |
> key_len | ref  | rows | Extra  |
>
> ++-+--+--+---+--+-+--+--++
> |  1 | SIMPLE  | cms1_quickbar_groups | ALL  | NULL  | NULL |
> NULL| NULL |2 | Using filesort |
>
> ++-+--+--+---+--+-+--+--++
> 1 row in set (0.01 sec)
> -
> cms1_quickbar_groups | CREATE TABLE `cms1_quickbar_groups` (
>   `qg_id` mediumint(7) unsigned NOT NULL AUTO_INCREMENT,
>   `qg_titel` varchar(255) COLLATE latin1_german1_ci NOT NULL DEFAULT '',
>   `qg_titel_en` varchar(255) COLLATE latin1_german1_ci NOT NULL DEFAULT '',
>   `qg_sort` smallint(3) unsigned NOT NULL DEFAULT '0',
>   PRIMARY KEY (`qg_id`),
>   KEY `qbq_key` (`qg_sort`)
> ) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
> COLLATE=latin1_german1_ci PACK_KEYS=1 DELAY_KEY_WRITE=1
>
>