Re: replication between two tables in same database

2011-10-03 Thread Anupam Karmarkar
Try implementing the functionality you want with triggers that replicate the data.
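A minimal sketch of the trigger approach, assuming a hypothetical InnoDB source table `articles` and a MyISAM shadow table `articles_ft` used only for FULLTEXT search (all table and column names here are illustrative, not from the original post):

```sql
-- Assumes: articles(id INT PRIMARY KEY, body TEXT) ENGINE=InnoDB
--          articles_ft(id INT PRIMARY KEY, body TEXT, FULLTEXT(body)) ENGINE=MyISAM
-- Each trigger mirrors one kind of change into the MyISAM copy.
CREATE TRIGGER articles_ai AFTER INSERT ON articles
FOR EACH ROW
  INSERT INTO articles_ft (id, body) VALUES (NEW.id, NEW.body);

CREATE TRIGGER articles_au AFTER UPDATE ON articles
FOR EACH ROW
  UPDATE articles_ft SET body = NEW.body WHERE id = NEW.id;

CREATE TRIGGER articles_ad AFTER DELETE ON articles
FOR EACH ROW
  DELETE FROM articles_ft WHERE id = OLD.id;
```

Searches then run against the MyISAM copy, e.g. `SELECT id FROM articles_ft WHERE MATCH(body) AGAINST ('term')`, while all writes go to the InnoDB table.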




From: Derek Downey de...@orange-pants.com
To: [MySQL] mysql@lists.mysql.com
Sent: Thursday, 29 September 2011 10:46 PM
Subject: Re: replication between two tables in same database

You could look into the sphinx engine http://sphinxsearch.com/about/sphinx/

No experience with this personally though

- Derek
On Sep 29, 2011, at 1:07 PM, Tompkins Neil wrote:

 Yes, unless I can set-up some sort of replication between the two tables.
 
 On Thu, Sep 29, 2011 at 6:05 PM, Reindl Harald h.rei...@thelounge.net wrote:
 
 please do NOT post off-list!
 
 so your only workaround currently is LIKE '%whatever%'
 
 Am 29.09.2011 19:04, schrieb Tompkins Neil:
 In short, the reason I'm using InnoDB is its support for foreign keys;
 however, we need a couple of the tables to support FULLTEXT searching, which
 InnoDB does not support.
 
 
 On Thu, Sep 29, 2011 at 6:01 PM, Reindl Harald h.rei...@thelounge.net wrote:
 
 
 
 Am 29.09.2011 18:56, schrieb Tompkins Neil:
 Hi
 
 I have an InnoDB and a MyISAM table in the SAME database, and I wish to
 replicate the data between the two because I need to use FULLTEXT searching
 on the MyISAM table. Is this possible? If so, how do I do it?
 
 in short: no
 
 in longer:
 why in the world are you using the wrong engine if you need
 fulltext-search?
 MySQL 5.6 seems to start supporting this in InnoDB, but not currently
 
 replication is based on binlogs and contains database/table, so there is
 no dirty trick to do this on one server, except running a script that
 copies the table contents via cronjob
 
 
 
 --
 
 Mit besten Grüßen, Reindl Harald
 the lounge interactive design GmbH
 A-1060 Vienna, Hofmühlgasse 17
 CTO / software-development / cms-solutions
 p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
 icq: 154546673, http://www.thelounge.net/
 http://www.thelounge.net/signature.asc.what.htm
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=sb_akarmar...@yahoo.com

Re: Join 2 tables and compare then calculate something

2011-10-03 Thread Anupam Karmarkar
Also try a query with UNION ALL:

SELECT A, B, C FROM
  (SELECT A, B, C FROM X
   UNION ALL
   SELECT A, B, C FROM Y) AS t
GROUP BY A, B, C
HAVING COUNT(*) > 1;

(Plain UNION removes duplicates, so COUNT(*) could never exceed 1; UNION ALL keeps the rows from both tables, so a row present in both is counted twice. The derived table also needs an alias in MySQL.)





From: Gian Karlo C webdev...@gmail.com
To: mysql@lists.mysql.com
Sent: Sunday, 2 October 2011 12:49 PM
Subject: Join 2 tables and compare then calculate something

Hi All,

I decided to join and write to the list, hoping someone could shed some
light on this for me.

Here's the scenario.

I have a database running on MySQL 5.x on CentOS 5. The database has two
tables that are almost identical, except one has some additional fields.

Table 1
Name, IPAddress, Description, Issue, Severity, Timestamp, DateReceived

Table 2
Name, IPAddress, Description, Issue, Severity, Timestamp, DateReceived,
Owner

Here's my SQL statement to compare both tables; if the field data are the
same, I consider it a valid record.

SELECT Table1.Name, Table1.IPAddress, Table1.Description, Table1.Issue,
       Table1.Severity, Table1.Timestamp
FROM Table1
LEFT JOIN Table2 ON Table1.Name = Table2.Name
WHERE Table1.Name = Table2.Name
  AND Table1.IPAddress = Table2.IPAddress
  AND Table1.Description = Table2.Description
  AND Table1.Issue = Table2.Issue
  AND Table1.Severity = Table2.Severity
  AND Table1.Timestamp = Table2.Timestamp
GROUP BY 1;

I need to compare Name, IPAddress, Description, Issue, Severity and
Timestamp to consider a row valid data; then I group it so that only one
record will show, although there is no duplicate entry in the results. Just
want to make sure.

Using that SQL statement I was able to compare the data (which I don't
know is a good approach); now, when I get valid results, I want to
compute the difference in DateReceived.

Table1
DateReceived = 10:05

Table2
DateReceived = 10:15

So the computation is to get the difference between DateReceived which the
result should be 10 minutes.

How would I add that computation to my existing SQL statement? And maybe
someone can suggest a better approach for my current statement.
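One way to fold that computation in, sketched under the assumption that DateReceived is a DATETIME column (table and column names as in the post above):

```sql
SELECT Table1.Name, Table1.IPAddress, Table1.Description, Table1.Issue,
       Table1.Severity, Table1.Timestamp,
       -- minutes between the two received times; 10:05 vs 10:15 gives 10
       TIMESTAMPDIFF(MINUTE, Table1.DateReceived, Table2.DateReceived) AS DiffMinutes
FROM Table1
INNER JOIN Table2
        ON Table1.Name        = Table2.Name
       AND Table1.IPAddress   = Table2.IPAddress
       AND Table1.Description = Table2.Description
       AND Table1.Issue       = Table2.Issue
       AND Table1.Severity    = Table2.Severity
       AND Table1.Timestamp   = Table2.Timestamp;
```

If DateReceived is a TIME column instead, TIMEDIFF(Table2.DateReceived, Table1.DateReceived) would give the gap as a time value. The INNER JOIN with all six equality conditions in the ON clause also replaces the LEFT JOIN + WHERE combination, which was effectively an inner join anyway.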

Thanks in advance.

Re: how to shrink ibdata1

2011-10-03 Thread Andrew Moore
File-per-table is required if you want to implement compression via the
Barracuda file format.
On 3 Oct 2011 06:39, Adarsh Sharma adarsh.sha...@orkash.com wrote:
 innodb_file_per_table creates a separate tablespace file for each table.
 What should I do if some tables' data are deleted frequently?
 I have an InnoDB table which was truncated after reaching 150GB.


 Reindl Harald wrote:
 Am 02.10.2011 22:10, schrieb Dhaval Jaiswal:

 Hi All,

 How to shrink the MySQL ibdata1 file?
 
 The actual size of the database is barely 10G, but ibdata1 has grown to
 73G. I did some googling and found the following procedure:
 
 1. Do a mysqldump of all databases, procedures, triggers, etc.
 2. Drop all databases
 3. Stop MySQL
 4. Delete the ibdata1 and ib_log files
 5. Start MySQL
 6. Restore from the dump
 
 When you start MySQL in step 5, the ibdata1 and ib_log files will be
 recreated.
 
 Is this the only option with MySQL? Can't we update the statistics of the
 database and reduce the file size?
 
 I am currently using 5.0.x.


 innodb_file_per_table is your friend, but it's way too late;
 that should have been considered BEFORE the setup.
 
 Now your only option is:
 - dump your databases
 - reset the server
 - configure innodb_file_per_table
 - restore the backup




Re: 4 minute slow on select count(*) from table - myisam type

2011-10-03 Thread Joey L
Thanks for the input -
1. I will wait 48 hours and see what happens.
2. Can you tell me some performance tests I can run to help me better tune
my server?
3. I am concerned about this table (SHOW TABLE STATUS output):
            Name: w6h8a_sh404sef_urls
          Engine: MyISAM
         Version: 10
      Row_format: Dynamic
            Rows: 8908402
  Avg_row_length: 174
     Data_length: 1551178184
 Max_data_length: 281474976710655
    Index_length: 2410850304
       Data_free: 0
  Auto_increment: 8908777
     Create_time: 2011-09-22 11:16:03
     Update_time: 2011-10-02 21:17:20
      Check_time: 2011-10-02 10:12:04
       Collation: utf8_general_ci
What can I do to make it run faster? I did not write the code, but I need to
optimize the server to handle this table when it gets larger. It is used for
URL rewrites, so it has a lot of URLs.
thanks
mjh

On Mon, Oct 3, 2011 at 12:38 AM, Bruce Ferrell bferr...@baywinds.org wrote:


 The meaning is:
 
 increase max_connections
 reduce wait_timeout
 -- 28800 means wait 8 hours before closing out dead connections
 same for interactive_timeout
 
 increase key_buffer_size (> 7.8G); increase join_buffer_size
 -- This keeps MySQL from having to run to disk constantly for keys
 -- Key buffer size / total MyISAM indexes: 256.0M/7.8G
 -- You have a key buffer of 256M and 7.8G of keys
 
 join_buffer_size (> 128.0K, or always use indexes with joins)
 -- Joins performed without indexes: 23576 of 744k queries.
 -- You probably want to look at the slow query log. Generalize the queries
 and do an EXPLAIN on each. I have seen instances where a query I
 thought was using an index wasn't, and I had to rewrite it... with help from
 this list :-)  Thanks gang!
 
 increase tmp_table_size (> 16M)
 increase max_heap_table_size (> 16M)
 -- When making adjustments, keep tmp_table_size and max_heap_table_size equal
 
 increase table_cache (> 1k)
 -- Table cache hit rate: 7% (1K open / 14K opened)
 -- Increase table_cache gradually to avoid file descriptor limits
 
 All of that aside, you need to let this run for at least 24 hours; I
 prefer 48 hours. The first line says MySQL has only been running 9
 hours. You can reset the timeouts interactively by entering at the
 mysql prompt:
 
 set global wait_timeout=<value>;
 
 You can do the same for interactive_timeout.
 
 Setting these values too low will cause long-running queries to abort.
 
 On 10/02/2011 07:02 PM, Joey L wrote:
  Variables to adjust:
   max_connections (> 100)
   wait_timeout (< 28800)
   interactive_timeout (< 28800)
   key_buffer_size (> 7.8G)
   join_buffer_size (> 128.0K, or always use indexes with joins)
   tmp_table_size (> 16M)
   max_heap_table_size (> 16M)
   table_cache (> 1024)
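Taken together, those suggestions translate into a my.cnf fragment along these lines (the values are illustrative starting points, not recommendations from the thread; in particular, raise key_buffer_size only within the RAM actually available):

```
[mysqld]
max_connections      = 150
wait_timeout         = 600
interactive_timeout  = 600
key_buffer_size      = 1G
join_buffer_size     = 1M
tmp_table_size       = 64M
max_heap_table_size  = 64M
table_cache          = 2048
```

Changes to my.cnf take effect on restart; the two timeouts can also be changed at runtime with SET GLOBAL, as described above.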






Re: 4 minute slow on select count(*) from table - myisam type

2011-10-03 Thread Andrés Tello
have you tried

select count(yourindex) instead of select count(*) ?


On Mon, Oct 3, 2011 at 7:53 AM, Joey L mjh2...@gmail.com wrote:

 [quoted text snipped]



Re: 4 minute slow on select count(*) from table - myisam type

2011-10-03 Thread Joey L
This is not a real query on the site; it is just a way I am measuring
MySQL performance, though I don't know if it is a good test. I am looking
for a better way to get a performance read on my site. Do you have any
suggestions, besides just viewing pages on it?
thanks
mjh


On Mon, Oct 3, 2011 at 9:58 AM, Andrés Tello mr.crip...@gmail.com wrote:

 [quoted text snipped]




Re: 4 minute slow on select count(*) from table - myisam type

2011-10-03 Thread Eric Bergen
Can you run SHOW PROCESSLIST in another connection while the SELECT
COUNT(*) query is running, and say what the State column is?

On Mon, Oct 3, 2011 at 7:00 AM, Joey L mjh2...@gmail.com wrote:
 [quoted text snipped]







-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net




MySQL Community Server 5.6.3 has been released (part 1)

2011-10-03 Thread Hery Ramilison

Dear MySQL users,


MySQL Server 5.6.3 (Milestone Release) is a new version of the world's
most popular open source database.

The new features in these releases are of beta quality. As with any
other pre-production release, caution should be taken when installing on
production level systems or systems with critical data.

Note that 5.6.3 includes all features in MySQL 5.5. For an overview of
what's new in MySQL 5.6, please see the section "What Is New in MySQL
5.6" below, or view it online at

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

For information on installing MySQL 5.6.3 on new servers, please see the
MySQL installation documentation at

http://dev.mysql.com/doc/refman/5.6/en/installing.html

For upgrading from previous MySQL releases, please see the important
upgrade considerations at

http://dev.mysql.com/doc/refman/5.6/en/upgrading-from-previous-series.html

Please note that *downgrading* from these releases to a previous release
series is not supported.

MySQL Server 5.6 is available in source and binary form for a number of
platforms from the Development Releases selection of our download
pages at

http://dev.mysql.com/downloads/mysql/

Not all mirror sites may be up to date at this point in time, so if you
can't find this version on some mirror, please try again later or choose
another download site.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc.:

 http://forge.mysql.com/wiki/Contributing

The list of all Bugs Fixed for 5.6.3 may also be viewed online at

 http://dev.mysql.com/doc/refman/5.6/en/news-5-6-3.html

If you are running a MySQL production level system, we would like to
direct your attention to MySQL Enterprise Edition, which includes the
most comprehensive set of MySQL production, backup, monitoring,
modeling, development, and administration tools so businesses can
achieve the highest levels of MySQL performance, security and uptime.

 http://mysql.com/products/enterprise/

Enjoy!

Changes in MySQL 5.6.3 (03 October 2011)

Parallel Event Execution (multi-threaded slave)

  * Replication: MySQL replication now supports a multi-threaded
slave executing replication events from the master across
different databases in parallel, which can result in
significant improvements in application throughput when
certain conditions are met. The optimum case is that the data
be partitioned per database, and that updates within a given
database occur in the same order relative to one another as
they do on the master. However, transactions do not need to be
coordinated between different databases.
The slave_parallel_workers server system variable (added in
this release) sets the number of slave worker threads for
executing replication events in parallel. When parallel
execution is enabled, the slave SQL thread acts as the
coordinator for the slave worker threads, among which
transactions are distributed on a per-database basis. This
means that a worker thread on the slave can process
successive transactions on a given database without waiting
for updates on other databases to complete.
Due to the fact that transactions on different databases can
occur in a different order on the slave than on the master,
checking for the most recently executed transaction does not
guarantee that all previous transactions from the master have
been executed on the slave. This has implications for logging
and recovery when using a multi-threaded slave. For
information about how to interpret binary logging information
when using multi-threading on the slave, see Section
12.4.5.35, SHOW SLAVE STATUS Syntax.
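A minimal sketch of enabling this on a replication slave (the variable name comes from the announcement above; the value 4 is just an illustrative worker count):

```
# slave's my.cnf fragment
[mysqld]
slave_parallel_workers = 4
```

With the default of 0 the slave applies events single-threaded, as in previous releases.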

Optimizer Features

  * These query optimizer improvements were implemented:

   + The EXPLAIN statement now provides execution plan
 information for DELETE, INSERT, REPLACE, and UPDATE
 statements. Previously, EXPLAIN provided information only
 about SELECT statements.

   + The optimizer more efficiently handles subqueries in the
 FROM clause (that is, derived tables):
o Materialization of subqueries in the FROM clause is
  postponed until their contents are needed during
  query execution, which improves performance.
  Previously, subqueries in the FROM clause were
  materialized for EXPLAIN SELECT statements. This
  resulted in partial SELECT execution, even though
  the purpose of EXPLAIN is to obtain query plan
  information, not to execute the query. The
  materialization no longer occurs, so EXPLAIN is
  faster for such queries. For non-EXPLAIN queries,
  delay of materialization may result in not having to
  do it at all. Consider a query that joins the result
  of a subquery in the FROM clause to another table.
  If the 

MySQL Community Server 5.6.3 has been released (part 2 - bug fixes)

2011-10-03 Thread Hery Ramilison

Dear MySQL users,

This is the list of bug fixes. For the functional enhancements, see part
1 of this mail:

   Bugs fixed:

  * Incompatible Change: For socket I/O, an optimization for the
case when the server used alarms for timeouts could cause a
slowdown when socket timeouts were used instead.
The fix for this issue results in several changes:

   + Previously, timeouts applied to entire packet-level send
 or receive operations. Now timeouts apply to individual
 I/O operations at a finer level, such as sending 10 bytes
 of a given packet.

   + The handling of packets larger than max_allowed_packet
 has changed. Previously, if an application sent a packet
 bigger than the maximum permitted size, or if the server
 failed to allocate a buffer sufficiently large to hold
 the packet, the server kept reading the packet until its
 end, then skipped it and returned an
 ER_NET_PACKET_TOO_LARGE error. Now the server disconnects
 the session if it cannot handle such large packets.

   + On Windows, the default value for the
 MYSQL_OPT_CONNECT_TIMEOUT option to mysql_options() is no
 longer 20 seconds. Now the default is no timeout
 (infinite), the same as on other platforms.

   + Building and running MySQL on POSIX systems now requires
 support for poll() and O_NONBLOCK. These should be
 available on any modern POSIX system.
(Bug #54790, Bug #11762221, Bug #36225, Bug #11762221)

  * InnoDB Storage Engine: Replication: Trying to update a column,
previously set to NULL, of an InnoDB table with no primary key
caused replication to fail with Can't find record in 'table'
on the slave. (Bug #11766865, Bug #60091)

  * InnoDB Storage Engine: A failed CREATE INDEX operation for an
InnoDB table could result in some memory being allocated but
not freed. This memory leak could affect tables created with
the ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED setting. (Bug
#12699505)

  * InnoDB Storage Engine: Stability is improved when using BLOB
values within InnoDB tables in a heavily loaded system,
especially for tables using the ROW_FORMAT=DYNAMIC or
ROW_FORMAT=COMPRESSED setting. (Bug #12612184)

  * InnoDB Storage Engine: The server could halt if InnoDB
interpreted a very heavy I/O load for 15 minutes or more as an
indication that the server was hung. This change fixes the
logic that measures how long InnoDB threads were waiting,
which formerly could produce false positives. (Bug #11877216,
Bug #11755413, Bug #47183)

  * InnoDB Storage Engine: With the setting
lower_case_table_names=2, inserts into InnoDB tables covered
by foreign key constraints could fail after a server restart.
(Bug #11831040, Bug #60196, Bug #60909)

  * InnoDB Storage Engine: If the server crashed while an XA
transaction was prepared but not yet committed, the
transaction could remain in the system after restart, and
cause a subsequent shutdown to hang. (Bug #11766513, Bug
#59641)

  * InnoDB Storage Engine: With the setting
lower_case_table_names=2, inserts into InnoDB tables covered
by foreign key constraints could fail after a server restart.
This is a similar problem to the foreign key error in Bug
#11831040 / Bug #60196 / Bug #60909, but with a different root
cause and occurring on Mac OS X.

  * Partitioning: The internal get_partition_set() function did
not take into account the possibility that a key specification
could be NULL in some cases. (Bug #12380149)

  * Partitioning: When executing a row-ordered retrieval index
merge, the partitioning handler used memory from that
allocated for the table, rather than that allocated to the
query, causing table object memory not to be freed until the
table was closed. (Bug #11766249, Bug #59316)

  * Partitioning: Attempting to use ALTER TABLE ... EXCHANGE
PARTITION to exchange a view with a (nonexistent) partition of
a table that was not partitioned caused the server to crash.
(Bug #11766232, Bug #60039)

  * Partitioning: Auto-increment columns of partitioned tables
were checked even when they were not being written to. In
debug builds, this could lead to a server crash. (Bug
#11765667, Bug #58655)

  * Partitioning: The UNIX_TIMESTAMP() function was not treated as
a monotonic function for purposes of partition pruning. (Bug
#11746819, Bug #28928)

  * Replication: A mistake in thread cleanup could cause a
replication master to crash. (Bug #12578441)

  * Replication: When using row-based replication and attribute
promotion or demotion (see Section 15.4.1.6.2, Replication of
Columns Having Different Data Types), memory allocated
internally for conversion of BLOB columns was not freed
afterwards. (Bug #12558519)

  * Replication: A memory leak could occur when 

How MyISAM handle auto_increment

2011-10-03 Thread Angela liu
Hi, Folks:


I have questions regarding how MyISAM handles the auto_increment column:

1. Is there an auto_increment counter for MyISAM to assign a new value to
auto_increment columns?

2. If MyISAM has the counter, is the counter stored in memory or on disk?

Thanks

Re: How MyISAM handle auto_increment

2011-10-03 Thread mos

At 04:46 PM 10/3/2011, you wrote:
Hi, Folks: I have questions regarding how MyISAM handles the
auto_increment column. 1. Is there an auto_increment counter for
MyISAM to assign a new value to auto_increment columns?


Yes

  2. If MyISAM has the counter, is the counter stored in memory or on
disk? Thanks


It is stored with the table definition. It is only reset to 0 when
the table is (re)created. You can get the last AutoInc value for the record
that was just added with SELECT LAST_INSERT_ID(). See
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
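A small sketch of that behavior, using a hypothetical MyISAM table `t`:

```sql
CREATE TABLE t (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  v  CHAR(1)
) ENGINE=MyISAM;

INSERT INTO t (v) VALUES ('a');   -- gets id = 1
INSERT INTO t (v) VALUES ('b');   -- gets id = 2
SELECT LAST_INSERT_ID();          -- 2, for this session's last insert
DELETE FROM t;                    -- rows gone, but the counter persists
INSERT INTO t (v) VALUES ('c');   -- gets id = 3, not 1
SHOW TABLE STATUS LIKE 't';       -- Auto_increment column reports 4
```

TRUNCATE TABLE (or dropping and recreating the table) is what resets the counter to start from 1 again.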


Mike






In general, cheaper to INNER JOIN or two separate queries

2011-10-03 Thread Dotan Cohen
I need two fields from two different tables. I could either run two
queries, or a single INNER JOIN query:

$r1 = mysql_query("SELECT fruit FROM fruits WHERE userid = 1");
$r2 = mysql_query("SELECT beer FROM beers WHERE userid = 1");
--or--
$r = mysql_query("SELECT fruits.fruit, beers.beer FROM fruits INNER JOIN
beers ON fruits.userid = beers.userid WHERE beers.userid = 1");

In general, which is preferable? I don't have access to the production
machine to benchmark at the moment, but which is best practice?
Thanks!

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com




Re: How MyISAM handle auto_increment

2011-10-03 Thread Angela liu
Thanks. What about if mysqld restarts, does auto_increment get reset?

I saw this happen with InnoDB: if the table is empty and the server
restarts, auto_increment gets reset to 0.



From: mos mo...@fastmail.fm
To: mysql@lists.mysql.com
Sent: Monday, October 3, 2011 3:01 PM
Subject: Re: How  MyISAM handle auto_increment

[quoted text snipped]

Re: How MyISAM handle auto_increment

2011-10-03 Thread Reindl Harald


Am 03.10.2011 23:46, schrieb Angela liu:
 Hi, Folks:
 
 
 I have questions regarding how MyISAM handles the auto_increment column?
 
 1. Is there an auto_increment counter for MyISAM to assign a new value to
 auto_increment columns?
 2. If MyISAM has the counter, is the counter stored in memory or on disk?

It is a table property and you should NOT touch it without good reason,
because it is named AUTO.






Re: How MyISAM handle auto_increment

2011-10-03 Thread mos

At 06:21 PM 10/3/2011, Angela liu wrote:

Thanks, what about if mysqld restart, does auto_increment gets reset  ?


No. The next auto increment value stays with the table. As another 
person already stated, you should never manually change the auto 
increment value on a table that already has rows in it.  MySQL always 
handles the incrementing of an AutoInc field.



I saw this happened to Innodb, if table is empty and server restart, 
auto_incremnet gets reset to 0


This shouldn't happen unless MySQL crashes.  Perhaps you are 
confusing the next Auto Increment value with the Last_Insert_Id() 
(stored in server memory) which has a value only AFTER the user has 
inserted a row. There can be a hundred connections (each with their 
own session) adding rows to the same table at the same time. 
Last_Insert_Id() will retrieve the autoinc value of the record that 
was just inserted for YOUR session.  You won't get someone else's 
AutoInc value, only the autoinc value for the record that YOU just 
inserted.  So if you insert a record, wait 5 minutes, then execute a 
Select Last_Insert_Id(), you will get the correct autoinc value 
that was used when YOUR record was inserted, even though another 
hundred records were added while you waited to execute the Select 
statement. You can never really reliably know what the autoinc value 
will be for the record you are about to insert until AFTER the record 
has been inserted and you execute Last_Insert_Id() to retrieve this 
autoinc value.


 I hope this clears it up.

Mike




From: mos mo...@fastmail.fm
To: mysql@lists.mysql.com
Sent: Monday, October 3, 2011 3:01 PM
Subject: Re: How MyISAM handle auto_increment

[quoted text snipped]




