backup of databases which have a mix of MyISAM- and InnoDB-tables

2014-08-22 Thread Lentes, Bernd
Hi,

i've been reading the documentation the whole day already, but I'm still
confused and unsure what to do.

We have two databases which are important for our work, so both are dumped
hourly. Now I have noticed that each database has a mixture of MyISAM and
InnoDB tables. A backup of this mix does not seem to be easy. Until now it was
dumped using mysqldump --opt -u root --databases mausdb. What I
understand so far is that --opt is not necessary because it is the default. It
includes, among others, --lock-tables, which makes sense for dumping
MyISAM tables. For InnoDB tables --single-transaction is useful. But both are
mutually exclusive
(http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_single-transaction).
The dump of both databases takes about 10 seconds. If the db is locked for
that period, I can live with it.
When I use --single-transaction, only the InnoDB tables are consistent. Using
--lock-tables, the MyISAM tables are dumped consistently. What about
--lock-tables in conjunction with InnoDB tables?
Are they dumped consistently? Are they locked during the dump? As I said,
I could live with a small lock period (< 30 sec). Would --lock-all-tables be
better? From the manpage: "Lock all tables across all databases. This is
achieved by acquiring a global read lock for the duration of the whole dump.
This option automatically turns off --single-transaction and --lock-tables."
I can live with a global read lock for the duration of the whole dump.
--lock-tables causes any pending transactions to be committed implicitly
(http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_single-transaction).
Is that a problem for the InnoDB tables?
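For reference, the --lock-all-tables variant under discussion could be sketched as below; a dry run that only echoes the command. The database name mausdb comes from the message above, the output path is a placeholder.

```shell
#!/bin/sh
# Dry-run sketch: print the dump command instead of executing it.
# --lock-all-tables takes one global read lock for the whole dump, which
# covers MyISAM and InnoDB alike, and automatically turns off
# --single-transaction and --lock-tables.
DB=mausdb
OUT=/backup/${DB}.sql   # placeholder path
echo "would run: mysqldump --lock-all-tables -u root --databases $DB > $OUT"
```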

Our system is:
mysql-5.0.26-12.29.1 on a SLES 10 SP4 64 bit host.


Bernd



--
Bernd Lentes

Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax:   +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg

Die Freiheit wird nicht durch weniger Freiheit verteidigt



Helmholtz Zentrum München
Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH)
Ingolstädter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir´in Bärbel Brumme-Bothe
Geschäftsführer: Prof. Dr. Günther Wess, Dr. Nikolaus Blum, Dr. Alfons Enhsen
Registergericht: Amtsgericht München HRB 6466
USt-IdNr: DE 129521671

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: backup of databases which have a mix of MyISAM- and InnoDB-tables

2014-08-22 Thread Reindl Harald

Am 22.08.2014 um 19:40 schrieb Lentes, Bernd:
 i've been already reading the documentation the whole day, but still confused
 and unsure what to do. [...]
 When I use --single-transaction only the InnoDB-tables are consistent. Using
 --lock-tables the MyISAM-tables are stored consistently. [...]
 Our system is:
 mysql-5.0.26-12.29.1 on a SLES 10 SP4 64 bit host

why so complex?

Just set up replication - you get a lot of benefits:

* in case your master crashes and the FS gets damaged, you have a real-time
backup
* for backups you can stop the slave, tar the whole datadir and start the
slave again
* after it is restarted, the slave pulls any change that happened on the
master during the backup
* the backup is likely smaller than verbose SQL dumps
* you do not need to care about table types and whatnot
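For orientation, the configuration side of such a setup is small. This is a sketch only; the server ids and log name are placeholders, and the full procedure (replication user, initial snapshot, CHANGE MASTER TO) is in the MySQL replication manual.

```ini
# master my.cnf (sketch, placeholder values)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf (sketch)
# [mysqld]
# server-id = 2
```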





Re: backup of databases which have a mix of MyISAM- and InnoDB-tables

2014-08-22 Thread Hartmut Holzgraefe
XtraBackup can handle both InnoDB and MyISAM in a consistent way while
minimizing lock time on MyISAM tables ...

http://www.percona.com/doc/percona-xtrabackup/2.1/
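A minimal invocation might look like the following dry-run sketch (commands are echoed, not executed; paths and credentials are placeholders, and innobackupex is the front-end shipped with XtraBackup 2.1, per the Percona docs above):

```shell
#!/bin/sh
# Dry run: echo the two XtraBackup phases instead of executing them.
# Phase 1 copies the data files; phase 2 applies the log to make the
# copy consistent.
echo "would run: innobackupex --user=root --password=secret /backup/"
echo "would run: innobackupex --apply-log /backup/TIMESTAMP_DIR/"
```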

-- 
Hartmut Holzgraefe, Principal Support Engineer (EMEA)
SkySQL - The MariaDB Company | http://www.skysql.com/




Re: Optimizing InnoDB tables

2014-06-30 Thread Antonio Fernández Pérez
​Hi Johan,

Thanks for your reply. Theoretically, fragmented tables do not offer the best
performance with the InnoDB engine - is that correct or not?

I don't know if it is a problem or not; it is an open question for me. I'm
not sure whether it is atypical behaviour.

Thanks in advance.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-30 Thread Reindl Harald
*please* don't use reply-all on mailing-lists
the list by definition distributes your message

Am 30.06.2014 13:14, schrieb Antonio Fernández Pérez:
 Thanks for your reply. Theorically the fragmented tables not offer the best 
 performance to the InnoDB engine,
 that's correct or not?

practically it doesn't matter, because the hot data should always be in the
innodb_buffer_pool and thus in memory, so the fragmentation doesn't really
matter as long as it is not extreme

you just can't always have unfragmented data, because that would mean space
has to be reserved left and right so that growing data can be filled in there

how much space would you reserve, and how would those holes impact
performance when it comes to reading the data at startup?

anyway: a state with no fragmentation at all is not possible, and seeking a
solution just because some tool flags a number is pointless - a fool with a
tool is still a fool








Re: Optimizing InnoDB tables

2014-06-27 Thread Antonio Fernández Pérez
​Hi Andre,

Thanks for your reply. I have checked the link and my configuration.
innodb_file_per_table is enabled, and in the data directory a set of files
appears for each table.

Any ideas?

Thanks in advance.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-27 Thread Reindl Harald

Am 27.06.2014 09:48, schrieb Antonio Fernández Pérez:
 Thanks for your reply. I have checked the link and my configuration.
 Innodb_file_per_table is enabled and in data directory appears a set of
 files by each table.
 
 Any ideas?

ideas for what?

* which files don't get shrunk (ls -lha)
* what evidence do you have that they should?
* show create table
* what *exactly* do you enter in your mysql client





Re: Optimizing InnoDB tables

2014-06-27 Thread Antonio Fernández Pérez
​Hi Reindl,

Thanks for your attention.

Following up on the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think this parameter does not directly
affect fragmented tables in InnoDB (in this case).
I would like to know, if possible, why a fragmented table appears fragmented
again right after executing an ANALYZE TABLE command on it.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-27 Thread shawn l.green

Hello Antonio,

On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:

 I would like to know, if is possible, why after execute an analyze table
 command on some fragmented table, after that, appears fragmented again.



InnoDB operates by storing multiple rows on pages. Each page is 16K.
Of that, 1K is reserved for metadata (a tiny index showing where on the
page each row sits, links to various other locations, checksums, ...).
The remaining 15K can be used for your actual data.

If you delete a row of data, that space on the page is made available, but
the page does not change size. It is always 16K.

InnoDB stores data in the order of your PK. If you need to insert a new
row between other rows on a 'full' page, then the page needs to split.
This creates 2 new pages that are each about 50% full.

If two adjacent pages (A and B) become too 'empty', they can be combined
into one page. This puts the data from both pages onto one of them (page
A, for example). Page B then becomes empty and available for any other
purpose.

Is that what you are calling 'fragmentation'?
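To make the page arithmetic above concrete, a small sketch (the average row length is an invented example value):

```shell
#!/bin/sh
# Worked numbers for the InnoDB page layout described above.
PAGE=16384                  # a 16K page
META=1024                   # ~1K reserved for page metadata
USABLE=$((PAGE - META))     # 15K left for row data
AVG_ROW=100                 # hypothetical average row length in bytes
echo "rows per full page: $((USABLE / AVG_ROW))"
echo "rows per page after a 50/50 split: $((USABLE / AVG_ROW / 2))"
```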

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Optimizing InnoDB tables

2014-06-27 Thread Johan De Meersman
- Original Message -
 From: Antonio Fernández Pérez antoniofernan...@fabergames.com
 Subject: Re: Optimizing InnoDB tables
 
 I would like to know, if is possible, why after execute an analyze table
 command on some fragmented table, after that, appears fragmented again.

Simple question: why do you believe this is a problem?


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Optimizing InnoDB tables

2014-06-25 Thread Antonio Fernández Pérez
​Hi again,

I have enabled innodb_file_per_table (its value is ON).
It's still not clear to me what I should do ...

Thanks in advance.

Regards,

Antonio. ​


Re: Optimizing InnoDB tables

2014-06-25 Thread Johan De Meersman
- Original Message -
 From: Antonio Fernández Pérez antoniofernan...@fabergames.com
 Subject: Re: Optimizing InnoDB tables
 
 I have enabled innodb_file_per_table (Its value is on).
 I don't have clear what I should to do ...

Then all new tables will be created in their own tablespace from now on. It's
easy to convert an existing table, too: simply do alter table yourtable
engine=innodb - but that will of course take a while on large tables.

The problem, however, is that there is no way to shrink the main tablespace
afterwards. Your tables will all be in their own space, but ibdata1 will
still be humongous, even though it's close to empty. Don't just delete it,
btw, as it still contains metadata.

The only way to get rid of it is to export ALL InnoDB tables, shut down
mysqld, delete all InnoDB files (ib_logfile0/1, ibdata1 etc., but also
db/*.ibd and the associated db/*.frm files); then start the server (it'll
recreate ibdata1 at the size specified in your my.cnf, so shrink it there,
too, if required) and then import the lot again.

Note that, if you have the space, you don't *have* to do that - the huge
ibdata1 file doesn't do any harm; but do consider that as your dataset grows
over the years, it'll become more and more of a bother to actually do it.

Make sure you have backups when attempting :-)
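That procedure could be sketched as follows - a dry run that only prints each step. Paths assume a default datadir of /var/lib/mysql, the dump filename is a placeholder, and note that the .frm glob would also match MyISAM files, so in practice you must restrict it to the InnoDB tables.

```shell
#!/bin/sh
# Dry-run sketch of the ibdata1 shrink procedure: each step is echoed,
# never executed.
run() { echo "step: $*" ; }

run 'mysqldump --all-databases -u root > /backup/all.sql'
run '/etc/init.d/mysql stop'
run 'rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1'
run 'rm /var/lib/mysql/db/*.ibd /var/lib/mysql/db/*.frm'  # InnoDB per-table files only
run '/etc/init.d/mysql start'   # recreates ibdata1 at the size set in my.cnf
run 'mysql -u root < /backup/all.sql'
```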

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Optimizing InnoDB tables

2014-06-25 Thread Andre Matos
Have a look at this:

https://rtcamp.com/tutorials/mysql/enable-innodb-file-per-table/

--
Andre Matos
andrema...@mineirinho.org


On Jun 25, 2014, at 2:22 AM, Antonio Fernández Pérez
antoniofernan...@fabergames.com wrote:

 I have enabled innodb_file_per_table (Its value is on).
 I don't have clear what I should to do ...





Optimizing InnoDB tables

2014-06-24 Thread Antonio Fernández Pérez
​Hi list,

I was trying to optimize the InnoDB tables. I have executed the following
query to detect which tables are fragmented:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
AND Data_free > 0;

After that, I have seen that there are 49 fragmented tables. On one
table, I have executed optimize table table_name; and analyze table
table_name;. The result is the same: the table remains fragmented.

Any ideas? I have followed the mysqltuner recommendations ...

Thanks in advance.

Regards,

Antonio.


Re: Optimizing InnoDB tables

2014-06-24 Thread Wagner Bianchi
Hi Antonio, how are you?

What MySQL version are you running? Have you tried ALTER TABLE x
ENGINE=InnoDB?

-- WB, MySQL Oracle ACE

-- WB, MySQL Oracle ACE

 Em 24/06/2014, às 08:03, Antonio Fernández Pérez
 antoniofernan...@fabergroup.es escreveu:

 I was trying to optimize the InnoDB tables. [...] The result is the same,
 the table continuos fragmented.




Re: Optimizing InnoDB tables

2014-06-24 Thread Antonio Fernández Pérez
​Hi Wagner,

I'm running MySQL Percona Server 5.5.30, 64 bits. No, I haven't tried
ALTER TABLE (doesn't ANALYZE do that for InnoDB tables, or not?).

Thanks in advance.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-24 Thread shawn l.green

Hello Antonio,

On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:

 I was trying to optimize the InnoDB tables. [...] The result is the same,
 the table continuos fragmented.

 Any ideas? I have followed the mysqltuner recomendations ...



It makes a huge difference if the tables you are trying to optimize have 
their own tablespace files or if they live inside the common tablespace.


http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Optimizing InnoDB tables

2014-06-24 Thread Reindl Harald


Am 24.06.2014 21:07, schrieb shawn l.green:
 It makes a huge difference if the tables you are trying to optimize have 
 their own tablespace files or if they live
 inside the common tablespace.
 
 http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table

which is the most stupid default in the case of InnoDB, and only survivable
without a lot of work for people who realized that *before* starting
operations and enabled innodb_file_per_table from the very beginning

having defaults which can't be changed later without a complete re-import of
the data, and which prevent you from ever freeing the disk space of long-ago
deleted data, is the most wrong thing a software developer can do





Re: Optimizing InnoDB tables

2014-06-24 Thread shawn l.green

Hello Reindl,

On 6/24/2014 3:29 PM, Reindl Harald wrote:



Am 24.06.2014 21:07, schrieb shawn l.green:

 It makes a huge difference if the tables you are trying to optimize have
 their own tablespace files or if they live inside the common tablespace.

which is the most stupid default in case of innodb [...] the most wrong
thing a software developer can do


The tables can be moved from the common tablespace into their own
tablespaces at any time after the option is enabled. The space they once
occupied within the primary tablespace will remain, and it will be marked
as 'available' for any general purpose (such as the UNDO log).

The only way to shrink the primary tablespace is, as you correctly
described, through a dump/restore of your data. The process to resize the
primary tablespace (such as to shrink it) must be followed precisely or
problems will result.

http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Manuel Arostegui
2014-05-15 14:26 GMT+02:00 Antonio Fernández Pérez
antoniofernan...@fabergroup.es:

 So, with 8GB of data in one table, what are your advices to follow? [...]
 maybe de only possible solution is increase the resources (RAM).


Adding more RAM will only save you for a few weeks or months, until the data
no longer fits in memory. You will face the same problem again soon (if your
data is, and will keep, growing).
There will be a point where you just can't buy more and better hardware
(actually you kind of can, but you will spend loads of money and might end
up with nice servers doing nothing, because supporting more memory means
upgrading the motherboard too).

You should give your application some thought and start considering
NoSQL/table sharding/partitioning/archiving.
Maybe it is too late, but before needing another hardware upgrade, you
should think about a solution that lets you keep growing without spending
everything on hardware (unless you have unlimited money).

Good luck!
Manuel.


Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Johan De Meersman

- Original Message -
 From: Manuel Arostegui man...@tuenti.com
 Subject: Re: Big innodb tables, how can I work with them?
 
 noSQL/table sharding/partitioning/archiving.

I keep wondering how people come to believe that NoSQL solutions magically
don't need RAM to work. Nearly all of them slow down to a crawl too, many
even worse than an SQL database, as soon as the full or working set no
longer fits in memory.

Don't get me wrong - they have certain benefits and definite use cases, but
it's time people stopped presenting them as a magic bullet. They require
understanding and work, just like any other technology.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Manuel Arostegui
2014-05-19 11:49 GMT+02:00 Johan De Meersman vegiv...@tuxera.be:


 - Original Message -
  From: Johan De Meersman vegiv...@tuxera.be

  I keep wondering how people believe that NoSQL solutions magically don't
  need RAM to work. [...]


I was thinking of their distributed nature, as it might speed up reads :-)
We do have a huge NoSQL cluster here at work, and it certainly needs a lot
of RAM.

Manuel


Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Andrew Moore
What kind of queries is this table serving? 8GB is not a huge amount of
data at all, and IMO it's not enough to warrant sharding.


On Thu, May 15, 2014 at 1:26 PM, Antonio Fernández Pérez
antoniofernan...@fabergroup.es wrote:

 So, with 8GB of data in one table, what are your advices to follow? [...]
 maybe de only possible solution is increase the resources (RAM).



Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Antonio Fernández Pérez
​Hi,

Thanks for your replies.

In our case, we can't implement a NoSQL solution. That would require
modifying/checking our whole application and all services (including
FreeRADIUS, which I'm not sure is compatible).

Andrew, I have heard about people that have much more data than me. I know
that MySQL supports this amount, but in this case, and thinking about the
future, I have this problem with my architecture: how can I grow in
database servers without deleting rows from the tables?
I have checked for slow queries, and right now there aren't any.
These tables are serving queries from the FreeRADIUS service. For example,
SUMs, COUNTs, normal SELECTs ... always with a WHERE condition.

Excuse me, what is the meaning of IMO?

Thanks.

Regards,

Antonio.​


Big innodb tables, how can I work with them?

2014-05-15 Thread Antonio Fernández Pérez
​

​
​Hi,

I have some tables in my server's database that are too big and produce
slow queries, even with correct indexes created.

For my application, it's necessary to have all the data, because we run an
authentication process with RADIUS users (AAA protocol) to determine
whether a user may navigate the Internet or not (depending on the time of
all his sessions).

So, with 8GB of data in one table, what is your advice? Fragmentation and
sharding are discarded because we are working with disk arrays, so they
don't apply. Another option is to delete rows, but in this case I can't.
On the other hand, maybe the only possible solution is to increase the
resources (RAM).

Any ideas?

Thanks in advance.

Regards,

Antonio.​


Re: Big innodb tables, how can I work with them?

2014-05-15 Thread Reindl Harald


Am 15.05.2014 14:26, schrieb Antonio Fernández Pérez:
 So, with 8GB of data in one table, what are your advices to follow? [...]
 maybe de only possible solution is increase the resources (RAM)

the rule of thumb is innodb_buffer_pool_size = database size, or at least
as much RAM that the frequently accessed data always stays in the pool
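As a my.cnf sketch of that rule of thumb (the value is a placeholder; size it to your data set, or at least your hot data, within available RAM):

```ini
[mysqld]
# placeholder value: ideally >= working set, up to the size of the database
innodb_buffer_pool_size = 8G
```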





RE: fragmentation in innodb tables

2013-05-21 Thread Rick James
The "fragmented" message is bogus. It says that to everyone. Almost no one
needs to OPTIMIZE their tables.

 -Original Message-
 From: Miguel González [mailto:miguel_3_gonza...@yahoo.es]
 Sent: Tuesday, May 21, 2013 2:03 PM
 To: mysql@lists.mysql.com
 Subject: fragmentation in innodb tables

 I have run mysqltuner.pl and It says I have fragmentation in my
 tables. [...] I have run either optimize table and alter table mytable
 engine=INNODB and both commands don't end up shrinking the space in the
 idb files. The script above reports the same fragmentation.





fragmentation in innodb tables

2013-05-21 Thread Miguel González

Dear all,

I'm a newbie in MySQL, so bear with my questions.

I have run mysqltuner.pl and it says I have fragmentation in my
tables. Searching around I found this script, which reports the
fragmentation in my tables:



#!/bin/sh

echo -n "MySQL username: " ; read username
echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo

mysql -u $username -p$password -NBe "SHOW DATABASES;" | grep -v \
'lost+found' | while read database ; do
mysql -u $username -p$password -NBe "SHOW TABLE STATUS;" $database | \
while read name engine version rowformat rows avgrowlength datalength \
maxdatalength indexlength datafree autoincrement createtime updatetime \
checktime collation checksum createoptions comment ; do

if [ "$datafree" -gt 0 ] ; then
fragmentation=$(($datafree * 100 / $datalength))
echo "$database.$name is $fragmentation% fragmented."
mysql -u $username -p$password -NBe "OPTIMIZE TABLE $name;" $database
fi
done
done
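As a worked example of the percentage computation in the script above, with made-up SHOW TABLE STATUS numbers:

```shell
#!/bin/sh
# Data_length / Data_free values are invented for illustration.
datalength=52428800   # Data_length: 50 MiB allocated to data
datafree=10485760     # Data_free: 10 MiB allocated but unused
fragmentation=$(($datafree * 100 / $datalength))
echo "example.table is $fragmentation% fragmented."   # prints "example.table is 20% fragmented."
```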


I have run it and it reports that several of my InnoDB tables are fragmented.

I have read several articles and I'm a bit confused. I have had
innodb_file_per_table enabled from the very beginning:

# INNODB #

innodb_log_files_in_group  = 2
innodb_log_file_size   = 512M
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table  = 1
innodb_buffer_pool_size= 2G

I have run both optimize table and alter table mytable engine=INNODB,
and neither command ends up shrinking the space in the .ibd files. The
script above reports the same fragmentation.


Regards,

Miguel













RE: problems with INNODB tables

2012-05-02 Thread Rick James
Some mixture. Perhaps 35% of RAM for the buffer_pool and 10% for the
key_buffer. It depends on which needs more caching.

Note: the key_buffer does not need to be bigger than the total of all MyISAM
indexes (Index_length in SHOW TABLE STATUS, or the size of the .MYI files).
The buffer_pool does not need to be bigger than the total of data+index for
the InnoDB files.
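Worked for a hypothetical 8 GiB machine, that starting-point split comes out as:

```shell
#!/bin/sh
# The 35% / 10% split from the note above; 8 GiB is an assumed machine size.
RAM_MB=8192
echo "innodb_buffer_pool_size ~ $((RAM_MB * 35 / 100)) MB"
echo "key_buffer_size         ~ $((RAM_MB * 10 / 100)) MB"
```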

 -Original Message-
 From: Malka Cymbalista [mailto:malki.cymbali...@weizmann.ac.il]
 Sent: Wednesday, April 25, 2012 3:15 AM
 To: Rick James; Andrés Tello
 Cc: mysql@lists.mysql.com; Shlomit Afgin; Ronen Hayun
 Subject: RE: problems with INNODB tables
 
 Thanks for your answer. I read http://mysql.rjweb.org/doc.php/memory
 where it tells you to do one thing if using MyISAM tables and another
 if using InnoDB tables. We are using both. Any suggestions?
 Thanks for any help.
 
 Malki Cymbalista
 Webmaster, Weizmann Institute of Science
 malki.cymbali...@weizmann.ac.il
 08-9343036
 
 
 -Original Message-
 From: Rick James [mailto:rja...@yahoo-inc.com]
 Sent: Monday, April 23, 2012 9:42 PM
 To: Andrés Tello; Malka Cymbalista
 Cc: mysql@lists.mysql.com; Shlomit Afgin; Ronen Hayun
 Subject: RE: problems with INNODB tables
 
 Check your memory usage according to
 http://mysql.rjweb.org/doc.php/memory
 
  -Original Message-
  From: Andrés Tello [mailto:mr.crip...@gmail.com]
  Sent: Monday, April 23, 2012 9:00 AM
  To: Malka Cymbalista
  Cc: mysql@lists.mysql.com; Shlomit Afgin; Ronen Hayun
  Subject: Re: problems with INNODB tables
 
  Weird, I use InnoDB a lot with no issues; I even bravely kill the mysql
  process with pkill -9 -f mysql.

  I suppose it's the way Drupal is programmed.
  PHP opens and closes database connections each time a webpage with db
  access is served.
  When a PHP execution ends and the Apache webserver has fulfilled the
  HTTP request, PHP memory is freed and the connections are closed ...
  UNLESS you are using a memcached db connection, which I doubt since
  Drupal doesn't require one, or persistent connections, which I also
  doubt, because persistent database connections aren't recommended with
  InnoDB tables ...

  The MySQL server by default handles 100 connections; if you hit that
  limit, you need to fine-tune the number of connections allowed.

  show full processlist can give you a better idea of what is going on;
  connections in the Sleep status are open connections with currently no
  transactions ...

  I never use a script-based stop; I always use mysqladmin -u root -p -h
  localhost shutdown, which properly tells mysql to flush tables and
  terminate.

  I can almost bet that you are using Ubuntu ... Ubuntu has sometimes
  given me very hard times because of the edgy code they tend to use, the
  latest ext4, and so on ... what can you tell us about that?

  How much memory do you have?
  How many concurrent apache/php users do you have?
  Can you provide more quantitative data, please? Hardware, PHP version,
  distro, kernel ...

  Cheers...
 
 
 
  To start, 100 process is quite a lot, something isn't fine. Each time
 
  On Mon, Apr 23, 2012 at 9:10 AM, Malka Cymbalista 
  malki.cymbali...@weizmann.ac.il wrote:
 
   We are running MySQL version 5.0.45 on a Linux machine. Over the
   past few months we have been having several problems:
  
   1.   Our mysql processes have increased the memory used from
  about .3%
   per process to 8% per process
  
   2.   We sometimes can have over 100 processes running which
  brings the
   machine to its knees and we have to stop and start MySQL in order
 to
   kill all the processes. We think that maybe the processes are not
   finishing normally and are just hanging around.
  
   3.   The machine is a web server and in the last few months we
  are
   moving over to drupal 7 to build our sites and Drupal 7 requires
  INNODB
   tables.   Sometimes, when we restart MySQL using the commands
   /etc/init.d/mysql stop and /etc/init.d/mysql start our sites that
  were
   built in drupal 7 do not come up.  In order for the INNODB tables
 to
   work, we have to stop mysql, rename the ibdata1 file, copy it back
   to
   ibdata1 and then restart mysql. Otherwise the INNODB tables are not
  accessable.
  
  
  
   In the past all our tables were MYIASM.  Our problems started as we
   started using more and more INNODB tables. Is there anything
 special
   that has to be done to configure MySQL when using INNODB tables?
   We clearly have a problem  but we have no idea where to start
  looking. Our
   error logs don't show anything.   If anyone has any suggestions, we
  will be
   happy to hear them.
   We are considering hiring a consultant who is an expert in MySQL.
 We
   are in Israel and we are open to suggestions.
  
   Thanks for any help.
  
   Malki Cymbalista
   Webmaster, Weizmann Institute of Science
  
 
 malki.cymbali...@weizmann.ac.ilmailto:malki.cymbali...@weizmann.ac.il
   
   08-9343036
  
  

--
MySQL General Mailing List

RE: problems with INNODB tables

2012-04-25 Thread Malka Cymbalista
Thanks for your answer. I read http://mysql.rjweb.org/doc.php/memory where it 
tells you to do one thing if using MyISAM tables and another if using InnoDB 
tables.  We are using both. Any suggestions?
Thanks for any help.

Malki Cymbalista
Webmaster, Weizmann Institute of Science
malki.cymbali...@weizmann.ac.il
08-9343036


-Original Message-
From: Rick James [mailto:rja...@yahoo-inc.com] 
Sent: Monday, April 23, 2012 9:42 PM
To: Andrés Tello; Malka Cymbalista
Cc: mysql@lists.mysql.com; Shlomit Afgin; Ronen Hayun
Subject: RE: problems with INNODB tables

Check your memory usage according to
http://mysql.rjweb.org/doc.php/memory


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: problems with INNODB tables

2012-04-25 Thread Andrés Tello
switch to innodb...

and use innodb_file_per_table

I use both, but I try to use MyISAM for catalogs.

InnoDB and MyISAM are truly different engines; they do things completely
differently. For example, with MyISAM you have parameters to configure the
size of the memory for the indexes, and several others, while most of
InnoDB's performance is bound to the InnoDB buffer pool, and with the newest
MySQL versions you can have several InnoDB buffer pool instances, lowering your mutex waits a lot...

Also, you can switch from MyISAM to InnoDB quickly. For that type of task
I do a mysqldump with tab-formatted text, because it gives 2 files per
table: one file with the SQL to create the table, and another tab-delimited
file with all the data of that table, to be used with mysqlimport.
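
That workflow might look like the following sketch (the database name "mydb" and
the dump directory are placeholders, not from the thread; note that --tab makes
the *server* write the .txt files, so it needs write access to that directory):

```shell
# Dump each table as <table>.sql (CREATE TABLE) plus <table>.txt (tab-delimited data)
mysqldump --tab=/tmp/dump -u root -p mydb

# Edit the .sql files to say ENGINE=InnoDB and re-create the tables, then
# bulk-load the data back from the tab-delimited files:
mysqlimport -u root -p mydb /tmp/dump/*.txt
```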

For a proper recommendation, we would need to know much more about the
system using the database, and some statistics...

What is the database used for?




On Wed, Apr 25, 2012 at 5:14 AM, Malka Cymbalista 
malki.cymbali...@weizmann.ac.il wrote:

 Thanks for your answer. I read http://mysql.rjweb.org/doc.php/memory where it
 tells you to do one thing if using MyISAM tables and another if
 using InnoDB tables.  We are using both. Any suggestions?
 Thanks for any help.

 Malki Cymbalista
 Webmaster, Weizmann Institute of Science
 malki.cymbali...@weizmann.ac.il
 08-9343036



Re: problems with INNODB tables

2012-04-23 Thread Andrés Tello
Weird, I use InnoDB a lot with no issues; I even kill the mysql
process bravely with pkill -9 -f mysql

I suppose it's the way Drupal is being programmed.
PHP opens and closes database connections each time a web page with db access
is served.
When a PHP execution ends and the Apache web server has fulfilled the http
request, PHP memory is freed and the connections are closed... UNLESS you
are using a cached db connection, which I doubt, since Drupal doesn't
require one, or persistent connections, which I also doubt, because
persistent database connections aren't recommended for InnoDB tables...

The MySQL server by default handles 100 connections; if you get to that
limit you need to fine-tune the number of connections allowed.

SHOW FULL PROCESSLIST can give you a better idea of what is going on;
connections in the Sleep state are open connections with no transaction
currently running...
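
On newer servers the same information is also queryable; a sketch
(information_schema.PROCESSLIST exists from MySQL 5.1 on, so on the 5.0 server
in this thread you would stick with SHOW FULL PROCESSLIST):

```sql
-- How many connections are idle ("Sleep") versus doing work?
SELECT COMMAND, COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY COMMAND
ORDER BY connections DESC;
```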

I never use script based stop, I always use
mysqladmin -u root -p -h localhost shutdown
which properly tells mysql to flush tables and terminate.

I can almost bet that you are using Ubuntu... Ubuntu has sometimes given me
very hard times because of the edgy code they tend to use, the latest ext4
version, and so on... what can you tell us about that?

How much memory do you have?
How many concurrent apache/php users do you have?
Can you provide more quantitative data please? Hardware, php version, distro,
kernel...

Cheers...



To start, 100 processes is quite a lot; something isn't right.

On Mon, Apr 23, 2012 at 9:10 AM, Malka Cymbalista 
malki.cymbali...@weizmann.ac.il wrote:

 We are running MySQL version 5.0.45 on a Linux machine. Over the past few
 months we have been having several problems:

 1.   Our mysql processes have increased the memory used from about .3%
 per process to 8% per process

 2.   We sometimes can have over 100 processes running which brings the
 machine to its knees and we have to stop and start MySQL in order to kill
 all the processes. We think that maybe the processes are not finishing
 normally and are just hanging around.

 3.   The machine is a web server and in the last few months we are
 moving over to drupal 7 to build our sites and Drupal 7 requires INNODB
 tables.   Sometimes, when we restart MySQL using the commands
 /etc/init.d/mysql stop and /etc/init.d/mysql start our sites that were
 built in drupal 7 do not come up.  In order for the INNODB tables to work,
 we have to stop mysql, rename the ibdata1 file, copy it back to ibdata1 and
 then restart mysql. Otherwise the INNODB tables are not accessible.



 In the past all our tables were MyISAM.  Our problems started as we
 started using more and more INNODB tables. Is there anything special that
 has to be done to configure MySQL when using INNODB tables?
 We clearly have a problem  but we have no idea where to start looking. Our
 error logs don't show anything.   If anyone has any suggestions, we will be
 happy to hear them.
 We are considering hiring a consultant who is an expert in MySQL. We are
 in Israel and we are open to suggestions.

 Thanks for any help.

 Malki Cymbalista
 Webmaster, Weizmann Institute of Science
 malki.cymbali...@weizmann.ac.ilmailto:malki.cymbali...@weizmann.ac.il
 08-9343036




RE: problems with INNODB tables

2012-04-23 Thread Rick James
Check your memory usage according to
http://mysql.rjweb.org/doc.php/memory





Re: problem with INNODB tables

2012-03-15 Thread Reindl Harald


Am 15.03.2012 17:31, schrieb Malka Cymbalista:
 We are running MySQL version 5.0.45 on a Linux machine.  Most of our tables 
 are MyISAM but we have recently installed drupal 7 and drupal 7 requires 
 INNODB tables. Every now and then when we restart MySQL using the commands 
 /etc/init.d/mysql stop and /etc/init.d/mysql start our sites that were built 
 in drupal 7 do not come up.  After much trial and error we came up with the 
 following solution:
 We stop mysql
 We remove the files ib_logfile0 and ib_logfile1
 We rename ibdata1 to ibdata1.old (mv ibdata1 ibdata1.old)
 We copy ibdata1.old back to ibdata1 (cp -a ibdata1.old ibdata1)
 We start mysql
 And everything is fine and the sites that use INNODB tables are fine.
 We clearly have a problem but we have no idea where to start looking. 

what about starting with a look at the error log?








Fwd: Question relating to transactions on innodb tables.

2011-11-25 Thread Machiel Richards - Gmail



Hi All

Just a quick question relating to the use of transactions on 
innodb tables.


We are doing some archiving on some innodb tables; however, 
there seem to be some issues somewhere in the process, with data not 
being updated accordingly.


We would like to make use of transactions for this, in order 
to allow us to roll back the changes if the after-checks do not correspond.


What I am trying to find out is whether a transaction will 
cause the locks to be handled as table-level locks or as row-level locks?


Regards
Machiel
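
For what it's worth, InnoDB takes row-level locks inside a transaction as long
as the statements can use an index (a full scan can end up locking every row it
examines). A minimal sketch of the archiving pattern described above; the table
and column names are invented for illustration:

```sql
-- Hypothetical archiving step: lock only the rows being archived.
START TRANSACTION;

-- Row-level locks on the matching rows (assumes id is indexed);
-- other sessions can still modify rows outside this range:
SELECT * FROM audit_log WHERE id BETWEEN 1000 AND 1999 FOR UPDATE;

-- ... copy the rows to the archive table, run the after-checks ...

-- Undo everything if the checks do not correspond:
ROLLBACK;
-- or COMMIT; once the checks pass
```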



Re: Question relating to transactions on innodb tables.

2011-11-25 Thread Reindl Harald


Am 25.11.2011 14:20, schrieb Machiel Richards - Gmail:
 Just a quick question relating to the use of transactions on innodb tables.
 
 We are doing some archiving on some innodb tables, however there seems to be 
 some issues somewhere in the
 process with data not being updated accordingly.
 
 We would like to make use of transactions for this , in order to allow us to 
 roll back on changes if the
 after checks does not correspond.
 
 What I am trying to find out is whether a transaction will cause the locks to 
 be handled as table locks or
 will it be row level locks?

which locks are you speaking about?

a transaction is a transaction
a lock is a lock





Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-25 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 6:43 PM, Gavin Towey gto...@ffn.com wrote:

 If you show the EXPLAIN SELECT .. output, and the table structure, someone
 will be able to give a more definite answer.


Thanks for the reply Gavin. I actually did place this info in my very first
message on this thread, along with my basic table structure and server
version. Myself and others have just stopped keeping the full,
deeply-nested, quoted thread inside all subsequent messages which is why you
probably haven't seen it.

However, here is the EXPLAIN SELECT from the first message (reformatted for
email):

select_type: SIMPLE
table: recipients
type: ref
possible_keys: messages_fk, employee_idx
key: employee_idx
key_len: 5
ref: const
rows: 222640
Extra: Using where; Using temporary; Using filesort

select_type: SIMPLE
table: messages
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: email_archive.recipients.message_id
rows: 1
Extra:

Anyhow, having now copied these tables to another server (MySQL 5.1) and
done some tests (bumping up innodb_buffer_pool_size and playing with
innodb_flush_log_at_trx_commit for my writes, and a few other knobs), it is
simply that these somewhat large tables need lots of RAM to perform well,
just as Reindl Harald originally pointed out.
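
For posterity, the knobs mentioned above live in my.cnf; a sketch (the values
are illustrative placeholders to size to the machine, not recommendations from
this thread):

```ini
[mysqld]
# Cache for InnoDB data and index pages; the single biggest InnoDB knob.
# Often sized to a large fraction of RAM on a dedicated database host.
innodb_buffer_pool_size = 2G

# 1 = flush the log at every commit (safest, slowest);
# 2 = flush once per second (faster writes, may lose ~1s of commits on a crash).
innodb_flush_log_at_trx_commit = 2
```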

Thanks again for the help everyone!

-- 
Kendall Gifford
zettab...@gmail.com


Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Joerg Bruehe
Hi everybody!


Shawn Green (MySQL) wrote:
 On 1/21/2011 14:21, Kendall Gifford wrote:
 Hello everyone, I've got a database on an old Fedora Core 4 server
 running
 MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
 has
 just two (InnoDB) tables:

 messages (approx 2.5 million records)
 recipients (approx 6.5 million records)

 [[ ... see the original post for the schema details ... ]]


 I have the following query that is just too slow:

 SELECT messages.* FROM messages
 INNER JOIN recipients ON recipients.message_id = messages.id
 WHERE recipients.employee_id = X
 GROUP BY messages.id
 ORDER BY sent_at DESC
 LIMIT 0, 25;

 This takes about 44 seconds on average. [[...]]

 
 You need to get rid of the GROUP BY to make this go faster. You can do
 that by running two queries, one to pick the list of unique
 recipients.message_id values that match your where condition then
 another to actually retrieve the message data. [[...]]
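
A sketch of the two-query rewrite Shawn describes, using the table and column
names from the thread (X stands in for the employee id, as above):

```sql
-- 1) Distinct message ids for the employee (replaces the GROUP BY):
SELECT DISTINCT message_id
FROM recipients
WHERE employee_id = X;

-- 2) Fetch those messages, newest first (the id list comes from step 1):
SELECT *
FROM messages
WHERE id IN ( ... )
ORDER BY sent_at DESC
LIMIT 0, 25;
```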

I don't want to contradict Shawn, but currently I fail to see the need
for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

can return only one row, unless there are multiple recipients records
for the same values of message_id and employee_id.

I don't know whether that can happen in the poster's application, and
whether it would cause trouble if the result line would occur multiple
times.


Regards,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
ORACLE Deutschland B.V.  Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603




Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe joerg.bru...@oracle.comwrote:

 Hi everybody!



 I don't want to contradict Shawn, but currently I fail to see the need
 for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

 can return only one row, unless there are multiple recipients records
 for the same values of message_id and employee_id.

 I don't know whether that can happen in the poster's application, and
 whether it would cause trouble if the result line would occur multiple
 times.


In my application, there CAN in fact be several recipients records with
both the same message_id foreign key value AND the same employee_id
value (some employees may be a recipient of a message several times over via
alternative addresses and/or aliases). However, as I rework things, I could
probably rework the application logic, nuke the GROUP BY, and just cope, in code,
with these extra messages records in my result set. (Just FYI, the SQL
query is simply the default query as created by Rails or, more specifically,
ActiveRecord 2.3.9, which I can/will be optimizing.)

I will additionally be moving this database to a new server. However, for
academic interest, I'll see if I can make time to post the query time(s)
once I change the app, before moving the database to a new (and better
configured) server.

Thanks for the help everybody.



 Regards,
 Jörg


-- 
Kendall Gifford
zettab...@gmail.com


Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford zettab...@gmail.comwrote:




 In my application, there CAN in fact be several recipients records with
 both the same message_id foreign key value AND the same employee_id
 value (some employees may be a recipient of a message several times over via
 alternative addresses and/or aliases). However, as I rework things, I could
 probably rework application logic nuke the GROUP BY and just cope, in code,
 with these extra messages records in my result set. (Just FYI, the SQL
 query is simply the default query as created by rails or, more specifically,
 ActiveRecord 2.3.9 which I can/will-be optimizing).

 I will additionally be moving this database to a new server. However, for
 academic interest, I'll see if I can make time to post the query time(s)
 once I change the app, before moving the database to a new (and better
 configured) server.


Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

-- 
Kendall Gifford
zettab...@gmail.com


RE: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Gavin Towey
If you show the EXPLAIN SELECT .. output, and the table structure, someone will 
be able to give a more definite answer.

-Original Message-
From: Kendall Gifford [mailto:zettab...@gmail.com]
Sent: Monday, January 24, 2011 2:29 PM
To: mysql@lists.mysql.com
Subject: Re: Slow query on MySQL4 server doing simple inner join of two InnoDB 
tables

On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford zettab...@gmail.comwrote:



 On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe joerg.bru...@oracle.comwrote:

 Hi everybody!


 Shawn Green (MySQL) wrote:
  On 1/21/2011 14:21, Kendall Gifford wrote:
  Hello everyone, I've got a database on an old Fedora Core 4 server
  running
  MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
  has
  just two (InnoDB) tables:
 
  messages (approx 2.5 million records)
  recipients (approx 6.5 million records)
 
  [[ ... see the original post for the schema details ... ]]
 
 
  I have the following query that is just too slow:
 
  SELECT messages.* FROM messages
  INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X
  GROUP BY messages.id
  ORDER BY sent_at DESC
  LIMIT 0, 25;
 
  This takes about 44 seconds on average. [[...]]
 
 
  You need to get rid of the GROUP BY to make this go faster. You can do
  that by running two queries, one to pick the list of unique
  recipients.message_id values that match your where condition then
  another to actually retrieve the message data. [[...]]

 I don't want to contradict Shawn, but currently I fail to see the need
 for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

 can return only one row, unless there are multiple recipients records
 for the same values of message_id and employee_id.

 I don't know whether that can happen in the poster's application, and
 whether it would cause trouble if the result line would occur multiple
 times.


 In my application, there CAN in fact be several recipients records with
 both the same message_id foreign key value AND the same employee_id
 value (some employees may be a recipient of a message several times over via
 alternative addresses and/or aliases). However, as I rework things, I could
 probably rework the application logic, nuke the GROUP BY, and just cope, in
 code, with these extra message records in my result set. (Just FYI, the SQL
 query is simply the default query as generated by Rails or, more specifically,
 ActiveRecord 2.3.9, which I can/will be optimizing).

 I will additionally be moving this database to a new server. However, for
 academic interest, I'll see if I can make time to post the query time(s)
 once I change the app, before moving the database to a new (and better
 configured) server.


Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

--
Kendall Gifford
zettab...@gmail.com

IMPORTANT: This email message is intended only for the use of the individual to 
whom, or entity to which, it is addressed and may contain information that is 
privileged, confidential and exempt from disclosure under applicable law. If 
you are NOT the intended recipient, you are hereby notified that any use, 
dissemination, distribution or copying of this communication is strictly 
prohibited.  If you have received this communication in error, please reply to 
the sender immediately and permanently delete this email. Thank you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Kendall Gifford
Hello everyone, I've got a database on an old Fedora Core 4 server running
MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
just two (InnoDB) tables:

messages (approx 2.5 million records)
recipients (approx 6.5 million records)

These track information about email messages. Each message has many
recipient records. The structure of the two tables (omitting irrelevant data
fields) are as follows:

+---------+------------------+------+-----+---------------------+----------------+
| Field   | Type             | Null | Key | Default             | Extra          |
+---------+------------------+------+-----+---------------------+----------------+
| id      | int(10) unsigned |      | PRI | NULL                | auto_increment |
| sent_at | datetime         |      | MUL | 0000-00-00 00:00:00 |                |
| ... OTHER FIELDS OMITTED FOR BREVITY ...                                       |
+---------+------------------+------+-----+---------------------+----------------+

+-------------+------------------+------+-----+---------+----------------+
| Field       | Type             | Null | Key | Default | Extra          |
+-------------+------------------+------+-----+---------+----------------+
| id          | int(10) unsigned |      | PRI | NULL    | auto_increment |
| message_id  | int(10) unsigned |      | MUL | 0       |                |
| employee_id | int(10) unsigned | YES  | MUL | NULL    |                |
| ... OTHER FIELDS OMITTED FOR BREVITY ...                               |
+-------------+------------------+------+-----+---------+----------------+

I have the following query that is just too slow:

 SELECT messages.* FROM messages
 INNER JOIN recipients ON recipients.message_id = messages.id
 WHERE recipients.employee_id = X
 GROUP BY messages.id
 ORDER BY sent_at DESC
 LIMIT 0, 25;

This takes about 44 seconds on average. The query explanation is as follows:

+----+-------------+------------+--------+--------------------------+--------------+---------+-------------------------------------+--------+----------------------------------------------+
| id | select_type | table      | type   | possible_keys            | key          | key_len | ref                                 | rows   | Extra                                        |
+----+-------------+------------+--------+--------------------------+--------------+---------+-------------------------------------+--------+----------------------------------------------+
|  1 | SIMPLE      | recipients | ref    | messages_fk,employee_idx | employee_idx | 5       | const                               | 222640 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | messages   | eq_ref | PRIMARY                  | PRIMARY      | 4       | email_archive.recipients.message_id |      1 |                                              |
+----+-------------+------------+--------+--------------------------+--------------+---------+-------------------------------------+--------+----------------------------------------------+

I've been doing some searching on the web and have no idea if/how this can
be sped up. Most searches these days reference MySQL 5.x which I'm just not
sure how much applies. I'm hoping that there is something obvious that I'm
missing, or that one of you experts knows what I might be able to change to
speed this query up.

Anyhow, thanks in advance for even so much as reading my message, let alone
replying :).

-- 
Kendall Gifford
zettab...@gmail.com


Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Reindl Harald
You need huge RAM / a large innodb_buffer_pool for large datasets;
in a perfect world the buffer pool is as large as the data.

What does your current config look like?
How much RAM does the machine have?

Am 21.01.2011 20:21, schrieb Kendall Gifford:
 Hello everyone, I've got a database on an old Fedora Core 4 server running
 MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
 just two (InnoDB) tables:
 
 messages (approx 2.5 million records)
 recipients (approx 6.5 million records)
 
 These track information about email messages. Each message has many
 recipient records. The structure of the two tables (omitting irrelevant data
 fields) are as follows:
 
 [[ ... see the original post for the schema details ... ]]
 
 I have the following query that is just too slow:
 
 SELECT messages.* FROM messages
 INNER JOIN recipients ON recipients.message_id = messages.id
 WHERE recipients.employee_id = X
 GROUP BY messages.id
 ORDER BY sent_at DESC
 LIMIT 0, 25;
 
 This takes about 44 seconds on average. The query explanation is as follows:
 
 [[ ... see the original post for the EXPLAIN output ... ]]
 
 I've been doing some searching on the web and have no idea if/how this can
 be sped up. Most searches these days reference MySQL 5.x which I'm just not
 sure how much applies. I'm hoping that there is something obvious that I'm
 missing, or that one of you experts knows what I might be able to change to
 speed this query up.
 
 Anyhow, thanks in advance for even so much as reading my message, let alone
 replying :).
 

-- 

Best regards, Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / software-development / cms-solutions
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/





Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Shawn Green (MySQL)

On 1/21/2011 14:21, Kendall Gifford wrote:

Hello everyone, I've got a database on an old Fedora Core 4 server running
MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
just two (InnoDB) tables:

messages (approx 2.5 million records)
recipients (approx 6.5 million records)

These track information about email messages. Each message has many
recipient records. The structure of the two tables (omitting irrelevant data
fields) are as follows:

[[ ... see the original post for the schema details ... ]]

I have the following query that is just too slow:


SELECT messages.* FROM messages
INNER JOIN recipients ON recipients.message_id = messages.id
WHERE recipients.employee_id = X
GROUP BY messages.id
ORDER BY sent_at DESC
LIMIT 0, 25;


This takes about 44 seconds on average. The query explanation is as follows:

[[ ... see the original post for the EXPLAIN output ... ]]

I've been doing some searching on the web and have no idea if/how this can
be sped up. Most searches these days reference MySQL 5.x which I'm just not
sure how much applies. I'm hoping that there is something obvious that I'm
missing, or that one of you experts knows what I might be able to change to
speed this query up.

Anyhow, thanks in advance for even so much as reading my message, let alone
replying :).



You need to get rid of the GROUP BY to make this go faster. You can do 
that by running two queries, one to pick the list of unique 
recipients.message_id values that match your where condition then 
another to actually retrieve the message data. Something like this:


CREATE TEMPORARY TABLE tmpMessages (message_id INT UNSIGNED, PRIMARY KEY 
(message_id)) ENGINE=MEMORY;


INSERT IGNORE tmpMessages
SELECT message_id
FROM recipients
WHERE employee_id = X;

SELECT messages.* FROM messages
INNER JOIN tmpMessages ON tmpMessages.message_id = messages.id
ORDER BY sent_at DESC
LIMIT 0, 25;

By pre-selecting a limited set of message_id values from the recipients 
table, you seriously reduce the number of rows that need to be scanned. 
Also, the INSERT IGNORE technique is faster than the GROUP BY because it 
uses an index to identify any duplicates instead of a scan of all 
previous unique values.


Please let us all know if this is fast enough. (And don't forget to 
drop the temp table once you are through using it.)


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Kendall Gifford
On Fri, Jan 21, 2011 at 2:01 PM, Shawn Green (MySQL) 
shawn.l.gr...@oracle.com wrote:

 On 1/21/2011 14:21, Kendall Gifford wrote:

 Hello everyone, I've got a database on an old Fedora Core 4 server running
 MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
 just two (InnoDB) tables:

 messages (approx 2.5 million records)
 recipients (approx 6.5 million records)

 These track information about email messages. Each message has many
 recipient records. The structure of the two tables (omitting irrelevant
 data
 fields) are as follows:


 [[ ... see the original post for the schema details ... ]]

 I have the following query that is just too slow:

  SELECT messages.* FROM messages
 INNER JOIN recipients ON recipients.message_id = messages.id
 WHERE recipients.employee_id = X
 GROUP BY messages.id
 ORDER BY sent_at DESC
 LIMIT 0, 25;


 This takes about 44 seconds on average. The query explanation is as
 follows:


 [[ ... see the original post for the EXPLAIN output ... ]]

 I've been doing some searching on the web and have no idea if/how this can
 be sped up. Most searches these days reference MySQL 5.x which I'm just
 not
 sure how much applies. I'm hoping that there is something obvious that I'm
 missing, or that one of you experts knows what I might be able to change
 to
 speed this query up.

 Anyhow, thanks in advance for even so much as reading my message, let
 alone
 replying :).


 You need to get rid of the GROUP BY to make this go faster. You can do that
 by running two queries, one to pick the list of unique recipients.message_id
 values that match your where condition then another to actually retrieve the
 message data. Something like this

 CREATE TEMPORARY TABLE tmpMessages (message_id INT UNSIGNED, PRIMARY KEY
 (message_id)) ENGINE=MEMORY;

 INSERT IGNORE tmpMessages
 SELECT message_id
 FROM recipients
 WHERE employee_id = X;

 SELECT messages.* FROM messages
 INNER JOIN tmpMessages ON tmpMessages.message_id = messages.id

 ORDER BY sent_at DESC
 LIMIT 0, 25;

 By pre-selecting a limited set of message_id values from the recipients
 table, you seriously reduce the number of rows that need to be scanned.
 Also, the INSERT IGNORE technique is faster than the GROUP BY because it
 uses an index to identify any duplicates instead of a scan of all previous
 unique values.

 Please let us all know if this is fast enough. (And don't forget to drop
 the temp table once you are through using it.)

 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc.
 Office: Blountville, TN


Thanks Shawn, I'm in the process of trying your suggestion now. I'll let
you know how

Fwd: Backing up the InnoDB tables

2010-10-13 Thread Tompkins Neil
Would really appreciate some help or suggestions on this please, if anyone
can assist ?

Regards
Neil

-- Forwarded message --
From: Tompkins Neil neil.tompk...@googlemail.com
Date: Tue, Oct 12, 2010 at 5:45 PM
Subject: Backing up the InnoDB tables
To: [MySQL] mysql@lists.mysql.com


Hi

On a shared MySQL server with access just to my own database, what are the
recommended backup methods and strategies for InnoDB tables?

Cheers
Neil


Re: Backing up the InnoDB tables

2010-10-13 Thread Suresh Kuna
Use Percona XtraBackup.

On Wed, Oct 13, 2010 at 5:37 PM, Tompkins Neil neil.tompk...@googlemail.com
 wrote:

 Would really appreciate some help or suggestions on this please, if anyone
 can assist ?

 Regards
 Neil

 -- Forwarded message --
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: Tue, Oct 12, 2010 at 5:45 PM
 Subject: Backing up the InnoDB tables
 To: [MySQL] mysql@lists.mysql.com


 Hi

 On a shared MySQL server with access just to my own database, what are the
 recommended backup methods and strategies for InnoDB tables?

 Cheers
 Neil




-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Backing up the InnoDB tables

2010-10-13 Thread Tompkins Neil
The problem is I don't have any command line access, just direct MySQL
access to the database tables.


On Wed, Oct 13, 2010 at 1:19 PM, Suresh Kuna sureshkumar...@gmail.com wrote:

 use xtra backup

 On Wed, Oct 13, 2010 at 5:37 PM, Tompkins Neil 
 neil.tompk...@googlemail.com wrote:

 Would really appreciate some help or suggestions on this please, if anyone
 can assist ?

 Regards
 Neil

 -- Forwarded message --
 From: Tompkins Neil neil.tompk...@googlemail.com
 Date: Tue, Oct 12, 2010 at 5:45 PM
 Subject: Backing up the InnoDB tables
 To: [MySQL] mysql@lists.mysql.com


 Hi

 On a shared MySQL server with access just to my own database, what are the
 recommended backup methods and strategies for InnoDB tables?

 Cheers
 Neil




 --
 Thanks
 Suresh Kuna
 MySQL DBA



Re: Backing up the InnoDB tables

2010-10-13 Thread a . smith

Quoting Tompkins Neil neil.tompk...@googlemail.com:


The problem is I don't have any command line access, just direct MySQL
access to the database tables.



I don't know XtraBackup, but if that's not an option you can just use
mysqldump. It can be run from a remote server against your DB server,
using only MySQL network access to the DB(s)...
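
For what it's worth, a minimal sketch of such a remote dump (the hostname,
user, and database name below are placeholders; --single-transaction assumes
the tables are InnoDB, as in this thread):

```shell
# Run from any machine with network access to the DB server.
# db.example.com, myuser and mydb are placeholders -- substitute your own.
# --single-transaction gives a consistent InnoDB snapshot without locking;
# it does NOT protect MyISAM tables that are written to during the dump.
mysqldump --host=db.example.com --user=myuser -p \
  --single-transaction mydb > mydb-backup.sql
```

Restoring is the reverse: feed the file back through the mysql client,
e.g. `mysql --host=... --user=... -p mydb < mydb-backup.sql`.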








Re: Backing up the InnoDB tables

2010-10-13 Thread tomasz dereszynski

 The problem is I don't have any command line access, just direct MySQL
 access to the database tables.


What's wrong with mysqldump?





-- 
bEsT rEgArDs|   Confidence is what you have before you
tomasz dereszynski  |   understand the problem. -- Woody Allen
|
Spes confisa Deo|   In theory, theory and practice are much
numquam confusa recedit |   the same. In practice they are very
|   different. -- Albert Einstein






Backing up the InnoDB tables

2010-10-12 Thread Tompkins Neil
Hi

On a shared MySQL server with access just to my own database, what are the
recommended backup methods and strategies for InnoDB tables?

Cheers
Neil


Question about LVM snapshots and innodb tables

2009-03-11 Thread Jim Lyons
The book “High Performance MySQL” states the following about using LVM
snapshots with InnoDB tables: “All InnoDB files (InnoDB tablespace files
and InnoDB transaction logs) must be on a single logical volume
(partition).” Here is a portion of df output from one of our hosts:



/dev/mapper/vg01-db   2.5T  2.0T  567G  78% /db

/dev/mapper/vg00-innodb  8.0G  2.0G  6.1G  25% /db/innodb

/dev/mapper/vg02-binlog  503G  140G  363G  28% /db/binlog

/dev/mapper/vg06-data4  755G  652G  103G  87% /db/data

/dev/mapper/vgc2-data8  6.2T  644G  5.6T  11% /db/data8

/dev/mapper/vgc3-data9  6.2T  1.8T  4.5T  29% /db/data9



Here /db/innodb contains the InnoDB logs and the one ibdata file. However,
we use innodb_file_per_table, so all the /db/datax filesystems have .ibd
files (many of the tables in the datadir, /db/data, are sym-linked to
/db/data8 and /db/data9, where the data actually resides).



We use LVM snapshots to move the data around, since our databases are
several terabytes.  Does this mean our snapshots are inconsistent?  There’s
no way we can get all innodb data on a single partition.



Thanks,


Jim Lyons

-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Calculate total size of InnoDB tables?

2008-09-12 Thread ewen fortune
Hi,

You can get that from the information_schema, check out this post from
Peter Zaitsev

http://www.mysqlperformanceblog.com/2008/03/17/researching-your-mysql-table-sizes/
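
Along the lines of that post, a query like the following sums the estimated
InnoDB footprint from information_schema (available in MySQL 5.0 and later;
note that DATA_LENGTH and INDEX_LENGTH are estimates for InnoDB, not exact
byte counts):

```sql
-- Total estimated size of all InnoDB tables, in MB
SELECT COUNT(*) AS tables,
       ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 1) AS total_mb
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB';
```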

Ewen


On Fri, Sep 12, 2008 at 10:25 PM, Ryan Schwartz [EMAIL PROTECTED] wrote:
 Is there an easy way to calculate the total size of all InnoDB tables?
 --
 Ryan Schwartz









Typical Maintenance for InnoDB Tables

2008-08-16 Thread samk
See Thread at: http://www.techienuggets.com/Detail?tx=48414 Posted on behalf of 
a User

I have a MySQL 5.0 InnoDB database that's about 1 GB in size, so it's still 
pretty tiny. Is there any performance-enhancing maintenance that should be 
done on the tables? I do a weekly Optimize through the MySQL Admin tool, which 
doesn't appear to do anything; I presume it updates the table statistics. Is 
there anything else that I should do with the indexes, for instance? 

Thanks for any information.

John T.
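
For reference, on InnoDB that weekly Optimize is mapped to a table rebuild
plus an index-statistics refresh; the statistics part alone can be done more
cheaply. A sketch (the table name below is a placeholder):

```sql
-- Cheap: refresh index statistics only (table name is a placeholder)
ANALYZE TABLE mydb.mytable;

-- Heavier: for InnoDB, OPTIMIZE TABLE is mapped to a recreate + analyze
OPTIMIZE TABLE mydb.mytable;
```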






Re: Very slow inserts into InnoDB tables

2008-06-15 Thread Nick Adams

hdparm -Tt /dev/sdX ?
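
Spelled out (the device name /dev/sda below is a placeholder, and hdparm
needs root):

```shell
# Raw sequential read throughput of the drive (placeholder device name)
hdparm -Tt /dev/sda

# Extended per-device statistics every 5 seconds, for comparison under load
iostat -x 5
```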

Ian Simpson wrote:

That's pretty much what I've been doing to establish that the drive is
running at 100% bandwidth.

What I'd like is something that just gives the bandwidth of the device
in terms of Mb/s: you could probably work it out from that iostat
output, by taking how much it wrote and what percentage of the bandwidth
it was using, and then calculating the 100% value from those numbers,
but I don't know if that's valid, since there are generally a number of
other operations going on at the same time.

Thanks







Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi list,

Have a bit of a mystery here that I hope somebody can help with.

I've just got a new server that I'm using as a dedicated MySQL server.
In terms of hardware it's pretty much identical, if not slightly
superior to an existing server already in production use.

It's having a real struggle processing INSERT statements to InnoDB
tables; it's maxing out at around 100 inserts per second, even with very
simple two column tables (inserts into MyISAM tables run fine).
Meanwhile, the original server can happily process around 1000
inserts/sec into an identical table.

The MySQL configuration of the two databases is identical, except for
the tablespace file size (the new server has a larger tablespace
defined), and the InnoDB logs (again, new server has larger logs).

Can anybody suggest an area of investigation as to the cause?

Thanks,
-- 
Ian Simpson

This email may contain confidential information and is intended for the 
recipient(s) only. If an addressing or transmission error has misdirected this 
email, please notify the author by replying to this email. If you are not the 
intended recipient(s) disclosure, distribution, copying or printing of this 
email is strictly prohibited and you should destroy this mail. Information or 
opinions in this message shall not be treated as neither given nor endorsed by 
the company. Neither the company nor the sender accepts any responsibility for 
viruses or other destructive elements and it is your responsibility to scan any 
attachments.

Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Please check that the my.cnf configurations are the same.

What are your settings for innodb_flush_log_at_trx_commit, binary logging,
sync_binlog, and innodb_locks_unsafe_for_binlog?

If the systems have RAID, check whether the BBWC (battery-backed write
cache) is present on the new host and write-back mode is enabled.


On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi list,

 Have a bit of a mystery here that I hope somebody can help with.

 I've just got a new server that I'm using as a dedicated MySQL server.
 In terms of hardware it's pretty much identical, if not slightly
 superior to an existing server already in production use.

 It's having a real struggle processing INSERT statements to InnoDB
 tables; it's maxing out at around 100 inserts per second, even with very
 simple two column tables (inserts into MyISAM tables run fine).
 Meanwhile, the original server can happily process around 1000
 inserts/sec into an identical table.

 The MySQL configuration of the two databases is identical, except for
 the tablespace file size (the new server has a larger tablespace
 defined), and the InnoDB logs (again, new server has larger logs).

 Can anybody suggest an area of investigation as to the cause?

 Thanks,
 --
 Ian Simpson





-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi Alex,

Configurations are identical, other than the differences I initially
mentioned. I've diffed both the configuration files and the output of
SHOW VARIABLES on both servers.

I've contacted my hosting provider to ask about the RAID settings.

Variable_name: innodb_flush_log_at_trx_commit
Value: 1
Variable_name: sync_binlog
Value: 0
Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF

Thanks

-- 
Ian Simpson

On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
 Please check that the my.cnf configurations are the same.
 
 What are your settings for innodb_flush_log_at_trx_commit, binary logging,
 sync_binlog, and innodb_locks_unsafe_for_binlog?
 
 If the systems have RAID, check whether the BBWC is enabled on the new host
 and write-back mode is enabled.
 
 
 On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
 wrote:
 Hi list,
 
 Have a bit of a mystery here that I hope somebody can help
 with.
 
 I've just got a new server that I'm using as a dedicated MySQL
 server.
 In terms of hardware it's pretty much identical, if not
 slightly
 superior to an existing server already in production use.
 
 It's having a real struggle processing INSERT statements to
 InnoDB
 tables; it's maxing out at around 100 inserts per second, even
 with very
 simple two column tables (inserts into MyISAM tables run
 fine).
 Meanwhile, the original server can happily process around 1000
 inserts/sec into an identical table.
 
 The MySQL configuration of the two databases is identical,
 except for
 the tablespace file size (the new server has a larger
 tablespace
 defined), and the InnoDB logs (again, new server has larger
 logs).
 
 Can anybody suggest an area of investigation as to the cause?
 
 Thanks,
 --
 Ian Simpson
 
 
 
 
 -- 
 Thanks
 Alex
 http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Ananda Kumar
Check iostat to see if the disk is heavily used.

On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi Alex,

 Configurations are identical, other than the differences I initially
 mentioned. I've diffed both the configuration files and the output of
 SHOW VARIABLES on both servers.

 I've contacted my hosting provider to ask about the RAID settings.

 Variable_name: innodb_flush_log_at_trx_commit
Value: 1
 Variable_name: sync_binlog
Value: 0
 Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF

 Thanks

 --
 Ian Simpson

 On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
  Please check if the my.cnf configurations to be the same.
 
   What are your configuration parameters in terms of innodh flush log
  trx commit , bin logging, sync binlog and innodb unsafe for binlog ?
 
  If the systems have raid, check if the BBWC is enabled on the new host
  and WB is enabled.
 
 
  On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
  wrote:
  Hi list,
 
  Have a bit of a mystery here that I hope somebody can help
  with.
 
  I've just got a new server that I'm using as a dedicated MySQL
  server.
  In terms of hardware it's pretty much identical, if not
  slightly
  superior to an existing server already in production use.
 
  It's having a real struggle processing INSERT statements to
  InnoDB
  tables; it's maxing out at around 100 inserts per second, even
  with very
  simple two column tables (inserts into MyISAM tables run
  fine).
  Meanwhile, the original server can happily process around 1000
  inserts/sec into an identical table.
 
  The MySQL configuration of the two databases is identical,
  except for
  the tablespace file size (the new server has a larger
  tablespace
  defined), and the InnoDB logs (again, new server has larger
  logs).
 
  Can anybody suggest an area of investigation as to the cause?
 
  Thanks,
  --
  Ian Simpson
 
 
 
 
  --
  Thanks
  Alex
  http://alexlurthu.wordpress.com



Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Also, how often do you issue a COMMIT? Batching the inserts inside a
transaction might help.
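The batching pattern Alex suggests can be sketched as follows. This is a hypothetical illustration using Python's stdlib sqlite3 driver as a stand-in for a MySQL connection (table name `t` and batch size are made up); with MySQL the same pattern applies through whatever client library you use:

```python
import sqlite3

# Stand-in connection; with MySQL the pattern is identical via the client API.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, "x") for i in range(10000)]

# Slow pattern: one implicit transaction (and one log flush) per INSERT:
#   for r in rows:
#       conn.execute("INSERT INTO t VALUES (?, ?)", r)
#       conn.commit()

# Faster pattern: batch many INSERTs inside a single transaction, paying
# the commit/flush cost once per batch instead of once per row.
batch = 1000
for i in range(0, len(rows), batch):
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows[i:i + batch])
    conn.commit()

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 10000
```

With innodb_flush_log_at_trx_commit=1, each COMMIT forces a log flush to disk, so per-row commits are bounded by the disk's sync rate; batching amortizes that cost.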

On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED] wrote:

 Check iostat to see whether the disk is heavily used.

 On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi Alex,

 Configurations are identical, other than the differences I initially
 mentioned. I've diffed both the configuration files and the output of
 SHOW VARIABLES on both servers.

 I've contacted my hosting provider to ask about the RAID settings.

 Variable_name: innodb_flush_log_at_trx_commit
Value: 1
 Variable_name: sync_binlog
Value: 0
 Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF

 Thanks

 --
 Ian Simpson

 On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
  Please check that the my.cnf configurations are the same.
 
   What are your configuration parameters in terms of
  innodb_flush_log_at_trx_commit, binary logging, sync_binlog and
  innodb_locks_unsafe_for_binlog?
 
   If the systems have RAID, check whether the BBWC (battery-backed write
  cache) is enabled on the new host and whether write-back mode is enabled.
 
 
  On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
  wrote:
   Hi list,
  
   Have a bit of a mystery here that I hope somebody can help with.
  
   I've just got a new server that I'm using as a dedicated MySQL server.
   In terms of hardware it's pretty much identical, if not slightly
   superior to an existing server already in production use.
  
   It's having a real struggle processing INSERT statements to InnoDB
   tables; it's maxing out at around 100 inserts per second, even with
   very simple two-column tables (inserts into MyISAM tables run fine).
   Meanwhile, the original server can happily process around 1000
   inserts/sec into an identical table.
  
   The MySQL configuration of the two databases is identical, except for
   the tablespace file size (the new server has a larger tablespace
   defined) and the InnoDB logs (again, the new server has larger logs).
  
   Can anybody suggest an area of investigation as to the cause?
  
   Thanks,
   --
   Ian Simpson
 
  This email may contain confidential information and is
  intended for the recipient(s) only. If an addressing or
  transmission error has misdirected this email, please notify
  the author by replying to this email. If you are not the
  intended recipient(s) disclosure, distribution, copying or
  printing of this email is strictly prohibited and you should
  destroy this mail. Information or opinions in this message
  shall not be treated as neither given nor endorsed by the
  company. Neither the company nor the sender accepts any
  responsibility for viruses or other destructive elements and
  it is your responsibility to scan any attachments.
 
 
 
  --
  Thanks
  Alex
  http://alexlurthu.wordpress.com






-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi guys, thanks for pitching in.

The inserts are from replication; we're not using transactions on the
master (yet), and I don't think there's a way of telling MySQL to batch
incoming replication statements if they're not already in a transaction.

Disk usage: the older server (the one that's running fine) is running
more transactions per second, but has lower blocks written and read per
second than the new server:

The working server (which in addition to replicating is also handling a
bunch of read queries)

Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda  88.47   782.20   998.77 9046888130 11551757459

The new server, which is just trying to handle replication

Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda  77.83  1367.55  2914.72  358474084  764029986
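For scale, iostat's Blk columns count 512-byte sectors, so the throughput implied by the figures above is easy to compute. A quick conversion (assuming the standard 512-byte block size):

```python
def blk_to_mb_per_s(blocks_per_s, block_bytes=512):
    """Convert iostat Blk/s (512-byte sectors) to MB/s."""
    return blocks_per_s * block_bytes / 1e6

# Write figures from the two servers above:
old_writes = blk_to_mb_per_s(998.77)   # working server
new_writes = blk_to_mb_per_s(2914.72)  # new server
print(round(old_writes, 2), round(new_writes, 2))  # 0.51 1.49
```

The new server is writing roughly three times as many blocks per second to service a much lower insert rate, which is consistent with per-commit log flushes hitting a slower or differently cached write path.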

Thanks,

-- 
Ian Simpson




Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Replication-based inserts are serial, whereas inserts on the master are
mostly concurrent; this leads to the slaves falling behind. To tackle this
we have used the following strategies:

1. Use RAID 0 on the slaves (the master uses RAID 10) so as to speed up
writes.
2. Pre-fetch and cache the data that will be modified by the slave SQL
thread.
3. Set innodb_flush_log_at_trx_commit to 2 or even 0.
4. Out of desperation, sometimes disable the InnoDB doublewrite buffer and
XA support.
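Points 3 and 4 translate into my.cnf settings roughly like the sketch below. These relax durability in exchange for speed, so they are generally only acceptable on a rebuildable slave, never on a master; the exact option spellings should be checked against the documentation for the server version in use:

```ini
[mysqld]
# 2 = write the InnoDB log to the OS cache at commit, flush to disk ~once/sec;
# 0 = write and flush ~once/sec. Both avoid a disk sync on every commit.
innodb_flush_log_at_trx_commit = 2

# "Desperation" settings from point 4; both reduce crash-safety guarantees.
skip-innodb_doublewrite
innodb_support_xa = 0
```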


RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Jerry Schwartz
Disk usage: the older server (the one that's running fine) is running
more transactions per second, but has lower blocks written and read per
second than the new server:
[JS] That, to me, suggests that the difference might be in the way the systems 
themselves are configured. Unfortunately, I don't know how Linux handles file 
system buffering.

RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi Guys,

Having delved a little more into the capabilities of iostat, I've
discovered that the drive bandwidth seems to be maxed out while MySQL is
running, which I'd peg as the primary candidate for the problem.

Looks like I'll be having more words with my hosting company about
this...

Thanks for all your help

-- 
Ian Simpson


RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Jerry Schwartz
Having delved a little more into the capabilities of iostat, I've
discovered that the drive bandwidth seems to be maxed out while MySQL is
running, which I'd peg as the primary candidate for the problem.
[JS] That suggests even more strongly that there is a difference in the kernel 
configuration. More physical I/O would drive the traffic up, by definition. 
Either MySQL is causing this, or the system file system is causing it.


RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson
Hi Jerry,

It could be a kernel issue; however, currently I'm suspecting that the
drive in the new server simply doesn't have the same bandwidth
capability. The iostat results I'm getting (although I'm not an expert
in reading them, having only learned of it about 3 hours ago) suggest
that the older server is handling roughly the same data quantities, but
just using a much lower percentage of the drive's bandwidth.

I can't seem to find a tool which reports on exactly how much write
bandwidth a drive has; everything seems to focus on reading speed.
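One crude way to measure sustained synchronous write bandwidth (the figure that matters for InnoDB's per-commit log flushes) is simply to time fsync'd writes yourself. This is a rough sketch under the assumption that fsync'd sequential 1 MB writes approximate the log-flush workload; it is no substitute for a proper disk benchmark:

```python
import os
import time

def write_bandwidth_mb_s(path, total_mb=64):
    """Time fsync'd 1 MB writes, roughly mimicking a log-flush workload."""
    chunk = b"\0" * (1 << 20)  # 1 MB of zeroes
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(total_mb):
        os.write(fd, chunk)
        os.fsync(fd)  # force each chunk to disk, like a per-commit flush
    os.close(fd)
    elapsed = time.time() - start
    os.unlink(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print("%.1f MB/s" % write_bandwidth_mb_s("/tmp/bw_test.dat", total_mb=16))
```

Running this on both servers (against the same filesystem the InnoDB logs live on) would show directly whether the new drive's synchronous write path is slower.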

Thanks,


-- 
Ian Simpson



RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Wm Mussatto


--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154



RE: Very slow inserts into InnoDB tables

2008-06-13 Thread Ian Simpson

InnoDB tables but no FK constraints

2008-05-21 Thread debussy007

Hi,

When I add a reference to a non-existing row in the referenced table, I get
no error:
My table members_orders_items references members_orders;
members_orders_items has an FK to a non-existing PK in members_orders (since
that table is empty), yet no error is generated.

I can see in MySQL Administrator that both tables are InnoDB.
Here is my table structure:


DROP TABLE IF EXISTS `members_orders`;
CREATE TABLE `members_orders` (
`id_order` int(10) unsigned NOT NULL auto_increment,
`paid_date` datetime default NULL,
`record_date` datetime NOT NULL,
`total` decimal(7,2) unsigned NOT NULL,
`total_partner` decimal(7,2) unsigned NOT NULL,
`member_id` int(10) unsigned NOT NULL,
`total_no_discount` decimal(7,2) unsigned default NULL,
PRIMARY KEY (`id_order`),
KEY `FK_MEMBER_ID_MEMBERS_ORDERS` (`member_id`),
CONSTRAINT `FK_MEMBER_ID_MEMBERS_ORDERS` FOREIGN KEY (`member_id`)
REFERENCES `members` (`id_member`)
) ENGINE=InnoDB AUTO_INCREMENT=25 DEFAULT CHARSET=utf8;


LOCK TABLES `members_orders` WRITE;
UNLOCK TABLES;

DROP TABLE IF EXISTS `members_orders_items`;
CREATE TABLE `members_orders_items` (
`id_order_item` int(10) unsigned NOT NULL auto_increment,
`qty` int(10) unsigned NOT NULL,
`total` decimal(7,2) unsigned NOT NULL,
`order_id` int(10) unsigned NOT NULL,
`item_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`id_order_item`),
KEY `FK_ORDER_ID_MEMBERS_ORDERS_ITEMS` (`order_id`),
KEY `FK_ITEM_ID_MEMBERS_ORDERS_ITEMS` (`item_id`),
CONSTRAINT `FK_ITEM_ID_MEMBERS_ORDERS_ITEMS` FOREIGN KEY (`item_id`)
REFERENCES `services_items` (`id_item`),
CONSTRAINT `FK_ORDER_ID_MEMBERS_ORDERS_ITEMS` FOREIGN KEY (`order_id`)
REFERENCES `members_orders` (`id_order`)
) ENGINE=InnoDB AUTO_INCREMENT=137 DEFAULT CHARSET=utf8;

LOCK TABLES `members_orders_items` WRITE;
INSERT INTO `members_orders_items` VALUES (137,750,'54.00',25,45); -- shouldn't
this raise an error?
UNLOCK TABLES;
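An editor's note, not part of the original thread: the statements above look like mysqldump output, and mysqldump-generated files start with /*!40014 SET FOREIGN_KEY_CHECKS=0 */;, which disables constraint checking for the whole import, so orphan rows load silently. With checks on, the same kind of insert fails. A minimal sketch (order id 9999 is illustrative):

```sql
-- Re-enable constraint checking for this session, then retry an orphan row.
SET FOREIGN_KEY_CHECKS = 1;
INSERT INTO `members_orders_items` (`qty`, `total`, `order_id`, `item_id`)
VALUES (1, '1.00', 9999, 45);
-- With no order 9999 in members_orders this fails with
-- ERROR 1452: Cannot add or update a child row: a foreign key constraint fails
```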



Thank you for any kind help !!
Matt.
-- 
View this message in context: 
http://www.nabble.com/InnoDB-tables-but-no-FK-constraints-tp17364156p17364156.html
Sent from the MySQL - General mailing list archive at Nabble.com.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Symlink InnoDB tables without stopping MySQL

2008-04-24 Thread Dobromir Velev
Hi,
I guessed it was something like that, and that is why I wanted to make sure how 
it should be done. Using ALTER TABLE table DISCARD TABLESPACE doesn't 
seem to work as expected: I managed to crash the test server twice. See 
the mysql log details below.

What I did was the following:
 - create the table
 - check InnoDB status and copy the table.ibd to a new location
 - run ALTER TABLE table DISCARD TABLESPACE
 - symlink the table.ibd copy within the database folder
 - run ALTER TABLE table IMPORT TABLESPACE
 - run SHOW TABLE STATUS LIKE 'table';

And here are the crash details from the log.

InnoDB: buf pool start is at 0x3666c000, end at 0xb366c000
InnoDB: Probable reason is database corruption or memory
InnoDB: corruption. If this happens in an InnoDB database recovery,
InnoDB: you can look from section 6.1 at http://www.innodb.com/ibman.html
InnoDB: how to force recovery.
080424  4:31:55InnoDB: Assertion failure in thread 68795312 in 
file ./../include/buf0buf.ic line 262
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
InnoDB: Thread 73976752 stopped in file ./../include/sync0sync.ic line 111
InnoDB: Thread 729131952 stopped in file sync0arr.c line 336
InnoDB: Thread 150207408 stopped in file sync0arr.c line 336
InnoDB: Thread 747498416 stopped in file sync0arr.c line 336
InnoDB: Thread 63421360 stopped in file ./../include/sync0sync.ic line 111
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=1048576000
read_buffer_size=507904
max_used_connections=601
max_connections=600
threads_connected=394
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 
1935995 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x2c24e950
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x4197e0c, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8136da4
0x438898
(nil)
0x8299f88
0x829a024
0x81c2f5b
0x81d6f60
0x814a563
0x814e66c
0x814f08a
0x814f8e5
0x8150330
0x432371
0x38cffe
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and 
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xbb66460 = show table status like 'temp%'
thd->thread_id=2545123
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
080424 04:31:56  mysqld restarted
080424  4:31:57  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.

Thanks for your help
Dobromir Velev

On Wednesday 23 April 2008 22:05, Jerry Schwartz wrote:
 If Linux works the same way as HP-UX (and it should), anything you do to an
 open file (including deleting it) has no effect until the file is closed.
 The MySQL server is still using the old file. The next time it stops and
 restarts, it will follow the symlink. I don't know what the effect of
 accessing a stale copy of the file would be.

 Regards,

 Jerry Schwartz
 The Infoshop by Global Information Incorporated
 195 Farmington Ave.
 Farmington, CT 06032

 860.674.8796 / FAX: 860.674.8341

 www.the-infoshop.com
 www.giiexpress.com
 www.etudes-marche.com

 -Original Message-
 From: Sebastian Mendel [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, April 23, 2008 9:27 AM
 To: Dobromir Velev
 Cc: mysql@lists.mysql.com
 Subject: Re: Symlink InnoDB tables without stopping MySQL
 
 Dobromir Velev wrote:
  Hi,
  What I'm trying to do is to create a new InnoDB table on a different
  disk and symlink it to an existing database.
  I have innodb_file_per_table turned on and here is how I tried to do it

  mysql> \u test
  mysql> create table test (...) ENGINE = 'InnoDB';
  mysql> \q

  move the test.ibd file to the other disk
  create a symlink in the database directory
  flush tables;

  This works as expected

Symlink InnoDB tables without stopping MySQL

2008-04-23 Thread Dobromir Velev
Hi,
What I'm trying to do is to create a new InnoDB table on a different disk and 
symlink it to an existing database.
I have innodb_file_per_table turned on and here is how I tried to do it


mysql> \u test
mysql> create table test (...) ENGINE = 'InnoDB';
mysql> \q

move the test.ibd file to the other disk
create a symlink in the database directory
flush tables;


This works as expected, but there is something that bothers me: I inserted 
about 60K rows into the new table and all queries I tried are working, 
including selects, inserts and updates. The SHOW TABLE STATUS command 
displays the expected results, and yet the test.ibd file to which the symlink 
points hasn't been changed or accessed at all.

Any ideas are welcome
Dobromir Velev






Re: Symlink InnoDB tables without stopping MySQL

2008-04-23 Thread Sebastian Mendel

Dobromir Velev wrote:

Hi,
What I'm trying to do is to create a new InnoDB table on a different disk and 
symlink it to an existing database.

I have innodb_file_per_table turned on and here is how I tried to do it


mysql> \u test
mysql> create table test (...) ENGINE = 'InnoDB';
mysql> \q

move the test.ibd file to the other disk
create a symlink in the database directory
flush tables;


This works as expected, but there is something that bothers me: I inserted 
about 60K rows into the new table and all queries I tried are working, 
including selects, inserts and updates. The SHOW TABLE STATUS command 
displays the expected results, and yet the test.ibd file to which the symlink 
points hasn't been changed or accessed at all.


Any ideas are welcome


you need to set up per-table tablespaces; did you?

Section 13.2.3.1, “Using Per-Table Tablespaces”.

http://dev.mysql.com/doc/refman/5.0/en/innodb-init.html

--
Sebastian Mendel




Re: Symlink InnoDB tables without stopping MySQL

2008-04-23 Thread Dobromir Velev
Hi,
Thanks for pointing it out - I just found the following commands.

ALTER TABLE tbl_name DISCARD TABLESPACE;
ALTER TABLE tbl_name IMPORT TABLESPACE;

 I will test it and let you know if it works

Thanks
Dobromir Velev



On Wednesday 23 April 2008 16:27, Sebastian Mendel wrote:
 Dobromir Velev wrote:
  Hi,
  What I'm trying to do is to create a new InnoDB table on a different disk
  and symlink it to an existing database.
  I have innodb_file_per_table turned on and here is how I tried to do it
 
 
  mysql> \u test
  mysql> create table test (...) ENGINE = 'InnoDB';
  mysql> \q

  move the test.ibd file to the other disk
  create a symlink in the database directory
  flush tables;
 
 
  This works as expected, but there is something that bothers me: I
  inserted about 60K rows into the new table and all queries I tried are
  working, including selects, inserts and updates. The SHOW TABLE STATUS
  command displays the expected results, and yet the test.ibd file to which
  the symlink points hasn't been changed or accessed at all.
 
  Any ideas are welcome

 you need to set up per-table tablespaces; did you?

 Section 13.2.3.1, “Using Per-Table Tablespaces”.

 http://dev.mysql.com/doc/refman/5.0/en/innodb-init.html




RE: Symlink InnoDB tables without stopping MySQL

2008-04-23 Thread Jerry Schwartz
If Linux works the same way as HP-UX (and it should), anything you do to an 
open file (including deleting it) has no effect until the file is closed. The 
MySQL server is still using the old file. The next time it stops and 
restarts, it will follow the symlink. I don't know what the effect of 
accessing a stale copy of the file would be.
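Jerry's open-file point can be demonstrated without MySQL at all (an editor's sketch, not part of the thread): a process that already holds a file descriptor keeps reading the original inode even after the path has been renamed and symlinked to a different file.

```shell
# POSIX open-file semantics: an already-open descriptor is bound to the
# inode, not the path, so replacing the path with a symlink changes
# nothing for the running process (here, fd 3 stands in for mysqld).
set -e
dir=$(mktemp -d)
echo "old contents" > "$dir/data"
exec 3< "$dir/data"                # open and hold a descriptor
mv "$dir/data" "$dir/data.orig"    # the path goes away...
echo "new contents" > "$dir/data.new"
ln -s "$dir/data.new" "$dir/data"  # ...and is replaced by a symlink
read -r via_fd <&3                 # old descriptor: still sees the old file
via_path=$(cat "$dir/data")        # a fresh open follows the symlink
exec 3<&-
echo "fd saw: $via_fd; path saw: $via_path"
rm -rf "$dir"
```

Only after closing and reopening the path (for MySQL, a server restart) does the symlink take effect, which is exactly the behaviour described above.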

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com
www.giiexpress.com
www.etudes-marche.com
-Original Message-
From: Sebastian Mendel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 9:27 AM
To: Dobromir Velev
Cc: mysql@lists.mysql.com
Subject: Re: Symlink InnoDB tables without stopping MySQL

Dobromir Velev wrote:
 Hi,
 What I'm trying to do is to create a new InnoDB table on a different
 disk and symlink it to an existing database.
 I have innodb_file_per_table turned on and here is how I tried to do it

 mysql> \u test
 mysql> create table test (...) ENGINE = 'InnoDB';
 mysql> \q

 move the test.ibd file to the other disk
 create a symlink in the database directory
 flush tables;

 This works as expected, but there is something that bothers me: I
 inserted about 60K rows into the new table and all queries I tried are
 working, including selects, inserts and updates. The SHOW TABLE STATUS
 command displays the expected results, and yet the test.ibd file to
 which the symlink points hasn't been changed or accessed at all.

 Any ideas are welcome

you need to set up per-table tablespaces; did you?

Section 13.2.3.1, “Using Per-Table Tablespaces”.

http://dev.mysql.com/doc/refman/5.0/en/innodb-init.html

--
Sebastian Mendel









Question regarding innodb tables

2008-01-31 Thread Erich C. Beyrent
I have a question regarding the innodb_file_per_table configuration 
option.  We currently do not have this enabled, so our ibdata1 file is huge.


Is it recommended that we have this configured to store the tables in 
their own files?  What are the performance implications of doing this, 
especially on high-volume sites?


Any insight would be greatly appreciated!

-Erich-
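For reference, an editor's addition rather than part of the thread: enabling the option is a single line in the server configuration, but it only applies to tables created after the restart; existing tables stay in ibdata1 until rebuilt (for example with ALTER TABLE t ENGINE=InnoDB), and ibdata1 itself never shrinks.

```ini
# my.cnf / my.ini -- affects only InnoDB tables created after the restart
[mysqld]
innodb_file_per_table
```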




Re: Boolean searches on InnoDB tables?

2007-10-24 Thread mos

At 02:54 AM 3/4/2006, Daevid Vincent wrote:

I just discovered this:
http://dev.mysql.com/doc/refman/5.0/en/fulltext-boolean.html
Which states:
"They can work even without a FULLTEXT index, although a search executed in
this fashion would be quite slow."

But then I'm kicked in the nuts because:
http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html
"Full-text indexes can be used only with MyISAM tables"

When I try a query on an InnoDB table:

SELECT * FROM categories WHERE MATCH (name) AGAINST ('+ELECTRONICS' IN
BOOLEAN MODE);

I get:
Error Code : 1214
The used table type doesn't support FULLTEXT indexes

So, what is the deal? Am I missing something?

And if I can't use boolean searches on InnoDB tables with MySQL 5.0.18,
then WHEN will I be able to?

In the meantime, what is the best way to get equivalent
functionality via PHP or some other MySQL 5-sanctioned way? I've seen
several different examples on the web, but don't know which to commit to.


Daevid,
  Get yourself an ice-pack and visit http://www.sphinxsearch.com/. 
They have a free full-text search add-on for MySQL that works with InnoDB 
and MyISAM tables. You'll feel better in the morning. :)


Mike 





consistent backup of MyISAM and InnoDB tables

2007-07-17 Thread spikerlion
Hello,

we have a MySQL DBMS with a lot of databases. Most of them use MyISAM
tables, but three databases use both InnoDB and MyISAM tables.

What is the best method to get a consistent ONLINE backup of both table types?

Thanks,
Spiker
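An editor's sketch, not from the thread: --single-transaction snapshots only the InnoDB tables, and --lock-tables protects only MyISAM, so for a mixed database the usual compromise is a global read lock held for the duration of the dump (database names here are illustrative):

```shell
# Consistent online dump of a database mixing MyISAM and InnoDB:
# --lock-all-tables holds a global read lock for the whole dump,
# so writes are blocked while it runs.
mysqldump --lock-all-tables --databases mixed_db > mixed_db.sql

# A purely InnoDB database can use a non-blocking snapshot instead:
mysqldump --single-transaction --databases innodb_db > innodb_db.sql
```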




Re: consistent backup of MyISAM and InnoDB tables

2007-07-17 Thread Olexandr Melnyk

Check out this thread:
http://www.sitepoint.com/forums/showpost.php?p=3357628&postcount=2

2007/7/17, [EMAIL PROTECTED] [EMAIL PROTECTED]:


Hello,

we have a MySQL DBMS with a lot of databases. Most of them use
MyISAM tables, but three databases use both InnoDB and MyISAM tables.

What is the best method to get a consistent ONLINE backup of both table
types?

Thanks,
Spiker






--
Sincerely yours,
Olexandr Melnyk
http://omelnyk.net/


Accented characters in InnoDB tables?

2007-06-23 Thread patrick

Do you have to do something special with InnoDB tables to accept
various character sets, like accented European characters? Using the
default, these accented characters come out as garbage.
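An editor's sketch, not from the thread: the storage engine is not the issue; character-set handling is declared per table/column and per connection (MySQL 4.1 and later). Table and column names here are illustrative:

```sql
-- Declare the charset the data is actually stored in:
CREATE TABLE people (
  name VARCHAR(100)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- And make the client connection charset match what the application
-- sends; a mismatch is the usual cause of "garbage" accented characters:
SET NAMES utf8;
```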




Re: mysqldump problem with large innodb tables...

2007-06-19 Thread Dušan Pavlica
Try looking up the Lost connection error in the MySQL manual; it can give 
you some hints, like

http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

Dusan


Hartleigh Burton wrote:

Hi All,

I have a database which is currently at ~10GB in its test phase. It 
contains uncompressed audio and is expected to reach 1.5TB in no 
time at all. I am just running some backup tests and I have been 
having lots of problems creating an accurate backup.


I have tried both MySQL Administrator and mysqldump; both applications 
drop out on the same table, `trackdata`, which contains ~9GB 
worth of data. No single row is larger than 50MB; most 
average around 40MB. Windows 2000 Server, MySQL v5.0.37-community-nt, 
and all tables are InnoDB.


If anyone can help me out with this problem, the assistance is greatly 
appreciated. I have scoured Google and various other sources and not 
found much information that has been useful to me. I hope I have 
enough info below... if more is required, let me know.


mysqldump example

P:\> mysqldump -u username -p mraentertainment > mraentertainment.sql --opt
--verbose --max_allowed_packet=500M --hex-blob --single_transaction
--net_buffer_length=100M
Enter password: **
-- Connecting to localhost...
-- Retrieving table structure for table albums...
-- Sending SELECT query...

...

-- Retrieving rows...
-- Retrieving table structure for table trackdata...
-- Sending SELECT query...
-- Retrieving rows...
mysqldump: Error 2013: Lost connection to MySQL server during query 
when dumping

 table `trackdata` at row: 1


my.ini configuration file

[client]

port=3306

[mysql]

default-character-set=latin1

[mysqld]

log-bin=itd002-bin
server-id=1

port=3306

wait_timeout=86400

max_allowed_packet=100M


basedir=C:/Program Files/MySQL/MySQL Server 5.0/

datadir=C:/Program Files/MySQL/MySQL Server 5.0/Data/

default-character-set=latin1

default-storage-engine=INNODB

sql-mode=STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

max_connections=100

query_cache_size=0

table_cache=256

tmp_table_size=77M

thread_cache_size=8

#*** MyISAM Specific options

myisam_max_sort_file_size=100G

myisam_max_extra_sort_file_size=100G

myisam_sort_buffer_size=154M

key_buffer_size=130M

read_buffer_size=64K
read_rnd_buffer_size=256K

sort_buffer_size=256K

#skip-innodb

innodb_additional_mem_pool_size=6M

innodb_flush_log_at_trx_commit=1

innodb_log_buffer_size=3M

innodb_buffer_pool_size=252M

innodb_log_file_size=126M

innodb_thread_concurrency=8





Regards,
Hartleigh Burton
Resident Geek

MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978

www.mraentertainment.com



Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!








RE: {Spam?} Re: mysqldump problem with large innodb tables...

2007-06-19 Thread John Mancuso
Have you considered using the archive storage engine? I have gotten 30:1 
compression using it.

Create table archive_multimedia engine=Archive as select * from multimedia 
table 


John Mancuso
Linux Administrator/MySQL DBA
IT Infrastructure
American Home Mortgage
w: 631-622-6382
c: 516-652-2475

-Original Message-
From: Dušan Pavlica [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 19, 2007 5:08 AM
To: Hartleigh Burton
Cc: MySql
Subject: {Spam?} Re: mysqldump problem with large innodb tables...

Try looking up the Lost connection error in the MySQL manual; it can give you
some hints, like http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

Dusan


Hartleigh Burton wrote:
 Hi All,

 I have a database which is currently at ~10GB in its test phase. It 
 contains uncompressed audio and is expected to reach 1.5TB in no 
 time at all. I am just running some backup tests and I have been 
 having lots of problems creating an accurate backup.

 I have tried both MySQL Administrator and mysqldump; both applications 
 drop out on the same table, `trackdata`, which contains ~9GB 
 worth of data. No single row is larger than 50MB; most 
 average around 40MB. Windows 2000 Server, MySQL v5.0.37-community-nt, 
 and all tables are InnoDB.

 If anyone can help me out with this problem the assistance is greatly 
 appreciated. I have scoured google and various other sources and not 
 found much information that has been useful to me. I hope I have 
 enough info below... if more is required let me know.

 mysqldump example

 P:\> mysqldump -u username -p mraentertainment > mraentertainment.sql 
 --opt --verbose --max_allowed_packet=500M --hex-blob 
 --single_transaction --net_buffer_length=100M Enter password: **
 -- Connecting to localhost...
 -- Retrieving table structure for table albums...
 -- Sending SELECT query...

 ...

 -- Retrieving rows...
 -- Retrieving table structure for table trackdata...
 -- Sending SELECT query...
 -- Retrieving rows...
 mysqldump: Error 2013: Lost connection to MySQL server during query 
 when dumping  table `trackdata` at row: 1


 my.ini configuration file

 [client]

 port=3306

 [mysql]

 default-character-set=latin1

 [mysqld]

 log-bin=itd002-bin
 server-id=1

 port=3306

 wait_timeout=86400

 max_allowed_packet=100M


 basedir=C:/Program Files/MySQL/MySQL Server 5.0/

 datadir=C:/Program Files/MySQL/MySQL Server 5.0/Data/

 default-character-set=latin1

 default-storage-engine=INNODB

 sql-mode=STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

 max_connections=100

 query_cache_size=0

 table_cache=256

 tmp_table_size=77M

 thread_cache_size=8

 #*** MyISAM Specific options

 myisam_max_sort_file_size=100G

 myisam_max_extra_sort_file_size=100G

 myisam_sort_buffer_size=154M

 key_buffer_size=130M

 read_buffer_size=64K
 read_rnd_buffer_size=256K

 sort_buffer_size=256K

 #skip-innodb

 innodb_additional_mem_pool_size=6M

 innodb_flush_log_at_trx_commit=1

 innodb_log_buffer_size=3M

 innodb_buffer_pool_size=252M

 innodb_log_file_size=126M

 innodb_thread_concurrency=8





 Regards,
 Hartleigh Burton
 Resident Geek

 MRA Entertainment Pty Ltd
 5 Dividend St | Mansfield | QLD 4122 | Australia
 Phone: (07) 3457 5041
 Fax: (07) 3349 8806
 Mobile: 0421 646 978

 www.mraentertainment.com



 Internal Virus Database was built: Never
 Checked by MAC OSX... we don't get viruses!









Re: mysqldump problem with large innodb tables...

2007-06-19 Thread Hartleigh Burton

Hi Dusan,

You replied to a forum post of mine on mysql.com yeah? ;)

I have tried adjusting max_allowed_packet on both the server and 
client. Both are set to 1G now (apparently the highest value 
accepted), even though no row is larger than 100M at the very most.


I am thinking this may have something to do with --extended-insert. 
Rather than having all of the data in one extended insert, I have tried 
disabling the feature with --extended-insert=0, --extended-insert=false, 
and --skip-extended-insert (not all at once, obviously)... am I doing 
this correctly? I get mixed results when I search Google for answers. I 
don't really care at this stage if backup/restore times get worse with 
this feature disabled, as long as I can get an accurate backup.

I also set --net_buffer_length=800M; in theory --extended-insert will 
only create queries up to this value, so if I keep it lower than the 
--max_allowed_packet value it should all be sweet. Still no cigar, 
unfortunately.




On 19/06/2007, at 7:08 PM, Dušan Pavlica wrote:

Try looking up the Lost connection error in the MySQL manual; it can  
give you some hints, like

http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

Dusan


Hartleigh Burton wrote:

Hi All,

I have a database which is currently at ~10GB in its test phase.  
It contains uncompressed audio and is expected to reach 1.5TB  
in no time at all. I am just running some backup tests and I have  
been having lots of problems creating an accurate backup.


I have tried both MySQL Administrator and mysqldump; both  
applications drop out on the same table, `trackdata`,  
which contains ~9GB worth of data. No single row is  
larger than 50MB; most average around 40MB. Windows 2000 Server,  
MySQL v5.0.37-community-nt, and all tables are InnoDB.


If anyone can help me out with this problem, the assistance is  
greatly appreciated. I have scoured Google and various other  
sources and not found much information that has been useful to me.  
I hope I have enough info below... if more is required, let me know.


mysqldump example

P:\> mysqldump -u username -p mraentertainment >  
mraentertainment.sql --opt

--verbose --max_allowed_packet=500M --hex-blob --single_transaction
--net_buffer_length=100M
Enter password: **
-- Connecting to localhost...
-- Retrieving table structure for table albums...
-- Sending SELECT query...

...

-- Retrieving rows...
-- Retrieving table structure for table trackdata...
-- Sending SELECT query...
-- Retrieving rows...
mysqldump: Error 2013: Lost connection to MySQL server during  
query when dumping

 table `trackdata` at row: 1


my.ini configuration file

[client]

port=3306

[mysql]

default-character-set=latin1

[mysqld]

log-bin=itd002-bin
server-id=1

port=3306

wait_timeout=86400

max_allowed_packet=100M


basedir=C:/Program Files/MySQL/MySQL Server 5.0/

datadir=C:/Program Files/MySQL/MySQL Server 5.0/Data/

default-character-set=latin1

default-storage-engine=INNODB

sql-mode=STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION


max_connections=100

query_cache_size=0

table_cache=256

tmp_table_size=77M

thread_cache_size=8

#*** MyISAM Specific options

myisam_max_sort_file_size=100G

myisam_max_extra_sort_file_size=100G

myisam_sort_buffer_size=154M

key_buffer_size=130M

read_buffer_size=64K
read_rnd_buffer_size=256K

sort_buffer_size=256K

#skip-innodb

innodb_additional_mem_pool_size=6M

innodb_flush_log_at_trx_commit=1

innodb_log_buffer_size=3M

innodb_buffer_pool_size=252M

innodb_log_file_size=126M

innodb_thread_concurrency=8





Regards,
Hartleigh Burton
Resident Geek

MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978

www.mraentertainment.com



Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!













Regards,
Hartleigh Burton
Resident Geek

MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978

www.mraentertainment.com



Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!




Re: mysqldump problem with large innodb tables...

2007-06-18 Thread Baron Schwartz
My backups use mysqldump, but they have always just worked.  I would suggest you 
try to make a minimal test case that can reproduce the problem and submit it as 
a bug report, if possible.


I'm not familiar with the error message off-hand, but the InnoDB manual is large 
and complete, so I'm sure it is covered in there.


Baron

Hartleigh Burton wrote:

Ok... this error has just started popping up in my .err log file...

070618 14:31:10  InnoDB: ERROR: the age of the last checkpoint is 237821842,
InnoDB: which exceeds the log group capacity 237813351.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
070618 14:39:17  InnoDB: ERROR: the age of the last checkpoint is 237829009,
InnoDB: which exceeds the log group capacity 237813351.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
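The standard remedy for this error on MySQL 5.0 is to enlarge the combined InnoDB log size, and the log files cannot be resized in place. An editor's sketch of the procedure; the Windows service name and paths are assumptions based on the my.ini quoted earlier:

```bat
REM 1. Stop the server cleanly so all changes are checkpointed into the logs.
net stop MySQL
REM 2. Move the old log files out of the data directory (keep them as backup).
move "C:\Program Files\MySQL\MySQL Server 5.0\Data\ib_logfile0" C:\backup\
move "C:\Program Files\MySQL\MySQL Server 5.0\Data\ib_logfile1" C:\backup\
REM 3. In my.ini under [mysqld], raise the size, e.g. innodb_log_file_size=500M
REM    (combined log size at least 10x the largest BLOB/TEXT row, per the error).
REM 4. Restart; InnoDB creates new log files at the new size.
net start MySQL
```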


On 18/06/2007, at 12:09 PM, Baron Schwartz wrote:

I'm out of ideas right now.  I don't actually use mysqldump that much 
and have never had this happen.  Hopefully someone else on the mailing 
list can help, or perhaps you can try #mysql on Freenode IRC.


Baron

Hartleigh Burton wrote:
No there is no indication of that at all. The server service appears 
to be in perfect order, does not drop/restart and my other 
applications continue to function without any interruption.
It appears as if the mysqldump connection to the server is 
interrupted, or maybe there is something in row 1 of `trackdata` that 
it does not like. This table contains a LONGBLOB field which at 
present does not contain any more than ~80MB per row. I also use the 
--hex-blob flag for mysqldump to try to get around any possible 
problems with exporting this data... I have no descriptive error 
messages anywhere, and it is driving me nuts :|

On 18/06/2007, at 11:27 AM, Baron Schwartz wrote:
Is there any indication that the mysqldump crash is killing the 
server and causing it to restart?  For example, ready for 
connections notifications just after you try a mysqldump?


Hartleigh Burton wrote:
Hmm... no, there are no new errors in there. Nothing out of the 
ordinary, that's for sure. Just notifications that MySQL has started 
and is accepting connections, etc. :|

On 18/06/2007, at 11:06 AM, Baron Schwartz wrote:
How about in c:\Program Files\MySQL\MySQL Server 
5.0\data\hostname.err?


Cheers
Baron

Hartleigh Burton wrote:

Hi Baron,
There are no MySQL errors in the event viewer.
On 18/06/2007, at 10:36 AM, Baron Schwartz wrote:

Hi Hartleigh,

Hartleigh Burton wrote:

Hi All,
I have a database which is currently at ~10GB in its test 
phase. It contains uncompressed audio and is expected to 
reach 1.5TB in no time at all. I am just running some backup 
tests and I have been having lots of problems creating an 
accurate backup.
I have tried both MySQL Administrator and mysqldump; both 
applications drop out on the same table, the table `trackdata` 
which contains ~9GB worth of data. There is no single row any 
larger than 50MB, most average around 40MB. Windows 2000 
Server, MySQL v5.0.37-community-nt and all tables are InnoDB.

[snip]

-- Retrieving rows...
-- Retrieving table structure for table trackdata...
-- Sending SELECT query...
-- Retrieving rows...
mysqldump: Error 2013: Lost connection to MySQL server during 
query when dumping

 table `trackdata` at row: 1


You might be able to find more information about the precise 
error in the server's error logs.  That will give us a better 
idea what might be wrong, if there is an error server-side, 
which seems likely to me.


Baron

Regards,
Hartleigh Burton
Resident Geek
MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978
www.mraentertainment.com
Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!


--MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



Regards,
Hartleigh Burton
Resident Geek
MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978
www.mraentertainment.com
Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!

Regards,
Hartleigh Burton
Resident Geek
MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978
www.mraentertainment.com
Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!


--MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]








Regards,
Hartleigh Burton
Resident Geek

MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia

mysqldump problem with large innodb tables...

2007-06-17 Thread Hartleigh Burton

Hi All,

I have a database which is currently at ~10GB in its test phase. It
contains uncompressed audio and is expected to reach 1.5TB in no
time at all. I am just running some backup tests and I have been
having lots of problems creating an accurate backup.


I have tried both MySQL Administrator & mysqldump; both applications
drop out on the same table, `trackdata`, which contains ~9GB
worth of data. There is no single row larger than 50MB; most
average around 40MB. Windows 2000 Server, MySQL v5.0.37-community-nt,
and all tables are InnoDB.


If anyone can help me out with this problem the assistance is greatly  
appreciated. I have scoured google and various other sources and not  
found much information that has been useful to me. I hope I have  
enough info below... if more is required let me know.


mysqldump example

P:\> mysqldump -u username -p mraentertainment > mraentertainment.sql
--opt

--verbose --max_allowed_packet=500M --hex-blob --single_transaction
--net_buffer_length=100M
Enter password: **
-- Connecting to localhost...
-- Retrieving table structure for table albums...
-- Sending SELECT query...

...

-- Retrieving rows...
-- Retrieving table structure for table trackdata...
-- Sending SELECT query...
-- Retrieving rows...
mysqldump: Error 2013: Lost connection to MySQL server during query  
when dumping

 table `trackdata` at row: 1


my.ini configuration file

[client]

port=3306

[mysql]

default-character-set=latin1

[mysqld]

log-bin=itd002-bin
server-id=1

port=3306

wait_timeout=86400

max_allowed_packet=100M


basedir=C:/Program Files/MySQL/MySQL Server 5.0/

datadir=C:/Program Files/MySQL/MySQL Server 5.0/Data/

default-character-set=latin1

default-storage-engine=INNODB

sql-mode=STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION


max_connections=100

query_cache_size=0

table_cache=256

tmp_table_size=77M

thread_cache_size=8

#*** MyISAM Specific options

myisam_max_sort_file_size=100G

myisam_max_extra_sort_file_size=100G

myisam_sort_buffer_size=154M

key_buffer_size=130M

read_buffer_size=64K
read_rnd_buffer_size=256K

sort_buffer_size=256K

#skip-innodb

innodb_additional_mem_pool_size=6M

innodb_flush_log_at_trx_commit=1

innodb_log_buffer_size=3M

innodb_buffer_pool_size=252M

innodb_log_file_size=126M

innodb_thread_concurrency=8





Regards,
Hartleigh Burton
Resident Geek

MRA Entertainment Pty Ltd
5 Dividend St | Mansfield | QLD 4122 | Australia
Phone: (07) 3457 5041
Fax: (07) 3349 8806
Mobile: 0421 646 978

www.mraentertainment.com



Internal Virus Database was built: Never
Checked by MAC OSX... we don't get viruses!
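One server-side setting worth ruling out for Error 2013 on large BLOB rows: the --max_allowed_packet flag in the command above only raises mysqldump's client-side limit, while the my.ini shown caps the server itself at 100M, which is not far above the ~50MB rows being sent. A sketch of the extra server-side headroom one could try (the values are illustrative, not recommendations):

```ini
# my.ini sketch: give the server more packet headroom and longer network
# timeouts than the defaults; the client-side mysqldump flag alone does not
# raise the server's limit.
[mysqld]
max_allowed_packet=512M
net_read_timeout=600
net_write_timeout=600
```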




Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Baron Schwartz

Hi Hartleigh,

Hartleigh Burton wrote:

[snip]


You might be able to find more information about the precise error in the 
server's error logs.  That will give us a better idea what might be wrong, if 
there is an error server-side, which seems likely to me.


Baron

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Hartleigh Burton

Hi Baron,

There are no MySQL errors in the event viewer.

On 18/06/2007, at 10:36 AM, Baron Schwartz wrote:


[snip]




Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Baron Schwartz

How about in c:\Program Files\MySQL\MySQL Server 5.0\data\hostname.err?

Cheers
Baron

Hartleigh Burton wrote:

[snip]



Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Hartleigh Burton
Hmm... no, there are no new errors in there. Nothing out of the
ordinary, that's for sure. Just notifications that MySQL has started
and is accepting connections, etc. :|



On 18/06/2007, at 11:06 AM, Baron Schwartz wrote:

[snip]




Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Baron Schwartz
Is there any indication that the mysqldump crash is killing the server and 
causing it to restart?  For example, ready for connections notifications just 
after you try a mysqldump?


Hartleigh Burton wrote:
[snip]



Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Hartleigh Burton
No there is no indication of that at all. The server service appears  
to be in perfect order, does not drop/restart and my other  
applications continue to function without any interruption.


It appears as if the mysqldump connection to the server is  
interrupted or maybe there is something in row 1 of `trackdata` that  
it does not like. This table contains a long blob field which at  
present does not contain any more than ~80MB per row. I also use the  
--hex-blob flag for mysqldump to try and get around any possible  
problems with exporting this data... I have no descriptive error  
messages anywhere and it is driving me nuts :|


On 18/06/2007, at 11:27 AM, Baron Schwartz wrote:

[snip]




Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Baron Schwartz
I'm out of ideas right now.  I don't actually use mysqldump that much and have 
never had this happen.  Hopefully someone else on the mailing list can help, or 
perhaps you can try #mysql on Freenode IRC.


Baron

Hartleigh Burton wrote:
[snip]



Re: mysqldump problem with large innodb tables...

2007-06-17 Thread Hartleigh Burton

Ok... this error has just started popping up in my .err log file...

070618 14:31:10  InnoDB: ERROR: the age of the last checkpoint is  
237821842,

InnoDB: which exceeds the log group capacity 237813351.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
070618 14:39:17  InnoDB: ERROR: the age of the last checkpoint is  
237829009,

InnoDB: which exceeds the log group capacity 237813351.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
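The checkpoint warning above points at the redo log being too small for the BLOB traffic: with the my.ini posted earlier (innodb_log_file_size=126M, two log files by default) the combined capacity is only ~252MB, while InnoDB is asking for at least 10x the largest row. A sketch of the change (sizes are illustrative; note that in MySQL 5.0 the server must be shut down cleanly and the old ib_logfile* files moved aside before it will start with a new log size):

```ini
# my.ini sketch: combined redo capacity = innodb_log_file_size x
# innodb_log_files_in_group. 2 x 512M = 1 GB, comfortably over 10x an ~80 MB row.
[mysqld]
innodb_log_file_size=512M
innodb_log_files_in_group=2
```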


On 18/06/2007, at 12:09 PM, Baron Schwartz wrote:

[snip]




Re: key_buffer_size and InnoDB tables

2007-04-13 Thread Ding Deng
Jim [EMAIL PROTECTED] writes:

 On the following page and in the example ini files installed with MySQL
 it's suggested that key_buffer_size is an option that affects MyISAM
 performance.

 http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

 But on the following page about tuning MySQL server parameters, it says
 "When tuning a MySQL server, the two most important variables to
 configure are key_buffer_size and table_cache", with no mention of its
 applicability to MyISAM or other types of tables.

 http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html

Because MyISAM is the default storage engine, I suspect.

Also take a look at the last paragraph of this page:

 For information on tuning the InnoDB storage engine, see Section
 14.2.11, “InnoDB Performance Tuning Tips”.

 Does this option only affect MyISAM performance, or does it also affect
 performance of operations on InnoDB tables? 

key_buffer_size has nothing to do with InnoDB tables.
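To make the split concrete, here is an illustrative my.cnf fragment (the sizes are placeholders, not tuning advice): each engine has its own main cache, and key_buffer_size only ever holds MyISAM index blocks.

```ini
[mysqld]
key_buffer_size=128M           # MyISAM index blocks only
innodb_buffer_pool_size=512M   # InnoDB data and index pages
```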




key_buffer_size and InnoDB tables

2007-04-12 Thread Jim
On the following page and in the example ini files installed with MySQL 
it's suggested that key_buffer_size is an option that affects MyISAM 
performance.


http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

But on the following page about tuning MySQL server parameters, it says
"When tuning a MySQL server, the two most important variables to configure
are key_buffer_size and table_cache", with no mention of its applicability
to MyISAM or other types of tables.


http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html


Does this option only affect MyISAM performance, or does it also affect 
performance of operations on InnoDB tables? 






Data back up for innodb tables - Copy paste

2007-02-18 Thread abhishek jain

Hi,
I want to copy and paste the data files of an InnoDB database. Is it possible?
I mean, can I just copy the data files the way we do for MyISAM tables?
Thanks,
Abhishek Jain


Re: Data back up for innodb tables - Copy paste

2007-02-18 Thread Christian Hammers

On 2007-02-19 abhishek jain wrote:
 I want to copy paste the data files of Innodb database, is it possible, i
 mean can i just copy the data files like that we do for myisam tables

If you mean for a daily backup while the server is running: No! 
You often end up with corrupted tables doing that with MyISAM, too.
Use mysqlhotcopy or mysqldump for that.

If you stop the server, then copy the files and make sure that you have
the same innodb_data_file_path statements at the target host, it may work.

bye,

-christian-




Re: low-priority-updates and innodb tables

2007-01-25 Thread ViSolve DB Team
Hi,

AFAIK, if we start mysqld with --low-priority-updates, it gives table updates a
lower priority than SELECT statements, irrespective of the storage engine.

Hence it will affect the priority of the update operation.
Ref: http://mysql.justdn.org/doc/refman/5.1/en/table-locking.html

- Original Message - 
From: Vitaliy Okulov [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Monday, January 22, 2007 7:27 PM
Subject: low-priority-updates and innodb tables


 [snip]


low-priority-updates and innodb tables

2007-01-22 Thread Vitaliy Okulov
Hello, mysql.

Hi all.
I want to ask about low-priority-updates and innodb tables. Does
low-priority-updates=1 affect on priority of select or update query on
innodb type tables?

-- 
Best regards,
 Vitaliy  mailto:[EMAIL PROTECTED]





Table status for innodb tables show innodb free 2 times

2006-10-12 Thread Dominik Klein
I recently deleted about 7.000.000 rows from a table, there are about 
4.000.000 left.

So I want to know how much space is free in table space now and execute:

mysql> show table status like 'table'\G
*************************** 1. row ***************************
           Name: table
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 4354196
 Avg_row_length: 210
    Data_length: 917536768
Max_data_length: 0
   Index_length: 2294349824
      Data_free: 0
 Auto_increment: 35040856
    Create_time: 2006-10-12 10:29:36
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_german1_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 6144 kB; InnoDB free: 1762304 kB
1 row in set (0,26 sec)

Why does it show two values for InnoDB free? Which one is correct?

I use MySQL 5.0.21

Regards
Dominik
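Since MySQL 5.0 only exposes these free-space figures inside the Comment string, here is a small sketch of pulling them out programmatically (the sample string is copied from the output above; the function name is ours, not a MySQL API):

```python
import re

def innodb_free_kb(comment):
    """Extract every 'InnoDB free: N kB' figure from a table's Comment field."""
    return [int(n) for n in re.findall(r"InnoDB free: (\d+) kB", comment)]

comment = "InnoDB free: 6144 kB; InnoDB free: 1762304 kB"
print(innodb_free_kb(comment))  # → [6144, 1762304]
```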




Re: Table status for innodb tables show innodb free 2 times

2006-10-12 Thread Heikki Tuuri

Dominik,

what does SHOW TABLE STATUS show for other tables?

Are you using innodb_file_per_table?

Best regards,

Heikki
Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up 
MyISAM tables

http://www.innodb.com/order.php


[snip]




