Re: Performance problem MySQL 4.0.20
Hello Ananda,

yes, the testmachine has the same data.

Regards,
Spiker

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Replication Question
If I'm replicating a master database to a slave (MyISAM tables), but the slave is busy serving up web pages, how does the replication thread get write access to the slave's tables if they are constantly being read?

TIA

Mike
RE: Blob data
Interesting, never tried compressing the data; sounds like that might be a nice add-on. Do you have any performance numbers you can share?

I posted some performance numbers on one of my implementations some time ago. I found the thread here:

http://lists.mysql.com/mysql/206337

On Tue, 3 Jul 2007, Rick James wrote:
> And while you are at it, you may as well compress the chunks. Your machine
> probably can compress/uncompress faster than it can write/read disk.
> [snip]
RE: Blob data
And while you are at it, you may as well compress the chunks. Your machine probably can compress/uncompress faster than it can write/read disk. I use Perl's Compress::Zlib or PHP's equivalent instead of MySQL's function for 2 reasons:

* The network traffic is compressed.
* MySQL puts an unnecessary extra byte on the end of the string (OK, this is totally insignificant).

And definitely compress each chunk separately. It seems that those library routines slow down (excessive memory realloc??) after about 50K. That is, you can probably compress 20 50K chunks faster than one 1M chunk.

My implementation did File <-> Database -- the huge blob was never instantiated completely in RAM, only one chunk at a time. (Imagine trying to store a 50GB file.)

> -----Original Message-----
> From: [EMAIL PROTECTED]
> Sent: Tuesday, July 03, 2007 2:08 PM
> To: Rick James
> Cc: 'Paul McCullagh'; 'Ann W. Harrison'; 'MySQL List'
> Subject: RE: Blob data
>
> Rick is dead-on correct; I call it chunking blob data. There is an
> article here on a simple implementation:
>
> http://www.dreamwerx.net/phpforum/?id=1
>
> [snip]
RE: Blob data
Rick is dead-on correct; I call it chunking blob data. There is an article here on a simple implementation:

http://www.dreamwerx.net/phpforum/?id=1

I've had hundreds of thousands of files in this type of storage before with no issues.

On Tue, 3 Jul 2007, Rick James wrote:
> I gave up on putting large blobs in MySQL -- too many limits around 16MB.
>
> Instead I broke blobs into pieces, inserting them with a sequence number.
> [snip]
Multiple binary log files question
Guys,

I would like to know if there is a way to have individual databases under the same instance or server write to separate binary log files. The idea is to have a separate binary log file for each database on the same server. The problem I am experiencing is sorting through the binary log file and not knowing which database the changes are associated with while attempting to reapply them.

Thanks in advance

~ Clyde Lewis
Database Administrator
Re: Blob data
Rick James wrote:
> Instead I broke blobs into pieces, inserting them with a sequence number.

Understanding the underlying problem, that still seems like an unnatural way to store pictures and documents.

> Added benefit: Does not clog up replication while huge single-insert is
> being copied over network and reexecuted on slaves.

The design of blobs that Jim did at DEC included the ability to send them across the network in chunks of a client-specified size. In 1982 it was quite common to have blobs that were larger than physical memory. What he did more recently was add a "blob repository", separate from the active tablespace, that allows the backup function to skip unchanged blobs while backing up active data. It also allows replicants to share a single copy of blobs, if appropriate. There are lots of ways of making large blobs work better in relational databases.

Regards,

Ann
RE: Blob data
I gave up on putting large blobs in MySQL -- too many limits around 16MB.

Instead I broke blobs into pieces, inserting them with a sequence number.

Added benefit: it does not clog up replication while a huge single insert is being copied over the network and re-executed on slaves.

> -----Original Message-----
> From: Paul McCullagh [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, June 27, 2007 2:57 AM
> To: Ann W. Harrison
> Cc: MySQL List; MySQL Internal
> Subject: Re: Blob data
>
> Hi Ann,
>
> Currently, the thoughts on how to make the BLOB references secure go
> like this:
>
> The BLOB reference consists of 2 components: The first component is
> basically an index used to find the BLOB on the server. The second
> component is a random number generated when the BLOB is created.
>
> The random number acts as an "authorization code", and is checked
> when the BLOB is requested. So if the authorization code supplied in
> the BLOB reference does not match the code stored by the server for
> that BLOB, then the BLOB is not returned.
>
> If the authorization code is a 4-byte number, then the chances of
> getting the correct code for any particular BLOB is 1 in 4 billion.
> This makes it practically impossible to "discover" a BLOB by
> generating BLOB references and requesting them from the server.
>
> However, it does mean that once you have a valid BLOB reference it
> remains valid until the BLOB is deleted. So you can pass it around to
> your friends, or post it on the internet if you like.
>
> In order to prevent this (it will depend on the site, as to whether
> this is required), it would be possible to add a dynamic component to
> the BLOB reference which has a certain lifetime (for example, it
> expires after a certain amount of time, or when a database session is
> closed).
>
> Such a component would have to be added to the BLOB reference URL by
> the storage engine on the fly. So, as the SELECT result is being
> generated, the dynamic component is added to the BLOB references
> returned in the rowset.
>
> Security of the BLOB streaming stuff is one of the major issues, so
> further comments, questions and ideas are welcome!
>
> Best regards,
>
> Paul
>
> On Jun 26, 2007, at 4:36 PM, Ann W. Harrison wrote:
>
> > Paul McCullagh wrote:
> >>
> >> It will also be possible to store the BLOBs "out-of-row". In this
> >> case, only a BLOB reference is stored in the row. The reference is
> >> basically a URL which can be used to retrieve the data. So when
> >> you do an SQL SELECT which includes a BLOB column, the resulting
> >> rowset does not contain the data, just the BLOB reference (URL).
> >
> > How does this work with access privileges? Can you just send random
> > numbers in the URL until you start seeing blob data?
> >
> > Best regards,
> >
> > Ann
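Paul's two-component reference scheme quoted above can be sketched in a few lines of Python. The class and method names are invented, and a dict stands in for the engine's server-side BLOB storage; only the scheme itself (index plus 4-byte random authorization code, checked on every fetch) comes from the thread.

```python
import secrets

class BlobStore:
    """Toy model of the proposed scheme: a BLOB reference is the pair
    (index, auth_code), and the random code is checked on every fetch."""

    def __init__(self):
        self._blobs = {}   # index -> (auth_code, data)
        self._next_id = 0

    def put(self, data: bytes):
        # 4-byte random number: a guessed code succeeds 1 time in ~4 billion
        auth = secrets.randbelow(2**32)
        self._blobs[self._next_id] = (auth, data)
        ref = (self._next_id, auth)   # this pair is the BLOB reference
        self._next_id += 1
        return ref

    def get(self, ref):
        index, auth = ref
        stored = self._blobs.get(index)
        if stored is None or stored[0] != auth:
            return None    # wrong authorization code: BLOB is not returned
        return stored[1]
```

As Paul notes, such a reference stays valid until the blob is deleted; a time- or session-limited dynamic component would have to be layered on top of this.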
mysqlcheck .TMD and .TMM files
Hi. I've been using mysqlcheck on some very large databases, and I'm running into a situation where the partition the database files reside on is now getting too small to hold the mysqlcheck temp files. I've checked whether mysqlcheck has a tmpdir command-line option, and it doesn't; I also tried the mysqld --tmpdir option, but the .TMD and .TMM files that mysqlcheck creates don't go to the tmpdir I specify. Is there a way to specify where these .TMD and .TMM files are created, or are they always created in the same directory as the database files?

Regards

/Cole
how to suppress the SHOW WARNINGS limit ?
hello,

From the page http://dev.mysql.com/doc/refman/5.0/en/show-warnings.html I understand that if I want to look at all the warnings with the command:

show warnings;

then I first have to set a limit bigger than any number of warnings that could occur, say:

SET max_error_count=1000;

(I know that it might be painful to look at a billion warnings... but) Is there a way to suppress the limit entirely, so that I am never limited to the default value of 64?

thanx!

_-¯-_-¯-_-¯-_-¯-_
Gilles Missonnier
IAP - [EMAIL PROTECTED]
01 44 32 81 36
RE: database cache /
Thanks for the leads. I'll double-check my indices and check out the following links:

> http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html
> http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html
Re: Performance problem MySQL 4.0.20
Please mount your disks using "forcedirectio".

Regards,
Juan

On 7/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I've a performance problem with our database: Some select statements
> take about 20 seconds. The same statements on an equal testmachine take
> less than 1 second.
> [snip]
Re: database structure
Hey there again,

I suggest you look up a tutorial about database normalisation. Good ones are often hard to find. In general, you give every table whose rows will be referenced from other tables a numeric id (INT(11) UNSIGNED) as primary key, and you use that key as the constraint. Your table `locations` will be referenced in your table `tags` like this:

FOREIGN KEY (`location`) REFERENCES `locations`(`id`) ON UPDATE CASCADE ON DELETE SET NULL

Search the MySQL manual for those update/delete options to see what they do, and adjust them to your needs. The ones given are the ones I use in most situations.

HTH,
boro

Hiep Nguyen schreef:
> Now, if I have a location table with id, name, address, phone, fax,
> etc... Should I put id or name into the tag table? If id is used, then
> how do I look up the name, address, phone, fax, etc. when I do a select
> on the tag table?
> [snip]
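The MySQL DDL above can't be executed standalone here, but the same constraint behaviour can be sketched with Python's bundled sqlite3 module (the table contents are invented, and note that SQLite needs foreign-key enforcement switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite

conn.executescript("""
CREATE TABLE locations (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tags (
    id INTEGER PRIMARY KEY,
    label TEXT,
    location INTEGER,
    FOREIGN KEY (location) REFERENCES locations(id)
        ON UPDATE CASCADE ON DELETE SET NULL
);
INSERT INTO locations VALUES (1, 'Head office');
INSERT INTO tags VALUES (10, 'printer', 1);
""")

# Deleting the referenced location nulls the tag's pointer instead of
# deleting the tag or raising an error -- that is what SET NULL means:
conn.execute("DELETE FROM locations WHERE id = 1")
row = conn.execute("SELECT label, location FROM tags").fetchone()
print(row)  # -> ('printer', None)
```

Swapping SET NULL for CASCADE would instead delete the tag row along with its location, which is the adjustment-to-your-needs boro mentions.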
Re: performance help: preventing 'using temporary; using filesort'
Rich Brant schrieb:
> Is there anyway to prevent the temporary and filesort?
>
> SELECT
> t1.sourceID as sourceID,
> count(t1.sourceID) as clicks,
> [...]
> ORDER BY clicks desc, conversions desc;
>
> When using EXPLAIN:
>
> [...] Using where; Using temporary; Using filesort |

When using ORDER BY on a computed column, MySQL cannot use any index, of course; you would need an index on COUNT(t1.sourceID), which is not possible.

-- 
Sebastian
RE: database structure
Now, if I have a location table with id, name, address, phone, fax, etc., should I put the id or the name into the tag table? If the id is used, then how do I look up the name, address, phone, fax, etc. when I do a select on the tag table?

Thank you for all your help

T. Hiep

-----Original Message-----
From: Mogens Melander [mailto:[EMAIL PROTECTED]
Sent: Monday, July 02, 2007 3:45 PM
To: mysql@lists.mysql.com
Subject: Re: database structure

On Mon, July 2, 2007 21:10, Hiep Nguyen wrote:
>
> take your advice, i looked in to JOIN and i got the idea. but i noticed
> that in order to use JOIN, don't you need to have the same column name
> in both tables? i just don't see it in your example here. is there
> something that i'm missing?

Using the form:

select t1.field1 data1, t2.field1 data2, t3.fieldn data3
from table_a t1
left join table_b t2 on ( t1.id=t2.t1_ref )
left join table_n t3 on ( t2.id=t3.t2_ref );

You can join on almost anything. ?? Can typecasts be used in this scenario ??

>
> can u give a select example with JOIN on three tables above?

-- 
Later
Mogens Melander
+45 40 85 71 38
+66 870 133 224
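Mogens's three-table join form quoted above works the same way in any SQL engine; the join columns need matching values, not matching names. A runnable sketch with Python's sqlite3 and invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_a (id INTEGER PRIMARY KEY, field1 TEXT);
CREATE TABLE table_b (id INTEGER PRIMARY KEY, field1 TEXT, t1_ref INTEGER);
CREATE TABLE table_n (id INTEGER PRIMARY KEY, fieldn TEXT, t2_ref INTEGER);
INSERT INTO table_a VALUES (1, 'a-row');
INSERT INTO table_b VALUES (5, 'b-row', 1);
INSERT INTO table_n VALUES (9, 'n-row', 5);
""")

# table_b.t1_ref holds table_a.id values, table_n.t2_ref holds
# table_b.id values -- the column names never have to match:
rows = conn.execute("""
    SELECT t1.field1 AS data1, t2.field1 AS data2, t3.fieldn AS data3
    FROM table_a t1
    LEFT JOIN table_b t2 ON (t1.id = t2.t1_ref)
    LEFT JOIN table_n t3 ON (t2.id = t3.t2_ref)
""").fetchall()
print(rows)  # -> [('a-row', 'b-row', 'n-row')]
```

This also answers the id-versus-name question: store the numeric id in the tag table, and fetch the name, address, phone, etc. through exactly this kind of join at SELECT time.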
Re: database cache /
At 2:45 PM +1000 7/3/07, Daniel Kasak wrote:
>On Mon, 2007-07-02 at 21:19 -0700, Ed Lazor wrote:
>> I have a 400mb database. The first query to tables takes about 90
>> seconds. Additional queries take about 5 seconds. I wait a while and
>> run a query again; it takes about 90 seconds for the first one and the
>> rest go quickly. I'm guessing data is being loaded into memory which
>> is why things speed up. Does this sound right? Is there a way to keep
>> the table in memory? Nothing is changing in the data.
>
>You probably have the query cache enabled already if it's behaving like
>this. See:
>http://dev.mysql.com/doc/refman/5.0/en/query-cache.html

The same behavior might be observed without the query cache being enabled. The key buffer caches MyISAM index blocks, filesystem caching is used for data blocks, etc.

>The query cache only works for the current connection, so if you open a
>connection and execute a query, it's only cached for *that* connection.

It's not cached only for that connection, actually.

>But you should get MUCH better performance than what you're currently
>getting anyway. You need to look at your queries, and put indexes on
>appropriate fields. Generally you want them on fields used in joins, and
>fields in your 'where' clause.

Always a good idea. Also, you might want to increase your server buffer sizes as appropriate. See, for example:

http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html
http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html

-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Re: performance help: preventing 'using temporary; using filesort'
On 6/29/07, Rich Brant <[EMAIL PROTECTED]> wrote:
> Hello all. I'm looking for help with the query below. Is there anyway
> to prevent the temporary and filesort?

The filesort is caused by either the ORDER BY or the GROUP BY. There are sections in the manual about how to get it to use indexes for these. That's the only way to prevent the filesort.

- Perrin
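The effect under discussion is easy to reproduce outside MySQL: ordering by a computed aggregate forces a separate sort pass because no index can cover a value that only exists after grouping. A sketch with Python's sqlite3 (table and data invented; EXPLAIN QUERY PLAN is SQLite's rough analogue of MySQL's EXPLAIN, and typically reports a temp B-tree for this ORDER BY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clicks (sourceID INTEGER);
INSERT INTO clicks VALUES (1), (2), (2), (3), (3), (3);
""")

sql = """
    SELECT sourceID, COUNT(*) AS clicks
    FROM clicks
    GROUP BY sourceID
    ORDER BY clicks DESC
"""

# No index can exist on COUNT(*), so the engine must sort the
# aggregated rows itself (MySQL: "Using temporary; Using filesort").
for step in conn.execute("EXPLAIN QUERY PLAN " + sql):
    print(step)

rows = conn.execute(sql).fetchall()
print(rows)  # -> [(3, 3), (2, 2), (1, 1)]
```

The query still returns the right answer, of course; the sort pass is a cost, not an error, which is why the manual's advice is to restructure so an index covers the ORDER BY/GROUP BY where possible.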
grants
Hi All,

We have set up replication for our production database, and we need to monitor the slave and the master. I created a user with only the SELECT privilege, and when I do "show master status" on the master DB, it says "Access denied; you need the SUPER,REPLICATION CLIENT privilege for this operation".

Is it necessary to grant the SUPER privilege, or can I do it some other way, without SUPER? This user is just for monitoring.

regards
anandkl
Re: Performance problem MySQL 4.0.20
Does your test machine have the same data as your problem database? Can you also please show the EXPLAIN plan from both machines?

On 7/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I've a performance problem with our database: Some select statements
> take about 20 seconds. The same statements on an equal testmachine take
> less than 1 second.
> [snip]
Performance problem MySQL 4.0.20
Hello,

I've a performance problem with our database: some select statements take about 20 seconds. The same statements on an equal testmachine take less than 1 second.

Server:
CPU: 2 x 440 MHz sparcv9
RAM: 2GB (top: Memory: 2048M real, 931M free, 732M swap in use, 2839M swap free)
OS: SunOS 5.9

Database:
Version: MySQL 4.0.20 (built from source)

Traffic:
averaged 4985 Bytes/sec sent
averaged 41 questions/sec
averaged 0.3 connections/sec

Some options in my.cnf:

[mysqld]
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 96
sort_buffer_size = 512K
net_buffer_length = 8K
myisam_sort_buffer_size = 8M
query_cache_size = 16M
max_connections = 125

Is there just too much traffic on the machine, or could tuning some parameters help? Thanks.

Best regards,
Spiker
Re: Field size for UTF-8 characters
I believe varchar(50) means 50 characters, not 50 bytes. So I usually don't worry about this at all when designing table schemas for Japanese characters.

On 7/3/07, Cathy Murphy <[EMAIL PROTECTED]> wrote:
> I am limiting text to 50 chars in a mysql field by varchar(50) (UTF-8
> enabled), but what if the user enters 50 Japanese chars? Does mysql
> accommodate it, or do we have to allow some buffer during design?

-- 
[EMAIL PROTECTED]
Kenji Hirohama
Re: Field size for UTF-8 characters
From my experience with InnoDB: IF the field is indexed, it will use 3 bytes per character, so varchar(50) = 150 bytes when fully populated (+1 for the length = 151 bytes). IF the field is not indexed, each character will consume between 1 and 3 bytes, so varchar(50) = 51 -> 151 bytes when fully populated.

Ben

Cathy Murphy wrote:
> I am limiting text to 50 chars in a mysql field by varchar(50) (UTF-8
> enabled), but what if the user enters 50 Japanese chars? Does mysql
> accommodate it, or do we have to allow some buffer during design?
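The characters-versus-bytes distinction above is easy to check from Python: a Japanese character occupies 3 bytes in UTF-8, so 50 such characters fit in a utf8 VARCHAR(50) as characters while needing 150 bytes of storage, which is exactly the 3-bytes-per-character reservation Ben describes for indexed columns.

```python
text = "日" * 50  # 50 Japanese characters -- fits in VARCHAR(50)

print(len(text))                  # -> 50 (characters, the VARCHAR limit)
print(len(text.encode("utf-8")))  # -> 150 (bytes on disk / in the index)
```

So no extra buffer is needed in the schema design; the column length is counted in characters, and the byte cost only matters for storage and index sizing.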