Re: mysqlhotcopy problem
On Wed, 2003-07-30 at 17:40, Vladimir Shiray wrote:

> How can I make a hotcopy of my database if it has many tables (more than 1000)? mysqlhotcopy fails when it tries to lock all the tables. How many tables can I lock at one time? Can I increase this limit?

Hi,

You probably ran out of file handles. The number of file handles MySQL asks for can be increased by modifying the table_cache and max_connections variables in my.cnf, eg.

    set-variable = table_cache=3000

You might need to tinker with /proc/sys/fs/file-max if you hit your operating system's hard limit.

regards, Martin

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
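If you want to see where you stand before touching my.cnf, the kernel-wide ceiling and current usage can be read straight out of /proc (Linux only; the 65536 value below is just an example, not a recommendation):

```shell
#!/bin/sh
# Kernel-wide file handle ceiling and current usage (Linux /proc interface)
cat /proc/sys/fs/file-max   # hard ceiling for the whole machine
cat /proc/sys/fs/file-nr    # handles allocated / free / ceiling
# As root, raise the ceiling if mysqld keeps hitting it:
# echo 65536 > /proc/sys/fs/file-max
```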
RE: Replication stops for no reason...
Hi Jeff,

We had similar problems caused by replication crossing a firewall with a 5-minute timeout on its access control list. If the replication stream went idle for 5 minutes, the firewall would drop the connection and MySQL wouldn't notice. Our workaround was to run a daemon on the master to replace a row in a special table once per minute, hence keeping the connection open. I think there are some timeout settings in newer MySQL versions which get the slave to reconnect after a period of inactivity.

regards, Martin

On Mon, 2003-07-07 at 19:20, Jeff McKeon wrote:

> UPDATED INFO
>
> mysql> show slave status \G;
> *************************** 1. row ***************************
>           Master_Host: 10.32.1.10
>           Master_User: repli
>           Master_Port: 3306
>         Connect_retry: 60
>              Log_File: db01tc0927-bin.034
>                   Pos: 468335571
>         Slave_Running: Yes
>       Replicate_do_db:
>   Replicate_ignore_db:
>            Last_errno: 0
>            Last_error:
>          Skip_counter: 0
> 1 row in set (0.00 sec)
>
> ERROR: No query specified
>
> Jeff McKeon, IT Manager, Telaurus Communications LLC, [EMAIL PROTECTED], (973) 889-8990 ex 209
>
> -----Original Message-----
> From: Jeff McKeon
> Sent: Monday, July 07, 2003 1:47 PM
> To: Mysql List
> Subject: Replication stops for no reason...
>
> mysql Ver 11.15 Distrib 3.23.41, for pc-linux-gnu (i686)
>
> I have the following replication setup: DB1 -> DB2 -> DB3. Every so often, replication from DB1 to DB2 just stops. There are no error messages that I can see.
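A minimal sketch of that once-a-minute keepalive (the table name and schema are invented for illustration; scheduling is left to cron or a loop in the daemon):

```sql
-- run once on the master:
CREATE TABLE heartbeat (id INT NOT NULL PRIMARY KEY, ts TIMESTAMP);

-- run once per minute on the master; each REPLACE is written to the
-- binlog, so the replication stream never goes idle long enough for
-- the firewall to drop the connection:
REPLACE INTO heartbeat (id) VALUES (1);
```

Because the write replicates, the slave also has something to apply every minute, which doubles as a cheap liveness check on the whole chain.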
> I know it's stopped because I have a check that runs every 5 minutes to see whether a certain piece of data in DB1 matches DB3. If I issue "slave stop" and then "slave start" commands, it's then fine for another day or so... Any idea how I can track down the cause, or where a log may be for this?
>
> Thanks, Jeff
Re: I don't get binlogs to log properly
On Thu, 2003-06-19 at 14:13, [EMAIL PROTECTED] wrote:

> binlog-do-db= test mysql

Hi,

There should only be one database per line:

    binlog-do-db= test
    binlog-do-db= mysql

regards, Martin
replication error-code 1053 - what is it ?
Hi,

I have a busy MySQL slave which is directly updated by a periodic purge script that throws away irrelevant data. Occasionally (er, a few times a day, actually), the slave quietly falls over, but ITS slaves trip with:

    [SQL...] partially completed on the master and was aborted. There is
    a chance that your master is inconsistent at this point. If you are
    sure that your master is ok, run this query manually on the slave
    and then restart the slave with SET SQL_SLAVE_SKIP_COUNTER=1;
    SLAVE START;

Looking in the binlog, I find the affected SQL has error-code 1053, about which perror knows nothing. I think this must be some timeout or locking issue. Does anyone know for certain what the error means? Is there something I can tweak to alleviate the problem? MySQL v3.23.51.

regards, Martin
Re: mysqlhotcopy generates a segmentation fault
On Wed, 2003-06-11 at 01:55, Danny Cron wrote:

> I have three boxes. Mysqlhotcopy works on box1, but it generates a segmentation fault on box2 and box3. I want it to work on all boxes. I am prepared to upgrade them all to the level of box3 (but mysqlhotcopy doesn't work with that configuration). Does anyone know of a solution?

Hi,

I have had the same problem running mysqlhotcopy on the MySQL installed with Woody. I think there is some problem in the client library while it parses the my.cnf file, causing a segmentation fault. I'm not sure if the problem is inside DBI or the MySQL libs. A quick solution is to remove the mysql_read_default_group=mysqlhotcopy clause from the connection string:

    my $dbh = DBI->connect("dbi:mysql:$dsn;mysql_read_default_group=mysqlhotcopy",
                           $opt{user}, $opt{password},
                           { RaiseError => 1, PrintError => 0, AutoCommit => 1 });

becomes:

    my $dbh = DBI->connect("dbi:mysql:$dsn",
                           $opt{user}, $opt{password},
                           { RaiseError => 1, PrintError => 0, AutoCommit => 1 });

This loses the ability to use dedicated mysqlhotcopy sections in your config file, but at least gets around the segfault.

regards, Martin
Re: Join with no matches on other table
On Wed, 2003-06-11 at 13:48, Ville Mattila wrote:

> Hello, I have two tables, one holding information about the cottages and the other recording when each cottage is booked. The table structures are as follows:
>
> Cottages: code, name, equipment
> Reservations: cottagecode, begindate, enddate
>
> I'm looking for a query structure that I can use to find, for example, cottages that are free on 15. - 16. July. Any help?

Hi,

Something like:

    select c.*
    from cottage c
    left join reservations r
        on r.cottagecode = c.code
       and r.begindate <= '2003-07-16'
       and r.enddate   >= '2003-07-15'
    where r.cottagecode is null

regards, Martin
Re: Printing table descriptions
On Wed, 2003-06-11 at 14:29, Fernando Gerent wrote:

> Hi! I need to print the descriptions of all tables in my database. The problem is, there are about 200 of them, and the only way I know to do this is to send each of the screens to a file, and then print the file. Isn't there an easier way, like printing directly from mysql, or at least saving all the descriptions to a file at once?

Hi,

Try:

    mysqldump -d -u user -p database-name > tables.sql

The -d flag dumps the table definitions without any data.

regards, Martin
RE: problem with DBI connection interaction with sub-process
Hi,

I reworked my code so that the parent process opens the DBI connection after the child is forked, and everything works fine. So it looks like Robin is right. The one thing I don't understand just now is how the parent automatically re-connected to MySQL after the child clobbered the DBI connection.

regards, Martin
Re: Help! How to generate a list of consecutive numbers with a SELECT?
Here's a quick hack, but you need to have another table guaranteed to contain 1000 or more rows:

    create temporary table n (
        i int auto_increment not null,
        primary key (i)
    );
    insert into n select null from BIG_TABLE limit 1000;

regards, Martin

On Mon, 2003-06-09 at 15:00, Martin Szabo wrote:

> I've tried to find a way to generate a list of consecutive numbers with a SELECT statement. I would like a "Select ..." that returns the numbers between 1 and 20 on separate rows. So far, the only solution I have found is to keep a table with consecutive numbers from 1 to 1000, and then select the range I want from that table. I've looked in the documentation but haven't found anything about a nicer solution. If anyone knows of a better solution, please share. Thanks in advance. Martin
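With the helper table filled, the range the original poster wanted falls out of a simple select:

```sql
select i from n where i between 1 and 20 order by i;
```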
Re: is it possible to get around 4 billion row limit
Hi Edward,

Thanks for the info, but I tried setting avg_row_length with no effect. I'm pretty sure you only need that for dynamic-format tables (with varchars and text). My table is fixed format, and the table does claim to get around the 2/4GB limit. My problem is the maximum number of rows, not the final table size.

    Max_data_length: 47244640255
    Create_options: max_rows=4294967295

I'm guessing that you can't have more than 4294967295 rows in a MyISAM table (or maybe the SQL parser can't handle numbers > 2^32 as the value for max_rows). I think I'll resubmit the question as "What is the maximum number of rows MySQL can store in a table?".

thanks, Martin

On Tue, 2003-06-10 at 11:54, Becoming Digital wrote:

> Sorry, I meant to include this link, too. http://www.mysql.com/doc/en/Table_size.html
>
> Edward Dudlik
> Becoming Digital
> www.becomingdigital.com
>
> ----- Original Message -----
> From: Martin Waite [EMAIL PROTECTED]
> To: MySQL List [EMAIL PROTECTED]
> Sent: Monday, 09 June, 2003 04:45
> Subject: is it possible to get around 4 billion row limit
>
> Hi, I want to create a table with a lot (8 billion) of small fixed-length records. I thought setting MAX_ROWS in the create table would do this for me, but it looks like it quietly ignores values over 4.2 billion. Is this a hard limit in MySQL? Does MySQL 4.0.x have the same limitation? (MySQL v3.23.51)
>
> regards, Martin
>
> eg.
>
> CREATE TABLE `txn_tag` (
>   txn_id int unsigned not null,
>   `tag_id` smallint unsigned NOT NULL default '0',
>   `value_id` int(11) NOT NULL default '0',
>   unique KEY (txn_id, tag_id, value_id),
>   KEY `tag_id` (`tag_id`,`value_id`),
>   KEY `value_id` (`value_id`,`tag_id`)
> ) max_rows=8000000000;
>
> show table status like 'txn_tag'\G
> *************************** 1. row ***************************
>            Name: txn_tag
>            Type: MyISAM
>      Row_format: Fixed
>            Rows: 0
>  Avg_row_length: 0
>     Data_length: 0
> Max_data_length: 47244640255
>    Index_length: 1024
>       Data_free: 0
>  Auto_increment: NULL
>     Create_time: 2003-06-09 09:40:57
>     Update_time: 2003-06-09 09:40:57
>      Check_time: NULL
>  Create_options: max_rows=4294967295
>         Comment:
> 1 row in set (0.00 sec)
What is the maximum number of rows per table ?
Hi,

I can't find any mention of this in the MySQL manual. It discusses the maximum file size, but what I want to know is whether there is some limitation (perhaps in the index file format) on the number of rows a MyISAM or InnoDB table can hold.

regards, Martin
is it possible to get around 4 billion row limit
Hi,

I want to create a table with a lot (8 billion) of small fixed-length records. I thought setting MAX_ROWS in the create table would do this for me, but it looks like it quietly ignores values over 4.2 billion. Is this a hard limit in MySQL? Does MySQL 4.0.x have the same limitation? (MySQL v3.23.51)

regards, Martin

eg.

    CREATE TABLE `txn_tag` (
      txn_id int unsigned not null,
      `tag_id` smallint unsigned NOT NULL default '0',
      `value_id` int(11) NOT NULL default '0',
      unique KEY (txn_id, tag_id, value_id),
      KEY `tag_id` (`tag_id`,`value_id`),
      KEY `value_id` (`value_id`,`tag_id`)
    ) max_rows=8000000000;

    show table status like 'txn_tag'\G
    *************************** 1. row ***************************
               Name: txn_tag
               Type: MyISAM
         Row_format: Fixed
               Rows: 0
     Avg_row_length: 0
        Data_length: 0
    Max_data_length: 47244640255
       Index_length: 1024
          Data_free: 0
     Auto_increment: NULL
        Create_time: 2003-06-09 09:40:57
        Update_time: 2003-06-09 09:40:57
         Check_time: NULL
     Create_options: max_rows=4294967295
            Comment:
    1 row in set (0.00 sec)
problem with DBI connection interaction with sub-process
Hi,

Maybe this is a DBI question rather than MySQL, but here goes... I have a perl script which forks in order to work around a memory leak in XML::Parser. The child parses the data and the parent reads the results back from the child to populate some temporary tables in the database. What seems to happen is that when the child exits, the DBI connection to the database is lost and automatically reconnected (how?). Obviously this blows away my temporary tables. Does anyone know how to avoid this? I've tried $SIG{CHLD}='IGNORE'; with no effect. My current workaround is to read all the data into the parent and then do the DB work, but is there a better way?

regards, Martin
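The fix that worked (per the follow-ups in this thread) was to fork before opening the connection, so the child never inherits a database handle to clobber on exit. A minimal Python sketch of that ordering - the pipe stands in for the real XML::Parser output, and no actual database is involved:

```python
import os

def parse_in_child():
    """Fork BEFORE any DB handle exists; the child writes its results
    down a pipe and exits without ever touching a connection."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: do the leaky parsing work
        os.close(r)
        os.write(w, b"parsed-results")
        os._exit(0)                   # exit without running any destructors
    os.close(w)                       # parent: read everything back
    chunks = []
    while True:
        chunk = os.read(r, 4096)
        if not chunk:                 # EOF once the child has exited
            break
        chunks.append(chunk)
    os.close(r)
    os.waitpid(pid, 0)
    # only now would the parent open its DB connection and load the
    # temporary tables, so there is nothing for the child to tear down
    return b"".join(chunks)

print(parse_in_child())
```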
RE: problem with DBI connection interaction with sub-process
Hi Robin,

This sounds bang-on. Many thanks.

regards, Martin

On Mon, 2003-06-09 at 13:34, Robin Keech wrote:

> You could try forking before making a connection to the database. A forked program will share the same resources (I think), and the DBI could be getting the child signal and closing the connection. However, any resources you create after the fork will be your own.
>
> Hope it helps (bit of a guess really),
> Robin Keech
> Java Developer, Synectics Ltd
>
> -----Original Message-----
> From: Martin Waite [mailto:[EMAIL PROTECTED]]
> Sent: 09 June 2003 12:10
> To: MySQL List
> Subject: problem with DBI connection interaction with sub-process
>
> <snip>
> What seems to happen is that when the child exits, the DBI connection to the database is lost and automatically reconnected (how?). Obviously this blows away my temporary tables. Does anyone know how to avoid this?
> <snip>
>
> regards, Martin
Re: another replication question..
Hi Ross,

On Wed, 2003-06-04 at 16:44, Ross Simpson wrote:

> I have another question that doesn't seem to be addressed in the mysql manual. Does any sort of locking occur while a slave is updating its local databases? Can I still read any/all tables while this process is occurring?

Replication works by applying the SQL used to update the master to the slaves. All updates in MySQL take a lock of some sort: a table lock for MyISAM tables, row locks for InnoDB.

> If there is locking, is the lock table-based or for the entire db?

It depends on the table type: table locks for MyISAM, row locks for InnoDB.

> My slaves will be read-only dbs in a production environment with a lot of traffic, so I need to ensure that they can always serve requests.

Run some benchmarks for each of the updates you apply to the master (disconnected from replication, of course). All tables involved in an update are locked for the duration of the update, so it is important to measure how long your updates take. You may need to redesign your tables (more effective indexes) or partition your updates so that one long update is broken down into many smaller updates.

regards, Martin
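Partitioning a long-running update by key range might look like this (table and column names are invented for illustration; each statement replicates, and locks, independently):

```sql
-- one big statement: holds the MyISAM table lock until it finishes
-- UPDATE orders SET archived = 1 WHERE created < '2003-01-01';

-- broken into key-range chunks: readers get a look-in between chunks
UPDATE orders SET archived = 1
 WHERE created < '2003-01-01' AND id BETWEEN 1 AND 100000;
UPDATE orders SET archived = 1
 WHERE created < '2003-01-01' AND id BETWEEN 100001 AND 200000;
-- ... and so on, driven by a script stepping through the id range
```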
RE: table copying/replication
Hi Ross,

On Wed, 2003-06-04 at 16:35, Ross Simpson wrote:

> This sounds like the best idea yet :) I have a couple of questions:
>
> - I need to keep the data in x_shadow while still creating table x.. will copying accomplish the same thing? I'm guessing it will be slower, but keeping the 'shadow' table around is important.

I'm not sure I understand the question, but I think you're saying you want to keep a shadow table after the rename. Erm, you could create a shadow of the shadow, or create a new shadow after the switchover. You have two options for creating the shadow tables:

* Inside MySQL:

      create table x_shadow ( ... );
      insert into x_shadow select * from x;

  This SQL will be replicated to your slaves (ie. prod and replicas), but depending on the amount of data involved it might hammer your slaves and make your production system temporarily unusable.

* Outside MySQL, using an OS-level copy (only works for MyISAM tables):

  - in mysql, flush tables - and possibly lock them to prevent anyone else updating them
  - cd mysql-datadir
  - cp x.MYD x_shadow.MYD
  - cp x.MYI x_shadow.MYI
  - cp x.frm x_shadow.frm

  But note that this will have to be repeated on each replica. You could tar up the x_shadow files on stage, copy the tarball to the replicas and unpack them in the correct directory. mysqlhotcopy can be used to do the locking and copying for you.

Once the copied shadow tables are in place, a rename issued on the stage server should replicate fine, swapping x and x_shadow on all replicas.

> - Will either / both of these (rename and copy) preserve indexes? Each table has ~5 indexes, and I don't want to reindex.

Rename will correctly handle the indexes. An OS-level copy of MyISAM files will also preserve the indexes (as long as you get all the files belonging to a table).

regards, Martin
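The OS-level copy above, as a small script. This is the shape of the procedure only: the FLUSH TABLES / LOCK TABLES must be issued from a mysql session that stays open for the duration of the copy (mysqlhotcopy handles that for you), and the datadir path in the example is an assumption:

```shell
#!/bin/sh
# Copy a MyISAM table's three files to create <table>_shadow.
copy_myisam_table() {
    datadir=$1; table=$2
    for ext in MYD MYI frm; do
        cp "$datadir/$table.$ext" "$datadir/${table}_shadow.$ext" || return 1
    done
}

# example invocation (datadir path is an assumption):
# copy_myisam_table /var/lib/mysql/mydb x
```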
RE: table copying/replication
On Tue, 2003-06-03 at 22:44, Ross Simpson wrote:

> Thanks for the reply. I didn't explain properly :) A diagram should help:
>
>            (1)           (2)
>     stage -----> prod -----> replicas
>
> 'stage' is a staging mysql instance, where changes are made all the time. When the data is ready for production, it needs to be pushed to 'prod', at which time it will be replicated out to all the slaves. Step 2 is covered by the answer to my previous question. Step 1 is really my question. My need is that somehow a table already existing on stage can be copied/replicated/etc over to prod, but _only_ when requested, and then immediately.

Run a MySQL replication chain from stage to prod to the replicas. On stage, prepare the data in shadow tables with different names to those used in prod. Say, if your real tables are one, two, three, four, five, then create shadow tables one_shadow, two_shadow, three_shadow... Once the data is ready in the shadow tables, do a rename:

    rename table one to one_old, one_shadow to one,
                 two to two_old, two_shadow to two, ...

Replication will apply the rename to all your replicas and you should get a fairly snappy switchover. The only drawback is that you have two copies of your tables, which might be impractical depending on the amount of data involved.

regards, Martin
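Putting the whole cycle together for a single table (names are placeholders; RENAME TABLE swaps the names atomically, which is what makes the switchover snappy):

```sql
-- 1. build the shadow on stage; these statements replicate down the chain
CREATE TABLE one_shadow ( /* same columns and indexes as `one` */ id INT NOT NULL PRIMARY KEY );
INSERT INTO one_shadow SELECT * FROM staging_source;   -- source name is an assumption

-- 2. atomic switchover, also replicated to prod and the replicas
RENAME TABLE one TO one_old, one_shadow TO one;

-- 3. once happy with the new data, reclaim the space
DROP TABLE one_old;
```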
Re: [repost] Finding table name when using Union
On Tue, 2003-03-11 at 07:33, Richard Taubo wrote:

> When using the Union statement in MySQL, is it possible to retrieve the corresponding table name for a given row? The reason I need to know this is that I present info from different tables in their own way. I know I can create a column in each table that describes which table the row is actually being retrieved from, but I am looking for a more intelligent solution.

Couldn't you include the table name in the select clauses of the union? eg.

    select 'table_1' as src, t1.* from table_1 t1 where blah and blah
    union
    select 'table_2' as src, t2.* from table_2 t2 where blah and blah

== Martin

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
Re: too many open files, error 24, but max-files is large
On Wed, 2003-03-05 at 15:16, Johannes Ullrich wrote:

> on mysql 4.0.10-gamma (rpm install, Redhat advanced server), I am running into 'too many open files' issues (error 24). I am using a rather large merge table (30 distinct tables merged), which is likely the culprit. The error shows up once I have about a dozen connections. I did increase the number of file handles substantially (16384) and it looks like this limit is not reached:
>
>     cat /proc/sys/fs/file-nr
>     5328  4758  16384
>
> This is a 2.4 kernel, so inodes are allocated automatically and there is no inode-max setting. 'open_files_limit' is set to 0. 'table_cache' is set to 1024.

Hi,

You should check your soft limit:

    ulimit -aS

This might be significantly lower than the hard limit for your system, and will be the value inherited by mysql when it starts up. MySQL uses max_connections and table_cache_size to figure out what its maximum requirement will be. If this requirement is higher than the soft limit, MySQL raises the soft limit using setrlimit(). The code that sets the limit is in sql/mysqld.cc:

    uint wanted_files=10+(uint) max(max_connections*5,
                                    max_connections+table_cache_size*2);
    set_if_bigger(wanted_files, open_files_limit);

On my debian woody box, the soft limit defaults to 1024. If your system is similar, then mysql might raise the limit to 2048+max_connections. Be aware that if you join your merge table to another merge table (or itself), the number of file descriptors used can grow rapidly. A quick spot check can be performed by running 'ls /proc/<mysql-thread-pid>/fd | wc -l'.

== Martin
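That spot check generalises to any process; a sketch (Linux /proc only; $$ - this shell - stands in for the mysqld thread's PID, which you would substitute):

```shell
#!/bin/sh
# Count the open file descriptors of a process via /proc.
pid=${1:-$$}
fds=$(ls "/proc/$pid/fd" | wc -l)
echo "process $pid has $fds open file descriptors"
# compare against the soft open-files limit the shell reports:
ulimit -n
```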
re: trouble setting key_buffer_size on a debian slave
Hi,

I'm using the same MySQL version on debian and have no problem setting the key_buffer size. I take it you are setting key_buffer (and not key_buffer_size) in the my.cnf file? eg.

    set-variable = key_buffer=32M

I don't know why it has a different name in the 'show variables' list. Are you saying that running "change master ..." resets the variable?

== Martin
setting maximum threads for MySQL under Linux
Hi,

Is there any better way of setting the thread/process limit on Linux than by hacking the safe_mysqld script? Have I missed a configuration variable somewhere that will tell MySQL to attempt to raise the process limit? I would have thought MySQL would figure it out based on the max_connections value.

== Martin
file-descriptor limits for linux
Hi,

Does anyone know what the story is for file-descriptor limits on Linux? I read in http://www.xenoclast.org/doc/benchmark/HTTP-benchmarking-HOWTO/node7.html that root needs to set /proc/sys/fs/file-max to a high value in order that ulimit -n will work. However, on a Debian Woody box (2.4 kernel), this doesn't seem to be necessary. As root, I can set the value as high as I want (up to about 63000) regardless of the value in /proc/sys/fs/file-max. Also, these are per-process limits. What is the overall machine limit, assuming there is one?

These questions are related to MySQL, SQL, etc. Honest.

== Martin
Re: Joins are slow
On Wed, 2003-01-22 at 03:18, Steve Quezadas wrote:

> [Defendant] Query WITH a join - 8.79 seconds!
>
> EXPLAIN SELECT Defendants.CaseNumber
> FROM Defendants, Cases
> WHERE Cases.CaseNumber = Defendants.CaseNumber
>   AND Filed = '1999-01-01'
>   AND (Defendant LIKE 'owen%' OR Defendant LIKE 'pitts%' OR Defendant LIKE 'general%');
>
> +------------+--------+----------------------+-------------+---------+-----------------------+-------+------------+
> | table      | type   | possible_keys        | key         | key_len | ref                   | rows  | Extra      |
> +------------+--------+----------------------+-------------+---------+-----------------------+-------+------------+
> | Defendants | range  | CaseNumber,Defendant | Defendant   | 30      | NULL                  | 82756 | where used |
> | Cases      | eq_ref | CasesNumber,Filed    | CasesNumber | 30      | Defendants.CaseNumber | 1     | where used |
> +------------+--------+----------------------+-------------+---------+-----------------------+-------+------------+

Would a combined index on CasesNumber and Filed help? eg.

    create index CasesFiled on Cases (CasesNumber, Filed);

The query might then resolve the Filed = '1999-01-01' condition without consulting the table.

== Martin
Re: replication - queries out of order or dropped?
On Tue, 2003-01-14 at 23:55, Jeremy Zawodny wrote:

> What do the relevant sections of the my.cnf files on the master and slave look like?

Hi,

I've had the same issue, but always assumed that replication was meant to ignore "create database". I've looked into it further, and it looks like if you have any binlog-ignore-db directives in the master's my.cnf file then "create database" commands are not routed to the binlog.

== Martin
Re: Year Lists
On Tue, 2002-12-31 at 14:53, Steve Vernon wrote:

> Hiya, I have a database of projects in a company; they all have a start year and an end year. If a project has not ended, it gets an end year of . I have a SQL command that, given a year, works out which projects are running in that year. The company I am doing this for wants basically a drop-down box which says (1950 - 8 Projects) and such like. Now with the current command it would mean 92 or so SQL commands, as the company has records back to 1910. Is there a way to do this in one command? I have searched the manual and I can't work out something similar to a for loop in SQL. Would variables help?

Hi Steve,

You want to use a group by, something like:

    select start_year, count(*)
    from projects
    group by start_year;

== Martin
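If the drop-down has to count projects running in each year (not just starting then), one hedged sketch joins against a small helper table of years; the years(y) table, holding one row per year from 1910 onward, is an assumption, not something from the thread:

```sql
SELECT y.y AS year, COUNT(p.start_year) AS running
FROM years y
LEFT JOIN projects p
  ON p.start_year <= y.y AND p.end_year >= y.y
GROUP BY y.y;
```

The LEFT JOIN keeps years with zero running projects in the result, which a drop-down usually wants.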
Re: take one database offline
On Wed, 2002-12-04 at 22:32, Richardson, David E (MVC Corporation) wrote:

> On a single Linux box w/ mysql 3.23.52 I have mysqld running and there are about 20 databases live in the environment. I want to take one of the databases offline but leave mysqld running with the other db's, without interrupting service. I want mysqld to gracefully register that the database is offline and not require a restart. I have the luxury of time if that's helpful to the solution. How do I disable a database in a running server and leave the rest of the db's in production?

A nasty hack for unix-flavoured OSes is:

* place a write lock on all tables in the database you want to disable (eg. lock table one write, two write, three write, ...)
* flush tables
* as root, chdir to the mysql datadir (eg. cd /var/lib/mysql)
* create a database to hide your database in (eg. mkdir __safe__)
* move your database (eg. mv actual_database __safe__)
* in mysql, unlock the tables (eg. unlock tables)

Moving a database into a sub-directory makes the database inaccessible. Locking the tables makes you wait for all users to stop using the tables, and prevents anyone else opening one. Flushing the tables releases any filehandles MySQL has cached on any of the affected tables.

== Martin
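The filesystem half of that hack, scripted. The lock-and-flush half must be done first, from a mysql session that stays open until after the move; the datadir path here is an assumption:

```shell
#!/bin/sh
# Hide a database directory so mysqld can no longer open it.
# Run as root AFTER "lock tables ... write; flush tables" is in effect.
hide_database() {
    datadir=$1; db=$2
    mkdir -p "$datadir/__safe__" && mv "$datadir/$db" "$datadir/__safe__/"
}

# example invocation (datadir path is an assumption):
# hide_database /var/lib/mysql actual_database
```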
Re: Recovery with binary logs.
Hi Manuel,

On Tue, 2002-11-26 at 21:38, Manuel Villasante wrote:

> Hi, I have a few questions regarding recovery of a database using binary logs.
>
> 1) If you have a set of binary logs in your directory, mylog-bin.001 to mylog-bin.nmp, is there an easy way to find out which logs you need to run since the last backup? In other words, when, according to the instructions, after replacing the database files with the backup ones, you run the command:
>
>     mysqlbinlog mylog-bin.[0-9]* | mysql
>
> does it know automatically which set to include, so as not to incorporate logs that are too old? Or do we have to perform a selection manually? If so, is there a way to figure out the subset easily if one has not been watching it?

At the time of backup, you need to record the master position of the server you are backing up - or perform "reset master", but this might threaten your recovery if you have a failure during the current backup process and have to roll forward from your previous dump (requiring the binary logs that "reset master" has just deleted). If you record the master position (file name, offset) during your dump, you need to ensure all tables involved in the dump are locked. mysqlhotcopy can do all this for you; see the --record_log_pos option. Unfortunately, mysqlhotcopy only works on Unix-like OSes, so you will need to roll your own if your OS is not supported.

> 2) If a loss of data has been caused by an unwanted statement like DROP DATABASE... or DROP TABLE VeryImportantOne, how can one delete that statement from the bin-log before using it for recovery, so as not to repeat the mistake?
You could write the output of mysqlbinlog to a file, edit the file, and then pipe the file into the mysql monitor:

    mysqlbinlog mylog-bin.[0-9]* > file.sql
    edit file.sql
    mysql < file.sql

good luck, Martin
Re: listing a count of unique ips by day
On Mon, 2002-11-18 at 17:01, OYNot wrote: [snip] //The following was supposed to create a count, by day, of the unique ip addresses.

You can use date_format() to convert timestamps and dates into whatever format you want. I think you only need a single query, like:
select ip, date_format( dateIn, '%d-%m-%Y' ) day, count(*) from log group by day, ip;
== Martin
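The same idea can be tried out against an in-memory SQLite database (a hypothetical `log` table; SQLite's strftime() stands in for MySQL's date_format() here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table log (ip text, dateIn text)")
con.executemany("insert into log values (?, ?)", [
    ("10.0.0.1", "2002-11-18 09:00:00"),
    ("10.0.0.1", "2002-11-18 10:30:00"),
    ("10.0.0.2", "2002-11-18 11:00:00"),
    ("10.0.0.1", "2002-11-19 08:00:00"),
])

# One row per (day, ip), with the number of hits from that ip that day.
rows = con.execute(
    "select ip, strftime('%d-%m-%Y', dateIn) day, count(*) "
    "from log group by day, ip order by day, ip").fetchall()
for ip, day, hits in rows:
    print(ip, day, hits)
# 10.0.0.1 18-11-2002 2
# 10.0.0.2 18-11-2002 1
# 10.0.0.1 19-11-2002 1
```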
Re: Are there ANY terminal-based frontends for Linux?
On Mon, 2002-10-14 at 03:55, Chip Rose wrote: Are there ANY MySQL terminal-based frontends (for Linux) that will allow inputting data via forms, queries, reports? There are a lot of administration tools - that's not what I want. How do I set something up for inputting and simple queries that takes advantage of the *relational* database? The things I've seen all look like flat-file stuff.

You could look at GNUe at http://www.gnuenterprise.org. I think their forms designer can generate forms for win32, GTK and Curses.
== Martin
Re: Performance when using two BETWEEN statements in the WHERE clause multiple times
On Fri, 2002-10-11 at 16:08, Chris Stoughton wrote: I sent a similar question a few days ago. I don't think there was a response. If there was, sorry that I missed it. I have worked around the issue, but would like to know whether there is something I can do to improve the original query. I have a table with two spatial indices -- ra and decl, for right ascension and declination; think of them as x,y coordinates. In order to match objects in one table to a second table, I choose a set of objects in the first table, find the limits of ra,decl, and then query the second table based on these limits. I then do the matching in a separate program, between these two lists. For a specific example of one pair of queries:
select ra,decl from firstTable where fieldId=1
(based on the results of this query, calculate raMin, raMax, declMin, and declMax -- 1.1, 1.2, 3.4, 3.5 in this example)
select ra,decl,a,b,c,d from secondTable where ra between 1.1 and 1.2 and decl between 3.4 and 3.5

Hi, You could try a combined index on ra and decl, eg.
create index c1 on secondTable (ra, decl);
What is probably happening now is that only one of your indexes is being used in the query (either ra or decl), and then every record falling in that range is read to find those matching the second clause. The amount of work involved in that depends on the distribution of the values. Try running explain on your query and see which indexes are being used, eg:
explain select ra,decl,a,b,c,d from secondTable where ra between 1.2 and 1.3 and decl between 4.0 and 4.1
Then try adding the combined index and see if explain has changed its mind.
== Martin
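The combined-index suggestion can be experimented with in an in-memory SQLite database (hypothetical sample rows; SQLite's EXPLAIN QUERY PLAN stands in for MySQL's EXPLAIN, and its plan output varies by version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table secondTable (ra real, decl real)")
con.executemany("insert into secondTable values (?, ?)", [
    (1.15, 3.45),   # inside both ranges
    (1.15, 9.00),   # matches the ra range only
    (9.00, 3.45),   # matches the decl range only
])

# Combined index over both range columns:
con.execute("create index c1 on secondTable (ra, decl)")

query = ("select ra, decl from secondTable "
         "where ra between 1.1 and 1.2 and decl between 3.4 and 3.5")
rows = con.execute(query).fetchall()
print(rows)  # -> [(1.15, 3.45)]

# Inspect the plan to see whether the index is used (version-dependent;
# recent SQLite reports a search using index c1):
for row in con.execute("explain query plan " + query):
    print(row)
```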
RE: HA of MySQL
On Mon, 2002-10-07 at 22:02, Jeremy Tinley wrote: The problem isn't so much with the failover. It's with data integrity. Binlogs control replication. You can place a failover master in between the master and slaves. In the event of a master failure, you eliminate the loss of writes by directing them to the failover. If you lose your failover, the binlogs can be completely different, thus pointing the slaves to the master is useless. The binlog position is the real problem. Since binlogs are stored with their byte position as the indicator instead of a unique value passed on from the master, there's no easy way of finding the position you were just at. Is it possible to write two binlogs? One to the local disk, one to a network device?

If you set up your failover with log-slave-updates, then it will maintain a fairly up-to-date copy of your binlogs. There is a risk that if one update takes a very long time, the master might get a long way ahead while the slave replicates that query, but generally, as long as updates are short, this should provide a reasonably secure remote copy of the contents of your binlog.

The next problem is that there is no way to synchronise the log position on the failover with that on the master. What you can do, however, is periodically stop replication on the failover and snapshot the two log positions, eg:
slave stop;
show slave status; # for Pm
show master status; # for Pf
slave start;
The slave and master positions give you a reasonable chance of doing the arithmetic required to resync all your slaves to the failover if the master fails. Eg:
Psf = Psm + (Pf - Pm)
where Psf is the position of the slave relative to the failover, Psm is the position of the slave relative to the master, Pm is a recorded master position, and Pf is the corresponding position in the failover's own binlog. The arithmetic gets a bit hairy around rotations of the binlogs, compounded by mysql sometimes inserting extra markers in the binlog when rotations occur, and sometimes not.
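The arithmetic described above can be sketched as follows (a minimal illustration with made-up offsets; it deliberately ignores the binlog-rotation complications, which the formula itself does not handle):

```python
def slave_pos_relative_to_failover(psm, pm, pf):
    """Estimate a slave's position relative to the failover's binlog.

    psm -- the slave's current position relative to the master (Psm)
    pm  -- a master position recorded at a snapshot on the failover (Pm)
    pf  -- the failover's own master position at the same snapshot (Pf)

    All positions are byte offsets within a single binlog file; the
    arithmetic breaks down across log rotations, so snapshots should
    be taken often.
    """
    return psm + (pf - pm)

# Example: at the snapshot, the failover had replicated up to master
# offset 1000 while its own binlog stood at offset 600.  A slave that
# has reached master offset 1500 corresponds to failover offset 1100.
print(slave_pos_relative_to_failover(1500, 1000, 600))  # -> 1100
```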
So, the more often snapshots are made of the positions, the better. Anyhow, there's another 2 cents...
== Martin
Error 1053 stalling replication
Mysql, sql, etc.

Hi, I am using a slave to perform backups using mysqlhotcopy. This locks all tables in my database for 10 minutes. Sometimes replication is aborted during the period that the tables are locked. The error message in the slave's log is:
ERROR: 1053 Server shutdown in progress
020910 21:00:02 Slave: error running query 'insert into ...'
020910 21:00:02 Error running query, slave aborted. Fix the problem, and re-start the slave thread with mysqladmin start-slave. We stopped at log 'log-bin.114' position 75216512
The slave is then easily restarted using slave start. Does anyone know why this is happening? I imagine it is something to do with the tables being locked - but I don't understand why it thinks there is a server shutdown, or why this isn't easily repeatable. My slave is 3.23.51 (linux 2.4), and the master is 3.23.33 (linux 2.2).
regards, Martin
Re: Query problem in Perl
Hi, You need to use two separate database and statement handles - you can only have one active query per handle, eg.

$dbh1 = DBI->connect(...);
$dbh2 = DBI->connect(...);
$sth1 = $dbh1->prepare( ... );
$sth1->execute(...);
while ( $sth1->fetch() ) {
    $sth2 = $dbh2->prepare( ... );
    $sth2->execute(...);
    while ( $sth2->fetch() ) {
    }
}

On Sat, 2002-08-31 at 20:23, Almar van Pel wrote: Hello, I was trying to create a simple perl program, where my domains were listed followed by their DNS records. But it loops once, and then ends with the error: DBD::mysql::st fetch failed: fetch() without execute() at test.cgi line 61. I thought this was the easiest way to do so. But no... Does anyone have any experience with these kind of sub-statements?

$dbh = DBI->connect("DBI:mysql:database=$db_database;host=$db_host", $db_user, $db_pw) || db_error("Databaseverbinding niet gemaakt: $DBI::errstr");
$sql = "select domain from bind_dns_header";
$sth = $dbh->prepare($sql) || error("Kan het statement niet voorbereiden: $dbh->errstr");
$sth->execute || error("Fout bij het communiceren met de database: $DBI::errstr");
$sth->bind_columns(\$domain);
while ($sth->fetch()) {    # line 61
    print "$domain with the following records<br>\n";
    $sql2 = "select dnsrecord_id from bind_dns_item where domain = '$domain'";
    $sth = $dbh->prepare($sql2) || error("Kan het statement niet voorbereiden: $dbh->errstr");
    $sth->execute || error("Fout bij het communiceren met de database: $DBI::errstr");
    $sth->bind_columns(\$dnsrecord_id);
    while ($sth->fetch()) {
        print "Record: $dnsrecord_id \n";
    }
}
$sth->finish();

Regards, Almar
Re: Query problem in Perl
On Mon, 2002-09-02 at 11:37, Harald Fuchs wrote: In article 1030961610.8175.5.camel@pascal, Martin Waite [EMAIL PROTECTED] writes: Hi, You need to use two separate database and statement handles - you can only have one active query per handle, eg.
$dbh1 = DBI->connect(...);
$dbh2 = DBI->connect(...);
$sth1 = $dbh1->prepare( ... );
$sth1->execute(...);
while ( $sth1->fetch() ) {
    $sth2 = $dbh2->prepare( ... );
    $sth2->execute(...);
    while ( $sth2->fetch() ) {
    }
}
Nope. You can have multiple active statement handles per database handle.

Harald is correct (thanks), but you still need a separate statement handle for the query inside the loop:
$dbh1 = DBI->connect(...);
$sth1 = $dbh1->prepare( ... );
$sth1->execute(...);
while ( $sth1->fetch() ) {
    $sth2 = $dbh1->prepare( ... );
    $sth2->execute(...);
    while ( $sth2->fetch() ) {
    }
}
strange behaviour of aggregate functions in if() clauses
MySQL v3.23.49, linux

Hi, We came across something strange here.
select count(*), user from mysql.user;
is illegal (ERROR 1140: Mixing of GROUP columns ... if there is no GROUP BY clause). But
select if( count(*) and user, 1, 0 ) from mysql.user;
actually works. Is this a bug or a feature?
== Martin
Large File support on linux 2.2
Mysql, SQL, etc.

Hi, Does anyone have experience of compiling in large file support on Linux? I just want to know the scope of the change: do I have to recompile the kernel and every single library and application?
thanks, Martin
Re: Problem with subqueries
On Wed, 2002-07-31 at 12:03, Luis Rodrigues wrote: hi, I have some sql code that I need to use with mysql. It uses nested queries, and since mysql doesn't implement them I would like to know how to do this.
SELECT * FROM apartamentos WHERE vendido=0 AND Concelho=$row[ConcelhoP] AND Localidade LIKE '$row[LocalidadeP]' ... WHERE Referencia NOT IN ( SELECT Referencia FROM historico_clientes WHERE NumCliente=$id )
snip

It will require something like:
SELECT a.* FROM apartamentos a LEFT JOIN historico_clientes h ON a.Referencia = h.Referencia AND h.NumCliente=$id WHERE h.Referencia IS NULL AND vendido=0 AND Concelho=$row[ConcelhoP] AND Localidade LIKE '$row[LocalidadeP]' ...
== Martin
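The anti-join rewrite can be checked against the subquery form with a small in-memory SQLite example (hypothetical minimal tables, keeping only the columns involved in the join; SQLite, unlike MySQL 3.x, supports both forms):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table apartamentos (Referencia integer, vendido integer);
create table historico_clientes (Referencia integer, NumCliente integer);
insert into apartamentos values (1, 0), (2, 0), (3, 0);
insert into historico_clientes values (2, 42), (3, 99);
""")

# The subquery form: apartments client 42 has not already seen.
sub = con.execute("""
  select Referencia from apartamentos
  where vendido = 0
    and Referencia not in
        (select Referencia from historico_clientes where NumCliente = 42)
  order by Referencia
""").fetchall()

# The equivalent LEFT JOIN rewrite: join on the client's history and
# keep only the rows with no match (h.Referencia IS NULL).
join = con.execute("""
  select a.Referencia
  from apartamentos a
  left join historico_clientes h
    on a.Referencia = h.Referencia and h.NumCliente = 42
  where h.Referencia is null and a.vendido = 0
  order by a.Referencia
""").fetchall()

print(sub, join)  # both [(1,), (3,)]
```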
replication privleges issue
Hi, SQL, MySQL.

I want my master to contain only recent data, but my slave to hold a long-term archive. My plan is to use a merge table on the master to provide an alias for the underlying real table. I then perform updates and inserts on the real table, and deletes on the merge table. Both inserts and deletes are replicated to the slave, but on the slave the merge table is defined as an empty union, so that the deletes effectively become no-ops.

My main worry here is that at some point the merge table on the master might get redefined or rebuilt - and I don't want this replicated to the slave, otherwise the deletes will actually start taking effect there.

Now the question: how do I set up privileges on the slave so that a drop or alter table performed on this one table on the master is disallowed from being replicated on the slave?
regards, Martin
Re: mysqlhotcopy
On Tue, 2002-03-26 at 07:24, Hans Kind wrote: Dear Martin, I tried that option, but it returns an errno 24:
../bin/mysqlhotcopy -u username -p password --flushlog --allowold --regexp='.*' /d1/terminal-a/mysql/
'mail' is an empty database
'opt' is an empty database
'oveas' is an empty database
DBD::mysql::db do failed: Can't find file: './alternat/nuke_bannerfinish.frm' (errno: 24) at ../bin/mysqlhotcopy line 434.
It creates the directories, but doesn't back up any of the tables. In the archives I found similar reports, but it was mentioned that the problem was fixed! Not sure how this errno 24 (Error code 24: Too many open files) relates to this problem. [snip]

Maybe there are too many open files. How many tables are there in the set of databases you're trying to back up? I think the error is being thrown when attempting to lock all the tables before the copy. I don't know how mysql locks tables, but if it uses flock (or equivalent), that may open one or more files per table. If you have several thousand tables, that may exceed your OS limits. What gets displayed when you run mysqlhotcopy with the '--debug' (and maybe the '--dryrun') flags?
regards, Martin
Re: mysqlhotcopy
On Sat, 2002-03-23 at 15:19, Hans Kind wrote: Hi, What is the correct syntax to use to back up all databases in the /usr/local/mysql/var directory? We tried a number of different options, but at best we get only 1 database backed up completely. Using a wildcard, *, only creates the directory, but doesn't copy the tables. [snip]

Hi, mysqlhotcopy has a regexp argument which makes it back up all databases matching the regexp. The regexp uses perl regexp syntax, not shell wildcards. So:
mysqlhotcopy --regexp='.*'
backs up all databases, whereas:
mysqlhotcopy --regexp='*'
isn't a correct regexp.
regards, Martin
Re: mysqlhotcopy in 4.0.1
On Mon, 2002-02-11 at 02:04, Jeremy Zawodny wrote: On Mon, Feb 11, 2002 at 03:04:47PM +1300, Ian Collins wrote: In mysqlhotcopy, function copy_files, there is a line:
my @non_raid = grep { $_ !~ m:\d\d/: } @$files;
(don't you just love perl?). If the database name in question is, for example, testone_750, a perfectly legal database name (??), then no files get copied! and the mysqlhotcopy fails. That's bad. I presume (again ??) that the above line is trying to strip out all ../ matches. Actually it's trying to strip out every database name which contains two digits in a row. That's clearly not ideal. We need something more specific there. [snip]

What it should be doing is identifying every file which isn't in a two-digit sub-directory inside a database directory. It should read:
my @non_raid = grep { ! m:/\d{2}/[^/]+$: } @$files;
I've sent a patch to internals.
== Martin
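The difference between the two patterns can be illustrated in Python (hypothetical file paths; mysqlhotcopy itself is Perl, but the two regular expressions translate directly):

```python
import re

files = [
    "testone_750/mytable.MYD",   # database name happens to contain two digits
    "mydb/01/mytable.MYD",       # table stored in a two-digit RAID subdirectory
]

# Broken pattern: excludes any path containing two digits followed by
# a slash, so the testone_750 file is wrongly treated as a RAID file.
broken = [f for f in files if not re.search(r"\d\d/", f)]

# Fixed pattern: only excludes files whose final component sits in a
# two-digit subdirectory.
fixed = [f for f in files if not re.search(r"/\d{2}/[^/]+$", f)]

print(broken)  # -> [] : nothing would get copied
print(fixed)   # -> ['testone_750/mytable.MYD']
```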
Re: Multiples instances of MySQL
On Mon, 2001-12-31 at 10:08, Emmanuel van der Meulen wrote: Hello all, I'm in a spot. I'm managing to successfully run several instances of MySQL, say on port=3306 and port=3308. When I use shutdown, the instance which started first shuts down. To shut down the instance started on port 3306, I use:
mysqladmin -P 3306 -u root -p2000 shutdown
And to shut down the instance started on port 3308, I use:
mysqladmin -P 3308 -u root -p2000 shutdown
However, the instance which started first is always the instance which shuts down, irrespective of the port number I provide.

Try using the -S option to select the instance via the Unix domain socket rather than the port number (assuming this is a Unix installation).

Could someone please advise and/or point me? Thank you provisionally for any assistance. Kind regards, Emmanuel
Re: Multiples instances of MySQL
On Mon, 2001-12-31 at 10:50, Emmanuel van der Meulen wrote: As suggested, I used -S instead of -P, but still the first instance shuts down. I'm running on Win2K Pro. Is there a different way for Windows? Please advise any further pointers?

Ok. Try to explicitly set the host on the command line, eg:
mysqladmin -h 127.0.0.1 -P 2000 .
Note that you should use the ip address of your host or its network name. Using 'localhost' won't do what you want.

Kind regards and all the best for 2002! Emmanuel
RE: Multiples instances of MySQL
On Mon, 2001-12-31 at 14:01, Emmanuel van der Meulen wrote: Hello Martin, [snip] Ok. Try to explicitly set the host on the command line, eg: mysqladmin -h 127.0.0.1 -P 2000 . Note that you should use the ip address of your host or its network name. Using 'localhost' won't do what you want. I include commands of two instances which I cannot shut down correctly; the first instance running shuts down first no matter which shutdown command is activated. [snip]
Instance2 - shutdown:
mysqladmin -P 3308 -h 127.0.0.1 --socket=e:/docs/p3308.sock shutdown
Can you maybe see something causing the shutdown anomaly?

I have a Linux box hosting 2 MySQL servers on ports 3306 and 3307 (sockets mysql.sock and mysql1.sock). I can communicate with each using mysqladmin in this way:
mysqladmin -h 127.0.0.1 -P 3307 variables | grep sock
| socket | /var/run/mysqld/mysqld1.sock |
mysqladmin -h 127.0.0.1 -P 3306 variables | grep sock
| socket | /var/run/mysqld/mysqld.sock |
Maybe you can try removing the --socket option? This might be some special problem with the windows version.

Kind regards, Emmanuel
feature request: privileged connection quotas
Hi, I'd like to be able to reserve some connections to the MySQL server for emergencies - eg. someone issues a slow query and locks out several hundred updates from several hundred other connections. Currently in this sort of situation, it is possible to completely run out of connections, and then the only solution is to restart mysqld or just wait for the slow query to end.

Would it be possible to add a feature to MySQL so that a configurable number of connections are reserved for a configurable list of users? Then an administrator would always be able to connect and clear any blockages.
regards, Martin
hanging slaves and slave_read_timeout
Hi, We're replicating databases across a firewall with a 300-second reflective ACL timeout set. This causes occasional problems with replication: if the master is idle for more than 300 seconds, the connections are closed by the firewall in such a way that the slaves never notice. That is, show slave status reports that the slave is running when in fact it has stalled.

I see that in the manual there is a slave_read_timeout option which looks like it might fix this problem (if I upgrade my servers) - but I'm not sure I'm reading the document properly. Is it the case that if slave_read_timeout is set to 150 seconds, and the connection goes quiet for 150 seconds, then the slave will close and reopen the connection? (Which is what I want.)
regards, Martin