Re: mysql V 8.0.12 and mysqldump
> Halaasz Saandor hat am 9. Februar 2019 um 10:01 geschrieben:
>
>> 2019/02/08 10:32 ... Walter Harms:
>> Hello list,
>> I ran into an unexpected problem with mysqldump:
>>
>>   mysqldump --version
>>   mysqldump  Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)
>>
>> When I try it, it results in:
>>
>>   mysqldump: Error: 'Lost connection to MySQL server during query' when trying to dump tablespaces
>>   mysqldump: Couldn't execute 'SHOW VARIABLES LIKE 'ndbinfo\_version'': MySQL server has gone away (2006)
>
> I regularly have this problem with the command-line client (mysql.exe), and
> when I asked, R H gave this answer (with the command-line client it is much
> less important):
>
> -------- Forwarded Message --------

I found a solution: set the following for mysqld in my.cnf:

  wait_timeout = 31536000

This sets the timeout very high, and mysqldump can now complete the query.
Personally I would say this is not a proper solution, as it does not solve
the real problem of an SQL statement taking 15 minutes to complete.

re,
 wh

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
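The same fix can also be tried at runtime, without editing my.cnf and
restarting. A sketch; the restore values are the usual 8.0 defaults, and
whether wait_timeout or net_read_timeout is the timeout that actually fires
here is an assumption:

```
-- raise the server-side timeouts for the duration of the dump
SET GLOBAL wait_timeout     = 31536000;
SET GLOBAL net_read_timeout = 600;
-- ... run mysqldump (a new connection picks up the new values) ...
-- then restore the defaults
SET GLOBAL wait_timeout     = 28800;
SET GLOBAL net_read_timeout = 30;
```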
Re: mysql V 8.0.12 and mysqldump
> Halaasz Saandor hat am 9. Februar 2019 um 10:01 geschrieben:
>
>> 2019/02/08 10:32 ... Walter Harms:
>> Hello list,
>> I ran into an unexpected problem with mysqldump:
>>
>>   mysqldump --version
>>   mysqldump  Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)
>>
>> When I try it, it results in:
>>
>>   mysqldump: Error: 'Lost connection to MySQL server during query' when trying to dump tablespaces
>>   mysqldump: Couldn't execute 'SHOW VARIABLES LIKE 'ndbinfo\_version'': MySQL server has gone away (2006)
>
> I regularly have this problem with the command-line client (mysql.exe), and
> when I asked, R H gave this answer (with the command-line client it is much
> less important):
>
> -------- Forwarded Message --------
> Subject: Re: ERROR 2013 (HY000): Lost connection to MySQL server during query
> Date: Mon, 06 Jan 2014 17:07:45 +0100
> From: Reindl Harald
>
>> Am 06.01.2014 15:36, schrieb h...@tbbs.net:
>> Now that I installed 5.6.14 on our Vista machine, when using "mysql" I
>> often see that error message, which under 5.5.8 I never saw. What is
>> going on?
>
> What about looking in the server's logfiles?
> Most likely "max_allowed_packet" is laughably low.

I do not think so; it is obvious that the SQL statement I posted is
ridiculously slow, causing mysqldump to lose the connection. What annoys me
most is the fact that the result is empty. So I could remove it from the
code, but I have no idea about the consequences. Nevertheless I will try
max_allowed_packet and see what happens.

re,
 wh
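For completeness, a sketch of checking and raising the setting Reindl points
at; the 64M value is illustrative, and note that mysqldump carries its own
client-side copy of the limit:

```
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- new connections pick this up
-- client side:  mysqldump --max-allowed-packet=64M ...
```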
mysql V 8.0.12 and mysqldump
Hello list,
I ran into an unexpected problem with mysqldump:

  mysqldump --version
  mysqldump  Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)

When I try it, it results in:

  mysqldump: Error: 'Lost connection to MySQL server during query' when trying to dump tablespaces
  mysqldump: Couldn't execute 'SHOW VARIABLES LIKE 'ndbinfo\_version'': MySQL server has gone away (2006)

It seems it gets stuck in this query:

  EXPLAIN SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA
  FROM INFORMATION_SCHEMA.FILES
  WHERE FILE_TYPE = 'UNDO LOG'
    AND FILE_NAME IS NOT NULL
    AND LOGFILE_GROUP_NAME IS NOT NULL
    AND LOGFILE_GROUP_NAME IN (
      SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES
      WHERE FILE_TYPE = 'DATAFILE'
        AND TABLESPACE_NAME IN (
          SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS
          WHERE TABLE_SCHEMA IN ('kpc')))
  GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE, TOTAL_EXTENTS, INITIAL_SIZE
  ORDER BY LOGFILE_GROUP_NAME;

The plan (columns: id, select_type, table, partitions, type, possible_keys, key, key_len, ref, rows, filtered, Extra):

  | 1 | SIMPLE | cat         | NULL | index  | PRIMARY             | name                | 194  | NULL                         |     1 | 100.00 | Using index; Using temporary; Using filesort; Start temporary |
  | 1 | SIMPLE | sch         | NULL | eq_ref | PRIMARY,catalog_id  | catalog_id          | 202  | mysql.cat.id,const           |     1 | 100.00 | Using index |
  | 1 | SIMPLE | tbl         | NULL | ref    | schema_id           | schema_id           | 8    | mysql.sch.id                 |    78 | 100.00 | Using where |
  | 1 | SIMPLE | part        | NULL | ref    | table_id,table_id_2 | table_id            | 8    | mysql.tbl.id                 |   597 |  10.00 | Using where |
  | 1 | SIMPLE | part_ts     | NULL | eq_ref | PRIMARY             | PRIMARY             | 8    | mysql.part.tablespace_id     |     1 | 100.00 | NULL |
  | 1 | SIMPLE | ts          | NULL | ALL    | PRIMARY             | NULL                | NULL | NULL                         | 12605 | 100.00 | Using join buffer (Block Nested Loop) |
  | 1 | SIMPLE | tsf         | NULL | ref    | tablespace_id       | tablespace_id       | 8    | mysql.ts.id                  |     1 | 100.00 | Using where |
  | 1 | SIMPLE | sub_part    | NULL | ref    | parent_partition_id | parent_partition_id | 9    | mysql.part.id                | 13152 | 100.00 | NULL |
  | 1 | SIMPLE | sub_part_ts | NULL | eq_ref | PRIMARY             | PRIMARY             | 8    | mysql.sub_part.tablespace_id |     1 | 100.00 | Using where |
  | 1 | SIMPLE | ts          | NULL | eq_ref | PRIMARY,name        | name                | 779  | func                         |     1 | 100.00 | Using where |
  | 1 | SIMPLE | tsf         | NULL | ref    | tablespace_id       | tablespace_id       | 8    | mysql.ts.id                  |     1 | 100.00 | Using where; End temporary |

The problem seems to happen only when I dump the whole database; single
tables are OK.

re,
 wh
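Two possible workarounds, sketched as command lines. The --no-tablespaces
flag, which skips this query entirely, only exists in later 8.0 releases
(8.0.21 onward), so it would require upgrading the client; the table list in
the second form is illustrative:

```
mysqldump --no-tablespaces kpc > kpc.sql    # 8.0.21+ client
mysqldump kpc table1 table2 > kpc.sql       # per-table dump, which reportedly works
```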
Re: signal handling in mysql cli
Am 02.12.2014 18:31, schrieb Johan De Meersman:

> ----- Original Message -----
> From: wharms wha...@bfs.de
> Subject: signal handling in mysql cli
>
>> When I use CTRL-C to break a query, that works fine in interactive mode.
>> But when I use the non-interactive mode, it looks like the query is
>> aborted, but SHOW FULL PROCESSLIST; shows otherwise; and that is true.
>
> This may sound silly, but you're basically saying that you can't interact
> with it while in non-interactive mode...
>
> My understanding may be wrong, but this is how it works in my mind:
>
> * When in the client, the client intercepts the Ctrl-C and interprets it
>   as "kill this query" on the server.
> * When in non-interactive mode, the client is not actually reading your
>   keypresses. Thus, the Ctrl-C gets intercepted by the *shell*, which does
>   exactly what you ask by killing the MySQL client process.
>
> Now, if the mysql client does not explicitly intercept kill signals
> (which, honestly, most programs have no reason to), that means it doesn't
> even get a chance to send the "cancel that query" order; it just dies.
>
> A lot of debate could be had on whether or not the client needs to
> intercept and send a cancel, but personally I'm leaning towards no, it
> doesn't: if you don't have a transaction open, or don't even have a
> transactional engine (which used to be the default!), cancelling could
> leave your database in an inconsistent state. Much better, then, to allow
> whatever query you sent to continue, on the off chance that it does no
> damage, or only does things that you can reverse afterwards.
>
> If you really want to kill that query, you always have the option of
> logging into the server and shooting it yourself.

In Unix, pressing Ctrl-C should be the same as sending SIGINT. You can see
with stty -a which key is mapped to which signal. Nevertheless the question
remains: how can the user stop a query that was started from a script?

re,
 wh
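One answer to that last question, sketched as a wrapper script: run the
client in the background, and on Ctrl-C look the query up in the processlist
and kill it server-side. Everything here (the query, the LIKE pattern used
to find it) is illustrative and untested against a live server:

```
#!/bin/sh
# run the long statement in the background
mysql -BAN -e "select sleep(100)" &
client_pid=$!

# on Ctrl-C: find the server-side query, kill it, then reap the client
trap 'id=$(mysql -BN -e "SELECT id FROM information_schema.processlist \
           WHERE info LIKE '\''select sleep%'\''"); \
      [ -n "$id" ] && mysql -e "KILL QUERY $id"; \
      kill $client_pid 2>/dev/null' INT

wait $client_pid
```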
signal handling in mysql cli
hi list,
When I use CTRL-C to break a query, that works fine in interactive mode:

  mysql> select sleep(10) ;
  ^CCtrl-C -- sending "KILL QUERY 24289" to server ...
  Ctrl-C -- query aborted.
  +-----------+
  | sleep(10) |
  +-----------+
  |         1 |
  +-----------+
  1 row in set (0.86 sec)

But when I use the non-interactive mode:

  timeout 5 mysql -BAN -e "select now(); select sleep(100); select now()"

it looks like the query is aborted, but SHOW FULL PROCESSLIST; shows
otherwise, and that is true, as a list of long-running queries showed.

Is there a way to make it behave like the interactive version? Now it is a
bit confusing for everyone.

re,
 wh
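The shell-level half of this behaviour can be demonstrated without a server:
a script can trap SIGINT itself and run cleanup (in real use, that cleanup
would issue the KILL QUERY) instead of simply dying. A minimal, server-free
sketch:

```shell
# the background child stands in for the mysql client process
cleanup_ran=0
trap 'cleanup_ran=1' INT    # real cleanup would be: mysql -e "KILL QUERY <id>"
sleep 0.1 &
kill -INT $$                # simulate Ctrl-C delivered to the script
wait                        # trap has run; the child is reaped normally
echo "cleanup_ran=$cleanup_ran"
```

Without the trap, the shell would die on the signal and the server-side
query would keep running, which is exactly what the thread observes.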
Re: NOW() is stuck...
hi,
does the value change at all, like below?

  mysql> show global variables like 'timestamp';
  +---------------+------------+
  | Variable_name | Value      |
  +---------------+------------+
  | timestamp     | 1372404355 |
  +---------------+------------+
  1 row in set (0.00 sec)

  mysql> show global variables like 'timestamp';
  +---------------+------------+
  | Variable_name | Value      |
  +---------------+------------+
  | timestamp     | 1372404371 |
  +---------------+------------+
  1 row in set (0.00 sec)

re,
 wh

Am 27.06.2013 20:19, schrieb Andy Wallace:

> Benjamin -
> Unfortunately:
>
>   mysql> show global variables like 'timestamp';
>   +---------------+------------+
>   | Variable_name | Value      |
>   +---------------+------------+
>   | timestamp     | 1372238834 |
>   +---------------+------------+
>   1 row in set (0.00 sec)
>
> And:
>
>   mysql> set global timestamp = 0;
>   ERROR 1228 (HY000): Variable 'timestamp' is a SESSION variable and can't
>   be used with SET GLOBAL
>
> This does indeed persist across sessions. Any command-line connection I
> make to the database shows the bad value for NOW(). I also tweaked the
> application code to include NOW() in an existing query, and the value
> returned to my PHP code is also the bad value.
>
> Thanks for looking,
> andy
>
> On 6/27/13 11:10 AM, Stillman, Benjamin wrote:
>> It persists across sessions? Does this return anything:
>>   show global variables like 'timestamp';
>> Hopefully it returns: Empty set (0.00 sec)
>>
>> I vaguely remember reading about a bug in 5.1.4x with something to do
>> with a global timestamp. I thought it only showed one though, and that
>> you couldn't set it. If the above returned a timestamp and not an empty
>> set, try:
>>   set global timestamp = 0;
>> That should return something like:
>>   ERROR 1228 (HY000): Variable 'timestamp' is a SESSION variable and
>>   can't be used with SET GLOBAL
>> But if it returns:
>>   Query OK, 0 rows affected (0.00 sec)
>> and then your queries return correct timestamps, you've found a bug. I'd
>> hope that it would fail, but the only thing I can think of is if it's
>> being set as a global variable.
>>
>> If this does fix your problem, and if you're using replication, you may
>> have an issue with your replicated data. Replication uses timestamp
>> extensively.
>>
>> On 6/27/13 1:44 PM, Andy Wallace awall...@ihouseweb.com wrote:
>>> But the question is how. I have nothing in the code that does it, or
>>> this would have been true for months instead of just the last 24 hours.
>>> In addition, this is currently set globally: no matter what connection
>>> to the database, it all comes up with this value. Which means that all
>>> my time-based queries no longer work correctly. Does your message
>>> suggest that setting it to 0 might clear the problem?
>>>
>>> On 6/27/13 10:31 AM, Stillman, Benjamin wrote:
>>>> Timestamp is a session variable, so it must have been set to something
>>>> other than 0 (epoch 1372228034 is the date you're showing) in your
>>>> current session.
>>>>
>>>>   mysql> set timestamp = 1372228034;
>>>>   Query OK, 0 rows affected (0.00 sec)
>>>>   mysql> select now(), sysdate();
>>>>   +---------------------+---------------------+
>>>>   | now()               | sysdate()           |
>>>>   +---------------------+---------------------+
>>>>   | 2013-06-26 02:27:14 | 2013-06-27 13:20:48 |
>>>>   +---------------------+---------------------+
>>>>   1 row in set (0.00 sec)
>>>>
>>>>   mysql> set timestamp = 0;
>>>>   Query OK, 0 rows affected (0.00 sec)
>>>>   mysql> select now(), sysdate();
>>>>   +---------------------+---------------------+
>>>>   | now()               | sysdate()           |
>>>>   +---------------------+---------------------+
>>>>   | 2013-06-27 13:21:34 | 2013-06-27 13:21:34 |
>>>>   +---------------------+---------------------+
>>>>   1 row in set (0.00 sec)
>>>>
>>>> Cliff's notes: set timestamp = 0;
>>>>
>>>> On 6/26/13 6:10 PM, Andy Wallace awall...@ihouseweb.com wrote:
>>>>> We've been having some issues with one of our MySQL servers lately,
>>>>> and currently the dang thing is stuck. For at least the last hour,
>>>>> NOW() is returning the same value:
>>>>>
>>>>>   mysql> select now();
>>>>>   +---------------------+
>>>>>   | now()               |
>>>>>   +---------------------+
>>>>>   | 2013-06-26 02:27:14 |
>>>>>   +---------------------+
>>>>>
>>>>> The system variable timestamp also has that same time value stored in
>>>>> it. How can we kick this loose so that the values are more current
>>>>> with real time? (It is currently 3:08 PM here, despite our MySQL
>>>>> instance thinking it's 2 AM.) The system time on the machine is
>>>>> correct:
>>>>>
>>>>>   $ date
>>>>>   Wed Jun 26 15:08:56 PDT 2013
>>>>>
>>>>> This is MySQL 5.1.46 running on solaris2.10. Any ideas short of
>>>>> restarting the MySQL engine? I'm willing to do that, but would much
>>>>> rather wait and not do it in the middle of the day.
>>>>>
>>>>> Thanks,
>>>>> Andy
>>>>>
>>>>> --
>>>>> Andy Wallace
>>>>> iHOUSEweb, Inc.
>>>>> awall...@ihouseweb.com
>>>>> (866) 645-7700 ext 219
>>>>> --
>>>>> Sometimes it pays to stay in bed on Monday, rather than spending the
>>>>> rest of the week debugging Monday's code. - Christopher Thompson
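A quick self-contained check for the condition Benjamin describes, runnable
in any session (assumption: the frozen value comes from the session
`timestamp` variable, which pins NOW() but never SYSDATE()):

```
SELECT NOW();
DO SLEEP(2);
SELECT NOW();         -- an identical value here means the timestamp is pinned
SET timestamp = 0;    -- per-session fix: NOW() follows the clock again
```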
Re: open files in mysqld 5.1.53
Am 12.06.2013 12:33, schrieb Manuel Arostegui:

> 2013/6/12 walter harms wha...@bfs.de
>
>> Hi list,
>> I am trying to understand the incredible use of file pointers in our
>> MySQL server (5.1.53). Under normal conditions the server reports
>> 10k-15k open file pointers. I run a 'flush tables' every 2h to avoid
>> problems; the number of users/connections is constant. It is an
>> automatic system with a few human users, a happy mix of MyISAM and
>> InnoDB tables, running with no problems on mysqld 5.0. But now I am
>> sometimes hit with an insane increase, hitting the ceiling at 60k. I do
>> not like the idea of increasing the limit further, because I do not
>> understand why this is happening (it seems to happen at random times).
>> I am running out of ideas what may cause the unexpected increase; any
>> idea what to watch?
>
> I had a similar problem some years ago with 5.0.84. It was a database
> with thousands of tables (mostly MyISAM). It turned out to be the
> --myisam-recover option in the Debian init scripts
> (/usr/share/mysql/debian-start.inc.sh). Obviously it's not a good idea to
> remove it, as if the server crashes you will need it.
>
> This used to be our db server after getting started:
>
>   dXX:~# find /proc/`cat /var/run/mysqld/mysqld.pid`/fd -follow -type f 2>/dev/null | wc -l
>   116810
>
> This is what I saw at the time after removing that MyISAM check:
>
>   dXX:~# find /proc/`cat /var/run/mysqld/mysqld.pid`/fd -follow -type f 2>/dev/null | wc -l
>   10730
>
> I would not recommend keeping this as a for-good solution, but as a
> temporary one till you decide what to do. We migrated most of the tables
> to InnoDB (there was no reason to keep them as MyISAM, it was just legacy
> stuff), and these problems were gone.
>
> I don't know if this can be your case, but this is what happened to us.

Hello Manuel,
thx for your tip. We caught the problem when we moved to partitions.
Strange is that while testing the problem did not show up, and even now we
have no clue why we experience the sudden bursts of file-descriptor use.

BTW: I checked for that option and it seems it is not set on the command
line. Maybe it is hidden somewhere; since we use the mysqld_safe wrapper it
should not be, but you never know.

re,
 wh
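Manuel's one-liner, restated so it can be tried without a running server: it
counts the descriptors of the current shell via $$; substitute
`cat /var/run/mysqld/mysqld.pid` to inspect mysqld, and compare the result
against SHOW GLOBAL STATUS LIKE 'Open_files' on the server side:

```shell
pid=$$                                   # stand-in for the mysqld PID
count=$(find /proc/$pid/fd -follow -type f 2>/dev/null | wc -l)
total=$(ls /proc/$pid/fd | wc -l)        # includes sockets, pipes, ttys
echo "pid $pid: $count regular files of $total descriptors"
```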
Re: open files in mysqld 5.1.53
Am 13.06.2013 12:07, schrieb Hartmut Holzgraefe:

> On 06/13/2013 09:41 AM, walter harms wrote:
>> Hello Manuel,
>> thx for your tip. We caught the problem when we moved to partitions.
>> Strange is that while testing the problem did not show up, and even now
>> we have no clue why we experience the sudden bursts of file-descriptor
>> use.
>
> The problem with partitions is that all partition files get opened when a
> partitioned table is used, even when only a single partition is actually
> touched.
>
> This, combined with file-per-table engines like MyISAM or ARCHIVE (maybe
> with InnoDB with innodb-file-per-table being active, too?), may well lead
> to a lot of file handles being used; and this again, combined with the
> table_cache keeping opened tables (and the related files) open for later
> reuse, can lead to a lot of file handles being used and kept in use ...
>
> FLUSH TABLES may help here, by at least releasing those handles that are
> just kept open by the table cache ...
>
> See also bug reports like e.g. http://bugs.mysql.com/bug.php?id=64498

Yes, that seems related; a quick check shows that we have ~800 files per
table. We already increased our files limit some time ago, so this should
not be a problem.

re,
 wh
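The server-side knobs Hartmut's explanation touches, as a my.cnf sketch. The
values are purely illustrative, not recommendations; the right sizes depend
on the partition-file counts discussed above:

```
[mysqld]
open_files_limit  = 65535   # ceiling on descriptors mysqld may use
table_open_cache  = 2000    # opened-table cache ("table_cache" before 5.1.3)
innodb_open_files = 2000    # .ibd files kept open with innodb_file_per_table
```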
Re: MySQL on RHEL4
Am 05.04.2013 07:56, schrieb Keith Keller:

> On 2013-04-05, Nitin Mehta ntn...@yahoo.com wrote:
>> We're trying to upgrade our existing MySQL 5.1.26 to MySQL 5.1.68, but
>> the installation gives errors:
>>
>>   libc.so.6(GLIBC_2.4) is needed by MySQL-server-community-5.1.68-1.rhel5.i386
>>   rtld(GNU_HASH) is needed by MySQL-server-community-5.1.68-1.rhel5.i386
>>
>> Now, GLIBC_2.4 is not available for RHEL4, and MySQL 5.1 and 5.5 are
>> supported on RHEL4 as per this:
>> http://www.mysql.com/support/supportedplatforms/database.html
>> Any ideas?
>
> You have apparently taken an RPM for RHEL5 and attempted to use it on
> RHEL4. It should not be a surprise that it doesn't work. To get it
> working, either use the correct RPMs (which may not be available), compile
> from source yourself, or upgrade to RHEL5.
>
> --keith

Footnote: you can also rebuild the RPM when you recompile.

re,
 wh
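That footnote, spelled out as a command sketch; the source-RPM file name is
a guess derived from the package name in the error message:

```
# rebuild on the RHEL4 box so the binaries link against the local glibc
rpmbuild --rebuild MySQL-community-5.1.68-1.rhel5.src.rpm
# finished RPMs land under /usr/src/redhat/RPMS (or ~/rpmbuild/RPMS)
```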
Re: log sequence number InnoDB: is in the future!?
Am 02.02.2013 01:34, schrieb Larry Martell:

> On Mon, Jan 28, 2013 at 5:01 AM, walter harms wha...@bfs.de wrote:
>> hi list,
>> I am using MySQL 5.1.53. After a crash I have the following error in my
>> log:
>>
>>   130128 10:45:25 InnoDB: Error: page 61 log sequence number 0 2871649158
>>   InnoDB: is in the future! Current system log sequence number 0 2494349480.
>>   InnoDB: Your database may be corrupt or you may have copied the InnoDB
>>   InnoDB: tablespace but not the InnoDB log files. See
>>   InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
>>   InnoDB: for more information.
>>
>> According to the docs at
>> http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
>> I need to restore the database from scratch (short version). What I do
>> not understand is what exactly is broken. The whole DBMS? One instance?
>> (mysqlcheck says all tables are OK.) Not all tables are InnoDB. Is it
>> possible to restore only the InnoDB tables? (Having fun with foreign
>> keys.) Or is there a better way to handle this?
>
> We had the same thing happen to us today. We had a power hit, and when the
> server came back we got the "log sequence numbers in the future" message.
> We were able to dump the affected tables, but when we tried to restore
> them we were not able to drop the old tables. When we tried, the server
> crashed with:
>
>   InnoDB: Failing assertion: not_full_n_used >= descr_n_used
>
> We did try booting with innodb_force_recovery at all levels from 1 to 6,
> with the same results. We still have not figured out what to do. Pretty
> big disaster.

Yep, a serious problem. I tried several things that came to my mind, but it
was all useless; I had to drop the database and manually rm ib_logfile0/1.
Did you also get the funny error message about raw partitions?

I must admit that we made several tests before using InnoDB and we never
had such a problem; actually we are happy with it, but this kind of problem
cost me three days of backup replay.

re,
 wh
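For reference, the staged-recovery route both posters allude to, as a
my.cnf sketch. The usual sequence is: set the level, start the server just
far enough to mysqldump what is readable, then rebuild. Levels above 3 can
permanently damage data, so raise the value stepwise:

```
[mysqld]
innodb_force_recovery = 1   # try 1 first; raise towards 6 only as needed
```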
Re: log sequence number InnoDB: is in the future!?
Am 28.01.2013 16:18, schrieb Andrew Moore:

> So this isn't production? Well, just rebuild it from a backup. It's a pain
> in the rear to get the LSN aligned again through data creation/removal,
> but if it's a system-critical instance without possible downtime you've
> got some work to do...

To be fair, my main concern is to understand what is going on. Last time we
had this in production we loaded the backup, but that takes some serious
time; this time I hoped to find a faster solution.

What exactly belongs to the InnoDB side of a database (besides the tables)?
Only the ibdata1 file, or is there more?

re,
 wh

> On Mon, Jan 28, 2013 at 2:21 PM, walter harms wha...@bfs.de wrote:
>> Am 28.01.2013 15:01, schrieb Manuel Arostegui:
>>> 2013/1/28 walter harms wha...@bfs.de
>>>> hi list,
>>>> I am using MySQL 5.1.53. After a crash I have the following error in
>>>> my log:
>>>>
>>>>   130128 10:45:25 InnoDB: Error: page 61 log sequence number 0 2871649158
>>>>   InnoDB: is in the future! Current system log sequence number 0 2494349480.
>>>>   InnoDB: Your database may be corrupt or you may have copied the InnoDB
>>>>   InnoDB: tablespace but not the InnoDB log files. See
>>>>   InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
>>>>   InnoDB: for more information.
>>>>
>>>> According to the docs at
>>>> http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
>>>> I need to restore the database from scratch (short version). What I do
>>>> not understand is what exactly is broken. The whole DBMS? One instance?
>>>> (mysqlcheck says all tables are OK.) Not all tables are InnoDB. Is it
>>>> possible to restore only the InnoDB tables? (Having fun with foreign
>>>> keys.) Or is there a better way to handle this?
>>>
>>> Hello,
>>> I reckon you really need to think about what caused your MySQL to
>>> crash. If there's not a clear reason (HW problem), you might want to
>>> dig into that to prevent this happening again. I am saying this because
>>> it is not the first time I see someone fixing a corruption (re-building
>>> the database or fixing corrupted tables) and then getting it corrupted
>>> again within some hours.
>>
>> Very simple: power outage. Our production servers are on UPS, but I was
>> making tests on this one, and to be fair, power outages are very seldom.
>>
>>> The problem itself has a solution: increasing the log sequence counter.
>>> I wouldn't do it if it's not totally necessary (i.e. you don't have
>>> another machine to copy the data from). If you can get the data copied
>>> again from some other server, that is probably the safest solution here
>>> to make sure your data isn't corrupted. If not, I would suggest running
>>> pt-table-checksum to make sure the data is okay, once your DB is
>>> recovered from this crash.
>>
>> pt-table-checksum, meaning this tool?
>> [http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html]
>> I would need to run it once; from the description I had the impression
>> it is intended for monitoring. Could you please explain?
>>
>> re,
>> wh
>>
>>> Cheers
>>> Manuel.
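On the pt-table-checksum question: despite the monitoring-flavoured
description, it is a run-once (or cron-run) tool, not a daemon; a single
invocation checksums each table chunk by chunk and, on replicas, reports
differences. An illustrative invocation (the DSN user/password and database
name are placeholders):

```
pt-table-checksum h=localhost,u=checksum_user,p=secret --databases mydb
```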
Re: log sequence number InnoDB: is in the future!?
Am 28.01.2013 15:01, schrieb Manuel Arostegui:

> 2013/1/28 walter harms wha...@bfs.de
>> hi list,
>> I am using MySQL 5.1.53. After a crash I have the following error in my
>> log:
>>
>>   130128 10:45:25 InnoDB: Error: page 61 log sequence number 0 2871649158
>>   InnoDB: is in the future! Current system log sequence number 0 2494349480.
>>   InnoDB: Your database may be corrupt or you may have copied the InnoDB
>>   InnoDB: tablespace but not the InnoDB log files. See
>>   InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
>>   InnoDB: for more information.
>>
>> According to the docs at
>> http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
>> I need to restore the database from scratch (short version). What I do
>> not understand is what exactly is broken. The whole DBMS? One instance?
>> (mysqlcheck says all tables are OK.) Not all tables are InnoDB. Is it
>> possible to restore only the InnoDB tables? (Having fun with foreign
>> keys.) Or is there a better way to handle this?
>
> Hello,
> I reckon you really need to think about what caused your MySQL to crash.
> If there's not a clear reason (HW problem), you might want to dig into
> that to prevent this happening again. I am saying this because it is not
> the first time I see someone fixing a corruption (re-building the database
> or fixing corrupted tables) and then getting it corrupted again within
> some hours.

Very simple: power outage. Our production servers are on UPS, but I was
making tests on this one, and to be fair, power outages are very seldom.

> The problem itself has a solution: increasing the log sequence counter. I
> wouldn't do it if it's not totally necessary (i.e. you don't have another
> machine to copy the data from). If you can get the data copied again from
> some other server, that is probably the safest solution here to make sure
> your data isn't corrupted. If not, I would suggest running
> pt-table-checksum to make sure the data is okay, once your DB is
> recovered from this crash.

pt-table-checksum, meaning this tool?
[http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html]
I would need to run it once; from the description I had the impression it
is intended for monitoring. Could you please explain?

re,
 wh

> Cheers
> Manuel.
Re: log sequence number InnoDB: is in the future!?
Am 28.01.2013 14:40, schrieb Andrew Moore:

> Dump and reload, or use some scripting to create and drop some fake data
> to increase the LSN towards the 'future' value.
> http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future

For now I tend towards solution 3, rsync. Do you know whether it is
possible to copy only certain files?

re,
 wh

> On Mon, Jan 28, 2013 at 12:01 PM, walter harms wha...@bfs.de wrote:
>> hi list,
>> I am using MySQL 5.1.53. After a crash I have the following error in my
>> log:
>>
>>   130128 10:45:25 InnoDB: Error: page 61 log sequence number 0 2871649158
>>   InnoDB: is in the future! Current system log sequence number 0 2494349480.
>>   InnoDB: Your database may be corrupt or you may have copied the InnoDB
>>   InnoDB: tablespace but not the InnoDB log files. See
>>   InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
>>   InnoDB: for more information.
>>
>> According to the docs at
>> http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
>> I need to restore the database from scratch (short version). What I do
>> not understand is what exactly is broken. The whole DBMS? One instance?
>> (mysqlcheck says all tables are OK.) Not all tables are InnoDB. Is it
>> possible to restore only the InnoDB tables? (Having fun with foreign
>> keys.) Or is there a better way to handle this?
>>
>> re,
>> wh
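The dump-and-reload route Andrew mentions, as a command sketch: it discards
and recreates the shared tablespace and redo logs, so the LSN restarts from
scratch. Paths and the backup file name are illustrative:

```
mysqldump --all-databases --routines > full_backup.sql   # while the server still starts
# stop mysqld, then remove the InnoDB system files so they are recreated:
#   rm /var/lib/mysql/ibdata1 /var/lib/mysql_log/ib_logfile0 /var/lib/mysql_log/ib_logfile1
# start mysqld, then reload:
mysql < full_backup.sql
```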
Re: Retrieve the values from the table of its max date
Perhaps you are looking for something like (with `t` standing in for your
table):

  SELECT entry, `timestamp`
  FROM t AS a
  WHERE a.`timestamp` = (SELECT MAX(b.`timestamp`)
                         FROM t AS b
                         WHERE a.entry = b.entry);

Also, this page may be helpful:
http://www.artfulsoftware.com/infotree/queries.php

re,
 wh

Am 30.11.2012 02:39, schrieb h...@tbbs.net:

> 2012/11/29 11:46 +0530, Trimurthy:
>> I have a table which contains the columns date, sname, age, item,
>> quantity, units. My question is: I want to retrieve all the values from
>> the table where date = maxdate, grouped by sname. How can I get those
>> values?
>
> A question, I suspect, found in all SQL courses.
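Applied to the original poster's columns (date, sname, age, item, quantity,
units; the table name `sales` is invented here), the same groupwise-maximum
pattern reads:

```
SELECT s.`date`, s.sname, s.item, s.quantity, s.units
FROM sales AS s
WHERE s.`date` = (SELECT MAX(x.`date`)
                  FROM sales AS x
                  WHERE x.sname = s.sname);
-- one row per sname (more if a name has several rows on its latest date)
```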
Re: UDF behaves non-deterministic
Can you reduce the UDF to just "return 1;"? That should give you a clue
what is going on. Random values usually point to two suspects:
1. mixing 32-bit and 64-bit
2. using void instead of int

re,
 wh

Am 04.11.2012 23:23, schrieb Stefan Kuhn:

> Hi all,
> I have a weird (for me at least) problem with a user-defined function,
> written in C. The function seems to return different results in different
> runs (the code of the function does not contain random elements).
> Basically, the function calculates a score based on a column in a table
> and an input value. So I do something like this:
>
>   select * from table order by udf(column, 'input_value') desc;
>
> For my understanding, this should always give the same result. But if I
> run many statements (execution is from a Java program and I can do it in
> parallel threads) so that they overlap (the UDF on a large table takes
> 5-10 s on a slow machine), the results of some queries are different. If
> I have enough time between statements, it seems to work, i.e. the result
> is always the same. I would have thought the statements are independent,
> even if executed on different JDBC connections in parallel.
>
> Does somebody have an idea? Or could somebody give an idea on debugging?
> Normally I would try to debug the code to see what goes on, but how can I
> do this in a UDF? Can I log in a UDF?
>
> Thanks for any hints,
> Stefan
Re: what is stored in /var/lib/mysql_log/ ?
Am 31.10.2012 17:31, schrieb Rick James:

> The 2 (possibly more) iblog files are necessary for the inner workings of
> InnoDB. They do not change in size. They should not (normally) be removed
> or otherwise tampered with. No usable data is stored there; that is, they
> cannot be used for any form of disaster recovery.
>
> Tunable things for InnoDB can be found in my.cnf (my.ini). They can be
> viewed (mostly) via:
>   SHOW VARIABLES LIKE 'innodb%';
> Current status:
>   SHOW GLOBAL STATUS LIKE 'Innodb%';
>   SHOW ENGINE INNODB STATUS;

Hi Rick,
thx for your help. I have found the root cause, and it was not related to
InnoDB: tmpdir was pointing to the same space. This was no problem for a
long time; then something happened, the system was optimizing and ran out
of space, causing a table corruption. (Why can a failed optimization cause
a table corruption?)

re,
 wh

> All of MySQL (not just InnoDB) needs tmp space for _some_ queries. It is
> normally not this same directory, but it is probably harmless if it is.
>
>> -----Original Message-----
>> From: walter harms [mailto:wha...@bfs.de]
>> Sent: Sunday, October 28, 2012 2:05 PM
>> To: mysql@lists.mysql.com
>> Subject: Re: what is stored in /var/lib/mysql_log/ ?
>>
>> Am 28.10.2012 21:50, schrieb Reindl Harald:
>>> Am 28.10.2012 21:29, schrieb walter harms:
>>>> hi list,
>>>> on my system this directory contains ib_logfile0/ib_logfile1, so far
>>>> no problem. From the documentation I had the impression that this is
>>>> everything and the file size should not change. But it seems that
>>>> InnoDB also uses this space for temp space; do they make a copy of
>>>> ib_logfile here?
>>>
>>> Please provide a directory listing so that anybody gets a clue what you
>>> are speaking about!
>>
>> Sorry, I tend to forget that not everyone has the same configuration :(
>> On my system, /var/lib/mysql_log:
>>
>>   -rw-rw---- 1 mysql mysql 268435456 Oct 28 19:20 ib_logfile0
>>   -rw-rw---- 1 mysql mysql 268435456 Oct 19 23:03 ib_logfile1
>>
>> But I guess I just found what is going on:
>>
>>   innodb_log_group_home_dir=/var/lib/mysql_log
>>   tmpdir=/var/lib/mysql_log
>>
>> I was always looking for InnoDB-related configuration/problems, but I
>> guess the strange files that appeared (and the related problems) were
>> perhaps caused by tmpdir :)
>>
>> ntl, thx
>> re,
>> wh
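The root cause above suggests a simple preventative my.cnf sketch: keep the
redo logs and the scratch directory on different filesystems, so a large
filesort or table rebuild cannot fill the log volume. The tmpdir path is
illustrative; it must exist and be writable by the mysql user:

```
[mysqld]
innodb_log_group_home_dir = /var/lib/mysql_log
tmpdir                    = /var/tmp/mysql-tmp
```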
optimize and tmpfiles
hi list,
does someone know under what circumstances OPTIMIZE TABLE will create a
temp file? We had a strange out-of-space case that seems related to an
OPTIMIZE TABLE, but I was unable to replicate the case exactly, in that no
temp file appeared. (The table has a lot of partitions, if that matters.)

re,
 wh
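One way to observe this, sketched below. Note these counters track implicit
temporary tables from queries, while OPTIMIZE on MyISAM rebuilds through a
temporary copy in the data directory itself, so watching free space in both
datadir and tmpdir during the run is the surer check (the table name is
illustrative):

```
SHOW GLOBAL STATUS LIKE 'Created_tmp%';   -- before
OPTIMIZE TABLE mytable;
SHOW GLOBAL STATUS LIKE 'Created_tmp%';   -- after: compare the _disk_ counter
```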
Re: optimize and tmpfiles
Am 29.10.2012 13:17, schrieb Reindl Harald:

> Am 29.10.2012 12:48, schrieb walter harms:
>> hi list,
>> does someone know under what circumstances OPTIMIZE TABLE will create a
>> temp file?
>
> Under all, if it is MyISAM; and for "select id from table order by
> rand();" too.

Interesting; I was trying to force it (with OPTIMIZE) but it did not work.
Is there a way to restrict the maximum size of those temp files?

re,
 wh
Re: optimize and tmpfiles
Am 29.10.2012 14:55, schrieb Reindl Harald:
Am 29.10.2012 14:54, schrieb walter harms:
interesting, I was trying to force it (with OPTIMIZE) but it did not work. Is there a way to restrict the maximum size of those tmpfiles?

how do you imagine restricting them?

actually I expected a "no", but sometimes I have to think positive. They are as big as the table; that is a problem with a large table. re, wh
what is stored in /var/lib/mysql_log/ ?
hi list, on my system this directory contains ib_logfile0/ib_logfile1, so far no problem. From the documentation I had the impression that this is everything and the file sizes should not change, but it seems that InnoDB also uses this space for temp space. Do they make a copy of ib_logfile here? re, wh
Re: what is stored in /var/lib/mysql_log/ ?
Am 28.10.2012 21:50, schrieb Reindl Harald:
Am 28.10.2012 21:29, schrieb walter harms:
hi list, on my system this directory contains ib_logfile0/ib_logfile1, so far no problem. From the documentation I had the impression that this is everything and the file sizes should not change, but it seems that InnoDB also uses this space for temp space. Do they make a copy of ib_logfile here?

please provide a directory listing so that anybody gets a clue what you are speaking about!

sorry, I tend to forget that not everyone has the same configuration :( on my system /var/lib/mysql_log:

-rw-rw 1 mysql mysql 268435456 Oct 28 19:20 ib_logfile0
-rw-rw 1 mysql mysql 268435456 Oct 19 23:03 ib_logfile1

but I guess I just found what is going on:

innodb_log_group_home_dir=/var/lib/mysql_log
tmpdir=/var/lib/mysql_log

I was always looking for InnoDB-related configuration/problems, but I guess the strange files that appeared (and the related problems) were actually caused by tmpdir :) ntl, thx re, wh
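The fix follows directly from the two settings quoted above: point tmpdir somewhere other than the InnoDB log directory. A minimal my.cnf sketch; the /var/tmp/mysql path is an illustrative assumption, not from the thread:

```ini
[mysqld]
innodb_log_group_home_dir = /var/lib/mysql_log
# keep temp files away from the redo logs (hypothetical separate path)
tmpdir = /var/tmp/mysql
```

Whatever path is chosen, the mysql user needs write access to it, and it should have enough free space for the largest table that gets rebuilt.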
restrict Connect Time
Hi list, is there a switch with which I can restrict the connect/execution time for a query? re, wh
Re: restrict Connect Time
Am 23.07.2012 15:47, schrieb Ananda Kumar:
you can set this in the application server. You can also set this parameter in my.cnf: wait_timeout=120 (in seconds). But the above parameter is only for inactive sessions.

actually I want to catch scripts running wild. re, wh

On Mon, Jul 23, 2012 at 6:18 PM, walter harms wha...@bfs.de wrote:
Hi list, is there a switch with which I can restrict the connect/execution time for a query? re, wh
Re: restrict Connect Time
Am 23.07.2012 16:10, schrieb Ananda Kumar:
you can check the slow query log, this will give you all the SQL statements which are taking more time to execute

Yes, but you see the results only when the query is finished. My first idea was to use something like this:

select * from information_schema.processlist where state like 'executing' and time > 1000;

unfortunately time is cumulative, and I would also kill the legitimate long-running processes that we have. I guess I will make some assumptions about the statement and kill the rest. re, wh

On Mon, Jul 23, 2012 at 7:38 PM, walter harms wha...@bfs.de wrote: [...]
Re: restrict Connect Time
Am 23.07.2012 16:37, schrieb Ananda Kumar:
why don't you set up a staging env, which is very much similar to your production, and tune all long-running SQL

They are tuned and they are fast :) but they never log out, and therefore the time accumulates. re, wh

On Mon, Jul 23, 2012 at 8:02 PM, walter harms wha...@bfs.de wrote: [...]
Re: restrict Connect Time
Am 23.07.2012 16:58, schrieb Ananda Kumar:
so, it's more about inactive connections, right. What do you mean by NEVER LOGOUT

The programs watch certain states in the database; they connect automatically at DB startup, and disconnecting is an error case. re, wh

On Mon, Jul 23, 2012 at 8:17 PM, walter harms wha...@bfs.de wrote: [...]
Re: restrict Connect Time
Am 23.07.2012 17:38, schrieb Reindl Harald:
Am 23.07.2012 17:35, schrieb walter harms:
The programs watch certain states in the database; they connect automatically at DB startup, and disconnecting is an error case.

so why do you want to restrict connect time if this is an error case for you?

no, this is a misunderstanding: I want to catch running queries that have already run longer than a certain time. When I use my simple approach like

select * from information_schema.processlist where state like 'executing' and time > 1000;

it did not work as intended. Unfortunately 'time' is cumulative, meaning there is a real risk of catching legitimate users. I guess I could filter based on 'info' (which contains the query), but I was hoping that there is a simpler way. re, wh
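The watchdog idea discussed in this thread can be sketched as a shell filter over processlist rows. Since no live server is available here, a printf stub stands in for the real `mysql -BAN -e "SELECT id, time, info FROM information_schema.processlist WHERE state = 'executing'"` call; the stub data and the column layout are assumptions, and the 1000-second threshold is taken from the query above:

```shell
# stub output: one row per query, tab-separated: id, time (seconds), info
printf '12\t50\tSELECT ...\n13\t2000\tSELECT ...\n' |
awk -F'\t' '$2 > 1000 { print $1 }'   # ids of queries over the threshold
# → 13
```

In a real script the printed ids would be fed to KILL QUERY; the cumulative-time caveat from the message still applies, so some filtering on the info column would be needed.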
Re: Script for cleaning data on a regular basis
Am 26.01.2012 18:45, schrieb Halász Sándor:
2012/01/26 10:34 AM +0200, a bv:
The database contains tables which get monthly data, and these tables are named name1_name2_yearmonth. I only want to keep the last 2 years in the whole database system and automatically clean the data that became more than 2 years old, so I need a script for this (shell, PHP etc). Can you please help me with this script? And which MySQL command must be used -- DROP, TRUNCATE?

You can, if you are careful, write an SQL procedure for dropping the appropriate tables. Since MySQL has no table variables, if you do this you will need PREPARE. Therefore it may well be easier to do it all in PHP, since there you can construct statements. The statement SHOW TABLES is also a reference to table INFORMATION_SCHEMA.TABLES; maybe you can use SHOW TABLES for a SELECT statement in PHP; if not, try INFORMATION_SCHEMA.TABLES for their names.

This is simple; consult your manual about crontab and start a script like this:

#!/bin/bash
mysql -BAN database <<EOF
place my great SQL statement here
EOF

If you need some additional information like $HOME you can use it inside the here-document as well. If something goes wrong, a mail will be sent to the owner (for details see man 5 crontab). re, wh
Re: Binary builds for AIX
It is some time since I used AIX, but maybe this helps. As far as I know IBM has moved to GNU tools; if not, do it, it will ease the pain. I assume that you have gcc etc. running. After downloading the latest version of the MySQL source:

1. unpack
2. ./configure -- if it complains, try to fix it /* hope for the best */
3. make -- if it complains, try to fix it
4. make check /* I do not remember the exact target, but this runs the tests and should also be covered in INSTALL */
5. make install

Note that you still have to make sure that the DB will start at boot. Do not blame me if something fails; I have not used AIX since 4.1. re, wh

Am 09.09.2011 17:22, schrieb Peter Gershkovich:
I noticed that there are no binary builds for AIX any more. What would be the best way to install a current version of MySQL (5.5) on AIX? Specifically I am looking for instructions for AIX 6.1 and/or 7.1. Thanks, Peter
Re: MySQL daemons restarting every 7 minutes
Am 10.09.2011 16:07, schrieb a.sm...@ukgrid.net:
Ok, this is pretty odd, but I have found the problem. Today I have repointed all applications to a different DB server, so I have been free to do any testing on the problem server. I started by dropping the databases one by one; I dropped them all and the issue persisted. I stopped crond, even though I'd already looked in crontab and was satisfied it wasn't the culprit. Problem persisted. I then removed the MySQL data dir and my.cnf and restarted with a blank config. Problem still persisted. Turned on the general log: nothing happening (as nothing legit was still pointing to the DB on this server). Problem persisted. Then I thought, what if I have hosts.allow misconfigured and it's wide open -- maybe a remote system is connecting and messing with it. But hosts.allow was correct (mysql not listed, so denied by the last all:all). I tested connecting from a remote server, and guess what? The MySQL daemons restarted in exactly the way I was seeing every 7 mins, each time just by simply running: mysql -h tau. Odd, that. So I added a mysql-specific line to hosts.allow (which should not be necessary, as all services are blocked by the last line anyway). Tested that: no more crashing. Odd! The hosts.allow file is the stock FreeBSD one, updated to not allow all:all and to use denyhosts (ssh blocked). Additional info: using tcpdump I could see that the every-7-min crashes were not in fact caused by any connections from other hosts; MySQL was crashing regardless.

What I find odd is that your mysqld actually restarts. Do you have it in some runlevel? If yes, stop it and see what happens. If this does not work, simply move the mysqld binary out of the way and replace it with a script like

#!/bin/sh
echo "mysqld ..." | logger -t TEST

and see what happens in /var/log/syslog (you get the idea). re, wh
Re: MySQL daemons restarting every 7 minutes
Am 10.09.2011 16:25, schrieb a.sm...@ukgrid.net:
Quoting walter harms wha...@bfs.de: [...]
Hi, it's FreeBSD, so no runlevels other than single user and up/multi user. And FreeBSD has no Solaris-SMF-like monitoring of daemons, so I can be fairly certain that the RC script is not being called by anything. I do use PSMON, but it would advise me if it had taken action, and I've tested with PSMON stopped also. As I said, I can get it to restart just by doing a remote connect from another server...

Sorry, I am a late starter in this thread ... What I would like to understand is: is it a proper shutdown? (As far as I understand, yes; hence the idea of a starter script.) Does it concern the target mysqld only? (therefore the script) Since my first idea was that something was calling the starter script, we will modify the dummy a bit:

#!/bin/sh
while true
do
    echo "mysqld ..." | logger -t TEST
    sleep 1m
done

This will make sure that you see a msg every minute. If you trigger something, it will try to kill the script. I still do not see why it is restarting ... there must be something watching it disappear. Just to be sure, from a remote host you run: mysql -h HOST -u me -e 'show tables'? Long shot: do you have LDAP, NIS or the like enabled? re, wh
Re: MySQL daemons restarting every 7 minutes
Am 10.09.2011 17:32, schrieb a.sm...@ukgrid.net:
Quoting walter harms wha...@bfs.de: [...]
Ok, so I made a script as you suggested, and it is called from mysqld_safe. So that is what is restarting mysqld.

When we can establish that there is no interference from outside, we need to reduce further. Since you are running mysqld_safe -- that is nice -- can you see the parameters with which mysqld is started? Restart it with the same parameters on the command line and see what happens. The server supports a verbose option (never used it); perhaps it will tell you more.

With regard to whether it is doing a proper shutdown: no, it isn't. It restarts instantly when triggered by a remote connection, and for example I see no "InnoDB: Starting shutdown..." info logged by mysqld. I'm not using LDAP or NIS.

So it must be a local problem (at least it has nothing to do with auth). re, wh

Andy.
different desc for same statement
hi list, I have a very strange effect. I have two boxes with the same DB (same version, same tables, same my.cnf, etc). I was trying to optimise an SQL statement and used DESC to see what is going on, and found to my surprise two different results. Can this be the result of the optimizer? (the boxes have very different uptimes) re, wh

here the output of the DESC statement:

+----+--------------------+-------+--------+-------------------+---------+---------+--------+------+--------------------------+
| id | select_type        | table | type   | possible_keys     | key     | key_len | ref    | rows | Extra                    |
+----+--------------------+-------+--------+-------------------+---------+---------+--------+------+--------------------------+
|  1 | PRIMARY            | D     | ref    | PRIMARY,fk_mk_d   | fk_mk_d | 2       | const  | 1087 | Using where              |
|  1 | PRIMARY            | A     | ref    | PRIMARY,fk_mms_sb | PRIMARY | 4       | D.kenn |  198 | Using where              |
|  1 | PRIMARY            | B     | eq_ref | PRIMARY           | PRIMARY | 3       | A.code |    1 |                          |
|  2 | DEPENDENT SUBQUERY | C     | ref    | PRIMARY,fk_mms_sb | PRIMARY | 4       | D.kenn |  165 | Using where; Using index |
+----+--------------------+-------+--------+-------------------+---------+---------+--------+------+--------------------------+

+----+--------------------+-------+--------+-------------------+-----------+---------+--------+------+--------------------------+
| id | select_type        | table | type   | possible_keys     | key       | key_len | ref    | rows | Extra                    |
+----+--------------------+-------+--------+-------------------+-----------+---------+--------+------+--------------------------+
|  1 | PRIMARY            | B     | ALL    | PRIMARY           | NULL      | NULL    | NULL   |   29 | Using where              |
|  1 | PRIMARY            | A     | ref    | PRIMARY,fk_mms_sb | fk_mms_sb | 3       | B.code | 2035 | Using where              |
|  1 | PRIMARY            | D     | eq_ref | PRIMARY,fk_mk_d   | PRIMARY   | 4       | A.kenn |    1 | Using where              |
|  2 | DEPENDENT SUBQUERY | C     | ref    | PRIMARY,fk_mms_sb | PRIMARY   | 4       | D.kenn |  574 | Using where; Using index |
+----+--------------------+-------+--------+-------------------+-----------+---------+--------+------+--------------------------+
Re: different desc for same statement
Am 31.08.2011 13:51, schrieb Johan De Meersman:
Exactly the same data, too? Different index leaf distribution might account for something like this, and it does look like you're retrieving different datasets.

same data; think of it as a backup. re, wh
Re: Odd MySQL performance behaviour
maybe it is obvious, but did you look at the statistics? Did you try OPTIMIZE TABLE? re, wh

Am 18.07.2011 18:40, schrieb A F:
[Process:] Importing delimited text files from a Windows-based server to a MySQL 5.1.41 instance (multiple databases) on a single Ubuntu 10.04.2 host. The process is initiated on the Windows server via the MySQL exe using 'load data local infile'. There are 20 databases total and we import 15 files per database -- 1 file per table. All tables use the MyISAM engine. Prior to each import, we truncate the destination tables. All processing is done sequentially.

[Issues:] The process will run fine for about 2 weeks, then the overall processing time increases significantly. For example, we'll see a 35-minute run-time for 2 weeks, then the next day it's 55 minutes, then 80 minutes, etc.; then, without any changes, it will drop back to 35 after about a week (lengths of time vary). It does not seem to be the result of any other processing requirements, as the server is basically dedicated to MySQL only and there are no manually added CRON tasks. Even the truncate table steps appear to be affected. A comparison of 'show status' on a normal day vs. a slow day does not appear to show any major issues other than a high volume of aborted_connects, but this variable seems to increase regularly, outside of the actual import processing window. Not sure if this is some sort of MySQL resource buildup or something related to the OS. Any input would be greatly appreciated.
Re: SELECT records less than 15 minutes old
from: http://dev.mysql.com/doc/refman/5.0/en/time-zone-support.html
SET GLOBAL time_zone = timezone;

from: http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_utc-timestamp
* UTC_TIMESTAMP, UTC_TIMESTAMP()

I have the same results; according to the docs, timezone is the offset to UTC (which is used internally, of course). See also: http://www.mysqlfaqs.net/mysql-faqs/General-Questions/How-to-manage-Time-Zone-in-MySQL

btw: please notice the difference between:

mysql> select @@session.time_zone;
+---------------------+
| @@session.time_zone |
+---------------------+
| +00:00              |
+---------------------+
1 row in set (0.00 sec)

mysql> select @@global.time_zone;
+--------------------+
| @@global.time_zone |
+--------------------+
| SYSTEM             |
+--------------------+
1 row in set (0.00 sec)

ntl, personally I would say it should work with 'UTC', but someone decided otherwise. (I just found a hint why: http://dev.mysql.com/doc/refman/4.1/en/mysql-tzinfo-to-sql.html)

hope that helps, wh

Am 21.06.2011 00:00, schrieb sono...@fannullone.us:
On Jun 20, 2011, at 10:11 AM, Jerry Schwartz wrote:
You should use UTC time zone or you will run into trouble with DST.
[JS] If you do that, you can't use an automatic timestamp field. You have to set the field yourself.

Thanks Walter and Jerry. Is there a way to get NOW() to use UTC instead of the server timezone? (The server is not mine, so I can't change the my.cnf.) Here's my statement:

SELECT * FROM `log` WHERE `id` = $_id AND ( `time_stamp` >= DATE_SUB(NOW(), INTERVAL 30 MINUTE) )

Earlier in my PHP script I've used date_default_timezone_set, but that doesn't affect the MySQL statement.

-- Possible Solution: I tried SET time_zone = 'UTC'; but MySQL complained with: #1298 - Unknown or incorrect time zone: 'UTC'. I then tried SET time_zone = '-0:00'; and that seems to have worked. Is this the correct way to do it? Thanks, Marc
Re: SELECT records less than 15 minutes old
Am 19.06.2011 21:06, schrieb sono...@fannullone.us:
On Jun 19, 2011, at 11:11 AM, Claudio Nanni wrote: just a quick debug:
Thanks, Claudio. It turned out that NOW() was using the server's time and my timestamp was based on my timezone. After fixing that, the SELECT statement works properly. Marc

You should use the UTC time zone or you will run into trouble with DST. re, wh
Re: Bug using 32-bit libmysqlclient on a 64-bit system?
Am 13.06.2011 18:45, schrieb Alex Gaynor:
Unfortunately the decision to run 32-bit libs on 64-bit systems is outside of my control. Given that it *should* work, I'm more interested in diagnosing whether this is a bug of some sort in libmysqlclient or a bug in my code/build procedure.

You should start here: http://maketecheasier.com/run-32-bit-apps-in-64-bit-linux/2009/08/10

Basically you have to check that every lib you use is really 32-bit; missing one is asking for trouble: random bugs, etc. Basically everything else -- running Linux in an LXC container, a virtual machine with qemu, or simply buying a 32-bit box -- is more maintainable than mixing 32- and 64-bit applications. They can run perfectly until some random moment. re, wh

Alex, On Sat, Jun 4, 2011 at 10:06 AM, walter harms wha...@bfs.de wrote: [...]
Re: Bug using 32-bit libmysqlclient on a 64-bit system?
It is basically not a clever solution to run 32-bit libs on a 64-bit system: you have to compile with -m32 and all sorts of things. It is *way* better to compile pure 64-bit. re, wh

Am 04.06.2011 02:18, schrieb Alex Gaynor:
I've got a 64-bit Linux system, with a 32-bit libmysqlclient (and a 64-bit one), and a C program using the libmysqlclient API which behaves very differently depending on which platform it is compiled for. The program is:
Re: PHP Generator for MySQL 11.4 released
maybe, but what is MySQL 11.4? re, wh

Am 27.04.2011 03:33, schrieb Sharl.Jimh.Tsin:
very useful tool, is it free? Best regards, Sharl.Jimh.Tsin (From China **Obviously Taiwan INCLUDED**)

2011/4/26 SQL Maestro Team sql.maes...@gmail.com:
Hi! SQL Maestro Group announces the release of PHP Generator for MySQL 11.4, a powerful GUI frontend that allows you to generate feature-rich CRUD web applications for your MySQL database. http://www.sqlmaestro.com/products/mysql/phpgenerator/ Online demo: http://demo.sqlmaestro.com/

PHP Generator for MySQL comes in both Freeware and Professional editions. The feature matrix can be found at http://www.sqlmaestro.com/products/mysql/phpgenerator/feature_matrix/ Please note that before the end of April 2011 you can purchase the Professional Edition of PHP Generator for MySQL, as well as all our other products and bundles, with a 20% discount.

Top 10 new features:
1. Multi-level auto-complete editors based on cascading drop-down lists.
2. Editing and inserting data in modal dialogs.
3. Pre-defined client-side validators including Range, Email, and more.
4. Fixed header for data grid and line numbers for grid records.
5. Time edit control.
6. Masked edit control to restrict data input.
7. Grid footers to display summaries (Sum, Average, Count, etc).
8. Excluding certain files from the output.
9. Template-based lookup editors.
10. Generating image thumbnails on the fly.

Full press-release (with explaining screenshots) is available at: http://www.sqlmaestro.com/news/company/php_generators_updated_to_11_4/

Background information: SQL Maestro Group offers complete database admin, development and management tools for MySQL, SQL Server, PostgreSQL, Oracle, DB2, SQLite, SQL Anywhere, Firebird and MaxDB, providing the highest performance, scalability and reliability to meet the requirements of today's database applications.
Sincerely yours, The SQL Maestro Group Team http://www.sqlmaestro.com
Re: linking to mysql in C
You are missing libmysqlclient at link time. I guess the problem is your Makefile; make != shell. Your link flags should include something like -L/usr/lib/mysql -lmysqlclient (and they belong in LDFLAGS and LOADLIBES, not CFLAGS). Note that in your log the library flags end up before the object file on the link line; the linker scans left to right, so libraries must come after the objects that use them. Something like $(shell pkg-config --libs $(packages)) should work. Further reading: http://www.gnu.org/software/make/manual/make.html#Shell-Function hope that helps, re, wh

Am 08.01.2011 06:06, schrieb Delan Azabani:
Hi all, this is a novice problem I'm having with compiling a C CGI program with MySQL on my Gentoo box. I have a simple source so far:
* cbook-main.c: http://pastebin.com/vnT6j1z2
* cbook-main.h: http://pastebin.com/4BnyWs27
* Makefile: http://pastebin.com/m973gbjG

When compiling I receive undefined reference errors:

delan@delan2 ~/cbook $ make
make cbook-main
make[1]: Entering directory `/home/delan/cbook'
cc -g `pkg-config --cflags libconfuse glib-2.0` `mysql_config --cflags` -c cbook-main.c
make[1]: Leaving directory `/home/delan/cbook'
cc `pkg-config --libs libconfuse glib-2.0` `mysql_config --libs` -o cbook.cgi cbook-main.o
cbook-main.o: In function `myerror':
/home/delan/cbook/cbook-main.c:61: undefined reference to `mysql_error'
/home/delan/cbook/cbook-main.c:61: undefined reference to `mysql_errno'
cbook-main.o: In function `myescape':
/home/delan/cbook/cbook-main.c:67: undefined reference to `mysql_real_escape_string'
cbook-main.o: In function `myquery':
/home/delan/cbook/cbook-main.c:85: undefined reference to `mysql_query'
cbook-main.o: In function `createDatabase':
/home/delan/cbook/cbook-main.c:90: undefined reference to `mysql_select_db'
cbook-main.o: In function `pageHome':
/home/delan/cbook/cbook-main.c:144: undefined reference to `mysql_store_result'
/home/delan/cbook/cbook-main.c:146: undefined reference to `mysql_fetch_row'
/h
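A minimal Makefile sketch of the split the reply describes -- compile flags in CFLAGS, libraries on the link line after the objects, here via LOADLIBES. It assumes GNU make and reuses the file and target names from the pastebins above, so treat those names as assumptions:

```makefile
# compile-time flags only
CFLAGS = -g $(shell pkg-config --cflags libconfuse glib-2.0) $(shell mysql_config --cflags)
# libraries belong on the link line, after the object files
LOADLIBES = $(shell pkg-config --libs libconfuse glib-2.0) $(shell mysql_config --libs)

cbook.cgi: cbook-main.o
	$(CC) -o $@ $^ $(LOADLIBES)
```

GNU make's built-in link rule appends LOADLIBES and LDLIBS after the object files automatically, but only when the target and object share a name; with a custom target like cbook.cgi an explicit rule as above is needed.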
Re: Searching For Modules In a DB
Carlos Mennens schrieb:
I have a database called 'gaming' and within that database there are several tables and data. I was asked to find a module called 'ako ldap' and disable it (setting it from 1 to 0). My question is: how in MySQL do I search for a string if I don't even know which table to search in? I know how to search using a SELECT statement as long as I know where the table data is. In this case I only know the database and nothing more, except what I am looking for. Can anyone please point me in the right direction?

hi Carlos, put your tables in one file each (I don't remember the option), then you can do

grep -l string path_where_datafiles/*

Every file containing the string will be listed. re, wh
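The grep -l mechanism can be tried on a toy directory first; the file names below are made up, only the mechanism matters:

```shell
# create two stand-in data files, only one containing the module name
dir=$(mktemp -d)
printf 'ako ldap enabled\n' > "$dir/modules.MYD"
printf 'something else\n'   > "$dir/other.MYD"
grep -l 'ako ldap' "$dir"/*   # lists only the files whose contents match
rm -r "$dir"
```

On a real server the glob would point at the database's data directory instead of $dir, and the listed file names identify the tables worth inspecting with SELECT.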
Re: script of mysql
PRATIKSHA JAISWAL schrieb:
Hi List, can somebody please help me if they have a script with which we can get an idea of the installed MySQL server, backups, created databases, indexes, tables, engines, replication etc... I will appreciate your help in advance

You can find tons of scripts on the internet, but you need to adjust them to *your* needs. Running a MySQL script is easy:

mysql DATABASE < script

Replace DATABASE with your database name and script with your script file. Read the available documentation on mysql.com to understand how to write such scripts yourself. re, wh
Re: remote mysqldump to csv (--tab)
you can circumvent the problem by using stdout: just drop the INTO OUTFILE '/tmp/result.txt'. The result should look something like this:

mysql -BAN database -e "select ..." > filename

You can also pipe through gzip to compress the file and save a lot of space. re, wh

note: I found that ';' is not a good separator, since people may use ';' in comments; I prefer '|' for that reason.

Marijn Vandevoorde schrieb:
Thank you for replying, Carlos, but I'm under the impression that this will also put the file on the server. Actually, mysqldump --tab uses INTO OUTFILE to generate the dump, if I'm not mistaken.

Carlos Eduardo Caldi wrote:
Hi, you can, on the shell, connect to the server (mysql -h (host or IP) -p(password)) and run the query:

SELECT a, b, c INTO OUTFILE '/tmp/result.txt' FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '' LINES TERMINATED BY '\n' FROM test_table

For more info see the link http://dev.mysql.com/doc/refman/5.0/en/select-into-statement.html Carlos Caldi

Date: Wed, 28 Apr 2010 14:11:14 +0200 From: ma...@psb.vib-ugent.be To: mysql@lists.mysql.com Subject: remote mysqldump to csv (--tab)

Hi all, I posted this in the backup list, but that one seems to be pretty dead, so I'll try my luck again here: we're currently looking for a way to back up a pretty big MySQL table to a CSV file. However, we don't want to allow the user ssh or file access to the server, so it'd have to happen remotely. The --tab/-T option allows exporting to CSV, but not remotely. So we're looking for a way to have mysqldump store these files remotely, on the client where mysqldump is executed. I've been looking up and down, only to find bad news, so I'm pretty sure it's just not possible.
I know there are ways to do this (a little script to convert the SQL to CSV, or piping it all through sed), but we were just wondering whether we're really not overlooking a nifty option in mysqldump :-)

Thanks in advance
mavoo
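For the archives, the client-side streaming approach suggested in the reply above can be sketched end to end; dbhost, dbuser, mydb, and mytable are placeholder names, and since this needs a live server it is a sketch rather than a tested command. The -B (batch), -A (no auto-rehash) and -N (no column headers) switches produce clean tab-separated output on stdout:

```shell
# Dump one table to a compressed delimited file on the *client*;
# no FILE privilege or server-side filesystem access is needed.
mysql -h dbhost -u dbuser -p -BAN mydb \
      -e "SELECT a, b, c FROM mytable" \
  | tr '\t' '|' \
  | gzip -c > mytable.csv.gz
```

The tr step swaps the tab separator for '|', per the note above about ';' (and other separators) turning up inside free-text columns.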
Re: Fwd: Mixing Latin and UTF
Huib schrieb: Hello, I hope that this is the right list. I have a database that has been running for years in latin1, but a software update changed it to utf8. That would have been no big deal if we had known it right away, so we could have changed the database. The big problem is that the database has been running for 2 months as utf8 and it is causing problems now. I have about 500 MB latin1 and 100 MB utf8 in the database. How can I convert the database to utf8 without breaking it?

In short, that is bad. I would do an unload/load cycle; you can easily use recode/iconv or friends to convert. The problem is that you have to check carefully not to convert too much. If you can unload only the latin1 part, that would be a great help; also, having binary data will make things more complicated.

re, wh
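A minimal sketch of the iconv step mentioned above; the sample byte 0xE9 is the latin1 encoding of 'é'. In a real cleanup you would run this selectively on the unloaded latin1 dump, not blindly on mixed data, which is exactly the "not to convert too much" caveat:

```shell
# Convert a latin1-encoded file to UTF-8.
printf 'caf\xe9\n' > latin1.txt            # 0xE9 = 'é' in latin1
iconv -f LATIN1 -t UTF-8 latin1.txt > utf8.txt
cat utf8.txt                               # prints: café
```

Running the already-utf8 parts through the same conversion would double-encode them, so splitting the dump first is essential.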
Re: Optimising a very large table
Nathan Harmston schrieb: Hi everyone, I am currently working on an application where I have a very large table called interval (atm it's 80,000,000 records and growing) and a smaller table (token) which joins with it. interval is just an id, start, end, word; token is id, interval_id, type, processed_by. There is a many-to-one relation, i.e. one interval can have many tokens. The idea is that as I use different tools I generate different tokens, but the underlying intervals tend to be the same. When I add a new token I first need to search the intervals table to see if one already exists. Of course there's an index on intervals to deal with this, but as I add more and more new records I notice a massive slowdown in processing; I think this is due to the increase in new records being added and indexed. The problem is I can't turn indexes off, as I have 80 million records. Does anyone have any suggestions for optimising this design, or where to start? The only option I have at the moment is to denormalise my schema, but this would complicate things at the application level considerably.

Hi, your problem sounds like a time series, the same problem I am dealing with. IMHO there is no real solution; we have split the time series and use several tables (1 table per month here, but it depends on your exact problem). That moves the "which table?" problem to the application. Since MySQL 5.1 there are partitions, which move the problem back to the DB. The solution depends on your usage pattern; clearly there is nothing like a clever SELECT statement for this. For your token problem you can use InnoDB foreign keys: if the insert fails, simply check whether you need to add another token to the token table. NTL you need to decide how long you will store the data. Let's assume you have 1e6 records/day and you want to store them for 40 years; then you get 40*360*1e6 records. Do you want to handle that?
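The native-partitioning option mentioned above can be sketched with 5.1 RANGE partitioning; the table and column names here are invented for illustration, and this needs a 5.1 server, so it is untested:

```shell
# Hypothetical monthly RANGE partitioning (MySQL >= 5.1);
# 'intervals_ts' and 'created' are illustrative names only.
mysql mydb <<'SQL'
CREATE TABLE intervals_ts (
    id      BIGINT NOT NULL,
    created DATE   NOT NULL,
    word    VARCHAR(64)
)
PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p2009_01 VALUES LESS THAN (TO_DAYS('2009-02-01')),
    PARTITION p2009_02 VALUES LESS THAN (TO_DAYS('2009-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);
SQL
```

The payoff over table-per-month is that expiring an old month becomes ALTER TABLE ... DROP PARTITION, which is far cheaper than a bulk DELETE, and the application keeps querying a single table name.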
re, wh
Re: SCALING INSERT
Krishna Chandra Prajapati schrieb: Hi list, I want to insert 1 records/sec into a table. There can be n tables, with unique data in each. What are the possible ways to do this?

I prefer mysqlimport. Just sort your output into a file that is named like the table you wish to import; basic Unix scripting is sufficient.

re, wh
Re: parameter being overwritten
Jerome Macaranas schrieb: I'm trying to set up a MySQL slave, but it won't start because of this error:

    ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO

After some testing I saw server-id = 0 via SHOW VARIABLES LIKE 'server_id', so I went to check the server-id parameter in /etc/my.cnf:

    grep server-id /etc/my.cnf
    #server-id = 2
    server-id = 2

That is good. ls -l ~/my.cnf: file not found. print_defaults mysqld returns --server-id=2, which is also good, but SHOW VARIABLES LIKE 'server_id' still shows 0. What I had to do was set the global variable from the mysqld CLI, which is not a good thing. Additional info: default options are read from the following files in the given order: /etc/my.cnf ~/.my.cnf /etc/my.cnf. Is there any way to trace why server-id = 0?

To make sure that mysql is actually reading the file, you may use strace (see man strace) and check whether my.cnf is really read.

re, wh
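The strace check can be sketched roughly as follows; the mysqld_safe path is an assumption and the traced syscall names vary by platform, so treat this as a starting point only:

```shell
# Watch which option files the server actually opens at startup
# (run as root; /usr/bin/mysqld_safe is a guess, adjust to your install).
strace -f -e trace=open,openat /usr/bin/mysqld_safe 2>&1 | grep 'my\.cnf'
```

If a second, unexpected my.cnf shows up later in the trace, its settings silently override the earlier file, which would explain a server-id of 0.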
all tables with certain type
hi list, is it possible to get a list of all tables with a column of a certain type in one statement? For now I collect all tables (SHOW TABLES) and search for the type (SHOW COLUMNS). Any way to circumvent that and make it one statement?

re, wh
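For the record, since MySQL 5.0 the SHOW TABLES + SHOW COLUMNS loop can indeed be collapsed into one statement against information_schema; 'mydb' and the 'timestamp' type below are placeholders:

```shell
# One-statement replacement for the SHOW TABLES / SHOW COLUMNS loop:
# every table in schema 'mydb' that has a column of type TIMESTAMP.
mysql -BAN <<'SQL'
SELECT DISTINCT TABLE_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND DATA_TYPE = 'timestamp';
SQL
```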
Re: mysqlcheck user minimum privileges
I do not know the internals, but I guess having SELECT only would be enough if you drop the auto-repair feature; a repair always requires changes (= writes). Maybe you can use mysqldump instead?

re, wh

René Fournier schrieb: Just wondering what they are. I'd rather not use the MySQL root user for a backup script if I can get away with a MySQL user with reduced privileges, for the following command:

    mysqlcheck -ao --auto-repair --all-databases -u someuserotherthanroot -pPASSWORD

...Rene
Re: mysql accessing one database from another
Manasi Save schrieb: Hi All, I need to access sub-databases through a main database. I have one main database and several sub-databases; for accessing those databases I am using MySQL prepared statements, but the performance I am getting from this is very low. Can anyone suggest an alternate way? Please let me know if you need any other information. Thanks in advance.

I do not know about the performance, but see: http://dev.mysql.com/doc/refman/5.0/en/federated-storage-engine.html

re, wh
Re: cannot find my.cnf file
Sydney Puente schrieb: Hello, I want to log all SQL queries made against a MySQL db. I googled and found I should add a line to my.cnf; however, I cannot find a my.cnf file.

    [r...@radium init.d]# ps -ef | grep mysql
    root  13614     1 0 Sep24 ?     00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/object01.pid
    mysql 13669 13614 0 Sep24 ?     00:21:40 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/object01.err --pid-file=/var/lib/mysql/object01.pid
    root  23050 22746 0 19:05 pts/0 00:00:00 grep mysql
    [r...@radium init.d]# locate cnf
    /usr/share/doc/MySQL-server-community-5.1.39/my-huge.cnf
    /usr/share/doc/MySQL-server-community-5.1.39/my-innodb-heavy-4G.cnf
    /usr/share/doc/MySQL-server-community-5.1.39/my-large.cnf
    /usr/share/doc/MySQL-server-community-5.1.39/my-medium.cnf
    /usr/share/doc/MySQL-server-community-5.1.39/my-small.cnf
    /usr/share/man/man8/cnfsheadconf.8.gz
    /usr/share/man/man8/cnfsstat.8.gz
    /usr/share/ssl/openssl.cnf
    /usr/share/mysql/my-large.cnf
    /usr/share/mysql/my-huge.cnf
    /usr/share/mysql/my-innodb-heavy-4G.cnf
    /usr/share/mysql/my-medium.cnf
    /usr/share/mysql/my-small.cnf

Any ideas? I might add that I did not install mysql and I did not start it, and the guy who did is on holiday!

System-wide config files are always in /etc/ (see: man hier).

re, wh
Re: Choose certain columns in mysqldump?
Jaime Crespo Rincón schrieb: 2009/10/29 Michael Dykman mdyk...@gmail.com: mysqldump is not really a data-manipulation tool; as the name implies, it is a dumper. What you are trying to accomplish can be done rather elegantly via the SELECT ... INTO OUTFILE syntax (http://dev.mysql.com/doc/refman/5.1/en/select.html) and then loaded into your new structure via LOAD DATA INFILE (http://dev.mysql.com/doc/refman/5.1/en/load-data.html).

Yes, in fact you can still do it from the command line with the mysql command-line client:

    mysql -urxxxt -pxxx db_name -e "SELECT [any, column, you, want] INTO OUTFILE '/var/www/folder/table_name.txt' FROM table_name WHERE [any, filter, you, want] ORDER BY [any, order, you, want]"

I prefer mysql -BAN; it has the advantage that you do not need to fiddle with headers etc.

re, wh
Re: is_string or is_numeric
Do you really need to know the difference? Take everything as a string (breaks with pics/geodata, but helps a lot).

re, wh

sangprabv schrieb: Hi, I found no built-in function in MySQL to check whether a value is numeric or a string. Is there any trick to do so? Many thanks. Willy
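If the check is ever really needed, one common trick (MySQL has no built-in is_numeric()) is a REGEXP test; 'mydb', 'sometable', and 'val' are placeholder names, and the pattern only covers plain integers and decimals, not exponents or hex:

```shell
mysql -BAN mydb <<'SQL'
-- 1 if val looks like an integer or decimal number, 0 otherwise
SELECT val,
       val REGEXP '^-?[0-9]+([.][0-9]+)?$' AS looks_numeric
FROM sometable;
SQL
```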
Re: Scaling Mysql
Krishna Chandra Prajapati schrieb: Hi list, I have two tables, send_sms and alt_send_sms. Users are inserting records into send_sms at 500/sec (ie 3/min). After applying some updates to send_sms, the data are transferred to alt_send_sms and deleted from send_sms. The same thing happens with the alt_send_sms table. Is it possible to insert 1000 records/sec into send_sms and take them out at 1000 records/sec from alt_send_sms? Which engine is better for this scenario?

Hi Krishna, I see you are using some kind of queue mechanism, but to get a useful answer you need to be more specific. E.g.: What are your safety requirements? (Tables in RAM are very fast.) Do you need foreign keys? When will data be copied (send -> alt)? After 1 day? 1 hour? How long do you need to store data in alt? How often is it accessed? If speed is a concern, do you need a database at all? (KISS) Where does the current system spend its time, and why? You see, your request is far from simple and demands detailed knowledge of your requirements, going beyond what can be done on such a mailing list (and this is only the software; there is also hardware and economics). Here you can ask: how can I improve SQL statement XX?

re, wh
Re: Simple query slow on large table
Simon Kimber schrieb: Hi Everyone, I have a very simple query that often takes several seconds to run, and I would be hugely grateful for any advice on how I might speed it up. The table contains around 500k rows and the structure is as follows:

    Field     | Type         | Null | Key | Default           | Extra
    ----------+--------------+------+-----+-------------------+---------------
    ID        | int(11)      |      | PRI | NULL              | auto_increment
    siteid    | int(11)      |      | MUL | 0                 |
    sender    | varchar(255) |      |     |                   |
    subject   | varchar(255) |      | MUL |                   |
    message   | text         |      |     |                   |
    datestamp | timestamp    | YES  | MUL | CURRENT_TIMESTAMP |
    msgtype   | int(1)       |      | MUL | 0                 |
    isread    | int(1)       |      |     | 0                 |

I have indexes on siteid, datestamp and msgtype. Queries such as the following are constantly appearing in the slow query log:

    SELECT * FROM enquiries WHERE siteid = 59255 AND msgtype = 0 ORDER BY datestamp DESC LIMIT 5;

An EXPLAIN on the above query returns:

    id: 1, select_type: SIMPLE, table: enquiries, type: ref,
    possible_keys: siteid,msgtype, key: siteid, key_len: 4, ref: const,
    rows: 1940, Extra: Using where; Using filesort

Shouldn't MySQL be using the datestamp index for sorting the records? When I remove the ORDER BY clause the query is considerably faster. Do I need to do something to make sure it uses the index when sorting? Any help will be greatly appreciated! Regards

hi Simon, you can try a join; see http://www.artfulsoftware.com/infotree/queries.php for hints. SQL is pretty bad for time-series data. IMHO the most obvious thing is to reduce the number of entries in your table (do you really need ID when you have a timestamp?, etc.). Otherwise there is the usual stuff, like MyISAM instead of InnoDB, but that depends on your requirements.

re, wh
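Beyond the reply above, the usual fix for this particular plan is a composite index covering the equality columns plus the sort column, so MySQL can read the rows pre-sorted and skip the filesort. A sketch, not tested against the poster's data; the index name is made up:

```shell
mysql mydb <<'SQL'
-- Equality columns first, then the ORDER BY column last:
ALTER TABLE enquiries ADD INDEX site_msg_date (siteid, msgtype, datestamp);
-- EXPLAIN should now show key=site_msg_date and no "Using filesort".
SQL
```

The single-column datestamp index cannot help here because MySQL first has to filter by siteid and msgtype; only an index that leads with those columns leaves the matching rows in datestamp order.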
A-letter-from-the-European-commission-regarding-the-OracleSun-merger
The letter is directed at EU-based companies; the deadline is 13.08.2009. Maybe some more people are interested in participating.

http://blog.thinkphp.de/archives/416-A-letter-from-the-European-commission-regarding-the-OracleSun-merger.html

re, wh

disclaimer: I have nothing to do with that stuff; I only found this notice: http://www.heise.de/newsticker/EU-Kommission-sucht-MySQL-Anwender--/meldung/142963 NTL I think this is of general interest to this ML.
Re: mysqldump and access rights
ok, I see now that the resulting files are owned by mysql. Every file has the same user granted, but the wrong one, since as a normal user I cannot chown user.group *. As far as I can see, it is the same problem as with SELECT ... INTO OUTFILE. Is there a fancy trick for mysqldump, or will I have to create the corresponding SELECT statements myself?

re, wh

peng yao schrieb: you can also do this: #sudo -u mysql mysqldump command, or #su - mysql -c "mysqldump command"

2009/7/24 walter harms wha...@bfs.de:

muhammad subair schrieb: On Thu, Jul 23, 2009 at 10:10 PM, walter harms wha...@bfs.de wrote: Hi list, I use mysqldump --tab to create database dumps. This produces txt and sql files; the resulting sql file is owned by the user, but the resulting datafile is owned by mysql:mysql. Is there any way to change that? re, wh

Hi, you can use this on Linux: # chown user:user /path/to/file.txt

Hi, yes, I am aware of that, but it would be more helpful for me if mysqldump used the right ownership in the first place.

re, wh
Re: mysqldump and access rights
muhammad subair schrieb: On Thu, Jul 23, 2009 at 10:10 PM, walter harms wha...@bfs.de wrote: Hi list, I use mysqldump --tab to create database dumps. This produces txt and sql files; the resulting sql file is owned by the user, but the resulting datafile is owned by mysql:mysql. Is there any way to change that? re, wh

Hi, you can use this on Linux: # chown user:user /path/to/file.txt

Hi, yes, I am aware of that, but it would be more helpful for me if mysqldump used the right ownership in the first place.

re, wh
mysqldump and access rights
Hi list, I use mysqldump --tab to create database dumps. This produces txt and sql files; the resulting sql file is owned by the user, but the resulting datafile is owned by mysql:mysql. Is there any way to change that?

re, wh
Re: BULK DATA HANDLING 0.5TB
st...@edberg-online.com schrieb: At 11:10 AM +0530 6/13/09, Krishna Chandra Prajapati wrote: Hi guys, I'm working at a telecom company. I have a table called deliverylog into which 30 million records get inserted per day, and the table has grown to 0.5 TB. I have to keep 60 days of records in the table, so 60 days * 30 million = 1800 million records. The query is taking a lot of time to fetch the result. Please suggest what storage engine should be used and how I can get this done. Is there any other alternative? Any response is highly appreciated. Thanks, Krishna

Can you provide us with more details about the current configuration? E.g. MySQL version, current database engine, and the result of an EXPLAIN on the problematic queries. Just offhand, unless you need transactions/foreign keys/all the other niceties of InnoDB, I would suspect MyISAM would be the fastest engine, but it is hard to say for sure. There's a lot of room for performance optimization with the system variables as well (e.g. increasing key buffers if you have adequate RAM). You can eke out more performance by putting indexes and tables on different drives on different channels. Some references:

Book: High Performance MySQL, Second Edition http://oreilly.com/catalog/9780596101718/
Useful tips from the authors of the above book: http://www.mysqlperformanceblog.com/
And assuming you are using MySQL 5.0:
Optimization Overview http://dev.mysql.com/doc/refman/5.0/en/optimize-overview.html
Table OPTIMIZE command http://dev.mysql.com/doc/refman/5.0/en/optimize-table.html
Using EXPLAIN http://dev.mysql.com/doc/refman/5.0/en/using-explain.html http://dev.mysql.com/doc/refman/5.0/en/explain.html
MySQL system variables http://dev.mysql.com/doc/refman/5.0/en/mysqld-option-tables.html

steve

And take a look at partitions (available with >= 5.1). BTW, do not forget to force one-file-per-table; that makes handling a lot easier.
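The one-file-per-table setting mentioned above is a my.cnf option; note that it applies to InnoDB only, and only to tables created (or rebuilt) after the change:

```ini
# /etc/my.cnf - store each InnoDB table in its own .ibd file
# instead of the shared system tablespace
[mysqld]
innodb_file_per_table
```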
re, wh
Re: safe query prevent sites from hijacker
bharani kumar schrieb: Hi All, this is a general question: how do I write safe queries which prevent the site from being hijacked? Share your ideas.

Pull the plug from the mains and save energy. There is no silver bullet; take a lecture in security and you will scream at how many simple mistakes are already being made. Security is a habit, a target at best.

re, wh
Re: Unix compress equivalent
hi Olaf, in Unix you have small programs that do one thing and not more. What you want to achieve is a compressed output file; the easiest way is:

    send to stdout | gzip -c > outfile

Depending on your data, replace gzip with zoo, lha, bzip2, compress, ...

re, wh

Olaf Stein schrieb: Or even better, can I tell LOAD DATA INFILE, or somewhere in the table definition, to compress whatever is written to the file? Thanks Olaf

On 5/8/09 12:29 PM, Olaf Stein olaf.st...@nationwidechildrens.org wrote: Hi all, what is the equivalent in Unix (more specifically Python) of the COMPRESS() function? I am trying to create a csv file for use with LOAD DATA INFILE and am wondering how to compress the strings that I would usually compress with COMPRESS() in a regular SQL statement. The field I am writing this into is longblob, and I need the compressed version here to be identical to what COMPRESS() would produce. Thanks olaf
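The pipeline above is ordinary shell; here is a self-contained round trip, with a throwaway sample file standing in for the SELECT output:

```shell
# Compress a (fake) export on the fly, then verify the round trip.
printf '1|alpha\n2|beta\n' > export.txt
gzip -c export.txt > export.txt.gz
gzip -dc export.txt.gz > roundtrip.txt
cmp export.txt roundtrip.txt && echo OK     # prints: OK
```

Note that this compresses the whole file; it is not byte-identical to MySQL's per-value COMPRESS(), which prefixes each compressed string with its uncompressed length.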
reduce number of open files ?
hi list, I am wondering if there is a way to reduce the number of open files. The database has InnoDB and MyISAM tables, and I have a lot of partitions; is that a problem? (Too many open files causes problems for mysqldump.) Running vanilla 5.1.34.

    show status like '%open%';
    Variable_name            | Value
    -------------------------+-------
    Com_ha_open              | 0
    Com_show_open_tables     | 0
    Open_files               | 12437
    Open_streams             | 0
    Open_table_definitions   | 192
    Open_tables              | 362
    Opened_files             | 14520
    Opened_table_definitions | 0
    Opened_tables            | 0
    Slave_open_temp_tables   | 0

re, walter
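For reference, the usual my.cnf knobs touching open-file pressure look like this; the values are illustrative only, and open_files_limit must fit within the operating-system limit (ulimit -n):

```ini
# /etc/my.cnf - illustrative values, not recommendations
[mysqld]
open_files_limit = 8192   # file-descriptor cap mysqld requests from the OS
table_open_cache = 256    # fewer cached table handles => fewer open files
```

Each MyISAM table costs up to two descriptors per cache entry, and every partition counts as its own table internally, which is why heavy partitioning drives Open_files up.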
Re: Memory corrupting, while retrive the query generated
Ravi raj schrieb: Dear Walter Harms, thanks for your valuable solution, but the code you provided prints only one row; if I try to print the whole table, or 2 or 3 columns fully, it gives a segmentation fault. Kindly check the code below for further information.

Software used: 1. MySQL 6.0.0 2. MySQL Connector/C 6.0 3. Cygwin (used to run the programs with GCC). Operating system: Windows Vista Home Basic.

Building the executable: exporting the C connector library (mysql.h) as export PATH=$PATH:c:/Program Files/MySQL/MySQL Connection C 6.0/lib/opt, and I copied libmysql.dll to the local folder where the C code resides:

    gcc -g -c simple.c
    gcc -g libmysql.dll simple.o

Running: ./a.exe. If I run the code below it gives output like this (trying to get all values from a particular column of a table):

    num_fields = 2
    127.0.0.1 localhost
    28 [main] a 3836 _cygtls::handle_exceptions: Error while dumping state (probably corrupted stack)
    Segmentation fault (core dumped)

hi Ravi, I have checked your program on my box and it works as expected (Linux, MySQL 5.0). That leaves only your environment (compiler, libraries, ...) as the culprit. The easiest thing to do now is to install a Linux and give it a try.

re, wh
Re: Problems After MySql 5.1.34
there is nothing you can do. Any (major) upgrade of the MySQL client requires the dependent subsystems to be upgraded as well; anything else would be careless, since you do not know whether the interface has changed. Basically you can install both versions of the libraries and hope for the best. I would do that only as a last resort if an upgrade is not possible; it is in general not clever.

re, wh

Gary Smith schrieb: Johnny, welcome to the hell that is php + apache + mysql. If you upgrade your MySQL (especially across major versions, 5.0 => 5.1) you will also need to recompile PHP against the new MySQL client libs; we've had very limited success trying to get it to work otherwise. This is why you are receiving the error through PHP. Of course, I could be wrong, in which case I know people will probably jump on me for it. If so, please do, as I would like to be wrong here: it would make my compiling life easier every time I update MySQL on all of my boxes. Gary

From: Johnny Stork [li...@openenterprise.ca] Sent: Wednesday, May 06, 2009 1:03 PM Cc: mysql@lists.mysql.com Subject: Re: Problems After MySql 5.1.34

Typo, moved from 5.0.67 to 5.1.34. Johnny Stork wrote: I recently upgraded an asterisk/trixbox server to mysql 5.1.34 from 5.64. Access to the db seems fine from the shell, phpmyadmin, or even the Trixbox/FreePBX tool, but trying to apply Trixbox changes, or running a pear update, produces the errors below. I created a couple of sl but this did not seem to fix the problem.
Below is the error and the contents of /usr/lib:

    r...@asterisk:~# pear update
    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mysql.so' - /usr/lib/libmysqlclient.so.15: version `libmysqlclient_15' not found (required by /usr/lib/php/modules/mysql.so) in Unknown on line 0
    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mysqli.so' - /usr/lib/libmysqlclient.so.15: version `libmysqlclient_15' not found (required by /usr/lib/php/modules/mysqli.so) in Unknown on line 0
    Segmentation fault

    r...@asterisk:~# ls -la /usr/lib/libmy*
    lrwxrwxrwx 1 root root      26 May  6 09:52 /usr/lib/libmysqlclient_r.so -> libmysqlclient_r.so.16.0.0
    lrwxrwxrwx 1 root root      28 May  6 11:26 /usr/lib/libmysqlclient_r.so.15 -> /usr/lib/libmysqlclient_r.so
    lrwxrwxrwx 1 root root      28 May  6 11:18 /usr/lib/libmysqlclient_r.so.15.0.0 -> /usr/lib/libmysqlclient_r.so
    lrwxrwxrwx 1 root root      26 May  6 09:52 /usr/lib/libmysqlclient_r.so.16 -> libmysqlclient_r.so.16.0.0
    -rwxr-xr-x 1 root root 2052884 Mar 31 22:48 /usr/lib/libmysqlclient_r.so.16.0.0
    lrwxrwxrwx 1 root root      24 May  6 09:52 /usr/lib/libmysqlclient.so -> libmysqlclient.so.16.0.0
    lrwxrwxrwx 1 root root      26 May  6 11:14 /usr/lib/libmysqlclient.so.15 -> /usr/lib/libmysqlclient.so
    lrwxrwxrwx 1 root root      24 May  6 09:52 /usr/lib/libmysqlclient.so.16 -> libmysqlclient.so.16.0.0
    -rwxr-xr-x 1 root root 2044464 Mar 31 22:48 /usr/lib/libmysqlclient.so.16.0.0
Re: Memory corrupting, while retrive the query generated
hi ravi, this works for me; it should give you a starting point.

re, wh

    /*
     * simple DB connect test
     * gcc -L/usr/lib/mysql -lmysqlclient connect.c
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <mysql/mysql.h>

    int main()
    {
        MYSQL *MySQL;
        MYSQL_ROW row;
        MYSQL_RES *res;
        char *dbhost = "localhost";
        char *dbuser = "dbuser";
        char *dbpass = "";
        char *dbname = "mysql";
        char *sel_smt;

        MySQL = mysql_init(NULL);
        if (MySQL == NULL) {
            fprintf(stderr, "Connection failed\n");
            exit(1);
        }
        if (mysql_real_connect(MySQL, dbhost, dbuser, dbpass, dbname, 0, NULL, 0) == NULL) {
            fprintf(stderr, "%s\n", mysql_error(MySQL));
            exit(1);
        }
        asprintf(&sel_smt, "select count(*) from user");
        if (mysql_query(MySQL, sel_smt) != 0) {
            fprintf(stderr, "%s\n", mysql_error(MySQL));
            exit(1);
        }
        res = mysql_store_result(MySQL);
        if (res == NULL) {
            fprintf(stderr, "%s\n", mysql_error(MySQL));
            exit(1);
        }
        row = mysql_fetch_row(res);
        printf("%s\n", row[0] ? row[0] : "NULL");
        free(sel_smt);
        mysql_free_result(res);
        mysql_close(MySQL);
        exit(0);
    }

Ravi raj schrieb: Dear All, I want to connect MySQL to the following C application; while trying to retrieve the query result, it corrupts memory. Is there any solution to retrieve the query result without memory crashes? Please help me solve this problem. The code follows:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <mysql.h>

    int main()
    {
        MYSQL *conn;
        MYSQL_RES *res;
        MYSQL_ROW row;
        MYSQL_FIELD *field;
        unsigned int i = 0;
        char table_type[30];
        char buffer[200];
        unsigned int num_fields;
        char *server = "localhost";
        char *user = "root";
        char *password = "";    /* set me first */
        char *database = "test";

        conn = mysql_init(NULL);

        /* Connect to database */
        if (!mysql_real_connect(conn, server, user, password, database, 0, NULL, 0)) {
            fprintf(stderr, "%s\n", mysql_error(conn));
            exit(1);
        }
        if (mysql_ping(conn)) {
            printf("error in connection\n");
            exit(1);
        }
        sprintf(table_type, "method");
        sprintf(buffer, "select mid, mname from %s;", table_type);
        mysql_query(conn, buffer);
        res = mysql_store_result(conn);
        num_fields = mysql_num_fields(res);

        while ((row = mysql_fetch_row(res)) != NULL) {
            for (i = 0; i < num_fields; i++)   /* here is the problem, num_fields is corrupting */
                printf("%s\n", row[i] ? row[i] : "NULL");
        }
        mysql_free_result(res);
        mysql_close(conn);
        return 0;
    }

Regards, Raviraj - www.vinjey.com
Re: Building 5.1 on Tru64 5.1b
Didier Godefroy schrieb: Hello all, I've been having trouble building mysql on Tru64 v5.1b. I tried several versions, and there are always some kinds of issues with undefined symbols and things to be changed in the source to allow the build to continue. Lately I've been trying to get 5.1.33 compiled, and after many different tries and slight alterations to some headers I was able to get it compiled using gcc 4.0.2, but then I get lots of unaligned-access warnings all the time, both from the server and the client, plus I can't get the test suite to run, so I can't even trust it. I was hoping that if I could get a successful build using the native compiler instead of gcc, those unaligned-access errors would go away, but I can't get the build to finish, even now with the latest 5.1.34 source. The issues I'm facing now are related to pthreads, and they make no sense to me; a couple of identifiers are undefined:

    Making all in csv
    gmake[2]: Entering directory `/usr/local/mysql5/src/mysql-5.1.34/storage/csv'
    source='transparent_file.cc' object='libcsv_a-transparent_file.o' libtool=no \
    DEPDIR=.deps depmode=tru64 /bin/bash ../../depcomp \
    cxx -DHAVE_CONFIG_H -I. -I../../include -I../../regex -I../../sql -I. -pthread -I/usr/local/ssl/include -O4 -pthread -D_POSIX_PII_SOCKET -DUNDEF_HAVE_GETHOSTBYNAME_R -DSNPRINTF_RETURN_TRUNC -I/usr/include/cxx -I/usr/include/cxx_cname -I/usr/include -I/usr/include.dtk -c -o libcsv_a-transparent_file.o `test -f 'transparent_file.cc' || echo './'`transparent_file.cc
    cxx: Error: ../../sql/log.h, line 140: identifier "pthread_mutex_destroy" is undefined
      ~st_log_info() { pthread_mutex_destroy(&lock); }
    cxx: Error: ../../sql/log.h, line 401: identifier "pthread_mutex_unlock" is undefined
      inline void unlock_index() { pthread_mutex_unlock(&LOCK_index); }
    cxx: Info: 2 errors detected in the compilation of "transparent_file.cc".
    gmake[2]: *** [libcsv_a-transparent_file.o] Error 1

I think it makes no sense that pthread_mutex_destroy and pthread_mutex_unlock are undefined while others like pthread_mutex_lock are not; they're all defined in pthread.h, which must be getting included, because all other identifiers are defined except those 2. Those errors didn't show up with gcc, but with cc/cxx they stop the build. Why? Will the unaligned-access warnings go away if I get the build done successfully with cc/cxx? At least I need this build to finish so I can run it and try to get the test suite to finally work.

hi Didier, I do not work with Tru64, but I run mysql on 64-bit Intel, and therefore I assume that mysql is 64-bit clean. What you are missing is the POSIX thread library; on Linux this comes with glibc. I assume that your cc/ld needs some special options to find it. Asking Google shows the Programmer's Manual for Tru64: http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/ARH9RCTE/TITLE.HTM The mysql documentation says that Tru64 needs special options for configure: http://dev.mysql.com/doc/refman/5.1/en/alpha-dec-unix.html If you find more issues, please update the mysql documentation. These days there is a monoculture of Intel boxes and Linux, but good programs should be tested on more.

re, wh
Re: Problem compiling mysql-5.1.33
most likely a missing include; it should have at least:

#include <sys/time.h>
#include <sys/resource.h>

Dave Shariff Yadallee - System Administrator a.k.a. The Root of the Problem schrieb:
> make all-am
> Making all in mysql-test
> Making all in lib/My/SafeProcess
> g++ -DHAVE_CONFIG_H -I. -I../../../../include -O3 -fno-implicit-templates -fno-exceptions -fno-rtti -MT safe_process.o -MD -MP -MF .deps/safe_process.Tpo -c -o safe_process.o safe_process.cc
> In file included from safe_process.cc:48:
> /usr/include/sys/resource.h:63: field `ru_utime' has incomplete type
> /usr/include/sys/resource.h:64: field `ru_stime' has incomplete type
>
> why is this taking place?
>
> Compile options:
> ./configure --prefix=/usr/contrib --localstatedir=/usr/contrib/mysqld --without-innodb --disable-debug --with-ssl=/usr/contrib --enable-thread-safe-client
Re: generic remote command/script for monitoring MySQL instance health
you may like to try mytop, or:

watch -n10 "mysql -BNA databasename -e 'show full processlist'"

add user, host, and database name as needed.

Sven schrieb:
> Hi folks
> I am searching for a generic command to monitor that a MySQL instance is up and running. I don't have any know-how about the schema of the DB.
> kind regards
> Sven Aluoor
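For a schema-independent liveness probe, all that matters is the exit status of some client command, e.g. `mysqladmin ping` or `mysql -e 'SELECT 1'`; here is a minimal sketch (the probe commands named in the comment are the usual candidates, not something from the thread):

```shell
#!/bin/sh
# Report UP/DOWN based on the exit status of whatever probe command is
# passed in -- e.g. "mysqladmin -h somehost ping" or
# "mysql -h somehost -e 'SELECT 1'". Output and errors are discarded;
# only the exit status matters, so no schema knowledge is needed.
check_alive() {
    if "$@" >/dev/null 2>&1; then
        echo UP
    else
        echo DOWN
    fi
}

check_alive true    # stand-in for a probe that succeeds -> prints UP
check_alive false   # stand-in for a probe that fails    -> prints DOWN
```

Wrapped in cron or a monitoring agent, the UP/DOWN line (or the exit status itself) is enough to alert on.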
Re: Multiple Cores.
Kunal Jain schrieb:
> How can we configure MySQL so that it starts using all the cores of the CPU? I have a quad-core server, but somehow MySQL uses only a single core, whose usage goes up to 99% while the other three cores remain idle. Any idea, or are multiple cores/CPUs a waste?

IMHO, normally the (Linux) kernel handles all distribution of processes, and that you have 99% load means there is no need to use another CPU. Switching between CPUs has a measurable overhead, and it is possible that the kernel assumes that you will not benefit from using that other CPU.

re,
wh
Re: when calling mysql_real_connect from the c api, I get a malloc() memory corruption error.
Kevin Stevens schrieb:
> Hello, I am encountering a problem I just can't seem to figure out and I am out of ideas. I can compile and run fine on one Linux box running MySQL 5.1.23-rc, but as soon as I scp the binary and its required libs to another machine which has identical hardware and only a slightly upgraded distro (but also running 5.1.23-rc), I get a glibc "malloc(): memory corruption" error, which traces back to the mysql_real_connect() call. I ran ldd -v on the binary on both machines and there are some differences, but they don't look important. I have run this binary on other machines before with no issues. I can connect to the database on the troubled machine both locally from the client and through my program from a different machine, but my program craps out when I run it locally on this new box. What could be going wrong? The database connection is one of the first things the program does, before we do any significant allocation of memory, so I really do not believe that this is a problem with my program (it has also been continually tested with many different data sets). I checked the bug database and this forum and could not find any relevant information; if you have any ideas, please let me know! Below is the output from my program, and the code I am using to connect - am I doing something wrong there? If there is any more information I can provide, please let me know.
> Thank you, -Kevin

hi Kevin,
can you reproduce the bug with a smaller version of your program? a version that only opens/closes a connection?

re,
wh
Re: select ... into outfile=stdout ?
hi Rolando,

i admit i was fixated on mysql (the command) :)
thx a lot,
wh

Rolando Edwards schrieb:
> Try mysqldump !!!
> On this web page, http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html, it says the following:
> --fields-terminated-by=..., --fields-enclosed-by=..., --fields-optionally-enclosed-by=..., --fields-escaped-by=...
> These options are used with the -T option and have the same meaning as the corresponding clauses for LOAD DATA INFILE. See Section 12.2.6, LOAD DATA INFILE Syntax.
> By default, it outputs to stdout. Give it a try !!!
>
> -----Original Message-----
> From: walter harms [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, October 16, 2008 5:06 AM
> To: 'mysql'
> Subject: select ... into outfile=stdout ?
>
> hi list,
> i need some options from outfile (especially FIELDS TERMINATED BY) and would like to send the output to stdout for further processing. unfortunately i found no proper way to force the output to stdout. for now i use the redirection of the mysql -NB output, but the interface lacks the options of into outfile. (it is easy to fix using tr, but not what was intended.) i tried /dev/stdout, but this does not work either. any ideas?
> re, wh
Re: select ... into outfile=stdout ?
hi Rolando,

i tried and failed. it seems that mysqldump has no option to specify a select statement. did i miss something?

re,
wh

walter harms schrieb:
> hi Rolando,
> i admit i was fixated on mysql (the command) :)
> thx a lot, wh
>
> Rolando Edwards schrieb:
>> Try mysqldump !!!
>> On this web page, http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html, it says the following:
>> --fields-terminated-by=..., --fields-enclosed-by=..., --fields-optionally-enclosed-by=..., --fields-escaped-by=...
>> These options are used with the -T option and have the same meaning as the corresponding clauses for LOAD DATA INFILE.
>> By default, it outputs to stdout. Give it a try !!!
select ... into outfile=stdout ?
hi list,
i need some options from outfile (especially FIELDS TERMINATED BY) and would like to send the output to stdout for further processing. unfortunately i found no proper way to force the output to stdout. for now i use the redirection of the mysql -NB output, but the interface lacks the options of into outfile. (it is easy to fix using tr, but not what was intended.) i tried /dev/stdout, but this does not work either. any ideas?

re,
wh
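Since mysql -NB already writes tab-separated rows to stdout, the missing FIELDS TERMINATED BY can be emulated in the pipeline, as the message above hints at with tr; a sketch (the mysql invocation, database, and column names are illustrative only):

```shell
#!/bin/sh
# Re-delimit tab-separated rows, emulating FIELDS TERMINATED BY for
# output that already went to stdout (as mysql -NB produces).
retab() {
    tr '\t' "$1"
}

# In real use (illustrative invocation, not run here):
#   mysql -NB mydb -e 'SELECT id, name FROM t' | retab ';'

# Demonstrated on a stand-in for one row of -NB output:
printf '1\tharms\twalter\n' | retab ';'   # -> 1;harms;walter
```

This covers the terminator only; ENCLOSED BY and ESCAPED BY semantics would need awk or a proper CSV writer.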
Re: C api - mysql_list_fields
Mike Aubury schrieb:
> Excellent - this seems to be the issue - show create table shows:
>
> mysql> show create table a\G
> CREATE TABLE `a` (
>   `blah` char(20) default NULL
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8
>
> So it's utf8 (which I understand enough about to understand why it's doing what it's doing!). So the next question is: is there any way in code I can find the 'fiddle' factor (1, 3, or now possibly 4) that I need to divide by to get back to the character width specified in the CREATE TABLE?

why do you want to do that? i would expect that mysql uses wchar_t for char() if utf8 is selected.

re,
wh
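Background on the 'fiddle' factor (a note beyond the thread itself): the C API reports column lengths in bytes, and MySQL's utf8 reserves up to 3 bytes per character (utf8mb4: 4), so a char(20) column is reported as length 60; dividing the reported length by the character set's maximum bytes per character (mbmaxlen) recovers the declared width. A quick shell check shows why byte counts and character counts diverge:

```shell
#!/bin/sh
# A UTF-8 encoded non-ASCII character occupies more than one byte,
# which is why byte-based lengths overstate character widths.
printf 'ü' | wc -c    # one character, 2 bytes
printf 'u' | wc -c    # one character, 1 byte
```

In code, the per-charset factor would come from the character set metadata rather than being hard-coded, since it differs between latin1 (1), utf8 (3), and utf8mb4 (4).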
Re: performance question
you may like to find it out by yourself. simply use:

explain <query>

re,
wh

Yong Lee schrieb:
> All, just curious as to which query would be better in terms of performance:
>
> select * from (select * from a union select * from b) as c;
>
> versus
>
> select * from a union select * from b;
>
> or would these 2 queries be the same?
> Thanks, Yong.
> Yong Lee, Developer, www.EQO.com
performance key-value - int vs ascii ?
Hi list,
I need to store what is basically a key-value pair. A few years ago I would have chosen an integer as key and used a translation table to get the name (char[]) for the key. Since disk space is plentiful, I am thinking about using the name directly. Does anyone have any idea what the performance penalty is?

in short:

  name char(20)      vs.   code int
  value int                 value int
                            plus a second table:
                            code int
                            name char(20)

TIA,
wh
Re: performance key-value - int vs ascii ?
thx, the results support my suspicion.

re,
wh

Perrin Harkins schrieb:
> On Fri, Aug 29, 2008 at 4:57 AM, walter harms [EMAIL PROTECTED] wrote:
>> Since disk space is plentiful i am thinking about using the name directly. does anyone have any idea what the performance penalty is?
>
> http://www.mysqlperformanceblog.com/2008/01/24/enum-fields-vs-varchar-vs-int-joined-table-what-is-faster/
>
> - Perrin
Re: MYSQL C
Vicente Moreno schrieb:
> Hi all, I have a little question: have you ever worked with C and MySQL??? All of that is new for me; if anybody has some info, help me!!!

yes,

re,
wh
Re: Zip Codes with Leading Zeros
Keith Spiller schrieb:
> Hi Johnny,
> Yeah. Sadly I missed the fact that the zip codes were hacked in our original Works-to-MySQL conversion until long after I had imported the data into our current MySQL table. Now we have to fix the numbers in our MySQL table and guarantee that we can export them out to Excel for the printer. I'm working on either finding or producing a script that will repair the 12,000 rows of mixed zip (5 digit) and zip+4 data. I appreciate your taking the time to help us, Johnny.

so your core problem is that your zip code is a number (or was in between) like 1234 and you would like to see 01234. unload your data and reformat using a simple shell script with a printf core like this one:

printf '%05d\n' 1234

this should be a matter of minutes to rewrite.

re,
wh
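A sketch of such a repair pass over an exported column, assuming plain zips lost their leading zeros as bare numbers and zip+4 rows carry a hyphen (the exact data layout is a guess here, not something stated in the thread):

```shell
#!/bin/sh
# Zero-pad zip codes that lost leading zeros during a numeric
# conversion: plain zips to 5 digits, zip+4 (hyphenated) to 5+4.
fixzip() {
    while IFS= read -r z; do
        case $z in
            *-*) printf '%05d-%04d\n' "${z%-*}" "${z#*-}" ;;
            *)   printf '%05d\n' "$z" ;;
        esac
    done
}

printf '1234\n501-42\n' | fixzip   # -> 01234 and 00501-0042
```

The real fix is then to store the result in a CHAR/VARCHAR column so the zeros cannot be lost again.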
Re: Need help to query with timestamp in C++
Kandy Wong wrote:
> Hi Saul,
> I need to use C++ and I'm not writing a web application. Thanks anyway.

you can do something like:

select min(abs(timediff(targettime, timestamp))) from table where <condition>;

if you use libmysql you can get the results back as strings (the method i prefer) and convert them into whatever you need.

re,
wh

> The following are the timestamps in the MySQL database:
>
> 2008-08-05 03:56:09 | 1217933769
> 2008-08-05 03:56:19 | 1217933779
> 2008-08-05 03:56:29 | 1217933789
> 2008-08-05 03:59:39 | 1217933979
> 2008-08-05 03:59:49 | 1217933989
> 2008-08-05 03:59:59 | 1217933999
> 2008-08-05 04:02:39 | 1217934159
> 2008-08-05 04:02:49 | 1217934169
> 2008-08-05 04:02:59 | 1217934179
>
> For example, '2008-08-05 04:01:39' is the time provided by the user, which does not exist in the database. So how can I return the closest data? I know I can make use of 'LIKE', but this will return more than one row. What is the best method to get the closest one? And what is a good connector (C++ to MySQL) to use? Any suggestions?
> Thank you.
> Kandy
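A note on the query: min(abs(...)) alone returns only the smallest distance, not the row it belongs to; ordering by the distance and taking the first row, e.g. `SELECT * FROM samples ORDER BY ABS(epoch - UNIX_TIMESTAMP('2008-08-05 04:01:39')) LIMIT 1` (table and column names made up), returns the closest record itself. The same nearest-match selection over a tab-separated dump can be sketched on the shell:

```shell
#!/bin/sh
# Pick the line whose second field (epoch seconds) is closest to a
# target epoch; input is "datetime<TAB>epoch" rows, e.g. from mysql -NB.
closest() {
    awk -F'\t' -v t="$1" '
        { d = $2 - t; if (d < 0) d = -d
          if (NR == 1 || d < best) { best = d; line = $0 } }
        END { print line }'
}

printf '2008-08-05 03:59:59\t1217933999\n2008-08-05 04:02:39\t1217934159\n' |
    closest 1217934099
```

With the sample rows from the thread, the target 04:01:39 is 100 s after 03:59:59 and 60 s before 04:02:39, so the 04:02:39 row wins.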
Re: Picking the better query (join vs subselect)
Waynn Lue wrote:
> Out of curiosity, is it generally faster to do a subquery or to do it in code for something like this? Schema of the Settings table, where the PK is (ApplicationId, SettingId):
>
> ApplicationId, SettingId, SettingValue
>
> Select SettingValue from Settings where SettingId = 10 and ApplicationId IN (select ApplicationId from Settings where SettingId = 22 and SettingValue = 1);
>
> The other solution is to do the two queries separately and then do the filtering in code. What's generally faster?
> Waynn

there is no easy answer. subqueries with constants are fast in mysql; without, it is better to use a join. complex queries involving several tables with perhaps a lot of foreign keys tend to be very slow. using join from coreutils can improve things dramatically.

re,
wh
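The coreutils join mentioned above merges two sorted, key-prefixed streams outside the database, which is one way to do the "filter in code" variant; a small stand-alone demonstration (the files stand in for two exported result sets keyed by ApplicationId):

```shell
#!/bin/sh
# Merge two sorted key/value exports on their first column with
# coreutils join(1) -- only keys present in both files survive,
# i.e. the filtering happens outside the database.
printf '1 on\n3 off\n' > apps.txt        # ApplicationId, flag
printf '1 blue\n2 red\n' > values.txt    # ApplicationId, SettingValue

join apps.txt values.txt                 # -> 1 on blue
rm -f apps.txt values.txt
```

Both inputs must be sorted on the join field, so real exports would typically be piped through sort first.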
Re: spool log to a file
that is a shell question. using bash/ksh stuff you can use this:

mysql -uroot -pxxx -Dtest -s -e 'select * from amc_25;' > 1.txt 2>&1

Ananda Kumar wrote:
> The problem with the statement below is that if there is any error in the sql statements, it does not get written to 1.txt. So, even if there is an error, how can I get it written to 1.txt?
>
> mysql -uroot -pxxx -Dtest -s -e 'select * from amc_25;' > 1.txt
>
> On 7/29/08, Mary Bahrami [EMAIL PROTECTED] wrote:
>> I use
>> mysql -uroot -pxxx -Dtest -s -e 'select * from amc_25;' > 1.txt
>> but it would be nice to see other solutions...
>>
>> -----Original Message-----
>> From: Ananda Kumar [mailto:[EMAIL PROTECTED]]
>> Sent: Tuesday, July 29, 2008 6:37 AM
>> To: mysql
>> Subject: spool log to a file
>>
>> Hi All,
>> I am executing the command below, but there are no entries in 1.txt; it's an empty file. How can I write the logs into this file?
>>
>> mysql -uroot -pxxx -Dtest -s --tee=1.txt -e 'select * from amc_25;'
>>
>> regards
>> anandkl
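The 2>&1 part is what captures the client's error messages, since mysql writes result rows to stdout but errors to stderr; a quick demonstration without a database:

```shell
#!/bin/sh
# stdout and stderr are separate streams; "> file" alone captures only
# stdout, while "> file 2>&1" folds stderr into the same file.
emit() {
    echo 'result row'          # stdout, like query output
    echo 'ERROR 1064' >&2      # stderr, like a client error message
}

emit > only_stdout.txt 2>/dev/null
emit > both.txt 2>&1

wc -l < only_stdout.txt   # 1 line captured
wc -l < both.txt          # 2 lines captured
rm -f only_stdout.txt both.txt
```

Note that the order matters: `2>&1 > 1.txt` would redirect stderr to the terminal, not the file.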
constrain id ?
hi list,
i have a simple question: does the constraint id need to be numeric?

re,
wh
Re: How do I (can I) use aggregate functions inside a select
David Ruggles wrote:
> I may be approaching this all wrong, but I need to know a percentage of total sales within a select statement, so I can do something like this:
>
> Select company, state, sales, sum(sales) / sales as percent
> From Sales

mmh, you want sum(sales where company=foo)/sum(sales). you can do this only when doing 2 queries and storing at least one result.

re,
wh
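The two-pass idea can be sketched outside SQL so it runs stand-alone: the first pass sums everything, the second emits each row's share of that total (the "company sales" column layout is assumed for illustration; in SQL itself, a scalar subquery like `sales / (SELECT SUM(sales) FROM Sales) * 100` expresses the same thing in one statement):

```shell
#!/bin/sh
# Percent-of-total in two passes over "company sales" rows: pass one
# accumulates the grand total, pass two prints each row's percentage.
# Reading the same file twice makes NR == FNR true only on pass one.
pct_of_total() {
    awk 'NR == FNR { total += $2; next }
         { printf "%s %.1f\n", $1, 100 * $2 / total }' "$1" "$1"
}

printf 'acme 30\nglobex 70\n' > sales.txt
pct_of_total sales.txt    # -> acme 30.0 / globex 70.0
rm -f sales.txt
```

The same two-pass shape applies whether the passes are two SQL queries or two reads of an exported file.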
how to optimize: max(timestamp) where a.foo=b.foo ?
hi list,
i have tables that look like this (~10,000 entries):

id, timestamp, value

to get the latest value for each id i have queries like:

select * from tab A where timestamp = (select max(timestamp) from tab B where B.id=A.id) group by id;

on a fast system it takes around 4 sec to complete, but since this is done regularly it slows things down too much. the same query takes next to nothing on an Informix system on a slower computer. any ideas?

re,
walter
Re: how to optimize: max(timestamp) where a.foo=b.foo ?
Peter Brawley wrote:
>> to get the latest value for each id i have queries like:
>> select * from tab A where timestamp = (select max(timestamp) from tab B where B.id=A.id) group by id;
>
> See "Within-group aggregates" at http://www.artfulsoftware.com/infotree/queries.php.
>
> PB

hi peter,
thx for your hint; it seems that the join .. on stuff works fine. the most tricky part was to realize that max(timestamp) as foo was needed so the result could be used with the join (took me 3 sec to find :) )

re,
wh
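The within-group-aggregate join Peter points to pairs each row against its group's maximum, roughly `SELECT A.* FROM tab A JOIN (SELECT id, MAX(timestamp) AS ts FROM tab GROUP BY id) B ON A.id = B.id AND A.timestamp = B.ts` (column names taken from the thread, exact form a sketch). The same groupwise-max can be checked against a tab-separated dump of the table:

```shell
#!/bin/sh
# Groupwise max over "id<TAB>timestamp<TAB>value" rows: keep, for each
# id, the row with the largest timestamp -- the result the SQL
# max-join is meant to produce.
latest_per_id() {
    awk -F'\t' '
        !($1 in max) || $2 > max[$1] { max[$1] = $2; row[$1] = $0 }
        END { for (i in row) print row[i] }' | sort
}

printf '1\t10\told\n1\t20\tnew\n2\t5\tonly\n' | latest_per_id
```

This single pass keeps one row per id, which is handy for cross-checking that the rewritten SQL returns the same row set as the original correlated-subquery version.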