Re: Undo Changes

2013-07-06 Thread spameden
2013/7/6 Rafael Ribeiro - iPhone rafaelribeiro...@gmail.com Dear Colleagues, I would like to hear your opinion about a situation. Is there a function that is able to REMOVE all data from a specific date? Are you talking about removing all data from the tables or just specific data
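
If the intent is to remove only the rows belonging to one date, a plain DELETE with a date-range filter does it. A minimal sketch; the table and column names here are purely illustrative, not from the thread:

  -- delete every row created on 2013-07-05 (hypothetical table/column names)
  DELETE FROM orders
  WHERE created_at >= '2013-07-05 00:00:00'
    AND created_at <  '2013-07-06 00:00:00';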

Re: restore question

2013-07-05 Thread spameden
Hi 2013/7/5 Jim Sheffer j...@higherpowered.com Hi everyone- This is probably a no brainer (I'm new to Navicat) but I have a backup of a database from Navicat. I want to be able to see if a certain field has changed since this morning in the backup (We are having problems with an order

Re: Master not creating new binary log.

2013-07-04 Thread spameden
Issue SHOW SLAVE STATUS\G on the slave and post the output here. Most likely after you reset the master your slave can't sync anymore, because it's missing the next replication file in the sequence. Why don't you back up your master with mysqldump and restore it on the new setup (i.e. on the MySQL 5.5 instance)?
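
A minimal sketch of re-pointing the slave after reloading a mysqldump of the master; the host, credentials and binlog coordinates below are placeholders taken from SHOW MASTER STATUS on the master:

  -- run on the slave after importing the dump
  STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
  START SLAVE;
  -- confirm Slave_IO_Running and Slave_SQL_Running are both Yes
  SHOW SLAVE STATUS\G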

Re: database performance worries

2013-07-02 Thread spameden
We are on a quest to improve the overall performance of our database. It's generally working pretty well, but we periodically get big slowdowns for no apparent reason. A prime example today - in the command line interface to the DB, I tried to update one record, and got:

Re: how to list record in column (instead of a row)

2013-04-28 Thread spameden
you can do: select * from table\G 2013/4/24 h...@tbbs.net 2013/04/24 09:06 -0700, Rajeev Prasad this table has many columns and only 1 record. select * from table; generates an unreadable list. How can I list the record as two columns (column name and its value)? I looked at UNPIVOT,
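
For reference, the \G statement terminator in the mysql client prints each row vertically, one column name/value pair per line, which is exactly the two-column view being asked for. A minimal sketch with a placeholder table name:

  -- ; prints the usual horizontal grid, \G prints one "column: value" line per column
  SELECT * FROM mytable\G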

Re: Reg: MYSQL Mail Agent

2013-04-08 Thread spameden
2013/4/8 Reindl Harald h.rei...@thelounge.net do not top-post On 08.04.2013 12:40, Bharani Kumar wrote: On Mon, Apr 8, 2013 at 4:02 PM, Johan De Meersman vegiv...@tuxera.be wrote: - Original Message - From: Bharani Kumar bharanikumariyer...@gmail.com How to enable mail

Re: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread spameden
2013/3/24 Reindl Harald h.rei...@thelounge.net On 24.03.2013 05:20, spameden wrote: 2013/3/19 Rick James rja...@yahoo-inc.com: you never have hosted a large site Check my email address before saying that. :D as said, a big company does not have only geniuses I do not judge only

Re: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread spameden
tune worker_processes 8; why should you do that? http://en.wikipedia.org/wiki/Nginx nginx uses an asynchronous event-driven approach to handling requests -Original Message- From: spameden [mailto:spame...@gmail.com] Sent: Tuesday, April 02, 2013 7:10 AM To: Reindl

Re: All client commands to syslog?

2013-04-02 Thread spameden
2013/3/28 Rafał Radecki radecki.ra...@gmail.com Hi All. I have a production setup of four databases connected with replication. I would like to log every command that clients execute for auditing. Take a look at the general query log; it's exactly what you need.
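
A minimal sketch of enabling the general query log at runtime (MySQL 5.1 and later). It records every statement, so it is heavy on a busy server and usually switched on only for the duration of an audit:

  -- send the log to a file and enable it without restarting the server
  SET GLOBAL log_output = 'FILE';
  SET GLOBAL general_log_file = '/var/log/mysql/general.log';
  SET GLOBAL general_log = 'ON';

  -- switch it off again when the audit is done
  SET GLOBAL general_log = 'OFF';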

Re: How to change max simultaneous connection parameter in mysql.

2013-03-23 Thread spameden
2013/3/19 Rick James rja...@yahoo-inc.com: you never have hosted a large site Check my email address before saying that. :D 20 may be low, but 100 is rather high. Never use apache2; it has so many problems under load. The best combo is php5-fpm + nginx. Handles loads of users at once if
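
Back on the thread's actual subject (the simultaneous-connection limit), checking and raising max_connections looks like this; the value 300 is only an example, and a SET GLOBAL change is lost on restart unless my.cnf is updated as well:

  -- current limit and the high-water mark of connections actually used
  SHOW VARIABLES LIKE 'max_connections';
  SHOW GLOBAL STATUS LIKE 'Max_used_connections';

  -- raise it at runtime (also persist it under [mysqld] in my.cnf)
  SET GLOBAL max_connections = 300;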

Re: file level encryption on mysql

2013-03-14 Thread spameden
I'm sorry for top-posting, but I think the best practice is to encrypt user data with a key derived from part of the password, i.e. after the user logs in you can keep a personal key for that user in memory for decryption, so you have to know every user's password (or
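
A minimal sketch of that idea using MySQL's built-in AES functions, assuming the per-user key is derived from the password at login and held only by the application; the table and column names are illustrative. The obvious trade-off is that data encrypted this way is unrecoverable if the password is lost:

  -- derive a key from the user's password (done at login, kept in app memory)
  SET @user_key = UNHEX(SHA2('the-user-password', 256));

  -- encrypt on write, decrypt on read; without the key the stored blob is opaque
  INSERT INTO user_secrets (user_id, secret)
  VALUES (42, AES_ENCRYPT('card number 1234', @user_key));

  SELECT AES_DECRYPT(secret, @user_key) FROM user_secrets WHERE user_id = 42;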

Re: auto_increment field behavior

2013-03-13 Thread spameden
2013/3/13 Reindl Harald h.rei...@thelounge.net: On 12.03.2013 22:34, spameden wrote: NOTE: AUTO_INCREMENT is 32768 instead of 17923! So the next inserted row would have pc_id=32768. Please suggest if this is normal behavior or not. What do you expect if a PRIMARY KEY record gets removed? re

Re: auto_increment field behavior

2013-03-12 Thread spameden
large enough table I'd need to enlarge the PRIMARY KEY storage type, because it's almost double the size of the actual records. I didn't delete records in this test either; I inserted them all via LOAD DATA. 2013/3/13 spameden spame...@gmail.com: 2013/3/13 Reindl Harald h.rei...@thelounge.net: On

Re: auto_increment field behavior

2013-03-12 Thread spameden
1 row in set (0.00 sec) Shame it's a read-only variable and the whole MySQL server needs a restart to change it. 2013/3/13 spameden spame...@gmail.com
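
For reference, innodb_autoinc_lock_mode (presumably the variable meant here) can be inspected at runtime but only changed via the config file plus a restart:

  -- read-only at runtime; to change it, put innodb_autoinc_lock_mode = 0, 1 or 2
  -- under [mysqld] in my.cnf and restart mysqld
  SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';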

Re: auto_increment field behavior

2013-03-12 Thread spameden
Never mind, I've found the bug: http://bugs.mysql.com/bug.php?id=57643 I'm going to subscribe to it and see if it gets resolved. Many thanks, guys, for all your assistance! 2013/3/13 spameden spame...@gmail.com: 2013/3/13 Rick James rja...@yahoo-inc.com: AUTO_INCREMENT guarantees

Re: auto_increment field behavior

2013-03-12 Thread spameden
) So is it normal or should I file a bug? There may be more. Most of those are covered here: http://mysql.rjweb.org/doc.php/ricksrots -Original Message- From: spameden [mailto:spame...@gmail.com] Sent: Tuesday, March 12, 2013 2:46 PM To: Rick James Cc: mysql@lists.mysql.com

Re: auto_increment field behavior

2013-03-12 Thread spameden
in set (0.00 sec) It is acceptable, by the definition of AUTO_INCREMENT, for it to burn the missing 15K ids. I don't get this explanation, could you please explain a bit more? So it's completely normal for an AUTO_INCREMENT field to act like this? -Original Message- From: spameden
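
For readers puzzled by the same thing: when InnoDB does not know in advance how many rows a bulk insert (LOAD DATA, INSERT ... SELECT) will produce, it reserves auto-increment values in progressively larger blocks and discards whatever is left of the last block, so the counter can jump to the next power of two; 32768 after 17923 rows fits that pattern. A rough illustration with hypothetical tables:

  -- bulk-insert rows of unknown count into t2; InnoDB reserves
  -- auto-increment values in growing blocks rather than one at a time
  INSERT INTO t2 (payload) SELECT payload FROM t1;

  -- the rows get ids 1..17923, but the unused tail of the last reserved
  -- block is lost, so the table's next value can already be 32768
  SHOW CREATE TABLE t2;   -- ... AUTO_INCREMENT=32768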

Re: auto_increment field behavior

2013-03-12 Thread spameden
the table-level AUTO-INC lock is held until the end of the statement, and only one such statement can execute at a time. So I believe this is a bug in MySQL because there were no parallel INSERTs at all. Sorry for the spam :) 2013/3/13 spameden spame...@gmail.com: After setting

Re: Recover dropped database

2012-10-29 Thread spameden
Hi, could your colleague please share the steps he took to recover the data? I'd be most definitely interested! Thanks 2012/10/29 Lorenzo Milesi max...@ufficyo.com That's rough. The only thing I could suggest is try out Percona's data recovery tool My colleague did some recovery using Percona

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
, meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time) WHERE time = UNIX_TIMESTAMP(NOW()) ORDER BY priority LIMIT 0,50; 2012/10/16 Shawn Green shawn.l.gr...@oracle.com On 10/15/2012 7:15 PM, spameden wrote: Thanks a lot for all your comments! I did disable Query cache before
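
One way to see which index the server actually picks, with or without the hint, is EXPLAIN. A minimal sketch reusing the table and index named in the thread; the column list is abbreviated to the ones visible above, and the time comparison is illustrative (the archived text shows only part of the query):

  -- 'key' shows the index chosen, 'Extra' shows whether a filesort is needed
  EXPLAIN SELECT meta_data, task_id, msgid
  FROM send_sms_test FORCE INDEX (priority_time)
  WHERE time <= UNIX_TIMESTAMP(NOW())
  ORDER BY priority
  LIMIT 0, 50;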

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
front as in: set @ut= unix_timestamp(now()) and then use that in your statement. On 2012-10-16 8:42 AM, spameden spame...@gmail.com wrote: Will do. mysql SHOW GLOBAL VARIABLES LIKE '%log%'; +-+-+ | Variable_name

Re: Odd Behavior During Replication Start-Up

2012-10-16 Thread spameden
2012/10/16 Tim Gustafson t...@soe.ucsc.edu Thanks for all the responses; I'll respond to each of them in turn below: you cannot simply copy a single database in this state; innodb is much more complex than myisam I know; that's why I rsync'd the entire /var/db/mysql folder (which

Re: Odd Behavior During Replication Start-Up

2012-10-16 Thread spameden
Also, I forgot to say that you need to shut down MySQL completely before rsync'ing its data, otherwise your snapshot might be inconsistent and InnoDB will fail. Also make sure the log shows the database shut down cleanly. 2012/10/16 Tim Gustafson t...@soe.ucsc.edu load data from master never worked for

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
That's exactly what I thought when reading Michael's email, but I tried it anyway, thanks for the clarification :) 2012/10/16 h...@tbbs.net 2012/10/16 12:57 -0400, Michael Dykman your now() statement is getting executed for every row of the select. try putting the phrase up front as in: set @ut=
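
For readers following along, the suggested rewrite evaluates the timestamp once into a user variable and reuses it in the query. A minimal sketch with the thread's table name (the column list and comparison are illustrative); note that the follow-up in the thread suggests it did not change the plan in this case:

  -- evaluate NOW() once, then reuse the constant in the statement
  SET @ut = UNIX_TIMESTAMP(NOW());

  SELECT meta_data, task_id, msgid
  FROM send_sms_test
  WHERE time <= @ut
  ORDER BY priority
  LIMIT 0, 50;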

mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Hi, list. Sorry for the long subject, but I'm really interested in solving this and need some help: I've got a table: mysql show create table send_sms_test;

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
1 row in set (0.00 sec) It uses filesort and results in worse performance... Any suggestions? Should I submit a bug? 2012/10/16 spameden spame...@gmail.com

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
is going on; better than the SlowLog. * INT(3) is not a 3-digit integer, it is a full 32-bit integer (4 bytes). Perhaps you should have SMALLINT UNSIGNED (2 bytes). * BIGINT takes 8 bytes -- usually over-sized. -Original Message- From: spameden [mailto:spame...@gmail.com] Sent
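
To make the sizing point concrete: the (3) in INT(3) is only a display width, so the saving comes from changing the type itself. A minimal sketch; the column name below is illustrative, not from the thread's schema:

  -- INT(3) still occupies 4 bytes; the (3) only affects display padding.
  -- A value that never exceeds 65535 fits in SMALLINT UNSIGNED (2 bytes).
  ALTER TABLE send_sms_test
    MODIFY priority SMALLINT UNSIGNED NOT NULL DEFAULT 0;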

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
spameden spame...@gmail.com Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table. Here is the MySQL-5.1 Percona testing table: mysql select count(*) from send_sms_test; +--+ | count(*) | +--+ | 143879 | +--+ 1 row in set (0.03 sec) Without LIMIT

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
, run them twice (and be sure not to hit the Query cache). The first time freshens the cache (buffer_pool, etc); the second time gives you a 'reproducible' time. I believe (without proof) that the cache contents can affect the optimizer's choice. From: spameden [mailto:spame
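
One common way to keep the query cache out of such timings is SQL_NO_CACHE, so the second run measures a warmed buffer pool rather than a cached result. A minimal sketch (column list illustrative):

  -- first run warms the InnoDB buffer pool, second run gives a reproducible time;
  -- SQL_NO_CACHE stops the query cache from serving or storing the result
  SELECT SQL_NO_CACHE meta_data, task_id, msgid
  FROM send_sms_test
  ORDER BY priority
  LIMIT 0, 50;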