2013/7/6 Rafael Ribeiro - iPhone rafaelribeiro...@gmail.com
Dear Colleagues,
I would like to hear your opinion about a situation.
Is there a function that is able to REMOVE all data from a specific date?
are you talking about removing all data from the tables, or just specific
data?
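There is no built-in "purge by date" function, but a plain DELETE with a date predicate does this. A minimal sketch, assuming a hypothetical `orders` table with a `created_at` DATETIME column (both names are illustrative, not from the original thread):

```sql
-- Remove all rows created on or after a specific date
-- (table and column names are hypothetical).
DELETE FROM orders
WHERE created_at >= '2013-07-01 00:00:00';

-- On large tables, delete in chunks so each transaction
-- (and the replication lag it causes) stays small:
DELETE FROM orders
WHERE created_at >= '2013-07-01 00:00:00'
LIMIT 10000;
```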
Hi
2013/7/5 Jim Sheffer j...@higherpowered.com
Hi everyone-
This is probably a no-brainer (I'm new to Navicat), but I have a backup of
a database from Navicat.
I want to be able to see if a certain field has changed since this morning
in the backup (we are having problems with an order
issue on the slave:
Run SHOW SLAVE STATUS\G and post the output here.
Most likely, after you reset the master your slave can't sync anymore,
because it's missing the next sequence of replication files.
Why don't you back up your master with mysqldump and re-import it on the
new setup (i.e. on the MySQL 5.5 instance)?
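A rough sketch of that reseed, assuming you can take a brief read lock on the master; the hostname and binlog coordinates below are placeholders, not values from this thread:

```sql
-- On the master: note the current binlog coordinates.
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;  -- e.g. File: mysql-bin.000042, Position: 1234
-- (run mysqldump while the lock is held, or use its
--  --master-data option to record the coordinates in the dump)
UNLOCK TABLES;

-- On the new 5.5 slave, after importing the dump:
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=1234;
START SLAVE;
SHOW SLAVE STATUS\G  -- verify Slave_IO_Running / Slave_SQL_Running say Yes
```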
We are on a quest to improve the overall performance of our database. It's
generally working pretty well, but we periodically get big slowdowns for
no apparent reason. A prime example today: in the command-line interface
to the DB, I tried to update one record, and got:
you can do:
select * from table\G
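The `\G` terminator makes the mysql client print each column on its own line, which is effectively the two-column (column name, value) view being asked for. A sketch, with illustrative column names and values:

```sql
-- Ending the statement with \G instead of ; gives vertical output:
SELECT * FROM table\G
-- *************************** 1. row ***************************
--       pc_id: 17922
--    priority: 3
--       msgid: ab12
-- (column names and values above are illustrative only)
```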
2013/4/24 h...@tbbs.net
2013/04/24 09:06 -0700, Rajeev Prasad
This table has many columns and only 1 record. SELECT * FROM table;
generates an unreadable list. How can I list the record as two columns
(column name and its value)? I looked at UNPIVOT,
2013/4/8 Reindl Harald h.rei...@thelounge.net
do not top-post
On 08.04.2013 12:40, Bharani Kumar wrote:
On Mon, Apr 8, 2013 at 4:02 PM, Johan De Meersman vegiv...@tuxera.be
wrote:
- Original Message -
From: Bharani Kumar bharanikumariyer...@gmail.com
How to enable mail
2013/3/24 Reindl Harald h.rei...@thelounge.net
On 24.03.2013 05:20, spameden wrote:
2013/3/19 Rick James rja...@yahoo-inc.com:
you have never hosted a large site
Check my email address before saying that.
:D
as said, a big company does not have only geniuses
I do not judge only
tune
worker_processes 8;
why should you do that?
http://en.wikipedia.org/wiki/Nginx
nginx uses an asynchronous event-driven approach to handling requests
-Original Message-
From: spameden [mailto:spame...@gmail.com]
Sent: Tuesday, April 02, 2013 7:10 AM
To: Reindl
2013/3/28 Rafał Radecki radecki.ra...@gmail.com
Hi All.
I have a production setup of four databases connected with
replication. I would like to log every command that clients execute
for auditing.
Take a look at the general query log; it's exactly what you need.
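Since 5.1 the general query log can be toggled at runtime, no restart needed. A sketch of enabling it and reading the audit trail back:

```sql
-- Log every client statement to a table (or set 'FILE' and
-- general_log_file to write a flat file instead).
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Inspect the audit trail:
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 10;

-- Note: this logs *everything* and grows fast; on a busy
-- four-database replicated setup, turn it off when done:
SET GLOBAL general_log = 'OFF';
```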
2013/3/19 Rick James rja...@yahoo-inc.com:
you have never hosted a large site
Check my email address before saying that.
:D
20 may be low, but 100 is rather high.
Never use apache2; it has so many problems under load.
The best combo is php5-fpm + nginx.
It handles loads of users at once if
I'm sorry for top-posting, but I think you can achieve best practice if
you encrypt user data with some sort of key derived from part of the
password, i.e. after the user logs in you can store a personal key for
that user in memory for decryption, so you have to know every user's
password (or
2013/3/13 Reindl Harald h.rei...@thelounge.net:
On 12.03.2013 22:34, spameden wrote:
NOTE: AUTO_INCREMENT is 32768 instead of 17923! So the next inserted row
would have pc_id=32768.
Please suggest whether this is normal behavior or not.
what do you expect if a PRIMARY KEY record gets removed?
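The delete case is easy to reproduce: InnoDB never hands out a deleted id again (at least until a server restart). A minimal sketch with a throwaway table:

```sql
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
INSERT INTO t (v) VALUES (1), (2), (3);
DELETE FROM t WHERE id = 3;     -- remove the row with the highest id
INSERT INTO t (v) VALUES (4);
SELECT MAX(id) FROM t;          -- 4, not 3: the deleted id is not reused
```

That said, the jump reported here (17923 straight to 32768) is about bulk-insert allocation, not deletes, as the rest of the thread works out.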
re
large enough table I'd need to enlarge the PRIMARY KEY storage type,
because it's almost double the size of the actual records.
I didn't delete any records in this test either; I inserted them all via
LOAD DATA.
2013/3/13 spameden spame...@gmail.com:
2013/3/13 Reindl Harald h.rei...@thelounge.net:
Am
1 row in set (0.00 sec)
Shame it's a read-only variable and needs a restart of the whole MySQL
server.
2013/3/13 spameden spame...@gmail.com
Never mind, I've found the bug:
http://bugs.mysql.com/bug.php?id=57643
I'm going to subscribe to it and see if it gets resolved.
Many thanks guys for all your assistance!
2013/3/13 spameden spame...@gmail.com:
2013/3/13 Rick James rja...@yahoo-inc.com:
AUTO_INCREMENT guarantees
)
So is it normal, or should I file a bug?
There may be more. Most of those are covered here:
http://mysql.rjweb.org/doc.php/ricksrots
-Original Message-
From: spameden [mailto:spame...@gmail.com]
Sent: Tuesday, March 12, 2013 2:46 PM
To: Rick James
Cc: mysql@lists.mysql.com
in set (0.00 sec)
It is acceptable, by the definition of AUTO_INCREMENT, for it to burn the
missing 15K ids.
I don't get this explanation; could you please explain a bit more? So it's
completely normal for an AUTO_INCREMENT field to act like this?
-Original Message-
From: spameden
the table-level
AUTO-INC lock is held until the end of the statement, and only one
such statement can execute at a time.
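Whether that table-level AUTO-INC lock is taken at all depends on `innodb_autoinc_lock_mode` (0 = traditional, 1 = consecutive, 2 = interleaved). In modes 1 and 2, a bulk insert whose row count is unknown up front, such as LOAD DATA, has its ids pre-allocated in ranges that double in size, and the unused tail of the last range is discarded, so gaps appear even with no parallel inserts. To check the running mode:

```sql
-- Read-only in 5.1/5.5; changing it requires editing my.cnf
-- and restarting the server.
SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';
-- 1 (the default, "consecutive") allocates id ranges in powers
-- of two for bulk inserts, which is where burned ids come from.
```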
So I believe this is a bug in MySQL because there were no parallel
INSERTs at all.
Sorry for the spam :)
2013/3/13 spameden spame...@gmail.com:
After setting
Hi,
could your colleague please share the steps he took to recover the data?
I'd most definitely be interested!
Thanks
2012/10/29 Lorenzo Milesi max...@ufficyo.com
That's rough. The only thing I could suggest is try out Percona's
data recovery tool
My colleague did some recovery using Percona
,
meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
WHERE time = UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;
2012/10/16 Shawn Green shawn.l.gr...@oracle.com
On 10/15/2012 7:15 PM, spameden wrote:
Thanks a lot for all your comments!
I did disable Query cache before
front
as in:
set @ut= unix_timestamp(now())
and then use that in your statement.
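Applied to the query in question, the rewrite looks like this. The column list is abbreviated and the `<=` comparison is an assumption; the predicate in the quoted query appears truncated in the archive:

```sql
-- Evaluate NOW() once, up front, instead of per row:
SET @ut = UNIX_TIMESTAMP(NOW());

SELECT meta_data, task_id, msgid
FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= @ut          -- comparison operator assumed
ORDER BY priority
LIMIT 0, 50;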
On 2012-10-16 8:42 AM, spameden spame...@gmail.com wrote:
Will do.
mysql> SHOW GLOBAL VARIABLES LIKE '%log%';
+-+-+
| Variable_name
2012/10/16 Tim Gustafson t...@soe.ucsc.edu
Thanks for all the responses; I'll respond to each of them in turn below:
you cannot simply copy a single database in this state;
InnoDB is much more complex than MyISAM
I know; that's why I rsync'd the entire /var/db/mysql folder (which
Also, I forgot to say: you need to shut down MySQL completely before
rsync'ing its data, otherwise your snapshot might be inconsistent and
InnoDB will fail.
Also make sure the database shutdown was clean according to the log.
2012/10/16 Tim Gustafson t...@soe.ucsc.edu
load data from master never worked for
That's exactly what I thought when reading Michael's email, but I tried it
anyway; thanks for the clarification :)
2012/10/16 h...@tbbs.net
2012/10/16 12:57 -0400, Michael Dykman
your now() statement is getting executed for every row of the select. Try
putting the phrase up front
as in:
set @ut=
Hi, list.
Sorry for the long subject, but I'm really interested in solving this and
need a help:
I've got a table:
mysql> show create table send_sms_test;
1 row in set (0.00 sec)
It uses filesort and results in worse performance...
Any suggestions? Should I file a bug?
2012/10/16 spameden spame...@gmail.com
is going on; better than the SlowLog.
* INT(3) is not a 3-digit integer, it is a full 32-bit integer (4 bytes).
Perhaps you should have SMALLINT UNSIGNED (2 bytes).
* BIGINT takes 8 bytes -- usually over-sized.
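The (3) is only a display width (used together with ZEROFILL); it does not limit the range or the storage size. A quick illustration with a throwaway table:

```sql
CREATE TABLE widths (
  a INT(3),            -- still 4 bytes; range up to 2147483647
  b SMALLINT UNSIGNED  -- 2 bytes; range 0..65535
);
-- 123456 has six digits but fits fine in INT(3):
INSERT INTO widths VALUES (123456, 65535);
```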
-Original Message-
From: spameden [mailto:spame...@gmail.com]
Sent
spameden spame...@gmail.com
Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.
Here is the MySQL-5.1 Percona testing table:
mysql> select count(*) from send_sms_test;
+--+
| count(*) |
+--+
| 143879 |
+--+
1 row in set (0.03 sec)
Without LIMIT
, run them twice (and be sure not to hit the Query
cache). The first time freshens the cache (buffer_pool, etc); the second
time gives you a 'reproducible' time. I believe (without proof) that the
cache contents can affect the optimizer's choice.
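A sketch of that timing procedure; SQL_NO_CACHE keeps the result out of the query cache, so the second run measures warm-buffer-pool execution rather than a cache hit (the WHERE clause here is illustrative):

```sql
-- First run warms the InnoDB buffer pool; discard its timing.
SELECT SQL_NO_CACHE COUNT(*) FROM send_sms_test WHERE priority > 1;
-- Second run: the 'reproducible' time, with caches warm.
SELECT SQL_NO_CACHE COUNT(*) FROM send_sms_test WHERE priority > 1;
```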
*From:* spameden [mailto:spame