2013/7/6 Rafael Ribeiro - iPhone
> Dear Colleagues,
>
> I would like to hear your opinion on a situation.
>
> Is there a function that is able to REMOVE all data from a specific date?
>
Are you talking about removing all data from the tables, or just specific
data inserted at a certain time?
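If it's the latter, a minimal sketch would be something like this (table and
column names are made up, since the original post doesn't show a schema):

DELETE FROM some_table WHERE created_at >= '2013-07-01';
-- or, to remove only rows inserted on one specific date:
DELETE FROM some_table WHERE DATE(created_at) = '2013-07-01';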
Hi
2013/7/5 Jim Sheffer
> Hi everyone-
>
> This is probably a no-brainer (I'm new to Navicat), but I have a backup of
> a database from Navicat.
>
> I want to be able to see if a certain field has changed since this morning
> in the backup (We are having problems with an order that somehow
> "du
issue on the slave:
SHOW SLAVE STATUS\G and post the output here.
Most likely, after you reset the master your slave can't sync anymore,
because it's missing the next sequence of replication files.
Why don't you back up your master with mysqldump and reimport it on the new
setup (i.e. on the MySQL 5.5 instance)?
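For what it's worth, once the dump (taken with --master-data) is restored on
the new instance, re-pointing the slave would look roughly like this; all the
host, file and position values below are placeholders:

STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='replpass',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
START SLAVE;
SHOW SLAVE STATUS\G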
201
>
> > We are on a quest to improve the overall performance of our database.
> It's
> > generally
> > working pretty well, but we periodically get big slowdowns for no
> apparent
> > reason. A
> > prime example today - in the command line interface to the DB, I tried to
> > update one
> > record, an
you can do:
select * from table\G
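The \G terminator makes the client print each column on its own line, i.e. the
column name / value layout you're asking for. With a hypothetical table t it
looks like:

mysql> select * from t\G
*************************** 1. row ***************************
  id: 1
name: foo
1 row in set (0.00 sec)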
2013/4/24
> 2013/04/24 09:06 -0700, Rajeev Prasad
> this table has many columns and only 1 record. select * from table;
> generates an unreadable list. How can I list the record as two columns
> (column name and its value)? I looked at UNPIVOT, b
2013/4/8 Reindl Harald
> do not top-post
>
> Am 08.04.2013 12:40, schrieb Bharani Kumar:
> > On Mon, Apr 8, 2013 at 4:02 PM, Johan De Meersman wrote:
> >
> >> - Original Message -
> >>> From: "Bharani Kumar"
> >>>
> >>> How to enable mail agent service in MYSQL. and what are the necess
2013/3/28 Rafał Radecki
> Hi All.
>
> I have a production setup of four databases connected with
> replication. I would like to log every command that clients execute
> for auditing.
>
Take a look at the general query log; it's exactly what you need:
http://dev.mysql.com/doc/refman/5.5/en/query-log
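If you want to turn it on without a restart, a minimal sketch (the file path is
just an example; general_log can also be set in my.cnf):

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- or send it to the mysql.general_log table instead of a file:
-- SET GLOBAL log_output = 'TABLE';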
m dropped far fewer
requests than apache2 + mod_php.
apache2 is so bad about eating memory and system resources.
> >
> > why should it do that?
> >
> > > And, should you run 8 nginx web servers on an 8-core box?
>
No, you just tune:
worker_processes 8;
>
2013/3/24 Reindl Harald
>
>
> Am 24.03.2013 05:20, schrieb spameden:
> > 2013/3/19 Rick James :
> >>> you never have hosted a large site
> >> Check my email address before saying that.
> >
> > :D
>
> as said, big company does not have only
2013/3/19 Rick James :
>> you never have hosted a large site
> Check my email address before saying that.
:D
>
> 20 may be low, but 100 is rather high.
Never use apache2; it has so many problems under load.
The best combo is php5-fpm + nginx.
It handles loads of users at once if well tuned.
>
>> -
I'm sorry for top-posting, but I think you can achieve best
practice if you encrypt user data with some sort of hash made from
part of the password, i.e. after the user is logged in you can store a
personal key for the user in memory for decryption, so you have to know
every user's password (or pa
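Just to sketch the idea (the table, column names and key-derivation scheme here
are invented, and MySQL's AES_ENCRYPT is only one possible way to do it):

-- key derived from the user's password plus a per-user salt (both assumed)
INSERT INTO user_secrets (user_id, secret)
VALUES (42, AES_ENCRYPT('card number', SHA2(CONCAT(@user_password, @user_salt), 256)));

-- after login, decrypt with the same derived key
SELECT AES_DECRYPT(secret, SHA2(CONCAT(@user_password, @user_salt), 256)) AS secret
FROM user_secrets
WHERE user_id = 42;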
2013/3/13 Reindl Harald :
>
>
> Am 12.03.2013 22:34, schrieb spameden:
>> NOTE: AUTO_INCREMENT is 32768 instead of 17923! So the next inserted row
>> would have pc_id=32768.
>>
>> Please suggest whether this is normal behavior or not
>
> what do you expect if a PR
e the table-level
AUTO-INC lock is held until the end of the statement, and only one
such statement can execute at a time. "
So I believe this is a bug in MySQL because there were no parallel
INSERTs at all.
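For reference, the lock mode in play can be checked like this (the variable is
read-only at runtime, so changing it means editing my.cnf and restarting;
0 = traditional, 1 = consecutive, the default, 2 = interleaved):

SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';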
Sorry for the spam :)
2013/3/13 spameden :
> After setting innodb_autoinc_lock_mo
ws in set (0.00 sec)
>
> It is acceptable, by the definition of AUTO_INCREMENT, for it to burn the
> missing 15K ids.
I don't get this explanation; could you please explain a bit more? So
it's completely normal for an AUTO_INCREMENT field to act like this?
>
>> -Origin
TABLE
2. used LOAD FILE only via command line (1 thread)
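A hypothetical sketch of such a single-threaded load, assuming LOAD DATA INFILE
(the file name and delimiters are invented):

LOAD DATA INFILE '/tmp/send_sms.csv' INTO TABLE send_sms_test
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';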
So is it normal, or should I file a bug?
>
> There may be more. Most of those are covered here:
> http://mysql.rjweb.org/doc.php/ricksrots
>
>
>
>
>> -Original Message-
>> From: spameden [mailt
Never mind, I've found the bug:
http://bugs.mysql.com/bug.php?id=57643
I'm gonna subscribe to it and see if it's gonna be resolved.
Many thanks guys for all your assistance!
2013/3/13 spameden :
> 2013/3/13 Rick James :
>> AUTO_INCREMENT guarantees that it will not assi
AULT NULL,
PRIMARY KEY (`pc_id`)
) ENGINE=InnoDB AUTO_INCREMENT=17923 DEFAULT CHARSET=utf8 |
+---+-+
1 row in set (0.00 sec)
Shame it'
f I have a large enough table I'd need to enlarge the
PRIMARY KEY storage type, because it's almost double the size of the
actual records.
I didn't delete any records in this test either; I inserted them all via LOAD DATA.
2013/3/13 spameden :
> 2013/3/13 Reindl Harald :
>>
>&g
Hi,
could your colleague please share the steps he took to recover the data?
I'd most definitely be interested!
Thanks
2012/10/29 Lorenzo Milesi
> > That's rough. The only thing I could suggest is try out Percona's
> > data recovery tool
>
> My colleague did some recovery using Percona tools and (suspa
That's exactly what I thought when reading Michael's email, but I tried it
anyway; thanks for the clarification :)
2012/10/16
> 2012/10/16 12:57 -0400, Michael Dykman
> your now() statement is getting executed for every row on the select. try
> putting the phrase up front
> as in:
> set @ut= u
Also, I forgot to say you need to shut down MySQL completely before rsync'ing
its data, otherwise your snapshot might be inconsistent and InnoDB will fail.
Also make sure the database shutdown was clean according to the log.
2012/10/16 Tim Gustafson
> > load data from master never worked for innodb.
>
> And the
2012/10/16 Tim Gustafson
> Thanks for all the responses; I'll respond to each of them in turn below:
>
> > you can not simply copy a single database in this state
> > innodb is much more complex like myisam
>
> I know; that's why I rsync'd the entire /var/db/mysql folder (which
> includes the ib_
se up front
> as in:
> set @ut= unix_timestamp(now())
> and then use that in your statement.
>
> On 2012-10-16 8:42 AM, "spameden" wrote:
>
> Will do.
>
> mysql> SHOW GLOBAL VARIABLES LIKE '%log%';
>
> +---
oxc_id, binfo,
meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;
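Applying Michael's suggestion, the same query with the timestamp computed only
once would look roughly like this (column list shortened here):

SET @ut = UNIX_TIMESTAMP(NOW());
SELECT /* ... same column list ... */ boxc_id, binfo, meta_data, task_id, msgid
FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= @ut
ORDER BY priority LIMIT 0,50;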
2012/10/16 Shawn Green
> On 10/15/2012 7:15 PM, spameden wrote:
>
>> Thanks a lot for all your comments!
>>
>> I did
>
> When timing things, run them twice (and be sure not to hit the Query
> cache). The first time freshens the cache (buffer_pool, etc); the second
> time gives you a 'reproducible' time. I believe (without proof) that the
> cache contents can affect the o
ed INDEX (time, priority) at all).
2012/10/16 spameden
> Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.
>
> Here is the MySQL-5.1 Percona testing table:
>
> mysql> select count(*) from send_sms_test;
> +--+
> | count(*) |
> +--+
ON STATUS LIKE 'Handler_read%';
> SELECT ... FORCE INDEX(...) ...;
> SHOW SESSION STATUS LIKE 'Handler_read%';
> Then take the diffs of the handler counts. This will give you a pretty
> detailed idea of what is going on; better than the SlowLog.
>
> * INT(3) is
(truncated EXPLAIN output; the Extra column shows: Using where; Using filesort)
1 row in set (0.00 sec)
It uses filesort and results in worse performance...
Any suggestions? Should I submit a bug?
2012/10/16 spameden
Hi, list.
Sorry for the long subject, but I'm really interested in solving this and
need some help:
I've got a table:
mysql> show create table send_sms_test;
+---+