Brian Dunning wrote:
Hey all -
I have a table with 12,000,000 records spread over about 6 years. I'm trying to
delete all but the last 2 years, but no matter how small of a group I try to
delete at a time, it keeps hanging up the server and I eventually have to
restart MySQL. The table looks
> From: br...@briandunning.com
> Subject: Best way to purge old records from a huge table?
> Date: Fri, 4 Jun 2010 08:10:07 -0700
> To: mysql@lists.mysql.com
Hi Brian,
I would suggest you use mk-archiver (from the Maatkit tools) for this.
http://www.percona.com/files/presentations/Make_Life_Easier_Maatkit_v2.pdf
Regards,
Krishna
On Fri, Jun 4, 2010 at 8:40 PM, Brian Dunning wrote:
> Hey all -
>
> I have a table with 12,000,000 records spread over about 6 years.
Don't use a single DELETE statement.
Use a stored procedure: loop through and delete records in chunks, committing
every 10k rows. That way MySQL will not hang, and if you have a replication
setup, the slave will not lag behind either.
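Something along these lines, for example (a rough sketch only - the table and
column names, the two-year cutoff and the 10k batch size are all assumptions,
stored procedures need MySQL 5.0 or later, and the COMMIT only matters for
InnoDB with autocommit disabled):

DELIMITER //
CREATE PROCEDURE purge_old_rows()
BEGIN
  DECLARE rows_deleted INT DEFAULT 1;
  WHILE rows_deleted > 0 DO
    -- delete a small chunk per statement so locks are held only briefly
    DELETE FROM mytable
     WHERE created < NOW() - INTERVAL 2 YEAR
     LIMIT 10000;
    SET rows_deleted = ROW_COUNT();
    COMMIT;
  END WHILE;
END //
DELIMITER ;

CALL purge_old_rows();

Each statement touches only 10k rows, so nothing holds locks for very long,
and the slave replays the same small chunks instead of one huge delete.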
regards
anandkl
On Fri, Jun 4, 2010 at 8:40 PM, Brian Dunning wrote:
> Hey
Hey all -
I have a table with 12,000,000 records spread over about 6 years. I'm trying to
delete all but the last 2 years, but no matter how small of a group I try to
delete at a time, it keeps hanging up the server and I eventually have to
restart MySQL. The table looks like this:
`creation
Hello Shuly,
Try this.
http://www.mysqlperformanceblog.com/2008/07/03/how-to-load-large-files-safely-into-innodb-with-load-data-infile/
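In case the link moves, the rough idea (a sketch only; the file path, table
name and session settings below are assumptions, not something taken from the
article) is to load each of the .txt files that mysqldump -T produced with
LOAD DATA INFILE, relaxing constraint checks for the duration of the bulk load:

SET unique_checks = 0;
SET foreign_key_checks = 0;
-- mysqldump -T writes one tab-separated .txt per table, which matches
-- LOAD DATA INFILE's default field and line terminators
LOAD DATA INFILE '/path/to/dump/mytable.txt' INTO TABLE mytable;
SET foreign_key_checks = 1;
SET unique_checks = 1;

The server reads that file, so it has to be on the database host (or use LOAD
DATA LOCAL INFILE).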
On Tue, Feb 24, 2009 at 1:08 PM, Shuly Avraham wrote:
> Hi,
>
> I have a huge table with about 50 millions of rows.
> I dumped the table using
Hi,
I have a huge table with about 50 million rows.
I dumped the table using mysqldump -T as a text dump, and am now trying
to import it into a database on another server, but it keeps hanging.
Are there any options or server variables I can set to help out with it?
Thanks,
Shuly.
Thanks to everyone who answered, think I've got enough info now to
handle it. :)
MyISAM does table level locking, which is to say that read (select) and
write (insert/update/delete) cannot happen at the same time. One will wait
for the other.
If your select takes 10 seconds, then any write operations will block for
those 10 seconds. Other read processes should be unaffected.
The table is MyISAM, does that matter?
On May 2, 2007, at 7:28 AM, Dan Buettner wrote:
A few observations:
1 - if the table is in the InnoDB format, you aren't going to lock
up their server, as InnoDB doesn't do table locking. SHOW TABLE
STATUS LIKE 'tablename' will tell you.
2 - Ryan'
A few observations:
1 - if the table is in the InnoDB format, you aren't going to lock up their
server, as InnoDB doesn't do table locking. SHOW TABLE STATUS LIKE
'tablename' will tell you.
2 - Ryan's mysqldump script looks useful - also, there's a little-used
option with mysqldump that lets yo
I use a script like this:
#!/bin/bash
# Name the dump after the day of the week, giving a rolling week of backups.
DATE=`date +%A`
DESTFILE="/home/mysql-backups/mysql-dump-$DATE"
/usr/bin/mysqldump --skip-extended-insert -uroot -p"password" mydatabase > $DESTFILE.sql
# Zip the dump; -m removes the original .sql once it is safely in the archive.
/usr/bin/zip -qjTm $DESTFILE.zip $DESTFILE.sql
I end up with:
mysql-dump-Friday.zip
mysql-dump-Saturday.zip, and so on - one file per day of the week.
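If an actual CSV file is wanted rather than a SQL dump (which is what the
original question below asks for), SELECT ... INTO OUTFILE writes one directly.
A sketch only - the output path and table name are made up, and the MySQL
server itself writes the file, so it lands on the database host:

SELECT *
  INTO OUTFILE '/tmp/mytable.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM mytable;

Since the table in question is MyISAM (see the locking note earlier in the
thread), the SELECT will block writes for as long as it runs, so it is worth
scheduling for a quiet period.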
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
need to dump it in CSV format and zip the file.
This is not on my server, and it's in production, so I don't want to
risk testing different methods and possibly hanging up their server
for a period of time, so I wanted to
Hello.
MySQL can log slow queries for you. Have a look here:
http://dev.mysql.com/doc/mysql/en/Slow_query_log.html
Luke Vanderfluit wrote:
> Hi.
>
> I have upgraded RT Request Tracker. I'm using Mysql 4.0.18, Apache 1.3.27
> (with static modperl module).
> The database we
I would check the slow query log. Queries that take longer than the
configured threshold are logged there. Check your MySQL configuration file
to see how long a query may run before it is considered slow, and where
the log is written.
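For reference, the threshold can also be checked from any client (a minimal
sketch; on a 4.0 server the slow log itself is typically enabled with the
log-slow-queries option in my.cnf):

-- how many seconds a query may run before it is logged (default 10)
SHOW VARIABLES LIKE 'long_query_time';
-- all logging-related settings, including whether the slow log is enabled
SHOW VARIABLES LIKE 'log%';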
On Oct 25, 2005, at 3:16 AM, Luke Vanderfluit wrote:
Hi.
I have upgraded RT Request Tracker. I'm using Mysql 4.0.18, Apache 1.3.27
(with static modperl module).
The database we are using is quite large.
The main problem we are having is that certain processes can take quite
a long time.
This is due to mysql churning through data.
I'd like to find
> Leonardo,
>
> Earlier you said that a statement without a parameter ('?') works, but
> one with a parameter doesn't.
>
> In both cases, are you still using a PreparedStatement?
>
> -Mark
>
Yes, it is always a PreparedStatement.
If you need more details (or you want me to try something
Leonardo Francalanci wrote:
>> What happens if you use a nightly snapshot of Connector/J from
>> http://downloads.mysql.com/snapshots.php?
>
> I tried, but I still have the same problem:
>
> java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
Richard F. Rebel wrote:
Next time you come over, maybe you can help hold him or take the shot.
He's such a pain to take pictures of, he walks right up to the camera.
Yeah, I also have problems taking snapshots of my cat... use FEL to
pre-flash so he won't blink when you take the actual picture.
> Next time you come over, maybe you can help hold him or take the shot.
> He's such a pain to take pictures of, he walks right up to the camera.
Mmmh... my English is very bad... I can't understand...
is it a joke or just a mistake?
Next time you come over, maybe you can help hold him or take the shot.
He's such a pain to take pictures of, he walks right up to the camera.
On Tue, 2004-06-22 at 09:50, Leonardo Francalanci wrote:
> > It looks as if you are being over-complex anyway. Isn't what you want
> >
> > SELECT * FROM
> What happens if you use a nightly snapshot of Connector/J from
> http://downloads.mysql.com/snapshots.php?
I tried, but I still have the same problem:
java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:908)
> It looks as if you are being over-complex anyway. Isn't what you want
>
> SELECT * FROM PARTITIONED_B ORDER BY RAND() LIMIT 1 ;
>
> which picks a single random record.
No, I have to know which ID I pick before doing the select
(because I split my data, and when I access the split
"Leonardo Francalanci" <[EMAIL PROTECTED]> wrote on 22/06/2004
14:26:54:
> > A non-streaming connection will place the resultset in memory. With
large
> > tables you can run out of memory and the application will stop.
>
>
> I get only 1 record per time (accessing by key), so I don't think
>
Leonardo Francalanci wrote:
> I have a large table (64,000,000 rows).
> Everything was fine when the table was 16,000,000 rows,
> now connectorJ crashes:
>
> java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
> at
> com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:908)
...report a bug in a couple of days...
> -Original Message-
> From: Leonardo Francalanci [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, 22 June 2004 15:09
> To: Mysql
> Subject: R: connectorJ & huge table problem
>
>
> I have also weird results:
> I access data
> A non-streaming connection will place the resultset in memory. With large
> tables you can run out of memory and the application will stop.
I get only one record at a time (accessing by key), so I don't think
it could be an issue.
Thank you anyway!
...was non streaming.
-Original Message-
From: Leonardo Francalanci
To: 'Mysql '
Sent: 6/22/04 7:55 AM
Subject: R: connectorJ & huge table problem
> Are you using a streaming connection?
ehm... how do I know?
My conn string is
jdbc:mysql://192.168.0.253:3000/LETSPARTY1?
> Dude. I wouldn't post your server ip here. ouch.
Well, it's a LAN address!
> Are you using a streaming connection?
ehm... how do I know?
My conn string is
jdbc:mysql://192.168.0.253:3000/LETSPARTY1?autoReconnect=yes
where 192.168.0.253 is a machine on my LAN...
Are you using a streaming connection?
-Original Message-
From: Leonardo Francalanci
To: Mysql
Sent: 6/22/04 7:43 AM
Subject: connectorJ & huge table problem
I have a large table (64,000,000 rows).
Everything was fine when the table was 16,000,000 rows,
now connectorJ crashes:
I have a large table (64,000,000 rows).
Everything was fine when the table was 16,000,000 rows,
now connectorJ crashes:
java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:908)
Sent: Friday, December 05, 2003 12:42
Subject: DELETE on a huge table; how long *should* it take?
Hello,
I have quite a large table, 45 million rows, which has 3 indexes on
it. The rows are evenly distributed across one particular index, which
records the time the row was inserted. At any given time there's
between 20 and 21 days worth of rows, and every night I delete
anything over 20 days.
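One quick thing to check before worrying about the timing itself (a sketch;
the table and column names here are invented) is that the purge condition can
actually use that insertion-time index, otherwise the nightly DELETE scans all
45 million rows:

EXPLAIN
SELECT COUNT(*)
  FROM mytable
 WHERE inserted_at < DATE_SUB(NOW(), INTERVAL 20 DAY);

If the EXPLAIN shows the insertion-time index being used with a sensible row
estimate, the DELETE with the same WHERE clause should use it as well.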
I am trying to create indexes on several columns of a huge table in
MySQL 4.0.12 running on Windows 2000 Server. The table contains 42
million rows, and occupies about 7G of disk space. The server is a
1.5GHz P4 with 1G RAM, and performs fairly well - the load data infile
takes from 22 - 28
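One detail that matters a lot at this size (a sketch below; the table and
column names are invented): each ALTER TABLE rebuilds the whole table, so
adding all the indexes in a single statement does one rebuild instead of one
per index:

ALTER TABLE big_table
  ADD INDEX idx_col_a (col_a),
  ADD INDEX idx_col_b (col_b),
  ADD INDEX idx_col_c (col_c);

The myisam_sort_buffer_size setting also affects how fast the index build
goes, assuming the table is MyISAM.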
From: "Kevin Smith" <[EMAIL PROTECTED]>
> I'm "considering" making a huge table, by that I mean approx. 26 millions
> records which will be static data, no updates, or deletes, etc. will be
> performed on this table, only select statements.
...
> At a
Hi All,
I'm "considering" making a huge table, by that I mean approx. 26 millions
records which will be static data, no updates, or deletes, etc. will be
performed on this table, only select statements.
The schema on the table will be as follows.. The address fields will be 56
you
want info from the vendor table included in queries involving the parts
table.
ryan
- Original Message -
From: "Carlos Fernando Scheidecker Antunes" <[EMAIL PROTECTED]>
To: "MySQL List" <[EMAIL PROTECTED]>
Sent: Saturday, May 05, 2001 4:53 PM
Subject: Perf
I am having performance problems with my server with larger tables.
I have a 512MB Pentium III Red Hat Linux 7 server running MySQL version 3.23.32,
packaged by Red Hat.
This server's /etc/my.cnf is as follows:
[mysqld]
pid-file=/var/run/mysqld/mysqld.pid
datadir=/var/lib/mysql
port=3306
socket