Re: Importing large databases faster
Madison Kelly wrote:
> Hi all, I've got a fairly large set of databases I'm backing up each Friday. The dump takes about 12.5h to finish, generating a ~172 GB file. When I try to load it, though, *after* manually dumping the old databases, it takes 1.5~2 days to load the same databases. I am guessing this is, at least in part, due to indexing.
>
> My question is: given an empty target DB and a dump file generated via:
>
>   ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql
>
> how can I go about efficiently loading it into a new database? Specifically, can I disable triggers, indexes and whatnot until after the load finishes? I can only imagine that a single "ok, go create your indexes now" at the end would be faster. Perhaps some way to hold off commits from happening as often? The target server has 32GB of RAM, so I suspect I should be able to hold things in memory and commit to disk relatively rarely. I am currently loading via this command:
>
>   mysql -psecret < /path/to/backup.sql

For that kind of dump, that kind of restore is what you get. Your current dump generates gigabytes of INSERT statements that must be parsed and then processed. To get a faster restore, use a different sort of dump. I suggest you compare your current process to one that uses the --tab option of mysqldump (to save the data), then uses LOAD DATA INFILE ... to repopulate your server. This is the fastest known method to populate a table other than a direct file copy.

--
Shawn Green, MySQL Senior Support Engineer
Sun Microsystems, Inc.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
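Shawn's point -- that the slow restore is dominated by parsing gigabytes of SQL text, while a --tab dump plus LOAD DATA INFILE streams raw rows -- can be sketched without a MySQL server. The following uses sqlite3 purely as a self-contained stand-in; `mysqldump --tab` and `LOAD DATA INFILE` are the real tools, and the file layout here only approximates what --tab produces (one tab-separated data file per table):

```python
import csv
import sqlite3
import tempfile

# Stand-in for what "mysqldump --tab" leaves behind: a tab-separated
# data file per table, with no INSERT statements to parse at all.
datafile = tempfile.NamedTemporaryFile("w", suffix=".txt",
                                       delete=False, newline="")
csv.writer(datafile, delimiter="\t").writerows(
    [(1, "alpha"), (2, "beta"), (3, "gamma")])
datafile.close()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id INTEGER PRIMARY KEY, name TEXT)")

# Stand-in for LOAD DATA INFILE: stream the rows straight into the
# table instead of parsing gigabytes of SQL statements.
with open(datafile.name, newline="") as f:
    conn.executemany("INSERT INTO demo VALUES (?, ?)",
                     csv.reader(f, delimiter="\t"))

count = conn.execute("SELECT COUNT(*) FROM demo").fetchone()[0]
```

The design point is the same one Shawn makes: a delimited data file skips the SQL parser almost entirely, which is why LOAD DATA INFILE is the fastest documented way to populate a table short of copying the files directly.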
RE: Importing large databases faster
I don't think so, I'm pretty sure you have to use mk-parallel-dump to get the data in a format it wants. The docs are online, though.

Regards,
Gavin Towey

-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 4:35 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Importing large databases faster

Gavin Towey wrote:
> There are scripts out there, such as the Maatkit mk-parallel-dump/restore, that can speed up this process by running in parallel.
>
> However, if you're doing this every week on that large a dataset, I'd just use filesystem snapshots. Your backup/restore would then only take as long as it takes for you to scp the database from one machine to another.
>
> Regards,
> Gavin Towey

Thanks! Will the Maatkit script work on a simple --all-databases dump?

As for the copy, it's a temporary thing. This is just being done weekly while we test out the new server. Once it's live, the new server will indeed be backed up via LVM snapshots. :)

Madi

This message contains confidential information and is intended only for the individual named. If you are not the named addressee, you are notified that reviewing, disseminating, disclosing, copying or distributing this e-mail is strictly prohibited. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any loss or damage caused by viruses or errors or omissions in the contents of this message, which arise as a result of e-mail transmission.
[FriendFinder Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com
RE: Spatial extensions
Yes, spatial indexes are very fast. The query would be something like:

SET @center = GeomFromText('POINT(37.372241 -122.021671)');

SET @radius = 0.005;

SET @bbox = GeomFromText(CONCAT('POLYGON((',
    X(@center) - @radius, ' ', Y(@center) - @radius, ',',
    X(@center) + @radius, ' ', Y(@center) - @radius, ',',
    X(@center) + @radius, ' ', Y(@center) + @radius, ',',
    X(@center) - @radius, ' ', Y(@center) + @radius, ',',
    X(@center) - @radius, ' ', Y(@center) - @radius, '))'));

SELECT id, AsText(coordinates), Distance(@center, coordinates) AS dist
FROM places
WHERE MBRContains(@bbox, coordinates)
ORDER BY dist
LIMIT 10;

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Wednesday, December 16, 2009 4:32 PM
To: mysql
Subject: Spatial extensions

I have a table with 2 million rows of geographic points (latitude, longitude). Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 10 closest points (records) from that table? Currently, I'm using a simple two-column index to speed up queries:

CREATE TABLE `places` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `latitude` decimal(10,8) NOT NULL,
  `longitude` decimal(12,8) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `latlng` (`latitude`,`longitude`)
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

My current query is fairly quick:

SELECT SQL_NO_CACHE * FROM places
WHERE latitude BETWEEN 51.98228037384 AND 52.033153677
  AND longitude BETWEEN -113.94770681881 AND -113.86685484296;

But I wonder a couple of things:

1. Would MySQL's [seemingly anemic] spatial extensions speed things up if I added a column of type POINT (and a corresponding spatial INDEX)?
CREATE TABLE `places` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `latitude` decimal(10,8) NOT NULL,
  `longitude` decimal(12,8) NOT NULL,
  `coordinates` point NOT NULL,
  PRIMARY KEY (`id`),
  KEY `latlng` (`latitude`,`longitude`),
  KEY `coord` (`coordinates`(25))
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

2. How would I write the query?

...Rene
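The CONCAT in Gavin's reply assembles a closed square ring around the center point (SW, SE, NE, NW corners, then back to SW, since a WKT ring must close on its first point). The same construction in Python, for anyone who prefers to build the WKT client-side -- `bbox_wkt` is a hypothetical helper name, not a MySQL function:

```python
def bbox_wkt(x, y, radius):
    """Build the same closed square ring Gavin's CONCAT assembles in SQL."""
    corners = [
        (x - radius, y - radius),  # SW
        (x + radius, y - radius),  # SE
        (x + radius, y + radius),  # NE
        (x - radius, y + radius),  # NW
        (x - radius, y - radius),  # back to SW: WKT rings must close
    ]
    ring = ",".join(f"{cx} {cy}" for cx, cy in corners)
    return f"POLYGON(({ring}))"

wkt = bbox_wkt(37.372241, -122.021671, 0.005)
```

The resulting string can be passed to GeomFromText() exactly as in the SET @bbox statement above.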
Re: Importing large databases faster
Gavin Towey wrote:
> There are scripts out there, such as the Maatkit mk-parallel-dump/restore, that can speed up this process by running in parallel.
>
> However, if you're doing this every week on that large a dataset, I'd just use filesystem snapshots. Your backup/restore would then only take as long as it takes for you to scp the database from one machine to another.
>
> Regards,
> Gavin Towey

Thanks! Will the Maatkit script work on a simple --all-databases dump?

As for the copy, it's a temporary thing. This is just being done weekly while we test out the new server. Once it's live, the new server will indeed be backed up via LVM snapshots. :)

Madi
Spatial extensions
I have a table with 2 million rows of geographic points (latitude, longitude). Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 10 closest points (records) from that table? Currently, I'm using a simple two-column index to speed up queries:

CREATE TABLE `places` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `latitude` decimal(10,8) NOT NULL,
  `longitude` decimal(12,8) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `latlng` (`latitude`,`longitude`)
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

My current query is fairly quick:

SELECT SQL_NO_CACHE * FROM places
WHERE latitude BETWEEN 51.98228037384 AND 52.033153677
  AND longitude BETWEEN -113.94770681881 AND -113.86685484296;

But I wonder a couple of things:

1. Would MySQL's [seemingly anemic] spatial extensions speed things up if I added a column of type POINT (and a corresponding spatial INDEX)?

CREATE TABLE `places` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `latitude` decimal(10,8) NOT NULL,
  `longitude` decimal(12,8) NOT NULL,
  `coordinates` point NOT NULL,
  PRIMARY KEY (`id`),
  KEY `latlng` (`latitude`,`longitude`),
  KEY `coord` (`coordinates`(25))
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

2. How would I write the query?

...Rene
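The two-step idea behind Rene's query -- a cheap, index-friendly BETWEEN bounding box first, then exact distances on the few survivors -- can be sketched end to end. This uses sqlite3 as a stand-in for MySQL, synthetic made-up points, and plain Euclidean distance on degrees for brevity (a haversine formula would be the rigorous choice over larger boxes):

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (id INTEGER PRIMARY KEY,"
             " latitude REAL, longitude REAL)")
# A synthetic diagonal line of points near 52º, -113.9º (made-up data).
conn.executemany("INSERT INTO places VALUES (?, ?, ?)",
                 [(i, 52.0 + i * 0.01, -113.9 - i * 0.01) for i in range(100)])

lat, lng, r = 52.0, -113.9, 0.055

# Step 1: the index-friendly BETWEEN filter from Rene's query.
candidates = conn.execute(
    "SELECT id, latitude, longitude FROM places"
    " WHERE latitude BETWEEN ? AND ? AND longitude BETWEEN ? AND ?",
    (lat - r, lat + r, lng - r, lng + r)).fetchall()

# Step 2: exact distance only on the survivors, then keep the 10 closest.
nearest = sorted(candidates,
                 key=lambda p: math.hypot(p[1] - lat, p[2] - lng))[:10]
```

The spatial-index version Gavin posts elsewhere in this thread does the same pruning with MBRContains instead of two BETWEENs; either way, the exact distance is only computed for the handful of rows inside the box.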
RE: Importing large databases faster
There are scripts out there, such as the Maatkit mk-parallel-dump/restore, that can speed up this process by running in parallel.

However, if you're doing this every week on that large a dataset, I'd just use filesystem snapshots. Your backup/restore would then only take as long as it takes for you to scp the database from one machine to another.

Regards,
Gavin Towey

-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 12:56 PM
To: mysql@lists.mysql.com
Subject: Importing large databases faster

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The dump takes about 12.5h to finish, generating a ~172 GB file. When I try to load it, though, *after* manually dumping the old databases, it takes 1.5~2 days to load the same databases. I am guessing this is, at least in part, due to indexing.

My question is: given an empty target DB and a dump file generated via:

  ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql

how can I go about efficiently loading it into a new database? Specifically, can I disable triggers, indexes and whatnot until after the load finishes? I can only imagine that a single "ok, go create your indexes now" at the end would be faster. Perhaps some way to hold off commits from happening as often? The target server has 32GB of RAM, so I suspect I should be able to hold things in memory and commit to disk relatively rarely. I am currently loading via this command:

  mysql -psecret < /path/to/backup.sql

The source and destination MySQL versions are:

Source: mysql Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64) using readline 5.0
Dest: mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using readline 5.1

The reason for the discrepancy is that the old server was set up from source on CentOS 4.x by a previous tech, and the destination server is the stock version from CentOS 5.x.
The source server will be phased out soon, so no real attempt at maintaining matching versions was made.

Thanks!

Madi
Importing large databases faster
Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The dump takes about 12.5h to finish, generating a ~172 GB file. When I try to load it, though, *after* manually dumping the old databases, it takes 1.5~2 days to load the same databases. I am guessing this is, at least in part, due to indexing.

My question is: given an empty target DB and a dump file generated via:

  ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql

how can I go about efficiently loading it into a new database? Specifically, can I disable triggers, indexes and whatnot until after the load finishes? I can only imagine that a single "ok, go create your indexes now" at the end would be faster. Perhaps some way to hold off commits from happening as often? The target server has 32GB of RAM, so I suspect I should be able to hold things in memory and commit to disk relatively rarely. I am currently loading via this command:

  mysql -psecret < /path/to/backup.sql

The source and destination MySQL versions are:

Source: mysql Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64) using readline 5.0
Dest: mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using readline 5.1

The reason for the discrepancy is that the old server was set up from source on CentOS 4.x by a previous tech, and the destination server is the stock version from CentOS 5.x. The source server will be phased out soon, so no real attempt at maintaining matching versions was made.

Thanks!

Madi
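On the "hold off commits" question: mysqldump's SQL output already wraps MyISAM table data in ALTER TABLE ... DISABLE KEYS / ENABLE KEYS, and for InnoDB the manual's bulk-load advice is to run the import with SET autocommit=0 (plus unique_checks=0 and foreign_key_checks=0) and COMMIT once at the end. The effect of one big transaction versus a commit per statement can be illustrated with sqlite3 as a transactional stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.isolation_level = None  # take manual control of transactions

# One explicit transaction around the whole load: the engine batches
# its durable writes instead of flushing after every INSERT, which is
# the analogue of SET autocommit=0 ... COMMIT around a MySQL import.
conn.execute("BEGIN")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, "row %d" % i) for i in range(10_000)))
conn.execute("COMMIT")

loaded = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Nothing here is MySQL-specific; the point is only that commit frequency, not parsing alone, is one of the knobs a restore can turn.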
RE: Cannot create stored procedure (Using example from mysql manual) -- mysql 5.1.37 -- Ubuntu 9.10
> -Original Message-
> From: Walton Hoops [mailto:wal...@vyper.hopto.org]
>
> Hi all. I am running into a very frustrating problem trying to create a stored procedure. I had originally assumed I was using bad syntax, but even examples copied and pasted directly from the manual are giving the same error.
>
> mysql> CREATE DEFINER = 'walton'@'localhost' PROCEDURE account_count()
>     -> BEGIN
>     -> SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3
> mysql>
>
> This example can be found at:
> http://dev.mysql.com/doc/refman/5.1/en/create-procedure.html

Figures, I find the answer just as soon as I send the request for help. I just needed to read the documentation better. As penance, here is the answer I found.

From http://dev.mysql.com/doc/refman/5.1/en/stored-programs-defining.html:

"If you use the mysql client program to define a stored program that contains the semicolon characters within its definition, a problem arises. By default, mysql itself recognizes semicolon as a statement delimiter, so you must redefine the delimiter temporarily to cause mysql to pass the entire stored program definition to the server."

So the example I was using becomes:

DELIMITER |
CREATE DEFINER = 'admin'@'localhost' PROCEDURE account_count()
SQL SECURITY INVOKER
BEGIN
  SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
END|
DELIMITER ;

Sorry for the unnecessary question.
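The failure mode the manual describes is purely client-side statement splitting, which a few lines of Python can demonstrate. `split_on` is a deliberately naive stand-in for the mysql client's delimiter handling (the real client also respects quotes and comments):

```python
def split_on(script, delimiter=";"):
    """Naively split a script into statements, as a stand-in for the
    mysql client's delimiter handling (hypothetical simplification)."""
    return [s.strip() for s in script.split(delimiter) if s.strip()]

proc = ("CREATE PROCEDURE account_count() "
        "BEGIN SELECT 'Number of accounts:', COUNT(*) FROM mysql.user; END")

# Splitting on ';' cuts the procedure body into two incomplete pieces,
# which is exactly what produces the ERROR 1064 above: the server only
# ever sees the truncated first fragment.
broken = split_on(proc)

# Switching the delimiter, as DELIMITER | does, keeps the body intact.
whole = split_on(proc + "|", delimiter="|")
```

Once the whole CREATE PROCEDURE text reaches the server as a single statement, the embedded semicolon is parsed as part of the procedure body, not as a statement terminator.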
RE: Cannot create stored procedure (Using example from mysql manual) -- mysql 5.1.37 -- Ubuntu 9.10
You need to use:

DELIMITER //

or some other symbol besides ; to change the client's end-of-statement symbol. Otherwise, the client ends the statement at the first ; inside the procedure, but the statement isn't yet complete. This is described in the manual on that same page.

Regards,
Gavin Towey

-Original Message-
From: Walton Hoops [mailto:wal...@vyper.hopto.org]
Sent: Wednesday, December 16, 2009 10:46 AM
To: mysql@lists.mysql.com
Subject: Cannot create stored procedure (Using example from mysql manual) -- mysql 5.1.37 -- Ubuntu 9.10

Hi all. I am running into a very frustrating problem trying to create a stored procedure. I had originally assumed I was using bad syntax, but even examples copied and pasted directly from the manual are giving the same error.

mysql> CREATE DEFINER = 'walton'@'localhost' PROCEDURE account_count()
    -> BEGIN
    -> SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3
mysql>

This example can be found at:
http://dev.mysql.com/doc/refman/5.1/en/create-procedure.html

Google has failed me on this one. Can anyone advise me as to what I need to do to troubleshoot this? Also, if it is an error in the documentation, how would I go about notifying someone so it can be corrected? Any help would be greatly appreciated.
Cannot create stored procedure (Using example from mysql manual) -- mysql 5.1.37 -- Ubuntu 9.10
Hi all. I am running into a very frustrating problem trying to create a stored procedure. I had originally assumed I was using bad syntax, but even examples copied and pasted directly from the manual are giving the same error.

mysql> CREATE DEFINER = 'walton'@'localhost' PROCEDURE account_count()
    -> BEGIN
    -> SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3
mysql>

This example can be found at:
http://dev.mysql.com/doc/refman/5.1/en/create-procedure.html

Google has failed me on this one. Can anyone advise me as to what I need to do to troubleshoot this? Also, if it is an error in the documentation, how would I go about notifying someone so it can be corrected? Any help would be greatly appreciated.
migrating a 32bit installation to a 64bit host - copy or export/import?
Hi,

I plan to migrate a 32-bit MySQL installation to a 64-bit host (mysql-5.0.77). Both servers are RedHat EL 5.4 with the original mysql rpm.

The "simple" plan was to shut down mysql and then copy the db files in /var/lib/mysql/ from host to host.

Any suggestions? Or comments? Or should I export the dbs and then import them?

Best regards,

Götz

--
Götz Reinicke
IT-Koordinator

Tel. +49 7141 969 420
Fax +49 7141 969 55 420
E-Mail goetz.reini...@filmakademie.de

Filmakademie Baden-Württemberg GmbH
Akademiehof 10
71638 Ludwigsburg
www.filmakademie.de

Eintragung Amtsgericht Stuttgart HRB 205016
Vorsitzende des Aufsichtsrats: Prof. Dr. Claudia Hübner
Staatsrätin für Demographischen Wandel und für Senioren im Staatsministerium
Geschäftsführer: Prof. Thomas Schadt
Re: Count records in join
Yes, that would do what you mentioned -- show all programs with a count of events -- but I need the opposite: show (and delete) all that don't have any events. Well, I'll just have to use IS NULL instead. Thanks.

MV

On Wed, Dec 16, 2009 at 3:17 PM, Jerry Schwartz wrote:
> >-Original Message-
> >From: Miguel Vaz [mailto:pagong...@gmail.com]
> >Sent: Wednesday, December 16, 2009 9:39 AM
> >To: Johan De Meersman
> >Cc: Gavin Towey; mysql@lists.mysql.com
> >Subject: Re: Count records in join
> >
> >Thanks all for the feedback. Here's what I did:
> >
> >select p.id_prog, count(r.id_event) e
> >from programas p
> >left join (events r) on (p.id_prog = r.id_prog)
> >group by r.id_event
> >
> [JS] Add
>
> HAVING COUNT(*) > 0
>
> is one way.
>
> I haven't been following the thread, but would
>
> =
> SELECT `p`.`id_prod`, COUNT(`r`.`id_event`) `e`
> FROM `programas` `p`
> LEFT JOIN `events` `r` ON `p`.`id_prod` = `r`.`id_prod`
> WHERE `r`.`id_prod` IS NOT NULL
> GROUP BY `p`.`id_prod`;
> =
>
> do what you want? That should find only those rows in `programas` that match rows in `events`, and give you the number of events for each one.
>
> Regards,
>
> Jerry Schwartz
> The Infoshop by Global Information Incorporated
> 195 Farmington Ave.
> Farmington, CT 06032
> 860.674.8796 / FAX: 860.674.8341
> www.the-infoshop.com
RE: Count records in join
>-Original Message-
>From: Miguel Vaz [mailto:pagong...@gmail.com]
>Sent: Wednesday, December 16, 2009 9:39 AM
>To: Johan De Meersman
>Cc: Gavin Towey; mysql@lists.mysql.com
>Subject: Re: Count records in join
>
>Thanks all for the feedback. Here's what I did:
>
>select p.id_prog, count(r.id_event) e
>from programas p
>left join (events r) on (p.id_prog = r.id_prog)
>group by r.id_event
>
[JS] Add

HAVING COUNT(*) > 0

is one way.

I haven't been following the thread, but would

=
SELECT `p`.`id_prod`, COUNT(`r`.`id_event`) `e`
FROM `programas` `p`
LEFT JOIN `events` `r` ON `p`.`id_prod` = `r`.`id_prod`
WHERE `r`.`id_prod` IS NOT NULL
GROUP BY `p`.`id_prod`;
=

do what you want? That should find only those rows in `programas` that match rows in `events`, and give you the number of events for each one.

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
www.the-infoshop.com

>This gives me a list of all the distinct progs with a count of how many
>events on each. I then delete the empty ones.
>
>It would be nice to be able to delete the empty ones in the same query.
>
>MV
>
>On Wed, Dec 16, 2009 at 1:48 PM, Johan De Meersman wrote:
>
>> If the aim is purely to find the progs without events, it might be more
>> efficient to use something like
>>
>> select * from progs where not exists (select id_prog from events where
>> id_prog = progs.id_prog);
>>
>> My syntax might be off, check "not exists" documentation for more info.
>>
>> On Tue, Dec 15, 2009 at 8:14 PM, Gavin Towey wrote:
>>
>>> Hi Miguel,
>>>
>>> You'll need to use LEFT JOIN; that will show all records that match, and a
>>> row in the second table with all values NULL where there is no match. Then
>>> you find all those rows that have no match in your WHERE clause.
>>>
>>> Regards,
>>> Gavin Towey
>>>
>>> -Original Message-
>>> From: Miguel Vaz [mailto:pagong...@gmail.com]
>>> Sent: Tuesday, December 15, 2009 10:43 AM
>>> To: mysql@lists.mysql.com
>>> Subject: Count records in join
>>>
>>> Hi,
>>>
>>> I am stuck with a supposedly simple query. I have two tables:
>>>
>>> PROGS
>>> id_prog
>>> name
>>>
>>> EVENTS
>>> id
>>> id_prog
>>> name
>>>
>>> How can I list all records from PROGS with a count of how many events each
>>> has? I want to find the progs that are empty.
>>>
>>> I remember something about using NULL, but I can't remember. :-P
>>>
>>> Thanks.
>>>
>>> MV
mysql-5.1.41 - clean build issue
(resending with subject)

Hi,

I am trying to build mysql-5.1.41 from source, but it is failing with this error:

I. -O2 -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DNDEBUG -Wall -pipe -march=pentium3 -mtune=prescott -MD -m32 -DUNIV_LINUX -MT my_user.o -MD -MP -MF .deps/my_user.Tpo -c -o my_user.o my_user.c
mv -f .deps/my_user.Tpo .deps/my_user.Po
gmake[3]: *** No rule to make target `../storage/archive/libarchive.a', needed by `mysqld'. Stop.
gmake[3]: Leaving directory `/usr/build/mysql_build/mysql/mysql/5.1/sql'
gmake[2]: *** [all-recursive] Error 1

It passed with 5.1.39, though. The reason I found is:

--- mysql-5.1.39/Makefile.am    2009-09-04 09:17:16.0 -0700
+++ mysql-5.1.41/Makefile.am    2009-11-04 10:27:57.0 -0800
@@ -1,4 +1,4 @@
-# Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.
+# Copyright 2000-2008 MySQL AB, 2009 Sun Microsystems, Inc.
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -24,7 +24,7 @@
 SUBDIRS = . include @docs_dirs@ @zlib_dir@ \
 @readline_topdir@ sql-common scripts \
 @pstack_dir@ \
-@sql_union_dirs@ unittest storage plugin \
+@sql_union_dirs@ unittest \
 @sql_server@ @man_dirs@ tests \
 netware @libmysqld_dirs@ \
 mysql-test support-files sql-bench @tools_dirs@ \

In mysql-5.1.41, the storage directory is not listed in SUBDIRS in Makefile.am or in the Makefile.in provided with the source, so building inside the sql directory fails expecting a library under the storage directory.

Was this intentionally removed between 5.1.39 and 5.1.41? If so, please let me know how to get a clean build.

Thanks,
alagar

A.Alagarsamy

The INTERNET now has a personality. YOURS! See your Yahoo! Homepage. http://in.yahoo.com/
Re: Count records in join
Thanks all for the feedback. Here's what I did: select p.id_prog, count(r.id_event) e from programas p left join events r on (p.id_prog = r.id_prog) group by p.id_prog This gives me a list of all the distinct progs with a count of how many events each has. I then delete the empty ones. It would be nice to be able to delete the empty ones in the same query. MV On Wed, Dec 16, 2009 at 1:48 PM, Johan De Meersman wrote: > If the aim is purely to find the progs without events, it might be more > efficient to use something like > > select * from progs where not exists (select id_prog from events where > id_prog = progs.id_prog); > > My syntax might be off, check "not exists" documentation for more info. > > > On Tue, Dec 15, 2009 at 8:14 PM, Gavin Towey wrote: > >> Hi Miguel, >> >> You'll need to use a LEFT JOIN; that will show all records that match, and a >> row in the second table with all values NULL where there is no match. Then >> you find all those rows that have no match in your WHERE clause. >> >> Regards, >> Gavin Towey >> >> -Original Message- >> From: Miguel Vaz [mailto:pagong...@gmail.com] >> Sent: Tuesday, December 15, 2009 10:43 AM >> To: mysql@lists.mysql.com >> Subject: Count records in join >> >> Hi, >> >> I am stuck with a supposedly simple query: >> >> - I have two tables: >> >> PROGS >> id_prog >> name >> >> EVENTS >> id >> id_prog >> name >> >> How can I list all records from PROGS with a count of how many events each >> has? I want to find the progs that are empty. >> >> I remember something about using NULL, but I can't remember. :-P >> >> Thanks. >> >> MV
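Editor's note: the "delete the empty ones in the same query" wish above is doable in one statement. Here is a minimal runnable sketch, using Python's sqlite3 purely as a stand-in for MySQL (table and column names follow the thread; the thread's schema lists the events key as `id` but the query counts `id_event`, so `id_event` is used here). MySQL additionally accepts a multi-table `DELETE p FROM progs p LEFT JOIN events e ... WHERE e.id_prog IS NULL` form, which SQLite does not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE progs  (id_prog INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (id_event INTEGER PRIMARY KEY, id_prog INTEGER, name TEXT);
    INSERT INTO progs  VALUES (1, 'prog with events'), (2, 'empty prog');
    INSERT INTO events VALUES (1, 1, 'event a'), (2, 1, 'event b');
""")

# The count query from the thread, grouped by the prog key: one row per
# prog, with 0 for progs that have no events (COUNT of a NULL column is 0).
rows = conn.execute("""
    SELECT p.id_prog, COUNT(r.id_event) AS e
    FROM progs p LEFT JOIN events r ON p.id_prog = r.id_prog
    GROUP BY p.id_prog
    ORDER BY p.id_prog
""").fetchall()

# Delete the empty progs in a single statement via NOT EXISTS.
conn.execute("""
    DELETE FROM progs
    WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id_prog = progs.id_prog)
""")
remaining = [r[0] for r in conn.execute("SELECT id_prog FROM progs ORDER BY id_prog")]
```

Here `rows` comes back as `[(1, 2), (2, 0)]` and only prog 1 survives the delete.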
Re: Count records in join
If the aim is purely to find the progs without events, it might be more efficient to use something like select * from progs where not exists (select id_prog from events where id_prog = progs.id_prog); My syntax might be off, check "not exists" documentation for more info. On Tue, Dec 15, 2009 at 8:14 PM, Gavin Towey wrote: > Hi Miguel, > > You'll need to use a LEFT JOIN; that will show all records that match, and a > row in the second table with all values NULL where there is no match. Then > you find all those rows that have no match in your WHERE clause. > > Regards, > Gavin Towey > > -Original Message- > From: Miguel Vaz [mailto:pagong...@gmail.com] > Sent: Tuesday, December 15, 2009 10:43 AM > To: mysql@lists.mysql.com > Subject: Count records in join > > Hi, > > I am stuck with a supposedly simple query: > > - I have two tables: > > PROGS > id_prog > name > > EVENTS > id > id_prog > name > > How can I list all records from PROGS with a count of how many events each > has? I want to find the progs that are empty. > > I remember something about using NULL, but I can't remember. :-P > > Thanks. > > MV
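Editor's note: the two suggestions in this thread (NOT EXISTS versus LEFT JOIN ... IS NULL) return the same rows. A quick sketch, again using sqlite3 only as a runnable illustration with the thread's table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE progs  (id_prog INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (id INTEGER PRIMARY KEY, id_prog INTEGER, name TEXT);
    INSERT INTO progs  VALUES (1, 'has events'), (2, 'empty'), (3, 'also empty');
    INSERT INTO events VALUES (1, 1, 'event');
""")

# Johan's suggestion, with the keyword spelled "exists".
not_exists = conn.execute("""
    SELECT id_prog FROM progs
    WHERE NOT EXISTS (SELECT 1 FROM events WHERE events.id_prog = progs.id_prog)
    ORDER BY id_prog
""").fetchall()

# Gavin's LEFT JOIN form: unmatched progs come back with NULLs on the
# events side, so filtering on IS NULL isolates the empty ones.
left_join = conn.execute("""
    SELECT p.id_prog FROM progs p
    LEFT JOIN events e ON p.id_prog = e.id_prog
    WHERE e.id_prog IS NULL
    ORDER BY p.id_prog
""").fetchall()
```

Both queries yield progs 2 and 3; on MySQL the optimizer may treat the two forms differently, so it is worth comparing EXPLAIN output on real data.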
AW: InnoDB Corrupted databases (innodb_force_recovery not working)
Baron, Thanks a lot for your reply - I am trying out these tools today. Lukas Lukas C. C. Hempel CEO Delux Group - Approaching future. www.delux.me Postfach 10 02 10 D-48051 Münster Mail: lu...@delux.me This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden. Delux Credit: www.delux-credit.com Delux Host: www.delux-host.com Delux Software: www.delux.me Admin panel: http://admin.delux-host.com Paypal: pay...@delux-host.com Delux is a company of Lukas Hempel, Tax ID: 337/5105/2023 -Original Message- From: baron.schwa...@gmail.com [mailto:baron.schwa...@gmail.com] On Behalf Of Baron Schwartz Sent: Monday, December 14, 2009 22:57 To: Lukas C. C. Hempel Cc: mysql@lists.mysql.com Subject: Re: InnoDB Corrupted databases (innodb_force_recovery not working) Lukas, If you can't get innodb_force_recovery to work, then you might have to try to recover the data with these tools: http://code.google.com/p/innodb-tools/ Regards Baron
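Editor's note: for readers hitting the same wall, innodb_force_recovery is set in the server's option file and takes levels 1 through 6, each more aggressive than the last; the manual warns that levels 4 and above can permanently corrupt data files, so raise it only far enough to dump the data and rebuild. A minimal sketch:

```ini
# my.cnf -- temporary recovery setting; remove once the data is dumped.
[mysqld]
# Start at 1 (run even if a corrupt page is found) and raise step by
# step (2, 3, ...) only as far as needed to get mysqldump to complete.
# Levels >= 4 risk permanent damage to the data files.
innodb_force_recovery = 1
```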
Re: Help saving MySQL
It's still not too late to save MySQL, and everyone who is using MySQL can help make a real difference. Please visit http://monty-says.blogspot.com/2009/12/help-saving-mysql.html and write a message to the EC! Regards, Monty Guess you don't want them to write letters like this? http://kirkwylie.blogspot.com/2009/12/my-open-letter-to-european-competition.html With regards, Martijn Tonies Upscene Productions http://www.upscene.com Download Database Workbench for Oracle, MS SQL Server, Sybase SQL Anywhere, MySQL, InterBase, NexusDB and Firebird! Database questions? Check the forum: http://www.databasedevelopmentforum.com