It's still not too late to save MySQL, and everyone who is using MySQL
can help make a real difference.
Please visit
http://monty-says.blogspot.com/2009/12/help-saving-mysql.html
and write a message to EC!
Regards,
Monty
Guess you don't want them to write letters like this?
Baron,
Thanks a lot for your reply - I am trying out these tools today.
Lukas
Lukas C. C. Hempel
CEO
Delux Group - Approaching future.
www.delux.me
Postfach 10 02 10
D-48051 Münster
Mail: lu...@delux.me
If the aim is purely to find the progs without events, it might be more
efficient to use something like
select * from progs where not exists (select id_prog from events where
id_prog = progs.id_prog);
My syntax might be off; check the NOT EXISTS documentation for more info.
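A minimal runnable sketch of the anti-join suggested above, using Python's sqlite3 in place of MySQL. The table and column names (progs, events, id_prog) come from the thread; the sample rows are invented for illustration.

```python
# Demonstrate NOT EXISTS: find progs with no matching row in events.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE progs (id_prog INTEGER PRIMARY KEY);
    CREATE TABLE events (id_event INTEGER PRIMARY KEY, id_prog INTEGER);
    INSERT INTO progs (id_prog) VALUES (1), (2), (3);
    INSERT INTO events (id_event, id_prog) VALUES (10, 1), (11, 1), (12, 3);
""")

# The correlated subquery matches each prog against its events;
# NOT EXISTS keeps only the progs with no events at all.
rows = conn.execute("""
    SELECT * FROM progs p
    WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id_prog = p.id_prog)
""").fetchall()
print(rows)  # → [(2,)] : only prog 2 has no events
```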
On Tue, Dec 15, 2009 at
Thanks all for the feedback. Here's what I did:
select p.id_prog, count(r.id_event) e from programas p left join events r
on (p.id_prog = r.id_prog) group by p.id_prog
This gives me a list of all the distinct progs with a count of how many
events on each. I then delete the empty ones.
It would be
(resending with subject)
Hi,
I am trying to build mysql-5.1.41 from source,
but it is failing with this error:
-I. -O2 -pipe -m32 -march=i386 -mtune=pentium4 -D_GNU_SOURCE
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DNDEBUG -Wall -pipe -march=pentium3
-mtune=prescott -MD -m32 -DUNIV_LINUX
-Original Message-
From: Miguel Vaz [mailto:pagong...@gmail.com]
Sent: Wednesday, December 16, 2009 9:39 AM
To: Johan De Meersman
Cc: Gavin Towey; mysql@lists.mysql.com
Subject: Re: Count records in join
Thanks all for the feedback. Here's what I did:
select p.id_prog,count(r.id_event) e
Yes, that would do what you mentioned - show all programs with a count of
events - but I need the opposite: show (and delete) all that don't have any
events. Well, I just have to use IS NULL instead. Thanks.
MV
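A minimal runnable sketch of the LEFT JOIN ... IS NULL variant mentioned above, again using Python's sqlite3 in place of MySQL; the names follow the thread, and the sample rows are invented.

```python
# Demonstrate the LEFT JOIN anti-join: programs with no events come back
# with r.id_event NULL, so filtering on IS NULL isolates them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE programas (id_prog INTEGER PRIMARY KEY);
    CREATE TABLE events (id_event INTEGER PRIMARY KEY, id_prog INTEGER);
    INSERT INTO programas (id_prog) VALUES (1), (2), (3);
    INSERT INTO events (id_event, id_prog) VALUES (10, 1), (11, 3);
""")

orphans = conn.execute("""
    SELECT p.id_prog
    FROM programas p LEFT JOIN events r ON p.id_prog = r.id_prog
    WHERE r.id_event IS NULL
""").fetchall()
print(orphans)  # → [(2,)] : program 2 has no events
```

This is equivalent to the NOT EXISTS form for finding the orphans, and the result set can then feed a DELETE.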
On Wed, Dec 16, 2009 at 3:17 PM, Jerry Schwartz
jschwa...@the-infoshop.com wrote:
Hi,
I plan to migrate a 32 bit MySQL installation to a 64bit host.
(mysql-5.0.77)
Both servers are RedHat EL 5.4 with the original mysql rpm.
The simple plan was to shut down mysql and then copy the db files from
/var/lib/mysql/ from host to host.
Any suggestions? Or comments? Or should I
Hi all.
I am running into a very frustrating problem trying to create a stored
procedure.
I had originally assumed I was using bad syntax, but even examples copied
and pasted directly from the manual are giving the same error.
mysql> CREATE DEFINER = 'walton'@'localhost' PROCEDURE
You need to use
DELIMITER //
or some other symbol besides ; to change the client's end-of-statement
delimiter. Otherwise the client ends the statement at the first ; inside the
procedure body, before the definition is complete.
This is described in the manual on that same page.
Regards
Gavin Towey
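The pattern from the manual looks like this. A minimal sketch; the procedure name and body are invented for illustration:

```sql
DELIMITER //

CREATE PROCEDURE hello_world()
BEGIN
  -- This ; no longer ends the client statement,
  -- because the delimiter is now //.
  SELECT 'Hello, world';
END//

-- Restore the default delimiter afterwards.
DELIMITER ;
```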
-Original Message-
From: Walton Hoops [mailto:wal...@vyper.hopto.org]
Hi all.
I am running into a very frustrating problem trying to create a stored
procedure.
I had originally assumed I was using bad syntax, but even examples copied
and pasted directly from the manual are
Hi all,
I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5~2 days to load the same databases. I am guessing this is, at
There are scripts out there, such as the Maatkit mk-parallel-dump/restore,
that can speed up this process by running in parallel.
However, if you're doing this every week on that large a dataset, I'd just
use filesystem snapshots. Your backup/restore would then only take as long
as it takes
I have a table with 2 million rows of geographic points (latitude, longitude).
Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 10
closest points (records) from that table? Currently, I'm using a simple
two-column index to speed up queries:
CREATE TABLE `places` (
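The naive version of that query can be sketched as follows, using Python's sqlite3 in place of MySQL. The `lat`/`lon` columns are an assumption (the CREATE TABLE is cut off above); note a plain B-tree index cannot serve this ORDER BY, which is why the thread turns to spatial indexes next.

```python
# Rank points by squared planar distance to a center and take the closest N.
# Squared distance is enough for ranking; it avoids a sqrt in the SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")
conn.executemany("INSERT INTO places (lat, lon) VALUES (?, ?)",
                 [(52.1, -113.8),      # id 1: near the center
                  (10.0, 20.0),        # id 2: far away
                  (51.95, -113.95),    # id 3: nearest
                  (0.0, 0.0)])         # id 4: far away

center_lat, center_lon = 52.0, -113.9
rows = conn.execute("""
    SELECT id, (lat - ?) * (lat - ?) + (lon - ?) * (lon - ?) AS d2
    FROM places ORDER BY d2 LIMIT 10
""", (center_lat, center_lat, center_lon, center_lon)).fetchall()
print([r[0] for r in rows])  # → [3, 1, 4, 2]
```

With 2 million rows this forces a full scan per query; it is shown only as the baseline the spatial-index answer improves on.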
Gavin Towey wrote:
There are scripts out there, such as the Maatkit mk-parallel-dump/restore,
that can speed up this process by running in parallel.
However, if you're doing this every week on that large a dataset, I'd just
use filesystem snapshots. Your backup/restore would then only take
Yes, spatial indexes are very fast.
The query would be something like:
SET @center = GeomFromText('POINT(37.372241 -122.021671)');
SET @radius = 0.005;
SET @bbox = GeomFromText(CONCAT('POLYGON((',
X(@center) - @radius, ' ', Y(@center) - @radius, ',',
X(@center) + @radius, ' ', Y(@center) -
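The SET @bbox statement is cut off in the archive; presumably the CONCAT continues with the remaining corners of the square. The same bounding-box construction can be sketched in Python for illustration (corner order follows the truncated CONCAT; the function name is invented):

```python
# Build the WKT polygon for a square bounding box around a center point,
# mirroring the truncated SET @bbox = GeomFromText(CONCAT(...)) above.
def bbox_wkt(x, y, radius):
    corners = [
        (x - radius, y - radius),
        (x + radius, y - radius),
        (x + radius, y + radius),
        (x - radius, y + radius),
        (x - radius, y - radius),  # WKT rings must repeat the first corner
    ]
    return "POLYGON((" + ",".join(f"{cx} {cy}" for cx, cy in corners) + "))"

print(bbox_wkt(37.372241, -122.021671, 0.005))
```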
I don't think so, I'm pretty sure you have to use mk-parallel-dump to get the
data in a format it wants. The docs are online though.
Regards,
Gavin Towey
-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 4:35 PM
To: Gavin Towey
Cc:
Madison Kelly wrote:
Hi all,
I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5~2 days to load the same databases. I am