Marten, in my experience these are most often temporary files left over from
an incomplete operation.
They might be from a long-running query that was canceled, or from a table
repair operation that errored out or was interrupted.
It is generally safe to delete them, provided you take care
Chris, this should already be pretty fast as it is using a primary key in
its entirety, and as long as the index size remains manageable MySQL will be
able to keep it in memory for fast access.
That said, doing away with the aggregate function might speed things up just
slightly.
You don't care h
Waynn, I've used both schemes 1 and 2 as you describe, and in my experience
2 is the best way to go. It's easy to scale up as you add users and
settings, and it's easy to make changes if the meaning of settings should
change (i.e. you need to do a backend change to people's settings).
#1 is harde
rn is really when we go to innodb, since mysqlhotcopy won't work
> then if I understand the documentation
> Pierre
>
> *Dan Buettner <[EMAIL PROTECTED]>* wrote:
>
> Are you currently dumping raw SQL? If so, how? One table at a time, or
> by obtaining a lock on a
Richard, it's possible, and your syntax is pretty close.
Try this:
INSERT INTO info_stamp
(fav_colour, hobby, stamp_date, firstname,
last_name, sexe, age, username, email, insc_date)
SELECT '$fav_colour', '$hobby', $time, a.firstname, a.last_name, a.sexe,
a.age, a.username, b.email, b.inscription_date
Are you currently dumping raw SQL? If so, how? One table at a time, or by
obtaining a lock on all tables?
If you're getting a lock on all tables now, I don't think anything would
change if you switched to a transactional engine like InnoDB and did the
same thing. The database is "frozen" for a
Mark, is the 'secs' column the offset from the minimum value of the
timestamp column?
If so, you might try something like this:
SELECT UNIX_TIMESTAMP(MIN(timestamp)) INTO @min_timestamp FROM my_table;
SELECT uid, timestamp,
UNIX_TIMESTAMP(timestamp) - @min_timestamp AS secs
FROM my_table
ORDER B
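The offset idea above can be sketched end-to-end with SQLite via Python's sqlite3, purely for illustration: SQLite's strftime('%s', ...) stands in for MySQL's UNIX_TIMESTAMP(), the @min_timestamp user variable is folded into a subquery, and the table and column names are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (uid INTEGER, ts TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [(1, "2007-01-01 00:00:00"), (2, "2007-01-01 00:01:30")])

# secs = each row's epoch seconds minus the table-wide minimum
rows = conn.execute("""
    SELECT uid,
           CAST(strftime('%s', ts) AS INTEGER)
             - (SELECT MIN(CAST(strftime('%s', ts) AS INTEGER)) FROM my_table)
             AS secs
    FROM my_table
    ORDER BY uid
""").fetchall()
print(rows)  # [(1, 0), (2, 90)]
```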
I'd strongly recommend setting up replication, and then taking your backups
from the replica.
mysqlhotcopy works great, I used it for years myself, but it does require
"freezing" your database while the copy happens. And no matter how you do
it, copying 20 GB takes a little bit of time.
Dan
On
Afan, you'll need to have a date and time column in the database storing a
"created at" value that is set on INSERT and then not changed.
Assuming you have such a column - let's call it "created_at" - you can run
queries like this:
/* to get count of records created on Mondays */
SELECT COUNT(*)
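A minimal sketch of the day-of-week count, using SQLite via Python's sqlite3 for illustration: SQLite's strftime('%w', ...) returns '1' for Monday, where MySQL would use DAYOFWEEK() or DAYNAME(); the table name is made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (created_at TEXT)")
# 2007-08-13 and 2007-08-20 were Mondays; 2007-08-14 was a Tuesday
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2007-08-13",), ("2007-08-14",), ("2007-08-20",)])

# strftime('%w') yields 0=Sunday .. 6=Saturday, so '1' means Monday
mondays = conn.execute(
    "SELECT COUNT(*) FROM t WHERE strftime('%w', created_at) = '1'"
).fetchone()[0]
print(mondays)  # 2
```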
Jim, MyISAM tables tend to work best in situations where the proportion of
reads to writes is either very high or very low.
That is to say, either the data doesn't change much but it's being accessed
a lot, or the data changes a lot but it's rarely accessed.
MyISAM is quite a bit faster than Inn
Hello Renito -
What you are looking for is MySQL's OPTIMIZE TABLE statement:
http://dev.mysql.com/doc/refman/5.0/en/optimize-table.html
It does exactly what you describe.
You can implement this manually, via shell script on a timer, etc. I have
written a multi-threaded perl solution which will c
I see one conflict that could be causing your "lost connection" message -
you are specifying a 1 GB "max_allowed_packet" for the client, but the
server is configured to only support 64 MB.
You should adjust the "max_allowed_packet = 64M" setting on the server to
match or exceed what you specify on
Erik, I think the main reason your query is running slowly is the use of a
subselect. MySQL does not generally perform well with subselects, though
work continues in that area.
There is also a problem/situation in MySQL in that you can't use MAX/GROUP
BY functions quite the way you can in other d
Manivannan -
In a word, no. It's database software, not, um, cron. :)
There are scheduled events in 5.1, but that is more of a trigger-like
functionality than what you need, which is a scheduled task to save your
data somewhere. http://dev.mysql.com/doc/refman/5.1/en/events.html
You could loo
Chris, a couple of thoughts -
First, your index on the section is doing you no good (at this time) since
all the values are the same. You may already know that, but thought I'd
mention it.
Second, my theory on why query #1 is faster - if all your prices range from
1 up, and you're querying for p
Hi Jim -
I'm using MySQL on Fedora 6 as well, with no performance problems. Did not
need to do anything to speed it up on Fedora.
It's difficult to answer the question why one is faster than the other, as
there are any number of potential differences. Some more specifics about
your setup (hardw
It's a database, not a scripting language ... :)
You can run a simple cron entry like this:
0 4 * * * /path/to/mysql -u USER -pPASS -D DATABASE -e "delete from contacts
where TO_DAYS(CURDATE()) - TO_DAYS(today) >= 30 and
status != 'Y';"
so at 4 AM each day your SQL would be executed. For long S
Michael, it looks to me like your root partition is absolutely 100% chock
full. Am I misunderstanding your request for help? Sounds like you are
saying you think you have nothing in that partition - but your 'df -h'
command is showing 0 bytes available in /.
Dan
On 8/13/07, Michael Habashy <[E
Good morning, Willy -
If you're using some sort of scripting language, like PHP, this would be
easily done by starting a variable at the beginning of the date range, then
repeating the below query and incrementing the variable by 10 minutes each
time through a while loop until you've passed the ot
Miguel, it's possible, but it's not normal procedure in most systems I've
worked with. Separate databases are generally used for separate "systems".
Reasons include more work/steps to grant privileges to users, more
difficulty establishing a test system (you need a whole separate MySQL
instance i
David, you've hit the nail on the head.
Dan
On 7/5/07, David T. Ashley <[EMAIL PROTECTED]> wrote:
I'm an old Microsoft Access user, so I just need to double-check on this
with MySQL:
I want to relate two tables (let's call them "users" and "priveleges") in
a
many:many way. That is, each use
What I've done in situations like this is write SQL that generates the SQL I
need. I then pipe out to a file and pipe it back in, or just pipe from one
instance of mysql to another.
Example SQL:
SELECT
CONCAT('UPDATE main SET ', field, ' = ', id, ';')
AS update_sql
FROM map
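A hedged sketch of the generate-SQL-with-SQL trick, using SQLite via Python's sqlite3: SQLite concatenates with || where MySQL uses CONCAT(), and the main/map table and column names follow the example above (the inserted rows are invented).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE map (field TEXT, id INTEGER)")
conn.executemany("INSERT INTO map VALUES (?, ?)", [("colour", 7), ("hobby", 9)])

# Each map row becomes one UPDATE statement, ready to pipe back into mysql
stmts = [r[0] for r in conn.execute(
    "SELECT 'UPDATE main SET ' || field || ' = ' || id || ';' FROM map")]
print(stmts)
```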
Example command to
You may have encountered a bug; one thing you could do to avoid having to
downgrade is specify the column number you wish to sort on (4th column in
your case), as in:
ORDER BY 4 DESC LIMIT 10;
Dan
On 6/22/07, Andrew Carlson <[EMAIL PROTECTED]> wrote:
I upgraded to Mysql 5.0.42 from 4.1.14 ye
Hi Seth -
I believe MySQL's official position is that you should always dump-and-load
the data when upgrading major or minor versions (4.0 to 4.1, 4.1 to 5.0,
etc.)
I've done it both ways (dump-load and just moving table files) and have
never had a problem with either, even when moving files acr
Ben, there's a slow query log feature that may be just what you're looking
for:
http://dev.mysql.com/doc/refman/4.1/en/slow-query-log.html
There's an analysis script that will show you the most popular slow queries,
too: mysqldumpslow.
You can take those queries and use the EXPLAIN feature to
I would try adding an index on the freetags.tag column as you are querying
against that column with
WHERE tag = 'shot'
HTH,
Dan
On 6/19/07, Kishore Jalleda <[EMAIL PROTECTED]> wrote:
Hi everybody,
we have this super slow query which is going through
more
than 10 million ro
If they're both numeric fields, then it's as easy as:
SELECT field1-field2 AS difference FROM table;
and if you always want a positive number:
SELECT ABS(field1-field2) AS difference FROM table;
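For illustration, both queries run essentially verbatim under SQLite via Python's sqlite3 (table and field names are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field1 REAL, field2 REAL)")
conn.execute("INSERT INTO t VALUES (3, 10)")

# Plain difference can be negative; ABS() always yields a positive number
diff, absdiff = conn.execute(
    "SELECT field1 - field2, ABS(field1 - field2) FROM t").fetchone()
print(diff, absdiff)  # -7.0 7.0
```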
HTH,
Dan
On 6/6/07, Jason Pruim <[EMAIL PROTECTED]> wrote:
Okay, so I have been gooling all over t
Perrin, I like #3 the best.
#1 - it's not a good approach to "hope" your database keeps up. There are
fairly common situations that can come up where you never know how long
something will take - unusually high traffic, table check and repair, a bulk
load into the same or another database on tha
Tim, it's a gnarly problem that most DBAs struggle with in some form or
another, whether using MySQL or another database package.
If you're using only MyISAM tables, MySQL's free, included 'mysqlhotcopy'
script might work for you, as it's generally a bit faster than mysqldump in
my experience. I
Hi Kenneth -
it appears that you need to use an explicit 'commit' command when using
InnoDB tables and Python.
Something like this:
try:
    cursor.execute("INSERT INTO Test1 (s1, i1) VALUES ('Now is the time', 5)")
    db.commit()
Found this on http://www.serpia.org/mysql
Hi Edoardo -
I think you've been misinformed; MyISAM tables do not support simultaneous
read and write operations. MyISAM is a multiple reader/single writer, table
locking design. You may want to switch to InnoDB tables for that
functionality.
http://dev.mysql.com/doc/refman/5.0/en/locking-iss
Thanks Jerome.
With the high number of "aborted_clients", it seems like you might have
networking issues:
http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html
Dan
Hi JM -
Can you send us a few things?
1 - the exact error message you get
2 - the output of "SHOW VARIABLES;" from a mysql session
3 - the output of "SHOW STATUS;" from a mysql session
What have you tried so far in terms of troubleshooting that has been
unsuccessful?
Any recent changes on the
Some additional resources for fixing corrupt MyISAM tables:
http://dev.mysql.com/doc/refman/5.0/en/myisam-table-problems.html
http://dev.mysql.com/doc/refman/5.0/en/myisamchk.html
http://dev.mysql.com/doc/refman/5.0/en/table-maintenance.html , especially:
http://dev.mysql.com/doc/refman/5.0/en/re
Hi John -
using --xml or --html as an argument and redirecting to a file seems to work
here. As in,
mysql -u me -psecret -D database --html -e "select * from that_table" >
~/test_file.html
HTH,
Dan
On 5/8/07, John Kebbel <[EMAIL PROTECTED]> wrote:
When I try using the --xml or --html opti
Michael, this looks pretty decent overall.
I'm a big fan of fully descriptive table and column names, avoiding
abbreviations except where truly needed, so I personally would spell out
"claim" and "claimant" for example. I also like to separate words in table
and column names with underscores, wh
it's not
always a no-brainer switch.
Dan
On 5/2/07, Brian Dunning <[EMAIL PROTECTED]> wrote:
The table is MyISAM, does that matter?
On May 2, 2007, at 7:28 AM, Dan Buettner wrote:
> A few observations:
>
> 1 - if the table is in the InnoDB format, you aren't going to l
A few observations:
1 - if the table is in the InnoDB format, you aren't going to lock up their
server, as InnoDB doesn't do table locking. SHOW TABLE STATUS LIKE
'tablename' will tell you.
2 - Ryan's mysqldump script looks useful - also, there's a little-used
option with mysqldump that lets yo
The smaller Dells, like the 28xx and 18xx series, are exactly what I
consider commodity hardware. Other companies (Sun, HP, Gateway, IBM, Apple,
others) make comparable equipment at comparable prices.
Whether you need to spend the money on redundant power supplies and a
redundant drive setup is
Martijn, I've run various versions of MySQL from 3.2 through 5.1 on various
platforms (Mac OS X, FreeBSD, RedHat/Fedora Linux, Windows, Solaris),
exporting and importing as needed, and haven't encountered any problems.
I've even transferred the MyISAM tables between machines in a few cases,
rather
Michael, here's what I can tell you -
Some people will advise you against storing documents in tables - claim it
makes the data harder to serve up, etc. I agree it's quite simple to set up
a folder structure with your files on the filesystem and point Apache or IIS
to it while tracking metadata
That's a much larger file than most any text editor would work with, in my
experience. I'd give BBEdit on the Mac a try if nothing else, but my
expectations would not be too high.
For examining and altering a file that large I'd try grep, awk, sed, perl,
etc.
Barring that, one thing you might d
Hi Schalk -
Having just done this, it was not too hard.
We used Ruby on Rails (RoR) to create a schema file from the MSSQL database,
in order to recreate it in MySQL. If your database is simple enough and/or
you have all the SQL used to create tables views etc., you don't need to do
this.
Make
Richard, there's no inherent problem around 60 MB - I routinely dump data
ranging from a few KB to a few GB.
One thing I note is that you are using the mysql command, which is the
interactive database client. You want to use mysqldump, the client program
that dumps data from the database in SQL
Rod, I'm speculating here, but I think what's happening is that too many of
your entries have a PRODUCT_ID that starts with 'DRM' to make your index
useful. When MySQL expects an index query to match more than around 30% of
all rows, it will instead proceed with a table scan, estimating that to b
ere are probably others
that are hired on the same date...
On Apr 4, 2007, at 1:51 PM, Dan Buettner wrote:
> James, one option would be to run a query to find the number of
> people in
> the list ahead of him, rather than determining position within the
> result
> set.
>
> As
James, one option would be to run a query to find the number of people in
the list ahead of him, rather than determining position within the result
set.
As in:
SELECT COUNT(*) FROM some_table
WHERE state = "Maine"
AND hire_date < (SELECT hire_date FROM some_table
WHERE last_name = "Smith"
AND fi
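The count-rows-ahead approach can be sketched with SQLite via Python's sqlite3; the emp table, names, and dates here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (last_name TEXT, state TEXT, hire_date TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    ("Adams", "Maine", "2001-01-01"),
    ("Baker", "Maine", "2002-01-01"),
    ("Smith", "Maine", "2003-01-01"),
])

# Count the people hired before Smith; position = that count + 1
ahead = conn.execute("""
    SELECT COUNT(*) FROM emp
    WHERE state = 'Maine'
      AND hire_date < (SELECT hire_date FROM emp WHERE last_name = 'Smith')
""").fetchone()[0]
print(ahead + 1)  # Smith's position: 3
```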
In the category of terrible, horrible, no good, very bad (but at least
documented) software behavior, I bumped into this today:
http://bugs.mysql.com/bug.php?id=14770
where the LOAD DATA INFILE command does not respect the default value of a
column if no value is supplied in the file. Instead, it
Another solution might be something like:
SELECT * FROM tablename
WHERE id NOT IN (SELECT id FROM tablename WHERE some criteria ORDER BY
something LIMIT 20)
ORDER BY something
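A runnable sketch of the NOT IN exclusion, using SQLite via Python's sqlite3 with made-up table and column names. One caveat: unlike SQLite, many MySQL versions reject LIMIT inside an IN subquery, so test this on your version first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 40), (2, 30), (3, 20), (4, 10)])

# Skip the first 2 rows of the ordering, keep the rest in the same order
rows = [r[0] for r in conn.execute("""
    SELECT id FROM t
    WHERE id NOT IN (SELECT id FROM t ORDER BY score DESC LIMIT 2)
    ORDER BY score DESC
""")]
print(rows)  # [3, 4]
```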
Dan
On 3/20/07, Rolando Edwards <[EMAIL PROTECTED]> wrote:
SELECT ... FROM ... LIMIT 20,1;
100 million is a
Brian, the online MySQL documentation is very complete and easy to read.
That said, you do kind of have to know what you're looking for! I'm not
sure what to recommend for a guide to beginning SQL, sorry, others may have
some thoughts.
You are going down the right road with an aggregate function
NTP won't solve this problem for you.
NTP, like most computer clocks, knows nothing about daylight saving
time or time zones. What these clocks track is how many seconds have elapsed
since "the epoch". The epoch, in the case of most UNIX-based OSes, is
midnight January 1, 1970. I think Win
Sorry for the double-post, but another thought if it's not obvious already:
with such a large dataset, making sure to employ striping and high-RPM disks
will be important for speed. I'd recommend looking at RAID 1+0 with dual
SCSI channels and 15K RPM disks if you're looking in the Dell PowerEdge
Rich, one consideration is of course disk space - you'll want to make sure
that 2.5 billion * row length will fit on your disks.
Offhand, you've got (referencing
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html)
78 bytes in 3 varchars (50 + 10 + 15, + 3 for internal use)
16 bytes
Olexandr, I fear that model may not scale well. Certainly having data
spread across so many different tables reduces the effectiveness of memory
caching.
I recommend you alter the design to have 30 tables total, each with a column
for user id, rather than 30 tables per user. This may improve pe
Ian, based on your needs (regularly deleting everything more than X
months old), I recommend you look into using the MERGE engine.
Essentially, it is multiple MyISAM tables that appear as one, and
lopping off the oldest data is as simple as redefining the MERGE and
then dropping the oldest table.
tus,ptsID)
INDEX timeframe (mid, status, time)
Dan Buettner-2 wrote:
>
> Andrew, can you post the result of EXPLAIN for your query?
> Minus the "FORCE INDEX" too. Also the structure of the other 2 tables
> would be helpful as well.
>
> Thanks,
> Dan
>
>
Andrew, can you post the result of EXPLAIN for your query?
Minus the "FORCE INDEX" too. Also the structure of the other 2 tables
would be helpful as well.
Thanks,
Dan
On 1/22/07, altendew <[EMAIL PROTECTED]> wrote:
--- ptsSignups --
id int(20) No
mid int(20) No 0
ptsID int(20) No 0
pps dou
ATE_ADD('2000-01-01 00:00:00', INTERVAL $counter HOUR) );\n";
$counter++;
}
END
then run
./populate_hours.pl | mysql -h host -u user -ppassword -D database
you'll have a table full of hours.
Dan
On 1/10/07, Dan Buettner <[EMAIL PROTECTED]> wrote:
One of the m
One of the most puzzling and challenging things to do with SQL can be
to show what's NOT there, as you're trying to do.
Many people opt to do such a report in their favorite scripting
language for this reason, as one can easily increment timestamps by a
given amount and re-do the query. Can be r
Brian, can you post the output of EXPLAIN for your query?
I.e.,
EXPLAIN <your query here>;
At first glance, your query should be able to use the 'stamp-source'
index since stamp is the first column indexed. However, I wonder if
wrapping the "NOW() - INTERVAL 14 DAY" inside a "DATE()" would help.
MySQL may be ca
Guillermo -
You likely want mysqldump -
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
Dan
On 1/5/07, Guillermo <[EMAIL PROTECTED]> wrote:
Hello, i have this problem:
I have two databases in diferent servers. I need to copy the contents from some tables in Server1
to Server2, so i tr
Olaf, not a silly question at all. You can indeed save space by using
different forms of integer.
See
http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
and
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
From that second page:
Storage Requirements for Numeric
Can you post the output of SHOW PROCESSLIST during a stall? That
would help us help you, I think.
Dan
On 1/3/07, Michael Katz <[EMAIL PROTECTED]> wrote:
Dan Buettner wrote:
> Michael -
>
> You should see all running queries in the output of SHOW PROCESSLIST.
>
> MyISA
Michael -
You should see all running queries in the output of SHOW PROCESSLIST.
MyISAM is a multiple-reader/single-writer design, also known as table
locking design - that is to say, you can have lots of threads reading
simultaneously, but only one writing. Also, when one is writing,
reading th
Regarding the format of TIMESTAMP columns, one of the user comments on
http://dev.mysql.com/doc/refman/4.1/en/timestamp-4-1.html
offers the solution below:
Posted by Kjell Arne Rekaa on April 14 2005 11:11pm
If you want the same view of a timestamp field in 4.1.x as it was in
in earlier mysql ver
Optimize table does not reclaim space for InnoDB tables in the shared
tablespace, FYI - only for those that live independently when using
the 'innodb_file_per_table' option.
On 12/20/06, Raj Shekhar <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] wrote:
> In my configuration file there was a line
y!
-Dan
On 12/18/06, Richard Reina <[EMAIL PROTECTED]> wrote:
Dan,
Thank you very much for the reply. Is there a way to do it with version
3.23.54?
Dan Buettner <[EMAIL PROTECTED]> wrote:
Yes, the LIMIT clause affects the number of rows returned, not the number of
rows evaluated.
If
Yes, the LIMIT clause affects the number of rows returned, not the number of
rows evaluated.
If you're on 4.1 or later, you could use a subselect. Assuming you
have an ID field in your table, something like this:
SELECT AVG(cost)
FROM items
WHERE id IN (SELECT id FROM items ORDER BY date DESC LIMIT 1
GRANT (ALL|SELECT|INSERT|UPDATE|DELETE|etc) ON DATABASE.* TO
'user'@'%' IDENTIFIED BY 'password'
See http://dev.mysql.com/doc/refman/5.0/en/grant.html for details.
Note that localhost is considered as a special case, not included in
the wildcard %
HTH,
Dan
On 12/13/06, Brent Anderson <[EMAIL P
This is a feature: a NULL value is an unknown value, so two
NULL values are never considered equal. It can be a little confusing, but it
makes sense when you think about it.
A UNIQUE index does ensure that non-NULL values are unique; you could
specify that your column not accept NULL values.
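The behavior is easy to demonstrate; here with SQLite via Python's sqlite3, which treats UNIQUE and NULL the same way (table and values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v TEXT UNIQUE)")
conn.execute("INSERT INTO t VALUES (NULL)")
conn.execute("INSERT INTO t VALUES (NULL)")  # fine: NULL never equals NULL
conn.execute("INSERT INTO t VALUES ('x')")

# A second non-NULL 'x', however, is rejected by the UNIQUE index
dup_rejected = False
try:
    conn.execute("INSERT INTO t VALUES ('x')")
except sqlite3.IntegrityError:
    dup_rejected = True

n_null = conn.execute("SELECT COUNT(*) FROM t WHERE v IS NULL").fetchone()[0]
print(n_null, dup_rejected)  # 2 True
```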
Dan
On
Just add a DISTINCT:
SELECT DISTINCT t1.universityID FROM SvnTBL t1 WHERE t1.universityID NOT IN
(SELECT t2.universityID FROM SvnTBL t2 WHERE t2.actionID = 2);
Dan
On 12/8/06, bruce <[EMAIL PROTECTED]> wrote:
hi peter.
thanks, the solution you gave me is close...!!
my actual data i
I see several values set to '18446744073709551615', which is an
insanely large number for any memory setting (16.7 million terabytes
unless my math is wrong; huge in any case).
There was another person on the list earlier this year who had a
similar problem with large numbers, IIRC. I'd adjust t
Try this on for size:
SELECT DISTINCT id FROM tbl
WHERE id NOT IN (SELECT id FROM tbl WHERE action = 1)
The subselect will only work in 4.1 and later I think.
Dan
On 12/8/06, bruce <[EMAIL PROTECTED]> wrote:
hi...
i'm looking at what is probably a basic question.
i have a tbl with
-id
Looks like this was sent just to me - no knowledge of it myself, but
here you go.
-Dan
-- Forwarded message --
From: Nicholas Vettese <[EMAIL PROTECTED]>
Date: Dec 8, 2006 6:29 AM
Subject: RE: How to get started on MySQL
To: Dan Buettner <[EMAIL PROTECTED]>
What
You could use triggers to check for sane values on fields and rollback
if not sane.
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
You'd need to handle the rollback in your frontend app.
Dan
On 12/8/06, Christian High <[EMAIL PROTECTED]> wrote:
Is it possible to restrict the range that
Eric, you'll likely want the community edition to get started with.
It's essentially the same software, at least for now, but community is
free whereas enterprise is cost (but includes support).
I've read good things about, and have just ordered, this:
MySQL Cookbook, 2nd edition
http://www.amazo
It's actually pretty easy -
select * from table order by creation_date desc limit 1
Now one catch to this use of LIMIT is that you only get one row - even
if there are multiple rows that share the same greatest creation_date.
Sometimes that's acceptable, sometimes not.
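Both behaviors in one runnable sketch (SQLite via Python's sqlite3; the table is invented): LIMIT 1 returns a single row even when the greatest creation date is tied, while comparing against MAX() returns every tied row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, created TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "2006-12-01"), (2, "2006-12-07"), (3, "2006-12-07")])

# LIMIT 1: exactly one row, even though ids 2 and 3 share the max date
only_one = conn.execute(
    "SELECT id FROM t ORDER BY created DESC LIMIT 1").fetchall()

# MAX() comparison: all rows that share the greatest creation date
all_ties = conn.execute(
    "SELECT id FROM t WHERE created = (SELECT MAX(created) FROM t)").fetchall()
print(len(only_one), len(all_ties))  # 1 2
```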
Dan
On 12/7/06, Tim McI
Another thought is - do you have backups running during this time?
Something that might be attempting to backup live the InnoDB files?
We had similar problems with MySQL and backup software a while ago,
though we used all-MyISAM.
Dan
On 12/7/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Hi,
Thomas, I do not think in this case that one is better than the other,
for the most part, because both require using a value computed from
the column. Computing month from a DATE field should be just as fast
as computing from a DATETIME column I would think.
Also splitting into DATE and TIME col
Sounds like you want the SOURCE command
http://dev.mysql.com/doc/refman/5.0/en/batch-commands.html
Dan
On 12/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I want to load a sql file from inside the interactive mysql console, not as a
single command line. How do I do this?
Daniel, you might look into the use of MERGE tables, which are
essentially multiple identical MyISAM tables that look like one
table.
Dan
On 11/27/06, Chris White <[EMAIL PROTECTED]> wrote:
On Monday 27 November 2006 13:50, Daniel Smith wrote:
> Assuming a decent spec server, would a simple s
Jason, in addition to Daniel's suggestions, I'll throw this out there:
I had a somewhat similar problem with a database I used to own, where
a handful of very hard-hit tables would become progressively slower
over time, despite the fact that (due to daily archiving and purging)
they were not grow
This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop. You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers. If your query has to access multiple
records spread out throughout the t
James, for a truly 24/7 site, MySQL replication is a better answer
than mysqlhotcopy. You leave your master running all the time, and
replicate data off to another "slave" server, which you back up. For
slave backups, you can use any of a few different schemes, such as
mysqlhotcopy, shut down my
On 11/17/06, John Kopanas <[EMAIL PROTECTED]> wrote:
That is what I thought. Then why would it be going at 99% CPU for over 20
minutes? What factors would cause that? We are using version 4.1.20
standard.
Would the fact that a website was still connected to DB cause the problem?
Normally, n
Scratch that; I just created a 10-row table with two VARCHAR(255) columns.
Creating a 2 col index on it took 2.09 seconds. Could take longer due
to all the other fields you have in your table but a couple of minutes
at the outside.
Dan
On 11/17/06, Dan Buettner <[EMAIL PROTECTED]> wrot
John, I would guess with about 100,000 rows, it might take a couple
minutes to create an index on two varchar-255 columns. With modern
hardware anyway. Very rough estimate.
Factors include amount of RAM, speed of disks, speed of processors,
other processes running and either locking table or jus
Sorry - what's your question?
#1 will work, or you can increase the value for the variable named in
the error message in scenario #2.
Dan
On 11/17/06, sofox <[EMAIL PROTECTED]> wrote:
Dear All,
I am using mysql-4.0.26, and I have a very large innodb table(>10G) .
When I try to moved the tabl
max_allowed_packet is the maximum size of a single SQL statement.
It's a setting on the server, as well as for the mysql command line
interface.
See http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html
I think 1 MB is the default; the maximum setting for MySQL 5 is 1 GB.
You can adjust t
Strictly an InnoDB issue.
Dan
On 11/16/06, Chaos Engine <[EMAIL PROTECTED]> wrote:
2006/11/15, Dan Buettner <[EMAIL PROTECTED]>:
> It's normal for space to not be reclaimed from the InnoDB file, yes.
> You could change your setup to use the 'innodb_file_per_t
James - you should be able to do this with the GROUP_CONCAT function:
http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html
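A minimal sketch of the idea, using SQLite via Python's sqlite3: SQLite's group_concat() is close to MySQL's GROUP_CONCAT(), though MySQL additionally supports ORDER BY and SEPARATOR inside the call. The album/track schema and rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (albumid INTEGER, title TEXT)")
conn.executemany("INSERT INTO track VALUES (?, ?)",
                 [(1, "Intro"), (1, "Outro"), (2, "Solo")])

# One output row per album, with its track titles joined into a single string
rows = conn.execute("""
    SELECT albumid, group_concat(title, ', ')
    FROM track
    GROUP BY albumid
    ORDER BY albumid
""").fetchall()
print(rows)
```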
Dan
On 11/15/06, James Eaton <[EMAIL PROTECTED]> wrote:
I have a database with roughly the following structure:
album
--
albumid INT
It's normal for space to not be reclaimed from the InnoDB file, yes.
You could change your setup to use the 'innodb_file_per_table' option
to change the behavior; with that in effect, deletes/dropped tables
etc will lead to reclaimed space after optimizing.
However, you'd also have to either drop
e use InnoDB tables where I'm at now.
Dan
On 11/14/06, Tim Lucia <[EMAIL PROTECTED]> wrote:
Is mysqlhotcopy still considered "beta"? We steered clear of it for
production use for that reason.
Tim
-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Mo
If you're looking for the records from the last full day contained in
the data, not the past 24 hours according to the clock, then this
ought to work:
select * from table
where InsertDate >= date_sub((select max(InsertDate) from table), interval 1 day)
order by InsertDate desc
Dan
On 11/14/06, V
Kory -
It's always a little more challenging to find something that's NOT in the data!
I'd suggest two approaches:
1 - create a reference table containing all the possible values, 1 -
999. Then do a SELECT MIN and a join to find the lowest number
not in your data table, something like this
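Approach 1 can be sketched like so with SQLite via Python's sqlite3 (names invented): the reference table holds every candidate value 1-999, and a LEFT JOIN finds the lowest one absent from the data table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ref_nums (n INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO ref_nums VALUES (?)",
                 [(i,) for i in range(1, 1000)])
conn.execute("CREATE TABLE data (n INTEGER)")
conn.executemany("INSERT INTO data VALUES (?)", [(1,), (2,), (3,), (5,)])

# Rows where data.n IS NULL are reference values missing from the data;
# MIN() picks the lowest such gap
lowest_free = conn.execute("""
    SELECT MIN(ref_nums.n)
    FROM ref_nums
    LEFT JOIN data ON ref_nums.n = data.n
    WHERE data.n IS NULL
""").fetchone()[0]
print(lowest_free)  # 4
```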
Van, I'll second what Gerald said about mysqlhotcopy.
When we first began using MySQL at my last job, we had terrible
problems with MySQL crashing. Turned out to be due to a 3rd party
backup process attempting to lock and read the database files while
MySQL was attempting to use them.
Using mys
Afan, the password hashing is different starting with 4.1. Also, JOIN
processing was changed in 5.0 to comply with the standard.
http://dev.mysql.com/doc/refman/4.1/en/password-hashing.html
and
http://dev.mysql.com/doc/refman/5.0/en/join.html
Dan
On 11/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED
Depends on the value of your data, and what you might want to invest
in cluster setup (time, money) to get this done.
Another simple option from a hardware perspective might be the use of
a PCI card with RAM that serves as a Serial ATA disk. They're
relatively new on the market but they're out t
Good morning, Steffan -
Can you post some details about what you mean when you say the tables
are damaged? What makes you say that / what are the signs/symptoms?
Does the server process crash, or stop responding? Do you see
anything in the error logs? Do you have enough disk space for temp
ta
Dave, you could just use a LIKE statement
SELECT *
FROM table
WHERE mystring LIKE "%,2,%"
Of course if it is at the beginning or end, or the only item, it may
look like '2' '2,' or ',2' and not ',2,' so this would work in all
cases I think:
WHERE (mystring LIKE "%,2,%" OR mystring LIKE "2,%" OR