Re: strange files in database directory

2008-03-31 Thread Dan Buettner
Marten, in my experience, these are most often temporary files leftover from
an incomplete operation.

They might be from a long-running query that was canceled, or from a table
repair operation that errored out or was interrupted.

It is generally safe to delete them, provided you take care not to
delete one that is in use (i.e. mysql is actively writing to it).  The
modification dates should let you watch for that.
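
For example, to list likely leftovers older than a day (assuming the default
datadir; verify the timestamps before removing anything):

find /var/lib/mysql -name '#sql*' -mtime +1 -print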

Hope this helps.

-Dan


On Sun, Mar 30, 2008 at 10:57 AM, Marten Lehmann [EMAIL PROTECTED] wrote:

 Hello,

 I was wondering why the partition is almost full, since the databases
 aren't very big. Then I noticed these files in the database directory:

 -rw-rw   1 mysql mysql4310560 Dec 24 10:13 #sql_10d4_0.MYD
 -rw-rw   1 mysql mysql   1024 Dec 24 10:13 #sql_10d4_0.MYI
 -rw-rw   1 mysql mysql   17182880 Jan 10 01:28 #sql_10e0_0.MYD
 -rw-rw   1 mysql mysql   1024 Jan 10 01:28 #sql_10e0_0.MYI
 -rw-rw   1 mysql mysql   20853120 Mar 17 10:55 #sql_10e6_0.MYD
 -rw-rw   1 mysql mysql   1024 Mar 17 10:55 #sql_10e6_0.MYI
 -rw-rw   1 mysql mysql   19048320 Feb 20 07:35 #sql_10ef_0.MYD
 -rw-rw   1 mysql mysql   1024 Feb 20 07:35 #sql_10ef_0.MYI
 -rw-rw   1 mysql mysql   25702240 Jan 10 03:23 #sql_10fa_0.MYD
 -rw-rw   1 mysql mysql   1024 Jan 10 03:23 #sql_10fa_0.MYI
 -rw-rw   1 mysql mysql8159360 Mar 17 11:17 #sql__0.MYD
 -rw-rw   1 mysql mysql   1024 Mar 17 11:17 #sql__0.MYI
 -rw-rw   1 mysql mysql7724800 Mar  8 18:11 #sql_1119_0.MYD
 -rw-rw   1 mysql mysql   1024 Mar  8 18:11 #sql_1119_0.MYI
 -rw-rw   1 mysql mysql   17927360 Feb  6 04:40 #sql_1125_0.MYD
 -rw-rw   1 mysql mysql   1024 Feb  6 04:40 #sql_1125_0.MYI
 -rw-rw   1 mysql mysql   11600160 Mar  3 12:59 #sql_1126_0.MYD
 -rw-rw   1 mysql mysql   1024 Mar  3 12:59 #sql_1126_0.MYI
 -rw-rw   1 mysql mysql  0 Mar  3 12:59 #sql_1126_1.MYD
 -rw-rw   1 mysql mysql   1024 Mar  3 12:59 #sql_1126_1.MYI
 -rw-rw   1 mysql mysql   25076800 Dec 24 10:28 #sql_1129_0.MYD
 -rw-rw   1 mysql mysql   1024 Dec 24 10:28 #sql_1129_0.MYI
 -rw-rw   1 mysql mysql   27626080 Mar  8 18:19 #sql_1135_0.MYD
 -rw-rw   1 mysql mysql   1024 Mar  8 18:19 #sql_1135_0.MYI

 There are really dozens of them, currently about 5000 files using 45 GB (!)
 of the partition. I thought these might be files from temporary tables,
 but then I restarted mysql so it should remove them, but they didn't
 disappear.

 What are these files from and how can I get rid of them? Can I simply
 delete them?

 And how can I be sure that they don't appear again?

 Regards
 Marten

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Optimizing a query

2008-02-29 Thread Dan Buettner
Chris, this should already be pretty fast as it is using a primary key in
its entirety, and as long as the index size remains manageable MySQL will be
able to keep it in memory for fast access.

That said, doing away with the aggregate function might speed things up just
slightly.

You don't care how many matches there are, or which match provided access
(right?) - you just care whether there is or is not a match.

So, perhaps you could do this instead:

SELECT GroupID
FROM `grouplink` u
JOIN `grouplink` p USING(`GroupID`)
WHERE u.`LinkType` = 'user' AND p.`LinkType` = 'page'
AND u.`ID` = '23' AND p.`ID` = '18'
LIMIT 1

and if you get a result, the user has access; if you get an empty set, the
user has no access.

By not COUNTing and using a limit 1, you let the database answer your
question without examining more rows than it needs to.

Make sense?

-Dan


On Fri, Feb 29, 2008 at 9:31 PM, Chris W [EMAIL PROTECTED] wrote:

 I was wondering if someone could tell me what things I might need to do
 to make this query as fast as possible.

 I am developing a web site where users will have access to certain
 things based on what groups they are in and what groups have access to
 certain things.  There are several different types things they have
 access to based on the group but for this discussion lets limit it to
 pages.  For a user to view a page they have to be associated with one or
 more of the groups that the page is linked to.

 Since the relationship between pages and groups and between users and
 groups is many-to-many, I have a table just for that relationship.  So
 here are my tables:
 Group:
  contains various info about the group with key field GroupID.
 User:
  contains various info about the user along with the key field UserID.
 Page:
  contains various info about a page on the site along with it's PageID.

 GroupLink:
 CREATE TABLE  `grouplink` (
  `LinkType` set('user','page','template','templatefile','menu') NOT
 NULL default '',
  `ID` int(10) unsigned NOT NULL default '0',
  `GroupID` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`LinkType`,`ID`,`GroupID`)
 ) ;

 Since there are several things that will be linked to groups I decided
 to use one table to create all links, and the LinkType field to
 designate which thing we are linking to a group.

 For example suppose I had page 18 linked to group 2, 5, 6, and 7 and I
 had User 23 linked to group 1, 2, and 9.  The rows in the table would be
 like this

 page, 18, 2
 page, 18, 5
 page, 18, 6
 page, 18, 7
 user, 23, 1
 user, 23, 2
 user, 23, 9

 Now I want to know if user 23 can access page 18 so I execute this query

 SELECT COUNT(`GroupID`)
 FROM `grouplink` u
 JOIN `grouplink` p USING(`GroupID`)
 WHERE u.`LinkType` = 'user' AND p.`LinkType` = 'page'
 AND u.`ID` = '23' AND p.`ID` = '18'

 Since User 23 and Page 18 are both linked to group 2, COUNT(`GroupID`)
 should return 1.
 The way the rules I have set work, if the count is 1 or larger then that
 user has access to the page.

 Now the question is there anything I can do to make this query faster?

 --
 Chris W
 KE5GIX

 Protect your digital freedom and privacy, eliminate DRM,
 learn more at http://www.defectivebydesign.org/what_is_drm

 Ham Radio Repeater Database.
 http://hrrdb.com


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: User Preferences?

2008-02-28 Thread Dan Buettner
Waynn, I've used both schemes 1 and 2 as you describe, and in my experience
2 is the best way to go.  It's easy to scale up as you add users and
settings, and it's easy to make changes if the meaning of settings should
change (i.e. you need to do a backend change to people's settings).

#1 is harder to make those kind of back end updates on, and harder for
someone troubleshooting to make sense of the data.

#3 may not scale well - you would end up having to track too many tables, I
think.

What I'm doing in my current project is using a data model that has a method
for each preference setting, and returns a sensible value by default if the
user has no pref set for a given lookup key; otherwise, I return what the
user has set.  This means adding a method every time I add a preference
setting, which on the one hand means adding code - on the other hand,
chances are very high that if I am adding the ability for a user to set a
preference, I'm already adding code somewhere to ensure that preference has
an effect.
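
For what it's worth, a minimal sketch of scheme #2, using the column names
you listed (the types and the default-fallback lookup are my own assumptions):

CREATE TABLE UserPreferences (
  UserId INT UNSIGNED NOT NULL,
  Setting VARCHAR(64) NOT NULL,
  Preference VARCHAR(255) NOT NULL,
  PRIMARY KEY (UserId, Setting)
);

/* one setting for user 42, falling back to a default when no row exists */
SELECT COALESCE(
  (SELECT Preference FROM UserPreferences
   WHERE UserId = 42 AND Setting = 'timezone'),
  'UTC') AS timezone;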

HTH,
Dan



On Thu, Feb 28, 2008 at 9:50 AM, Waynn Lue [EMAIL PROTECTED] wrote:

 I'm looking for a good way to store user preferences.  The most
 straightforward way is just to add a column to the Users table for
 each preference we're looking to store.  Downside is that it requires
 an ALTER TABLE, which gets prohibitively expensive as the table gets
 larger, and it's fairly inflexible.  I've come up with a few alternatives, and
 I'm wondering if people have ideas or suggestions, as this has to be a
 common problem.  A quick Google search didn't turn up anything.

 1.  Store the preferences as a binary blob on the Users table.  This
 blob could be either a blob, or an integer that I use application
 logic to read/write from, or I could use the SET datatype.
 2.  Store the preferences in normalized form, with a new table called
 UserPreferences that has UserId, Setting, Preference and we add a row
 for each setting of that user.
 3.  Create a separate table each time we want to add a new setting,
 UserId, WhateverTheNameOfThePreferenceIs.

 Anyone have any experience with this, or better suggestions?

 Thanks,
 Waynn

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: [EMAIL PROTECTED] locations

2008-02-19 Thread Dan Buettner
Hi Pierre -

You're correct, mysqlhotcopy will no longer work when you switch to InnoDB.

One option you could pursue is using mysqldump instead, which will write out
full SQL files needed to restore your databases.  It will write these to a
filesystem.

It is generally slower than mysqlhotcopy to take the backup, and slower to
restore, but it is still possible to get a consistent backup snapshot this
way.  I've been using mysqldump for backups for years.

See the mysqldump man pages or
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
specifically, the '--lock-all-tables' option will be of interest for a
consistent db snapshot
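
For example, a minimal invocation (user and target path are placeholders):

mysqldump -u backupuser -p --lock-all-tables --all-databases > /backups/all.sql

Note that --lock-all-tables holds a global read lock for the duration of the
dump, so schedule it for a quiet window.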

HTH.

(copying the list to close the loop)

Best,
Dan


On Feb 19, 2008 9:23 PM, P. Evans [EMAIL PROTECTED] wrote:

 Dan,
  we have single linux pc's at each location, so replication isn't feasible.
  The concern is really when we go to innodb, since mysqlhotcopy won't work
  then, if I understand the documentation.
 Pierre

 *Dan Buettner [EMAIL PROTECTED]* wrote:

 Are you currently dumping raw SQL?  If so, how?  One table at a time, or
 by obtaining a lock on all tables?

 If you're getting a lock on all tables now, I don't think anything would
 change if you switched to a transactional engine like InnoDB and did the
 same thing.  The database is frozen for a period of time while the backups
 happen, which may be very quick if you don't have a lot of data.

 If you're not getting a lock on all tables now, then it's possible you're
 not getting a consistent snapshot of your data, and switching to InnoDB or
 another transactional engine won't fix that.  I'd recommend aiming for a
 consistent backup.  You know your operation better than I do, though - if
 there's truly *never* anything happening at the time you take your backups,
 then it's no big deal.

 The best strategy in many people's opinion when you need a consistent
 snapshot and can't spare the time to have the database frozen, is to set
 up a replica of your master server, and take your backups from the replica
 (slave).  If you have a large number of servers this may be problematic from
 a cost/maintenance standpoint.

 If you can spare the time to have the database frozen, no big deal.

 -Dan


 On Fri, Feb 15, 2008 at 4:50 PM, P. Evans [EMAIL PROTECTED] wrote:

  Greetings,
   I've got a retail operation with mysql 5.0.22 on linux pc's across the
  country, and i need some input on setting up a backup strategy, preferably
  without purchasing a package. We're currently using MyISAM, with the
  databases  being dumped to a filesystem on a separate drive, in case the
  main drive goes down. However we will need to implement some kind of
  transactional engine in the near future, and we'd prefer not to take down
  the database to take a backup.
   Any thoughts ?
   Thanks
   Pierre
 
 
 





Re: insert new records from other tables

2008-02-17 Thread Dan Buettner
Richard, it's possible,  your syntax is pretty close.

Try this:

INSERT INTO info_stamp
(fav_colour, hobby, stamp_date, firstname,
last_name, sexe, age, username, email, insc_date)
SELECT  $fav_colour, $hobby, $time, a.firstname, a.last_name, a.sexe,
a.age, a.username, b.email, b.inscription_date
FROM mem_info a
JOIN mem_login b ON a.username = b.username
WHERE a.username = $username;

I removed the reference to the 'key' column, fyi.

More info here: http://dev.mysql.com/doc/refman/5.0/en/insert-select.html

Hope this helps!

-Dan



On Feb 17, 2008 9:09 PM, Richard [EMAIL PROTECTED] wrote:

 Hello,

 I need to insert one entry(one line) containing 10 values, some from a
 form(ok) but some from two other tables, what is the best way to do this ?

 Say for example I need to enter this information :

 first_name - last_name - age - sexe - username - email - favorite_colour
 - hobby - inscription_date - timestamp


 I already have a table containing : firstname - last_name - sexe - age -
 username
 And another one containing: username - email - inscription_date

 And I get from the member : favorite_colour - hobby

 Out of these three sources I would like to insert into one table
 containing all of these details.

 I need this to make a print of one moment, so I would be able to pull out
 an entry saying: at this date the information was the following:

 first_name - last_name - age - sexe : height - username - email -
 favorite_colour - hobby - inscription_date - timestamp


 Can I do something like this :

 INSERT INTO info_stamp (key , fav_colour, hobby, stamp_date, firstname,
 last_name, sexe, age, username, email, insc_date )VALUES ('',
 $fav_colour, $hobby, $time, (SELECT  a.firstname, a.last_name, a.sexe,
 a.age, a.username, b.email, b.inscription_date FROM mem_info a JOIN
 mem_login b ON a.username = b.username WHERE a.username = $username));

 I guess this query would not actually work, what would be the correct way
 to do this ?

 Thanks in advance :)

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: [EMAIL PROTECTED] locations

2008-02-15 Thread Dan Buettner
Are you currently dumping raw SQL?  If so, how?  One table at a time, or by
obtaining a lock on all tables?

If you're getting a lock on all tables now, I don't think anything would
change if you switched to a transactional engine like InnoDB and did the
same thing.  The database is frozen for a period of time while the backups
happen, which may be very quick if you don't have a lot of data.

If you're not getting a lock on all tables now, then it's possible you're
not getting a consistent snapshot of your data, and switching to InnoDB or
another transactional engine won't fix that.  I'd recommend aiming for a
consistent backup.  You know your operation better than I do, though - if
there's truly *never* anything happening at the time you take your backups,
then it's no big deal.

The best strategy in many people's opinion when you need a consistent
snapshot and can't spare the time to have the database frozen, is to set
up a replica of your master server, and take your backups from the replica
(slave).  If you have a large number of servers this may be problematic from
a cost/maintenance standpoint.

If you can spare the time to have the database frozen, no big deal.

-Dan


On Fri, Feb 15, 2008 at 4:50 PM, P. Evans [EMAIL PROTECTED] wrote:

 Greetings,
  I've got a retail operation with mysql 5.0.22 on linux pc's across the
 country, and i need some input on setting up a backup strategy, preferably
 without purchasing a package. We're currently using MyISAM, with the
 databases  being dumped to a filesystem on a separate drive, in case the
 main drive goes down. However we will need to implement some kind of
 transactional engine in the near future, and we'd prefer not to take down
 the database to take a backup.
  Any thoughts ?
  Thanks
  Pierre





Re: SQL help/problem with timestamped data differences

2008-01-08 Thread Dan Buettner
Mark, is the 'secs' column the offset from the minimum value of the
timestamp column?

If so, you might try something like this:

SELECT UNIX_TIMESTAMP(MIN(timestamp)) INTO @min_timestamp FROM my_table;

SELECT uid, timestamp,
UNIX_TIMESTAMP(timestamp) - @min_timestamp AS secs
FROM my_table
ORDER BY 1, 2, 3;
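
If instead 'secs' should be the offset from each UID's own earliest
timestamp, a sketch using a derived table (same assumed table/column names):

SELECT t.uid, t.timestamp,
UNIX_TIMESTAMP(t.timestamp) - UNIX_TIMESTAMP(m.min_ts) AS secs
FROM my_table t
JOIN ( SELECT uid, MIN(timestamp) AS min_ts
       FROM my_table GROUP BY uid ) m ON m.uid = t.uid
ORDER BY t.uid, t.timestamp;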

HTH,
Dan


On Jan 8, 2008 7:17 PM, mark carson [EMAIL PROTECTED] wrote:

 Hi All

 I have the following data example
 UID   Timestamp
 123456 20071201 12:00:01
 123456 20071201 12:00:06
 987654 20071201 12:00:01
 987654 20071201 12:00:09
 etc

 I need :
 UID   Timestamp secs
 123456 20071201 12:00:01  0000
 123456 20071201 12:00:06  0005
 987654 20071201 12:00:01  0000
 987654 20071201 12:00:09  0008

 or similar solution. I am using version 5.0 and willing to use interim
 tables or any SQL based technique.

 Thanks in advance

 Mark

 --
 Mark Carson
 Managing
 Integrated  Product Intelligence CC
 EMail : [EMAIL PROTECTED]/[EMAIL PROTECTED]
 snailmail : P.O. Box 36095 Menlo Park 0102, South Africa
 Cell : +27 83 260 8515




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: preferred way to backup a 20GB MySQL database

2007-11-27 Thread Dan Buettner
I'd strongly recommend setting up replication, and then taking your backups
from the replica.

mysqlhotcopy works great, I used it for years myself, but it does require
freezing your database while the copy happens.  And no matter how you do
it, copying 20 GB takes a little bit of time.
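
If you do stick with mysqlhotcopy, usage is as simple as (backup path is
just an example):

mysqlhotcopy db_name /path/to/backup_dir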

Dan

On Nov 27, 2007 4:35 PM, David Campbell [EMAIL PROTECTED] wrote:

 Andras Kende wrote:
  Hi,
 
  What is the preferred way to backup a 20GB database daily,
  without taking offline ?
 
  MySQL 4.1 MyISAM - (will be updated to MySQL 5)
 
  133 table(s)  Sum 115,416,561  latin1_swedish_ci  20.1 GB
 

 Mysqlhotcopy

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: need query: records inserted on Monday?

2007-11-20 Thread Dan Buettner
Afan, you'll need to have a date and time column in the database storing a
'created at' value that is set on INSERT and then not changed.

Assuming you have such a column - let's call it created_at - you can run
queries like this:

/* to get count of records created on Mondays */
SELECT COUNT(*)
FROM table t
WHERE DAYOFWEEK(t.created_at) = 2;

/* to get count created on a given date between 8 AM and 4 PM */
SELECT COUNT(*)
FROM table t
WHERE t.created_at >= '2007-11-20 08:00:00'
AND t.created_at <= '2007-11-20 16:00:00';

MySQL's docs on date and time functions are here:
http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html

HTH,
Dan




On Nov 20, 2007 12:16 PM, Afan Pasalic [EMAIL PROTECTED] wrote:

 Hi,
 I have to build a report - when (date and/or time) the records are
 inserted. E.g., number of records inserted on Monday - doesn't matter
 what month.
 Or, number of records inserted on specific date between 8am and 4pm.

 Thanks for any help.

 -afan


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Table type for high number of insert/delete operations

2007-10-25 Thread Dan Buettner
Jim, MyISAM tables tend to work best in situations where the proportion of
reads to writes is either very high or very low.

That is to say, either the data doesn't change much but it's being accessed
a lot, or the data changes a lot but it's rarely accessed.

MyISAM is quite a bit faster than InnoDB in some cases, so it could be that
if the size of this table will remain small, it would be the faster choice.

InnoDB will allow concurrent access, though, so depending on the level of
concurrency you expect, things may move faster using it.

Bottom line, no concrete answer for you - I'd test it each way if I were
you.  Also keep in mind you can switch back and forth without too much
trouble, though of course if your table gets large it could take some time
to switch.
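
Switching is a one-liner in each direction, for example:

ALTER TABLE your_table ENGINE = MyISAM;
ALTER TABLE your_table ENGINE = InnoDB;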

As for impact on your other applications - my knowledge of single tablespace
InnoDB performance is limited; I've been using individual tablespaces for
InnoDB tables for some time now.

-Dan


On 10/24/07, Jim [EMAIL PROTECTED] wrote:

 I have an application which will be inserting and then deleting many
 thousands of rows per hour within a single table.  It essentially queues
 and then handles requests from a series of processes, deleting the
 requests after they've been dealt with.

 Our MySQL 5.0.45 server is set up to use InnoDB tables by default, in a
 single tablespace.  Would MyISAM tables be a better fit for this type of
 application?  The database server is used for other applications so the
 impact of this application on the others is a concern we have.

 Also, in terms of speed or server load, would it be better to mark records
 deleted and then periodically (say once an hour) run a delete query, or
 would this approach not make a difference?


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Reusing deleted variable-size record space

2007-09-30 Thread Dan Buettner
Hello Renito -

What you are looking for is MySQL's OPTIMIZE TABLE statement:
http://dev.mysql.com/doc/refman/5.0/en/optimize-table.html
It does exactly what you are looking for.
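
Usage is as simple as:

OPTIMIZE TABLE your_table;

For tables with variable-length rows, this reclaims the space left by
deleted rows and defragments the data file.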

You can implement this manually, via shell script on a timer, etc.  I have
written a multi-threaded perl solution which will check, repair and optimize
tables simultaneously to save time:
http://dbuettner.dyndns.org/blog/?page_id=88

HTH,
Dan


On 9/30/07, Renito 73 [EMAIL PROTECTED] wrote:

 Hello

 I have a database with variable-size fields, some of them may be
 modified/deleted during the usage and administration, so my doubt is: how
 can
 I compact the records to remove those blank spaces and save space, lets
 say defragment the database file so data is continuous and contiguous?

 This task should be performed just once or twice a month, so no matter if
 it
 could take a while.

 Is there an internal function to do that? I could copy existing records to
 a
 new table, delete all the original ones and then insert them back and drop
 the second table but don't like it too much.


 Thanks for any suggestion


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: mysqldump of huge innodb database

2007-09-24 Thread Dan Buettner
I see one conflict that could be causing your 'lost connection' message -
you are specifying a 1 GB max_allowed_packet for the client, but the
server is configured to only support 64 MB.

You should adjust the max_allowed_packet = 64M setting on the server to
match or exceed what you specify on the mysql or mysqldump  command line
client, then try again.
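
For example, in the server's my.cnf (followed by a server restart):

[mysqld]
max_allowed_packet = 1G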

HTH,
Dan



On 9/24/07, Benjamin Schmidt [EMAIL PROTECTED] wrote:

 Unfortunately the additional parameters didn't solve my problem. But
 thanks for your response!

 ssh [EMAIL PROTECTED]  \
   mysqldump -u XYZ --verbose --password=XYZ --quick
 --single-transaction --net_buffer_length=1G --max_allowed_packet=1G
 dbmail | /bin/gzip \
 > /Backup/HD-Save/XYZ/data_mysql/XYZ_mysqldump.tar.gz_tmp


 I don't think that's the problem, and the following command didn't work either:

 mysqldump -h XYZ -u XYZ --verbose --password=XYZ --quick
 --single-transaction --net_buffer_length=1G --max_allowed_packet=1G
 dbmail | gzip > /Backup/HD-Save/XYZ/data_mysql/XYZ_mysqldump.tar.gz_tmp


 Always get the result:

 mysqldump: Error 2013: Lost connection to MySQL server during query when
 dumping table `dbmail_messageblks` at row: 177912


 
 Script ended at: Sat Sep 22 06:32:16 CEST 2007 (1190435536)
 Execution Time:
   Hours:   4
   Minutes: 269
   Seconds: 16155

 OR

 mysqldump: Error 2013: Lost connection to MySQL server during query when
 dumping table `dbmail_messageblks` at row: 189738


 
 Script ended at: Sun Sep 23 06:30:30 CEST 2007 (1190521830)
 Execution Time:
   Hours:   4
   Minutes: 267
   Seconds: 16048

 OR

 mysqldump: Error 2013: Lost connection to MySQL server during query when
 dumping table `dbmail_messageblks` at row: 137554


 
 Script ended at: Mon Sep 24 06:30:01 CEST 2007 (1190608201)
 Execution Time:
   Hours:   4
   Minutes: 267
   Seconds: 16020


 I know these two other solutions:
   - Setting up a replication service
   - Stopping mysql, copying db-files, and restart mysql

 Doing replication is not possible because of the huge size of the
 database. Hard-core copy of the db-files causes a downtime of up to 8
 hours, so it would be possible.

 Or does somebody have another (hopefully better) solution?

 With best regards,
 Benjamin Schmidt


 Hartleigh Burton wrote:
  Hiya,
 
  I was backing up a 95GB InnoDB database and forever had problems. It
 ended up working and I never really worked out exactly what the cause was...
 but try using the following:
 
  --opt (does --quick + extended-insert + others)
  --net_buffer_length=1G (set this to whatever you want, 1G is the largest
 it will support. I was backing up uncompressed audio so had it at 1G. When
 --opt is set it also uses --extended-insert, the net_buffer_length tells
 mysqldump when to break the extended insert and create a new insert. Useful
 when dealing with large packets)
  --max_allowed_packet=1G (or whatever you expect your largest packet to
 be, in my case was up to 1G)
 
  Example: mysqldump -u mysqldump --password= --opt --verbose
 --net_buffer_length=1G --max_allowed_packet=1G --single-transaction dbname >
 dbname.sql
 
  If this still fails... try running the backup from a remote computer
 either by using MySQL Administrator or mysqldump. Occasionally I would get
 the same error you received when running mysqldump on localhost, however it
 would complete when run from either my workstation or on another server. I
 can't really explain why this would happen, but now I just run all of my
 backups straight to a mirrored server.
 
  Example: mysqldump -h 192.168.x.x -u mysqldump --password= --opt
 --verbose --net_buffer_length=1G --max_allowed_packet=1G
 --single-transaction dbname > dbname.sql
 
  Good luck, hope this helps.
 
 
  Hartz.
 
  -Original Message-
  From: Benjamin Schmidt [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, 4 September 2007 7:05 PM
  To: mysql@lists.mysql.com
  Subject: mysqldump of huge innodb database
 
  Hello list members
 
  Since a few days I get this error message when making a backup of my
  database:
 
 
  mysqldump: Error 2013: Lost connection to MySQL server during query when
  dumping table `dbmail_messageblks` at row: 174955
 
 
 
  Script ended at: Tue Sep  4 06:45:37 CEST 2007 (111137)
  Execution Time:
Hours:   4
Minutes: 282
Seconds: 16956
 
 
  The ibdata1 file now has a size of 42GB (I use the innodb engine). The
  command to backup is following:
 
 
  ssh [EMAIL PROTECTED]  \
   mysqldump -u mysqldump --password= --quick
  --single-transaction dbmail | /bin/gzip > /Backup/mysqldump.tar.gz
 
 
  And this is my config-file (default values from the debian package):
 
 
  ...
  

Re: Assistance avoiding a full table scan

2007-09-21 Thread Dan Buettner
Erik, I think the main reason your query is running slowly is the use of a
subselect.  MySQL does not generally perform well with subselects, though
work continues in that area.

There is also a problem/situation in MySQL in that you can't use MAX/GROUP
BY functions quite the way you can in other databases; you'll get an
accurate MAX value for one column, but the value in another won't
necessarily be from the same row.  Someone posted on the list about this
recently, calling it a bug, and I tend to agree.

To solve your problem:

I would take one of two approaches.

First approach: split it into two queries in PHP, and use the results of the
first in the second, like so:

query1 = select max(id) from tbl where gid in ( 1234,2345,3456 .. 7890 )
group by gid

in PHP: id_string = join the results with commas.  implode function?

query2 = select comment, gid, date_posted from tbl where id in (id_string)

Generally speaking, fewer queries = higher performance, and databases are
optimized to join tables; they do it well - but in your case I think you'll
find one of these approaches works better.

Second approach:

Insert values from first query into a temporary table, then join on that
temp table in your second query.

I don't think either approach will have a speed advantage, and the first is
probably easier to code.
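
A sketch of the temporary-table variant, using the names from your query:

CREATE TEMPORARY TABLE latest_ids (id INT UNSIGNED PRIMARY KEY);

INSERT INTO latest_ids
SELECT MAX(id) FROM tbl
WHERE gid IN ( 1234,2345,3456 .. 7890 )
GROUP BY gid;

SELECT t.comment, t.gid, t.date_posted
FROM tbl t
JOIN latest_ids l ON l.id = t.id;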

HTH,
Dan



On 9/21/07, Erik Giberti [EMAIL PROTECTED] wrote:

 Hello everyone,

 The app server in this case is PHP, and the database is MySQL 5.0.22
 on RedHat linux

 I've got a database with about 7.5K records in it that I expect to
 start growing very quickly ~10-12K records per day. The storage
 engine is InnoDB. This table is growing quickly and will continue to
 grow for a long time. This table stores comments (as you can see from
 the structure) and is being used to display a list of comments based
 on a users affiliations.

 The structure is approximately this - I'm leaving out unrelated columns:

 id int - primary key - auto increment
 gid bigint - indexed
 comment varchar
 date_posted timestamp

 I run a query with the following form

 select comment, gid, date_posted from tbl where id in (select max(id)
 from tbl where gid in ( 1234,2345,3456 .. 7890 ) group by gid);

 I have an index on gid and id is the primary key

 When I describe the query with about 50 gid values inserted (where
 indicated above) I get the following:

 +++---+---+---+-
 +-+--+--+--+
 | id | select_type| table | type  | possible_keys | key |
 key_len | ref  | rows | Extra|
 +++---+---+---+-
 +-+--+--+--+
 |  1 | PRIMARY| tbl   | ALL   | NULL  | NULL|
 NULL| NULL | 7533 | Using where  |
 |  2 | DEPENDENT SUBQUERY | tbl   | range | idx_gid   | idx_gid |
 9   | NULL |   58 | Using where; Using index |
 +++---+---+---+-
 +-+--+--+--+

 Running the query on a production machine with sufficient memory and
 horsepower (box is only 20% utilized) it still takes 3 seconds to run
 - obviously not quick enough for web use.

 What I really need is the most recent comment from each group based
 on a variable set of gid's that change from user to user.

 Any thoughts on how to tweak this to avoid the full table scan? Thank
 you in advance for your assistance.

 Erik Giberti


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Mysql backup scheduler.

2007-09-13 Thread Dan Buettner
Manivannan -

In a word, no.  It's database software, not, um, cron.  :)

There are scheduled events in 5.1, but that is more of a trigger-like
functionality than what you need, which is a scheduled task to save your
data somewhere.  http://dev.mysql.com/doc/refman/5.1/en/events.html

You could look at establishing a central backup server that would do nothing
but perform scheduled backups of your remote databases, thus simplifying
your backup management and using only a single platform to do the backups.

On Windows, you can use its built-in Scheduled Tasks function, which I
have found to work passably well in recent versions of Windows Server.
There are also other cron-like solutions out there - one I have used in the
past is nncron lite - http://www.nncron.ru/
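
For example, a daily 4 AM dump via Scheduled Tasks might look like this
(paths and credentials are made up):

schtasks /Create /SC DAILY /ST 04:00 /TN "MySQL backup" /TR "C:\mysql\bin\mysqldump.exe -u backup -pSECRET --all-databases --result-file=C:\backups\all.sql"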

HTH,
Dan


On 9/13/07, Manivannan Palanichamy [EMAIL PROTECTED] wrote:


 Hi,
 Is there any built-in backup scheduler for mysql database server? Yes, I
 can
 use cron-tab in linux, but my env is windows. Also, I will be running
 mysql
 server in different platforms. So, is there any built-in backup scheduler?
 --
 Manivannan Palanichamy
 http://mani.gw.googlepages.com/index.html
 --
 View this message in context:
 http://www.nabble.com/Mysql-backup-scheduler.-tf4436845.html#a12658327
 Sent from the MySQL - General mailing list archive at Nabble.com.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Really strange index/speed issues

2007-09-10 Thread Dan Buettner
Chris, a couple of thoughts -

First, your index on the section is doing you no good (at this time) since
all the values are the same.  You may already know that, but thought I'd
mention it.

Second, my theory on why query #1 is faster - if all your prices range from
1 up, and you're querying for prices greater than 0, then MySQL can just
return the first 30 rows after sorting them.

The second query, where you are looking for prices greater than 1, MySQL has
to sort and then examine a number of rows until it finds enough matching
rows (price > 1) to satisfy your query. This likely takes a little bit of
time.  How many rows do you have with price = 1?  It would have to scan over
that many before it could start satisfying your query, if you think about
it.

HTH,
Dan



On 9/10/07, Chris Hemmings [EMAIL PROTECTED] wrote:

 Hello,

 I have a table, currently holding 128,978 rows...  In this table, I have a
 section column (int) and a price column (int).  Every row has a section of
 1
 currently, every row has a price, ranging from 1 to 10,000.

 I have an index on both columns separately.

 Have a look at these two queries, can someone tell me why there is such a
 difference in speed of execution?  (Note difference in price qualifier)

 

 SELECT *
 FROM `table1`
 WHERE price > 0
 AND section =1
 ORDER BY price
 LIMIT 0 , 30

 Showing rows 0 - 29 (128,978 total, Query took 0.9462 sec)

 Explain output: 1 SIMPLE table1 ALL section,price NULL NULL NULL 96734
 Using
 where; Using filesort

 

 SELECT *
 FROM `table1`
 WHERE price > 1
 AND section =1
 ORDER BY price
 LIMIT 0 , 30


 Showing rows 0 - 29 (128,949 total, Query took 0.0008 sec)

 Explain output: 1 SIMPLE table1 ALL section,price NULL NULL NULL 96734
 Using
 where; Using filesort

 

 Other info:

 Query caching = off
 MySQL version = 5.0.32
 OS  = Debian Sarge

 Sure, the second query returns 29 fewer records than the first, but should
 that make the difference in time?

 Hope you can shed some light onto this :-)

 Ta!

 Chris.



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Query sloooow

2007-08-27 Thread Dan Buettner
Hi Jim -

I'm using MySQL on Fedora 6 as well, with no performance problems.  Did not
need to do anything to speed it up on Fedora.

It's difficult to answer the question why one is faster than the other, as
there are any number of potential differences.  Some more specifics about
your setup (hardware, software) and the queries you're running would be
helpful.  The output of EXPLAIN from both machines may be very telling.

In general, some things to consider are:
- do you have sufficient RAM in the Fedora machine for what you're running?
If you're paging memory that can slow things down a lot
- are you running the same version of MySQL on the two machines?  Older
versions can perform particularly poorly with subqueries, for example
- do you have the MySQL query cache enabled on Windows and not on Fedora, or
cache settings that are very different?

Dan


On 8/26/07, Jim Douglas [EMAIL PROTECTED] wrote:

 I installed MySQL on Fedora 6 and simple queries are very slow.

 I have ten databases with a few hundred records, and there are only a few
 hundred records in the table I am querying.  I have this issue on
 Fedora ONLY.  The same query on my Windows machine runs as expected.

 What can I do to speed things up on Fedora?

 Jim



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Scheduled events

2007-08-14 Thread Dan Buettner
It's a database, not a scripting language ... :)

You can run a simple cron entry like this:

0 4 * * * /path/to/mysql -u USER -pPASS -D DATABASE -e "delete from contacts
where TO_DAYS(CURDATE()) - TO_DAYS(today) = 30 and
status != 'Y';"

so at 4 AM each day your SQL would be executed.  For long SQL, you can write
it to a file and do something like so:

0 4 * * * /path/to/mysql -u USER -pPASS -D DATABASE < /path/to/myfile.sql

HTH,
Dan


On 8/14/07, Beauford [EMAIL PROTECTED] wrote:

   I have been trying for days to find a simple command in
  MySQL where I
   can automatically delete records based on some criteria after a
   certain timeframe.
  
   I found this in the MySQL manual, but I guess it only works with
   version 5.1+. Unfortunately the version I need this to work
  on is 4.1,
   and can't be upgraded.
  
   CREATE EVENT e_hourly
   ON SCHEDULE
 EVERY 1 HOUR
   COMMENT 'Clears out sessions table each hour.'
   DO
 DELETE FROM something;
  
   Is there something similar in version 4.1 that will do the
  same thing.
 
  No.  But there are cron jobs :-)  And if you're deleting many
  rows and you don't want to interfere with other running
  queries (sounds like this is an OLTP system), try MySQL
  Archiver with the --purge argument
  (http://mysqltoolkit.sourceforge.net/).  It's much more
  efficient at large data volumes than a single DELETE statement.
 
  Baron
 

 Really. I thought it would have some kind of scripting capability. I did
 check out the link, but really don't need anything that extensive.

 Is there a way to run the following command via cron.

 delete from contacts where TO_DAYS(CURDATE()) - TO_DAYS(today) = 30 and
 status != 'Y';

 Thanks



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Find record between 10 minutes

2007-08-13 Thread Dan Buettner
Good morning, Willy -

If you're using some sort of scripting language, like PHP, this would be
easily done by starting a variable at the beginning of the date range, then
repeating the below query and incrementing the variable by 10 minutes each
time through a while loop until you've passed the other end of the date
range.

Something like this:

$date = '01-01-2007 00:00:00';
while ($date <= '01-02-2007 23:59:59') {

SELECT * FROM my_table
WHERE id IN
( SELECT id FROM my_table
WHERE datefield >= $date
AND datefield < DATE_ADD($date, INTERVAL 10 MINUTE) )
ORDER BY RAND()
LIMIT 1

$date = $date + (php function to add 10 minutes);

}

I'm not a PHP whiz but hopefully you get the idea.

Dan


On 8/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Hi,
 I have a table with records dated 01-01-2007 00:00:00 to 01-02-2007
 23:59:59. What I want to do is grab 1 random record in every 10 minutes
 between the date. Please help me.

 Regards,


 Willy
 --
 www.sangprabv.web.id
 www.binbit.co.id


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: There's not enough space in /var/lib/mysql/ ---help!!!!

2007-08-13 Thread Dan Buettner
Michael, it looks to me like your root partition is absolutely 100% chock
full.  Am I misunderstanding your request for help?  Sounds like you are
saying you think you have nothing in that partition - but your 'df -h'
command is showing 0 bytes available in /.

Dan


On 8/13/07, Michael Habashy [EMAIL PROTECTED] wrote:

 I am using debian 4.0 and mysql-server-5.0 package.
 My server will not start because it states that there is not enough
 space. Can someone assist?

 rider:~# df -h
 FilesystemSize  Used Avail Use% Mounted on
 /dev/mapper/vg_house-lv_root
   493G  468G 0 100% /
 tmpfs 1.5G 0  1.5G   0% /lib/init/rw
 udev   10M   96K   10M   1% /dev
 tmpfs 1.5G 0  1.5G   0% /dev/shm
 /dev/md0  274M   25M  235M  10% /boot


 I have a 500gb lvm partition...with hardly anything on it.

 I am new to lvm so i will need assistance... i think the package is
 failing because it is not reading lvm properly.

 I have removed the mysql package and now i can not re-install it on the
 partition... i would appreciate any help offered.

 thanks
 mjh



Re: join between databases

2007-07-07 Thread Dan Buettner

Miguel, it's possible, but it's not normal procedure in most systems I've
worked with.  Separate databases are generally used for separate systems.
Reasons include more work/steps to grant privileges to users; more
difficulty establishing a test system (you need a whole separate MySQL
instance in your setup vs. a single database with a different name); and
the loss of being able to create/restore everything in one step vs.
potentially multiple steps.

You might consider prefixing table names with something to help categorize
them, like
mgmt_
fin_
prod_

You can join tables from different databases like so (assuming you have
privileges):

SELECT *
FROM database1.table1 , database2.table2
WHERE ...
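
Using the tables from your example below, that would be something like:

SELECT m.machine_job_num, u.name
FROM production.machines m
JOIN people.users u ON u.id = m.id_user;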

HTH,
Dan



On 7/7/07, Miguel Vaz [EMAIL PROTECTED] wrote:



Hi,

I am building a new system for the place i work and i was thinking
about organizing my tables in several databases, like management,
financial, production_line to make things more tidy. I will have
tons of tables, so is it a usual procedure in organizing tables?

The other problem is about doing a join between tables that are on
different databases, is that possible? For example:

database: people
table: users
fields: id, name, email

database: production
table: machines
fields: id, machine_job_num, id_user, etc

I want to do a select on table machines that gets the user names
from the table users, which is on a different database. How can i do
it? Is it even remotely possible?

Sorry if the question is basic, but i searched the net all over the
place and i couldn't get any answer to this.
Thanks.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Many:Many Relation Between Database Tables

2007-07-05 Thread Dan Buettner

David, you've hit the nail on the head.
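
To make your assumptions concrete, a sketch of the mapping table and the
join for (b) (column names assumed, since your schema wasn't posted):

CREATE TABLE users2priveleges (
  user_id INT UNSIGNED NOT NULL,
  privelege_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (user_id, privelege_id),
  KEY (privelege_id)
);

/* all priveleges for user 42 */
SELECT p.*
FROM priveleges p
JOIN users2priveleges up ON up.privelege_id = p.id
WHERE up.user_id = 42;

The composite primary key covers lookups by user, and the extra key on
privelege_id covers the reverse direction - which is all the optimization
(c) needs.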

Dan


On 7/5/07, David T. Ashley [EMAIL PROTECTED] wrote:


I'm an old Microsoft Access user, so I just need to double-check on this
with MySQL:

I want to relate two tables (let's call them users and priveleges) in
a
many:many way.  That is, each user may have multiple priveleges, and each
privelege may have multiple users with that privelege.

Here are my questions:

a)I'm assuming that under MySQL I have to create a third table that maps
between them?  (Let's call it users2priveleges.)

b)I'm assuming that there is nothing special I need to do to get, for
example, all the priveleges with a user (just the standard join stuff with
x=y and y=z or something like that)?

c)I'm assuming that from an optimization point of view, there is nothing I
can/should do beyond optimizing the links, i.e. making sure the related
fields are indexed?

Thanks.



Re: Interresting update problem

2007-06-27 Thread Dan Buettner

What I've done in situations like this is write SQL that generates the SQL I
need.  I then pipe out to a file and pipe it back in, or just pipe from one
instance of mysql to another.

Example SQL:

SELECT
CONCAT('UPDATE main SET ', field, ' = ''', id, ''' WHERE ', field, ' = ''X'';')
AS update_sql
FROM map

Example command to accomplish it in one step:

mysql -u user -ppass -D db -e "the above sql" | mysql -u user -ppass -D db

HTH,
Dan


On 6/27/07, Mogens Melander [EMAIL PROTECTED] wrote:


Hi all,

I'm trying to wrap my brain around the following problem.

table main(code, field_1, field_2, field_3, , , field_51)

111, 'X', '', 'X',,,
222, '',  '', 'X',,,
333, '', 'X', '' ,,,
444, '',  '', '' ,,,
555, 'X','X', '' ,,,

table map(id, field)

1, 'field_1'
5, 'field_2'
9, 'field_3'


86, 'field_51'

The exercise is: in each main column named by map.field, replace 'X' with map.id

main.code and map.id are primary keys, all other are varchar.

Hmm, did that make any sense?

--
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224



--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Query seemingly ignoring order by after upgrade

2007-06-22 Thread Dan Buettner

You may have encountered a bug; one thing you could do to avoid having to
downgrade is specify the column number you wish to sort on (4th column in
your case), as in:

ORDER BY 4 DESC LIMIT 10;

Dan


On 6/22/07, Andrew Carlson [EMAIL PROTECTED] wrote:


I upgraded to Mysql 5.0.42 from 4.1.14 yesterday.  Afterwards, a query that
I run every day to capture data on nodes that have increased backup load
the most in TSM seems to be ignoring my order by statement. Here is the
query and the output:

mysql> select occupancy.node_name as "NodeName",
nodetrend.occupancy/1024 as "Occupancy on 09/24/2006 (GB)",
sum(occupancy.physical_mb)/1024 as "Occupancy Today (GB)",
(sum(occupancy.physical_mb) - nodetrend.occupancy)/1024 as
"Occupancy Increase (GB)"
from nodetrend, occupancy
where nodetrend.date='2006-09-24'
and occupancy.stgpool_name='BACKUPPOOL'
and nodetrend.node_name=occupancy.node_name
group by occupancy.node_name
order by "Occupancy Increase (GB)" desc limit 10;

+-------------+------------------------------+----------------------+-------------------------+
| NodeName    | Occupancy on 09/24/2006 (GB) | Occupancy Today (GB) | Occupancy Increase (GB) |
+-------------+------------------------------+----------------------+-------------------------+
| AMHFP01     |                   213.378672 |           188.096826 |              -25.281846 |
| AMHFP01-S   |                    15.284570 |            25.520684 |               10.236113 |
| AMHROCP01   |                     8.740791 |            15.711035 |                6.970244 |
| ANDYC       |                     2.503848 |             2.503848 |                    0.00 |
| BHSCAFAS01  |                     7.057617 |            11.109980 |                4.052363 |
| BJC11101828 |                     5.401748 |             3.433613 |               -1.968135 |
| BJC3758     |                    15.911094 |            15.911094 |                    0.00 |
| BJCAMSQL01  |                    72.650596 |           102.609326 |               29.958730 |
| BJCAMWEB01  |                     3.703057 |             8.468467 |                4.765410 |
| BJCBCA01    |                    22.580410 |            33.760918 |               11.180508 |
+-------------+------------------------------+----------------------+-------------------------+
10 rows in set (0.24 sec)

Any suggestions? I can try backing off to 5.0.41 if anyone thinks that
would
help.

Thanks

--
Andy Carlson

---
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License:
$8.95/month,
The feeling of seeing the red box with the item you want in it:Priceless.



Re: Upgrading databases?

2007-06-21 Thread Dan Buettner

Hi Seth -

I believe MySQL's official position is that you should always dump-and-load
the data when upgrading major or minor versions (4.0 to 4.1, 4.1 to 5.0,
etc.)

I've done it both ways (dump-load and just moving table files) and have
never had a problem with either, even when moving files across OS platforms
*knock on wood*.  That said, I think you might find dump-and-load just the
ticket to work around the problem you're having.

What you're doing *should* work, but since it isn't, I'd try another avenue
myself to avoid spending much more time on it.

Something as simple as

mysqldump -u root -ppassword -h hostwith41 --all-databases | mysql -u root
-ppassword -h hostwith51

would do it, if you want to transfer everything

Mind that you've got an appropriate network connection - you wouldn't want
to make your laptop on home wireless with DSL the in-between if you have
50GB of data to transfer.

Hope this helps, and let me know if you have any questions.

Dan


On 6/21/07, Seth Seeger [EMAIL PROTECTED] wrote:


On Jun 21, 2007, at 12:21 PM, Gerald L. Clark wrote:

 Seth Seeger wrote:
 Hello,
 I'm having trouble copying a database from MySQL 4.1.22 to 5.1.19-
 beta.  Both are FreeBSD i386-based machines.  I have run the
 following commands:
 mysqlcheck --check-upgrade --all-databases --auto-repair
 mysql_fix_privilege_tables
 Both executed with no problems.  (mysqlcheck reported OK for
 all  tables.)  When I try to access any of the tables, I get this:
 mysql select * from users;
 ERROR 1034 (HY000): Incorrect key file for table 'users'; try to
 repair it
 So I tried to repair it:
 mysql repair table users;
 +++--
 ++
 | Table  | Op | Msg_type |
 Msg_text   |
 +++--
 ++
 | seth_icsx_mands_live.users | repair | error| Incorrect key
 file  for table 'users'; try to repair it |
 +++--
 ++
 1 row in set, 1 warning (0.10 sec)
 Running repair table users doesn't seem to have any effect on
 it  because the problem persists.  I have tried to run
 mysql_upgrade,  with no success:
 # mysql_upgrade --basedir=/usr/local --datadir=/var/db --verbose
 Looking for 'mysql' in: mysql
 FATAL ERROR: Can't find 'mysql'
 I have tried it with all different combinations for the two
 directory  options with no luck.  All tables are MyISAM.
 Can anyone shed some light on what I'm supposed to do?
 Thanks,
 Seth
 Shut the server down and run myisamchk on users.MYI

Sadly, no success.  I tried running it two different ways:

# myisamchk -e -r users
- recovering (with sort) MyISAM-table 'users'
Data records: 1283
- Fixing index 1
Found block with too small length at 101420; Skipped

# myisamchk -c -r users
- recovering (with sort) MyISAM-table 'users'
Data records: 1283
- Fixing index 1

I still get the same error when I try to access the table.

Thanks,
Seth



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Getting list of queries run against a database

2007-06-20 Thread Dan Buettner

Ben, there's a slow query log feature that may be just what you're looking
for:
http://dev.mysql.com/doc/refman/4.1/en/slow-query-log.html

There's an analysis script that will show you the most popular slow queries,
too: 'mysqldumpslow'.
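
To enable it, a sketch of the relevant my.cnf lines for 4.x/5.0 (threshold
and path are up to you):

[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2

and then, once some traffic has accumulated:

mysqldumpslow /var/log/mysql/slow.log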

You can take those queries and use the EXPLAIN feature to start analyzing
how to speed them up.

HTH,
Dan
On 6/20/07, Ben Edwards [EMAIL PROTECTED] wrote:


We are having a problem with our mysql database (4.2) and think we may
have indexes missing.  What we are trying to do is find out the most
popular queries that run.  We know there are not many and that they are
relatively simple.

Does anyone know of a tool that allows us to see what queries (i.e.
via a log file) are/have been run against the database.  If it counts
how many times/how much resources each query uses that would be good.
The icing on the cake would be a prog that told us what queries were
doing full table scans and other expensive operations.

Regards,
Ben
--
Ben Edwards - Bristol, UK
If you have a problem emailing me use
http://www.gurtlush.org.uk/profiles.php?uid=4
(email address this email is sent from may be defunct)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Slow query examining 10 Million Rows, please help !!!

2007-06-19 Thread Dan Buettner

I would try adding an index on the freetags.tag column as you are querying
against that column with
 WHERE tag = 'shot'
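
For example (the index name is arbitrary):

ALTER TABLE freetags ADD INDEX tag_idx (tag);

The existing prefix index on raw_tag doesn't help here because the query
filters on tag.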

HTH,
Dan


On 6/19/07, Kishore Jalleda [EMAIL PROTECTED] wrote:


Hi everybody,
   we have this super slow query which is going through more
than 10 million rows to retrieve results. Here is the query and other
information. I tried a few things to make this faster, but failed, so any
help from you guys in making this faster is greatly appreciated.

# Query_time: 10  Lock_time: 0  Rows_sent: 1  Rows_examined: 11863498
SELECT DISTINCT object_id
FROM freetagged_objects INNER JOIN freetags ON (tag_id = id)
WHERE tag = 'shot'

AND object_type = 1
ORDER BY object_id ASC
LIMIT 0, 10

explain gives the following output


++-+++---+-+-+--+-+--+
| id | select_type | table  | type   | possible_keys | key
|
key_len | ref  | rows|
Extra|

++-+++---+-+-+--+-+--+
|  1 | SIMPLE  | freetagged_objects | ALL| PRIMARY   | NULL
|NULL | NULL | 9079381 | Using where;
Using temporary; Using filesort |
|  1 | SIMPLE  | freetags   | eq_ref | PRIMARY   | PRIMARY
|   4 | osCommerce.freetagged_objects.tag_id |   1 | Using where;
Distinct|

++-+++---+-+-+--+-+--+


mysql show create table freetagged_objects;

| freetagged_objects | CREATE TABLE `freetagged_objects` (
  `tag_id` int(11) unsigned NOT NULL default '0',
  `tagger_id` int(11) unsigned NOT NULL default '0',
  `object_id` int(11) unsigned NOT NULL default '0',
  `tagged_on` datetime NOT NULL default '-00-00 00:00:00',
  `object_type` int(11) NOT NULL default '0',
  PRIMARY KEY  (`tag_id`,`tagger_id`,`object_id`),
  KEY `tagger_id_index` (`tagger_id`),
  KEY `object_id_tagger_id_index` (`object_id`,`tagger_id`),
  KEY `object_id_tag_id_index` (`object_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |


mysql show create table freetags;

| freetags | CREATE TABLE `freetags` (
  `id` int(11) unsigned NOT NULL auto_increment,
  `tag` varchar(30) NOT NULL default '',
  `raw_tag` varchar(50) NOT NULL default '',
  `suppress` tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `raw_tag` (`raw_tag`(10))
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |


Freetags table has like a million rows in it 
MySQL version 4.1.11 , server has 16GB RAM ..

Kishore Jalleda
http://kjalleda.googlepages.com/mysqlprojects



Re: Figuring out the difference in value between 2 fields

2007-06-06 Thread Dan Buettner

If they're both numeric fields, then it's as easy as:
SELECT field1-field2 AS difference FROM table;

and if you always want a positive number:
SELECT ABS(field1-field2) AS difference FROM table;

HTH,
Dan

On 6/6/07, Jason Pruim [EMAIL PROTECTED] wrote:


Okay, so I have been gooling all over trying to figure this out. I'm
sure it's easy enough to do, but I can't seem to find it.

All I want to do is figure out the difference between 2 fields. IE:

Field 1= 20
Field 2 =10
Difference between Field 1  2 is: 10

Any ideas?



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: identifying rows that have changed

2007-05-30 Thread Dan Buettner

Perrin, I like #3 the best.

#1 - it's not a good approach to hope your database keeps up.  There are
fairly common situations that can come up where you never know how long
something will take - unusually high traffic, table check and repair, a bulk
load into the same or another database on that db host, etc.

#2 - the flag is a good idea on the face of it, but in reality your process
may end up doing large numbers of table scans to find rows with the flag
set.

#3 allows you to keep track of exactly which rows need post-processing, and
potentially even track when the request came in vs. when it was processed,
if desired.

You could also get even closer to the near real-time desire by running the
process constantly and having it idle for 30 seconds, check for new rows,
idle, etc.

One interesting gotcha would be ensuring that whatever updates your
batch process does do not cause additional entries in the needs to be
post-processed table, causing an endless loop...  I'm sure there's a way
around it, like an extra column called is_post_process where your
trigger doesn't do its thing if that equals 1 in the update, or something
like that.  Or perhaps you only need an insert trigger - then you don't have
that problem.
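
A minimal sketch of the trigger/logging-table combination in #3 (table and
column names are made up):

CREATE TABLE change_log (
  src_table VARCHAR(64) NOT NULL,
  src_id INT UNSIGNED NOT NULL,
  logged_at DATETIME NOT NULL,
  KEY (src_table, src_id)
);

CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
FOR EACH ROW
  INSERT INTO change_log (src_table, src_id, logged_at)
  VALUES ('orders', NEW.id, NOW());

The batch job then reads and clears change_log instead of scanning the
source tables, and since the log rows commit in the same transaction as
the data, the uncommitted-timestamp problem goes away.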

Best,
Dan




On 5/30/07, Perrin Harkins [EMAIL PROTECTED] wrote:


Hi,

I'm working on a rewrite of a batch process that operates on a large
InnoDB database.  In the past, it would process the entire database
every night, but now the size of the data is making that impossible,
and there is a desire for the process to operate in near real-time, so
I'm rewriting it to work incrementally.  The idea is to run it from
cron every 5 minutes and process the data from the last 5 minutes.

The problem I'm having is identifying which rows have changed.  I'm
not concerned about deletes in this application, so I thought it would
be enough to just use automatic timestamps and keep track of the last
run time of the program, but then I realized that an uncommitted
transaction could come in later with a timestamp from earlier.  I
haven't seen any way to make the timestamp reflect the actual commit
time.

I have a few ideas of how to handle this:

1) Put in a 5-minute delay and hope this is long enough for all
transactions to commit.  This is simple, but the delay is not
desirable and there's no guarantee that transactions will all be
finished in this time window (although if they didn't it would
certainly indicate a problem for this system).

2) Use a boolean flag on every row in every table to indicate if it
has been seen yet.  This seems like a bad idea, since it would require
the batch process to do tons of updates to the source data as it
processes the rows.

3) Triggers to fill in a logging table.  This seems to be a common
approach.  I'm not sure what the best way to structure the logging
table is, since some of these tables have multi-column primary keys.
Suggestions here would be welcome.  This one is a lot of work, but
sounds pretty safe.

Can anyone offer advice or anecdotes about how you have handled
similar situations?

- Perrin

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: dump db without locking up the server

2007-05-22 Thread Dan Buettner

Tim, it's a gnarly problem that most DBAs struggle with in some form or
another, whether using MySQL or another database package.

If you're using only MyISAM tables, MySQL's free, included 'mysqlhotcopy'
script might work for you, as it's generally a bit faster than mysqldump in
my experience.  If you're on InnoDB, there is a commercial product that
offers live backups ('InnoBackup' I think).  No experience with that myself.

You could also look at setting up a second database server replicating from
the first, and run your backups off the second server.

There's also the subject of consistent snapshot vs. a simple serial backup
of your tables, which can be a tricky thing to work out satisfactorily
without complete database locks.
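
For InnoDB tables, mysqldump's --single-transaction option takes a
consistent snapshot without blocking writers; a sketch (the user, database,
and file names are illustrative):

mysqldump --single-transaction -u backup_user -p mydb > mydb_backup.sql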

Dan





On 5/22/07, tim h [EMAIL PROTECTED] wrote:


Is there a safe way to dump/backup a live database without disrupting
service?  When I run mysqldump the whole server comes to a crawl and
queries start taking 60+ seconds to complete.

Is there a way to make mysqldump run at low priority?
Worst case scenario is I miss the queries that took place during the
backup, right?

Tim



Re: Weird connection timed-out

2007-05-15 Thread Dan Buettner

Thanks Jerome.

With the high number of aborted_clients, it seems like you might have
networking issues:
http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html

Dan


Re: Issue with locking and INSERT DELAYED

2007-05-15 Thread Dan Buettner

Hi Edoardo -

I think you've been misinformed; in general, MyISAM tables do not support
simultaneous read and write operations.  MyISAM is a multiple reader/single
writer, table locking design (concurrent inserts are possible only while
the table has no free space left by deleted rows).  You may want to switch
to InnoDB tables for that functionality.

http://dev.mysql.com/doc/refman/5.0/en/locking-issues.html
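
The switch itself is a single statement, though it rewrites the whole
table, so expect it to take a while on a large CDR table (a sketch):

ALTER TABLE cdr ENGINE=InnoDB;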

Dan




On 5/15/07, Edoardo Serra [EMAIL PROTECTED] wrote:


Hi Guys,
we have a MySQL server acting as a backend for a VoIP provider.

We're using this MySQL server to collect CDRs and to extract some easy
reports from them (including web access to CDRs for customers)

CDRs are inserted directly from Asterisk switches when a call ends.
We're using INSERT DELAYED to store CDRs because we don't want a report
query to get a lock on the CDR table and prevent CDRs from being
inserted immediately (I need the query to return immediately to avoid
strange interactions with Asterisk)

1) I see that MyISAM tables should support INSERT and SELECT queries
running simultaneously, but I was getting some INSERTs locked during a
slow SELECT (I don't think the CDR table has any free space in it because
it's used just to insert rows)

2) I switched to INSERT DELAYED to solve the problem but sometimes I get
some queries locked again

I did a SHOW FULL PROCESSLIST when I had some locked queries, here are
running threads
(I omitted sleeping threads, renumbered thread ids and omitted some
columns)

[...]
Id - Command - State - Info

1 - Query  - Locked - SELECT count(*) as missed FROM cdr WHERE
dialcause <> 'ANSWER' AND (dst='2876543' OR 0) AND DATE(calldate)=CURDATE()

2 - Delayed insert - upgrading lock

3 - Query   - Locked - SELECT count(*) as missed FROM cdr WHERE
dialcause <> 'ANSWER' AND (dst='' OR 0) AND DATE(calldate)=CURDATE()

4 - Query - Locked - SELECT count(*) as missed FROM cdr WHERE
dialcause <> 'ANSWER' AND (dst='' OR 0) AND DATE(calldate)=CURDATE()

5 - Delayed insert - Waiting for INSERT

6 - Query - Sorting result - SELECT DATE_FORMAT(calldate, '%d-%m-%Y
%H:%i:%s') AS data, accountcode AS utente, dst AS numero, billsec AS
secondi, usercost FROM cdr WHERE cdIdCompany = '' AND calldate BETWEEN
'2007-05-15 00:00:00' AND '2007-05-15 23:59:59' AND carriercost > 0 AND
lastapp != 'SMS' ORDER BY calldate ASC

7 - Query - NULL - SHOW FULL PROCESSLIST
[...]

Thread #6 is running the slow report query but other SELECTs are Locked
(1, 3, 4)

I see thread #2 is 'upgrading lock'; is that locking the other SELECTs?
Why is that happening? Shouldn't MyISAM support INSERTs without read
locking the tables?

I'm using MySQL 5.0.27 on a gentoo

Thanks in advance for your help

Regards

Ing. Edoardo Serra
WeBRainstorm S.r.l.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: InnoDB dropping records / MyISAM working as it should

2007-05-15 Thread Dan Buettner

Hi Kenneth -

it appears that you need to use an explicit 'commit' command when using
InnoDB tables and Python.

Something like this:

    try:
        cursor.execute("INSERT INTO Test1 (s1, i1) VALUES ('Now is the time', 5)")
        db.commit()

Found this on http://www.serpia.org/mysql

HTH,
Dan


On 5/15/07, Kenneth Loafman [EMAIL PROTECTED] wrote:


Folks,

Here's an interesting problem for you.  I found a problem that did not
make any sense, and in diagnosing the problem I found an issue with
InnoDB vs MyISAM, so I wrote a short script to test it.  The test case
is a simple Open, Insert, Close series repeated 5 times with both
engines.  The results should be absolutely identical, except for the
timestamp, but you can see the results below are not.  The InnoDB engine
is dropping all but the last insert.

mysql --version yields mysql Ver 14.12 Distrib 5.0.24a, for
pc-linux-gnu (x86_64) using readline 5.1, running on Ubuntu Edgy.

Before I report this as a bug, can anyone see anything obvious that I'm
missing?

The attached test case should generate:

Database engine is 'myisam'
(1L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(2L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(3L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(4L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(5L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
Database engine is 'innodb'
(1L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(2L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(3L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(4L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(5L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)

Instead, it generates:

Database engine is 'myisam'
(1L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(2L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(3L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(4L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
(5L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)
Database engine is 'innodb'
(5L, datetime.datetime(2007, 5, 15, 14, 27, 59), 'Now is the time', 5)


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Weird connection timed-out

2007-05-14 Thread Dan Buettner

Hi JM -

Can you send us a few things?

1 - the exact error message you get
2 - the output of SHOW VARIABLES; from a mysql session
3 - the output of SHOW STATUS; from a  mysql session

What have you tried so far in terms of troubleshooting that has been
unsuccessful?
Any recent changes on the machine(s) in question?

Also, not to belabor the point, but how do you know there are no network or
firewall issues?

Dan

On 5/14/07, JM  [EMAIL PROTECTED] wrote:


after trying it again and again... maybe for the 5th time... PHP is now
able to connect.


On Monday 14 May 2007 22:34, JM wrote:
 Hi,

   I'm using MySQL-5.0.19-0 and for some reason, whenever PHP connects to
 the DB, the server won't respond immediately, causing the connect to time
 out.  No firewall issues and no network related issues.  The only thing
 that I noticed is that the server's average load is 60%.

 Thanks,

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Frequently MyISAM TABLE corruption.....Pls help

2007-05-10 Thread Dan Buettner

Some additional resources for fixing corrupt MyISAM tables:
http://dev.mysql.com/doc/refman/5.0/en/myisam-table-problems.html
http://dev.mysql.com/doc/refman/5.0/en/myisamchk.html
http://dev.mysql.com/doc/refman/5.0/en/table-maintenance.html , especially:
 http://dev.mysql.com/doc/refman/5.0/en/repair.html

I don't know for sure as it will depend on your data and your indices, but
with 580 MB of data, 122000 records, and 22 indices, I would expect that
your index file should be larger than it is (just 32 MB).  I would suggest
that perhaps your index file is corrupt and needs to be rebuilt - look at
the stage 2 and stage 3 procedures mentioned in the repair page (last URL
above).  Be sure to make backup copies!  Read it and understand it before
you do it!
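
For example, from the mysql client (a sketch; back up the .MYD and .MYI
files first):

CHECK TABLE table1 EXTENDED;
REPAIR TABLE table1;
REPAIR TABLE table1 USE_FRM;  -- last resort: rebuilds the index from the
                              -- table definition if the .MYI is unusable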

Best of luck,
Dan


On 5/10/07, Nilnandan [EMAIL PROTECTED] wrote:



Hello Gerald,

Data_length    Max_data_length      Index_length
596483288      281474976710655      33758208

580 MB is the table size and 32 MB is the index size.
The default maximum MyISAM size is 4GB.

Now, tell me, where is the issue?

regards,
Nilnandan Joshi
DBA-SDU
Juriscape



Gerald L. Clark-2 wrote:

 Nilnandan wrote:
 Hello all,

 I have one server which has mysql 5.0.27 installed. There is one table
 named table1.
 That table has 122000 records. It has 114 fields and 22 indexes.

 Now this table keeps getting corrupted. I have tried to find the solution
 but I couldn't. Please help me ASAP. I have used the CHECK and REPAIR
 options; I have given the output here.

 070509  4:06:17 [ERROR] /usr/sbin/mysqld: Table 'table1' is marked as
 crashed and should be repaired
 070509  4:06:17 [ERROR] /usr/sbin/mysqld: Sort aborted
 
 How big is the index file? the data file?
 Has either reached the file size limit of your filesystem,
 or the default maximum MyISAM size?


 --
 Gerald L. Clark
 Supplier Systems Corporation

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
View this message in context:
http://www.nabble.com/Frequently-MyISAM-TABLE-corruption.Pls-help-tf3715472.html#a10412877
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: --xml or --html output to file

2007-05-08 Thread Dan Buettner

Hi John -

using --xml or --html as an argument and redirecting to a file seems to work
here.  As in,

mysql -u me -psecret -D database --html -e "select * from that_table" > ~/test_file.html

HTH,
Dan



On 5/8/07, John Kebbel [EMAIL PROTECTED] wrote:


When I try using the --xml or --html option with a batch file using INTO
OUTFILE 'dirpath', the --xml or --html option seems to be ignored in
favor of the tab-delimited default. (If I get rid of the INTO OUTFILE,
xml or html displays fine in the terminal.)

I tried using the pager to write to a file from inside MySQL. I
succeeded, but it was table data. I couldn't figure out how to add the
--xml or --html options from inside the pager.

I tried the redirection operator from the command line, but I haven't
stumbled on the correct syntax if such a syntax does exist.

Does anyone know how to write an --xml or --html file from a SELECT
statement, either from the command line or from a batch file? This would
really be useful information.

Thanks in advance for reading or responding.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Dan Buettner

A few observations:

1 - if the table is in the InnoDB format, you aren't going to lock up their
server, as InnoDB doesn't do table locking.  SHOW TABLE STATUS LIKE
'tablename' will tell you.

2 - Ryan's mysqldump script looks useful - also, there's a little-used
option with mysqldump (--where) that lets you specify a where clause to get
just the records you want into the SQL file (see the sketch after point 3).

3 - since you're not operating on the server itself, but transferring over
the net, the time for the transfer could become a problem, especially if
you're not using InnoDB.  You could copy the data into a temp table and then
work with that to your heart's content, without tying up production tables.
Something like this:
DROP TABLE IF EXISTS _tmp_tablename;
CREATE TABLE _tmp_tablename LIKE tablename;
INSERT INTO _tmp_tablename SELECT * FROM tablename WHERE whatiwant blah blah
then use select into outfile, mysqldump, php etc. on the _tmp_tablename
table.
While this does involve copying lots of records, in my experience, this sort
of thing can be very fast, since it's all self-contained in the database
software.  Not having any indices on your temp table will help too.
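
Expanding on the --where option and the final CSV step (a sketch; column,
database, and file names are illustrative):

mysqldump -u user -p --where="created >= CURDATE() - INTERVAL 7 DAY" dbname tablename > weekly.sql

or, for CSV straight out of the temp table (note that INTO OUTFILE writes
the file on the database server itself):

SELECT * FROM _tmp_tablename
INTO OUTFILE '/tmp/weekly.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';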

HTH,
Dan


On 5/2/07, Brian Dunning [EMAIL PROTECTED] wrote:


I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
need to dump it in CSV format and zip the file.

This is not on my server, and it's in production, so I don't want to
risk testing different methods and possibly hanging up their server
for a period of time, so I wanted to seek advice here first to find
what's the best way to proceed.

I can easily use PHP to query the table for the results I want and
write a file line by line and then zip it, but I'm worried that might
take too long and hang up the machine. The other way to go is some
kind of sql dump command, which I guess would be faster, but not sure
how much control I'd have over the exact format of the file. Any
suggestions which way I should proceed? Not hanging up their server
is my prime concern.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Dan Buettner

MyISAM does table level locking, which is to say that read (select) and
write (insert/update/delete) cannot happen at the same time.  One will wait
for the other.

If your select takes 10 seconds, then any write operations will block for
those 10 seconds.  Other read processes should be unaffected, though perhaps
slightly slower depending on resources you are consuming.

InnoDB avoids the problem described above by implementing transactions and
row-level locking, so that reads can proceed while writes are happening in
many cases.  InnoDB does have disadvantages compared to MyISAM so it's not
always a no-brain switch.

Dan



On 5/2/07, Brian Dunning [EMAIL PROTECTED] wrote:


The table is MyISAM, does that matter?


On May 2, 2007, at 7:28 AM, Dan Buettner wrote:

 A few observations:

 1 - if the table is in the InnoDB format, you aren't going to lock
 up their server, as InnoDB doesn't do table locking.  SHOW TABLE
 STATUS LIKE 'tablename' will tell you.

 2 - Ryan's mysqldump script looks useful - also, there's a little-
 used option with mysqldump that lets you specify a where clause to
 get just the records you want into the SQL file.

 3 - since you're not operating on the server itself, but
 transferring over the net, the time for the transfer could become a
 problem, especially if you're not using InnoDB.  You could copy the
 data into a temp table and then work with that to your heart's
 content, without tying up production tables.  Something like this:
 DROP TABLE IF EXISTS _tmp_tablename;
 CREATE TABLE _tmp_tablename LIKE tablename;
 INSERT INTO _tmp_tablename SELECT * FROM tablename WHERE whatiwant
 blah blah
 then use select into outfile, mysqldump, php etc. on the
 _tmp_tablename table.
 While this does involve copying lots of records, in my experience,
 this sort of thing can be very fast, since it's all self-contained
 in the database software.  Not having any indices on your temp
 table will help too.

 HTH,
 Dan


 On 5/2/07, Brian Dunning [EMAIL PROTECTED] wrote:
 I have a huge MySQL table, 2.1 million records, 200MB. Once a week I
 need to dump it in CSV format and zip the file.

 This is not on my server, and it's in production, so I don't want to
 risk testing different methods and possibly hanging up their server
 for a period of time, so I wanted to seek advice here first to find
 what's the best way to proceed.

 I can easily use PHP to query the table for the results I want and
 write a file line by line and then zip it, but I'm worried that might
 take too long and hang up the machine. The other way to go is some
 kind of sql dump command, which I guess would be faster, but not sure
 how much control I'd have over the exact format of the file. Any
 suggestions which way I should proceed? Not hanging up their server
 is my prime concern.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?
 [EMAIL PROTECTED]






Re: basic architecture review?

2007-05-02 Thread Dan Buettner

Michael, this looks pretty decent overall.

I'm a big fan of fully descriptive table and column names, avoiding
abbreviations except where truly needed, so I personally would spell out
claim and claimant for example.  I also like to separate words in table
and column names with underscores, which you're already doing in most cases.
And finally, I like to have the table name be plural and the primary key be
singular_id.  So, users table will have user_id, for example (how I wish
Ruby on Rails did that).  I'd either rename your overview table to claims,
or change the id column to overview_id.

In the carrdocs table (two r's) you have a column named cardoc_id (one r).
No biggie but you'll scratch your head more than once as you write SQL that
doesn't work the first time.

One performance suggestion: add an index on each table for the claim_id
column.  This will greatly speed retrieval of material related to a given
claim/overview.  ALTER TABLE x ADD INDEX claim_id_idx (claim_id)

When you say "query the lot", what do you mean?  Get all related stuff in
a single SQL statement?  Possible, but maybe a bit messy, and not as easy to
maintain as a handful of routines that each get documents, emails,
pictures.  As you add more tables holding related material the SQL would
become unwieldy and you'd likely break it down later anyway.

HTH,
Dan



On 5/2/07, Michael Higgins [EMAIL PROTECTED] wrote:


Hello, list --

No problem, yet. ;-)

Wondering if anyone would have a suggestion to ensure better performance,
or
could point out any likely errors in the database outlined below.

Basically, I have digital pictures, scanned text/forms and emails that all
relate to information indexed in a separate DB with shipment_id. I don't
have any real experience with DB design, so any suggestions or things to
consider would be appreciated.

My thinking is that I create an overview with an id and store that id in
the
other tables so I can get all related documents. (My next question will be
how to query the lot in a single statement...)

All the tables are ENGINE=MyISAM and DEFAULT CHARSET=utf8. I don't know
squat about configuration parameters but an error in putting up the images
led me to change this line in my.cnf:

max_allowed_packet  = 4M

... because I don't know how to put up a binary in chunks, I guess. (I'm
using DBD::mysql and CGI in perl and inserting the uploaded file with a
placeholder in my SQL...)

Thanks in advance for any helpful suggestions, corrections or
clarifications. ;-)

Cheers,

Michael Higgins

# db info ###

+--+
| Tables_in_claims |
+--+
| carrdocs |
| claimsubs|
| emails   |
| overview |
| pictures |
+--+


mysql describe carrdocs;
+---++--+-+-++
| Field | Type   | Null | Key | Default | Extra  |
+---++--+-+-++
| cardoc_id | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id  | int(11)| NO   | | ||
| carr_doc  | mediumblob | YES  | | NULL||
| carr_doctype  | tinytext   | YES  | | NULL||
| carr_mimetype | tinytext   | YES  | | NULL||
+---++--+-+-++
5 rows in set (0.13 sec)

mysql describe claimsubs;
+--++--+-+-++
| Field| Type   | Null | Key | Default | Extra  |
+--++--+-+-++
| claimsub_id  | int(11)| NO   | PRI | NULL| auto_increment |
| claim_id | int(11)| NO   | | ||
| claim_doc| mediumblob | YES  | | NULL||
| clm_doctype  | tinytext   | YES  | | NULL||
| clm_mimetype | tinytext   | YES  | | NULL||
| clmdoc_name  | tinytext   | YES  | | NULL||
+--++--+-+-++
6 rows in set (0.01 sec)

mysql describe emails;
+--+-+--+-+-++
| Field| Type| Null | Key | Default | Extra  |
+--+-+--+-+-++
| email_id | int(11) | NO   | PRI | NULL| auto_increment |
| claim_id | int(11) | NO   | | ||
| email| text| YES  | | NULL||
+--+-+--+-+-++
3 rows in set (0.00 sec)

mysql describe overview;
+-+-+--+-+-++
| Field   | Type| Null | Key | Default | Extra  |
+-+-+--+-+-++
| claim_id| int(11)  

Re: Any issues migrating Solaris - Linux?

2007-04-26 Thread Dan Buettner

Martijn, I've run various versions of MySQL from 3.2 through 5.1 on various
platforms (Mac OS X, FreeBSD, RedHat/Fedora Linux, Windows, Solaris),
exporting and importing as needed, and haven't encountered any problems.
I've even transferred the MyISAM tables between machines in a few cases,
rather than wait on the export/import process.

Works great across platforms.

One thing to ensure is that any firewall or security settings you employ
allow traffic on MySQL's TCP port (3306 by default), assuming you are using
network connections.

Dan


On 4/26/07, Martijn van den Burg [EMAIL PROTECTED] wrote:


Dear list,

My company wants to migrate MySQL 5.0.18 from SPARC/Solaris 8 to
Intel/RedHat Enterprise Linux 4 update 4.

Are there any incompatibilities or snags to be expected (except for the
endian issue, which will be solved by exporting/importing the data)?


Thank you for your time,

Martijn

--
The information contained in this communication and any attachments is
confidential
and may be privileged, and is for the sole use of the intended
recipient(s). Any
unauthorized review, use, disclosure or distribution is prohibited. If you
are not
the intended recipient, please notify the sender immediately by replying
to this
message and destroy all copies of this message and any attachments. ASML
is neither
liable for the proper and complete transmission of the information
contained in this
communication, nor for any delay in its receipt.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Examples of commodity servers for MySQL

2007-04-26 Thread Dan Buettner

The smaller Dells, like the 28xx and 18xx series, are exactly what I
consider commodity hardware.  Other companies (Sun, HP, Gateway, IBM, Apple,
others) make comparable equipment at comparable prices.

Whether you need to spend the money on redundant power supplies and a
redundant drive setup is up to you - will depend on how well you and the
system you are building will tolerate machines going belly up due to
hardware failure.

Dan



On 4/26/07, lightbulb432 [EMAIL PROTECTED] wrote:



What exactly are examples of commodity servers? I know what
characteristics
they have, but could somebody point out several examples that'd be used in
a
MySQL scale-out? e.g. What do you use?

Also, would these servers be 1U or other configurations that take up very
little room in a rack?

Would something like http://www.pcmag.com/article2/0,1759,1645657,00.asp be
considered a commodity server? With a price tag of $6K, I'd guess no.
--
View this message in context:
http://www.nabble.com/Examples-of-commodity-servers-for-MySQL-tf3652386.html#a10202903
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: advice for blob tables?

2007-04-20 Thread Dan Buettner

Michael, here's what I can tell you -

Some people will advise you against storing documents in tables - claim it
makes the data harder to serve up, etc.  I agree it's quite simple to set up
a folder structure with your files on the filesystem and point Apache or IIS
to it while tracking metadata in MySQL, but there may well be other
destinations/uses for your documents where actually having them in BLOBs
would be quite handy.

There's no technical reason you can't do it, obviously, and I worked for
several years at a midsize newspaper where we stored literally everything in
gigantic Sybase databases.  Story, ad, page layouts, postscript graphics
files, etc.  Everything.  And by and large it worked quite well.  Nearly 1TB
by the time I left, and a colleague at another newspaper had near 3TB.

A big plus was the centralized workflow and tracking it allowed, but that
will depend largely on the quality of the application software you have.

At any rate - based on my experience with the Sybase system I managed, I
would advise you to consider this when designing your database: instead of
having one gigantic table to store every document, try to design a system
that allows for splitting the data across multiple identical tables.

You could do this with the MERGE engine in MySQL; that has MyISAM tables
underneath, with a view of sorts that presents all the underlying tables as
one.
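
A minimal sketch of that layout (names are hypothetical; the underlying
tables must be identically defined MyISAM tables):

CREATE TABLE documents_001 (
  document_id INT NOT NULL,
  body MEDIUMBLOB,
  KEY (document_id)
) ENGINE=MyISAM;
-- documents_002, documents_003, ... created identically

CREATE TABLE documents (
  document_id INT NOT NULL,
  body MEDIUMBLOB,
  KEY (document_id)
) ENGINE=MERGE UNION=(documents_001) INSERT_METHOD=LAST;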

You could also do it by having multiple DocumentTable001 tables structured
identically, with another table to track the document tables as well as the
current insert path.  This is obviously more complex but doable.

MyISAM is not transactional; InnoDB is, but doesn't offer MERGE.  InnoDB can
be configured to store one file per table in the latest versions of MySQL,
and I'd recommend you go that route.

Having the data split across multiple table files (in MyISAM or InnoDB) will
allow you to check, optimize, and on bad days recover, your data in a more
incremental fashion than a single large table.  It would also potentially
allow you to distribute the data across multiple physical storage devices
for improved speed - and while that may not be a concern up front, some day
it likely will be if you intend to store things for long.  You could even
distribute data across multiple database servers or clusters if you
structured it properly.

You could also take advantage of MySQL's compressed table type for archival
data, which would save disk space and potentially improve read speed if your
data compresses well.

Anyway, hope this helps.  Let me know if I can answer any other questions
about such a setup.

Dan


On 4/20/07, Michael Higgins [EMAIL PROTECTED] wrote:


Hello, all --

I want to set up a database for document storage. I've never worked with
binary files stored in tables.

So before I just jump in and go like I usually do, does anyone have any
quick advice, things to consider, links, must-RTFMs or the like to help a
newbie out?

I feel like I need to become a bit more expert in database design but I
simply don't really know where to start. Like, what 'engine' should I use
in
this case? I haven't a clue

Thanks!

--
Michael Higgins



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Trying to open a big sql script

2007-04-19 Thread Dan Buettner

That's a much larger file than most any text editor would work with, in my
experience.  I'd give BBEdit on the Mac a try if nothing else, but my
expectations would not be too high.

For examining and altering a file that large I'd try grep, awk, sed, perl,
etc.

Barring that, one thing you might do is use perl or another scripting
language (or perhaps some utility software) to read the file in 100 MB or so
chunks and write out to a series of smaller files.  Edit the smaller files,
then use a shell command to cat them all back into one bigger file.
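
For example, with standard shell tools (a sketch; file names are
illustrative; -l splits on line boundaries, so statements stay intact as
long as each statement is on one line, as mysqldump emits them):

split -l 500000 backup.sql backup_part_
# edit the piece(s) you need, then reassemble:
cat backup_part_* > backup_edited.sql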

HTH,
Dan


On 4/19/07, molemenacer [EMAIL PROTECTED] wrote:



Hi all,

I have backed up a database using mysqldump and have a .sql script that is
over 2GB in size.  I am trying to open this file to view it and make some
changes.  I have not been able to find a program that can open this file.

Does anyone have any suggestions as to a program that can do this?

Thanks in advance
--
View this message in context:
http://www.nabble.com/Trying-to-open-a-big-sql-script-tf3606302.html#a10075570
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: database backup problem, since database over 60mb

2007-04-11 Thread Dan Buettner

Richard, there's no inherent problem around 60 MB - I routinely dump data
ranging from a few KB to a few GB.

One thing I note is that you are using the mysql command, which is the
interactive database client.  You want to use mysqldump, the client program
that dumps data from the database in SQL format.
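
That is, something like:

mysqldump -u user -p'password' databasename > backup_date.sql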

Why are you sure that a 46 MB dump doesn't contain everything?  Granted, you
have a dump from the past that was larger, but perhaps some unneeded data has
been removed?

Dan


On 4/11/07, Richard [EMAIL PROTECTED] wrote:


Hello, I've got a problem with mysql 5 on my debian server.

I've got a forum on this server, and until the database reached about 60
MB I could dump the database with either phpmyadmin or with the command:

mysql -u user -p'password' databasename > backup_date.sql

My last backup that worked was about 56 MB, but now since I've gone over
60 MB my backup files with phpmyadmin and mysqldump are only around
46 MB and therefore don't contain everything, and also when I do a

mysql -u user -p'password' databasename > backup_date.sql

it never finishes, and even if I wait for two hours the backup_date.sql
file is 0 MB ...

The forum runs well, and I use no compression; I save the file in plain
.sql text format.

Any ideas as to why it does this or how I can fix it would be great !
I've gone through my my.cnf file and I can't see any setting that seems
to block this. If you need any further information please let me know

Thanks in advance,

Richard

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Tips for migration

2007-04-11 Thread Dan Buettner

Hi Schalk -

Having just done this, it was not too hard.

We used Ruby on Rails (RoR) to create a schema file from the MSSQL database,
in order to recreate it in MySQL.  If your database is simple enough and/or
you have all the SQL used to create tables views etc., you don't need to do
this.

Make sure the tables created in MySQL have the columns in the same order as
the MSSQL tables.  Also check column types - we had some FLOATs in MSSQL
which were very precise and RoR initially only created the MySQL columns
with default precision, for example.  Simple fix but a gotcha.

We then bcp'd out the data from MSSQL into files, and ran a series of LOAD
DATA INFILE commands to import it into MySQL.

bcp out like so:
bcp DATABASENAME..TABLENAME out FILENAME -c -T

LOAD DATA INFILE guide:
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
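
With bcp's -c defaults (tab between fields, newline between rows), the
matching import is roughly (a sketch; the file path is illustrative):

LOAD DATA INFILE '/tmp/TABLENAME.bcp'
INTO TABLE tablename
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';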

Two challenges:

- if you have CR or LF characters in your data, this may make working with
LOAD DATA INFILE difficult.  If possible, you might run an update command prior
to BCPing to replace CR/LF characters with a special character sequence
(like ...---CRHERE---...) - then once you have it in MySQL, you could run a
command to replace the sequence with a CR or LF.

- MySQL ignores default column values when importing data using LOAD DATA
INFILE.  I ranted about this last week; it's terrible, awful, no good, very
bad database software behavior, but that's the way it is right now.  We
devised a series of commands to apply needed defaults to the columns we
needed, but in some cases (if you have a lot of NULL values for example)
this could be a lot of work.

HTH,
Dan

On 4/11/07, Schalk Neethling [EMAIL PROTECTED] wrote:


HI All,

Can anyone give me some pointers, help, point me to articles with
regards to transferring an MSSQL database to MySQL?

Thanks!
Schalk Neethling

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Poor performance with WHERE Value like 'XXX%'

2007-04-05 Thread Dan Buettner

Rod, I'm speculating here, but I think what's happening is that too many of
your entries have a PRODUCT_ID that starts with 'DRM' to make your index
useful.  When MySQL expects an index query to match more than around 30% of
all rows, it will instead proceed with a table scan, estimating that to be
less expensive overall.

To see more about your indices, run a 'show indexes from MyTable'.  The
'cardinality' will be of interest in this situation - when it is low
compared to the number of rows in your table, the index is not as effective
- when it is high, as primary keys are, it is more useful.

If the underscore characters in your query are real underscores and not
wildcards as used in a LIKE query (_ = any character), you can escape them
to speed up your query:
select PRODUCT_ID from My_Table where PRODUCT_ID like 'DRM\_002292\_1055%'

Dan


On 4/5/07, Rod Heyd [EMAIL PROTECTED] wrote:


Hi Folks,

I'm getting some unexpectedly poor performance from some queries using the
like operator.

The table is an InnoDB table with an index on the column appearing in a
like
conditional.

Explain gives me this:

[localhost]explain select PRODUCT_ID from My_Table where PRODUCT_ID like
'DRM_002292_1055%' \G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: My_Table
 type: ALL
possible_keys: PRODUCT_ID
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 87684
Extra: Using where
1 row in set (0.00 sec)

[localhost]



Now, this does not make any sense to me at all.  As you can see, the
PRODUCT_ID column is already indexed, and in fact, the pattern in the like
operator argument should reduce the number of rows to scan down to about
50,
and yet, mysql wants to examine practically the entire table.  Adding a
force index(PRODUCT_ID) seems to help a little bit, but still not like it
should.  Any ideas what could be happening?  I'm seeing the same behavior
on
both mysql 5.0.37 and an older system running mysql 4.0.


Thanks.

Rod Heyd



Re: Finding a record in a result set

2007-04-04 Thread Dan Buettner

James, one option would be to run a query to find the number of people in
the list ahead of him, rather than determining position within the result
set.

As in:

SELECT COUNT(*) FROM some_table
WHERE state = 'Maine'
AND hire_date < (SELECT hire_date FROM some_table
WHERE last_name = 'Smith'
AND first_name = 'Joe'
AND state = 'Maine')

Dan



On Mar 22, 2007, at 11:21 AM, James Tu wrote:

  Is there some quick way to do the following in MySQL?  (I know I
  can use PHP to search through the result set, but I wanted to see
  if there's a quick way using some sort of query)
 
  Let's say I know that Joe is from Maine.
  I want to do a query of all employees from Maine, ordered
 by hiring
  date, and figure out where Joe falls in that list. (i.e. which
  record number is he?)
 
  -James


Re: Finding a record in a result set

2007-04-04 Thread Dan Buettner

Then you just have to come up with some other criteria for determining who
should be counted as before or after Joe, which might well be the same
as the order by clause in whatever you're doing right now while examining
the result set.

I think your approach of examining the result set will work well in
testing.  In practice, with load, it could quickly become a real problem.
Databases are very good at providing answers about the data they contain,
while languages like PHP are very good at emitting HTML ... use each to its
fullest potential.  Looping through thousands of results provided by the
database to find one record is not efficient in terms of either the database
or PHP.

Dan


On 4/4/07, James Tu [EMAIL PROTECTED] wrote:


That is a nice idea, I'll have to keep it in my bag of tricks.
However, I don't know if it will work b/c there are probably others
that are hired on the same date...


On Apr 4, 2007, at 1:51 PM, Dan Buettner wrote:

 James, one option would be to run a query to find the number of
 people in
 the list ahead of him, rather than determining position within the
 result
 set.

 As in:

 SELECT COUNT(*) FROM some_table
 WHERE state = 'Maine'
 AND hire_date < (SELECT hire_date FROM some_table
 WHERE last_name = 'Smith'
 AND first_name = 'Joe'
 AND state = 'Maine')

 Dan


 On Mar 22, 2007, at 11:21 AM, James Tu wrote:
 
   Is there some quick way to do the following in MySQL?  (I know I
   can use PHP to search through the result set, but I wanted to see
   if there's a quick way using some sort of query)
  
   Let's say I know that Joe is from Maine.
   I want to do a query of all employees from Maine, ordered
  by hiring
   date, and figure out where Joe falls in that list. (i.e. which
   record number is he?)
  
   -James



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Handling of NULL values when importing data from SQL Server BCP files

2007-03-28 Thread Dan Buettner

In the category of terrible, horrible, no good, very bad (but at least
documented) software behavior, I bumped into this today:
http://bugs.mysql.com/bug.php?id=14770
where the LOAD DATA INFILE command does not respect the default value of a
column if no value is supplied in the file.  Instead, it assigns zero to
numeric columns, empty string to character columns, etc., per
http://dev.mysql.com/doc/refman/5.0/en/load-data.html

This is awful!  I mean, it's documented behavior and all, but it's still
just bloody awful!

Has anyone else found a graceful solution to this problem?  The thing is, my
data files may have real zeros in numeric columns, whereas NULL of course
is used to indicate an unknown value.  When I migrate this application from
SQL Server to MySQL in the next couple of weeks, suddenly I'll have a lot of
zeros where I previously had NULLs.
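
One possible workaround (a sketch; requires MySQL 5.0.3+, where LOAD DATA
supports user variables and a SET clause, and assumes an empty field in the
bcp file means NULL; column names are hypothetical):

LOAD DATA INFILE '/tmp/mytable.bcp'
INTO TABLE mytable
(col_a, @raw_b)
SET col_b = IF(@raw_b = '', NULL, @raw_b);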

Dan


Re: about limit

2007-03-20 Thread Dan Buettner

Another solution might be a derived table (MySQL does not allow LIMIT
directly inside an IN subquery):

SELECT t.*
FROM tablename t
LEFT JOIN (SELECT id FROM tablename WHERE some criteria
           ORDER BY something LIMIT 20) AS first20 ON t.id = first20.id
WHERE first20.id IS NULL
ORDER BY something

Dan

On 3/20/07, Rolando Edwards [EMAIL PROTECTED] wrote:


SELECT ... FROM ... LIMIT 20,100000000;

100 million is a little exaggeration,
but you can use other reasonably high numbers.

Remember, LIMITs are zero-based not 1-based.

- Original Message -
From: nikos [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Tuesday, March 20, 2007 12:50:49 PM (GMT-0500) Auto-Detected
Subject: about limit

Hello list.

Does anybody know how to select not the LIMIT 20 records but the rest
of them?

MySQL version is 4.1.21-standard and I can't make a VIEW.

Thank you

Nikos

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: How do I alter the result of a select

2007-03-07 Thread Dan Buettner

Brian, the online MySQL documentation is very complete and easy to read.
That said, you do kind of have to know what you're looking for!  I'm not
sure what to recommend for a guide to beginning SQL, sorry, others may have
some thoughts.

You are going down the right road with an aggregate function (grouping).
What you want is slightly tricky, actually, but MySQL makes it easy with the
GROUP_CONCAT function.
http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html

Here's a query you can run to get the output you specified:

SELECT module_id AS 'Module ID',
GROUP_CONCAT(participant_answer ORDER BY question_id SEPARATOR ' ') AS
'Participant Answers'
FROM table_name
WHERE email = '[EMAIL PROTECTED]'
GROUP BY module_id
ORDER BY module_id

HTH,
Dan

On 3/7/07, Brian Menke [EMAIL PROTECTED] wrote:


MySQL 5.x

I have a table that looks like this:

module_id question_id email participant_answer
2 2.1 [EMAIL PROTECTED] a
2 2.2 [EMAIL PROTECTED] b
2 2.3 [EMAIL PROTECTED] c
2 2.4 [EMAIL PROTECTED] d
2 2.5 [EMAIL PROTECTED] e
1 1.1 [EMAIL PROTECTED] a
1 1.2 [EMAIL PROTECTED] c
1 1.3 [EMAIL PROTECTED] d
1 1.4 [EMAIL PROTECTED] b
1 1.5 [EMAIL PROTECTED] d
5 5.1 [EMAIL PROTECTED] a
5 5.2 [EMAIL PROTECTED] c
5 5.3 [EMAIL PROTECTED] d
5 5.4 [EMAIL PROTECTED] b
5 5.5 [EMAIL PROTECTED] d

Being an SQL novice, whenever I run any kind of selects, for example

select *
from table_name
where email = '[EMAIL PROTECTED]'

I get the results in rows just like you would see above, which is what I
would expect. What I really need is this format:

Module ID   Participants Answers

1   a b c d e
2   a b c d e
5   a c d b d

Instead of 1 row for each listed module id. I tried grouping by module_id
such as

select *
from table_name
where email = '[EMAIL PROTECTED]'
group by module_id

But that ended up just giving me 1 row with one answer. I think the
solution
is grouping somehow, but I'm not quite experienced enough to put it all
together.

Any help is greatly appreciated, even if it's just a pointer to some mysql
docs.

Thanks in advance!

-Brian Menke


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: MySQL Daylight Savings Time Patch

2007-02-20 Thread Dan Buettner

NTP won't solve this problem for you.

NTP, as well as most computer clocks, know nothing about daylight savings
time, or about time zones.  What they know is how many seconds have elapsed
since the epoch.  The epoch, in the case of most UNIX-based OSes, is
midnight January 1, 1970.  I think Windows is the same.  The original Mac OS
was 1/1/1904, for example.

It's up to the operating system to apply rules that determine that X number
of seconds (as reported by the clock hardware) since the epoch translates to
some human time, based on local settings for time zone and with any daylight
savings time rules for that time zone applied.

My understanding is that MySQL needs no patch, but your underlying OS most
likely does.  I know there have been patches issued for Solaris 2.x, 9 and
10, Windows XP, and Mac OS X 10.4, and almost certainly others.

HTH,
Dan


On 2/20/07, Chris White [EMAIL PROTECTED] wrote:


Sun, Jennifer wrote:
 Any answers for the question below ?

 Is there a DST patch for MySql 4.0.20?   Thanks.

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Friday, February 09, 2007 9:30 AM
 To: mysql@lists.mysql.com
 Subject: MySQL Daylight Savings Time Patch

 Is there a DST patch for MySQL 4.0.x series?

 I've been getting scary emails from our sys and net admins about
 impending
 doom.

 Thanks,

 David

If you're using NTP then what's the problem?  Sync to one of the ntp
pools, boom your clocks are updated, MySQL uses system time and yay.
I'm fairly sure you could sync 500 servers this way.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Minimum hardware requirements

2007-02-14 Thread Dan Buettner

Rich, one consideration is of course disk space - you'll want to make sure
that 2.5 billion * row length will fit on your disks.

Offhand, you've got (referencing
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html)
78 bytes in 3 varchars (50 + 10 + 15, + 3 for internal use)
16 bytes in 4 ints
12 bytes in 3 floats
---
106 bytes max per record

106 bytes * 2.5 billion records = 246 GB

Any indices you add to your data will consume space on top of that, and
you'll need space available for temp tables etc. as well.  Personally, I
would double that 246 GB as a safe starting point.  With that much data you
will want to consider indexing only prefixes (first few characters) of the
varchar columns, to make indexes smaller  more likely to fit in RAM.  That
being said, with this much data, loading the machine up on RAM will help
obviously.

Don't know your requirements but a MyISAM table might treat you better as in
my experience that storage engine excels at raw speed for something like
this.  You would consume more disk space but speed might also improve if you
used fixed-length rows by declaring CHAR instead of VARCHAR.  MyISAM has a
max table size of around 64TB but the limitation is often the filesystem;
MERGE tables could help work around that.  If the data is not going to
change then an archive (compressed) table might help as well, as the data
will consume less disk space and therefore can be read off disk faster.

But if you need transactions etc. then of course MyISAM is out.

HTH,
Dan


On 2/14/07, richard [EMAIL PROTECTED] wrote:



Hi,

I have a table (structure below) which will hold 2.5 billion rows. I'm
currently choosing the hardware I'll need. Does anybody know what the
minimum spec of machine I could comfortably use is likely to be? I
imagine the table will have to be InnoDB, split across a number of files.
It will also need careful indexing to be able to access it rapidly. I
was thinking of something along the lines of the Dell PowerEdge 1950 or
2950?

TIA
Rich


mysql show columns from table1;
++-+--+-+-+---+
| Field  | Type| Null | Key | Default | Extra |
++-+--+-+-+---+
| p  | varchar(50) | YES  | | NULL|   |
| id | int(11) | YES  | | NULL|   |
| call_1 | int(11) | YES  | | NULL|   |
| prob_1 | float   | YES  | | NULL|   |
| call_2 | int(11) | YES  | | NULL|   |
| prob_2 | float   | YES  | | NULL|   |
| call_3 | int(11) | YES  | | NULL|   |
| prob_3 | float   | YES  | | NULL|   |
| coh| varchar(10) | YES  | | NULL|   |
| ana| varchar(15) | YES  | | NULL|   |
++-+--+-+-+---+


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Minimum hardware requirements

2007-02-14 Thread Dan Buettner

Sorry for the double-post, but another thought if it's not obvious already:
with such a large dataset, making sure to employ striping and high-RPM disks
will be important for speed.  I'd recommend looking at RAID 1+0 with dual
SCSI channels and 15K RPM disks if you're looking in the Dell PowerEdge
series.  The more disks you can stripe across, the better (to a point of
course).  Multiple SCSI channels help too.  RAID 5 is pretty effective too
for read speed.

Dan


On 2/14/07, richard [EMAIL PROTECTED] wrote:



Hi,

I have a table (structure below) which will hold 2.5 billion rows. I'm
currently choosing the hardware I'll need. Does anybody know what the
minimum spec of machine I could comfortably use is likely to be? I
imagine the table will have to be InnoDB, split across a number of files.
It will also need careful indexing to be able to access it rapidly. I
was thinking of something along the lines of the Dell PowerEdge 1950 or
2950?

TIA
Rich


mysql show columns from table1;
++-+--+-+-+---+
| Field  | Type| Null | Key | Default | Extra |
++-+--+-+-+---+
| p  | varchar(50) | YES  | | NULL|   |
| id | int(11) | YES  | | NULL|   |
| call_1 | int(11) | YES  | | NULL|   |
| prob_1 | float   | YES  | | NULL|   |
| call_2 | int(11) | YES  | | NULL|   |
| prob_2 | float   | YES  | | NULL|   |
| call_3 | int(11) | YES  | | NULL|   |
| prob_3 | float   | YES  | | NULL|   |
| coh| varchar(10) | YES  | | NULL|   |
| ana| varchar(15) | YES  | | NULL|   |
++-+--+-+-+---+


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Performace of a database with a lot of tables

2007-02-09 Thread Dan Buettner

Olexandr, I fear that model may not scale well.  Certainly having data
spread across so many different tables reduces the effectiveness of memory
caching.

I recommend you alter the design to have 30 tables total, each with a column
for user id, rather than 30 tables per user.  This may improve performance
by allowing memory caching to be more effective.  If you index the user id
field, performance should remain good even though the tables will be larger.
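
A minimal sketch of what I mean (table and column names are hypothetical):

CREATE TABLE posts (
  post_id INT NOT NULL AUTO_INCREMENT,
  user_id INT NOT NULL,
  body TEXT,
  PRIMARY KEY (post_id),
  KEY user_id_idx (user_id)
) ENGINE=InnoDB;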

Dan


On 2/9/07, Olexandr Melnyk [EMAIL PROTECTED] wrote:


Hello,

I am working on a hosting solution for a popular open source script, where
users can sign up and get their own forum. We use InnoDB, and each user has
their own set of tables (around 30); there are currently around 500 users
registered. Despite the large number of tables, the database is small,
around 50 MB.

The server is a dual Xeon with 4 GB of RAM, and MySQL uses all the memory
available on the server, so there's nothing left for Apache.

As the user base grows, what steps would you suggest in order to keep
database performance good and make RAM usage more efficient? The former
is more important.


Olexandr Melnyk,
http://omelnyk.net/



Re: Questions about delete and optimize

2007-02-01 Thread Dan Buettner

Ian, based on your needs (regularly deleting everything more than X
months old), I recommend you look into using the MERGE engine.

Essentially, it is multiple MyISAM tables that appear as one, and
lopping off the oldest data is as simple as redefining the MERGE and
then dropping the oldest table.

http://dev.mysql.com/doc/refman/5.0/en/merge-storage-engine.html

They're not perfect but they may do what you want, without the problem
you currently have of huge tmp tables filling up your partition.
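
A sketch of the monthly rollover (table names are hypothetical; the
underlying tables must be identically defined MyISAM tables):

ALTER TABLE log_merged UNION=(log_2006_12, log_2007_01, log_2007_02);
DROP TABLE log_2006_11;  -- drops the oldest month instantly, no OPTIMIZE needed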

Dan


On 2/1/07, Ian Barnes [EMAIL PROTECTED] wrote:

Hi,

We are looking at various methods by which we can effectively and efficiently
delete lots of rows from a database and then optimize it. Our main concern
is disk space - the partition we are working with is only 12 GB, and
our databases vary in size from 1 GB (not a problem) to 11 GB. In the example
below I will use one whose .MYD is 6.5 GB and .MYI is 2.7 GB. There are
around 28,900,000 rows in the database.

Once a month we run an automated program that deletes rows older than X
months and then we attempt to optimize the table in question. The delete
query we use is:
DELETE FROM table WHERE date < (current_date - interval 2 month). Now my
questions surrounding this are:

1.) Is it quicker to do a query where we say something like: DELETE FROM
table WHERE date < '2006-11-01' instead of WHERE date < (current_date -
interval 2 month)?
2.) Does the current way we do it use a tmp table that is written to disk?

Then, we run the simple optimize command: OPTIMIZE TABLE tablename and that
is normally where we come into the problem that mysql tries to create a tmp
file while optimizing and it runs out of space, and then corrupts the main
table. We need to run the optimize because after deleting all those rows,
the space isn't freed up until we run the optimize. So my other question is
can we do an optimize a different way, or is there some way that we can
insert and delete rows that would require less optimization?

Thanks in advance,
Ian




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help indexing this query.

2007-01-22 Thread Dan Buettner

Andrew, can you post the result of EXPLAIN query for your query?
Minus the FORCE INDEX too.  Also the structure of the other 2 tables
would be helpful as well.

Thanks,
Dan



On 1/22/07, altendew [EMAIL PROTECTED] wrote:


--- ptsSignups --
id int(20) No
mid int(20) No 0
ptsID int(20) No 0
pps double No 0
points double No 0
loginID varchar(255) No
emailConfirm longtext No
time timestamp Yes CURRENT_TIMESTAMP
reasonForDeny longtext No
status int(1) No 1

--- index (timeframe) ---

timeframe (mid,status,time)

--- query ---

SELECT SUM(s.pps) as earned,m.id,m.username
FROM ptsSignups s
FORCE INDEX(timeframe)
JOIN members m
ON s.mid=m.id
AND m.status='Member'
LEFT JOIN ptsContestExclude e
ON e.cid=1
AND e.mid=m.id
WHERE
s.status='2'
AND s.time >= '2004-06-08'
AND s.time < '2008-06-08' + INTERVAL 1 DAY
AND e.mid IS NULL
GROUP BY s.mid
HAVING earned > 0
ORDER BY earned DESC

--- problem ---

`ptsSignups` is a table listing everything my members have completed.
Sometimes I like to run contests to see who has earned the most. `members`
is a table that contains all my users. `ptsContestExclude` is a table of
members of whom I would like to exclude from the contest.

What I do first is group the table `ptsSignups` by member id, and calculate
a sum of how much they earned. Then I reorder that sum in Descending order
so the highest earned is on top.

This `ptsSignups` table contains 82752 rows and is 75KB big. It runs
extremely slow. I tried to create an index for it but it failed to increase
performance.

Any help is appreciated.
--
View this message in context: 
http://www.nabble.com/Help-indexing-this-query.-tf3059045.html#a8505554
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help indexing this query.

2007-01-22 Thread Dan Buettner

Andrew, couple of suggestions:

1 - where you use
s.status='2'
change it to
s.status=2
otherwise MySQL is likely casting your data from int to string, which
is slow and also precludes using an index.

2 - in this case, instead of using a left join, try using a subquery:
WHERE ...
AND s.mid NOT IN (SELECT mid FROM ptsContestExclude)
- or -
change your index around, from
UNIQUE (cid,mid)
to
UNIQUE (mid,cid)
due to the way MySQL uses indices you need the queried-upon column(s)
listed first(earlier) in the index.

These might speed things up

HTH,
Dan


On 1/22/07, altendew [EMAIL PROTECTED] wrote:


--- EXPLAIN ---

1 SIMPLE e system cid NULL NULL NULL 0 const row not found
1 SIMPLE s ref sid,status,timeframe status 4 const 20438 Using where; Using
temporary; Using filesort
1 SIMPLE m eq_ref PRIMARY,status PRIMARY 4 sc_72.s.mid 1 Using where

--- members ---

id  int(20)  No
first  varchar(255) No
last  varchar(255) No
username  varchar(25) No
email  varchar(255) No
password  varchar(25) No

PRIMARY id

--- ptsContestExclude ---

cid  int(20)  No  0
mid  int(20)  No  0

UNIQUE (cid,mid)

--- ptsSignups (all indexes) ---

PRIMARY id
INDEX (mid,ptsID)
INDEX (status,ptsID)
INDEX timeframe (mid, status, time)


Dan Buettner-2 wrote:

 Andrew, can you post the result of EXPLAIN query for your query?
 Minus the FORCE INDEX too.  Also the structure of the other 2 tables
 would be helpful as well.

 Thanks,
 Dan



 On 1/22/07, altendew [EMAIL PROTECTED] wrote:

 --- ptsSignups --
 id int(20) No
 mid int(20) No 0
 ptsID int(20) No 0
 pps double No 0
 points double No 0
 loginID varchar(255) No
 emailConfirm longtext No
 time timestamp Yes CURRENT_TIMESTAMP
 reasonForDeny longtext No
 status int(1) No 1

 --- index (timeframe) ---

 timeframe (mid,status,time)

 --- query ---

 SELECT SUM(s.pps) as earned,m.id,m.username
 FROM ptsSignups s
 FORCE INDEX(timeframe)
 JOIN members m
 ON s.mid=m.id
 AND m.status='Member'
 LEFT JOIN ptsContestExclude e
 ON e.cid=1
 AND e.mid=m.id
 WHERE
 s.status='2'
 AND s.time >= '2004-06-08'
 AND s.time < '2008-06-08' + INTERVAL 1 DAY
 AND e.mid IS NULL
 GROUP BY s.mid
 HAVING earned > 0
 ORDER BY earned DESC

 --- problem ---

 `ptsSignups` is a table listing everything my members have completed.
 Sometimes I like to run contests to see who has earned the most.
 `members`
 is a table that contains all my users. `ptsContestExclude` is a table of
 members of whom I would like to exclude from the contest.

 What I do first is group the table `ptsSignups` by member id, and
 calculate
 a sum of how much they earned. Then I reorder that sum in Descending
 order
 so the highest earned is on top.

 This `ptsSignups` table contains 82752 rows and is 75KB big. It runs
 extremely slow. I tried to create an index for it but it failed to
 increase
 performance.

 Any help is appreciated.
 --
 View this message in context:
 http://www.nabble.com/Help-indexing-this-query.-tf3059045.html#a8505554
 Sent from the MySQL - General mailing list archive at Nabble.com.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
View this message in context: 
http://www.nabble.com/Help-indexing-this-query.-tf3059045.html#a8505966
Sent from the MySQL - General mailing list archive at Nabble.com.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Strange query.

2007-01-10 Thread Dan Buettner

One of the most puzzling and challenging things to do with SQL can be
to show what's NOT there, as you're trying to do.

Many people opt to do such a report in their favorite scripting
language for this reason, as one can easily increment timestamps by a
given amount and re-do the query.  Can be resource intensive to re-do
the queries for each hour or whatever, but it's often pretty easy.

Another option is to create a table used specifically for joining to
get units of time with no corresponding entries in the other table.

You could create a table like so:
CREATE TABLE all_hours (
date_hour DATETIME,
KEY (date_hour) );

then populate it like so, with perl:
$counter = 0;
while $counter < 100 {
  print "INSERT INTO all_hours (date_hour) VALUES (
DATE_ADD(2000-01-01 00:00:00, INTERVAL $counter HOUR) );";
  $counter++;
}

Then you can join on that table.
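
For example, a rough sketch against the event table from your query
(untested):

SELECT h.date_hour, COUNT(e.timestamp) AS cnt
FROM all_hours h
LEFT JOIN event e
       ON e.timestamp >= h.date_hour
      AND e.timestamp <  h.date_hour + INTERVAL 1 HOUR
      AND e.sid = 1
WHERE h.date_hour BETWEEN '2007-01-10 04:00:00' AND '2007-01-11 03:00:00'
GROUP BY h.date_hour
ORDER BY h.date_hour;

COUNT(e.timestamp) counts only matched rows, so hours with no events
come back as 0 instead of disappearing.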

A million hour entries would be good for 114 years or so.  Fewer would
likely give somewhat better performance.

Dan


On 1/10/07, Paul Halliday [EMAIL PROTECTED] wrote:

That query doesn't return empty values. Just to clarify what I want as
the result:

My initial query was this,

mysql> select count(*),DATE_FORMAT(timestamp,'%H') AS hour FROM event
WHERE timestamp between '2007-01-10 04:00:00' and '2007-01-11
04:00:00' AND sid=1 GROUP BY hour;
+----------+------+
| count(*) | hour |
+----------+------+
|        4 | 04   |
|        5 | 06   |
|        5 | 07   |
|        1 | 08   |
|        7 | 09   |
|       12 | 10   |
|       73 | 12   |
|       31 | 13   |
|       50 | 14   |
+----------+------+
9 rows in set (0.03 sec)

What I am looking for is  0's for every empty result and up to the end
of the day.

Thanks.

On 1/10/07, Brent Baisley [EMAIL PROTECTED] wrote:
 You can't join on the result of calculations in the field selection. The 
result is not associated with any table. So the problem
 isn't so much with the date_format statement, but that you are joining on 
HOURS.hour the timestamp, not HOURS.hour the DATE_FORMAT.
 I would think you would be getting an error when you run your SELECT.
 Your group by can use the result of a calculation. So you may actually have 
two problems, since you are grouping on HOURS.hour, the timestamp, not on
 'hour', the alias name for the calculation result.
 I'm not sure why you don't just pull the hour from the timestamp either.

 SELECT COUNT(*), HOUR(timestamp) AS hour FROM HOURS
 LEFT JOIN event ON HOURS.hour=HOUR(timestamp)
 WHERE timestamp BETWEEN '2007-01-09 04:00:00'
 AND '2007-01-10 04:00:00' AND sid=2 GROUP BY hour

 - Original Message -
 From: Paul Halliday [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Wednesday, January 10, 2007 8:39 AM
 Subject: Strange query.


  Hi,
 
  I am trying to deal with empty values so that I can graph data over a
  24 hour period without gaps.
 
  I created a table called HOURS which simply has 0-23 and I am trying
  to do a join on this to produce the desired results. I think that the
  DATE_FORMAT in the query is screwing things up.
 
  The query looks something like this:
 
  SELECT COUNT(*),DATE_FORMAT(timestamp,'%H') AS hour FROM HOURS LEFT
  JOIN event ON HOURS.hour=hour WHERE timestamp BETWEEN '2007-01-09
  04:00:00' AND '2007-01-10 04:00:00' AND sid=2 GROUP BY HOURS.hour;
 
  Any help would be appreciated.
 
  Thanks.
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Strange query.

2007-01-10 Thread Dan Buettner

Ugh. My perl isn't quite right there.  Here's a bit better (e.g.
working) example:

If you create the table, then put this in populate_hours.pl:
 BEGIN
#!/usr/bin/perl
$counter = 0;
while ($counter < 100) {
 print "INSERT INTO all_hours (date_hour) VALUES (
DATE_ADD('2000-01-01 00:00:00', INTERVAL $counter HOUR) );\n";
 $counter++;
}
 END

then run
./populate_hours.pl | mysql -h host -u user -ppassword -D database

you'll have a table full of hours.

Dan



On 1/10/07, Dan Buettner [EMAIL PROTECTED] wrote:

One of the most puzzling and challenging things to do with SQL can be
to show what's NOT there, as you're trying to do.

Many people opt to do such a report in their favorite scripting
language for this reason, as one can easily increment timestamps by a
given amount and re-do the query.  Can be resource intensive to re-do
the queries for each hour or whatever, but it's often pretty easy.

Another option is to create a table used specifically for joining to
get units of time with no corresponding entries in the other table.

You could create a table like so:
CREATE TABLE all_hours (
date_hour DATETIME,
KEY (date_hour) );

then populate it like so, with perl:
$counter = 0;
while $counter < 100 {
   print "INSERT INTO all_hours (date_hour) VALUES (
DATE_ADD(2000-01-01 00:00:00, INTERVAL $counter HOUR) );";
   $counter++;
}

Then you can join on that table.

A million hour entries would be good for 114 years or so.  Fewer would
likely give somewhat better performance.

Dan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Help optimizing this query?

2007-01-08 Thread Dan Buettner

Brian, can you post the output of EXPLAIN for your query?
I.e.,
EXPLAIN <problem query>;


At first glance, your query should be able to use the 'stamp-source'
index since stamp is the first column indexed.  However, I wonder if
wrapping the NOW() - INTERVAL 14 DAY inside a DATE() would help.
MySQL may be casting your table data to a DATETIME for comparison,
which would be resource intensive.  Try this:

stamp >= DATE(NOW()-interval 14 day)

Your query will not be able to use the other index, 'country-source',
because 'source' is the second column of the index and you are not
querying against the 'country' column.  If you do not need 'country'
to be the first column (do you have other queries that use this index
and query against country?), you could redefine the index to put
'source' first.

As Michael pointed out you could also create a separate index on just
'source'.  Or as brian suggests an index combining the two columns
you're using here (I'd put 'source' first since you're doing a
constant comparison with it).  However, more indices = more overhead,
which is sometimes a problem, sometimes not.
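
For instance, something like this (the index name is just a
suggestion):

ALTER TABLE ip_addr ADD INDEX `source-stamp` (source, stamp);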

Dan

On 1/8/07, Michael Gargiullo [EMAIL PROTECTED] wrote:



-Original Message-
From: Brian Dunning [mailto:[EMAIL PROTECTED]
Sent: Sunday, January 07, 2007 1:12 PM
To: mysql
Subject: Help optimizing this query?

This is the query that's killing me in the slow query log, usually
taking around 20 seconds:

select count(ip) as counted,stamp from ip_addr where stamp >= NOW()-
interval 14 day and source='sometext' group by stamp order by stamp
desc;

Here is the table:

CREATE TABLE `ip_addr` (
   `ip` int(10) unsigned NOT NULL default '0',
   `stamp` date NOT NULL default '-00-00',
   `country` char(2) NOT NULL default '',
   `source` varchar(20) NOT NULL default '',
   PRIMARY KEY  (`ip`),
   KEY `country-source` (`country`,`source`),
   KEY `stamp-source` (`stamp`,`source`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Any help please?   :)

---

Just a thought? Put a normal index on source and another on stamp (not
combined).




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Data types and space needs

2007-01-05 Thread Dan Buettner

Olaf, not a silly question at all.  You can indeed save space by using
different forms of integer.

See
http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
and
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html


From that second page:


Storage Requirements for Numeric Types

Data Type   Storage Required
TINYINT 1 byte
SMALLINT2 bytes
MEDIUMINT   3 bytes
INT, INTEGER4 bytes
BIGINT  8 bytes

You may also be able to use UNSIGNED to extend the range of a column,
if you don't need to store negative values.  (And to answer the other
half of your question: the 10 in INT(10) is only a display width - it
does not change the storage size.)

HTH,
Dan

On 1/5/07, Olaf Stein [EMAIL PROTECTED] wrote:

Hi All,

I have somewhat of a silly question.
If I define a column as int it needs 4 bytes.
Do I gain anything space wise if I restrict the length to e.g. 10, int(10),
or is this only a logical restriction?


Thanks
Olaf



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Copying table

2007-01-05 Thread Dan Buettner

Guillermo -

You likely want mysqldump -
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
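
Run from the client machine, something along these lines (host names
and credentials are placeholders):

mysqldump -h server1 -u user -p dbname table1 table2 > tables.sql
mysql -h server2 -u user -p dbname < tables.sql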

Dan


On 1/5/07, Guillermo [EMAIL PROTECTED] wrote:

Hello, i have this problem:
 I have two databases on different servers. I need to copy the contents of some tables from Server1 
to Server2, so i tried using the "select into outfile" and then "load data", 
but the problem is that these commands create the files on the server, and i want/need to do that 
on a client machine... what should i do?

Guillermo



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Troubleshooting Stalls

2007-01-03 Thread Dan Buettner

Michael -

You should see all running queries in the output of SHOW PROCESSLIST.

MyISAM is a multiple-reader/single-writer design, also known as table
locking design - that is to say, you can have lots of threads reading
simultaneously, but only one writing.  Also, when one is writing,
reading threads block and wait for it.  The simplicity lends itself to
speedy operations in many cases, but sometimes you can run into
trouble with it, especially in high volume situations - as you have.

I'd suggest looking for an UPDATE/INSERT/DELETE operation in the
output of SHOW PROCESSLIST next time you see a stall - that is likely
your culprit.

Some workarounds include:
- using INSERT DELAYED syntax
- using LOW PRIORITY UPDATEs
- switching storage engine to InnoDB
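
The first two look like this, for example (table names are just
placeholders):

INSERT DELAYED INTO hits (url, ts) VALUES ('/index.html', NOW());
UPDATE LOW_PRIORITY counters SET total = total + 1 WHERE id = 1;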

HTH,
Dan


On 1/2/07, Michael Katz [EMAIL PROTECTED] wrote:

I have a large database, about a million records in some tables, and I
am trying to troubleshoot some stalls in our application.  We have
assumed that slow query speed causes the stalls, however, when my web
app stalls  I do not see the query in process list.  What we see is the
web page stall with a wait indicator and all other queries to the same
tables are in a locked state (using MyISAM) in process list, but our
query is not present in the list.  After 30sec to 1 minute the web page
will render and the locked queries will quickly complete.  My questions
are as follows:

Should a query that takes a long time show up in show full processlist
while it is executing?

If the query is not causing the stalls any clues where to look in the
connection code that could be causing table locks?

Thank You
Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Troubleshooting Stalls

2007-01-03 Thread Dan Buettner

SHOW PROCESSLIST should show local queries as well as remote - in my
experience, it has been utterly reliable in terms of showing all
current queries, regardless of origin.

InnoDB by default uses a single tablespace, which some people do
find inefficient because it never gets smaller even after deleting
data or dropping tables.  You can work around this by using the
innodb_file_per_table option.  ISTR that it was an unstable option
until recently, so if you're using an older release you might want to
consider/research carefully.

A couple of other things I thought of that may help you improve performance:
- using HIGH_PRIORITY for your SELECTs -
http://dev.mysql.com/doc/refman/5.0/en/select.html
- if you are experiencing problems due to long-running UPDATEs,
evaluating your indices may help.  Indices can be as helpful for
updates and deletes as they are for selects.
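
The HIGH_PRIORITY form is simply, for example (placeholder table name):

SELECT HIGH_PRIORITY * FROM mytable WHERE id = 42;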

Can you post the output of SHOW PROCESSLIST during a stall?  That
would help us help you, I think.

Dan



On 1/3/07, Michael Katz [EMAIL PROTECTED] wrote:

Dan Buettner wrote:
 Michael -

 You should see all running queries in the output of SHOW PROCESSLIST.

 MyISAM is a multiple-reader/single-writer design, also known as table
 locking design - that is to say, you can have lots of threads reading
 simultaneously, but only one writing.  Also, when one is writing,
 reading threads block and wait for it.  The simplicity lends itself to
 speedy operations in many cases, but sometimes you can run into
 trouble with it, especially in high volume situations - as you have.

 I'd suggest looking for an UPDATE/INSERT/DELETE operation in the
 output of SHOW PROCESSLIST next time you see a stall - that is likely
 your culprit.

 Some workarounds include:
 - using INSERT DELAYED syntax
 - using LOW PRIORITY UPDATEs
 - switching storage engine to InnoDB

 HTH,
 Dan

Thank You Dan. We have tried InnoDB but it is very inefficient in disk
usage and we can not use it many cases, since we install on customer
supplied hardware.

Does show full processlist show local connections and queries as well?
We only see remote queries in the list.

Thank You


 On 1/2/07, Michael Katz [EMAIL PROTECTED] wrote:
 I have a large database, about a million records in some tables, and I
 am trying to troubleshoot some stalls in our application.  We have
 assumed that slow query speed causes the stalls, however, when my web
 app stalls  I do not see the query in process list.  What we see is the
 web page stall with a wait indicator and all other queries to the same
 tables are in a locked state (using MyISAM) in process list, but our
 query is not present in the list.  After 30sec to 1 minute the web page
 will render and the locked queries will quickly complete.  My questions
 are as follows:

 Should a query that takes a long time show up in show full processlist
 while it is executing?

 If the query is not causing the stalls any clues where to look in the
 connection code that could be causing table locks?

 Thank You
 Mike

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ERROR 1114

2006-12-20 Thread Dan Buettner

Optimize table does not reclaim space for InnoDB tables in the shared
tablespace, FYI - only for those that live independently when using
the 'innodb_file_per_table' option.

On 12/20/06, Raj Shekhar [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

 In my configuration file there was a line that said:
 innodb_data_file_path = ibdata1:10M:autoextend:max:1900M

 This refers to the data file that contains all of my databases.  The
 number
 1900M is close to the 2G filesize limit on my linux server.  With all of
 the chatter about other causes of Error 1114, I'd missed that this file
 had grown to it's limit.


A small suggestion that you may find useful.  You could have used 'optimize
table' to free up some space.  To make it easier to run optimize table, you
can put the 'innodb_file_per_table' option into your my.cnf. This will create
each table in its own file/tablespace, thus making it easier for optimize
to run faster.  However, this option only affects new tables.  Old tables
would still live in the shared ibdata file.

--
raj shekhar
facts: http://rajshekhar.net | opinions: http://rajshekhar.net/blog
I dare do all that may become a man; Who dares do more is none.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: upgrading from mysql 4.0 to 4.1 : TIMESTAMP OTHER FEATURES

2006-12-20 Thread Dan Buettner

Regarding the format of TIMESTAMP columns, one of the user comments on
http://dev.mysql.com/doc/refman/4.1/en/timestamp-4-1.html
offers the solution below:

Posted by Kjell Arne Rekaa on April 14 2005 11:11pm
If you want the same view of a timestamp field in 4.1.x as it was in
in earlier mysql versions, without the delimiter characters in date
and time, simply add a +0 to the column name:
mysql> create table date (remember timestamp);
Query OK, 0 rows affected (0.03 sec)
mysql> insert date values ('2005051712');
Query OK, 1 row affected (0.00 sec)
mysql> select remember from date;
+---------------------+
| remember            |
+---------------------+
| 2005-05-17 12:00:00 |
+---------------------+
1 row in set (0.00 sec)
mysql> select remember+0 from date;
+------------+
| remember+0 |
+------------+
| 2005051712 |
+------------+
1 row in set (0.00 sec)

As far as your character set, I believe that is stored by table, so
your data should remain OK.

Another major consideration with 4.1 is that passwords changed
considerably from 4.0 to 4.1:
http://dev.mysql.com/doc/refman/4.1/en/password-hashing.html

HTH,
Dan



On 12/20/06, tere [EMAIL PROTECTED] wrote:

Hi! I would like to upgrade the database of my organisation from
mysql4.0.22 from 4.1. We use Debian.
I've read info in the manual, but i don't have things clear.

We process data of DB with scripts, and  I'm annoyed because the change
of format  of timestamp, is there any command in 4.1 to obtain  this
info in the previous format (4.0) ??? I want that out in 4.1 as
MMDDHHMMSS

Furthermore, my databases are in latin1, i've read that 4.1 take data in
utf8, but i don't understand reading manual how this affect to my info.

And to finish, do i have to keep more features in mind

Thanks

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: avg() AND limit

2006-12-18 Thread Dan Buettner

Yes, the LIMIT function affects number of rows returned, not number of
rows evaluated.

If you're on 4.1 or later, you could use a subselect.  Assuming you
have an ID field in your table, something like this:

SELECT AVG(cost)
FROM items
WHERE id IN (SELECT id FROM
             (SELECT id FROM items ORDER BY date DESC LIMIT 10) AS newest)

HTH,
Dan


On 12/18/06, Richard Reina [EMAIL PROTECTED] wrote:

Good day all!  I have query like the one below that I uses to get the average 
cost over the last six months.  As is it works fine but, however now I want to 
limit it to the last 10 rows in order to get a glimpse of the most recent cost 
data. If I add ORDER BY date DESC limit 10 the average does not change.  Does 
anyone know how I can achieve the desired result?

 Thanks in advance.

 SELECT AVG(cost)
 FROM items
 WHERE UNIX_TIMESTAMP(date) >= (UNIX_TIMESTAMP(CURDATE()) - 15724800)





Your beliefs become your thoughts.  Your thoughts become your words.  Your 
words become your actions.  Your actions become your habits.  Your habits 
become your values.  Your values become your destiny.  -- Mahatma Gandhi



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: avg() AND limit

2006-12-18 Thread Dan Buettner

I'm sure there is ... here's one way I can think of, a two-step
process but doable in 3.23 I think.  Use a server-side variable and a
combination of the LIMIT and OFFSET features.

SELECT id FROM items ORDER BY date DESC LIMIT 1 OFFSET 9 INTO @myid

then

SELECT AVG(cost)
FROM items
WHERE id >= @myid

this assumes that going by date and id is valid for your situation.
Hopefully you can tweak that a little if not.

Also, the LIMIT/OFFSET can be used as LIMIT 9,1 instead of LIMIT 1
OFFSET 9.  I don't know which form will be valid in 3.23, sorry!

-Dan



On 12/18/06, Richard Reina [EMAIL PROTECTED] wrote:

Dan,

 Thank you very much for the reply.  Is there a way to do it with version
3.23.54?

Dan Buettner [EMAIL PROTECTED] wrote:
 Yes, the LIMIT function affects number of rows returned, not number of
rows evaluated.

If you're on 4.1 or later, you could use a subselect. Assuming you
have an ID field in your table, something like this:

SELECT AVG(cost)
FROM items
WHERE id IN (SELECT id FROM
             (SELECT id FROM items ORDER BY date DESC LIMIT 10) AS newest)

HTH,
Dan


On 12/18/06, Richard Reina wrote:
 Good day all! I have query like the one below that I uses to get the
average cost over the last six months. As is it works fine but, however now
I want to limit it to the last 10 rows in order to get a glimpse of the most
recent cost data. If I add ORDER BY date DESC limit 10 the average does not
change. Does anyone know how I can achieve the desired result?

 Thanks in advance.

 SELECT AVG(cost)
 FROM items
 WHERE UNIX_TIMESTAMP(date) >= (UNIX_TIMESTAMP(CURDATE()) - 15724800)





 Your beliefs become your thoughts. Your thoughts become your words. Your
words become your actions. Your actions become your habits. Your habits
become your values. Your values become your destiny. -- Mahatma Gandhi




Your beliefs become your thoughts. Your thoughts become your words. Your
words become your actions. Your actions become your habits. Your habits
become your values. Your values become your destiny. -- Mahatma Gandhi


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Unlimited client connections for MySQL

2006-12-13 Thread Dan Buettner

GRANT (ALL|SELECT|INSERT|UPDATE|DELETE|etc) ON DATABASE.* TO
'user'@'%' IDENTIFIED BY 'password'

See http://dev.mysql.com/doc/refman/5.0/en/grant.html for details.

Note that localhost is considered as a special case, not included in
the wildcard %

HTH,
Dan

On 12/13/06, Brent Anderson [EMAIL PROTECTED] wrote:

Hello.

I'm developing a client application for several platforms that will
need to connect to a remote MySQL database. Unfortunately, MySQL
refuses connections from external IP's that aren't allowed and since
the clients using this will be on unknown IP addresses (their home
computers), I'm in a bit of a situation. How does one setup a MySQL
account with no IP restrictions?

Thanks,
Brent Anderson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: UNIQUE KEY vs NULLs

2006-12-11 Thread Dan Buettner

This is a feature - a NULL value is an undefined value, therefore two
NULL values are not the same.  Can be a little confusing but makes
sense when you think about it.

A UNIQUE index does ensure that non-NULL values are unique; you could
specify that your column not accept NULL values.
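
For example, a sketch against your table (note that any existing NULLs
would have to be cleaned up first):

ALTER TABLE Target
  MODIFY IMSI VARCHAR(15) ASCII NOT NULL,
  MODIFY IMEI VARCHAR(15) ASCII NOT NULL;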

Dan


On 12/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

I have an InnoDB table similar to this:

CREATE TABLE Target
(IMSI VARCHAR(15) ASCII,
 IMEI VARCHAR(15) ASCII,
 UNIQUE KEY (IMSI, IMEI));

After playing a bit with it, I managed to add duplicate records, if one of
the fields was a NULL:

+------+----------------+
| IMSI | IMEI           |
+------+----------------+
| NULL | 35195600126418 |
| NULL | 35195600126418 |
+------+----------------+


Is this a bug, or a feature? :-)

If it is a feature, than how can I assure uniqueness for a table in a sense
that won't allow such duplicates?

Thx

ImRe



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: numerical range

2006-12-08 Thread Dan Buettner

You could use triggers to check for sane values on fields and rollback
if not sane.
http://dev.mysql.com/doc/refman/5.0/en/triggers.html

You'd need to handle the rollback in your frontend app.
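
A minimal sketch, assuming a table `results` with a NOT NULL column
`result` and a sane range of 0-10.  MySQL 5.0 triggers cannot raise an
error directly, so forcing a NOT NULL violation is a common workaround:

DELIMITER //
CREATE TRIGGER results_sanity BEFORE INSERT ON results
FOR EACH ROW
BEGIN
  IF NEW.result NOT BETWEEN 0 AND 10 THEN
    SET NEW.result = NULL;  -- NULL into a NOT NULL column aborts the INSERT
  END IF;
END//
DELIMITER ;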

Dan


On 12/8/06, Christian High [EMAIL PROTECTED] wrote:

Is it possible to restrict the range that can be put in a floating point, or
other data type that allows decimal numbers. The data being stored will be
that of the result of chemical analysis. The result could very well be
2.41however it would never be
24.1 so i want to ensure that a simple typo like this is caught. the context
i am using this in would make validation at the front end very difficult so
i would like to use the tables definition to error on the insert rather than
except a value that is out of a realistic range.

Thanks,

Chris




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Fwd: RE: How to get started on MySQL

2006-12-08 Thread Dan Buettner

Looks like this was sent just to me - no knowledge of it myself, but
here you go.

-Dan

-- Forwarded message --
From: Nicholas Vettese [EMAIL PROTECTED]
Date: Dec 8, 2006 6:29 AM
Subject: RE: How to get started on MySQL
To: Dan Buettner [EMAIL PROTECTED]


What about Larry Ullman's MySQL Book, MySQL, Second Edition?
http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?z=y&EAN=9780321375735&itm=3

Nick

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query question...

2006-12-08 Thread Dan Buettner

Try this on for size:

SELECT DISTINCT id FROM tbl
WHERE id NOT IN (SELECT id FROM tbl WHERE action = 1)

The subselect will only work in 4.1 and later I think.

Dan


On 12/8/06, bruce [EMAIL PROTECTED] wrote:

hi...

i'm looking at what is probably a basic question.

i have a tbl with
  -id
  -action
  -status
  -date

ie:
 id   action   status   date
  1   0  1
  1   1  2
  1   2  3
-
  2   0  4
  2   2  5


i need a query to somehow get all id's that don't have an 'action=1', in
this case, id '2' would be the id that should be returned from the query...

however, i can't quite figure out how to create a query to select the items
that are the ones i'm looking for.

any help/thoughts would be appreciated!

thanks

-bruce


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Issues with MySQL x86_64 crashing

2006-12-08 Thread Dan Buettner

I see several values set to '18446744073709551615', which is an
insanely large number for any memory setting (16.7 million terabytes
unless my math is wrong; huge in any case).

There was another person on the list earlier this year who had a
similar problem with large numbers, IIRC.  I'd adjust those numbers
down for sure.

Dan


On 12/8/06, Philip Mather [EMAIL PROTECTED] wrote:

  Kevin Old wrote:
 On 12/8/06, Philip Mather [EMAIL PROTECTED] wrote:
 So something like 15G, that's not that bad. I'd run mtop as someone
 suggested and see if some query is hammering it, maybe some other
 process on the machine is hogging or going IO bound?

 Thanks.  We are watching the queries.  The pattern we're seeing now is
 any large query that takes more than a few seconds to execute causes
 incoming queries to stack up and not execute, which causes the mysql
 load to go higher.  We've seen a few times where mysql recovered after
 a large query started other queries to stack up.

 Keep in mind that we've been running some of these queries that are
 now having problems for over a year.  We were running on the same
 hardware with the 386 version of mysql and performance was awesome
 only using 2GB RAM (the max mysql would allow us to use).  Only after
 the switch to the x86_64 version are we seeing these problems.
Tried an optimize or maybe a myisamchk --check or a --analyze? Might
not be the underlying cause but might reduce the occurrences of pile-ups?

Maybe there's a hardware issue when using the 64 bit code, any RAID
involved? I was vaguely assuming it was a RedHat-a-like box of some
description?

<shrug />
Sounds like some other issue is just pushing MySQL over the edge, not
bumping into any ulimits are you?

Regards,
Phil

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: RE: query question...

2006-12-08 Thread Dan Buettner

Just add a DISTINCT:

SELECT DISTINCT t1.universityID FROM SvnTBL t1 WHERE  t1.universityID NOT IN
(SELECT t2.universityID FROM SvnTBL t2 WHERE t2.actionID = 2);

Dan


On 12/8/06, bruce [EMAIL PROTECTED] wrote:

hi peter.

thanks, the solution you gave me is close...!!

my actual data is:
mysql> select * from SvnTBL;
+--------------+----------+----------+---------------------+----+
| universityID | actionID | statusID | _date               | ID |
+--------------+----------+----------+---------------------+----+
|            1 |        1 |        0 | 2006-12-08 13:12:15 |  1 |
|            1 |        2 |        0 | 2006-12-08 13:12:15 |  2 |
|            1 |        3 |        0 | 2006-12-08 13:12:15 |  3 |
|            2 |        1 |        0 | 2006-12-08 13:12:15 |  4 |
|            2 |        3 |        0 | 2006-12-08 13:12:15 |  5 |
|            3 |        1 |        0 | 2006-12-08 13:12:15 |  6 |
|            3 |        6 |        0 | 2006-12-08 13:12:15 |  7 |
|            3 |        3 |        0 | 2006-12-08 13:12:15 |  8 |
|            3 |        4 |        0 | 2006-12-08 13:12:15 |  9 |
+--------------+----------+----------+---------------------+----+

if i do:
mysql> SELECT t1.universityID FROM SvnTBL t1 WHERE  t1.universityID NOT IN
(SELECT t2.universityID FROM SvnTBL t2 WHERE t2.actionID = 2);

i get,

+--------------+
| universityID |
+--------------+
|            2 |
|            2 |
|            3 |
|            3 |
|            3 |
|            3 |
+--------------+
6 rows in set (0.00 sec)

what i really want to get is:
+--------------+
| universityID |
+--------------+
|            2 |
|            3 |
+--------------+

which would be the unique 'id's.

i've tried to do a 'limit' and group, but i'm missing something...

thanks

-bruce


-Original Message-
From: Peter Bradley [mailto:[EMAIL PROTECTED]
Sent: Friday, December 08, 2006 12:26 PM
To: [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Subject: Re: query question...


Ysgrifennodd bruce:
 hi...

 i'm looking at what is probably a basic question.

 i have a tbl with
   -id
   -action
   -status
   -date

 ie:
  id   action   status   date
   1   0  1
   1   1  2
   1   2  3
 -
   2   0  4
   2   2  5


 i need a query to somehow get all id's that don't have an 'action=1', in
 this case, id '2' would be the id that should be returned from the
query...

 however, i can't quite figure out how to create a query to select the
items
 that are the ones i'm looking for.

 any help/thoughts would be appreciated!

 thanks

 -bruce

Hi Bruce,

Does this do it for you?

SELECT t1.id
FROM tbl t1
WHERE  t1.id NOT IN (SELECT t2.id
   FROM tbl t2
  WHERE t2.action = 1)


Peter

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Issues with MySQL x86_64 crashing

2006-12-07 Thread Dan Buettner

Another thought is - do you have backups running during this time?
Something that might be attempting to backup live the InnoDB files?
We had similar problems with MySQL and backup software a while ago,
though we used all-MyISAM.

Dan


On 12/7/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

Do you use MySQL with a NPTL or a linuxthread glibc configuration ?

Regards,
  Jocelyn Fournier
  www.mesdiscussions.net

 Hello everyone,

 We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
 MySQL with a mix of InnoDB and MyISAM tables.

 We normally run at 1500 queries/per second and lately, the server will
 all of a sudden lock up and we are forced to restart mysql.

 This is not related to higher load, as it frequently happens during
 our slowest load period (early AM).  This is not related to any
 significant code changes.

 We have recently converted about 20 of our largest and most frequently
 used tables from MyISAM to InnoDB.  The size of our InnoDB tables
 collectively is about 95GB.

 I can provide my configuration if it is necessary, but was just
 wondering if someone else might have experienced this and could
 possibly point me in the right direction as to what might be causing
 this.

 Thanks,
 Kevin
 --
 Kevin Old
 [EMAIL PROTECTED]

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: select row with greatest valued column

2006-12-07 Thread Dan Buettner

It's actually pretty easy -

select * from table order by creation_date desc limit 1

Now one catch to this use of LIMIT is that you only get one row - even
if there are multiple rows that share the same greatest creation_date.
Sometimes that's acceptable, sometimes not.
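
If you need every tied row instead, a subquery works on 4.1 and later
(same placeholder table name):

select * from table
where creation_date = (select max(creation_date) from table);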

Dan


On 12/7/06, Tim McIntyre [EMAIL PROTECTED] wrote:

Hey all

I would guess this would be a pretty simple question.

How do I select a row whose value for a given column is the greatest
or the least or whatever for that table?

e.g:
select * from table where creation_date > all_other_creation_dates;

Hope that makes some sense.  Thanks in advance!
Tim











--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to get started on MySQL

2006-12-07 Thread Dan Buettner

Eric, you'll likely want the community edition to get started with.
It's essentially the same software, at least for now, but community is
free whereas enterprise is cost (but includes support).

I've read good things about, and have just ordered, this:
MySQL Cookbook, 2nd edition
http://www.amazon.com/gp/product/059652708X/104-6445927-8829555

A visit to Barnes & Noble or Borders would also turn up a number of
good books in the getting started category for PHP and MySQL, I'm
sure.  That's where I started in about 1999, though I've since lost
the two books I got.

Welcome to the community - hopefully you find MySQL every bit as
useful and powerful as I have!

Best,
Dan

On 12/7/06, Eric Nygma [EMAIL PROTECTED] wrote:

Hi,

I am new to MySQL, never used it before. I have basic knowledge of databases
and have used Oracle in the past, as far as writing SQL queries and
starting/shutting down the database instance.

I need to write server side scripts (php scripts) that communicate with
MySQL database.

I want to download MySQL and learn how to use it. I went to
http://dev.mysql.com/downloads/index.html. There are 2 main downloads
Community Server and Enterprise. Which would you recommend for a newbie
? Any pointers to a real-fast-quickstart-guide would also help. :)

Thanks in advance.
Eric.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Date v. DateTime index performance

2006-12-04 Thread Dan Buettner

Thomas, I do not think in this case that one is better than the other,
for the most part, because both require using a value computed from
the column.  Computing month from a DATE field should be just as fast
as computing from a DATETIME column I would think.

Also splitting into DATE and TIME columns can make your SQL a bit
trickier depending on your needs.

That being said, one difference that might come up in extreme cases is
that the size of an index on a DATE column  will be smaller than on a
DATETIME (fewer unique values, less cardinality) so if you have a lot
of records you might be able to keep all or more of the index in
memory.

One potential solution might be to use an extra column that tracks
month_number, and populate it with a trigger on insert or update.
Index that field and then use it in your WHERE clause.  One
possibility anyway.
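
As a sketch - the table and column names here (events, event_time,
month_number) are just assumptions:

CREATE TRIGGER events_set_month BEFORE INSERT ON events
FOR EACH ROW SET NEW.month_number = MONTH(NEW.event_time);

You'd want a matching BEFORE UPDATE trigger as well.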

HTH,
Dan


On 12/4/06, Thomas Bolioli [EMAIL PROTECTED] wrote:

If one has a large number of records per month and normally searches for
things by month, yet needs to keep things time coded, does anyone know
if it make sense to use datetime or separate date and a time columns?
Thanks,
Tom

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Running a sql file

2006-12-01 Thread Dan Buettner

Sounds like you want the SOURCE command
http://dev.mysql.com/doc/refman/5.0/en/batch-commands.html
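
From inside the interactive console, for example:

mysql> SOURCE /path/to/file.sql

(\. /path/to/file.sql is shorthand for the same thing.)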

Dan



On 12/1/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

  I want to load a sql file from inside the interactive mysql console, not as a 
single command line. How do I do this?


Check out the new AOL.  Most comprehensive set of free safety and security 
tools, free access to millions of high-quality videos from across the web, free 
AOL Mail and more.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: One big table or several smaller tables?

2006-11-28 Thread Dan Buettner

Daniel, you might look into the use of MERGE tables, which are
essentially multiple identical MyISAM tables that look like one
table.
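
A minimal sketch (table and column names are just examples; the
underlying tables must be identical MyISAM tables):

CREATE TABLE log_2005 (id INT, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log_2006 LIKE log_2005;

CREATE TABLE log_all (id INT, msg VARCHAR(255))
  ENGINE=MERGE UNION=(log_2005, log_2006) INSERT_METHOD=LAST;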

Dan


On 11/27/06, Chris White [EMAIL PROTECTED] wrote:

On Monday 27 November 2006 13:50, Daniel Smith wrote:
 Assuming a decent spec server, would a simple search query on a big
 indexed table be quicker than searching the amount of data divided into
 separate tables?

I'd recommend a single large table with a DATE/DATETIME field which would be
converted to a timestamp storage wise and would become an indexable integer.
Even 1,000,000+ rows shouldn't have that much of an effect.  You'll also have
the nice functionality of MySQL's date/time functions.

--
Chris White
PHP Programmer
Interfuel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Performance Degrades Significantly Over Time

2006-11-27 Thread Dan Buettner

Jason, in addition to Daniel's suggestions, I'll throw this out there:

I had a somewhat similar problem with a database I used to own, where
a handful of very hard-hit tables would become progressively slower
over time, despite the fact that (due to daily archiving and purging)
they were not growing in size.  For me, running OPTIMIZE operations
periodically did the trick, keeping the tables performing fast.  Daily
wasn't sufficient, actually - I ended up optimizing the key tables
every other hour, though that was probably more often than needed.
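
The statement itself is just, for example (database and table names
are placeholders):

OPTIMIZE TABLE mydb.hot_table;

which you can schedule from cron with something like:

0 */2 * * * mysql -u admin -pXXXX -e "OPTIMIZE TABLE mydb.hot_table"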

I think the tables were becoming fragmented in memory, possibly along
with the index data.

This was with MyISAM, and I do not know whether performance would have
improved with mysqld restarts, as we never really had occasion to
restart mysqld except during major upgrades.

HTH,
Dan


On 11/27/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:

Hi,

We're running MySQL 5.0.27 under Solaris 10 on both Opteron and
UltraSparc T1 machines. The performance on both boxes starts out great
when the process is fresh, however over the course of a week of heavy
use the performance degrades to the point where its nearly unusable.

The Opteron has 2GB of RAM and the T1 has 8GB. A little stumped as to
what to look for that might cause performance to degrade over time.
Any pointers are greatly appreciated.

On a side note, when the Opteron is a slave of the T1, when the T1 has
heavy load the Opteron slave falls behind on its replication duties.
The whole thing is kind of strange. Thank you again in advance.

Best Regards,
Jason

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread Dan Buettner

This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop.  You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers.  If your query has to access multiple
records spread out throughout the table off a slow laptop disk, this
makes sense to me.

Do you normally run this database on a real server in production?
Is response time better?

Still, it does seem a bit slow, even for an 867 MHz laptop, and seeing
the output of an EXPLAIN might be helpful.

Also, to answer your question about the speed of selecting the count
of rows in a table - MyISAM always knows exactly how many rows are in
a table, so it answers quickly without checking the data.  InnoDB does
not keep track of how many rows are in a table, so it has to go count
them when you do a SELECT COUNT(*) or in your case SELECT COUNT(id).
That takes a little bit of time.

Dan


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

If I just SELECT id:
SELECT id FROM purchased_services WHERE (company_id = 1000)

It takes approx 2-2.5s.  When I look at the process list it looks like
that its state seems to always be in sending data...

This is after killing the db and repopulating it again.  So what is going on?

On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
 I tried the same tests with the database replicated in a MyISAM
 engine.  The count was instantaneous but the following still took
 3-6 seconds:

 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)

 The following though was instantaneous:

 SELECT * FROM purchased_services WHERE (id = 1000)

 This is the result from my SHOW INDEX FROM purchased_services:

 
+--------------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table              | Non_unique | Key_name                            | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| purchased_services |          0 | PRIMARY                             |            1 | id          | A         |      627546 |     NULL | NULL   |      | BTREE      |         |
| purchased_services |          1 | purchased_services_company_id_index |            1 | company_id  | A         |      627546 |     NULL | NULL   | YES  | BTREE      |         |
+--------------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

 So I do have the necessary index.  I am so confused  Argh...

 Your Friend,

 John


 On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
  Sorry about these questions.  I am used to working with DBs with less
  than 10K rows and now I am working with tables with over 500K rows
  which seems to be changing a lot for me.  I was hoping I can get some
  people's advice.
 
  I have a 'companies' table with over 500K rows and a
  'purchased_services' table with over 650K rows.
 
  The following query takes over 6 seconds:
  SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)
 
  purchased_services.company_id has an index on it.
 
  The following query takes over 3 seconds:
  SELECT count(id) FROM companies;
 
  To me the time it takes to run these queries makes no sense.  I would
  imagine both of these queries would take a fraction of a second.
 
  When running some of these queries and looking at 'SHOW processlist' I
  was getting a lot of 'Writing to tmp'.
 
  My DB engine is InnoDB.  I am running this on my laptop that is a
  PowerBook 867 with 756 MB of Ram.
 
  Feedback and insight would be greatly appreciated.
 
  Thanks my friends! :-)
 
  --
  John Kopanas
  [EMAIL PROTECTED]
 
  http://www.kopanas.com
  http://www.cusec.net
  http://www.soen.info
 


 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info



--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Moving large Innodb table to another mysql

2006-11-17 Thread Dan Buettner

Sorry - what's your question?

#1 will work, or you can increase the value for the variable named in
the error message in scenario #2.
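
For #2, that would be something along these lines in my.cnf (the value
is just an example - size it to your largest transaction), followed by
a restart of mysqld:

[mysqld]
set-variable = max_binlog_cache_size=256M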

Dan

On 11/17/06, sofox [EMAIL PROTECTED] wrote:

Dear All,

I am using mysql-4.0.26, and I have a very large innodb table(10G) .

When I try to moved the table from one mysqld to another one by mysqldump to a 
script and import the script on target server, I have problem:

1) if I don't use --no-autocommit option when mysqldump, it will take more than 
10 hours to import the script;
2) if I use --no-autocommit option during dump, I will get error
# Error: 1197 SQLSTATE: HY000 (ER_TRANS_CACHE_FULL)
Message: Multi-statement transaction required more than 'max_binlog_cache_size'
bytes of storage; increase this mysqld variable and try again

Any comment?

Oscar Yen.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How Long Should It Take To Create an Index?

2006-11-17 Thread Dan Buettner

John, I would guess with about 100,000 rows, it might take a couple
minutes to create an index on two varchar-255 columns.  With modern
hardware anyway. Very rough estimate.

Factors include amount of RAM, speed of disks, speed of processors,
other processes running and either locking table or just using
resources.

Do a SHOW PROCESSLIST; to see if your index creation is blocked.

Dan


On 11/17/06, John Kopanas [EMAIL PROTECTED] wrote:

I have a mysql table with 100K rows approximately.

We are creating the following index:
create index closed_base_cust_and_job on backlog_dev.closed_bases(cust_no,
jobno);

Both cust_no and jobno are varchars(255)

There are 56 fields in the table and no other indices except on the primary
key.

Should it be taking a long time?  How long would some estimate it takes?  Is
there a way to estimate how long it takes?  What factors affect the time it
takes to create an index?

Thanks for your help.

Your Friend,

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: How Long Should It Take To Create an Index?

2006-11-17 Thread Dan Buettner

Scratch that, I just created a 10 row table with 2 varchar255's.
Creating a 2 col index on it took 2.09 seconds.  Could take longer due
to all the other fields you have in your table but a couple of minutes
at the outside.

Dan


On 11/17/06, Dan Buettner [EMAIL PROTECTED] wrote:

John, I would guess with about 100,000 rows, it might take a couple
minutes to create an index on two varchar-255 columns.  With modern
hardware anyway. Very rough estimate.

Factors include amount of RAM, speed of disks, speed of processors,
other processes running and either locking table or just using
resources.

Do a SHOW PROCESSLIST; to see if your index creation is blocked.

Dan


On 11/17/06, John Kopanas [EMAIL PROTECTED] wrote:
 I have a mysql table with 100K rows approximately.

 We are creating the following index:
 create index closed_base_cust_and_job on backlog_dev.closed_bases(cust_no,
 jobno);

 Both cust_no and jobno are varchars(255)

 There is 56 fields in the table and no other indeces except on the primary
 key.

 Should it be taking a long time?  How long would some estimate it takes?  Is
 there a way to estimate how long it takes?  What factors affect the time it
 takes to create an index?

 Thanks for your help.

 Your Friend,

 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Re: How Long Should It Take To Create an Index?

2006-11-17 Thread Dan Buettner

On 11/17/06, John Kopanas [EMAIL PROTECTED] wrote:

That is what I thought.  Then why would it be going at 99% CPU for over 20
minutes?  What factors would cause that?  We are using version 4.1.20
standard.

Would the fact that a website was still connected to DB cause the problem?


Normally, no - an index creation is an atomic operation.  It waits
until it has a table lock, then does its thing, and then it is done.
It waits for existing connections to release the table, then while it
is working other connections wait for it.

I would guess that perhaps you have somehow encountered a deadlock.

Can you post the output of SHOW PROCESSLIST ?

-Dan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Best way to backup a 24/7 database

2006-11-17 Thread Dan Buettner

James, for a truly 24/7 site, MySQL replication is a better answer
than mysqlhotcopy.  You leave your master running all the time, and
replicate data off to another slave server, which you back up.  For
slave backups, you can use any of a few different schemes, such as
mysqlhotcopy, shut down mysql server and copy files, dump using
mysqldump, or the commercial InnoDB backup tool if you wanted.
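
As a sketch, a nightly dump of an InnoDB slave might look like this
(credentials and paths are placeholders):

mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases --single-transaction > /backups/nightly.sql
mysql -e "START SLAVE SQL_THREAD;"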

I used mysqlhotcopy at my last job to do backups and it worked very
well - but it did mean a few minutes of locked databases every night.
In my situation that was OK, but maybe it's not in yours.  Also it
doesn't work if you use InnoDB.

Dan


On 11/17/06, James Tu [EMAIL PROTECTED] wrote:

We're working on a site that will most likely be up 24 hours a day.
What is the best backup strategy for the database?
The client will be using hosting services, but they haven't' picked
anyone yet.

I've been playing around with mysqlimport (pretty straightforward)
and mysqlhotcopy (haven't been able to run it...don't have the
appropriate Perl modules.)

We'll probably just take a daily 'snapshot' of the database.  What is
the best way to do this?
Am I safe with writing my own shell scripts that essentially just
calls mysqlimport?  Does that handle a live database w/o any issues?

-James


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Concatenate a column in multiple rows into one column?

2006-11-16 Thread Dan Buettner

James - you should be able to do this with the GROUP_CONCAT function:
http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html
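
Against your schema, something like this (untested sketch):

SELECT a.title,
       GROUP_CONCAT(ar.name ORDER BY ar.name SEPARATOR ' & ') AS artists
FROM album a
JOIN albumartist aa ON aa.albumid = a.albumid
JOIN artist ar ON ar.artistid = aa.artistid
GROUP BY a.albumid, a.title;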

Dan


On 11/15/06, James Eaton [EMAIL PROTECTED] wrote:

I have a database with roughly the following structure:

album
--
albumid INT
title   VARCHAR(255)

artist
--
artistidINT
nameVARCHAR(255)

albumartist
--
albumid INT
artistidINT


From the above, you can see that any given album can have more than one
artist.  For instance:

album title: A Meeting by the River
artist name: Ry Cooder
artist name: V.M. Bhatt

I'd like to run a query that returns one row per album, but somehow also
returns all of the artists associated with the album (ideally concatenated
with an ampersand seperator).  Can this be done?

title                    artists
-----------------------  ----------------------
A Meeting by the River   Ry Cooder & V.M. Bhatt


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: mysqloptimize

2006-11-16 Thread Dan Buettner

Strictly an InnoDB issue.

Dan

On 11/16/06, Chaos Engine [EMAIL PROTECTED] wrote:



2006/11/15, Dan Buettner [EMAIL PROTECTED]:
 It's normal for space to not be reclaimed from the InnoDB file, yes.
 You could change your setup to use the 'innodb_file_per_table' option
 to change the behavior; with that in effect, deletes/dropped tables
 etc will lead to reclaimed space after optimizing.

 However, you'd also have to either drop your current data and
 re-import, or switch to myisam and back again to get the data into
 separate table files.

Isn't there any easier solution to reclaim garbage space? This is somewhat
silly to drop/reimport or convert tables to myisam and back again. Is it
strictly an innodb issue?

 Dan

 On 11/15/06, Curtis Maurand [EMAIL PROTECTED] wrote:
  I think this question has been asked, but I have not yet found an answer
  to the problem.
 
I'm running MySQL 5.0.22 on Gentoo Linux AMD 64.  It's blazingly fast,
  mostly.  I'm running a package called dbmail on it.
  http://www.dbmail.org.  All mail is stored in the database.  After
running
  a dbmail-util which deleted somewhere around 9,000 messages, I ran
  mysqloptimize against the database.  After running mysqloptimize the
  innodb file (ibdata1) was larger than it was before i started.  is this
  normal?  If not, how do I change the behavior?
 
  I'm happy to forward any relevant data that you need.
 
  Thanks,
  Curtis
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql?unsub=
[EMAIL PROTECTED]
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





--
Chaos greets U


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Error for Blobs

2006-11-16 Thread Dan Buettner

max_allowed_packet is the maximum size of a single SQL statement.
It's a setting on the server, as well as for the mysql command line
interface.

See http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

I think 1 MB is the default; the maximum setting for MySQL 5 is 1 GB.
You can adjust this setting in the server config files (my.cnf or
my.ini).  I do not know if you need to adjust anything in Navicat,
ODBC, Query Browser or Access - sorry!
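
On the server side it is a one-line change in my.cnf / my.ini plus a
restart (16M here is just an example value):

[mysqld]
max_allowed_packet=16M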

Dan

On 11/16/06, C K [EMAIL PROTECTED] wrote:

I got an error while entering a BMP image of 1.7MB size in a
mediumblob field through MS Access 2003 with MyODBC 3.51.12
 on Windows XP SP2 also with MySQL Query Browser
and Navicat GUI tool. Navicat returned the error as 'Got a packet bigger
than Max_allowed_packet bytes'. What does this mean? Access and Query Browser
simply give the error as MySQL has gone away? Why? Please help. Also, please
give a solution: if we have to insert
images/other objects of size more than 1 MB, what do we have to do using the ODBC
driver with/without MS Access & VB.net?
Thanks
CPK




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqloptimize

2006-11-15 Thread Dan Buettner

It's normal for space to not be reclaimed from the InnoDB file, yes.
You could change your setup to use the 'innodb_file_per_table' option
to change the behavior; with that in effect, deletes/dropped tables
etc will lead to reclaimed space after optimizing.

However, you'd also have to either drop your current data and
re-import, or switch to myisam and back again to get the data into
separate table files.
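
As a sketch - after adding the option under [mysqld] in my.cnf and
restarting, rebuilding a hypothetical table named messages would be:

ALTER TABLE messages ENGINE=MyISAM;
ALTER TABLE messages ENGINE=InnoDB;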

Dan

On 11/15/06, Curtis Maurand [EMAIL PROTECTED] wrote:

I think this question has been asked, but I have not yet found an answer
to the problem.

I'm running MySQL 5.0.22 on Gentoo Linux AMD 64.  It's blazingly fast,
mostly.  I'm running a package called dbmail on it.
http://www.dbmail.org.  All mail is stored in the database.  After running
a dbmail-util which deleted somewhere around 9,000 messages, I ran
mysqloptimize against the database.  After running mysqloptimize the
innodb file (ibdata1) was larger than it was before I started.  Is this
normal?  If not, how do I change the behavior?

I'm happy to forward any relevant data that you need.

Thanks,
Curtis


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Max date in recordset

2006-11-14 Thread Dan Buettner

If you're looking for the records from the last full day contained in
the data, not the past 24 hours according to the clock, then this
ought to work:

select * from table
where InsertDate > date_sub( (select max(InsertDate) from table), interval 1 day )
order by InsertDate desc
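
If InsertDate is a plain DATE column, an equivalent and perhaps clearer
form is:

select * from table
where InsertDate = (select max(InsertDate) from table)

which returns exactly the rows carrying the latest date present in the
data.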

Dan


On 11/14/06, Vittorio Zuccalà [EMAIL PROTECTED] wrote:

Hello,
i've a table with a lot of field and in particular: InsertDate,Box,Prt
Example:
InsertDate, Box, PRT
2006-11-01, BXT, 34
2006-11-01, TTS, 33
2006-11-01, RRT, 55
2006-11-02, BXT, 22
2006-11-02, TTS, 99
2006-11-02, SAR, 75


I'd like to find all record inserted in the last day...
In this example the last three records...
I do not know which is the last day before

Any suggestions?




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: RE: Backing up large dbs with tar

2006-11-14 Thread Dan Buettner

Interesting question - I too noticed that in the comments.  For what
it's worth, I used it in production environment for more than 5 years
with no problems, from 2001 on.  I did restore a few things here and
there, so I know it was working!  ;)

I use mysqldump for backups now because we use InnoDB tables where I'm at now.

Dan

On 11/14/06, Tim Lucia [EMAIL PROTECTED] wrote:

Is mysqlhotcopy still considered beta?  We steered clear of it for
production use for that reason.

Tim

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Monday, November 13, 2006 12:39 PM
To: Van
Cc: mysql@lists.mysql.com
Subject: Re: Backing up large dbs with tar

Van, I'll second what Gerald said about mysqlhotcopy.

When we first began using MySQL at my last job, we had terrible
problems with MySQL crashing.  Turned out to be due to a 3rd party
backup process attempting to lock and read the database files while
MySQL was attempting to use them.

Using mysqlhotcopy to copy the files elsewhere, and excluding the data
directory from the backup software, gave us a stable solution.

mysqldump might also work well for you, as it can lock
tables/databases and give you a consistent snapshot.  Potentially
takes longer to restore from a mysqldump file though.

HTH,
Dan


On 11/13/06, Van [EMAIL PROTECTED] wrote:
 Greetings:

 I have a 600M data file that never gets backed up.  The following error
 occurs in the cron job:

 tar: /data/mysql/my_db_name/my_large_table_name.MYI: file changed as
we read it

 Is there a way I can set this one table to read-only prior to the backup
 without affecting other db writes during this operation?

 Thanks,
 Van

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Backing up large dbs with tar

2006-11-13 Thread Dan Buettner

Van, I'll second what Gerald said about mysqlhotcopy.

When we first began using MySQL at my last job, we had terrible
problems with MySQL crashing.  Turned out to be due to a 3rd party
backup process attempting to lock and read the database files while
MySQL was attempting to use them.

Using mysqlhotcopy to copy the files elsewhere, and excluding the data
directory from the backup software, gave us a stable solution.

mysqldump might also work well for you, as it can lock
tables/databases and give you a consistent snapshot.  Potentially
takes longer to restore from a mysqldump file though.
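
For example (paths are hypothetical):

# MyISAM only -- locks, flushes and copies the table files:
mysqlhotcopy my_db_name /backup/mysql

# or a consistent SQL dump; --lock-tables suits MyISAM,
# --single-transaction suits InnoDB:
mysqldump --all-databases --lock-tables > /backup/all-databases.sql

You then point tar (or the backup software) at /backup rather than at the
live data directory.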

HTH,
Dan


On 11/13/06, Van [EMAIL PROTECTED] wrote:

Greetings:

I have a 600M data file that never gets backed up.  The following error
occurs in the cron job:

tar: /data/mysql/my_db_name/my_large_table_name.MYI: file changed as we 
read it

Is there a way I can set this one table to read-only prior to the backup
without affecting other db writes during this operation?

Thanks,
Van

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Find next available number for gidnumbers and uidnumbers

2006-11-13 Thread Dan Buettner

Kory -

It's always a little more challenging to find something that's NOT in the data!

I'd suggest two approaches:

1 - create a reference table containing all the possible values, 1 -
999.  Then do a SELECT MIN and a join to find the lowest number
not in your data table, something like this:

SELECT MIN(r.uid)
FROM reference_table r
LEFT JOIN data_table d ON r.uid = d.uid
WHERE d.uid IS NULL

2 - create a stored procedure that starts at 1 and checks for the
presence of each number, adding one each time, until it finds a number
not present in the table.  This is pretty inefficient, though.
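
Back on approach 1, here is one quick way to populate the reference table
without an external script (table names are hypothetical; this generates
1 through 1000, which covers the 1-999 range):

CREATE TABLE digits (d INT NOT NULL);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

CREATE TABLE reference_table (uid INT NOT NULL PRIMARY KEY);
INSERT INTO reference_table
SELECT 1 + a.d + 10*b.d + 100*c.d
FROM digits a, digits b, digits c;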

HTH,
Dan


On 11/13/06, Kory Wheatley [EMAIL PROTECTED] wrote:

I need some advice. We are currently in the process of starting to use LDAP
for our accounts to authenticate.  When I create a new LDAP account I need
to assign a free gidnumber and uidnumber, which can be 1 to 999.  My
plan is to load all gidnumbers and uidnumbers that are being used into two
separate mysql tables.  What command could I use to find the first number
that is not being used, i.e. not in the database?

 Now sometimes it could be 12 or the very highest number, because when accounts 
are deleted these numbers will be removed from the tables, so I need to find 
the next available number.





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: replacing mysql 4 with version 5

2006-11-10 Thread Dan Buettner

Afan, the password hashing is different starting with 4.1.  Also, JOIN
handling was changed in 5.0 to comply with the SQL standard.

http://dev.mysql.com/doc/refman/4.1/en/password-hashing.html
and
http://dev.mysql.com/doc/refman/5.0/en/join.html
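
Two concrete examples of what those changes mean in practice. If old
clients can't handle the new password hashing, you can keep the old
format as a stopgap:

# my.cnf on the new server:
[mysqld]
old_passwords = 1

And because the comma operator lost precedence to JOIN in 5.0, a query
like this (tables here are hypothetical)

SELECT * FROM t1, t2 JOIN t3 ON t1.id = t3.id;

now fails with an 'unknown column ... in on clause' error and needs
parentheses:

SELECT * FROM (t1, t2) JOIN t3 ON t1.id = t3.id;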

Dan


On 11/10/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

hi,
we are in the process of moving from our old hosting company to a new one,
and I was thinking it's the best time to move from mysql 4 to version 5.
currently we use php 4.3.4 / mysql 4.0.18 / apache 2.0.

where can I expect problems because of the difference in versions?
all the queries I use are really simple -- standard, if I can say so.

Thanks for any help.

-afan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: V 5.0.18 on Mac OS X

2006-11-08 Thread Dan Buettner

Good morning, Steffan -

Can you post some details about what you mean when you say the tables
are damaged?  What makes you say that / what are the signs/symptoms?

Does the server process crash, or stop responding?  Do you see
anything in the error logs?  Do you have enough disk space for temp
tables?

Can you post the output of SHOW STATUS; after it has been running for
a while as well?

If the server truly is just being overloaded, then an analysis of the
problematic/slow queries would be in order.  Could be that better
queries and/or additional indices could help.  Still, I'd like to know
more about what is happening with the damaged tables.

I don't know of any specific problems with 5.0.18, but it generally
doesn't hurt to upgrade to the latest production release.  The release
notes are online so you could search for relevant fixes before
deciding to download and upgrade.

Dan


On 11/8/06, Steffan A. Cline [EMAIL PROTECTED] wrote:

I am having an issue with MySQL running on Mac OS X -- currently version
5.0.18, as stated, on a dual G4 Xserve with 1 GB of RAM. MySQL is
essentially the only thing running on this server. I am trying to track
down an issue in
which MySQL is being overloaded and it consistently damages the same one or
two tables. I am trying to narrow down the issue to the web service
connecting to MySQL or MySQL itself. When I check the status I see a ton of
locks and unauthenticated connections. Any suggestions of what to look for
on the MySQL side? It seems rather odd that being overloaded is that it
damages the tables. There is no replication or auto backups in place with
this that could cause these issues. Some of the queries thrown are indeed
big ones and do require many ticks to calculate but still, what is left? I
just made some changes to the config and this is now the base of what I
have:

# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 256M
max_allowed_packet = 50M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
#thread_cache_size = 8
query_cache_type = 1
query_cache_size = 128M
query_cache_limit = 128M
# added the next few lines for debugging all the way to the next comment
skip-name-resolve
interactive_timeout = 300
wait_timeout = 300
max_connections = 250
thread_cache_size = 40
log_error = /var/log/mysqld-error.log
log_slow_queries = /var/log/mysqld-slow.log
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8


Could it be the version of MySQL that is causing the damage? I did a fsck on
the drive and all comes back fine. What's left?


Thanks

Steffan

---
T E L  6 0 2 . 5 7 9 . 4 2 3 0 | F A X  6 0 2 . 9 7 1 . 1 6 9 4
Steffan A. Cline
[EMAIL PROTECTED] Phoenix, Az
http://www.ExecuChoice.net  USA
AIM : SteffanC  ICQ : 57234309
  Lasso Partner Alliance Member
---




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: MySQL on a ram disk?

2006-11-08 Thread Dan Buettner

Depends on the value of your data, and what you might want to invest
in cluster setup (time, money) to get this done.

Another simple option from a hardware perspective might be the use of
a PCI card with RAM that serves as a Serial ATA disk.  They're
relatively new on the market but they're out there, and not too
expensive.  If you set that up as the storage point for the database
in question, and possibly also moved MySQL temp space onto it, it
could provide a speed boost.  Seems like you could also use two or
more of these cards and software or hardware RAID for even more speed.

Not exactly enterprise-grade clustering but may suit your needs.
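
As a sketch of the temp-space idea (the mount point is hypothetical):

# my.cnf -- put MySQL's temporary/sort files on the fast device:
[mysqld]
tmpdir = /mnt/fastdisk

And Ryan's HEAP suggestion (quoted below) is simpler still, if the data
fits in RAM and you can live with losing it on a restart, e.g.:

CREATE TABLE lookup_cache (
  id INT NOT NULL PRIMARY KEY,
  val VARCHAR(100)
) ENGINE=MEMORY;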

Dan

On 11/8/06, Ryan Stille [EMAIL PROTECTED] wrote:

Maybe I should just be looking at using a HEAP table?

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Searching Fields for a SubString

2006-11-04 Thread Dan Buettner

Dave, you could just use a LIKE statement

SELECT *
FROM table
WHERE mystring LIKE '%,2,%'

Of course if it is at the beginning or end, or the only item, it may
look like '2' '2,' or ',2' and not ',2,' so this would work in all
cases I think:

WHERE (mystring LIKE '%,2,%' OR mystring LIKE '2,%'
   OR mystring LIKE '%,2' OR mystring = '2')

Performance will be terrible if this grows much but for a trivial
implementation it should work.
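
An alternative that covers all four cases in one expression is MySQL's
FIND_IN_SET(), which treats the column as a comma-separated list (the
empty elements produced by your leading/trailing commas are harmless):

SELECT *
FROM table
WHERE FIND_IN_SET('2', mystring) > 0;

It still scans every row, though, so the O(N) caveat stands.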

Dan



On 11/4/06, David T. Ashley [EMAIL PROTECTED] wrote:

Hi,

I'm a little bit lost on MySQL statements.

I am implementing a many:many relationship using a string (wrong way, I
know, but I'm trying to keep the design simple and this is a trivial
application).

In a VARCHAR field, I will store a comma-delimited set of integers
corresponding to the indices in another table.  It might look like:

,1,2,9,

to specify a link to records 1, 2, and 9.

I'm not sure how to find, for example, all records where the text of a given
field contains ,2,.

I'm not interested in an efficient search, just a search.  I know it will be
O(N) unless indexing of some kind is used.

My explanation above might be confusing, so let me give an example that will
give the same answer ...

Suppose I'm interested in searching a table for all records with last names
that contain sh.  How would I do that?

(Different problem, same answer.)

Thanks, Dave.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


