Re: Table creation fail
Correct, a primary key is used to uniquely identify each row in a table. It can either be part of the actual record itself, or it can be an artificial field (one that has nothing to do with the actual record). A primary key can consist of one or more fields in a table; when multiple fields are used as a primary key, they are called a composite key.

A primary key constraint can be defined at two levels:

* Defined at column level.
  Syntax: column_name datatype(size) PRIMARY KEY
* Defined at table level.
  Syntax: PRIMARY KEY (column_name, column_name)

--Prabhat

On Thu, Jun 24, 2010 at 3:03 PM, Joerg Bruehe <joerg.bru...@sun.com> wrote:

> Hi David, all!
>
> David Stoltz wrote:
>> Actually, that table isn't supposed to have a PK, so I removed that,
>> and it works... same effect you suggested.
>
> Even if you currently don't need a primary key in that table, IMO you
> should still define one. Use some 'id_testresult' column with an
> AUTO_INCREMENT clause, so you need not provide a value. Sooner or later
> you may (I really think: will) feel the need to uniquely identify a row,
> especially to delete it, and a primary key will be very helpful then.
>
> Your original problem was most likely not due to mentioning PRIMARY KEY,
> but rather to not providing a column name for it.
>
> Regards,
> Jörg
>
> --
> Joerg Bruehe, MySQL Build Team, joerg.bru...@sun.com
> Sun Microsystems GmbH, Komturstrasse 18a, D-12099 Berlin
> Geschaeftsfuehrer: Juergen Kunz
> Amtsgericht Muenchen: HRB161028

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=aim.prab...@gmail.com

--
Best Regards,
Prabhat Kumar
MySQL DBA
Datavail-India Mumbai
Mobile: 91-9987681929
www.datavail.com
My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat
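A short sketch of the two syntax levels; the table and column names here are made up for illustration, and only the placement of PRIMARY KEY reflects the point being made:

```sql
-- Column-level constraint: PRIMARY KEY attached to a single column,
-- here an artificial AUTO_INCREMENT key of the kind Joerg suggests.
CREATE TABLE testresult (
    id_testresult INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    result_value  VARCHAR(100)
);

-- Table-level constraint: the only form that can express a composite key.
CREATE TABLE test_run (
    test_id INT NOT NULL,
    run_id  INT NOT NULL,
    outcome VARCHAR(100),
    PRIMARY KEY (test_id, run_id)
);
```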
Re: Altering database size to add more space
On Fri, Jun 25, 2010 at 7:11 AM, Prabhat Kumar <aim.prab...@gmail.com> wrote:

> In case of MyISAM, it will grow up to the space on your data drive, or
> the max file size limited by the OS.

Not entirely correct. There is a limit on a MyISAM file that has to do with pointer size - I encountered it several years ago. You shouldn't be hitting it in most circumstances, but that's what the max_data_length column in SHOW TABLE STATUS is about. Before 5.0.6 the default maximum datafile size was 4G, but that has since been upped to 256T.

If you're really running into this, have a look at the various advanced options for CREATE/ALTER TABLE, like AVG_ROW_LENGTH and MAX_ROWS, as well as the variable myisam_data_pointer_size.

Now, what Sarkis is running into is more of a logical error: Data_free does not tell you how much free space there is *for data*, but how much free space there is *in the existing datafile*. That is, it really tells you how much space in your file has become free by deleting rows et al. This also explains why it's always 0 for InnoDB tables :-)

--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel
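The options mentioned above can be sketched concretely. The table name is a placeholder and the numbers are arbitrary examples, not recommendations:

```sql
-- Max_data_length in this output is the pointer-size-derived limit
-- described above; Data_free is the reclaimable space inside the file.
SHOW TABLE STATUS LIKE 'my_big_table';

-- Raise the limit by telling MyISAM how big the table may get;
-- the server chooses a larger data pointer size accordingly.
ALTER TABLE my_big_table MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 200;

-- The default pointer size for newly created tables can also be set
-- via the server variable (valid range is 2 to 7 bytes).
SET GLOBAL myisam_data_pointer_size = 7;
```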
Re: Table creation fail
> A primary key constraint can be defined at various levels:
>
> * Primary key constraint defined at column level
>   Syntax: Column Name datatype(size) Primary Key
> * Primary key constraint defined at table level
>   Syntax: Primary key (Column Name, Column Name)

Aren't those two alternate syntaxes for the exact same thing?

--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel
Re: Table creation fail
Yes, but with a few exceptions: column-level constraints apply only to that one column, whereas table-level constraints are needed to define composite keys, such as a primary key over two or more columns of a table. The column-level syntax supports all constraint types (NOT NULL, PRIMARY KEY, FOREIGN KEY, UNIQUE); the table-level syntax supports all of them except NOT NULL.

--Prabhat

On Fri, Jun 25, 2010 at 2:45 PM, Johan De Meersman <vegiv...@tuxera.be> wrote:

>> A primary key constraint can be defined at various levels:
>>
>> * Primary key constraint defined at column level
>>   Syntax: Column Name datatype(size) Primary Key
>> * Primary key constraint defined at table level
>>   Syntax: Primary key (Column Name, Column Name)
>
> Aren't those two alternate syntaxes for the exact same thing?

--
Best Regards,
Prabhat Kumar
MySQL DBA
Datavail-India Mumbai
Mobile: 91-9987681929
www.datavail.com
My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat
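A minimal sketch of those exceptions, with made-up table and column names:

```sql
CREATE TABLE order_item (
    order_id INT NOT NULL,           -- NOT NULL exists only at column level
    item_id  INT NOT NULL,
    qty      INT,
    PRIMARY KEY (order_id, item_id)  -- a composite key exists only at table level
);
```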
Re: Altering database size to add more space
I think you're confusing table size with database size. The original post grouped by schema, so it appears the question concerns database size. I don't believe MySQL imposes any limits on that. Is there a limit, imposed by MySQL, on the number of tables you can have in a schema?

On Fri, Jun 25, 2010 at 4:13 AM, Johan De Meersman <vegiv...@tuxera.be> wrote:

> On Fri, Jun 25, 2010 at 7:11 AM, Prabhat Kumar <aim.prab...@gmail.com> wrote:
>
>> In case of MyISAM, it will grow up to the space on your data drive, or
>> the max file size limited by the OS.
>
> Not entirely correct. There is a limit on a MyISAM file that has to do
> with pointer size - I encountered it several years ago.
> [[...]]

--
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com
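Since the original post grouped by schema, the per-database figures being discussed can be pulled from information_schema. This is a generic query, not one from the original thread:

```sql
-- Approximate size and table count per database:
SELECT table_schema,
       COUNT(*)                        AS table_count,
       SUM(data_length + index_length) AS total_bytes
FROM information_schema.tables
GROUP BY table_schema;
```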
Re: Altering database size to add more space
On Fri, 25 Jun 2010 06:31:11 -0500, Jim Lyons <jlyons4...@gmail.com> wrote:

> I think you're confusing table size with database size. The original
> post grouped by schema, so it appears the question concerns database
> size. I don't believe MySQL imposes any limits on that. Is there a
> limit, imposed by MySQL, on the number of tables you can have in a
> schema?

Not by MySQL. On some file systems, there's a practical limit of ~10k tables/database. At that point, looking up directory entries can cause slowdowns.

/ Carsten
Re: selecting data for computation
In the last episode (Jun 24), Tim Gustafson said:

> Hrmm, something didn't work quite right. Here's the actual query:
>
>   select x, y, zoom
>   from map_images
>   where image_data_size = 0
>     and get_lock(concat(x, ',', y, ',', zoom), 0) = 1
>   order by zoom, sqrt(pow(x, 2) + pow(y, 2))
>   limit 1
>
> I've added the image_data_size column and indexed it, set its current
> value to length(image_data), and modified my update query to set
> image_data_size correctly once the job is done. When I run more than
> one compute job, they all start processing the same job simultaneously.
> What am I missing?

That shouldn't happen, since get_lock() will only succeed for one MySQL connection at a time. If you can easily edit the code that is running these queries, can you add a "select is_used_lock('x,y,zoom')" query right after your first query, and another one just before your compute client exits? (You'll have to build the 'x,y,zoom' string from the values returned by the first query.) It'd be nice if there were an information_schema table that listed all currently-held locks, but there isn't :(

is_used_lock() returns the MySQL connection ID of the session holding the lock, so what I am expecting to see is NULLs, meaning that for some reason your MySQL connection is getting dropped after your first query completes. Locks are held for the lifetime of a connection, so if you disconnect, your lock disappears.

--
Dan Nelson
dnel...@allantgroup.com
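The suggested diagnostic might look like this; the '3,7,12' key is a placeholder for an actual x,y,zoom combination built by the client:

```sql
-- Run right after the first query, and again just before the client exits:
SELECT IS_USED_LOCK('3,7,12');  -- connection ID of the holder, or NULL if free
SELECT IS_FREE_LOCK('3,7,12');  -- 1 if free, 0 if held (a quick cross-check)
```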
Re: selecting data for computation
Hi Tim, all!

Tim Gustafson wrote:

> [[...]]
> There is not an index on the work_data column, as it's a longblob and I
> was under the impression that indexing a longblob field wasn't helpful.
> Maybe I should add a work_data_size field as an integer, index that,
> and search for records where work_data_size = 0.

I would add some status field with (at least) three values, signifying "new", "in progress", and "done", and for that I would create an index. Then let your application grab a record whose status is "new" and change that to "in progress", do the computation, then store the result and set the status to "done".

If you worry about aborted computations, add some date/time field which you set when you start the computation, and periodically do a cleanup searching for "in progress" records which are overdue.

> For what it's worth, the table currently has 800,000 rows in it and is
> likely to expand in the future.

That is my reason for proposing some column which can be indexed.

Jörg

--
Joerg Bruehe, MySQL Build Team, joerg.bru...@sun.com
Sun Microsystems GmbH, Komturstrasse 18a, D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz
Amtsgericht Muenchen: HRB161028
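The status-column scheme could be sketched like this, assuming the map_images table from earlier in the thread; the column names and the one-hour timeout are illustrative assumptions:

```sql
ALTER TABLE map_images
    ADD COLUMN status ENUM('new', 'in_progress', 'done') NOT NULL DEFAULT 'new',
    ADD COLUMN started_at DATETIME NULL,
    ADD INDEX idx_status (status);

-- Grab one new record and mark it in progress:
UPDATE map_images
SET status = 'in_progress', started_at = NOW()
WHERE status = 'new'
LIMIT 1;

-- Periodic cleanup: recycle overdue computations.
UPDATE map_images
SET status = 'new', started_at = NULL
WHERE status = 'in_progress'
  AND started_at < NOW() - INTERVAL 1 HOUR;
```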
Re: selecting data for computation
> That shouldn't happen, since get_lock() will only succeed for one
> mysql connection at a time.

I figured it out. Because there was an "order by" clause, the get_lock() function was being called for each row in the result set, and since each call to get_lock() releases any previous lock you held, MySQL was effectively locking and then immediately unlocking (during the sort) each row in the result set.

I wound up moving the get_lock() call into a second query, like this:

  set i = 0;
  select * from foo order by blah limit i, 1;

and then:

  select get_lock(blah, 0);

and if that returns 1, do the processing; otherwise do:

  i = i + 1;
  continue;

and that seems to be working like a charm now. Thanks for your help!

Tim Gustafson
831-332-1496
t...@tgustafson.com
http://tgustafson.com/
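Written out against the real query, the two-step approach above might look as follows; the table and column names come from the earlier query in this thread, while x_val, y_val, and zoom_val are placeholders the application fills in from step 1, and 0 stands in for its offset counter i:

```sql
-- Step 1: fetch the i-th candidate row, without touching any locks.
SELECT x, y, zoom
FROM map_images
WHERE image_data_size = 0
ORDER BY zoom, SQRT(POW(x, 2) + POW(y, 2))
LIMIT 0, 1;   -- the application substitutes i for 0 on each retry

-- Step 2: as a separate statement, try to take the lock on that row's key
-- (no GET_LOCK inside the ORDER BY path, so the sort can no longer
-- acquire and release it once per row).
SELECT GET_LOCK(CONCAT(x_val, ',', y_val, ',', zoom_val), 0);
-- 1 -> lock held, process the row; 0 -> increment i and retry
```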