RE: Please help me.
Hi,

You can make a backup of the original table, then recreate the table and import. That will help you ASAP.

Muthukumar Selvarasu, Project Manager, Webmaster Ltd.

-----Original Message-----
From: Gary Smith [mailto:g...@primeexalia.com]
Sent: Thursday, March 19, 2009 7:25 AM
To: Valentin Ionescu; mysql@lists.mysql.com
Cc: wi...@lists.mysql.com; mysql-h...@lists.mysql.com
Subject: RE: Please help me.

Valentin,

http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html

Note that the section on dropping foreign keys uses the constraint name, not the key name. Try this and see if it solves the first problem (removing the constraint). Then you should be able to drop the column after that.

From: Valentin Ionescu [colibry...@yahoo.com]
Sent: Wednesday, March 18, 2009 11:27 AM
To: mysql@lists.mysql.com
Cc: wi...@lists.mysql.com; mysql-h...@lists.mysql.com
Subject: Please help me.

Hi! My name is Valentin and I am writing to you about the following problem. I created a database containing this table:

  CREATE TABLE `documents_ex` (
    `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `Journal_ex_ID` int(10) unsigned DEFAULT NULL,
    `Documents_ID` int(10) unsigned DEFAULT NULL,
    `Data` datetime DEFAULT NULL,
    `Nr` varchar(50) DEFAULT NULL,
    PRIMARY KEY (`ID`),
    UNIQUE KEY `Journal_ex_ID` (`Journal_ex_ID`,`Documents_ID`,`Data`,`Nr`),
    KEY `Documents_ID` (`Documents_ID`),
    CONSTRAINT `documents_ex_ibfk_1` FOREIGN KEY (`Journal_ex_ID`) REFERENCES `journal_ex` (`ID`) ON DELETE CASCADE,
    CONSTRAINT `documents_ex_ibfk_2` FOREIGN KEY (`Documents_ID`) REFERENCES `documents` (`ID`) ON DELETE CASCADE
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1

After some time I decided to drop the `Journal_ex_ID` column, and everything that depends on it, from this non-empty table. Everything I tried, such as:

  alter table documents_ex drop column Journal_ex_ID
  alter table documents_ex drop foreign key Journal_ex_ID
  alter table documents_ex drop key Journal_ex_ID

returns the same error 150, and I don't know what to do.
Please help me. Best regards.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=g...@primeexalia.com

--
MySQL Windows Mailing List
For list archives: http://lists.mysql.com/win32
To unsubscribe: http://lists.mysql.com/win32?unsub=muthukumar...@hotmail.com
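Gary's suggestion can be sketched as the following statements; the constraint and key names are taken from the SHOW CREATE TABLE output quoted above, and this is untested against the poster's actual data:

```sql
-- Drop the foreign key by its CONSTRAINT name (documents_ex_ibfk_1),
-- not the column or key name; the earlier attempts failed because they
-- named the key/column instead of the constraint.
ALTER TABLE documents_ex DROP FOREIGN KEY documents_ex_ibfk_1;

-- Then drop the unique index that covers the column...
ALTER TABLE documents_ex DROP KEY Journal_ex_ID;

-- ...and finally the column itself.
ALTER TABLE documents_ex DROP COLUMN Journal_ex_ID;
```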
Re: From MS Access to MySQL
> I have taken a Microsoft Access database, basically copied its structure, and rebuilt it in MySQL; however, I have encountered some problems with the formatting of data. I need to ask what is the most suitable field type to use to retain the content from these Access field types. The MS Access fields are:
>
> Currency (formatted to £ with two decimal places, e.g. £10.00)

DECIMAL(nnn,2). You'll have to add your own signage ($, £, etc.) upon extraction.

> Currency (formatted to £#,##0.00;(£#,##0.00))
> Date/Time (with a Now() default)

From the manual: The DATETIME type is used when you need values that contain both date and time information. MySQL retrieves and displays DATETIME values in 'YYYY-MM-DD HH:MM:SS' format. The supported range is '1000-01-01 00:00:00' to '9999-12-31 23:59:59'.

> Yes/No

ENUM() or SET()

> Number (as a percentage)

DECIMAL()

> Number (with field size of Double and Decimal Places of 2)

DECIMAL()

> What would be the equivalent of the above for MySQL 5.1.35? If there isn't an equivalent for some, how can I work around it, considering that the Access version of the database is already used in an ASP website?

See: http://dev.mysql.com/doc/refman/5.1/en/data-types.html

--Curtis
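A minimal sketch of the mapping Curtis describes; the table and column names here are invented for illustration, not taken from the original Access schema:

```sql
-- Hypothetical table showing the suggested Access-to-MySQL type mappings.
CREATE TABLE invoice_example (
  price       DECIMAL(10,2),   -- Access Currency; add the £ sign on output
  created_at  DATETIME,        -- Access Date/Time; apply the Now() default from the app or a trigger
  is_paid     ENUM('Y','N'),   -- Access Yes/No
  discount    DECIMAL(5,4),    -- Access Number shown as a percentage (store the fraction)
  weight      DECIMAL(10,2)    -- Access Number (Double, 2 decimal places)
) ENGINE=InnoDB;
```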
copy blob between tables may get inconsistent data?
hi,

I'm using mysql-5.1.22, InnoDB. For some reason, I need to add some columns to a table without locking it, so I do this:

  insert into TABLE2(col1, col2, ..., colN)
    select col1, col2, ..., colN from TABLE1
    where ...(primary key traversal condition);

where TABLE1 is in use, and TABLE2 has the new structure. Before I actually ran the copy task, I created some triggers to synchronize inserts and updates. When I finally completed the task, I found that ALL blob columns appear to end up in a state inconsistent with the original table. There are users doing queries and updates on TABLE1. So what's going wrong? Thank you.

Cui Shijun
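The trigger-based synchronization described above might look something like this; the trigger, table, and column names are hypothetical, and the sketch highlights the race that could explain the inconsistent blobs:

```sql
-- Hypothetical sync triggers: mirror changes on TABLE1 into TABLE2 while
-- the chunked INSERT ... SELECT copy runs. Note the gap: a row updated
-- AFTER its chunk was copied but BEFORE these triggers were created
-- (or in a chunk read under an older snapshot) keeps a stale blob value.
CREATE TRIGGER table1_sync_ins AFTER INSERT ON TABLE1
FOR EACH ROW
  INSERT INTO TABLE2 (pk_col, blob_col) VALUES (NEW.pk_col, NEW.blob_col);

CREATE TRIGGER table1_sync_upd AFTER UPDATE ON TABLE1
FOR EACH ROW
  UPDATE TABLE2 SET blob_col = NEW.blob_col WHERE pk_col = NEW.pk_col;
```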
Re: How to set db property so that table name queries are case-insensitive?
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_lower_case_table_names

I think you'll want to set it to 1.

On Wed, Mar 18, 2009 at 22:15, David M. Karr davidmichaelk...@gmail.com wrote:

> Ubuntu 8.10. I was experimenting with the Spring Petclinic sample application, configured with MySQL. I found that some of the tests were failing, apparently because of table-name case-sensitivity issues. I was able to fix some of the code references, but after that I hit others, probably with the same issue. The tables were created with lowercase names, but the generated queries use uppercase names. I can't easily control how it generates the SQL (it uses Hibernate). I think I'd prefer to have table names be case-sensitive, but I guess for now I have to figure out how to disable that, at least for this database.
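As a sketch, the setting goes in the [mysqld] section of the server's option file (my.cnf on Linux, my.ini on Windows) and takes effect on restart:

```ini
# Store table names in lowercase and compare them case-insensitively.
# Caution: set this before creating tables; changing it on an existing
# data directory with mixed-case table names can make tables unreachable.
[mysqld]
lower_case_table_names = 1
```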
Re: Separate customer databases vs all in one
You only confirm my first-blush opinion: keep separate databases.

Regarding your second issue (joining across databases for reports), I would suggest that these operations are best done against an OLAP database, not the numerous OLTP databases. That is, create an additional database whose sole purpose is reporting, and which accumulates all the data from the numerous OLTP databases on some scheduled basis (the schedule dictated by stakeholders -- how recent should the reports be? Will one week do? One day? One hour?).

The basic idea here is that reporting does aggregates and therefore necessarily does table scans, especially in your case. To place this burden on the OLTP databases is an error in design and, more importantly, a bottleneck in performance. My advice would be to separate the reporting tasks from the data-entry tasks. Aggregate the data periodically in the OLAP database and base all your reports on this, not on the OLTP databases. This way you maximize data-entry and update speed, while also maximizing reporting speed (since it won't cause contention with the data-entry activities). This would mean that the aggregate db is very large, but OTOH interrogating it won't impair the OLTP databases in the slightest.

So the big problem this scenario suggests is the granularity of the updates to the OLAP version of the data. That's not for me to decide. Ask the stakeholders how recent the data must be and proceed from there.

Arthur
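The periodic aggregation Arthur describes could be sketched as a scheduled INSERT ... SELECT; the database, table, and column names below are all hypothetical:

```sql
-- Hypothetical nightly load: roll yesterday's order totals from one
-- customer's OLTP database into a central reporting (OLAP) database,
-- so reports never scan the OLTP tables.
INSERT INTO reporting.daily_sales (customer_db, sale_date, order_count, total_amount)
SELECT 'customer_a', DATE(o.created_at), COUNT(*), SUM(o.amount)
FROM customer_a.orders AS o
WHERE o.created_at >= CURDATE() - INTERVAL 1 DAY
  AND o.created_at <  CURDATE()
GROUP BY DATE(o.created_at);
```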
Open source Data warehouse
Hi. Does anybody know of an open source data warehouse product like Bizgres?
Re: Performance SpamAssassin PostgreSQL vs MySQL
At 02:53 PM 3/18/2009, you wrote:

> We are using PostgreSQL currently to store the Bayes information. It seems to periodically spend a lot of time 'vacuuming', which of course drives up disk load. The system admin has adjusted it so it only does this at low load. I'm curious if anyone has actually tested the PostgreSQL vs MySQL versions. We are currently running a uniprocessor system (Linux version 2.6.18-6-vserver-amd64, Debian 2.6.18.dfsg.1-24). The system appears disk limited; we have the files on hardware RAID 0 and have moved nearly everything else off that set (they are the fastest drives). Just curious. Thanks.
>
> Bill Mussatto
> CyberStrategies, Inc.
> www.csz.com

When the db is not vacuuming, how is the speed? PostgreSQL is far superior to MySQL on multi-processor machines. But if you are sure the CPU level is low, then adding more CPUs is not going to help.

When you say you are storing Bayes information, are you referring to Bayes' theorem, so you are storing mostly numbers (doubles)? How large are your tables (number of rows, row length, disk space)? Are the rows being deleted or updated repeatedly? If not, then why do you need vacuuming?

Mike
Multiple batch files
Heya. I have folders of SQL files which, when there were only a few, I executed using source d:\folder\file.sql. Now I have hundreds, so is there a way to do source d:\folder\*.sql (which sadly does not work)? How would you go about automating this? The DB is running on a Windows box. Thanks for any suggestions you may have.

-- David Scott
Re: Multiple batch files
Thanks for responding Gary. I get:

---
 Volume in drive D has no label.
 Volume Serial Number is C45F-3341

 Directory of D:\data\03\09

03/19/2009  05:45 PM       389,305 sessions-20090309-00.sql
03/19/2009  05:45 PM       402,264 sessions-20090309-01.sql
03/19/2009  05:45 PM       417,579 sessions-20090309-02.sql
03/19/2009  05:45 PM       402,084 sessions-20090309-03.sql
etc
           220 File(s)     50,149,425 bytes
             0 Dir(s) 116,891,242,496 bytes free
---

How should I edit this?
-- Dave

2009/3/19 Gary Smith g...@primeexalia.com
> Dir *.sql > my.bat
> Edit batch file for each line to append | mysql
> Call batch file
> Sent via BlackBerry by AT&T
>
> -----Original Message-----
> From: David Scott critt...@desktopcreatures.com
> Date: Thu, 19 Mar 2009 11:08:28
> To: mysql@lists.mysql.com
> Subject: Multiple batch files
> [snip -- original question quoted]
Re: Multiple batch files
Hi David,

Why not use a FOR loop in a batch file? This post seems to be doing the same thing, using MS SQL Server:

http://bytes.com/topic/windows/answers/647680-xp-batch-file-loop-help

Andy

David Scott wrote:
> Thanks for responding Gary. I get:
> [snip -- dir listing of 220 .sql files]
> How should I edit this?
> -- Dave
>
> 2009/3/19 Gary Smith g...@primeexalia.com
>> Dir *.sql > my.bat
>> Edit batch file for each line to append | mysql
>> Call batch file
>> Sent via BlackBerry by AT&T
>> [snip -- original question quoted]
Re: Multiple batch files
Thanks, much better. I appended '| mysql' to each line, but I am not sure what you mean by 'then call it'. I ran the batch file and tried using source batch.bat, but no joy. What am I missing? Thanks again.
-- Dave

2009/3/19 Gary Smith g...@primeexalia.com
> Dir /b
> Sent via BlackBerry by AT&T
> [snip -- earlier messages in the thread quoted]
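Andy's FOR-loop suggestion might look like this as a batch file; the user, password, database name, and folder are placeholders, and you'd run the .bat directly from a Windows command prompt (not via the mysql client's source command):

```bat
@echo off
rem Sketch: feed every .sql file in the folder to the mysql client.
rem Replace myuser/mypass/mydb/d:\folder with real values.
for %%f in (d:\folder\*.sql) do (
  echo Importing %%f
  mysql -u myuser -pmypass mydb < "%%f"
)
```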
why is this happening?
I have a rather odd problem with a replication slave. The slave has been down for a number of hours. Of course, both the IO and SQL threads are stopped -- from SHOW SLAVE STATUS:

  Slave_IO_Running: No
  Slave_SQL_Running: No

However, the relay logs are still being written to. I was under the impression that the slave's IO thread was what brought data from the master's binlog to the slave's relay log. With the IO thread stopped, the relay logs should stop filling up -- right? Mine are definitely filling up.

Does anyone know why the relay logs keep filling up after replication has broken? Thanks

-- Jim Lyons
Web developer / Database administrator
http://www.weblyons.com
Re: why is this happening?
On 3/19/09, Jim Lyons jlyons4...@gmail.com wrote:
> I have a rather odd problem with a replication slave. [snip]
> Does anyone know why the relay logs keep filling up after replication has broken?

I noticed a similar occurrence the last time my replication broke. Someone here suggested running show processlist. What does yours show?

David
RE: why is this happening?
What does the master db show for processlist?

  master_mysql> show processlist;

Martin

Date: Thu, 19 Mar 2009 15:24:43 -0500
Subject: why is this happening?
From: jlyons4...@gmail.com
To: mysql@lists.mysql.com

> I have a rather odd problem with a replication slave. [snip]
> Does anyone know why the relay logs keep filling up after replication has broken?
Re: why is this happening?
I am afraid you have two mysql instances up on the same host.

  ps -ef | grep mysql ?

Claudio

Jim Lyons wrote:
> I have a rather odd problem with a replication slave. [snip]
> Does anyone know why the relay logs keep filling up after replication has broken?
Auto increment without uniqueness
I think ideally I would like to create an auto-increment column that has no requirement for uniqueness. So if 6 was the last entry, and there are 10 of them, 7 would still be the next. Is this possible? I am assuming it is not?

I am working on a case where data needs to be de-normalized, and I have never had to do this before. In the past, this would be a join away. Now I have one table; it will have a single parent record, with x child records. I start with an `id` primary key auto-increment field, standard stuff. Keeping it simple, let's say I have: id, group_id, foo, bar, baz.

I will do a batch of inserts, where the first record is the parent, and the rest are children. The first parent record is what is getting me stuck: I need to give it a group_id, and all the children will need to have the same group_id. I could do one insert for the parent, get the returned id, and then update the group_id on the parent to the same id, but that is an insert and an update, and I want to avoid the update. Can I do:

  insert into table set foo = 'test', `group_id` = `id`

Or is that too soon in the insert transaction to be noticed? I can allow the parent to have an empty group_id, and use the id as what I search on, but it seems messy. Suggestions? Thank you all.

-- Scott
* If you contact me off list replace talklists@ with scott@ *
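One common pattern for the parent/children batch above, as a sketch (the table name `items` is made up; the columns follow the post). Setting group_id = id inside the same INSERT generally won't work, because the auto-increment value isn't available yet while the row's other columns are being evaluated; capturing LAST_INSERT_ID() in a user variable keeps it to a single extra UPDATE for the parent only:

```sql
-- Insert the parent; its auto-increment id becomes the group id.
INSERT INTO items (foo, bar, baz) VALUES ('parent', 1, 2);

-- Capture the generated id for this connection.
SET @gid = LAST_INSERT_ID();

-- One small UPDATE to stamp the parent with its own id as group_id...
UPDATE items SET group_id = @gid WHERE id = @gid;

-- ...and the children reuse it directly, with no further updates.
INSERT INTO items (group_id, foo, bar, baz)
VALUES (@gid, 'child1', 3, 4),
       (@gid, 'child2', 5, 6);
```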