help needed to mount data from /var/lib/mysql
Hi Everybody, I need a small bit of help from you. On my Linux box I have a partition size limitation: only 5 GB of space for /var, and MySQL is installed in that partition. I want to use another path such as /home, where I have 120 GB of space. Somebody told me that we need to change the data path of MySQL, but he was not sure of that. Does anybody know about this?
Re: help needed to mount data from /var/lib/mysql
On 8/19/06, balaraju mandala [EMAIL PROTECTED] wrote: [original question snipped] Edit my.cnf and change the data path. Then read this page about how to set up your new environment: http://dev.mysql.com/doc/refman/4.1/en/unix-post-installation.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Parameterized stored procedures via ADO Command Object
Hi, I have been trying very hard to get this working, but I have NOT been able to call the stored procedures in my MySQL database using the ADO (**NOT** ADO.NET) Command object. I want to use stored procedures because I want to restrict access to stored procedures and views only. However, it seems that support for the ADO Command object has not been implemented - at least, that's what posters on the relevant MySQL forum say. Is there __any__ way I can do this? Thanks in advance, Asif
Re: help needed to mount data from /var/lib/mysql
Hello, You can change the MySQL data path in /etc/my.cnf by editing the configuration parameter datadir with the new data path. Once you have made the changes in my.cnf, restart the MySQL server. Thanks, ViSolve DB Team - Original Message - From: balaraju mandala [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Saturday, August 19, 2006 12:28 PM Subject: help needed to mount data from /var/lib/mysql [original question snipped]
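The move discussed in this thread can be sketched roughly as follows. The surrounding shell steps (stop mysqld, copy /var/lib/mysql to the new location preserving ownership, e.g. with cp -a, then restart) are assumptions about a typical Linux install, not taken from the poster's system; the Python below only demonstrates the my.cnf edit itself, on a hypothetical config:

```python
import re

# Hypothetical my.cnf contents; the real file is typically /etc/my.cnf.
my_cnf = """[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
"""

def set_option(cnf_text, option, value):
    """Replace an option=value line in a my.cnf-style text."""
    pattern = re.compile(r"^%s\s*=.*$" % re.escape(option), re.MULTILINE)
    return pattern.sub("%s=%s" % (option, value), cnf_text)

# Point both the data directory and the socket at the new location.
new_cnf = set_option(my_cnf, "datadir", "/home/mysql")
new_cnf = set_option(new_cnf, "socket", "/home/mysql/mysql.sock")
print(new_cnf)
```

Note that the socket path usually has to move along with the datadir, otherwise clients keep looking for the socket in the old location.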
Query needed to convert varchar to int ....sorry previous posting was incomplete
Dear all, I have a table with the following structure (all columns have privileges select,insert,update,references):

Field             Type          Collation        Null  Key  Default  Extra
id                int(11)       (NULL)           NO    PRI  (NULL)   auto_increment
indicatorName     varchar(255)  utf8_general_ci  YES        (NULL)
periodName        varchar(255)  utf8_general_ci  YES        (NULL)
source            varchar(255)  utf8_general_ci  YES        (NULL)
level             int(11)       (NULL)           YES        (NULL)
value             varchar(255)  utf8_general_ci  YES        (NULL)
numeratorValue    varchar(255)  utf8_general_ci  YES        (NULL)
denominatorValue  varchar(255)  utf8_general_ci  YES        (NULL)

Sample values in value, numeratorValue and denominatorValue:

value      numeratorValue  denominatorValue
NaN        Null            Null
infinity   null            Null
2143.9888  NULL            NULL
0.0        0.0             0.0

Now I need a query which converts the varchar into numeric values. For example, non-numeric values like NaN, Infinity and Null should come out as zero, '2143.9888' should be converted into the number 2143.9888, and '0.0' should also be converted to numeric. The result set should have all the above fields as numeric values. Can I do it using a query? If so, can anyone give me the query? Thanks and regards, venu
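One way to express the wanted conversion in MySQL would be a CASE expression with CAST; the SQL below is an untested sketch with a made-up table name, and the Python function mirrors the same mapping so the rule itself can be checked:

```python
# The mapping the poster wants: NULL/NaN/Infinity -> 0, numeric
# strings -> numbers.  In MySQL a CASE with CAST could express it;
# this SQL string is an untested sketch (table name is hypothetical).
SQL_SKETCH = """
SELECT id,
       CASE WHEN value IS NULL OR LOWER(value) IN ('nan', 'infinity')
            THEN 0 ELSE CAST(value AS DECIMAL(20, 4)) END AS value_num
FROM mytable;
"""

def to_number(text):
    """Convert a varchar cell to a number; non-numeric tokens become 0."""
    if text is None:
        return 0.0
    try:
        n = float(text)
    except ValueError:          # anything unparseable -> 0
        return 0.0
    # float('nan') and float('infinity') parse fine, so reject explicitly.
    if n != n or n in (float('inf'), float('-inf')):
        return 0.0
    return n

for cell in ('NaN', 'infinity', None, '2143.9888', '0.0'):
    print(cell, '->', to_number(cell))
```

The same CASE would be repeated for numeratorValue and denominatorValue.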
Re: help needed to mount data from /var/lib/mysql
Hi All, Thank you for your reply. But I am unable to find my.cnf. Do I need to create this file?
Managing big mysqldump files
Hi List, We are facing a problem managing the mysqldump output file, which is currently 80 GB in size and growing daily by 2 - 3 GB, but we have a Linux partition of only 90 GB. Our backup process first generates the mysqldump file of the total database, then compresses the dump file and removes the uncompressed dump. Is there any way to get a compressed dump file directly, instead of generating the dump file and then compressing it later? Any ideas or suggestions please. Thanks, Anil
RE: Managing big mysqldump files
Hi Anil, Why not pipe the mysqldump output directly into gzip? e.g.: mysqldump etc ... | gzip -c Regards, David Logan, ITO Delivery Specialist - Database, Hewlett-Packard Australia Ltd -Original Message- From: Anil Sent: Saturday, 19 August 2006 8:03 PM To: mysql@lists.mysql.com Subject: Managing big mysqldump files [original question snipped]
Re: Managing big mysqldump files
At 4:03 PM +0530 8/19/06, Anil wrote: [original question snipped] Short answer: yes - mysqldump [mysqldump options] | gzip > outputfile.gz Other alternatives: You could direct output to a filesystem that is larger than the 90GB filesystem you mention (perhaps NFS mounted?). You could pipe the output of gzip through ssh to a remote server. You could use bzip2, which compresses substantially better than gzip, but with a significant performance/speed penalty (that is, do mysqldump | bzip2 > outputfile.bz2). Try 'man gzip' and 'man bzip2' for more info. steve -- Steve Edberg, UC Davis Genome Center, http://pgfsun.ucdavis.edu/ Bioinformatics programming/database/sysadmin
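The pipe suggested above saves space because the data is compressed as it streams, so the full uncompressed dump never exists on disk. A small Python sketch of the same idea (the dump text is a stand-in; with a real server you would read mysqldump's stdout chunk by chunk, e.g. via subprocess.Popen):

```python
import gzip
import io

# Stand-in for mysqldump's streamed output.
dump_chunks = [b"-- MySQL dump\n", b"INSERT INTO t VALUES (1);\n"] * 1000

buf = io.BytesIO()                      # stands in for the .gz output file
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    for chunk in dump_chunks:           # constant memory: one chunk at a time
        gz.write(chunk)

compressed = buf.getvalue()
original = b"".join(dump_chunks)
assert gzip.decompress(compressed) == original   # round-trips losslessly
print(len(original), "->", len(compressed), "bytes")
```

Since only one chunk is in memory at a time, the approach works regardless of how large the 80 GB dump grows.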
Re: More than 4 CPUs?
On 8/19/06, Wai-Sun Chia wrote: On 8/19/06, Jochem van Dieten wrote: Tweakers.net did a benchmark comparing a trace of the queries generated by their own website on a T1 to a dual Opteron. The article is in Dutch, but the graphs speak for themselves: http://tweakers.net/reviews/633/7 http://tweakers.net/reviews/633/8 Wow! The graphs speak for themselves... CoolThreads suddenly don't seem so cool after all! :-( Linear scalability is good. The graphs showing linear scalability are from PostgreSQL; the graphs for MySQL are the ones on the bottom that show a rather spectacular meltdown when the load increases. Jochem
Re: help needed to mount data from /var/lib/mysql
I am unable to start the server after shifting to the new location. I tried to start 'mysqld' but it failed. A blank mysql.sock file is being created. The log files are also not reporting any problem.
Re: help needed to mount data from /var/lib/mysql
If I search for any running process I get the following:

[EMAIL PROTECTED] mysql]# ps -ef | grep mysqld
root  18389      1  0 13:09 pts/3  00:00:00 /bin/sh /usr/bin/mysqld_safe --defaults-file=/etc/my.cnf --pid-file=/var/run/mysqld/mysqld.pid
mysql 18422  18389  0 13:09 pts/3  00:00:00 /usr/libexec/mysqld --defaults-file=/etc/my.cnf --basedir=/usr --datadir=/home/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-locking --socket=/home/mysql/mysql.sock
root  18725  15850  0 13:47 pts/3  00:00:00 grep mysqld

I am new to this concept, please help me.
MySQL Cluster 5.0.24 (Import) Slow
Hi everybody, I am running Linux AS-4 with the 5.0.24-max version of MySQL Cluster. I am able to create all the tables as NDB, but when coming to the import I am not able to import 20 lakh (2 million) records for a table. Please help me solve the problem. Any suggestions? Thanks and regards, Dilipkumar
Re: MySQL Cluster 5.0.24 (Import) Slow
Dilipkumar wrote: [original question snipped] Hi - Do you have any specific errors? Can you elaborate any? Thanks, -dant
Re: MySQL Cluster 5.0.24 (Import) Slow
Hi, It is saying (unknown error 1 in ndb cluster), please report a bug to mysql.bug. Thanks and regards, Dilipkumar - Original Message - From: Dan Trainor [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Sunday, August 20, 2006 2:06 AM Subject: Re: MySQL Cluster 5.0.24 (Import) Slow [previous message snipped]
Should joins always use an index? (where possible?)
I have a query like so:

select A, index_A from tableA join tableB on tableB.indexA = tableA.indexA

select A, index_A from tableA join tableB on tableB.A = tableA.A

Which would be more efficient: the join which uses the index, or the one which doesn't?
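As a rule of thumb the join on the indexed column is the one the optimizer can satisfy with index lookups instead of full scans, and EXPLAIN shows which plan you actually get. A sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN (the index name is made up, and MySQL's output format differs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tableA (indexA INTEGER, A TEXT);
    CREATE TABLE tableB (indexA INTEGER, A TEXT);
    CREATE INDEX idx_b ON tableB(indexA);   -- only indexA is indexed
    INSERT INTO tableA VALUES (1, 'x'), (2, 'y');
    INSERT INTO tableB VALUES (1, 'x'), (2, 'y');
""")

def plan(sql):
    """Concatenate the 'detail' column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

indexed = plan("SELECT * FROM tableA JOIN tableB ON tableB.indexA = tableA.indexA")
unindexed = plan("SELECT * FROM tableA JOIN tableB ON tableB.A = tableA.A")
print("indexed join:  ", indexed)     # uses idx_b for the inner lookup
print("unindexed join:", unindexed)   # typically a scan or automatic index
```

Running the same comparison in MySQL with EXPLAIN SELECT ... would show `ref` access via the index in one case and a full scan of the inner table in the other.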