checking return type of sprintf... configure: error: cannot run test program while cross compiling
Hi! Any clue will be appreciated. I have the following problem while cross compiling for powerpc:

(configure) checking return type of sprintf... configure: error: cannot run test program while cross compiling

This occurs when the makefile is generated with the following configure invocation: CC=powerpc-linux-gcc LD=powerpc-linux-ld ./configure --host=powerpc-linux --build=i686-pc-linux-gnu. I have also tried many other options, but the problem has not been resolved. Kindly help. Thanks, Arvind
Moving database to another machine
Hello all, I want to move around 50 GB of data on Machine A to Machine B. Both machines are of the same architecture (LAMP). Dumping and restoring takes a lot of time. Is there a faster method to accomplish this? Is there a way to tar the whole thing and untar it on Machine B? Or any other way? Ratheesh Bhat K J
Re: Moving database to another machine
Hi, Try compressing on one side and uncompressing on the other:

Compress [Mac A]: shell> mysqldump --all-databases | gzip > dbfilename.sql.gz
Uncompress [Mac B]: shell> gunzip < dbfilename.sql.gz | mysql -u user -p db

Thanks, ViSolve DB Team.

----- Original Message ----- From: Ratheesh K J [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Tuesday, September 12, 2006 12:13 PM Subject: Moving database to another machine

Hlo all, I wanted to move around 50 GB of data on Machine A to Machine B. Both the machines are of same architecture ( LAMP ) Dumping and restoring takes a lot of time. Is there a faster method to accomplish the same? Is there a way to tar the whole thing and untar on Machine B? or any other way? Ratheesh Bhat K J

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
myisam primary key with innodb primary key..
Hi, I want to know whether there is any difference between a MyISAM primary index and an InnoDB primary index. -- Regards, Lakshmi.M.P. DBA Support Sify Limited. Extn: 4134

** DISCLAIMER ** Information contained and transmitted by this E-MAIL is proprietary to Sify Limited and is intended for use only by the individual or entity to which it is addressed, and may contain information that is privileged, confidential or exempt from disclosure under applicable law. If this is a forwarded message, the content of this E-MAIL may not have been sent with the authority of the Company. If you are not the intended recipient, an agent of the intended recipient or a person responsible for delivering the information to the named recipient, you are notified that any use, distribution, transmission, printing, copying or dissemination of this information in any way or in any manner is strictly prohibited. If you have received this communication in error, please delete this mail and notify us immediately at [EMAIL PROTECTED]
Enums vs prepared statements
Hi, I try to read enums from a database through the prepared statement API, as a number. The enum is defined e.g. as ENUM('Accept', 'Reject', 'Reject_All'). The bind structure is filled in the following way:

buffer_type = MYSQL_TYPE_LONG
buffer points to an uint32_t
buffer_length = 4
length points to an unsigned long
is_null points to a bool
is_unsigned = true
error points to a my_bool

After calling mysql_stmt_fetch() I receive MYSQL_DATA_TRUNCATED, error is set to 1, and length is set to 4. Could somebody explain what I am doing wrong? Inserts seem to work properly with the same setup; only selects have this problem. Thx, ImRe P.S.: version = 5.0.22
Re: myisam primary key with innodb primary key..
Lakshmi wrote: Hi, I want to know is there any difference between myisam primary index vs innodb primary index... One's for a MyISAM table, one's for an InnoDB table. They are treated exactly the same - both are unique, both have indexes.
Re: myisam primary key with innodb primary key..
Hi Lakshmi, I guess both are treated the same, but the physical structure of storage differs between MyISAM and InnoDB. Correct me if I am wrong. Lakshmi wrote: Hi, I want to know is there any difference between myisam primary index vs innodb primary index...
Re: Moving database to another machine
Ratheesh K J wrote: Hlo all, I wanted to move around 50 GB of data on Machine A to Machine B. Both the machines are of same architecture ( LAMP ) Dumping and restoring takes a lot of time. Is there a faster method to accomplish the same? Is there a way to tar the whole thing and untar on Machine B? or any other way? Thanks, Ratheesh Bhat K J

If you have exactly the same version of mysql and keep the same configuration, you can copy the whole data directory (e.g. /var/lib/mysql) onto the second machine, but obviously in this manner you will overwrite what you have on the second server. Don't forget to copy innodb's files, too, if you use it.
Re: Moving database to another machine
Nico Sabbi wrote: Ratheesh K J wrote: Hlo all, I wanted to move around 50 GB of data on Machine A to Machine B. Both the machines are of same architecture ( LAMP ) Dumping and restoring takes a lot of time. Is there a faster method to accomplish the same? Is there a way to tar the whole thing and untar on Machine B? or any other way? Thanks, Ratheesh Bhat K J if you have exactly the same version of mysql keep the same configuration you can copy on the second machine the whole data directory (e.g. /var/lib/mysql), but obviously in this manner you will overwrite what you have in the second server. Don't forget to copy innodb's files, too, if you use it.

An additional note - you can only do this while mysql is completely shut down. You cannot do this while mysql is running on either server.
Re: Moving database to another machine
Chris wrote: An additional note - you can only do this while mysql is completely shut down. You cannot do this while mysql is running on either server.

Why is it that I received this mail 3 times? I can understand 2 (one to me and one to the list), but 3...
mysql driver for C++
Hi, Could you please tell me how I can install the MySQL C++ driver on Fedora Core 5? Is it available for all platforms? Does yum support installing the MySQL C++ driver? Please help me, it's really urgent. Thank you, Shaine.
Re: mysql driver for C++
Check out MySQL++. I don't think there is a package, but configure, make, make install works for me (on FC5, with mysql++ 1.7.40 and 2.1.1).

----- Original Message ----- From: Shain Lee [EMAIL PROTECTED] To: MySql mysql@lists.mysql.com Sent: Tuesday, September 12, 2006 11:31:02 AM Subject: mysql driver for C++

Hi , Could you please tell me , how can i install C++ , mysql driver in Fedora core 5 ? Is that available for any platforms ? does YUM supports for install mysql c++ ? please help me , its really urgent . Thank you, Shaine.
Re: Has InnoDb licensing changed to accommodate Oracle?
Mike, Oracle Corp. and MySQL AB renewed the InnoDB OEM contract in spring 2006. The licensing of InnoDB is the same as before, and it is distributed in the official MySQL distros. Best regards, Heikki Tuuri CEO of Innobase Oy VP of Oracle Corporation

Has Oracle placed any restrictions on using InnoDb and MySQL now that the original MySQL AB license has expired with Heikki?? What is the name of the new MySQL transaction engine and is anyone using it? Is it any good? TIA Mike
storing URL 2083 characters
Hi, URLs have a practical limit of 2083 characters, it seems. To store these in a space-efficient way (I have 1,000,000s of url rows), what's the best approach? varchar has a 255 maximum, right? Should I just use TEXT? I'm not searching *in* the urls; I am selecting like this: where url = 'xxx'. Thanks, Peter -- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
Re: IN ANY subqueries
Or perhaps add a TEXT index to the table? Peter

On 9/6/06, Philippe Poelvoorde [EMAIL PROTECTED] wrote: 2006/9/5, Ben Lachman [EMAIL PROTECTED]: So possibly a better solution would be to create a temporary table and then do a subquery on that table? Yes. Or do a: SELECT id FROM t1 WHERE name LIKE '%s1%' OR name LIKE '%s2%' OR name LIKE '%s3%' but depending on your data, a table may be the way to go. -Ben

On Sep 4, 2006, at 7:37 AM, Visolve DB Team wrote: Hi, The ANY keyword, which must follow a comparison operator, means return TRUE if the comparison is TRUE for ANY of the values in the column that the subquery returns. IN has 2 forms: 1. IN (subquery). [The word IN is an alias for = ANY (subquery)]. 2. IN (list of values separated by commas). Hence the exact syntax to use is: Select fields from table where fieldname = ANY ( select fieldname from table); Ref: http://dev.mysql.com/doc/refman/5.0/en/any-in-some-subqueries.html Thanks ViSolve DB Team

----- Original Message ----- From: Ben Lachman [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Sunday, September 03, 2006 10:33 AM Subject: IN ANY subqueries

I have a string comparison that I would like to do against a short list of constant strings. At the moment I am using the syntax SELECT id FROM t1 WHERE name IN('s1', 's2', 's3', ...); However, this limits me to exact matches, and I'd like to move to a comparison expression that lets me match names that contain any of the list. The MySQL docs state that 'IN()' is an alias for '= ANY()', however when I substitute '= ANY' for IN I get a parse error. What I'd like to do is write something like (although I figure there may be a better way to do the comparison that I am not thinking of): SELECT id FROM t1 WHERE name LIKE ANY('%s1%', '%s2%', '%s3%', ...); Does anyone know a way to do this?
Thanks, -Ben

-- http://www.myspace.com/sakuradrop : forget the rest http://www.w-fenec.org/ Webzine rock/metal

-- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
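The OR-of-LIKEs approach from the thread can be sketched as follows. This is a minimal stand-in using Python's sqlite3 module purely so the example is self-contained and runnable; the generated WHERE clause has the same shape in MySQL, and the table and sample data are made up:

```python
import sqlite3

# Throwaway in-memory table standing in for t1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t1 (name) VALUES (?)",
                 [("alpha s1 tail",), ("no match here",), ("mid s3 part",)])

# Substring match against any of a short list of constants:
patterns = ["s1", "s2", "s3"]
where = " OR ".join(["name LIKE ?"] * len(patterns))
rows = conn.execute("SELECT id FROM t1 WHERE " + where,
                    ["%" + p + "%" for p in patterns]).fetchall()
print([r[0] for r in rows])  # -> [1, 3]
```

Building the clause from placeholders rather than string-concatenating the patterns keeps the query safe if the substrings come from user input.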
Re: Moving database to another machine
Or you can do this to combine the two steps together:

shell> mysqldump --all-databases -uuser -ppassword | gzip | ssh [EMAIL PROTECTED] 'gunzip > dump.sql'

Kishore Jalleda http://kjalleda.googlepages.com/mysqlprojects

On 9/12/06, Visolve DB Team [EMAIL PROTECTED] wrote: Hi, Try, Compress [Mac A]: shell> mysqldump --all-databases | gzip > dbfilename.sql.gz Uncompress [Mac B]: shell> gunzip < dbfilename.sql.gz | mysql -u user -p db Thanks ViSolve DB Team. ----- Original Message ----- From: Ratheesh K J [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Tuesday, September 12, 2006 12:13 PM Subject: Moving database to another machine Hlo all, I wanted to move around 50 GB of data on Machine A to Machine B. Both the machines are of same architecture ( LAMP ) Dumping and restoring takes a lot of time. Is there a faster method to accomplish the same? Is there a way to tar the whole thing and untar on Machine B? or any other way? Ratheesh Bhat K J
SQL
Is there anyone who has EXTENSIVE sql experience that is available for consulting? This is a short-term opportunity for a single project, maybe a couple hours tops, to assist with some COMPLEX queries that do totals and the like across multiple tables. This is NOT for an amateur. IF you can handle COMPLEX SQL and have EXTENSIVE experience, please contact me off list; just send me an email and I'll respond tonight. When you respond, please send the MOST COMPLEX query you have ever written as an example of your work. The caps are not meant to be yelling but rather to accentuate the fact that I need the assistance of someone extremely experienced for this project. Thanks Steffan --- T E L 6 0 2 . 5 7 9 . 4 2 3 0 | F A X 6 0 2 . 9 7 1 . 1 6 9 4 Steffan A. Cline [EMAIL PROTECTED] Phoenix, Az http://www.ExecuChoice.net USA AIM : SteffanC ICQ : 57234309 Lasso Partner Alliance Member ---
2 versions of mysql on Unix: problem with the socket
I am running mysql 4.1.7 on solaris 9. I want to install 2 versions (4.1.7 and 5) on the server, and I am reconfiguring my scripts with that in mind. I have a script that starts the mysql v4 server, so that I finally have:

7002 pts/4 S 0:00 /bin/sh /seqweb/mysql4/bin/mysqld_safe --defaults-file=/seqweb/mysqldata/my.cnf --port=3306 --socket=/tmp/mysql4.sock --pid-file=/seqweb/mysqldata/mysql4.pid --user=mysql --datadir=/seqweb/mysqldata
7023 pts/4 S 0:13 /seqweb/mysql-4.1.7/libexec/mysqld --defaults-file=/seqweb/mysqldata/my.cnf --basedir=/seqweb/mysql-4.1.7 --datadir=/seqweb/mysqldata --pid-file=/seqweb/mysqldata/mysql4.pid --port=3306 --socket=/tmp/mysql4

The problem is that I always have to give --socket to all the clients; the port number is not enough. Ex:

mercure{mysql}139: /seqweb/mysql4/bin/mysql --port 3306
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
mercure{mysql}140: /seqweb/mysql4/bin/mysql --port 3306 --socket /tmp/mysql4.sock
Welcome to the MySQL monitor.

Of course I can do that, but the problem is when I use a client like a perl script with DBI:

while (<DBLIST>) {
    my $db_name  = $_;
    my $port     = 3306;
    my $hostname = 'localhost';
    my $socket   = '/tmp/mysql4.sock';
    chomp($db_name);
    my $dsn = "DBI:mysql:database=$db_name;host=$hostname;port=$port";
    my $dbh = DBI->connect($dsn, $DBUSER, $DBPASS, { RaiseError => 1, PrintError => 0 })

DBI connect('database=mysql;host=localhost;port=3306','mysql',...) failed: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) at /seqweb/mysql4/myscripts/4/check_innodb.pl line 58

Where is the problem? Is it in the way I start mysql? I would really appreciate help! Scripts are below. Thank you in advance.

start_script: #!
/bin/sh /seqweb/mysql4/bin/mysqld_safe \
  --defaults-file=/seqweb/mysqldata/my.cnf \
  --port=3306 \
  --socket=/tmp/mysql4.sock \
  --pid-file=/seqweb/mysqldata/mysql4.pid \
  --user=mysql \
  --datadir=/seqweb/mysqldata

/etc/my.cnf:

mercure{mysql}143: more /etc/my.cnf
[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

my.cnf server side: more /seqweb/mysqldata/my.cnf
[client]
#password = your_password
port = 3306
socket = /tmp/mysql4.sock

# The MySQL server
[mysqld]
skip-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
#sort_buffer_size = 512K
sort_buffer_size = 4M         # Modified 17 Aug 2004 JD
net_buffer_length = 8K
myisam_sort_buffer_size = 8M
log-bin = /home/mysqldisk2/4/logs/log-bin/log_bin                            # Aug 2006
log-slow-queries = /seqdata/static/mysql/4/logs/slow-query/log-slow-queries  # Aug 2006
log = /seqdata/static/mysql/4/logs/general_query_log/mercure.log             # Aug 2006
tmpdir = /seqdata/static/mysql/4/tmp                                         # Aug 2006
thread_cache = 16             # Modified 17 Aug 2004 JD
read_rnd_buffer_size = 2M     # Modified 17 Aug 2004 JD
innodb_file_per_table
log-bin
# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1
innodb_data_home_dir = /home/mysqldisk3/innodb/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /home/mysqldisk2/4/logs/innodb_logfiles/logfiles  # Aug 2006
innodb_log_arch_dir = /home/mysqldisk2/innodb_logfiles/log_arch_files
innodb_buffer_pool_size = 500M   # Modified 1 Aug 2004 JD
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

Johanne Duhaime [EMAIL PROTECTED]
Re: storing URL 2083 characters
Hi Peter, I thought I'd just mention that the varchar length limit depends on your MySQL version and character set. 5.0.3 and later handles up to 64k. See: http://dev.mysql.com/doc/refman/5.0/en/char.html /Johan

Peter Van Dijck skrev: Hi, URL's have a practical limit of 2083 characters it seems. To store these in a space efficient way (I have 1,000,000s of url rows), what's the best approach? varchar has a 255 maximum, right? Should I just use TEXT? I'm not searching *in* the urls, I am selecting like this: where url = 'xxx'. Thanks, Peter
Re: storing URL 2083 characters
Peter Van Dijck wrote: Hi, URL's have a practical limit of 2083 characters it seems. To store these in a space efficient way (I have 1,000,000s of url rows), what's the best approach? varchar has a 255 maximum, right? Should I just use TEXT? I'm not searching *in* the urls, I am selecting like this: where url = 'xxx'.

Do the URLs occur multiple times? If so, I would create a URL table that had the URL and a primary auto_increment key. Then I would just reference the key for each instance. The other thing you could do is use the COMPRESS function. Note that while URLs can be 2083 characters, they generally aren't. So if you use a TEXT field and had 1,000,000 URLs and the average URL was x characters long, you would need (x + overhead) * 1,000,000 bytes. I would guess for most situations (x + overhead) is less than 200, so that is only about 200 MB - not particularly huge. You can probably save a factor of 2 or 3 with COMPRESS. If the URLs repeat a lot, you can probably save a lot more than that with the separate URL table.
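The separate-URL-table idea above can be sketched in a few lines. Python's sqlite3 module stands in here only to keep the example self-contained and runnable; the table and column names are made up, and the `INSERT OR IGNORE` is SQLite's spelling of what MySQL writes as `INSERT IGNORE` against a column with a UNIQUE index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per distinct URL; other tables store only the integer id.
conn.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT UNIQUE)")

def url_id(conn, url):
    # Insert on first sight (no-op if already present), then look up the id.
    # INSERT OR IGNORE is the SQLite spelling; MySQL spells it INSERT IGNORE.
    conn.execute("INSERT OR IGNORE INTO urls (url) VALUES (?)", (url,))
    return conn.execute("SELECT id FROM urls WHERE url = ?", (url,)).fetchone()[0]

a = url_id(conn, "http://example.com/a")
b = url_id(conn, "http://example.com/b")
print(a, b, url_id(conn, "http://example.com/a"))  # repeats reuse the same id
```

The UNIQUE index also makes the poster's `WHERE url = 'xxx'` lookups indexed lookups rather than table scans.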
resources for tuning
Hi, We just upgraded our server's RAM from 1 GB to 5 GB - I'd like to now make some adjustments in my.cnf to better tune my server to this new memory amount. I was looking in the MySQL 4.1.21 source dir's support-files at the example my.cnf files provided with the distribution, but these seem horribly out-dated (my-large.cnf assumes you have a whopping 512 MB of memory). I was thinking of modelling my.cnf after the provided my-innodb-heavy.cnf file, but I am worried these files might be out-dated too, so I figured I'd ping the mailing list for other tuning resources. Thanks, Josh
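None of the shipped sample files target a 5 GB box, so as a purely hypothetical starting point (every value below is an assumption to benchmark against your own workload, not advice given in this thread) the main memory knobs in a 4.1-era my.cnf look like:

```ini
# Hypothetical my.cnf fragment for a ~5 GB dedicated MySQL 4.1 server -- tune and measure.
[mysqld]
key_buffer              = 512M   # MyISAM index cache; only worth it if MyISAM is used heavily
innodb_buffer_pool_size = 2048M  # the big InnoDB cache; leave headroom for the OS and per-connection buffers
sort_buffer_size        = 2M     # allocated per connection, so keep it modest
table_cache             = 512
query_cache_size        = 32M
```

Note that a 32-bit mysqld cannot address all 5 GB regardless of settings, and per-connection buffers multiply by max_connections, which is why the global caches above deliberately stop well short of total RAM.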
RE: myisam primary key with innodb primary key..
FULLTEXT indexes can only be used with MyISAM tables. Regards, Jerry Schwartz Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341

-----Original Message----- From: Lakshmi [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 12, 2006 3:29 AM To: mysql@lists.mysql.com Subject: myisam primary key with innodb primary key.. Hi, I want to know is there any difference between myisam primary index vs innodb primary index... -- Regards, Lakshmi.M.P. DBA Support Sify Limited. Extn:4134
RE: Moving database to another machine
I am receiving some messages an extra time, also. It seems that something is wrong with the list server. Regards, Jerry Schwartz Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341

-----Original Message----- From: Nico Sabbi [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 12, 2006 4:27 AM To: mysql@lists.mysql.com Subject: Re: Moving database to another machine

Chris wrote: An additional note - you can only do this while mysql is completely shut down. You cannot do this while mysql is running on either server. why is it that I received this mail 3 times? I can understand 2 (one to my and one to the list), but 3 ...
Re: storing URL 2083 characters
The urls never repeat, and it's a very active table, so I don't wanna compress right now. And you're right, most URLs are under 255 chars, but some are bigger, so I can't use varchar. I guess I'll just use TEXT :) Thanks! Peter

On 9/12/06, Mike Wexler [EMAIL PROTECTED] wrote: Do the URLs occur multiple times? If so, I would create a URL table, that had the URL and a primary auto_increment key. Then I would just reference the key for each instance. The other thing you could do is use the compress function. Note, that while URLs can be 2083 characters, they generally aren't. So if you use a TEXT field and had 1,000,000 URLs and the average URL was x characters long you would need (x + overhead) * 1,000,000 bytes. I would guess for most situations (x + overhead) is less than 200, so that is only about 200 MB. not particularly huge. You can probably save a factor of 2 or 3 with compress. If the URLs repeat a lot, you can probably save a lot more than that with the sepearate URL table.

-- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
Duplicate Messages
Overnight (Monday night US time) I was receiving some duplicate messages (not ones addressed to me specifically). Has this been fixed? Regards, Jerry Schwartz Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341
Re: 2 versions of mysql on Unix: problem with the socket
Duhaime Johanne wrote: I am running mysql 4.1.7 on solaris 9. I want to install 2 versions (versions 4.1.7 and 5) on the server. [...] The problem is that I always have to give the --socket to all the clients, the port number is not enough.

Put a .my.cnf file in the client's home directory that specifies the socket. -- Gerald L. Clark Supplier Systems Corporation
RE: 2 versions of mysql on Unix: problem with the socket
Thank you for your answer. Here is what I did:

cd /seqweb/mysqlhome
mercure{mysql}140: more .my.cnf
# The following options will be passed to all MySQL clients
[client]
user = mysql
password =
port = 3306
socket = /tmp/mysql4.sock
# Here follows entries for some specific programs
# The MySQL server
[mysqld]
[mysqldump]
[mysql]
[isamchk]
[myisamchk]
[mysqlhotcopy]

I still have the problem:

DBI connect('database=mysql;host=localhost;port=3306','mysql',...) failed: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) at /seqweb/mysql4/myscripts/4/check_innodb.pl line 62

Also, this is a less attractive solution since the same user will work on version 4 and version 5. Thank you

-----Original Message----- From: Gerald L. Clark [mailto:[EMAIL PROTECTED] Sent: Tuesday, 12 September 2006 10:39 To: Duhaime Johanne Cc: mysql@lists.mysql.com Subject: Re: 2 versions of mysql on Unix: problem with the socket

Put a .my.cnf file in the client's home directory that specifies the socket. -- Gerald L. Clark Supplier Systems Corporation
anyone using subversion to sync mysql dbs ?
Is anyone using subversion to sync live and development databases? If so, how? Is this desired or a best practice? Everything except my databases are under version control. In theory, I would like my databases to sync with the same subversion 'svn update' command. That way, all web content updates with one command. many thanks g
Re: resources for tuning
Josh, I highly recommend Jeremy Zawodny's High Performance MySQL from O'Reilly. It's written for 4.1 users, so would be perfect for you. Dan

On 9/12/06, Josh Trutwin [EMAIL PROTECTED] wrote: Hi, We just upgraded our server's RAM from 1 GB to 5 GB - I'd like to now make some adjustments in my.cnf to better tune my server to this new memory amount. I was looking in the MySQL 4.1.21 source dir's support-files at the example my.cnf files provided with the distribution, but these seem horribly out-dated. (The my-large.cnf assumes you have a whopping 512 mb of memory). I was thinking of modelling my.cnf after the provided my-innodb-heavy.cnf file, but I am worried these files might be out-dated so I figured I'd ping the mailing list for other resources for tuning? Thanks, Josh
MySQL tuning advice
Hello, I am looking for some advice regarding the tuning of a dedicated MySQL database server. The machine in question is a Dell 2850 with dual Xeon procs and 4 GB of memory running FreeBSD 6.1. Its sole purpose in life is to run the MySQL 5.0.x database engine, serving out databases to multiple Apache servers. I started with the my-large.cnf file distributed with the MySQL port on FreeBSD 6.1 and made the following changes:

set max_allowed_packet = 2M
set-variable=max_connections=500

Now the questions: 1. What would be the recommended max_user_connections for the Dell 2850? 2. What is the practical maximum value that max_user_connections could be set to? I have been looking around for documentation regarding a max value for max_user_connections and found the following: http://www-css.fnal.gov/dsg/external/freeware/mysqlTuning.html What documentation do others use for tuning MySQL? My goal is to convert our PHP and Perl applications we run on the Apache web servers to use persistent database connections to the dedicated MySQL server. -- Greg Pelle System Administrator Domain-it! Inc.
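On the max_user_connections questions: it caps simultaneous connections per MySQL account, so in practice its ceiling is max_connections together with per-thread memory (each connection carries its own sort/read buffers and thread stack). A hypothetical sketch, with the numbers being assumptions to adjust rather than recommendations from this thread:

```ini
# Hypothetical fragment for a 4 GB dedicated server -- adjust after measuring.
[mysqld]
max_connections      = 500   # global ceiling; each thread costs a stack plus per-connection buffers
max_user_connections = 450   # per-account ceiling; keeping it below max_connections leaves room for admin logins
```

With persistent connections, every Apache child on every web server tends to hold one connection open even when idle, so max_connections should be sized against the sum of the web servers' MaxClients settings, not against expected concurrent queries.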
Re: MySQL tuning advice
I highly recommend Jeremy Zawodny's High Performance MySQL from O'Reilly. Dan On 9/12/06, Gregory T Pelle [EMAIL PROTECTED] wrote: Hello, I am looking for some advice regarding the tuning of a dedicated MySQL database server. [snip]
Distinct select over 2 fields?
I'm searching a database of geopoints, and when two records have the same latitude and longitude, I only want to return one of them - basically just find all the unique locations. How do you set up a select like this? Thanks...
Re: Distinct select over 2 fields?
select * from table where ... limit 1 - that would do it if you don't care which one it returns. JC On Tue, 12 Sep 2006, Brian Dunning wrote: I'm searching a database of geopoints, and when two records have the same latitude and longitude, I only want to return one of them - basically just find all the unique locations. How do you set up a select like this? Thanks...
Re: Distinct select over 2 fields?
Many different records will be returned though, I just don't want any dupes where both lat/lon are the same. :) On Sep 12, 2006, at 12:20 PM, Hiep Nguyen wrote: select * from table where ... limit 1 - that would do it if you don't care which one it returns [snip]
Re: Distinct select over 2 fields?
Select DISTINCT(lat_long_field) from table where... Steve Musumeche CIO, Internet Retail Connection [EMAIL PROTECTED] Brian Dunning wrote: Many different records will be returned though, I just don't want any dupes where both lat/lon are the same. :) [snip]
Re: Distinct select over 2 fields?
Lat and lon are two different fields. Either can be duplicated, but not both. On Sep 12, 2006, at 12:33 PM, Steve Musumeche wrote: Select DISTINCT(lat_long_field) from table where... [snip]
Re: MySQL tuning advice
Really you need to look at what your server is doing - the traffic patterns - before you can tune. There is a lot you can determine just by looking at the output of SHOW STATUS and SHOW VARIABLES. Threads_created number high and growing? Increase the thread_cache_size variable. Opened_tables number keeps growing? Increase the table_cache variable. Those are very simple examples, and there is a lot more you can tell, including how your indexes are doing. Read up on the status and variables; there is information on ratios and lots of other good stuff to help you change basic things. Careful with persistent connections and PHP - you can easily trip over your own connections. - Original Message - From: Gregory T Pelle [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Tuesday, September 12, 2006 12:50 PM Subject: MySQL tuning advice [snip]
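The checks described in this reply can be run straight from the mysql client. A minimal sketch (the status and variable names here are as they existed in MySQL 4.1/5.0; later versions renamed table_cache):

```sql
-- Threads_created growing quickly relative to Connections
-- suggests raising thread_cache_size.
SHOW STATUS LIKE 'Threads_created';
SHOW VARIABLES LIKE 'thread_cache_size';

-- Opened_tables growing steadily suggests raising table_cache.
SHOW STATUS LIKE 'Opened_tables';
SHOW VARIABLES LIKE 'table_cache';
```

Comparing each status counter against its matching variable over time is the basic loop: change one setting, watch the counter, repeat.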
Re: Distinct select over 2 fields?
You could try using CONCAT: select distinct(CONCAT(lat, long)) from table where ... Steve Musumeche CIO, Internet Retail Connection [EMAIL PROTECTED] Brian Dunning wrote: Lat and lon are two different fields. Either can be duplicated, but not both. [snip]
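For the archives: DISTINCT is not a function that takes a single column - it applies to the whole select list, so it already handles two fields and no CONCAT is needed. A sketch, assuming a hypothetical `geopoints` table with `lat` and `lon` columns:

```sql
-- DISTINCT covers the entire select list, so this returns
-- each unique (lat, lon) pair exactly once.
SELECT DISTINCT lat, lon
FROM geopoints;

-- Equivalent GROUP BY form, handy if you also want to know
-- how many records share each location.
SELECT lat, lon, COUNT(*) AS records_here
FROM geopoints
GROUP BY lat, lon;
```

The two-column form is also safer than CONCAT with no separator, which can make distinct pairs compare equal (e.g. lat=1, lon=23 and lat=12, lon=3 both concatenate to '123').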
query to find duplicate rows
Hi all, a tough query problem for me... I have a table with 2 columns that matter: url and id. If url and id are the same in 2 rows, then that's no good (bad data). I need to find all the rows that are duplicates. I can't think of how to approach the SQL for this... any pointers? Thanks! Peter -- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
Re: query to find duplicate rows
Select COUNT(*) as num_entries, url from table GROUP BY url HAVING num_entries > 1 Untested, but the concept should work for you. Steve Musumeche CIO, Internet Retail Connection [EMAIL PROTECTED] Peter Van Dijck wrote: Hi all, a tough query problem for me... [snip]
Re: query to find duplicate rows
[snip] Select COUNT(*) as num_entries, url from table GROUP BY url HAVING num_entries > 1 Untested, but the concept should work for you. That would mark rows that have different ids, but the same url... SELECT t1.id, t2.id, t1.url FROM table t1, table t2 WHERE t1.id = t2.id AND t1.url = t2.url Expensive if you have a lot of rows, but should work I think... if my memory is right :)
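A GROUP BY ... HAVING query avoids the self-join entirely - and also avoids the problem that, without a unique key to compare, every row in the self-join above matches itself. A sketch, assuming the table is called `links` (the name is a placeholder):

```sql
-- Any (url, id) combination that appears more than once
-- is exactly the "bad data" being hunted for.
SELECT url, id, COUNT(*) AS copies
FROM links
GROUP BY url, id
HAVING COUNT(*) > 1;
```

Grouping on both columns at once is what distinguishes this from the single-column version earlier in the thread, which flags duplicate urls even when the ids differ.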
Re: Restarting MySQL on Solaris 8?
- Original Message - From: Jay Paulson [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Friday, September 08, 2006 9:51 PM Subject: Restarting MySQL on Solaris 8? A couple of questions - since I'm not a Solaris person I really don't know how to do the following, and was hoping that someone could help me out (Google isn't much help on this). How does one start the MySQL daemon on Solaris 8? (It's running MySQL 3.23.49.) How does one tell Solaris 8 to start the MySQL daemon on boot? When you've got the first problem fixed: to have MySQL [or any application] restart at boot time, you need to copy the start script to /etc/rc3.d. Start scripts in the rc3.d dir all start with 'S' followed by a number, like S53mysql. As root, copy the /usr/local/mysql/support-files/mysql.server script to /etc/rc3.d, and rename it as, say, S53mysql. This should fix the boot problem. ~mm Lastly, tried running /usr/local/bin/safe_mysqld but got the following error: TIMESTAMP mysqld ended Thanks for any help!
RE: Getting next Birthdays
http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html Read the comments at the bottom. DÆVID -Original Message- From: ESV Media GmbH [mailto:[EMAIL PROTECTED] Sent: Wednesday, September 06, 2006 8:05 AM To: mysql@lists.mysql.com Subject: Getting next Birthdays Hello MySQL users, I need your help with the following query. I want to get the next birthdays of my friends, but not only the ones of this year. So when I'm in December (or November) I want to see the birthdays of next year too. With my query I only see the birthdays of this year: SELECT SQL_CACHE DISTINCT fname, lname, mem.mem_id AS p_id, DATE_FORMAT(mem.birthday,'%d.%m') AS geburtstag, DATEDIFF(DATE_FORMAT(mem.birthday,CONCAT(DATE_FORMAT(NOW(),'%Y'),'-%m-%d')),DATE_FORMAT(NOW(),'%Y-%m-%d')) AS tage FROM members mem INNER JOIN network n ON ( ( n.mid = 1 AND n.action = 1 AND n.bid = mem.mem_id ) OR ( n.bid = 1 AND n.action = 1 AND n.mid = mem.mem_id ) ) WHERE DATE_SUB(DATE_FORMAT(NOW(),'%Y-%m-%d'), INTERVAL 1 DAY) <= DATE_FORMAT(mem.birthday,CONCAT(DATE_FORMAT(NOW(),'%Y'),'-%m-%d')) ORDER BY tage Thx a lot!!! Cheers Marco spacemarc wrote: 2006/9/6, Miles Thompson [EMAIL PROTECTED]: First of all - please reply to the list. I thought you wanted all fields; that's the way your SELECT statement is constructed. If just the first 20 chars from fieldA, then SELECT LEFT(fieldA, 20) AS fieldA FROM Tab1 should do what you want. Again, check the syntax for LEFT() in the MySQL docs. The syntax is valid: from http://dev.mysql.com/doc/refman/5.0/en/string-functions.html: LEFT(str,len) returns the leftmost len characters from the string str, or NULL if any argument is NULL. mysql> SELECT LEFT('foobarbar', 5); -> 'fooba' I have 30 fields: instead of writing all 30 fields, I want to select them all with * and, for only one of them, fieldA, obtain the first 20 chars: is that possible?
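One common way around the year-wrap problem in the query above is to compute days-until-next-birthday with a modulo, so late-December birthdays still sort to the top when viewed in January. A sketch against the same `members` table (it ignores leap years, so treat it as an approximation rather than a drop-in fix):

```sql
SELECT fname, lname,
       DATE_FORMAT(mem.birthday, '%d.%m') AS geburtstag,
       -- DATEDIFF is negative once this year's birthday has passed;
       -- adding 365 and taking the modulo wraps it into next year.
       (365 + DATEDIFF(
           DATE_FORMAT(mem.birthday, CONCAT(YEAR(CURDATE()), '-%m-%d')),
           CURDATE()
       )) % 365 AS tage
FROM members mem
ORDER BY tage;
```

A birthday 30 days ago yields DATEDIFF = -30, and (365 - 30) % 365 = 335 days until the next one, which sorts after every upcoming birthday as intended.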
What's the best way to INSERT a long list of email addresses and names into a TABLE?
Hi, I have a long list of email addresses and names in a text file in CSV format, and I want to know the best approach to quickly INSERT them into my customers TABLE. Initially I thought it would be easy to just add SQL INSERTs to the database dump, but the problem is I have to manually add some fields, and that can be a tedious and slow process. Is there a better approach? This is the table structure: CREATE TABLE `customers` ( `customers_id` int(11) NOT NULL auto_increment, `purchased_without_account` tinyint(1) unsigned NOT NULL default '0', `customers_gender` char(1) NOT NULL default '', `customers_firstname` varchar(32) NOT NULL default '', `customers_lastname` varchar(32) NOT NULL default '', `customers_dob` datetime NOT NULL default '0000-00-00 00:00:00', `customers_email_address` varchar(96) NOT NULL default '', `customers_default_address_id` int(11) default NULL, `customers_telephone` varchar(32) NOT NULL default '', `customers_fax` varchar(32) default NULL, `customers_password` varchar(40) NOT NULL default '', `customers_newsletter` char(1) default NULL, `customers_group_name` varchar(27) NOT NULL default 'Retail', `customers_group_id` int(11) NOT NULL default '0', PRIMARY KEY (`customers_id`), KEY `purchased_without_account` (`purchased_without_account`) ) TYPE=MyISAM AUTO_INCREMENT=60 ; I only have Name and Email in the text file. Sincerely, Axis
RE: What's the best way to INSERT a long list of email addresses and names into a TABLE?
Presuming your csv is set up in the structure of your table, this may help: BULK INSERT tablename FROM 'C:\filelocation\my_data.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) I am going through a SQL class and this is the method we are using to bulk insert into our tables. Works like a champ. Don -Original Message- From: axis [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 12, 2006 9:10 PM To: [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Subject: What's the best way to INSERT a long list of email addresses and names into a TABLE? [snip]
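A caveat for the archives: BULK INSERT is Microsoft SQL Server syntax and does not exist in MySQL. The MySQL counterpart is LOAD DATA INFILE. A sketch, assuming the CSV holds name,email lines and that the remaining columns should keep their defaults; the file path and the SET values are placeholders, and the column-list-plus-SET form needs MySQL 5.0.3 or later:

```sql
LOAD DATA LOCAL INFILE '/path/to/names_and_emails.csv'
INTO TABLE customers
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
-- map the two CSV fields onto the matching table columns;
-- everything else falls back to the column defaults
(customers_firstname, customers_email_address)
SET customers_group_name = 'Retail',
    customers_group_id = 0;
```

LOAD DATA is typically far faster than a file of individual INSERT statements, which is exactly what the original poster was trying to avoid hand-editing.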
RE: What's the best way to INSERT a long list of email addresses and names into a TABLE?
I don't know; never tried it that way. From: axis [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 12, 2006 9:37 PM To: [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Subject: RE: What's the best way to INSERT a long list of email addresses and names into a TABLE? Thank you for your quick reply. Does FROM only work with absolute paths, or does it work with '../../Customer_Data.csv'? Axis On 9/12/2006 10:22:01 PM, Don ([EMAIL PROTECTED]) wrote: Presuming your csv is set up in the structure of your table, this may help: BULK INSERT tablename FROM 'C:\filelocation\my_data.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) [snip]
Strange select problem
Dear all, I've encountered a very strange problem. I dumped one table from a database and then imported it into another database. It works successfully - all the data is exactly the same. But when I run the SQL below on the two tables: select * from foo_table ORDER BY id DESC it shows different results. All results are truly sorted by id, but some of them appear in a different order. Can someone help me with this problem? Any suggestion is extremely welcome. Thanks in advance. -- Cheng-Lin Yang Sun Certified Java Programmer High Speed Network Group Lab (HSNG) Institute of Computer Science Info. Engineering, National Chung Cheng University, Taiwan E-mail: [EMAIL PROTECTED]
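One likely explanation: if several rows share the same id, ORDER BY id DESC leaves their relative order undefined, and the two servers are free to return the tied rows differently (with MyISAM, rows often come back in on-disk order, which a dump-and-reload changes). Adding a tie-breaking column makes the order fully deterministic; `other_col` here is a placeholder for any column that distinguishes the tied rows:

```sql
-- Break ties on a second column so both servers
-- return the rows in exactly the same order.
SELECT *
FROM foo_table
ORDER BY id DESC, other_col ASC;
```

If the results still differ with a unique tie-breaker in place, the data itself differs and a checksum comparison of the two tables would be the next step.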