Re: inserting client time instead of server time
On Tue, May 13, 2008 at 8:25 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:

> is there a way or a function like NOW() except it returns the client
> time and not the server time?

On Linux you could do something like:

mysql -e "INSERT INTO tablex (..., clientside_timestamp, ...) VALUES (..., '$(date +'%F %H:%M:%S')', ...)" dbname

Regards,
Wieland

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Data Modelling
What is the reason for creating main_dup? If you're thinking of taking a backup of all the changes from the main table, then the trigger will also have to wait until the locks on the main table are released. The trigger would just add to the lock load on the machine.

regards
anandkl

On 5/13/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

> Hi,
>
> Generally, in data modelling there are some independent tables, user-related tables, master tables, and one main table which is related to most of the tables. Almost all queries involve the main table to fetch the `id`, so the main table is used the most. Sometimes the main table gets locked by another query, and when the hits on the database server increase, the locking time increases too. Is there any way to reduce the locking time of the main table?
>
> For example, a main_dup table can be created, and a trigger can be used to reflect all changes from the main table to main_dup. Then main_dup can be used to reduce the locking time on the table. That is my idea. Is there any other way to reduce the locking period?
>
> --
> Krishna Chandra Prajapati
> MySQL DBA, Ed Ventures e-Learning Pvt. Ltd.
> 1-8-303/48/15, Sindhi Colony, P.G. Road, Secunderabad. Pin Code: 53
> Office Number: 040-66489771 Mob: 9912924044
> URL: ed-ventures-online.com Email-id: [EMAIL PROTECTED]
count from 3 tables
Hi,

I have 3 tables which I want to count records from. The tables are users, phonebooks, and groups. I want to count how many phonebooks and groups each user has. Currently I am trying this query:

SELECT users.username, COUNT( phonebooks.key_user ) AS pb, COUNT( groups.key_user ) AS gp
FROM users
LEFT JOIN ( phonebooks, groups )
  ON ( phonebooks.key_user = users.key_user AND groups.key_user = users.key_user )
GROUP BY users.username

But the result was wrong. Does anybody know why it's wrong and how to solve it?

TIA
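Joining both child tables in one step produces a row for every phonebook/group combination per user, so each COUNT() gets multiplied by the other table's row count. One common fix, sketched below with the table and column names taken from the question, is to count each table in its own derived table so the joins cannot inflate each other:

```sql
-- Count each child table separately, then join the per-user totals back.
-- COALESCE turns "no rows" (NULL from the LEFT JOIN) into 0.
SELECT u.username,
       COALESCE(p.pb, 0) AS pb,
       COALESCE(g.gp, 0) AS gp
FROM users u
LEFT JOIN (SELECT key_user, COUNT(*) AS pb
           FROM phonebooks
           GROUP BY key_user) p ON p.key_user = u.key_user
LEFT JOIN (SELECT key_user, COUNT(*) AS gp
           FROM groups
           GROUP BY key_user) g ON g.key_user = u.key_user;
```

Subqueries in the FROM clause require MySQL 4.1 or later.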
Table Structure
Hi all,

Below is the user_delivery table structure:

CREATE TABLE `user_delivery` (
  `user_id` decimal(22,0) NOT NULL default '0',
  `delivery_id` decimal(22,0) NOT NULL default '0',
  `send_to_regulator` char(1) default NULL,
  PRIMARY KEY (`user_id`),
  KEY `user_delivery_comp1` (`user_id`,`delivery_id`),
  CONSTRAINT `fk_user_delivery` FOREIGN KEY (`user_id`) REFERENCES `user_info` (`user_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

In my opinion the user_delivery_comp1 index can be dropped and a new index can be created on the delivery_id column. I would like to know whether these changes will work. Your suggestions regarding this table structure are welcome.

Thanks,
--
Krishna Chandra Prajapati
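Since `user_id` is already the primary key, the composite index whose leading column is `user_id` adds little for lookups by `user_id` alone; it only helps queries that filter on both columns. Assuming lookups by `delivery_id` alone are what matter, the proposed change could be sketched as:

```sql
-- Drop the index whose leading column duplicates the primary key,
-- and index delivery_id for lookups by that column alone.
-- The new index name is hypothetical.
ALTER TABLE user_delivery
  DROP KEY user_delivery_comp1,
  ADD KEY idx_delivery_id (delivery_id);
```

Worth verifying first (e.g. with the slow query log or EXPLAIN) that no frequent query actually relies on the (user_id, delivery_id) combination as a covering index.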
Re: Data Modelling
Hi,

MyISAM is being used on the production server, and it applies table-level locking. From the mytop view I see that the table gets locked very frequently, for 5 to 10 seconds. Reports are generated every day, so they scan billions of rows (1 year's data). We will be changing to InnoDB soon and optimising queries as well. What else can be done as a solution for the time being?

Thanks,
Krishna Chandra Prajapati

On Tue, May 13, 2008 at 1:14 PM, Ben Clewett [EMAIL PROTECTED] wrote:

> If you use InnoDB you should not have a problem, as it uses row-level locking and isolated transactions. Other than that, you can split your tables into smaller ones using either partitioning or the federated engine...
>
> Ben
Re: Data Modelling
Hi,

I am looking for a solution to reduce the table locks, as mytop shows that the table gets locked very frequently during report generation.

Thanks,
Prajapati

On Tue, May 13, 2008 at 1:10 PM, Ananda Kumar [EMAIL PROTECTED] wrote:

> What is the reason for creating main_dup? If you're thinking of taking a backup of all the changes from the main table, then the trigger will also have to wait until the locks on the main table are released. The trigger would just add to the lock load on the machine.
>
> regards
> anandkl

--
Krishna Chandra Prajapati
MySQL DBA, Ed Ventures e-Learning Pvt. Ltd.
1-8-303/48/15, Sindhi Colony, P.G. Road, Secunderabad. Pin Code: 53
Office Number: 040-66489771 Mob: 9912924044
URL: ed-ventures-online.com Email-id: [EMAIL PROTECTED]
Re: latin1 vs UTF-8
> > I assume I should check if my mysql has support for UTF-8. I believe it just has to be 4.1 or newer.
>
> And that's only necessary so you can get UTF-8-aware sorting and such. You don't need any special support to just _store_ UTF-8 data.

Ah, that's actually the critical part. I'm generating the data myself through PHP, but I'm getting a weird ^A character when I try to print it out in a textarea field. I'm trying to figure out if there's some weird interaction with htmlentities that's causing it to be displayed strangely. Can I trust that mysql is displaying the text correctly in the command-line tool if I have 4.1, even if the charset is set to latin1? Are there any caveats to using htmlentities that I'm missing?

Essentially I'm creating a form with a textarea that allows people to enter values; then they can reload the form with the textarea pre-filled for the id they stored it under.

Waynn
Usefulness of mysql logs when using innodb?
Hi,

I guess that when I'm using only InnoDB and no replication, I can safely disable MySQL's (bin-)log files (which grow without end), because InnoDB has its own log files. Is that correct?

Thanks,
Nico
Re: index creation taking too much time
Hi anand,

PRIMARY KEY (`id`),
KEY `KI_IDX_0805090456` (`words`,`id`),
KEY `CI_IDX_0805090456` (`lf_id`)

Since id is the primary key, why is id being indexed again in (`words`,`id`)? It will be a duplicate index on id. Also, words is a varchar column, so instead of indexing the full value, restrict it to a prefix, e.g. `words(15)`. Try that.

The key buffer also appears fully used; there are no free key blocks, which can be a reason:

Key_blocks_unused 0

On Mon, May 12, 2008 at 6:48 PM, Ananda Kumar [EMAIL PROTECTED] wrote:

> Hi All,
>
> We have a table which is around 100 million rows. It's a MyISAM table, but the db default is InnoDB.
>
> CREATE TABLE `dc_data` (
>   `id` decimal(22,0) NOT NULL,
>   `words` varchar(255) NOT NULL,
>   `lf_id` decimal(22,0) NOT NULL,
>   `occurence` bigint(20) NOT NULL,
>   `date_modified` timestamp NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8
>
> The indexes are as below:
>
> PRIMARY KEY (`id`),
> KEY `KI_IDX_0805090456` (`words`,`id`),
> KEY `CI_IDX_0805090456` (`lf_id`)
>
> We have 8 CPUs and 8 GB RAM. We set the parameters below at session level:
>
> myisam_sort_buffer_size=300MB
> myisam_max_sort_file_size=10GB
>
> Each index creation is taking 10 hrs. Is there any way I can speed up index creation?
>
> regards
> anandkl

--
Krishna Chandra Prajapati
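The prefix-index suggestion above could be sketched as follows. The replacement index name is hypothetical, and whether a 15-character prefix keeps enough selectivity has to be verified against the real data first:

```sql
-- Check how much selectivity a 15-character prefix would retain:
SELECT COUNT(DISTINCT LEFT(words, 15)) / COUNT(DISTINCT words) AS ratio
FROM dc_data;

-- If the ratio is close to 1, a prefix index is a much smaller
-- (and much faster to build) substitute for the full-column index:
ALTER TABLE dc_data
  DROP KEY KI_IDX_0805090456,
  ADD KEY idx_words_prefix (words(15));
```

A smaller index means fewer key blocks to sort and write, which is where most of the 10 hours is likely going.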
Re: Data Modelling
Table-level locking is inherent to MyISAM. Look into partitioning, as this breaks a table into two or more other tables which will lock separately. Or use InnoDB: ALTER TABLE ... ENGINE=InnoDB; (I think)

Ben

Krishna Chandra Prajapati wrote:

> Hi,
>
> I am looking for a solution to reduce the table locks, as mytop shows that the table gets locked very frequently during report generation.
>
> Thanks,
> Prajapati
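The engine conversion mentioned above can be sketched as below; the table name is a placeholder for the "main table" from the thread. Note that on MySQL 5.0 this statement rebuilds the whole table, so on a large table it should be run in a maintenance window:

```sql
-- Convert the table to InnoDB to get row-level locking.
-- This copies and rebuilds the entire table; plan for downtime.
ALTER TABLE main ENGINE=InnoDB;

-- Confirm the engine afterwards:
SHOW TABLE STATUS LIKE 'main';
```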
Re: index creation taking too much time
Hi Krishna,

How do I make my index get more key blocks?

On 5/13/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

> Hi anand,
>
> [...] words is a varchar type, so instead of creating a full index, restrict it to a prefix, e.g. words(15). Try that.
>
> There are no free key blocks. This can be the reason:
>
> Key_blocks_unused 0
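If the key cache really is exhausted (Key_blocks_unused at 0), one option is to enlarge it. A sketch, where the 512 MB figure is only an illustration and must fit within available RAM alongside everything else (changing it requires the SUPER privilege):

```sql
-- Enlarge the MyISAM key cache so more index blocks fit in memory.
SET GLOBAL key_buffer_size = 512 * 1024 * 1024;

-- Watch whether free blocks appear under load:
SHOW STATUS LIKE 'Key_blocks_unused';
SHOW STATUS LIKE 'Key_read%';
```

A persistent change belongs in my.cnf as well, since SET GLOBAL does not survive a server restart.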
Re: Data Modelling
Thanks a lot.

On Tue, May 13, 2008 at 4:30 PM, Ben Clewett [EMAIL PROTECTED] wrote:

> Table-level locking is inherent to MyISAM. Look into partitioning, as this breaks a table into two or more other tables which will lock separately. Or use InnoDB: ALTER TABLE ... ENGINE=InnoDB; (I think)
>
> Ben

--
Krishna Chandra Prajapati
MySQL DBA, Ed Ventures e-Learning Pvt. Ltd.
1-8-303/48/15, Sindhi Colony, P.G. Road, Secunderabad. Pin Code: 53
Office Number: 040-66489771 Mob: 9912924044
URL: ed-ventures-online.com Email-id: [EMAIL PROTECTED]
Re: latin1 vs UTF-8
Waynn Lue wrote:

> I'm getting a weird ^A character when I try to print it out in a textarea field.

In that case, what character set does the browser think it should be using for the page? If you don't explicitly declare it, the browser has to guess, and you know what happens when you rely on a stupid computer to do thinking a human should have done instead. You can declare it for all pages on a site in your web server configuration (it gets sent with the HTTP headers), in the equivalent meta tag, or in an <?xml tag if you're using XHTML.

> I'm trying to figure out if there's some weird interaction between htmlentities that's causing it to be displayed strangely.

To debug problems like this, I recommend studying hex dumps of the relevant data at every stage along the path:

1. echo 'query' | mysql --user=fred --password=barney mydb | hexdump
2. write a hex dump of the query results to a PHP debug log
3. packet-capture the HTTP reply containing the finished page
4. in the browser, save the web page to disk and run it through a hexdump tool

You'll find that a) the data isn't stored correctly in the database; or b) it's being translated to another character set along the way (it happens!); or c) the browser is misinterpreting it because you didn't tell it what it needs to know to display it correctly.

> Can I trust that mysql is displaying the text correctly on the command line tool if I have 4.1, even if the charset is set to latin1?

Unless you're on a very old or strangely configured system, your terminal is probably configured for UTF-8. Since your DB is in Latin-1, there's a character set translation in there, and I can't confidently predict what will happen. In this modern world, it's best to use some form of Unicode everywhere possible. Then the worst you have to deal with is conversion among encodings, which is annoying but lossless.
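One stage worth checking on the MySQL side is which character set the connection itself negotiates, since a mismatch there causes exactly the kind of silent translation described above. A sketch:

```sql
-- Show the character sets in play for this connection:
-- client, connection, results, server, and database.
SHOW VARIABLES LIKE 'character_set%';

-- If the application sends UTF-8 but the connection defaults to latin1,
-- declaring the client character set up front avoids a silent conversion:
SET NAMES utf8;
```

SET NAMES is available from MySQL 4.1, the same version that introduced per-connection character set handling.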
Re: Help needed
Hi Jerry,

Thanks for the tip. What is a UUID based on? Is it an ID associated with the motherboard, CPU, or hard disk?

Thanks.

Regards,
Velen

----- Original Message -----
From: Jerry Schwartz [EMAIL PROTECTED]
To: 'Garris, Nicole' [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Monday, May 12, 2008 11:43 PM
Subject: RE: Help needed

> Your best bet would be to use a UUID:
>
> SELECT UUID();
> +--------------------------------------+
> | uuid()                               |
> +--------------------------------------+
> | c712dc72-718d-102b-b3c8-97395a1766b9 |
> +--------------------------------------+
>
> There are equivalents in various programming languages, although not in PHP.
>
> Regards,
> Jerry Schwartz
> The Infoshop by Global Information Incorporated

> -----Original Message-----
> From: Garris, Nicole
> Subject: RE: Help needed
>
> Sorry, you can't prove uniqueness by running it against a hundred thousand, million, or even a billion computers. (The billionth-and-first computer could be the one with the non-unique ID.) You need a mathematical proof to prove uniqueness.

> -----Original Message-----
> From: Velen
> Subject: Re: Help needed
>
> Hi Arthur,
>
> Could you please tell this guy that it was not a virus and you tested it without any problem?
>
> Thanks.
> Velen
>
> On Sat, 2008-05-10 at 20:59 +0400, Velen wrote:
> > Hi, I'm testing a program and I need your assistance. Please unzip the file at http://www.biz-mu.com/PCID.zip and run the program.
>
> Please supply a Linux version of your virus so I can test.
>
> --
> Daniel Kasak
> IT Developer, NUS Consulting Group
> Level 5, 77 Pacific Highway, North Sydney, NSW, Australia 2060
> T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
> website: http://www.nusconsulting.com.au

> ----- Original Message -----
> From: Arthur Fuller
> To: Velen
> Sent: Saturday, May 10, 2008 10:44 PM
> Subject: Re: Help needed
>
> On my main machine the PCID is 135184-45-4-10-1513-1. Hope it helps.
>
> Arthur
>
> On Sat, May 10, 2008 at 12:59 PM, Velen [EMAIL PROTECTED] wrote:
> > Hi, I'm testing a program and I need your assistance. Please unzip the file at http://www.biz-mu.com/PCID.zip and run the program. It will display an ID; please mail me back the ID. If you can run it on several computers, it will be even better for me; I need as many results as possible. The program I'm testing is supposed to create a unique ID for each PC, which is why I need to test whether it is really unique. This file is virus free and cannot do any harm to your PC. It is, however, voluntary testing.
> >
> > Thanks for your help in advance.
> > Velen
Re: connectors: per session persistent connection (PHP)
On May 9, 2008, at 12:17 AM, Sebastian Mendel wrote:

> Paul DuBois schrieb:
> > On May 7, 2008, at 4:36 AM, Sebastian Mendel wrote:
> > > Hi, wouldn't it be very helpful if mysql connectors supported some sort of per-session persistent connection? This would save a lot of queries in many apps, for example SET NAMES, setting variables, creating temporary tables.
> >
> > How would a persistent connection save any of that? Suppose the script that previously used the connection reset any or all of those things?
>
> i do not fully understand, why should the script reset these things?

It doesn't have to reset any of those things, but you are *assuming* that no other script *will* reset any of those things. As far as I understand what you want to do, that's an invalid assumption. How do you know what some other script might want to do? Perhaps it wants a different character set than you do.

> i do not talk of a global persistent connection, every session should have its own persistent connection (if requested).

A session is a connection. When the connection ends, the session ends.

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
www.mysql.com
RE: latin1 vs UTF-8
I used not only the PHP functions but also the native MySQL HEX() function to see exactly what was going into my database. You can also set always_populate_raw_post_data = On in php.ini and examine $HTTP_RAW_POST_DATA to see exactly what the web server is seeing.

Regards,
Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
www.the-infoshop.com www.giiexpress.com www.etudes-marche.com

-----Original Message-----
From: Warren Young [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 13, 2008 8:53 AM
To: MySQL List
Subject: Re: latin1 vs UTF-8

> To debug problems like this, I recommend studying hex dumps of the relevant data at every stage along the path. You'll find that a) the data isn't stored correctly in the database; or b) it's being translated to another character set along the way (it happens!); or c) the browser is misinterpreting it because you didn't tell it what it needs to know to display it correctly.
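The HEX() check mentioned above can be done straight from the mysql client. A sketch, where the table and column names are placeholders:

```sql
-- Inspect the raw bytes actually stored, bypassing any display-time
-- character set conversion. `notes` and `body` are placeholder names.
SELECT id, body, HEX(body) FROM notes WHERE id = 42;

-- As a reading aid: 'é' stored correctly as UTF-8 shows up as C3A9,
-- while the same character double-encoded shows up as C383C2A9.
```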
how to escape new lines using select * into outfile
Hi all,

I want to export values based on a condition, and I export them as CSV:

select * into outfile 'filename.csv' from table;

The problem is that one field in the table spans many lines, i.e. it contains \n characters, so I couldn't import the values into the xls sheet. Any ideas how to escape the newline characters?

Thanks,
Saravanan
RE: Help needed
I suggest you check out http://en.wikipedia.org/wiki/UUID. It seems to have a pretty current, but not overwhelming, explanation.

Originally the starting point for a UUID was the MAC (hardware) address of the network interface. Theoretically, there should only be one device with any particular MAC address in the whole world. There are exceptions, but they would be mistakes. For your purposes, you can assume that it is a unique string of 32 hexadecimal digits.

Regards,
Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
www.the-infoshop.com www.giiexpress.com www.etudes-marche.com

-----Original Message-----
From: Velen [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 13, 2008 11:28 AM
To: Jerry Schwartz
Cc: mysql@lists.mysql.com
Subject: Re: Help needed

> Hi Jerry,
>
> Thanks for the tip. What is a UUID based on? Is it an ID associated with the motherboard, CPU, or hard disk?
>
> Thanks.
>
> Regards,
> Velen
RE: how to escape new lines using select * into outfile
So far as I've been able to figure out, there's no form of escaping that will convince Excel to do what you want. The only thing I can think of is to replace * with a list of fields, and use MySQL functions to replace the \n with something else.

Regards,
Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
www.the-infoshop.com www.giiexpress.com www.etudes-marche.com

-----Original Message-----
From: Saravanan [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 13, 2008 1:26 PM
To: mysql
Subject: how to escape new lines using select * into outfile

> Hi all, I want to export values based on a condition, and I export them as CSV: select * into outfile 'filename.csv' from table; The problem is that one field in the table contains \n characters, so I couldn't import the values into the xls sheet. Any ideas how to escape the newline characters?
>
> Thanks,
> Saravanan
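The field-list-plus-REPLACE approach suggested above could look like the sketch below. The table and column names are placeholders, and replacing newlines with a space is an arbitrary choice:

```sql
-- Flatten embedded newlines before export so Excel sees one row per record.
-- `articles`, `title`, and `body` are placeholder names.
SELECT id,
       title,
       REPLACE(REPLACE(body, '\r', ''), '\n', ' ') AS body
INTO OUTFILE '/tmp/filename.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM articles;
```

The inner REPLACE strips carriage returns as well, since Windows-originated text often contains \r\n pairs and stripping only \n would leave stray \r bytes in the output.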
Re: how to escape new lines using select * into outfile
Try using:

select * into outfile 'filename.csv' fields terminated by ',' enclosed by '"' lines terminated by '\n' from table

Regards,
Velen

----- Original Message -----
From: Saravanan [EMAIL PROTECTED]
To: mysql mysql@lists.mysql.com
Sent: Tuesday, May 13, 2008 9:26 PM
Subject: how to escape new lines using select * into outfile

> Hi all, I want to export values based on a condition, and I export them as CSV: select * into outfile 'filename.csv' from table; The problem is that one field in the table contains \n characters, so I couldn't import the values into the xls sheet. Any ideas how to escape the newline characters?
>
> Thanks,
> Saravanan
Re: how to escape new lines using select * into outfile
Thanks Velen. But I want to escape the newlines within the fields. It is annoying: the newlines move the values onto separate lines of the Excel sheet.

Thanks,
Saravanan

--- On Wed, 5/14/08, Velen [EMAIL PROTECTED] wrote:

> Try using:
>
> select * into outfile 'filename.csv' fields terminated by ',' enclosed by '"' lines terminated by '\n' from table
>
> Regards,
> Velen
Timestamps replicating inconsistently depending on local timezone of server?
Hi, can anyone explain MySQL 5.0.54's handling of replication between two servers with inconsistent /etc/localtime (but the same real UTC clock time)?

On one server I inserted a new row with created_at and updated_at set to the same time. That server's localtime is GMT+1; the created_at column is a date and the updated_at column is a timestamp.

This replicated to the other server, which had localtime set to GMT. After it replicated, I changed that server's localtime to GMT+1, and I find that created_at is no longer the same as updated_at - now updated_at is 1 hour later...

I had naively assumed that dates would always be stored as UTC in the database and that the only effect of localtime would be for display purposes? Can anyone shed some light on what's happening here, please?

Thanks

Ed W
Re: Timestamps replicating inconsistently depending on local timezone of server?
On Tue, May 13, 2008 at 2:07 PM, Ed W [EMAIL PROTECTED] wrote:

 I had naively assumed that dates would always be stored in UTC in the database and the only effect of localtime would be for display purposes? Can anyone shed some light on what's happening here please?

TIMESTAMP values are converted from the current time zone to UTC for storage, and converted back from UTC to the current time zone for retrieval. (This occurs only for the TIMESTAMP data type, not for other types such as DATETIME.)

http://dev.mysql.com/doc/refman/5.1/en/timestamp.html

--
Rob Wultsch
[EMAIL PROTECTED]
wultsch (aim)
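The conversion Rob quotes can be observed directly by changing the session time zone between insert and select (a sketch with a throwaway table; column names are illustrative):

```sql
CREATE TABLE tz_demo (dt DATETIME, ts TIMESTAMP);

SET time_zone = '+00:00';
INSERT INTO tz_demo VALUES ('2008-05-13 12:00:00', '2008-05-13 12:00:00');

SET time_zone = '+01:00';
SELECT * FROM tz_demo;
-- dt still reads 12:00:00 (DATETIME is stored literally, untouched),
-- while ts now reads 13:00:00 (stored as UTC, redisplayed in +01:00).
```

This is exactly the asymmetry Ed saw: the DATE/DATETIME column replicated as a literal value, while the TIMESTAMP column was reinterpreted through each server's time zone.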
How To run Multiple Storage Engines: CSV, InnoDB...
I'm running MySQL 5.0.22 on CentOS 5.1 (with all current patches applied) and tried to define a table 'testcsv' using ENGINE = CSV. I couldn't find a *.CSV file in the data directory. After looking it up in MySQL, Third Edition (by Paul DuBois), it turns out that I'm not running the CSV engine: my testcsv table is using the default MyISAM engine. See what 'show engines' says:

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 11 to server version: 5.0.22

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show engines;
+------------+---------+----------------------------------------------------------------+
| Engine     | Support | Comment                                                        |
+------------+---------+----------------------------------------------------------------+
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         |
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      |
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys     |
| BerkeleyDB | YES     | Supports transactions and page-level locking                   |
| BLACKHOLE  | NO      | /dev/null storage engine (anything you write to it disappears) |
| EXAMPLE    | NO      | Example storage engine                                         |
| ARCHIVE    | NO      | Archive storage engine                                         |
| CSV        | NO      | CSV storage engine                                             |
| ndbcluster | NO      | Clustered, fault-tolerant, memory-based tables                 |
| FEDERATED  | NO      | Federated MySQL storage engine                                 |
| MRG_MYISAM | YES     | Collection of identical MyISAM tables                          |
| ISAM       | NO      | Obsolete storage engine                                        |
+------------+---------+----------------------------------------------------------------+
12 rows in set (0.01 sec)

I'm not sure whether a 'YES' in the 'Support' column above means the engine is started at runtime, or that the MySQL server was compiled with support for that engine. In any case, I want to be able to use the CSV engine in addition to MyISAM, MEMORY, InnoDB, BerkeleyDB and MRG_MYISAM. How do I activate the CSV engine?

Thanks

Bob Cochran
Greenbelt, Maryland, USA
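For what it's worth: in MySQL 5.0, 'NO' in SHOW ENGINES means the engine was not compiled into that binary at all, so it cannot be enabled at runtime; a source build configured with --with-csv-storage-engine (or a binary distribution that includes CSV) is needed. On a binary that does include it, a quick check might look like this (table and column names are illustrative):

```sql
-- 'NO' = not compiled in; 'YES'/'DEFAULT' = usable in this binary.
SHOW ENGINES;

-- On a CSV-capable build this creates testcsv.CSV in the database's
-- data directory. CSV tables generally require NOT NULL columns.
CREATE TABLE testcsv (
  id  INT         NOT NULL,
  msg VARCHAR(50) NOT NULL
) ENGINE = CSV;

-- Verify the engine actually used, rather than trusting the CREATE:
-- older servers may silently substitute the default engine when the
-- requested one is unavailable.
SHOW CREATE TABLE testcsv;
```

Checking SHOW CREATE TABLE (or SHOW TABLE STATUS) after creation is the reliable way to catch the silent MyISAM fallback Bob ran into.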
inserting client time instead of server time
Hi,

is there a way, or a function like NOW(), that returns the client time rather than the server time?

--
Sebastian Mendel
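There is no built-in "client NOW()": NOW() is always evaluated on the server. The usual workarounds are to compute the timestamp in the client and send it as a value, or to shift the server's clock into the client's zone with CONVERT_TZ(). A sketch (the `events` table, `created_at` column, and the '+05:30' client zone are all hypothetical examples):

```sql
-- Workaround 1: the client computes the timestamp and sends it as data.
-- From an application this would be a bound parameter; from the mysql
-- prompt, a user variable stands in for the client-supplied value:
SET @client_now = '2008-05-13 08:25:00';  -- produced by the client's clock
INSERT INTO events (created_at) VALUES (@client_now);

-- Workaround 2: if client and server differ only by time zone, derive
-- the client time from UTC on the server side (offset arguments avoid
-- needing the time-zone tables loaded):
INSERT INTO events (created_at)
VALUES (CONVERT_TZ(UTC_TIMESTAMP(), '+00:00', '+05:30'));
```

Workaround 1 is the robust one, since only the client actually knows its own clock.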
Data Modelling
Hi,

Generally, in data modelling there are some independent tables, user-related tables, master tables, and one main table that is related to most of the other tables. Almost every query involves the main table to fetch the `id`, so the main table is the most heavily used. Sometimes the main table gets locked by another query, and as the hits on the database server increase, the locking time increases too.

Is there any way to reduce the locking time on the main table? For example, a main_dup table could be created, with a trigger reflecting all changes from the main table into main_dup, so that main_dup can be used to reduce the lock contention on the main table. That is my idea. Is there any other way to reduce the locking period?

--
Krishna Chandra Prajapati
MySQL DBA, Ed Ventures e-Learning Pvt. Ltd.
1-8-303/48/15, Sindhi Colony
P.G. Road, Secunderabad. Pin Code: 53
Office Number: 040-66489771
Mob: 9912924044
URL: ed-ventures-online.com
Email-id: [EMAIL PROTECTED]
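Concretely, the main_dup idea might be sketched like this (the `main` table's columns `id` and `data` are hypothetical stand-ins). Note the caveat raised elsewhere in the thread: the trigger fires as part of the same statement, so it inherits the locks and load on `main` rather than avoiding them.

```sql
-- Shadow copy of the hot table, same structure:
CREATE TABLE main_dup LIKE main;

-- Mirror inserts into the shadow copy. An UPDATE/DELETE pair of
-- triggers would be needed as well to keep it fully in sync.
CREATE TRIGGER main_ai AFTER INSERT ON main
FOR EACH ROW
  INSERT INTO main_dup (id, data) VALUES (NEW.id, NEW.data);
```

Because the trigger executes synchronously inside the writing statement, this doubles write work; it only helps if reads are then redirected to main_dup.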
Re: index creation taking too much time
| Handler_commit                    | 25802690     |
| Handler_delete                    | 100          |
| Handler_discover                  | 0            |
| Handler_prepare                   | 10370014     |
| Handler_read_first                | 88920        |
| Handler_read_key                  | 496940874    |
| Handler_read_next                 | 664869434    |
| Handler_read_prev                 | 0            |
| Handler_read_rnd                  | 29330217     |
| Handler_read_rnd_next             | 3285192105   |
| Handler_rollback                  | 31076        |
| Handler_savepoint                 | 0            |
| Handler_savepoint_rollback        | 0            |
| Handler_update                    | 343453532    |
| Handler_write                     | 1323617337   |
| Innodb_buffer_pool_pages_data     | 243487       |
| Innodb_buffer_pool_pages_dirty    | 47410        |
| Innodb_buffer_pool_pages_flushed  | 12373875     |
| Innodb_buffer_pool_pages_free     | 0            |
| Innodb_buffer_pool_pages_latched  | 8            |
| Innodb_buffer_pool_pages_misc     | 12513        |
| Innodb_buffer_pool_pages_total    | 256000       |
| Innodb_buffer_pool_read_ahead_rnd | 24087        |
| Innodb_buffer_pool_read_ahead_seq | 24761        |
| Innodb_buffer_pool_read_requests  | 4097964853   |
| Innodb_buffer_pool_reads          | 673174       |
| Innodb_buffer_pool_wait_free      | 0            |
| Innodb_buffer_pool_write_requests | 1522044932   |
| Innodb_data_fsyncs                | 559537       |
| Innodb_data_pending_fsyncs        | 0            |
| Innodb_data_pending_reads         | 0            |
| Innodb_data_pending_writes        | 0            |
| Innodb_data_read                  | 60568031232  |
| Innodb_data_reads                 | 1158787      |
| Innodb_data_writes                | 5265040      |
| Innodb_data_written               | 520279266304 |
| Innodb_dblwr_pages_written        | 12373875     |
| Innodb_dblwr_writes               | 165315       |
| Innodb_log_waits                  | 0            |
| Innodb_log_write_requests         | 273756463    |
| Innodb_log_writes                 | 154793       |
| Innodb_os_log_fsyncs              | 157558       |
| Innodb_os_log_pending_fsyncs      | 0            |
| Innodb_os_log_pending_writes      | 0            |
| Innodb_os_log_written             | 114805523968 |
| Innodb_page_size                  | 16384        |
| Innodb_pages_created              | 4941607      |
| Innodb_pages_read                 | 3696646      |
| Innodb_pages_written              | 12373875     |
| Innodb_row_lock_current_waits     | 0            |
| Innodb_row_lock_time              | 3302         |
| Innodb_row_lock_time_avg          | 16           |
| Innodb_row_lock_time_max          | 840          |
| Innodb_row_lock_waits             | 203          |
| Innodb_rows_deleted               | 313476       |
| Innodb_rows_inserted              | 533960321    |
| Innodb_rows_read                  | 2338647213   |
| Innodb_rows_updated               | 2294055      |
| Key_blocks_not_flushed            | 0            |
| Key_blocks_unused                 | 0            |
| Key_blocks_used                   | 1673854      |
| Key_read_requests                 | 10242450469  |
| Key_reads                         | 108256939    |
| Key_write_requests                | 1907823218   |
| Key_writes                        | 58522089     |
| Last_query_cost                   | 0.00         |
| Max_used_connections              | 102          |
| Ndb_cluster_node_id               | 0            |
| Ndb_config_from_host              |              |
| Ndb_config_from_port              | 0            |
| Ndb_number_of_data_nodes          | 0            |
| Not_flushed_delayed_rows          | 0            |
| Open_files                        | 98           |
| Open_streams                      | 0            |
| Open_tables                       | 223          |
| Opened_tables                     | 314          |
| Prepared_stmt_count               | 0            |
| Qcache_free_blocks                | 0            |
| Qcache_free_memory                | 0            |
| Qcache_hits                       | 0            |
| Qcache_inserts                    | 0            |
| Qcache_lowmem_prunes              | 0            |
| Qcache_not_cached                 | 0            |
| Qcache_queries_in_cache           | 0            |
| Qcache_total_blocks               | 0            |
| Questions                         | 64304791     |
| Rpl_status                        | NULL         |
| Select_full_join                  | 0            |
| Select_full_range_join            | 0            |
| Select_range                      | 26225        |
| Select_range_check                | 0            |
| Select_scan                       | 1026415      |
| Slave_open_temp_tables            | 0            |
| Slave_retried_transactions        | 0            |
| Slave_running
Re: Data Modelling
If you use InnoDB you should not have a problem, as it uses row-level locking and isolated transactions. Other than that, you can split your tables into smaller ones using either partitioning or the federated engine...

Ben

Krishna Chandra Prajapati wrote:

 Hi, Generally, in data modelling there are some independent tables, user-related tables, master tables and one main table which is related with most of the tables. Almost in all the queries the main table is involved to fetch the `id`. In this way the main table is used the most. Sometimes the main table gets locked due to another query. When the hits on the database server increase, the locking time will increase. Is there any way to reduce the locking time of the main table? For eg, main_dup can be created. To reflect all the changes from the main table to main_dup a trigger can be used. So main_dup can be used to reduce the locking time of the table. The above is my idea. Is there any other way to reduce the locking period?
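Ben's first suggestion in concrete terms, assuming the hot table discussed above is named `main` (a sketch, since the real schema isn't shown):

```sql
-- Convert the contended table to InnoDB so readers and writers contend
-- on individual rows instead of the whole table (MyISAM takes
-- table-level locks for every write):
ALTER TABLE main ENGINE = InnoDB;

-- Confirm the conversion via the Engine column:
SHOW TABLE STATUS LIKE 'main';
```

The ALTER itself copies the table and locks it for the duration, so it is best run in a maintenance window.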