Re: Problem connecting to 4.1 server
Hello. Are you sure that mysql.exe is the same version as the server? Check if you're able to connect to a 3.23.xx server using the API. What value does your MYSQL_SECURE_AUTH option have? See: http://dev.mysql.com/doc/mysql/en/mysql-options.html

Nicholas Watmough [EMAIL PROTECTED] wrote: I am having trouble connecting to the MySQL server 4.1 using the API, though I have no trouble connecting using the same parameters from the command-line mysql.exe. I'm using server version 4.1.10 on Windows Server 2003, which was started with --old-passwords (to ensure backward compatibility with legacy PHP 4 applications). As a result my MySQL server uses the older-style, 16-character password hashing. I'm using mysql_real_connect(), and linking with the libmysql.lib that ships with version 4.1.10. When I connect using mysql_real_connect() to a database with a password, it fails and returns a bad handshake error. However, when I connect using mysql_real_connect() to a database with no password, it connects without a problem. This makes me suspect that the client is incorrectly trying to hash with the newer-style 41-character hashing against a server using the 16-character hashing. Is there any way to tell the client to use older-style hashing? One of the client flags seems to relate to this, i.e.:

#define CLIENT_LONG_PASSWORD 1 /* New more secure passwords */ (mysql_com.h, line 107)

but this should mean that a value of 0 uses short passwords, and I'm calling mysql_real_connect() with a value of 0 for the client-flag argument (though I've also tried with 1 to see if that works). I have no trouble connecting to the database using the same connection parameters with the command-line mysql.exe. Does anyone have any idea what I should do?

*** code extract ***
MYSQL my;
mysql_init(&my);
if (!mysql_real_connect(&my, host, user, password, db, port, NULL, 0)) {
    cerr << ...;
    exit(EXIT_FAILURE);
} else {
    cerr << ...;
    exit(EXIT_SUCCESS);
}

-- For technical support contracts, go to https://order.mysql.com/?ref=ensita This email is sponsored by Ensita.NET http://www.ensita.net/ Gleb Paharenko [EMAIL PROTECTED] MySQL AB / Ensita.NET www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Beta 5.0.4: Table 'mysql.host' Does Not Exist
Hello. Probably mysql_install_db wasn't executed due to SELinux restrictions. Execute it manually. Check that your data directory has the 'mysql' database.

Robert L Cochran [EMAIL PROTECTED] wrote: When I installed MySQL-server version 5.0.4 on my Fedora Core 3 system (Linux x86, using RPMs from MySQL.com) with SELinux running in enforcing mode, the server failed to start, possibly due to denials from the SELinux implementation. 'restorecon -R -v /var/lib/mysql' failed to let the server start. 'restorecon -R -v /usr/lib/mysql' failed to let the server start. 'restorecon -v /usr/sbin/mysqld' failed to let the server start. I then disabled SELinux just for the mysqld application. However, MySQL 5's server won't start. I located the error log and it has this message: [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist How can I fix this problem? Should I uninstall MySQL (with 'rpm -e') and reinstall it? Should this work if I disabled SELinux for mysqld? Thanks for your help Bob Cochran Greenbelt, Maryland, USA
Re: Tables lost in new location of database
Hello. Remove mysqld-nt and install mysqld-nt-max. See: http://dev.mysql.com/doc/mysql/en/windows-start-service.html

> No - mysqld-nt-max is not running - I can see in the task manager that only mysqld-nt is running!!
> I also confirmed this by checking in my services - I only have mysqld-nt present as a service for mysql!
> Should I run mysqld-nt-max for this functionality to work??
> Also - I need to run only MyISAM and nothing else - we do not use InnoDB or BDB or anything else but MyISAM.

Anoop kumar V [EMAIL PROTECTED] wrote:
C API : Problem using multi-statements
Hello, I have some problems using multiple queries in a database-driven project, so I wrote a little test program which causes the same problems. I am using the C API of MySQL 4.1.11 on Gentoo Linux 3.3.2-r5, propolice-3.3-7 with a 2.4.27 kernel. I connect to the server (on localhost) with mysql_real_connect() and the flag CLIENT_MULTI_STATEMENTS, and I submit multiple queries (two INSERTs separated by ;) on the existing connection. Executing the multi-statement with mysql_query() in a loop (e.g. 10 times), I get a lot of "lost connection during query" errors, but sending a single INSERT query in a loop causes no errors! Thanks in advance for help. Regards, Tinosch Ganjineh

The following program operates on a simple table created with the following statement:

CREATE TABLE BIGTABLE (myoid char(40), mykey char(40), myval char(40), myint bigint) TYPE = InnoDB;

/** mysqltest.cpp */
#include <iostream>
#include <mysql.h>
#include <sstream>
#include <string>
using namespace std;

string itos(long long i) {
    ostringstream sstream;
    sstream << i;
    return sstream.str();
}

int main(int argc, char** argv) {
    MYSQL* conn;
    if ((conn = mysql_init(NULL))) {
        if (mysql_real_connect(conn, "localhost", "root", "x", "test", 0, NULL,
                               CLIENT_MULTI_STATEMENTS)) {
            int loop = 100;
            for (int i = 0; i < loop; ++i) {
                int e = 0;
                string query;
                query = string("INSERT INTO BIGTABLE VALUES (") +
                        "'object-" + itos(i) + "', 'foo', 'bar', NULL);" +
                        "INSERT INTO BIGTABLE VALUES (" +
                        "'object-" + itos(i + 1000) + "', 'bar', 'foo', NULL);";
                e = mysql_query(conn, query.c_str());
                if (e) {
                    cerr << "*Query failed*: " << e << " - " << mysql_error(conn) << endl;
                } else {
                    MYSQL_RES* result = mysql_store_result(conn);
                    if (result) {
                        // .. parse result set ...
                    } else {
                        // cerr << "Could not fetch results from DB: " << mysql_error(conn);
                    }
                    mysql_free_result(result);
                }
            }
        } else {
            cerr << "Could not connect to MySQL database: " << mysql_error(conn) << endl;
        }
    } else {
        cerr << "Could not initialize MySQL: " << mysql_error(conn) << endl;
    }
}
/** mysqltest.cpp */
Re: [OT] client_IP
On Friday, 6 May 2005 at 02:10, [EMAIL PROTECTED] wrote: Forgive me if this question has been answered in the past. How can you record the IP address on your first page, index.html? I am using my index.html to be index.lasso, and the client_ip tag does not seem to work. It is picking up my server IP, and yes, I have turned off the setting in Lasso. Now it seems to work if I actually write out http://websitename.com/index.lasso; why doesn't it work with just http://websitename.com/ ? I do not want the user to have to type out index.lasso to access the first page.

Your question has nothing to do with MySQL, but: if you use Apache, you have to configure the DirectoryIndex directive in httpd.conf:

# DirectoryIndex: Name of the file or files to use as a pre-written HTML
DirectoryIndex index.lasso

joe
RE: Work / Home DB synchronization
Thanks for the info. I did not know there was a GUI for the MySQL product. I installed it and it's like Toad for MySQL. Very cool. I am not very strong at using MySQL; I am a J2EE developer. I came up with a few questions I need some more support with. 1) My home box does not have a static IP address, and my machine at work is behind a firewall, and we use DHCP. So I do not know if I can use SJA? Anyone know? Upon reading, it looks like there is a need to use an IP on both ends. 2) Do ISPs allow this type of synchronization? 3) Is there a way to do a simple backup in the interim (copy the data to removable media) and copy it to the other machine until I get this all worked out? I see some export-data capability in the SQLyog program, but I do not know if this is valid? Thanks, Scott

-----Original Message----- From: Karam Chand [mailto:[EMAIL PROTECTED] Sent: Thursday, May 05, 2005 12:08 PM To: Scott Purcell; mysql@lists.mysql.com Subject: Re: Work / Home DB synchronization

You require SQLyog's Data Sync Tool. Download it from http://www.webyog.com You can find an article on it at: http://www.sitepoint.com/article/mysql-data-sqlyog-job-agent Karam

--- Scott Purcell [EMAIL PROTECTED] wrote: Hello, I am developing a web application in which I have been working between home and an away office. Up to lately, most of my efforts have been writing the queries, etc. for the backend, so I have a text file that dumps the databases and repopulates them. Up until now, it has been effective in getting some basic data into the database to code against. But now that portion is done, and I am starting to port more data into the database: data I need for display purposes, etc. Is there a simple, easy, effective way to transfer the data between home and away? For the code, I create a .war file and copy that to a RAM stick and transfer it that way. But when it comes to the database, I am not sure how to keep them synchronized? Less than 10,000 total records at launch time.
Thanks, Scott K Purcell
[Fwd: Re: Beta 5.0.4: Table 'mysql.host' Does Not Exist]
Hello, Thank you! This is what I needed to do to get the server started. I executed mysql_install_db as the operating system's root user, which may have been a mistake. That set the ownership of the 'mysql' database in the data directory to root.root, causing the server to fail to start when it couldn't open or read host.frm. Changing the ownership to mysql.mysql for the directory and all files in it fixed the problem and allowed me to start the server. Bob Cochran Greenbelt, Maryland, USA

Gleb Paharenko wrote: Hello. Probably mysql_install_db wasn't executed due to SELinux restrictions. Execute it manually. Check that your data directory has the 'mysql' database.

Robert L Cochran [EMAIL PROTECTED] wrote: When I installed MySQL-server version 5.0.4 on my Fedora Core 3 system (Linux x86, using RPMs from MySQL.com) with SELinux running in enforcing mode, the server failed to start, possibly due to denials from the SELinux implementation. 'restorecon -R -v /var/lib/mysql' failed to let the server start. 'restorecon -R -v /usr/lib/mysql' failed to let the server start. 'restorecon -v /usr/sbin/mysqld' failed to let the server start. I then disabled SELinux just for the mysqld application. However, MySQL 5's server won't start. I located the error log and it has this message: [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist How can I fix this problem? Should I uninstall MySQL (with 'rpm -e') and reinstall it? Should this work if I disabled SELinux for mysqld? Thanks for your help Bob Cochran Greenbelt, Maryland, USA
Re: no /tmp/mysql.sock
Mark Sargent wrote: ... but, my real query, now, is, why the configs/my.cnf were different,

Because there isn't any /etc/my.cnf created on your system by default; you are responsible for creating one if needed, using the examples in the ./support-files subdirectory. Obviously the /etc/my.cnf on your system was detritus from the previous installation; it's not surprising it was out of sync with the new one. Hence my original suggestion to remove all traces of the old MySQL installation before starting fresh :-)

along with the coding being wrong in the mysql_install_db script, which I had to move to /usr/local/mysql from the scripts dir to get it to run without errors.

Extract from the INSTALL-BINARY file in the 4.1.11-standard dist: The basic commands you must execute to install and use a MySQL binary distribution are:

shell> groupadd mysql
shell> useradd -g mysql mysql
shell> cd /usr/local
shell> gunzip < /PATH/TO/MYSQL-VERSION-OS.tar.gz | tar xvf -
shell> ln -s FULL-PATH-TO-MYSQL-VERSION-OS mysql
shell> cd mysql
shell> scripts/mysql_install_db --user=mysql
...

Note that you are directed to run the script from the MySQL *base* install directory -- it sounds like you were trying to run it from ./scripts, which makes a big difference in the interpretation of the 'current working directory' :-) HTH!

-- Hassan Schroeder - [EMAIL PROTECTED] Webtuitive Design === (+1) 408-938-0567 === http://webtuitive.com dream. code.
Re: Foreign Key Restriction
Partha Dutta [EMAIL PROTECTED] schrieb im Newsbeitrag news:[EMAIL PROTECTED] You can turn off foreign key restrictions within your session: SET SESSION foreign_key_checks = 0; Then later, turn them back on using SET SESSION foreign_key_checks = 1;

I saw that the tables on my MySQL 4.0.8 server are of type MyISAM and the tables on MySQL 4.1.1 are of type InnoDB, though they are created with the same SQL script. Could this difference be why I could INSERT a record with a foreign-key value of 0 on MySQL 4.0.8? Thanks in advance, Oliver Hirschi
Hardware requirements
Hello, I'm doing a university project and I need to buy a server for a business. I have to simulate an enterprise that sells over the Internet. There are many clients and products in the database, and we use MySQL on a Linux OS. I need to find as much information as possible about the hardware requirements (number of processors, necessary memory, cache, hard disk, ...) for a high-performance MySQL server. Thanks very much for your help
Multi-user bookmark system
I'm writing a web-application that allows users to store their bookmarks. Each user has a tree of folders (and bookmarks belong to these folders). The only thing I want to do with tree elements at the same level is display them, and let the user only go up and down in the tree by one level. No aggregate functions or things like that on subtrees. I decided to do it like this: http://wilma.vub.ac.be/~lddekeyz/test/schema.png Where the arrows represent foreign key constraints. But then I realized there is a transitive functional dependency: parent_id -> owner. So I normalized it to: http://wilma.vub.ac.be/~lddekeyz/test/schema2.png With the arrows still representing foreign key constraints. Now, I really feel something is wrong here. And I just know when I try to put this in SQL :)

CREATE TABLE role (
  role_id INTEGER NOT NULL,
  rolename VARCHAR(25) NOT NULL,
  PRIMARY KEY (role_id)
) TYPE = InnoDB;

CREATE TABLE account (
  username VARCHAR(25) NOT NULL,
  password VARCHAR(80) NOT NULL,
  email VARCHAR(80) NOT NULL,
  first_name VARCHAR(80) NOT NULL,
  last_name VARCHAR(80) NOT NULL,
  role_id INTEGER NOT NULL,
  PRIMARY KEY (username),
  FOREIGN KEY (role_id) REFERENCES role(role_id)
) TYPE = InnoDB;

CREATE TABLE folder (
  folder_id INTEGER NOT NULL AUTO_INCREMENT,
  parent_id INTEGER,
  foldername VARCHAR(80),
  PRIMARY KEY (folder_id),
  FOREIGN KEY (parent_id) REFERENCES folder(folder_id) ON DELETE CASCADE
) TYPE = InnoDB;

CREATE TABLE owner (
  parent_id INTEGER NOT NULL,
  owner VARCHAR(25) NOT NULL,
  PRIMARY KEY (parent_id),
  FOREIGN KEY (parent_id) REFERENCES folder(folder_id) ON DELETE CASCADE,
  FOREIGN KEY (owner) REFERENCES account(username) ON DELETE CASCADE
) TYPE = InnoDB;

Now when I delete a user, everything related to him in the owner table will be deleted, but in the folder table, his folders will not be deleted.
Re: Hardware requirements
At 10:17 AM 5/6/05, Berta Alcala Larramendi wrote: I'm doing a university project and I need to buy a server for a business. I have to simulate an enterprise that sells over the Internet. There are many clients and products in the database, and we use MySQL on a Linux OS. I need to find as much information as possible about the hardware requirements (number of processors, necessary memory, cache, hard disk, ...) for a high-performance MySQL server.

With an open-ended question like yours, you're going to get open-ended answers. I am running MySQL on several machines. One of those machines has a single P2 processor, 64MB of RAM and a 4GB hard drive, and it provides excellent performance to its many clients. To get reasonable proposals, you need to provide info like how big the database is and how frequently it will be queried. It is interesting to note that you have already decided on MySQL and Linux. This platform is not the answer for every environment, such as those that require features only available in beta releases. Frank
Re: Foreign Key Restriction
Oliver Hirschi wrote: Partha Dutta [EMAIL PROTECTED] schrieb im Newsbeitrag news:[EMAIL PROTECTED] You can turn off foreign key restrictions within your session: SET SESSION foreign_key_checks = 0; Then later, turn them back on using SET SESSION foreign_key_checks = 1; I saw that the tables on my MySQL 4.0.8 server are of type MyISAM and the tables on MySQL 4.1.1 are of type InnoDB, though they are created with the same SQL script. Could this difference be why I could INSERT a record with a foreign-key value of 0 on MySQL 4.0.8? Thanks in advance, Oliver Hirschi

It's not the version of MySQL that matters, it's the table type. MyISAM tables don't support foreign key constraints. See the manual for details: http://dev.mysql.com/doc/mysql/en/ansi-diff-foreign-keys.html http://dev.mysql.com/doc/mysql/en/myisam-storage-engine.html http://dev.mysql.com/doc/mysql/en/innodb.html If your tables were created as MyISAM on one server and InnoDB on the other using the same script, then I'd guess the script doesn't specify the table type, and you have different settings for the default table type on the two servers. Michael
Re: Tables lost in new location of database
Thank you Gleb, I will try that out and let you know how I did. Thanks again, Anoop

On 5/5/05, Gleb Paharenko [EMAIL PROTECTED] wrote: Hello. Remove mysqld-nt and install mysqld-nt-max. See: http://dev.mysql.com/doc/mysql/en/windows-start-service.html
> No - mysqld-nt-max is not running - I can see in the task manager that only mysqld-nt is running!!
> I also confirmed this by checking in my services - I only have mysqld-nt present as a service for mysql!
> Should I run mysqld-nt-max for this functionality to work??
> Also - I need to run only MyISAM and nothing else - we do not use InnoDB or BDB or anything else but MyISAM.
Anoop kumar V [EMAIL PROTECTED] wrote:

-- Thanks and best regards, Anoop
Re: Multi-user bookmark system
Lieven, If a folder belongs to an account, why not use the account PK as a FK in folders? See http://www.intelligententerprise.com/001020/celko1_1.jhtml for ideas about SQL representation of trees. Peter Brawley http://www.artfulsoftware.com

Lieven De Keyzer wrote: I'm writing a web-application that allows users to store their bookmarks. Each user has a tree of folders (and bookmarks belong to these folders). The only thing I want to do with tree elements at the same level is display them, and let the user only go up and down in the tree by one level. No aggregate functions or things like that on subtrees. I decided to do it like this: http://wilma.vub.ac.be/~lddekeyz/test/schema.png Where the arrows represent foreign key constraints. But then I realized there is a transitive functional dependency: parent_id -> owner. So I normalized it to: http://wilma.vub.ac.be/~lddekeyz/test/schema2.png With the arrows still representing foreign key constraints. Now, I really feel something is wrong here.
And I just know when I try to put this in SQL :) [quoted CREATE TABLE statements snipped] Now when I delete a user, everything related to him in the owner table will be deleted, but in the folder table, his folders will not be deleted.
Re: no /tmp/mysql.sock
Hassan Schroeder wrote: Mark Sargent wrote: ... but, my real query, now, is, why the configs/my.cnf were different,

Because there isn't any /etc/my.cnf created on your system by default; you are responsible for creating one if needed, using the examples in the ./support-files subdirectory. Obviously the /etc/my.cnf on your system was detritus from the previous installation; it's not surprising it was out of sync with the new.

earth calling mars???...

Hence my original suggestion to remove all traces of the old MySQL installation before starting fresh :-)

yep, did just that, just wasn't aware that the my.cnf wasn't installed by default (although, in all reality, I shoulda realised such; as how would it have been..)

along with the coding being wrong in the mysql_install_db script, which I had to move to /usr/local/mysql from the scripts dir to get it to run without errors.

Extract from the INSTALL-BINARY file in the 4.1.11-standard dist: The basic commands you must execute to install and use a MySQL binary distribution are:

shell> groupadd mysql
shell> useradd -g mysql mysql
shell> cd /usr/local
shell> gunzip < /PATH/TO/MYSQL-VERSION-OS.tar.gz | tar xvf -
shell> ln -s FULL-PATH-TO-MYSQL-VERSION-OS mysql
shell> cd mysql
shell> scripts/mysql_install_db --user=mysql
...

Note that you are directed to run the script from the MySQL *base* install directory -- it sounds like you were trying to run it from ./scripts, which makes a big difference in the interpretation of the 'current working directory' :-)

lord, now who feels like a dill...lol... HTH! Cheers. Mark Sargent.
Re: Multi-user bookmark system
Peter, From: Peter Brawley [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] To: Lieven De Keyzer [EMAIL PROTECTED] CC: mysql@lists.mysql.com Subject: Re: Multi-user bookmark system Date: Fri, 06 May 2005 10:09:12 -0500

Lieven, If a folder belongs to an account, why not use the account PK as a FK in folders?

Well, that's what's in the first picture. The owner is a FK in the folders table. But the problem is that there is a transitive dependency there. So I normalized to 3NF. But perhaps it's just easier not to normalize and do it as in picture 1.

See http://www.intelligententerprise.com/001020/celko1_1.jhtml for ideas about SQL representation of trees.

I'll take a look.

Peter Brawley http://www.artfulsoftware.com

Lieven De Keyzer wrote: I'm writing a web-application that allows users to store their bookmarks. Each user has a tree of folders (and bookmarks belong to these folders). The only thing I want to do with tree elements at the same level is display them, and let the user only go up and down in the tree by one level. No aggregate functions or things like that on subtrees. I decided to do it like this: http://wilma.vub.ac.be/~lddekeyz/test/schema.png Where the arrows represent foreign key constraints. But then I realized there is a transitive functional dependency: parent_id -> owner. So I normalized it to: http://wilma.vub.ac.be/~lddekeyz/test/schema2.png With the arrows still representing foreign key constraints. Now, I really feel something is wrong here.
And I just know when I try to put this in SQL :) [quoted CREATE TABLE statements snipped] Now when I delete a user, everything related to him in the owner table will be deleted, but in the folder table, his folders will not be deleted.
Re: Multi-user bookmark system
From: Peter Brawley [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] To: Lieven De Keyzer [EMAIL PROTECTED] Subject: Re: Multi-user bookmark system Date: Fri, 06 May 2005 11:07:48 -0500

Lieven, Here it is. Removing the FK looks to me like an incorrect fix to the transitive dependency.

But the FK is not gone, is it? According to Database Systems by Connolly and Begg, this is the way to resolve a transitive functional dependency. The foreign key to the account table is now in the owner table, and the owner table has a foreign key to folder. I know there's something wrong with my scheme, but it should be possible to normalize it, or not?

PB

Lieven De Keyzer wrote: [earlier messages in the thread quoted] I decided to do it like this: http://wilma.vub.ac.be/~lddekeyz/test/schema.png Where the arrows represent foreign key constraints. But then I realized there is a transitive functional dependency: parent_id -> owner.
So I normalized it to: http://wilma.vub.ac.be/~lddekeyz/test/schema2.png With the arrows still representing foreign key constraints. Now, I really feel something is wrong here. And I just know when I try to put this in SQL :) [quoted CREATE TABLE statements snipped] Now when I delete a user, everything related to him in the owner table will be deleted, but in the folder table, his folders will not be deleted.
Re: C API : Problem using multi-statements
Answer is simple. Can't do that.

- Original Message - From: [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Friday, May 06, 2005 5:40 AM Subject: C API : Problem using multi-statements

Hello, I have some problems using multiple queries in a database-driven project, so I wrote a little test program which reproduces the same problems. I am using the C API of MySQL 4.1.11 on a Gentoo Linux 3.3.2-r5, propolice-3.3-7 with 2.4.27 kernel. I connect to the server (on localhost) with mysql_real_connect and the flag CLIENT_MULTI_STATEMENTS, then I submit multiple queries (two INSERTs separated by ;) on the existing connection. Executing the multi-statement with mysql_query in a loop (e.g. 10 times), I get a lot of "lost connection during query" errors, but sending a single INSERT query in a loop causes no errors!

Thanks in advance for help, regards, Tinosch Ganjineh

The following program operates on a simple table structure created with the following statement:

CREATE TABLE BIGTABLE (myoid char(40), mykey char(40), myval char(40), myint bigint) TYPE = InnoDB;

/** mysqltest.cpp */
#include <iostream>
#include <mysql.h>
#include <sstream>
#include <string>

using namespace std;

string itos(long long i)
{
    ostringstream sstream;
    sstream << i;
    return sstream.str();
}

int main(int argc, char** argv)
{
    MYSQL* conn;
    if ((conn = mysql_init(NULL))) {
        if (mysql_real_connect(conn, "localhost", "root", "x", "test", 0, NULL,
                               CLIENT_MULTI_STATEMENTS)) {
            int loop = 100;
            for (int i = 0; i < loop; ++i) {
                int e = 0;
                string query;
                query = string("INSERT INTO BIGTABLE VALUES (")
                      + "'object-" + itos(i) + "', 'foo', 'bar', NULL); "
                        "INSERT INTO BIGTABLE VALUES ("
                      + "'object-" + itos(i + 1000) + "', 'bar', 'foo', NULL);";
                e = mysql_query(conn, query.c_str());
                if (e) {
                    cerr << "*Query failed*: " << e << " - " << mysql_error(conn) << endl;
                } else {
                    MYSQL_RES* result = mysql_store_result(conn);
                    if (result) {
                        // .. parse result set ...
                    } else {
                        // cerr << "Could not fetch results from DB: " << mysql_error(conn);
                    }
                    mysql_free_result(result);
                }
            }
        } else {
            cerr << "Could not connect to MySQL database: " << mysql_error(conn) << endl;
        }
    } else {
        cerr << "Could not initialize MySQL: " << mysql_error(conn) << endl;
    }
}
/** mysqltest.cpp */

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: C API : Problem using multi-statements
Jeremiah, I don't use the client library in my work, but this should work from 4.1 on.

-Reggie

-Original Message- From: Jeremiah Gowdy [mailto:[EMAIL PROTECTED] Sent: Friday, May 06, 2005 11:19 AM To: [EMAIL PROTECTED]; mysql@lists.mysql.com Subject: Re: C API : Problem using multi-statements

Answer is simple. Can't do that.

[snip: original message quoted in full]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: C API : Problem using multi-statements
Hi, Instead of using two INSERT statements, try something like this:

INSERT INTO table1 VALUES (list of values1), (list of values2)

-Original Message- From: Jeremiah Gowdy [mailto:[EMAIL PROTECTED] Sent: 06 May 2005 17:19 To: [EMAIL PROTECTED]; mysql@lists.mysql.com Subject: Re: C API : Problem using multi-statements

Answer is simple. Can't do that.

[snip: original message quoted in full]

** This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This footnote also confirms that this email message has been swept by MIMEsweeper for the presence of computer viruses. www.mimesweeper.com **

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Work / Home DB synchronization
You can use mysqldump to create a text version of your database. mysqldump will generate the statements that will both (re)generate your tables and populate them with data. It may be the low-tech solution you asked about. Just dump your DB to a file (memory stick, zip disk, CD-RW) and move it en masse from one machine to the other.

Shawn Green Database Administrator Unimin Corporation - Spruce Pine

Scott Purcell [EMAIL PROTECTED] wrote on 05/06/2005 09:05:39 AM:

Thanks for the info. I did not know there was a GUI for the mysql product. I installed it and it's like Toad for MySQL. Very cool. As I am not very strong at using mysql, I am a J2EE developer. I came up with some questions I need more support with.

1). My home box does not have a static IP address, and my machine at work is behind a firewall, and we use DHCP. So I do not know if I can use sja? Anyone know? Upon reading, it looks like there is a need to use an IP on both ends.

2). Do ISPs allow this type of synchronization?

3). Is there a way to do a simple backup in the interim (copy the data to removable media) and copy it to the other machine until I get this all worked out? I see some export data capability in the SQLyog program. But I do not know if this is valid?

Thanks, Scott

-Original Message- From: Karam Chand [mailto:[EMAIL PROTECTED] Sent: Thursday, May 05, 2005 12:08 PM To: Scott Purcell; mysql@lists.mysql.com Subject: Re: Work / Home DB synchronization

You require SQLyog's Data Sync Tool. Download it from http://www.webyog.com You can find an article on it at: http://www.sitepoint.com/article/mysql-data-sqlyog-job-agent

Karam

--- Scott Purcell [EMAIL PROTECTED] wrote: Hello, I am developing a web application in which I have been working between home and an away office. Anyway, up to lately, most of my efforts have been writing the queries, etc. for the backend. So I have a text file that dumps the database and repopulates them. And up until now, it has been effective in getting some basic data into the database to code against. But now, that portion is done, and I am starting to port more data into the database. Data I need for display purposes, etc. Is there a simple, easy, effective way to transfer the data between home and away? What I do for the code is create a .war file and copy that to a RAM stick and transfer it that way. But when it comes to the database, I am not sure how to keep them synchronized? Less than 10,000 total records at launch time.

Thanks, Scott K Purcell

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]

__ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
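A minimal sketch of the dump-and-carry round trip described above (user and database names are placeholders):

```sh
# On the work machine: dump schema + data into one text file
mysqldump -u myuser -p mydb > mydb_dump.sql

# Carry mydb_dump.sql on removable media, then on the home machine:
mysql -u myuser -p mydb < mydb_dump.sql
```

Note this replaces the home copy wholesale rather than merging changes; if both sides are edited independently, a sync tool like sja is still needed.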
Re: Hardware requirements
That's kind of like asking what kind of car to buy to get from point A to point B. A sports car? It will get you there fast and in style. Unless you are a moving company, then it's totally inappropriate. A taxi service? Well, that's different too.

Ok, so you are selling by internet. Amazon.com, eBay.com and Woot.com all sell by internet. All three are very, very different in what they require. Amazon sells a ton of products and provides store fronts/hosting for many other companies. eBay sells tons of products, but also needs to track bids. Woot sells one product each day and only 5 days a week. Very different software and hardware requirements, and probably traffic patterns. Amazon and eBay would both require clustering and load balancing; Woot would probably only require automatic failover. "Many" is a relative term, and so are high performance and high availability (i.e. five 9's). Maybe you need a cluster, or maybe just replication.

Short answer: go with an IBM dual-core POWER-based system with 32GB of RAM (or more), 16 CPUs (or more), and a 4GB-per-second FibreChannel array for storage. Or order a bunch of cheap off-the-shelf generic PC systems and cluster them like Google does. Really, it all depends on what your traffic patterns will look like. iTunes needs fast disks and lots of them to handle all the downloads, which can take a while. The systems also need to handle many, many long-lasting active connections. Amazon really only ships products, so its traffic patterns will be different than iTunes'. And of course, poor database design will bring any hardware to its knees.

On May 6, 2005, at 10:17 AM, Berta Alcala Larramendi wrote: Hello, I'm doing a University project and I need to buy a server for a business. I have to simulate an enterprise that sells by Internet. There are many clients and products in the Data Base and we use MySQL in a Linux OS.
I need to find as much information as possible about the hardware requirements like number of processors, necessary memory, cache, HD... to use in a high performance MySQL server Thanks very much for your help -- Brent Baisley Systems Architect Landover Associates, Inc. Search Advisory Services for Advanced Technology Environments p: 212.759.6400/800.759.0577 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
myisamchk and myisampack
Hello everyone, When running these utilities does the MySQL daemon have to be down? There's no activity against the database other than an occasional query. George __ Switch to Netscape Internet Service. As low as $9.95 a month -- Sign up today at http://isp.netscape.com/register Netscape. Just the Net You Need. New! Netscape Toolbar for Internet Explorer Search from anywhere on the Web and block those annoying pop-ups. Download now at http://channels.netscape.com/ns/search/install.jsp -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
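As a hedged sketch of the usual precaution: both utilities work directly on the .MYI/.MYD files, so the server must not have the tables open while they run. Paths and names below are placeholders:

```sh
# Safest: stop the server first
mysqladmin -u root -p shutdown

# Check (or repair) the MyISAM tables in a database
myisamchk --check /var/lib/mysql/mydb/*.MYI

# Compress a table with myisampack, then rebuild its indexes
myisampack /var/lib/mysql/mydb/mytable.MYI
myisamchk -rq /var/lib/mysql/mydb/mytable.MYI
```

If stopping the server is not an option, the manual's alternative is to FLUSH TABLES and hold a lock on the affected tables so that even an occasional query can't touch them mid-repair.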
RE: Work / Home DB synchronization
Hello, I am not the final authority on this, but you can use SSH/HTTP tunneling. More information on it can be found at: http://www.devx.com/security/Article/27854?trk=DXRSS_DB

If you have any question regarding SQLyog, I guess http://www.webyog.com/forums would be your best bet.

Karam

--- Scott Purcell [EMAIL PROTECTED] wrote:

[snip: earlier messages quoted in full]

__ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
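As a sketch of the SSH tunneling idea (hostnames, ports, and account names are placeholders; it assumes the work side accepts an inbound SSH connection):

```sh
# Forward local port 3307 to MySQL (3306) on the work machine
ssh -L 3307:127.0.0.1:3306 user@work-gateway.example.com

# Tools like SQLyog/sja can now point at 127.0.0.1:3307
# as if it were the remote server:
mysql -h 127.0.0.1 -P 3307 -u myuser -p
```

This sidesteps the dynamic-IP problem on the home side entirely, since only one outbound SSH connection is needed from home.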
Re: C API : Problem using multi-statements
doh! need another redbull. :)

- Original Message - From: Reggie Burnett [EMAIL PROTECTED] To: 'Jeremiah Gowdy' [EMAIL PROTECTED]; [EMAIL PROTECTED]; mysql@lists.mysql.com Sent: Friday, May 06, 2005 9:31 AM Subject: RE: C API : Problem using multi-statements

Jeremiah I don't use the client library in my work but this should work from 4.1 on. -Reggie

[snip: earlier messages quoted in full]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
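One detail worth checking in the original test program: with CLIENT_MULTI_STATEMENTS, every statement in the string produces its own result status, and the client must drain them all with mysql_next_result() before issuing the next query. Skipping that step is a classic cause of "lost connection during query" / "commands out of sync" errors. A minimal sketch of the draining loop (error handling trimmed; conn is an already-connected MYSQL*):

```c
/* after mysql_query(conn, multi_statement_string) succeeds: */
int status;
do {
    MYSQL_RES *result = mysql_store_result(conn);
    if (result) {
        /* ... process rows ... */
        mysql_free_result(result);
    } else if (mysql_field_count(conn) != 0) {
        fprintf(stderr, "could not retrieve result set: %s\n",
                mysql_error(conn));
        break;
    }
    /* mysql_next_result: 0 = more results, -1 = done, >0 = error */
    status = mysql_next_result(conn);
} while (status == 0);
```

For plain INSERTs, mysql_store_result() returns NULL with mysql_field_count() == 0, which is fine; the essential part is calling mysql_next_result() until it returns nonzero.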
couldn't drop column in Innodb table
I used ALTER TABLE tbl_name DROP col_name, but it failed. The table has 20 million rows, and this turns out to be a size problem. I'd appreciate any suggestions.

__ Yahoo! Mail Mobile Take Yahoo! Mail with you! Check email on your mobile phone. http://mobile.yahoo.com/learn/mail

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
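One commonly suggested workaround when ALTER TABLE's full-table rebuild is too expensive to run in one shot is a manual copy-and-swap. This is a sketch, not from the thread; t1 and its surviving columns a and b are placeholders:

```sql
-- Build a copy of the table without the unwanted column
CREATE TABLE t1_new (a INT NOT NULL, b VARCHAR(80)) TYPE = InnoDB;
INSERT INTO t1_new (a, b) SELECT a, b FROM t1;

-- Swap the tables, then discard the old one
RENAME TABLE t1 TO t1_old, t1_new TO t1;
DROP TABLE t1_old;
```

The copy still needs free disk space for a second copy of the data (which is also what ALTER TABLE needs internally), but the INSERT ... SELECT can be batched by primary-key range to keep individual transactions small.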
ERROR 1063 (4200) when trying to drop a user
I am trying to drop a user via:

mysql> drop user 'wiki_user';
ERROR 1268 (HY000): Can't drop one or more of the requested users

The relevant user table:

mysql> select host,user from user;
+-----------------------+------------+
| host                  | user       |
+-----------------------+------------+
| %                     | root       |
| %                     | wiki_user  |
| localhost             | cbxoops    |
| localhost             | fg_phpnuke |
| localhost             | fg_xoops   |
| localhost             | gw_default |
| localhost             | nwcb       |
| localhost             | pm4        |
| localhost             | root       |
| localhost             | wiki_user  |
| localhost.localdomain | root       |
| localhost.localdomain | wiki_user  |
| sm.domain.com         | root       |
+-----------------------+------------+
13 rows in set (0.00 sec)

And the GRANTS:

mysql> show grants for wiki_user;
GRANT USAGE ON *.* TO 'wiki_user'@'%' IDENTIFIED BY PASSWORD '*7503F5BF90CD33491FD041DB75F9F3D08AE01E0B'
GRANT SELECT, INSERT, UPDATE, DELETE ON `wikidb`.* TO 'wiki_user'@'%'

What am I missing that I can't drop the user? I would try this through phpMyAdmin, but I can't log in to it, which I think occurred after the above wiki web install.

Thanks, eric

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
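A likely culprit: in MySQL 4.1, DROP USER only removes accounts that hold no privileges, and wiki_user exists under three different hosts. A sketch of the usual sequence (repeat the REVOKE for each host entry that still has grants):

```sql
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'wiki_user'@'%';
DROP USER 'wiki_user'@'%';
DROP USER 'wiki_user'@'localhost';
DROP USER 'wiki_user'@'localhost.localdomain';
FLUSH PRIVILEGES;
```

The bare `drop user 'wiki_user'` only addresses the '%' entry, and that one still held USAGE plus grants on wikidb, so the statement failed.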
index slowing things down?
I was playing around with a table that had 100 tinyint fields. Each record contained either a '0' or a '1' for each field. A query such as

SELECT * FROM `foobar` WHERE `f01` =1 AND `f02` =1 AND `f03` =1 AND `f04` =1 AND `f05` =1 AND `f06` =1 AND `f07` =1 AND `f08` =1 AND `f09` =1 AND `f10` =1

runs in 0.07 seconds with 200,000 records. When I add indices to fields f01 and f02, the query now takes 0.23 seconds. Is this expected? Is this simply the time required to load the index? Are indices simply not designed to work with something of such low cardinality?

Thanks, Jon

-- Chance favors only a prepared mind. Jon Beyer 302N Dod Hall Princeton University Princeton, NJ 08544 609 986 8722

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Problem compiling mysql 4.1.11 on AIX 5.1
Hi all! I'm having trouble building the latest stable on AIX 5.1. My env is as follows: Relevant GNU tools: autoconf-2.58-1.aix5.1.noarch.rpm automake-1.7.9-1.aix5.1.noarch.rpm binutils-2.14-3.aix5.1.ppc.rpm bison-1.875-2.aix5.1.ppc.rpm flex-2.5.4a-6.aix4.3.ppc.rpm gcc-3.3.2-4.aix5.1.ppc.rpm libtool-1.5-2.aix5.1.ppc.rpm m4-1.4.1-1.aix5.1.ppc.rpm make-3.80-1.aix5.1.ppc.rpm patch-2.5.4-4.aix4.3.ppc.rpm zlib-1.2.1-2.aix5.1.ppc.rpm zlib-devel-1.2.1-2.aix5.1.ppc.rpm PATH: /opt/freeware/bin:/usr/local/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr1/acct/jonathan/bin:/usr/bin/X11:/sbin:. Compiler Flags: CC=gcc CFLAGS=-mcpu=powerpc -Wa,-many CXX=gcc CXXFLAGS=-mcpu=powerpc -Wa,-many -felide-constructors -fno-exceptions -fno-rtti export CC CXX CFLAGS CXXFLAGS Configure directives: ./configure \ --prefix=/usr2/tools/mysql \ --enable-assembler \ --with-big-tables \ --with-low-memory \ --with-openssl=/usr2/tools/openssl \ --without-extra-tools \ --without-docs \ --without-bench \ --enable-local-infile Results in the following error: ... checking term.h usability... no checking term.h presence... yes configure: WARNING: term.h: present but cannot be compiled configure: WARNING: term.h: check for missing prerequisite headers? configure: WARNING: term.h: see the Autoconf documentation configure: WARNING: term.h: section Present But Cannot Be Compiled configure: WARNING: term.h: proceeding with the preprocessor's result configure: WARNING: term.h: in the future, the compiler will take precedence configure: WARNING: ## -- ## configure: WARNING: ## Report this to the AC_PACKAGE_NAME lists. ## configure: WARNING: ## -- ## checking for term.h... yes ... config.log has this to say on that: ... 
configure:22525: checking term.h usability configure:22537: gcc -c -mcpu=powerpc -Wa,-manyconftest.c 5 In file included from conftest.c:90: /usr/include/term.h:1194: error: parse error before bool /usr/include/term.h:1225: error: parse error before SGTTY /usr/include/term.h:1242: error: parse error before sgr_mode /usr/include/term.h:1243: error: parse error before sgr_faked /usr/include/term.h:1254: error: parse error before funckeystarter /usr/include/term.h:1255: error: parse error before _fl_rawmode /usr/include/term.h:1265: error: parse error before '*' token /usr/include/term.h:1267: error: parse error before bit_vector /usr/include/term.h:1271: error: parse error before check_turn_off /usr/include/term.h:1272: error: parse error before _cur_pair /usr/include/term.h:1273: error: parse error before '*' token /usr/include/term.h:1276: error: parse error before '}' token ... If I search for ''bool'' in the include files referenced in the configure test, I only get: sys/types.h:typedef int boolean_t; Type bool does appear to be defined in curses.h, which is not used in the term.h test. Compiling stops with: ... if gcc -DUNDEF_THREADS_HACK -DHAVE_CONFIG_H -DNO_KILL_INTR -I. -I. -I../.. -I../../include -I./../.. 
-I..-O3 -DDBUG_OFF -mcpu=powerpc -Wa,-many -MT term.o -MD -MP -MF .deps/term.Tpo -c -o term.o term.c; \ then mv -f .deps/term.Tpo .deps/term.Po; else rm -f .deps/term.Tpo; exit 1; fi In file included from /opt/freeware/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.3.2/include/curses.h:136, from term.c:62: /usr/include/term.h:1351: error: conflicting types for `tgetstr' /usr/local/include/termcap.h:27: error: previous declaration of `tgetstr' /usr/include/term.h:1355: error: conflicting types for `tgetflag' /usr/local/include/termcap.h:26: error: previous declaration of `tgetflag' /usr/include/term.h:1355: error: conflicting types for `tgetnum' /usr/local/include/termcap.h:25: error: previous declaration of `tgetnum' /usr/include/term.h:1357: error: conflicting types for `tputs' /usr/local/include/termcap.h:31: error: previous declaration of `tputs' term.c: In function `term_set': term.c:941: warning: passing arg 1 of `tgetstr' discards qualifiers from pointer target type term.c: In function `term_echotc': term.c:1445: warning: passing arg 1 of `tgetstr' discards qualifiers from pointer target type make[3]: *** [term.o] Error 1 make[3]: Leaving directory `/usr1/acct/jonathan/ostk/build/mysql-4.1.11/cmd-line-utils/libedit' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/usr1/acct/jonathan/ostk/build/mysql-4.1.11/cmd-line-utils' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/usr1/acct/jonathan/ostk/build/mysql-4.1.11' make: *** [all] Error 2 In config.h, HAVE_CURSES_H, HAVE_TERMCAP_H and HAVE_TERM_H are all defined and set to 1. I tried removing HAVE_TERM_H, but curses.h just includes it anyway. If I #undef HAVE_TERMCAP_H and rerun make, then it stops at: ... gcc: installation problem, cannot exec `cc1plus': No such file or directory make[2]: *** [my_new.o] Error 1 make[2]: Leaving directory `/usr1/acct/jonathan/ostk/build/mysql-4.1.11/mysys'
Re: index slowing things down?
Use EXPLAIN to see what MySQL is doing. My guess is that MySQL is trying to figure out whether it would be faster to use the index or scan the whole table. MySQL is probably deciding that it is quicker to scan the whole table based on the low cardinality of the indices it can use. Thus, you've just added the query-optimizer overhead to your query and accomplished no more than what was happening before.

To really take advantage of an index, you would need to create a compound index, one that spans multiple fields. So you would have a single index that is made up of f01, f02, f03, ... But if you are going to do that, you might as well just create a single field (char) that contains all your numbers. I don't know what your search criteria or data format are like, so that may not work. But it would considerably simplify your data structure, searches and code.

On May 6, 2005, at 4:14 PM, Jon Beyer wrote:

[snip: original message quoted in full]

-- Brent Baisley Systems Architect Landover Associates, Inc. Search Advisory Services for Advanced Technology Environments p: 212.759.6400/800.759.0577

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
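The compound-index suggestion can be sketched like this (the index name and the choice of columns are illustrative):

```sql
-- One multi-column index instead of separate single-column ones
ALTER TABLE foobar ADD INDEX flags_idx (f01, f02, f03, f04, f05);

-- Queries constraining a leftmost prefix of (f01, f02, ...) can use it:
EXPLAIN SELECT * FROM foobar
WHERE f01 = 1 AND f02 = 1 AND f03 = 1;
```

Even this may lose to a table scan on 0/1 columns; EXPLAIN will show which plan the optimizer actually picks.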
MySQL not using optimum disk throughput.
We have a few DBs which aren't using disk IO to optimum capacity. They're running at a load of 1.5 or so with a high workload of pending queries. When I run iostat I'm not noticing much IO:

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 13.73 128.43 252.94 1027.45 1695.10 513.73 847.55 7.14 90.13 285.00 2.53 96.57

This is only seeing about 500K - 1M per second throughput. When I run bonnie++ on these drives they show 20M-40M throughput, which is really strange. Most of our queries are single INSERTs/DELETEs. I could probably rewrite these to become batch operations, but I think I'd still end up seeing the above iostat results, just with higher throughput... so I'd like to get to the bottom of this before moving forward. I ran OPTIMIZE TABLE on all tables but nothing changed. The boxes aren't paging. They're running on a RAID5 disk with XFS. Could it be that the disks are having to do a number of head seeks since we have large tables?

-- Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. See irc.freenode.net #rojo if you want to chat. Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html Kevin A. Burton, Location - San Francisco, CA AIM/YIM - sfburtonator, Web - http://peerfear.org/ GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL not using optimum disk throughput.
What kernel are you running? If you're running 2.6.x, use the deadline scheduler or downgrade to 2.4.23aavm. 2.6.[0-9] has major problems with the IO scheduler since the process scheduler is very fast now.

DVP Dathan Vance Pattishall http://www.friendster.com

-Original Message- From: Kevin Burton [mailto:[EMAIL PROTECTED] Sent: Friday, May 06, 2005 1:58 PM To: mysql@lists.mysql.com Subject: MySQL not using optimum disk throughput.

[snip: original message quoted in full]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Data Design : Numeric or keyword lookup values?
Hi, when designing data it is common to have lookup tables such as animal_type: dog=1, cat=2, bird=3, etc., and then in other tables to refer to animals by their number 1, 2 or 3. This is memory- and presumably speed-efficient, however not much fun for humans who are reading/debugging the data. Alternatively, it's possible to have a lookup table containing animal types as text strings (cat, dog, bird) and then actually use those names wherever required in other tables. If table size and speed are not top priority, is there anything wrong with using the 2nd method? Views/opinions please.

--
zzapper
http://www.rayninfo.co.uk/tips/ vim, zsh success tips
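The two designs the poster describes can be sketched side by side. The snippet below uses SQLite purely for a self-contained demo (all table and column names are invented for illustration; MySQL's DDL differs slightly, e.g. AUTO_INCREMENT):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  -- Design 1: numeric surrogate key plus lookup table
  CREATE TABLE animal_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
  CREATE TABLE pet_n (pet TEXT, type_id INTEGER REFERENCES animal_type(id));

  -- Design 2: the text value itself is the key
  CREATE TABLE animal_type_s (name TEXT PRIMARY KEY);
  CREATE TABLE pet_s (pet TEXT, type TEXT REFERENCES animal_type_s(name));
""")
db.executemany("INSERT INTO animal_type VALUES (?, ?)",
               [(1, "dog"), (2, "cat"), (3, "bird")])
db.executemany("INSERT INTO animal_type_s VALUES (?)",
               [("dog",), ("cat",), ("bird",)])
db.execute("INSERT INTO pet_n VALUES ('Rex', 1)")
db.execute("INSERT INTO pet_s VALUES ('Rex', 'dog')")

# Design 1 needs a join before a human can read it ...
row_n = db.execute("""SELECT p.pet, t.name FROM pet_n p
                      JOIN animal_type t ON p.type_id = t.id""").fetchone()
# ... design 2 is readable exactly as stored.
row_s = db.execute("SELECT pet, type FROM pet_s").fetchone()
print(row_n, row_s)
```

Both queries return ('Rex', 'dog'); the trade-off is the extra join (and smaller rows) in design 1 versus direct readability (and repeated strings) in design 2.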
Re: MySQL not using optimum disk throughput.
We have seen the exact same thing here. We used the deadline scheduler and saw an immediate improvement. However, we still saw much worse performance on our Opterons (compared to our older Xeon boxes). We ended up rolling back to the Fedora Core 1 2.4.22-1.2199.nptlsmp kernel and shut down NPTL, and now our Opterons are much, much faster than our Xeons.

The thing I find strange about this is that our experience (@ Meetup) seems to match that of Friendster's (I know of a few other high-traffic sites that have mentioned similar issues), in that MySQL on Opteron and Linux 2.6 is not a good solution. Yet MySQL recommends exactly this config and, in fact, does not seem to even support (via support contract) a 2.4 solution for Opteron + MySQL.

Greg

Dathan Pattishall wrote: What kernel are you running? If you're running 2.6.x, use the deadline scheduler or downgrade to 2.4.23aavm. 2.6.[0-9] has major problems with the IO scheduler since the process scheduler is very fast now.
Expression Confusion?
Greetings All, being new to this list I beg indulgence in a matter that has me quite befuddled. Having already had good success in building my first database web site, I was taken aback when I ran across this little problem whilst checking things out for full functionality on a 7500+ item database ;-)

Using the below statement results in a null data dump:

SELECT * FROM items WHERE item = '109S2' AND venturi_type = 'L-shaped' AND category = 'burner';

Whereas if I change the 'S' so that the item = '10902', like this:

SELECT * FROM items WHERE item = '10902' AND venturi_type = 'L-shaped' AND category = 'burner';

I get a full, complete data dump. Both numbers are legitimate part numbers in the item column, yet one works and the other does not. I have the 'item' column data type set to VARCHAR(30), btw! I am using phpMyAdmin-2.5.3 to check this all out. What am I missing here? Any help would be greatly appreciated!

Tommy Barrios
WEBZIGHT.COM http://www.webzight.com
If you've got a good thing going, don't fix it!
Re: Expression Confusion?
Tommy Barrios wrote: Using the below statement results in a null data dump: SELECT * FROM items WHERE item = '109S2' AND venturi_type = 'L-shaped' AND category = 'burner'; ... Both numbers are legitimate part numbers in the item column, yet one works and the other does not.

Apparently you have no rows where all three criteria are true. If you think you do, you need to examine the values in the table more carefully. Perhaps the item value has a newline at the end, or a space at the beginning, or something else not immediately visible. Try selecting LENGTH() or HEX() of a column to see whether it's what you expect.

--
Keith Ivey [EMAIL PROTECTED]
Smokefree DC http://www.smokefreedc.org Washington, DC
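Keith's LENGTH()/HEX() suggestion can be sketched like this (SQLite here for a self-contained demo; the stored value with a trailing newline is hypothetical, standing in for whatever invisible character is in the real table):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (item TEXT)")
# Simulate a value that was loaded with a hidden trailing newline.
db.execute("INSERT INTO items VALUES ('109S2\n')")

# The exact match fails even though the value "looks" right in a browser ...
miss = db.execute("SELECT * FROM items WHERE item = '109S2'").fetchone()
print(miss)  # no row found

# ... LENGTH() and HEX() reveal why: 6 characters, ending in 0A (newline).
length, hexval = db.execute(
    "SELECT LENGTH(item), HEX(item) FROM items").fetchone()
print(length, hexval)
```

Running the same two functions against the real '109S2' row in MySQL would show whether the stored value really is five characters of plain ASCII.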
amPiguous!
Why are columns included in the join between two tables ambiguous? It seems that they should *not* be ambiguous! Like this:

select pk from a inner join b using (pk);
ERROR 1052 (23000): Column 'pk' in field list is ambiguous!!!

Is this a bug, or is it like this for a reason? It drives me nuts, because it is not ambiguous (as far as I can tell). Am I a fool?
Re: amPiguous!
Actually, the error message is misleading. There is nothing that I would call ambiguous in your query: you have a syntax error. The join should be written:

select pk from a inner join b on a.col1 = b.col2

Of course, you need to replace 'a.col1' and 'b.col2' with real column names from tables a and b respectively.

Rhino

----- Original Message -----
From: Dan Bolser [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, May 06, 2005 6:50 PM
Subject: amPiguous!
Re: amPiguous!
On 7/05/2005 11:00 a.m., Rhino wrote: Actually, the error message is misleading. There is nothing that I would call ambiguous in your query: you have a syntax error.

Not true, his join syntax is correct.

select pk from a inner join b using (pk)

is the same as saying

select pk from a inner join b on a.pk = b.pk

However, MySQL is complaining about the "select pk" part because it doesn't know whether to select a.pk or b.pk. I think what Dan is arguing is that MySQL should know from the join that a.pk and b.pk are the same, so it doesn't matter which one it uses.

-Simon
Re: amPiguous!
The way he is joining tables is fine. You can specify how to link using ON, or you can just say to use a commonly named field with USING. The problem is with the "SELECT pk". That is ambiguous: from which table would you like the pk field? It can be table1.pk or table2.pk.

Eric Jensen
Re: amPiguous!
On Fri, 6 May 2005, Rhino wrote: Actually, the error message is misleading. There is nothing that I would call ambiguous in your query: you have a syntax error. The join should be written: select pk from a inner join b on a.col1 = b.col2

I would have said...

select pk from a inner join b on a.pk = b.pk;

(probably pk was a bad choice for an example column name). Using the ON syntax instead of the USING syntax makes my problem look even more silly than it already is, i.e. just say

select a.pk from a inner join b on a.pk = b.pk;

Thing is, I use the USING syntax all the time and name equivalent columns in different tables the same to help me do this (I read somewhere that this is good practice). Is it still me, or should the USING syntax 'disambiguate' columns in the select statement?
Re: amPiguous!
On Sat, 7 May 2005, Simon Garner wrote: I think what Dan is arguing is that MySQL should know from the join that a.pk and b.pk are the same, so it doesn't matter which one it uses.

Exactly! After all, a.pk = b.pk! However, I can imagine how this could become somewhat tricky with the ON syntax.
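For what it's worth, the behaviour Dan expects is what standard SQL's USING gives: the shared column appears once in the result, so a bare reference to it is not ambiguous. SQLite, for instance, implements this (a sketch with made-up two-row tables; MySQL 4.1's behaviour is the ERROR 1052 quoted in the thread):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE a (pk INTEGER);
  CREATE TABLE b (pk INTEGER);
  INSERT INTO a VALUES (1), (2);
  INSERT INTO b VALUES (2), (3);
""")

# With USING, the shared column is coalesced, so a bare 'pk' resolves fine.
rows = db.execute("SELECT pk FROM a INNER JOIN b USING (pk)").fetchall()
print(rows)  # the single matching pk value

# With ON, 'pk' really is ambiguous, and SQLite reports it as such.
msg = ""
try:
    db.execute("SELECT pk FROM a INNER JOIN b ON a.pk = b.pk")
except sqlite3.OperationalError as e:
    msg = str(e)
print(msg)
```

So the thread's two positions are both defensible: with ON the bare column is genuinely ambiguous, while with USING the standard says it should not be.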
Re: amPiguous!
He's right in saying that MySQL is capable of knowing. My thoughts are that it's not worth the speed loss, extra code, and potential guesswork by MySQL just so you don't have to type a table name.

Eric Jensen wrote: The way he is joining tables is fine. You can specify how to link using ON, or you can just say to use a commonly named field with USING. The problem is with the "SELECT pk".
Re: amPiguous!
On Fri, 6 May 2005, Eric Bergen wrote: He's right in saying that MySQL is capable of knowing. My thoughts are that it's not worth the speed loss, extra code, and potential guesswork by MySQL just so you don't have to type a table name.

I see what you mean. I didn't think about the additional query-processing burden / potential bugs as a result of this 'fix'.
Re: amPiguous!
My mistake; I wasn't aware of the 'using' clause being an alternative syntax for 'on' in MySQL's version of SQL. I mostly use DB2, and that is *not* valid in DB2's SQL.

Rhino

----- Original Message -----
From: Simon Garner [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, May 06, 2005 7:14 PM
Subject: Re: amPiguous!
Newbie:number of hits per day query
How do I get the average number of hits per day? I have a table like:

DateTime        id  ip
20050506190723  1   121.198.262

What I have so far:

SELECT DateTime, count( * ) FROM userLog GROUP BY DateTime LIMIT 0, 30

I assume that DateTime would have to be converted to a specific day like 2005-3-7, and somehow I would need to average them. Can a kind soul point me in the right direction?

g
Re: amPiguous!
Something else to think about as well; look at this slight modification:

select pk from a left join b using (pk);

Now, it's not likely this is a valid query for your table structure, but in this instance a.pk and b.pk are not necessarily the same: b.pk could potentially be NULL while a.pk was not.
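That LEFT JOIN case can be sketched too. With USING, standard semantics make the bare result column effectively COALESCE(a.pk, b.pk), so it takes a's value on unmatched rows, while a qualified b.pk is genuinely NULL (SQLite demo with made-up tables; MySQL 4.1 rejects the bare form, per the thread):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE a (pk INTEGER);
  CREATE TABLE b (pk INTEGER);
  INSERT INTO a VALUES (1), (2);
  INSERT INTO b VALUES (2);   -- pk=1 has no match in b
""")

# Bare pk: coalesced, so the unmatched row still shows a's value.
rows = db.execute(
    "SELECT pk FROM a LEFT JOIN b USING (pk) ORDER BY a.pk").fetchall()
print(rows)

# Qualified b.pk: NULL where the left row had no match.
rows_b = db.execute(
    "SELECT b.pk FROM a LEFT JOIN b USING (pk) ORDER BY a.pk").fetchall()
print(rows_b)
```

So even under USING, "which pk" is only a non-question for inner joins; for outer joins the coalesced column and the qualified columns can differ.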
Re: MySQL not using optimum disk throughput.
Greg Whalin wrote: We have seen the exact same thing here. We used the deadline scheduler and saw an immediate improvement. However, we still saw much worse performance on our Opterons (compared to our older Xeon boxes). We ended up rolling back to the Fedora Core 1 2.4.22-1.2199.nptlsmp kernel and shut down NPTL, and now our Opterons are much, much faster than our Xeons.

Sweet... I'm going to take a look at that! Two votes for the deadline scheduler. I'm an NPTL fan, but I'm not sure our DB boxes need it as they don't use THAT many threads.

Greg Whalin wrote: The thing I find strange about this is that our experience (@ Meetup) seems to match that of Friendster's, in that MySQL on Opteron and Linux 2.6 is not a good solution. Yet MySQL recommends exactly this config and, in fact, does not seem to even support (via support contract) a 2.4 solution for Opteron + MySQL.

Wow... what's the consensus on Opteron here then? It seems to be a clear winner since you can give the mysql process more memory for caching. Is it an OS issue, since few of the distributions seem to support Opteron (well)?
Re: MySQL not using optimum disk throughput.
Kevin Burton wrote: Sweet... I'm going to take a look at that! Two votes for the deadline scheduler. I'm an NPTL fan, but I'm not sure our DB boxes need it as they don't use THAT many threads.

Deadline was much faster. Using sysbench:

test: sysbench --num-threads=16 --test=fileio --file-total-size=20G --file-test-mode=rndrw run
results: 2.6.10-1.14_FC2smp on dual Opteron 248s w/ 4GB RAM

default scheduler (anticipatory):
  Operations performed: 6004 Read, 3996 Write, 12800 Other = 22800 Total
  Read 93.812Mb  Written 62.438Mb  Total transferred 156.25Mb (2.9186Mb/sec)
  186.79 Requests/sec executed
  Test execution summary:
    total time: 53.5363s
    total number of events: 1
    total time taken by event execution: 376.0398
    per-request statistics:
      min: 0.s  avg: 0.0376s  max: 18446744073709.4961s
      approx. 95 percentile: 0.1106s
    Threads fairness:
      distribution: 70.15/87.92
      execution: 88.48/93.88

deadline scheduler:
  Operations performed: 6006 Read, 3994 Write, 12800 Other = 22800 Total
  Read 93.844Mb  Written 62.406Mb  Total transferred 156.25Mb (4.4464Mb/sec)
  284.57 Requests/sec executed
  Test execution summary:
    total time: 35.1411s
    total number of events: 1
    total time taken by event execution: 289.2953
    per-request statistics:
      min: 0.s  avg: 0.0289s  max: 0.3520s
      approx. 95 percentile: 0.0870s
    Threads fairness:
      distribution: 84.92/92.89
      execution: 90.52/96.58

The 2.4 scheduler showed similar results to deadline under 2.6.
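The headline numbers in the two sysbench runs quoted above check out arithmetically: the same 156.25Mb was transferred in both runs, just in different wall time, giving roughly a 1.5x speedup under deadline:

```python
# Recompute the Mb/sec figures from total transfer and total time.
mb = 156.25
anticipatory_s, deadline_s = 53.5363, 35.1411

anticipatory_rate = mb / anticipatory_s   # should match 2.9186Mb/sec
deadline_rate = mb / deadline_s           # should match 4.4464Mb/sec
speedup = anticipatory_s / deadline_s     # ~1.5x faster under deadline

print(round(anticipatory_rate, 4), round(deadline_rate, 4), round(speedup, 2))
```

(Note also the anticipatory run's per-request max of 18446744073709.4961s, which is 2^64 microseconds and looks like a timer-underflow artifact in that sysbench version rather than a real latency.)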
Kevin Burton wrote: Wow... what's the consensus on Opteron here then? It seems to be a clear winner since you can give the mysql process more memory for caching. Is it an OS issue, since few of the distributions seem to support Opteron (well)?

I suspect this is an OS issue. Our Opterons were completing large data update queries approx 2-3 times slower than our Xeons when running under 2.6. After a switch to 2.4, the Opterons are faster than the Xeons. I mentioned NPTL being shut off (LD_ASSUME_KERNEL=2.4.19 in the init script). When we left NPTL running, we saw almost instant deadlocks just watching replication catch up (no other site traffic directed to the machine). This is in 2.4, btw, so these are the backported NPTL kernels from Fedora. I somewhat suspect NPTL being a problem in 2.6 as well, due to impressions I get from sifting through MySQL's bug tracking system. The IO scheduler was also an obvious culprit.

Other issues I have noticed w/ the Opteron version of MySQL:

- Under 2.6, if we took the db offline and ran myisamchk on a table w/ fulltext indexes, and then started back up again, the table would nearly instantly crash (upon first writes to it). Running REPAIR TABLE would seg fault. Shutting down to run myisamchk would only cause the table to crash again upon first write. The only solution: alter table tablename engine=myisam; Then the table would run fine. We have since dropped all fulltext indexes and moved to Lucene (much more flexible and way faster anyhow).

- Under 2.4 (this just happened to me tonight, and it is a scary one), we routinely archive and clean up large tables w/ seldom-used old data. After doing a DELETE FROM table WHERE ctime < '2005-05-01', we would see a SELECT COUNT(*) show around 160k rows remaining (from the 1st of the month). I would call REPAIR TABLE on the table, and the remaining rows would be deleted. Repair would make mention of dropping the row count from 165k to 0. Yikes! This happened on both Opterons and did not happen on the Xeons (thank god ... was able to save the data).

At any rate, I am 100% confident in saying that MySQL (w/ the myisam table engine ... not tried innodb yet) on Linux on Opterons is not yet stable or speedy, though we usually only see problems under large data cleanups (moving, deleting, repairing, etc).

Greg
Re: MySQL not using optimum disk throughput.
Greg Whalin wrote: Deadline was much faster. Using sysbench: test: sysbench --num-threads=16 --test=fileio --file-total-size=20G --file-test-mode=rndrw run Wow... what version of sysbench are you running? Its giving me strange errors sysbench v0.3.4: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 16 Extra file open flags: 0 128 files, 160Mb each 20Gb total file size Block size 16Kb Number of random requests for random IO: 1 Read/Write ratio for combined random IO test: 1.50 Periodic FSYNC enabled, calling fsync() each 100 requests. Calling fsync() at the end of test, Enabled. Using synchronous I/O mode Doing random r/w test Threads started! FATAL: Failed to read file! file: 90 pos: 14761984 errno = 0 (Success) FATAL: Failed to read file! file: 103 pos: 161398784 errno = 0 (Success) FATAL: Failed to read file! file: 75 pos: 79413248 errno = 0 (Success) FATAL: Failed to read file! file: 79 pos: 67207168 errno = 0 (Success) FATAL: Failed to read file! file: 108 pos: 64028672 errno = 0 (Success) FATAL: Failed to read file! file: 53 pos: 96157696 errno = 0 (Success) FATAL: Failed to read file! file: 88 pos: 137068544 errno = 0 (Success) -- Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. See irc.freenode.net #rojo if you want to chat. Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html Kevin A. Burton, Location - San Francisco, CA AIM/YIM - sfburtonator, Web - http://peerfear.org/ GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: MySQL not using optimum disk throughput.
Kevin Burton wrote: Greg Whalin wrote: Deadline was much faster. Using sysbench: test: sysbench --num-threads=16 --test=fileio --file-total-size=20G --file-test-mode=rndrw run

So... FYI, I rebooted with elevator=deadline as a kernel param.

db2:~# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq

(which I assume means I'm now running deadline; is there any other way to find out?) And no performance diff. Note that your benchmarks only show about a 1.5Mb/sec additional throughput; we're about 60x too slow for these drives, so I'm not sure what could be going on here :-/

Kevin
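Yes, the bracketed entry in /sys/block/<dev>/queue/scheduler is the one in use, so that output does confirm deadline is active. A tiny parser (applied here to the quoted string rather than the live sysfs file) makes the convention explicit:

```python
def active_scheduler(line: str) -> str:
    """Return the bracketed (active) scheduler from a sysfs scheduler line,
    e.g. 'noop anticipatory [deadline] cfq' -> 'deadline'."""
    start = line.index("[") + 1
    return line[start:line.index("]")]

# The exact line Kevin quoted from /sys/block/sda/queue/scheduler:
sched = active_scheduler("noop anticipatory [deadline] cfq")
print(sched)  # deadline
```

On a live box the same check is just reading that sysfs file; writing a scheduler name into it switches schedulers at runtime on 2.6 kernels, without the elevator= boot parameter.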
Re: MySQL not using optimum disk throughput.
What drives are you using? For SCSI RAID, you definitely want the deadline scheduler. That said, even after the switch to deadline, we saw our Opterons running way slow (compared to older, slower Xeons). Whatever the problem is, we fought it for quite a while (though it's difficult to test too much w/ production dbs) and ended up rolling back to 2.4.

Kevin Burton wrote: So... FYI, I rebooted with elevator=deadline as a kernel param. And no performance diff. We're about 60x too slow for these drives, so I'm not sure what could be going on here :-/
Re: Newbie:number of hits per day query
How about this:

select substr(DateTime, 1, 8) AS date, count(*) from userLog group by date;

(Note that SUBSTR positions start at 1 in MySQL, so the first 8 characters of the 14-digit timestamp are the YYYYMMDD day.) I could not clearly understand what you wanted to average upon.

HTH,
Anoop

On 5/6/05, Graham Anderson wrote: how do I get the average number of hits per day ... I assume that DateTime would have to be converted to a specific day like 2005-3-7, and somehow I would need to average them.
Re: Newbie:number of hits per day query
ohh one correction: substr() positions in MySQL are 1-based (a start of 0 returns an empty string), and counting the rows directly is simpler:

select substr(datetime,1,8) AS date, count(*) from userLog group by date;

Anoop

On 5/6/05, Anoop kumar V [EMAIL PROTECTED] wrote:
> how about this:
>
> select substr(datetime,0,8) AS date, count(datetime) from userLog group by date;
>
> I could not clearly understand what you wanted to average upon.

--
Thanks and best regards,
Anoop
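[Editor's note: to pull the thread's pieces together, the per-day counts come from grouping on the first eight characters of the timestamp, and the average hits per day is an aggregate over those counts. A runnable sketch using SQLite in Python as a stand-in for MySQL; the table contents are made up, and on MySQL the AVG-over-subquery form needs 4.1's subquery support:

```python
import sqlite3

# In-memory stand-in for the userLog table from the thread;
# the sample rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE userLog (DateTime TEXT, id INTEGER, ip TEXT)")
conn.executemany(
    "INSERT INTO userLog VALUES (?, ?, ?)",
    [
        ("20050506190723", 1, "121.198.26.2"),
        ("20050506191500", 2, "121.198.26.3"),
        ("20050507080000", 3, "121.198.26.4"),
    ],
)

# Hits per day: substr() is 1-based, so characters 1-8 of the
# 14-digit timestamp are the YYYYMMDD date.
per_day = conn.execute(
    "SELECT substr(DateTime, 1, 8) AS day, COUNT(*) AS hits "
    "FROM userLog GROUP BY day ORDER BY day"
).fetchall()
print(per_day)  # [('20050506', 2), ('20050507', 1)]

# Average hits per day: aggregate over the per-day counts.
avg_hits = conn.execute(
    "SELECT AVG(hits) FROM (SELECT COUNT(*) AS hits "
    "FROM userLog GROUP BY substr(DateTime, 1, 8))"
).fetchone()[0]
print(avg_hits)  # 1.5
```

The inner SELECT produces one count per day, and AVG() collapses those into the per-day average Graham asked for.]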
Re: MySQL not using optimum disk throughput.
Greg Whalin wrote:
> What drives are you using? For SCSI RAID, you definitely want the deadline scheduler. That said, even after the switch to deadline, we saw our Opterons running way slow (compared to older, slower Xeons). Whatever the problem is, we fought it for quite a while (though difficult to test too much w/ production dbs) and ended up rolling back to 2.4.

Ug.. I don't want to roll back to 2.4... 2.6 has so many nice features we depend on. We're using SCSI RAID5 on Xeon of course.

I think it's time to rule out some things. I'm going to migrate to RAID1... just to verify... then try reviewing our kernel options.. maybe disabling NPTL... maybe try another filesystem... Not fun.

For the record... on a loaded system, what type of IO do you guys see? Anywhere near full disk capacity? I'm curious to see what type of IO people are seeing on a production/loaded mysql box.

Kevin
Re: MySQL not using optimum disk throughput.
In the last episode (May 06), Kevin Burton said:
> We have a few DBs which aren't using disk IO to optimum capacity. They're running at a load of 1.5 or so with a high backlog of pending queries. When I run iostat I'm not noticing much IO:
>
> Device:  rrqm/s  wrqm/s  r/s     w/s     rsec/s   wsec/s   rkB/s   wkB/s   avgrq-sz  avgqu-sz  await   svctm  %util
> sda      0.00    13.73   128.43  252.94  1027.45  1695.10  513.73  847.55  7.14      90.13     285.00  2.53   96.57
>
> This is only seeing about 500k - 1M per second throughput. When I run bonnie++ on these drives they show 20M-40M throughput. They're running on a RAID5 disk on XFS.

An OLTP database is not a system that requires throughput; it requires lots of random access, so MB/sec doesn't matter a bit. Instead, take a look at the r/s and w/s columns. You're doing ~380 IOs/sec, which sounds like maybe a 3-disk set? Each disk you add to the set should give you another 120 or so IOs per second. When you max out the number of drives in your case, you will realize why drive manufacturers sell 15K rpm disks: an array of 15k drives will give you double the transaction rate (250 IO/s instead of 120) of the same number of 7200 rpm drives :)

--
Dan Nelson
[EMAIL PROTECTED]
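[Editor's note: Dan's spindle arithmetic can be checked in a few lines. The ~120 IOPS per 7200 rpm disk and ~250 per 15k rpm disk are his rule-of-thumb figures; the r/s and w/s values come from the iostat sample above:

```python
# Random-IO capacity estimate from the iostat sample in the thread.
r_per_s = 128.43   # reads/sec from iostat
w_per_s = 252.94   # writes/sec from iostat

total_iops = r_per_s + w_per_s
print(round(total_iops))  # 381 IOs/sec observed

# Rule-of-thumb per-spindle rates (from Dan's reply).
IOPS_7200 = 120
IOPS_15K = 250

# Roughly how many 7200 rpm spindles' worth of random IO is that?
print(round(total_iops / IOPS_7200, 1))  # 3.2, consistent with a 3-disk set

# The same three spindles at 15k rpm would sustain roughly:
print(3 * IOPS_15K)  # 750 IOs/sec
```

This is why the r/s and w/s columns, not MB/sec, are the numbers to watch for an OLTP workload: the array is already near its random-IO ceiling even though its sequential bandwidth is barely touched.]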
Re: MySQL not using optimum disk throughput.
> And no performance diff. Note that your benchmarks only show about 20M of additional overhead. We're about 60x too slow for these drives, so I'm not sure what could be going on here :-/

I know of a site that encountered a similar performance issue: the OS was reading in a lot more data from the disk than the database really needed. The culprit turned out to be the stripe size on a 4-disk RAID. By reducing the stripe size from 768K to 32K, they obtained a 200% increase in mysql throughput.

- JD
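[Editor's note: a back-of-envelope illustration of why a smaller stripe chunk can help random reads. The worst-case assumption here (the controller reads a full stripe chunk to service each random page read) and the 16 KB page size (InnoDB's default) are the editor's, not from JD's post:

```python
# Hypothetical read amplification per random page read, assuming the
# array reads a whole stripe chunk to service it (worst case).
page_kb = 16  # assumed InnoDB page size

amplification = {}
for chunk_kb in (768, 32):  # the two stripe sizes from JD's anecdote
    amplification[chunk_kb] = max(chunk_kb, page_kb) / page_kb

print(amplification)  # {768: 48.0, 32: 2.0}
```

Under that assumption, the 768K stripe drags in 48x the data each 16 KB page read actually needs, while the 32K stripe wastes only 2x, which is consistent with the throughput jump the site saw.]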