Re: Images
From: GH [EMAIL PROTECTED]
> I was wondering how to get images into and out of a MySQL database; I was told it was possible. I have MySQL 4.0.18.

The general opinion is that files should be stored in a file system, not in a database. There are, however, circumstances in which you might want to store binary data in a database. Take a look at the BLOB column types (http://dev.mysql.com/doc/mysql/en/BLOB.html).

Regards, Jigal.

-- MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
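For context, a minimal sketch of the BLOB approach the reply points to; the table and column names here are illustrative, not from the thread:

```sql
-- Hypothetical table for storing image files as binary data.
CREATE TABLE images (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    filename  VARCHAR(255) NOT NULL,
    mime_type VARCHAR(64)  NOT NULL,
    data      MEDIUMBLOB   NOT NULL   -- holds up to 16 MB per row
);

-- The application reads the file and binds its raw bytes, e.g.:
-- INSERT INTO images (filename, mime_type, data) VALUES (?, ?, ?);
```

MEDIUMBLOB tops out at 16 MB; use BLOB (64 KB) or LONGBLOB (4 GB) depending on expected image sizes.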
Different default collation in each connection
Hi,

I have a problem setting the default collation for each connection. I have a multilanguage site and I don't want to add a COLLATE clause to every SQL statement - I'd rather set it once during PHP startup and forget about it. But there's a problem. I tried:

  SET names 'utf-8';
  SET collation_connection = 'utf8_polish_ci';

and I think that should set the default collation for this connection to 'utf8_polish_ci'. Depending on which language the user chooses, I could set e.g. 'utf8_romanian_ci' instead. But this does not work: when I run queries that search for something with LIKE, I get no results if I enter a Polish national character in a different case than it appears in the DB.

Any suggestions? Please help, as adding a COLLATE clause to every statement is not even possible - many of these statements are generated automagically...

P.S. I use MySQL 4.1.4 on a Windows XP Home machine.
P.P.S. I store data in one table which has no default collation set, only the UTF-8 charset. In this table I can have data in different collations.

-- Use the force - read the source
Piotr Duszynski mailto:[EMAIL PROTECTED]
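One thing worth checking (a sketch, not tested against 4.1.4 specifically): since MySQL 4.1, SET NAMES accepts a COLLATE clause, and the charset name is spelled utf8, not utf-8:

```sql
-- Run once per connection, right after connecting:
SET NAMES 'utf8' COLLATE 'utf8_polish_ci';

-- For another user language, substitute the matching collation, e.g.:
-- SET NAMES 'utf8' COLLATE 'utf8_romanian_ci';
```

This sets character_set_client, character_set_connection, character_set_results, and collation_connection in one statement.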
hostname-bin.001 Files
On my MySQL servers, I've noticed some unexpected files in the /usr/local/mysql/data/ directory. Specifically, I have files like:

  hostname-bin.001
  hostname-bin.002
  hostname-bin.003
  etc.

(obviously with hostname being replaced with the name of the computer). I also have files like:

  innodb.status.410
  innodb.status.411
  innodb.status.412
  etc.

What ARE these files (I assume some sort of log files) and do I want/need them around? Some are small and some are quite large. Thanks!

P.S. MySQL version: 4.0.20, OS version: Mac OS X Server 10.3.5

Robert C. Best III - [EMAIL PROTECTED]
Instructional Technology Coordinator for N.E.R.I.C. at Potsdam Central School
Phone: (315) 265-2000 x266
Re: hostname-bin.001 Files
Okay, replying to my own post is like talking to one's self, but... I finally found the manual page that describes the two kinds of files. Any reason I want them (the log files, that is)? My databases currently are not replicating (no plans to add it) and have many reads but few writes.

On Sep 28, 2004, at 4:49 AM, Rob Best wrote:
> On my MySQL servers, I've noticed some unexpected files in the /usr/local/mysql/data/ directory. Specifically, I have files like hostname-bin.001, hostname-bin.002, hostname-bin.003, etc., and also innodb.status.410, innodb.status.411, innodb.status.412, etc. What ARE these files (I assume some sort of log files) and do I want/need them around? Some are small and some are quite large. [snip]

Robert C. Best III - [EMAIL PROTECTED]
Instructional Technology Coordinator for N.E.R.I.C. at Potsdam Central School
Phone: (315) 265-2000 x266
Is MySQL Cluster support for Windows ready yet?
Hello,

This is my first posting on this list, and I would like to know whether cluster support is ready for Windows. I've read on the www.mysql.com website that it is not ready yet, but the MySQL Cluster Administrator Guide describes installation procedures for Windows as well. If there is Windows support for clustering, what is the minimum version number that provides it, and is it possible to have some nodes of the cluster run on Linux while other nodes run on Windows?

Thanks in advance for any assistance. Best regards,
--
Vlasis Hatzistavrou, System Administrator, Hellenic Academic Libraries Link (HEAL-Link), Library of Physics Informatics, Aristotle University of Thessaloniki
email: [EMAIL PROTECTED] Phone: +30 2310 998208 Fax: +30 2310 999428
http://www.heal-link.gr
newbie permissions question
Hi All,

I'm learning MySQL, among other programs, to put together a web page with a database in the background. My first goal is simply to produce a crude example for demo purposes; it can be refined a great deal later.

I've set up a small MySQL database on a commercial web hosting server. It prevents me from uploading entire data files into a table, I presume because I lack the FILE privilege on my account. Is it possible for the system administrator to grant me the FILE privilege on an individual database and not the entire account? Would there be security issues for the web service provider in doing this?

One more...! Initially I'm going to have small amounts of data sent to the server by email. Would Perl be the language of choice to write some sort of script to intercept the email, strip out the headers and rubbish, and then insert the data into a MySQL database? If anyone knows of analogous examples of how to do this, I'd be grateful. Thanks a bunch.

Jim
--
Jim Maas jamaas btinternet com
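Perl would certainly work for this; as a language-neutral sketch of the email-handling step, here is the same idea in Python using only the standard library. The message text, table name, and CSV-style body format are all illustrative assumptions; the actual database insert would go through a driver (Perl's DBD::mysql or similar) with placeholders rather than printed SQL:

```python
# Sketch: parse an email, discard the headers, and turn the body
# into INSERT statements. All names here are illustrative.
from email.parser import Parser

mail_text = """\
From: sender@example.com
Subject: data upload

name,value
widget,42
"""

msg = Parser().parsestr(mail_text)   # splits headers from body
body = msg.get_payload()             # everything after the blank line
# skip the header row of the CSV-style body, split the data rows
rows = [line.split(",") for line in body.strip().splitlines()[1:]]
for name, value in rows:
    # real code would use parameterized queries, not string building
    print(f"INSERT INTO samples (name, value) VALUES ('{name}', {value});")
```

A delivery agent (procmail, an alias pipe, etc.) would feed the raw message to a script like this on arrival.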
Re: Retrieving field characteristics
On 27 Sep 2004 at 11:36, Rhino wrote:
> [snip] I really have no idea how Java is getting the information. I haven't tried very many of the metadata methods yet, so I don't know how much Java can actually see and what is hidden. But I do know that it can see the descriptions of the columns. I had assumed that Mark Matthews, the guy who develops the JDBC drivers, had persuaded the other MySQL developers that the JDBC drivers *had* to be able to see the metadata, and got permission to do that well in advance of the developers making the metadata available via the command line in the normal way for SQL catalogs. But that was strictly a wild guess; you may be totally correct in your assumptions.

Hi,

Maybe the Java driver is simply running this query:

  USE database;
  SHOW FIELDS FROM `tablename`;

and caching the result?

Regards
Ian
alias in expression not allowed in ORDER BY?
All tested in MySQL 4.0.x and 4.1.x (MyISAM and InnoDB):

(1) A query with an expression in the ORDER BY clause works well:

  SELECT * FROM `table1` ORDER BY `col` + 1

(2) A query with an alias of a calculated field in the ORDER BY also works:

  SELECT *, SQRT(`col1`) AS blarg FROM `table1` ORDER BY blarg

(3) But the combination - an alias inside an expression in the ORDER BY:

  SELECT *, SQRT(`col1`) AS blarg FROM `table1` ORDER BY blarg + 1

results in:

  #1054 - Unknown column 'blarg' in 'order clause'

The manual covers aliases in WHERE clauses and in GROUP BY, but their use in ORDER BY is a bit of a mystery to me. I don't see why (2) would work and (3) wouldn't!? If necessary you can use the HAVING clause for expressions that aren't allowed in WHERE, but there is no such thing as a post-having-order-by clause ;-)

Regards, Jigal
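A workaround consistent with the examples above is to repeat the full expression in the ORDER BY instead of referencing the alias:

```sql
-- (3) rewritten without the alias inside the ORDER BY expression:
SELECT *, SQRT(`col1`) AS blarg
FROM `table1`
ORDER BY SQRT(`col1`) + 1;
```

The server evaluates the expression directly, so no alias resolution is needed in the ORDER BY clause.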
Oracle query to mysql
Hi guys,

I'm working in a telecom company that uses Oracle for the call statistics. We now export the daily stats to a remote MySQL server. The daily summary table looks like this:

+------------+--------+--------+--------+-------------+----------+--------+--------+
| IMRFLL     | IMR906 | IMRTER | IMRTAR | IMRDUR      | IMRFAC   | IMRCLI | IMRCLL |
+------------+--------+--------+--------+-------------+----------+--------+--------+
| 2004-06-01 | 803xx  | x      | N      | 446.9166572 | 40355904 | 21     | 26     |
| 2004-06-01 | 803xx  | 0      | R      | 9.414       | 40355904 | 21     | 10     |
+------------+--------+--------+--------+-------------+----------+--------+--------+

What I need is a report that joins the table to itself twice, to get the Normal tariff (IMRTAR='N') and the Reduced tariff (IMRTAR='R'). In Oracle this is done using outer joins like this:

SELECT IMRTAB.IMR906 AS NUM906, IMRTAB.IMRFLL AS FLL,
       SUM(IMRTAB.IMRCLL) AS CLL,
       ROUND(SUM(IMRTAB.IMRDSC),2) AS DUR,
       ROUND(SUM(IMRTAB.IMRDSC)/SUM(IMRTAB.IMRCLL),2) AS PRO,
       SUM(IMRTAB1.IMRCLL) AS CLL_N,
       ROUND(SUM(IMRTAB1.IMRDSC),2) AS DUR_N,
       ROUND(SUM(IMRTAB1.IMRDSC)/SUM(IMRTAB1.IMRCLL),2) AS PRO_N,
       SUM(IMRTAB2.IMRCLL) AS CLL_R,
       ROUND(SUM(IMRTAB2.IMRDSC),2) AS DUR_R,
       ROUND(SUM(IMRTAB2.IMRDSC)/SUM(IMRTAB2.IMRCLL),2) AS PRO_R
FROM IMRTAB, IMRTAB IMRTAB1, IMRTAB IMRTAB2        /* here go the table aliases */
WHERE IMRTAB.IMRIDE = IMRTAB1.IMRIDE (+)
  AND IMRTAB.IMRIDE = IMRTAB2.IMRIDE (+)           /* links both aliases by the unique ID */
  AND (IMRTAB1.IMRTAR (+) = 'N')                   /* takes N calls and discards the rest for IMRTAB1 */
  AND (IMRTAB2.IMRTAR (+) = 'R')                   /* takes R calls and discards the rest for IMRTAB2 */
  AND (IMRTAB.IMRFLL BETWEEN '01/09/2004' AND '10/09/2004')
  AND (IMRTAB.IMRCLI = 2584 AND (IMRTAB.IMR906 = 803xx))
GROUP BY IMRTAB.IMR906, IMRTAB.IMRFLL
ORDER BY IMRTAB.IMR906, IMRTAB.IMRFLL

And this query returns something like this (I took out some columns to simplify):

+--------+------------+------+------+-------+-------+-------+-------+
| NUM906 | FLL        | CLL  | DUR  | DUR_N | CLL_N | DUR_R | CLL_R |
+--------+------------+------+------+-------+-------+-------+-------+
| 803xx  | 2004-09-02 |    1 | 4.30 |  4.30 |     1 |       |       |
| 803xx  | 2004-09-01 |    2 | 0.00 |  0.00 |     2 |       |       |
+--------+------------+------+------+-------+-------+-------+-------+

I've been trying to get this query running on MySQL, but I can't get the same results and I kill the DB.

Does anyone know if I can get the same result? Thanks in advance,
MARTIN [EMAIL PROTECTED]
Re: Unable to drop table, error 1051
Bug filed: http://bugs.mysql.com/bug.php?id=5784

Thank you!
--
Egor Egorov, MySQL AB / Ensita.net, [EMAIL PROTECTED]
For technical support contracts, go to https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
Re: Install Problem on Fedora Core 2 w/ ver 4.0.21
Jigal van Hemert [EMAIL PROTECTED] wrote:
> > > 040925 21:50:34 Warning: Asked for 196608 thread stack, but got 126976
> > Databases are installed. You can ignore the above warning.
> Egor,
> 1) A warning usually indicates an error that has limited impact on the system - but still an error!

Well, not always, and not in this case. This warning has been shown in the logs by default since .19 or .20, as far as I remember; it existed in earlier versions too, it just wasn't shown.

> 2) Can you explain this warning and what to do to solve (and not suppress) it? This issue shows up very regularly on this list and nobody here seems to have a clue what causes it...

Because it's a minor warning with a recommendation to ignore it. :) Your question has been forwarded to the developers; I will publish an answer here.

--
Egor Egorov, MySQL AB / Ensita.net
Re: How to pick the oldest row
Thanks Jim, but that doesn't work for me, because the ORDER BY is performed after the records have been selected. At that point I already have the unwanted row. I will try to explain a little bit further:

  SELECT * FROM `mj_tmp`          -- this SELECT is supposed to return all rows,
  WHERE idk NOT IN                -- but only those whose idk is not found in the
                                  -- following result (idk is the primary key)
  (SELECT idk FROM `mj_tmp`       -- here is where I actually need the youngest date,
   GROUP BY Concat(ida,u)         -- so the final result would deliver the oldest row
   HAVING Count( Concat(ida,u) ) > 1 )

This last SELECT should pick the youngest rows, to exclude them from the final result. In other words, I need a subselect whose row selection is inverted, taking column F into account (column F is a date).

On Mon, 2004-09-27 at 12:17, Jim Grill wrote:
> > Hi, I'm trying to fetch the oldest row from a result set in which the resulting rows have duplicates; I need only one row for each duplicate. The problem is that I always need the oldest row, like in the example:
> >
> >   SELECT * FROM `mj_tmp` WHERE idk NOT IN
> >   (SELECT idk FROM `mj_tmp` GROUP BY Concat(ida,u) HAVING Count( Concat(ida,u) ) > 1 )
> >
> > I would like to get the first of these two rows, but I'm always getting the second:
> >
> >   idk  ida    u   d
> >   90   14450  13  2004-09-08
> >   147  14450  13  2004-09-09
> >
> > Is there a way in which I can make sure I always get the oldest row?
>
> Try adding something like ORDER BY d DESC LIMIT 1
>
> Jim Grill
Re: Error 1033, still having major problems
Brian J.S. Miller [EMAIL PROTECTED] wrote:
> Does anyone have any ideas? This is killing me!

In this case I would suggest you buy a support contract and file a support incident. MySQL developers will recover your files manually if there is anything left to recover. To buy a support contract, please follow the link in my signature.

--
Egor Egorov, MySQL AB / Ensita.net
For technical support contracts: https://order.mysql.com/?ref=ensita
select range of matched word in text field
Hello,

I wonder if it is possible to match a word (like with INSTR()) and get as a result just the ten words before and the ten words after the matched word - the way Google presents the matched query. I would need a SELECT with a replacement and a fitting WHERE clause, which could be done with isset. I have ISAM tables... and wonder if this is even possible.

best regards
mathias

_/_/_/ Mathias E. Koch
_/_/_/ Florastrasse 46
_/_/_/ 47799 Krefeld
Development + Communications + Strategies
Tel.: +49 (0)2151 15 05 28 / Mob.: +49 (0)171 64 50 137
mailto:[EMAIL PROTECTED] -NMA- http://www.na8ur.com
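MySQL's string functions (INSTR, SUBSTRING) make word-based context extraction awkward in pure SQL; this kind of snippet is usually built in the application layer. A sketch of that idea, with illustrative names, applied to the text a query returns:

```python
# Sketch: return N words of context on each side of the first match,
# like a search-engine result snippet. Function and variable names
# are illustrative.
def snippet(text, word, context=10):
    words = text.split()
    # compare case-insensitively, ignoring trailing punctuation
    lowered = [w.lower().strip('.,;:!?') for w in words]
    try:
        i = lowered.index(word.lower())
    except ValueError:
        return None                       # word not found in the text
    start = max(0, i - context)
    return " ".join(words[start:i + context + 1])

print(snippet("the quick brown fox jumps over the lazy dog", "jumps", 2))
# -> brown fox jumps over the
```

The WHERE clause would still use LIKE or INSTR() to pick the matching rows; the trimming to ten words happens afterward.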
Re: MySql Distributed database version
Look in the documentation at http://dev.mysql.com/doc/ for the Replication and MySQL Cluster sections. Hopefully one or both will be the answer for you.

--
Egor Egorov, MySQL AB / Ensita.net
Re: hostname-bin.001 Files
Rob Best [EMAIL PROTECTED] wrote:
> Okay, replying to my own post is like talking to one's self, but... I finally found the manual page that describes the two files. Any reason I want them (the log files, that is)? My databases currently are not replicating (no plans to add it) and have many reads but few writes.

If replication is not used and you don't need it, remove log-bin from your configuration, then delete these log files.

--
Egor Egorov, MySQL AB / Ensita.net
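As a sketch of the cleanup (the log file name below is illustrative): besides removing log-bin from my.cnf and restarting, the server can delete the existing binary logs itself, which is safer than removing the files by hand:

```sql
-- Delete all binary logs and start numbering from .001 again:
RESET MASTER;

-- Or, to keep only the most recent logs:
PURGE MASTER LOGS TO 'hostname-bin.003';
```

Letting the server do it keeps the log index file (hostname-bin.index) consistent with the logs that actually exist.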
MySQL Process
Hi,

I run Debian 3.1 with kernel 2.6.8-1-686-smp, and I wonder why, when I do ps -aux, I can see only one MySQL process:

  mysql  4227  0.0 64.6 1849456 1341912 pts/1 S Sep16 4:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql/ --user=mysql --pid-file=/var/lib/mysql//natuna.pid --skip-locking --port=3306 --socket=/var/lib/mysql/mysql.sock

On my other machine, there are many processes:

  mysql 13648 0.1 57.7 1858672 1195352 ? S 20:44 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/tidore.pid --skip-locking --port=3306 --socket=/var/lib/mysql/mysql.sock
  mysql 13649 0.1 57.7 1858672 1195352 ? S 20:44 0:03 (same command line)
  mysql 13650 0.1 57.7 1858672 1195352 ? S 20:44 0:03 (same command line)
  ...

I use the same my.cnf, and SHOW PROCESSLIST shows me more than one process on both machines. Both of these machines are slave DBs. The one that has only one process tends to lag behind a lot. I am not sure whether these things are connected. Any idea? Thank you very much.

--bk
Re: show the row of last action like UPDATE or INSERT
Vincent Ghilardi [EMAIL PROTECTED] wrote:
> Is there a way to show the row of the last action, like UPDATE or INSERT?

Only for INSERT, and only if an AUTO_INCREMENT value was added. See http://dev.mysql.com/doc/mysql/en/Information_functions.html

--
Egor Egorov, MySQL AB / Ensita.net
Re: Install Problem on Fedora Core 2 w/ ver 4.0.21
From: Dennis Skinner [EMAIL PROTECTED]
> http://lists.mysql.com/mysql/168700

*bang on own head* I've been searching through the 15502 messages from this list that I have archived since Jan 16, 2004, but I must've missed this one... Thanks for looking it up!

Regards, Jigal.
RE: SLOW select when ordering
Also note that the server is pretty fast: dual 2 GHz with 2 GB of RAM, using the my-huge.cnf settings. Also... I forgot to mention [EMAIL PROTECTED] I'm using: mysql 4.0.20-standard-log, official MySQL RPM. And here is the EXPLAIN on the SELECT portion of the slow query:

+-------+--------+----------------------------------------+-------------------+---------+
| table | type   | possible_keys                          | key               | key_len |
+-------+--------+----------------------------------------+-------------------+---------+
| AL    | ref    | PRIMARY,CAT_ATTR_INDEX,ID_VAL_ID_INDEX | CAT_ATTR_INDEX    | 8       |
| A     | ref    | PRIMARY,ATTR_VAL_ID_INDEX              | ATTR_VAL_ID_INDEX | 4       |
| AC    | eq_ref | PRIMARY                                | PRIMARY           | 8       |
+-------+--------+----------------------------------------+-------------------+---------+
(continued...)
+-------+---------------------+-------+----------------------------------------------+
| table | ref                 | rows  | Extra                                        |
+-------+---------------------+-------+----------------------------------------------+
| AL    | const,const         | 14077 | Using where; Using temporary; Using filesort |
| A     | AL.attr_val_id      |     1 |                                              |
| AC    | A.attr_val_id,const |     1 |                                              |
+-------+---------------------+-------+----------------------------------------------+

Thanks - John

> I have a very slow query that is taking anywhere from 10 to 90 seconds. The query selects attribute values that are associated with a particular attribute and category and orders them alphabetically. For example, category 1 might have an attribute:
>
>   category: books      category_id: 1
>   attr_name: authors   attr_id: 5
>   OUTPUT (attribute.attr_val, attribute.attr_val_id): Stephen King, 123
>
> In order to associate an attribute with a category, I have a table attr_lookup which references an attr_val_id, attr_id to a particular category. Since I am populating a combobox and some categories have 4 to 6,000 authors, I need to limit the results to the most popular attr_val (the most popular authors in this case). I decided to make a denormalized table, attribute_count(attr_val_id, attr_count), that keeps track of the attribute count.
I tried a temporary table because I thought the sorting would be faster this way, but it takes 10 to 95 seconds:

CREATE TEMPORARY TABLE tmpAttr (KEY(attr_val_id))
SELECT A.attr_val_id, A.attr_val
FROM attr_lookup AS AL
INNER JOIN attribute AS A ON A.attr_val_id = AL.attr_val_id
INNER JOIN attribute_count AS AC ON AC.attr_val_id = A.attr_val_id
    AND AC.attr_id = AL.attr_id
WHERE AL.attr_id = 5 AND AL.category_id = 1
ORDER BY AC.attr_count DESC
LIMIT 200

Using the temporary table, I then select the top authors in alphabetical order (this query is fast - no problems here):

SELECT TA.attr_val, TA.attr_val_id
FROM tmpAttr AS TA
ORDER BY TA.attr_val ASC
LIMIT 200

Any help or suggestions on this would be wonderful! - John

Summary: OBJECTIVE: Select authors (attribute values) for a category, limited to the 200 most popular authors (attribute values), and then order alphabetically.

mysql> DESCRIBE attribute;
+-------------+---------+------+-----+---------+----------------+
| Field       | Type    | Null | Key | Default | Extra          |
+-------------+---------+------+-----+---------+----------------+
| attr_val_id | int(13) |      | PRI | NULL    | auto_increment |
| attr_val    | text    | YES  | MUL | NULL    |                |
+-------------+---------+------+-----+---------+----------------+
(123, 'Stephen King')

mysql> DESCRIBE attr_lookup;
+-------------+---------+------+-----+---------+-------+
| Field       | Type    | Null | Key | Default | Extra |
+-------------+---------+------+-----+---------+-------+
| attr_id     | int(9)  |      | PRI | 0       |       |
| attr_val_id | int(13) |      | PRI | 0       |       |
| category_id | int(9)  |      | PRI | 0       |       |
+-------------+---------+------+-----+---------+-------+
(5,123,1)  # Author Stephen King is in category 1 (Horror Books)

mysql> DESCRIBE attribute_count;
+-------------+---------+------+-----+---------+-------+
| Field       | Type    | Null | Key | Default | Extra |
+-------------+---------+------+-----+---------+-------+
| attr_val_id | int(13) |      | PRI | 0       |       |
| attr_id     | int(9)  |      | PRI | 0       |       |
| attr_count  | int(9)  |      |     | 0       |       |
+-------------+---------+------+-----+---------+-------+
(123,5,35)  # Author Stephen King occurs 35 times

// the following table isn't used in the above queries but it helps to see it
mysql> DESCRIBE compare_attr;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra
Data fields from one database to another
I want to move 2 - 4 fields from a table in one database to a table in another. The field names are not an exact match, but they function identically (i.e. username, password). Whatever I'm using (web development garbage program ;)) won't allow me to connect to two different databases, so I can't do an update or insertion through that means. I'm wondering how I script something like that; I would probably prefer a way to do it on the fly with each new sign-up, or batched on a regular basis, i.e. every hour or every few hours.

Thank you,
Stuart
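If both databases live on the same MySQL server, a single cross-database INSERT ... SELECT can copy the columns even though the field names differ; the database, table, and column names below are illustrative. It can be run by hand, or from a cron job for the hourly batching:

```sql
-- Copy sign-up fields from one database's table into another's.
-- db1.members and db2.accounts are hypothetical names.
INSERT INTO db2.accounts (user, pass)
SELECT username, password
FROM db1.members;
```

This only works within one server; copying between two separate servers needs an export/import step or a script that opens two connections.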
Re: alias in expression not allowed in ORDER BY?
I guess I've never noticed that, but why not just select more than one calculation, such as:

  SELECT sqrt(col) AS blarg, sqrt(col) + 1 AS order_blarg
  FROM table
  ORDER BY order_blarg

It's a workaround, but it should work fine. -John
Re: How to pick the oldest row
There are several things wrong here, which makes it a bit difficult to tell what you want. Let's look at the subquery first:

  SELECT idk FROM `mj_tmp`
  GROUP BY Concat(ida,u)
  HAVING Count(Concat(ida,u)) > 1

First, you shouldn't use CONCAT() in your GROUP BY, as it breaks the possibility of using an index on ida or (ida,u) to speed things up. That would give you

  SELECT idk FROM `mj_tmp`
  GROUP BY ida, u
  HAVING Count(*) > 1

which is equivalent but probably faster. Now that the query is uncluttered by CONCAT(), perhaps you can see the problem: idk is not one of your GROUP BY columns. Many systems wouldn't allow this. MySQL does, but you are warned (http://dev.mysql.com/doc/mysql/en/GROUP-BY-hidden-fields.html) not to select columns whose values are not unique per group. If you do, you get essentially random data. (Actually, I think in the interest of speed you get the values from the first row found per group.)

I'm guessing you simply want the oldest (earliest-dated) row for each (ida,u) group. Then the following should do what you want:

  SELECT * FROM mj_tmp m1
  WHERE d = (SELECT MIN(d) FROM mj_tmp m2
             WHERE m1.ida = m2.ida AND m1.u = m2.u);

This is one of three solutions to this problem documented in the manual (http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html).

Now, I'm not sure why you are counting rows and using NOT IN. I also note that in your first post you had a date column d, but in this post you say it's F. In other words, I'm not certain I've really answered your question. Let us know if the above query doesn't meet your needs.

Michael

Mauricio Pellegrini wrote:
> Thanks Jim, but that doesn't work for me, because the ORDER BY is performed after the records have been selected. At that point I already have the unwanted row.
> I will try to explain a little bit further:
>
>   SELECT * FROM `mj_tmp`        -- supposed to return all rows,
>   WHERE idk NOT IN              -- but only those whose idk is not found in the
>                                 -- following result (idk is the primary key)
>   (SELECT idk FROM `mj_tmp`     -- here is where I actually need the youngest date
>    GROUP BY Concat(ida,u)
>    HAVING Count( Concat(ida,u) ) > 1 )
>
> This last SELECT should pick the youngest rows, to exclude them from the final result. In other words, I need a subselect whose row selection is inverted, taking column F into account (column F is a date).
>
> On Mon, 2004-09-27 at 12:17, Jim Grill wrote:
> > > Hi, I'm trying to fetch the oldest row from a result set in which the resulting rows have duplicates; I need only one row for each duplicate. [snip]
> > >
> > >   idk  ida    u   d
> > >   90   14450  13  2004-09-08
> > >   147  14450  13  2004-09-09
> > >
> > > Is there a way in which I can make sure I always get the oldest row?
> >
> > Try adding something like ORDER BY d DESC LIMIT 1
> >
> > Jim Grill
RE: Data fields from one database to another
Try using MYdbPAL - it's a new, free program that will do the job, plus lots of other goodies: www.it-map.com

Tim Hayes

-----Original Message-----
From: Stuart Felenstein [mailto:[EMAIL PROTECTED]]
Sent: 28 September 2004 08:23
Subject: Data fields from one database to another
> I want to move 2 - 4 fields from a table in one database to a table in another. The field names are not an exact match, but they function identically (i.e. username, password). [snip]
Certifications
Hi! I just passed the MySQL Core Certification ... (Applause!!!)... Thank you! ;)

I was wondering if there is a shortcut for "MySQL certified" - something like BSc, MSc, PhD... you get the idea.

Thanks, Paul
RE: Data fields from one database to another
So it will let me transfer individual fields? Most of the clients allow for data transfer provided the database schema is the same, and then it is the entire record.

Stuart

--- Tim Hayes [EMAIL PROTECTED] wrote:
> Try using MYdbPAL - it's a new, free program that will do the job, plus lots of other goodies: www.it-map.com [snip]
RE: Data fields from one database to another
Yes, absolutely. MYdbPAL has a complete schema-to-schema table and data field mapping capability that will also let you do things like table splits and joins. It also has built-in scripting and data value translation lookups.

Tim

-----Original Message-----
From: Stuart Felenstein [mailto:[EMAIL PROTECTED]]
Sent: 28 September 2004 09:57
Subject: RE: Data fields from one database to another
> So it will let me transfer individual fields? Most of the clients allow for data transfer provided the database schema is the same, and then it is the entire record. [snip]
Re: Oracle query to mysql
Your original Oracle(R) query (slightly reformatted):

SELECT IMRTAB.IMR906 AS NUM906
     , IMRTAB.IMRFLL AS FLL
     , SUM(IMRTAB.IMRCLL) AS CLL
     , ROUND(SUM(IMRTAB.IMRDSC),2) AS DUR
     , ROUND(SUM(IMRTAB.IMRDSC)/SUM(IMRTAB.IMRCLL),2) AS PRO
     , SUM(IMRTAB1.IMRCLL) AS CLL_N
     , ROUND(SUM(IMRTAB1.IMRDSC),2) AS DUR_N
     , ROUND(SUM(IMRTAB1.IMRDSC)/SUM(IMRTAB1.IMRCLL),2) AS PRO_N
     , SUM(IMRTAB2.IMRCLL) AS CLL_R
     , ROUND(SUM(IMRTAB2.IMRDSC),2) AS DUR_R
     , ROUND(SUM(IMRTAB2.IMRDSC)/SUM(IMRTAB2.IMRCLL),2) AS PRO_R
FROM IMRTAB
   , IMRTAB IMRTAB1
   , IMRTAB IMRTAB2                        /* here go the table aliases */
WHERE IMRTAB.IMRIDE = IMRTAB1.IMRIDE (+)
  AND IMRTAB.IMRIDE = IMRTAB2.IMRIDE (+)   /* links both aliases by the unique ID */
  AND (IMRTAB1.IMRTAR (+) = 'N')           /* takes N calls and discards the rest for IMRTAB1 */
  AND (IMRTAB2.IMRTAR (+) = 'R')           /* takes R calls and discards the rest for IMRTAB2 */
  AND (IMRTAB.IMRFLL BETWEEN '01/09/2004' AND '10/09/2004')
  AND (IMRTAB.IMRCLI=2584 AND (IMRTAB.IMR906=803xx))
GROUP BY IMRTAB.IMR906, IMRTAB.IMRFLL
ORDER BY IMRTAB.IMR906, IMRTAB.IMRFLL

My MySQL translation:

SELECT IMRTAB.IMR906 AS NUM906
     , IMRTAB.IMRFLL AS FLL
     , SUM(IMRTAB.IMRCLL) AS CLL
     , ROUND(SUM(IMRTAB.IMRDSC),2) AS DUR
     , ROUND(SUM(IMRTAB.IMRDSC)/SUM(IMRTAB.IMRCLL),2) AS PRO
     , SUM(IMRTAB1.IMRCLL) AS CLL_N
     , ROUND(SUM(IMRTAB1.IMRDSC),2) AS DUR_N
     , ROUND(SUM(IMRTAB1.IMRDSC)/SUM(IMRTAB1.IMRCLL),2) AS PRO_N
     , SUM(IMRTAB2.IMRCLL) AS CLL_R
     , ROUND(SUM(IMRTAB2.IMRDSC),2) AS DUR_R
     , ROUND(SUM(IMRTAB2.IMRDSC)/SUM(IMRTAB2.IMRCLL),2) AS PRO_R
FROM IMRTAB
LEFT JOIN IMRTAB IMRTAB1
       ON IMRTAB.IMRIDE = IMRTAB1.IMRIDE
      AND IMRTAB1.IMRTAR = 'N'             /* N calls only for IMRTAB1 */
LEFT JOIN IMRTAB IMRTAB2
       ON IMRTAB.IMRIDE = IMRTAB2.IMRIDE
      AND IMRTAB2.IMRTAR = 'R'             /* R calls only for IMRTAB2 */
WHERE IMRTAB.IMRFLL BETWEEN '2004-09-01' AND '2004-09-10'
  AND IMRTAB.IMRCLI=2584
  AND IMRTAB.IMR906=803xx
GROUP BY IMRTAB.IMR906, IMRTAB.IMRFLL
ORDER BY IMRTAB.IMR906, IMRTAB.IMRFLL

You were using the Oracle (+) syntax to declare your outer joins.
The equivalent MySQL form is LEFT JOIN ... ON:
http://dev.mysql.com/doc/mysql/en/JOIN.html

I also had to reformat the dates in your WHERE clause to be MySQL formatted: '01/09/2004' (dd/mm/yyyy) = '2004-09-01' (yyyy-mm-dd)
http://dev.mysql.com/doc/mysql/en/Date_and_time_types.html
http://dev.mysql.com/doc/mysql/en/DATETIME.html

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

martin fasani [EMAIL PROTECTED] wrote on 09/28/2004 07:23:38 AM:

Hi guys, I'm working in a telecom company that has Oracle for the call statistics. Now we export the daily stats to a remote MySQL. The daily resume table looks like this:

+------------+--------+--------+--------+-------------+----------+--------+--------+
| IMRFLL     | IMR906 | IMRTER | IMRTAR | IMRDUR      | IMRFAC   | IMRCLI | IMRCLL |
+------------+--------+--------+--------+-------------+----------+--------+--------+
| 2004-06-01 | 803xx  | x      | N      | 446.9166572 | 40355904 |     21 |     26 |
| 2004-06-01 | 803xx  | 0      | R      | 9.414       | 40355904 |     21 |     10 |
+------------+--------+--------+--------+-------------+----------+--------+--------+

What I need is to get a report that joins the table to itself twice, to get the Normal tarif (IMRTAR=N) and the Reduced tarif (IMRTAR=R).
In Oracle this is done using outer joins like this:

SELECT IMRTAB.IMR906 AS NUM906, IMRTAB.IMRFLL AS FLL,
       SUM(IMRTAB.IMRCLL) AS CLL, ROUND(SUM(IMRTAB.IMRDSC),2) AS DUR,
       ROUND(SUM(IMRTAB.IMRDSC)/SUM(IMRTAB.IMRCLL),2) AS PRO,
       SUM(IMRTAB1.IMRCLL) AS CLL_N, ROUND(SUM(IMRTAB1.IMRDSC),2) AS DUR_N,
       ROUND(SUM(IMRTAB1.IMRDSC)/SUM(IMRTAB1.IMRCLL),2) AS PRO_N,
       SUM(IMRTAB2.IMRCLL) AS CLL_R, ROUND(SUM(IMRTAB2.IMRDSC),2) AS DUR_R,
       ROUND(SUM(IMRTAB2.IMRDSC)/SUM(IMRTAB2.IMRCLL),2) AS PRO_R
FROM IMRTAB, IMRTAB IMRTAB1, IMRTAB IMRTAB2 /* here go the table aliases */
WHERE IMRTAB.IMRIDE = IMRTAB1.IMRIDE (+)
  AND IMRTAB.IMRIDE = IMRTAB2.IMRIDE (+) /* links both aliases by the unique ID */
  AND (IMRTAB1.IMRTAR (+) = 'N') /* takes N calls and discards the rest for IMRTAB1 */
  AND (IMRTAB2.IMRTAR (+) = 'R') /* takes R calls and discards the rest for IMRTAB2 */
  AND (IMRTAB.IMRFLL BETWEEN '01/09/2004' AND '10/09/2004')
  AND (IMRTAB.IMRCLI=2584 AND (IMRTAB.IMR906=803xx))
GROUP BY IMRTAB.IMR906, IMRTAB.IMRFLL
ORDER BY IMRTAB.IMR906, IMRTAB.IMRFLL

And this query returns something like this:

+--------+------------+--------+--------+-------+-------+-------+-------+
| NUM906 | FLL        | CLL    | DUR    | DUR_N | CLL_N | DUR_R | CLL_R
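The key subtlety in the translation above is that the per-alias filters (IMRTAR = 'N' / 'R') must go in the ON clause, not the WHERE clause, or the outer join silently degrades to an inner join. A minimal sketch of the difference, using Python's sqlite3 as a stand-in for MySQL (the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE calls (id INTEGER, tarif TEXT)")
cur.executemany("INSERT INTO calls VALUES (?, ?)", [(1, 'N'), (2, 'R')])

# Filter in the ON clause: every left-side row survives; unmatched rows get NULL.
on_rows = cur.execute(
    "SELECT c.id, n.tarif FROM calls c "
    "LEFT JOIN calls n ON c.id = n.id AND n.tarif = 'N' "
    "ORDER BY c.id").fetchall()
# -> [(1, 'N'), (2, None)]

# Filter in the WHERE clause: the NULLs fail the test, so the join turns inner.
where_rows = cur.execute(
    "SELECT c.id, n.tarif FROM calls c "
    "LEFT JOIN calls n ON c.id = n.id "
    "WHERE n.tarif = 'N' ORDER BY c.id").fetchall()
# -> [(1, 'N')]
```

The Oracle (+) marker expressed the same thing inside the WHERE clause, which is why each (+) condition becomes part of the corresponding ON clause in the MySQL rewrite.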
Transfer data from mySql to access
Hi, How do I transfer a MySQL database (e.g. Test Case Manager) to MS Access (Test Director)? The table names are different for the two DBs, but the content is similar. Let's say I have a table CASE in Test Case Manager having 10 columns, and want to transfer individual columns to an Access DB (Test Director) with a table TEST having 5 columns with different names as compared to the ones in Test Case Manager. Thanks, -Kamal. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
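One tool-agnostic way to approach this kind of transfer is an explicit source-column to target-column map: select only the mapped columns from the source and re-insert them under the target names. A sketch, with sqlite3 standing in for both databases (the CASE/TEST table names come from the question above; the column names are invented for illustration):

```python
import sqlite3

src = sqlite3.connect(":memory:")  # stand-in for the Test Case Manager DB
dst = sqlite3.connect(":memory:")  # stand-in for the Test Director DB
src.execute('CREATE TABLE "CASE" (case_name TEXT, case_owner TEXT)')
src.execute('INSERT INTO "CASE" VALUES (?, ?)', ("login works", "kamal"))
dst.execute("CREATE TABLE TEST (test_title TEXT, author TEXT)")

# Explicit source-column -> target-column map; only mapped columns move.
column_map = {"case_name": "test_title", "case_owner": "author"}
src_cols = ", ".join(column_map)
dst_cols = ", ".join(column_map.values())
placeholders = ", ".join("?" for _ in column_map)

for row in src.execute(f'SELECT {src_cols} FROM "CASE"'):
    dst.execute(f"INSERT INTO TEST ({dst_cols}) VALUES ({placeholders})", row)

rows = dst.execute("SELECT test_title, author FROM TEST").fetchall()
# -> [('login works', 'kamal')]
```

Run on a schedule (cron, Windows Task Scheduler), the same loop covers the "batched every hour" case; the Access side would need an ODBC connection rather than sqlite3.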
Re: Images
It's an argument that can go on forever... We have 10's of thousands of images in mysql databases.. very fast/reliable.. easy to replicate, stream, etc.. On Tue, 28 Sep 2004 08:42:13 +0200, Jigal van Hemert [EMAIL PROTECTED] wrote: From: GH [EMAIL PROTECTED] I was wondering how to get images into and out of a Mysql database was told it was possible. I have mysql 4.0.18 The general opinion is that files should be stored in a file system and not a database. There are circumstances that you might want to store binary data in a database. Take a look at the BLOB column types (http://dev.mysql.com/doc/mysql/en/BLOB.html). Regards, Jigal. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Images
Is there anything special in your setup that you did to have such good performance? On Tue, 28 Sep 2004 11:47:26 -0400, DreamWerx [EMAIL PROTECTED] wrote: It's an argument that can go on forever... We have 10's of thousands of images in mysql databases.. very fast/reliable.. easy to replicate, stream, etc.. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Images
Separate/dedicated servers for web/database. All the data was chunked to allow faster streaming/lower overhead (large images/files).. If you have more specific questions I can answer them.. On Tue, 28 Sep 2004 11:49:27 -0400, GH [EMAIL PROTECTED] wrote: Is there anything special in your setup that you did to have such good performance? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
special characters not inserting into database - help
Newbie type question: apostrophes, i.e. a ' single quote, are not getting into the database fields. So if I entered "Sally's Website" in a PHP/MySQL web page entry field, when I look directly into the column with the MySQL command line I see: Sally s Website. Below are the code snips I'm using (I numbered each snip to show order of execution); what should I do?

1.
<?php echo '<form method=post action=populate4.php>
<H3>Url Field</H3> <input type=text name=url_field size=80 maxlength=199>
<H3>Title Field</H3> <input type=text name=title_field size=80 maxlength=199>
<H3>Description Field</H3> <textarea name=descrip_field rows=15 cols=56 maxlength=1000></textarea>
<H3>Submit keywords with a file <i>or</i> with the text area below</H3>
<input TYPE=radio NAME=Type_Submit VALUE=radio_file> Use Key Word File <br>
<input TYPE=radio NAME=Type_Submit VALUE=radio_area Checked> Use Key Word Text Area
<H3>Keyword file</H3> <input type=text name=kw_file size=40 maxlength=80><h4><i> or </i></h4>
<H3>Keyword text area</H3> <textarea name=kw_tarea rows=10 cols=40></textarea>
<H3>Insert into MYSQL</H3> <input type=submit value=Insert>
</form>'; ?>

2.
$title_field = $_POST['title_field'];
$title_field = trim($title_field);
$title_field = substr($title_field, 0, 200);
$title_field = EscapeShellCmd($title_field);
if (!$title_field) { die("You need to put a title in the title field."); }
if (!get_magic_quotes_gpc()) { $title_field = addslashes($title_field); }

3.
mysql_query("INSERT INTO page (page_url, title, descrip) VALUES ('$url_field', '$title_field', '$descrip_field')");

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Images
What do you mean chunked? I figured it would be easier to track about 32,000 images in MySQL than in files, so I set up a test to see what the performance difference is and whether storing in MySQL would actually work. Everything is working and it's a lot easier to keep track of the images in MySQL. I ran some performance tests using Apache's ab, though, and there's a huge performance difference - 38 times faster grabbing the file. It could be my test system here at home. I've asked my ISP to upgrade my production server to PHP5 so that I can run tests from there. There could also be performance hits in the script that grabs the images from MySQL, because I tried using OOP. I was told OOP would be a faster approach, but I'm pretty new to it and may not have done something correctly. I'm going to create some non-OOP scripts to cross-reference the tests. I only have one production server, so I won't be able to separate / dedicate servers. Your mention of chunking sounds promising though. Actually, I'd appreciate any ideas or recommendations you have. Thanks, Ed -Original Message- From: DreamWerx [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 28, 2004 8:54 AM To: GH Cc: Jigal van Hemert; [EMAIL PROTECTED] Subject: Re: Images Separate/dedicated servers for web/database. All the data was chunked to allow faster streaming/lower overhead (large images/files).. If you have more specific questions I can answer them.. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
won't start with mysqld_safe
I had a server hacked last night and they did a lot of damage, but I think I've got just about everything back up as normal...including a brand new MySQL installation (4.0.21). My problem is this: neither the new 4.0.21 nor the old 4.0.20 will start using mysqld_safe, but both will start with mysqld if I don't run it as root. There is no --user=something in my my.cnf which would prevent it from running as root. I also don't understand why mysqld will run but mysqld_safe won't. mysqld_safe starts up and shuts right back down again without logging any errors: 040928 08:12:17 mysqld started 040928 08:12:17 mysqld ended - this is all my log says. Any ideas? Has anyone seen this before? If not, can someone point me to the best way to debug this? They both worked fine before the hack. Thanks, Anne -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: special characters not inserting into database - help
Try mysql_escape_string instead of addslashes. Also, I'm not sure why you did it, but you don't need to include your entire form in the PHP script and echo it. If you have PHP code before and after the form, you could use this approach:

<?php // some pre-form code ?>
  ...the form, as plain HTML...
<?php // some post-form code ?>

You could also try escaping the title all of the time, rather than conditionally with the gpc check - just to cross-reference that potential issue. -Ed -Original Message- apostrophes ie. a ' single quote is not getting into the database fields. So if I entererd in a PHP/MYSQL web page entry field: Sally's Website. When I look directly into the column with the MYSQL cmdline I see: Sally s Website. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
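The stripped apostrophe in the snippets above actually comes from EscapeShellCmd(), which removes shell metacharacters (including quotes) and is meant for shell commands, not SQL. Whatever escaping function is chosen, the safest pattern is to let the driver do the quoting via parameter binding. A sketch of that pattern with sqlite3 standing in for MySQL (the old PHP mysql_* API used string escaping instead, but the principle is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (page_url TEXT, title TEXT, descrip TEXT)")

title = "Sally's Website"  # the apostrophe that was being lost

# Parameter binding: the driver quotes the value itself; nothing is stripped,
# and no addslashes/escape call is needed in application code.
conn.execute("INSERT INTO page (page_url, title, descrip) VALUES (?, ?, ?)",
             ("http://example.com", title, "demo row"))

stored = conn.execute("SELECT title FROM page").fetchone()[0]
# stored comes back with the apostrophe intact
```

Binding also closes the SQL-injection hole that interpolating $title_field directly into the query string leaves open.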
RE: Images
I have heard also that it helps to keep the table with the blobs to a low number of fields - like just a primary key and the blob field - with all your other metadata in a separate table. Especially if you are going to be occasionally doing queries of just the metadata and don't want to always retrieve the image. Comments? I am doing some blob storage in Oracle with Cold Fusion, using some purcha$ed java code to get the image in and out. Anyone else doing ColdFusion with MySQL blobs? How do you retrieve/display the images? --ja On Tue, 28 Sep 2004, Ed Lazor wrote: What do you mean chunked? I figured it would be easier to track about 32,000 images in MySQL than in files, so I setup a test to see what the performance difference is and if storing in MySQL would actually work. Everything is working and it's a lot easier to keep track of the images in MySQL. I ran some performance tests using Apache's ab though and there's a huge performance difference - 38 times faster grabbing the file. I only have one production server, so I won't be able to separate / dedicate servers. Your mention of chunking sounds promising though. Actually, I'd appreciate any ideas or recommendations you have. Thanks, Ed -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: won't start with mysqld_safe
Anne Ramey wrote: The 4.0.21 or the old 4.0.20 will not start using mysqld_safe and will start with mysqld if I don't run it as root. ... mysqld_safe starts up and shuts right back down again without logging any errors. Any ideas? Does your system have a 'mysql' user, and does that user have write permission to the data directory? If the mysql daemon cannot write to the log file, it terminates. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
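The check suggested above - an existing data directory the server account can write to - can be sketched as a small diagnostic. This is only a rough stand-in for what mysqld_safe needs at startup (the real script also checks pid files, log files, and ownership), demonstrated on a throwaway directory rather than /usr/local/mysql/data:

```python
import os
import tempfile

def diagnose_datadir(datadir: str) -> str:
    """Rough stand-in for mysqld_safe's needs: an existing data
    directory the current account can write logs/pid files into."""
    if not os.path.isdir(datadir):
        return "missing"
    if not os.access(datadir, os.W_OK):
        return "not writable"
    return "ok"

# Demonstrate on a throwaway directory standing in for the real datadir.
demo = tempfile.mkdtemp()
status = diagnose_datadir(demo)  # a freshly created temp dir is writable
```

On the real server, the equivalent manual checks would be along the lines of `id mysql` and `ls -ld <datadir>`, restoring ownership with chown if the re-install lost it.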
Re: Images
Grabbing the file was 38 times faster because MySQL was not designed to be a filesystem. There are filesystems out there specifically designed to handle hundreds of thousands of small files. One of the best is ReiserFS http://www.namesys.com If you record the filename in mysql, tracking becomes a non-issue. -- Eric Bergen [EMAIL PROTECTED] On Tue, 28 Sep 2004 09:19:47 -0700, Ed Lazor [EMAIL PROTECTED] wrote: What do you mean chunked? ... Your mention of chunking sounds promising though. Actually, I'd appreciate any ideas or recommendations you have. Thanks, Ed -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Images
-Original Message- I have heard also that it helps to keep the table with the blobs having a low number of fields. Like just a primary key and the blob field. Have all your other metadata in a separate table. Especially if you are going to be occasionally doing queries of just the metadata and don't want to always retrieve the image. Comments? I agree. That's the approach I used. Data on products is stored in a products table, while images are stored in a table called images. The images table has the following fields: ID, ProductID, SizeID, Image. There are 4 images of different resolutions for each product. The SizeID is used to tell which resolution is being requested. For example: Select Image from images where ProductID='8443' AND SizeID='1' ProductID is an INT. SizeID is a SMALLINT. The Image field is a LONGBLOB, because some images are as large as 300k. However, images are generally 15k, 45k, 90k, and 180k, so perhaps LONGBLOB is causing a performance loss? I also plan on using caching tools, so maybe performance tuning on the backend isn't as significant? What do you think? -Ed -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
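The split described above (product metadata in one table, one row per image size in another) can be sketched as follows, with sqlite3 standing in for MySQL; sqlite has no LONGBLOB type, so the column is declared generically, and the product name column is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (ProductID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE images (
    ID        INTEGER PRIMARY KEY,
    ProductID INTEGER,
    SizeID    INTEGER,
    Image     BLOB
);
""")
conn.execute("INSERT INTO products VALUES (8443, 'widget')")
conn.execute("INSERT INTO images (ProductID, SizeID, Image) VALUES (?, ?, ?)",
             (8443, 1, b"\x89PNG-fake-bytes"))

# Metadata-only queries never touch the blob rows at all:
name = conn.execute(
    "SELECT Name FROM products WHERE ProductID = 8443").fetchone()[0]

# The blob is pulled only when a specific size is actually requested:
img = conn.execute(
    "SELECT Image FROM images WHERE ProductID = ? AND SizeID = ?",
    (8443, 1)).fetchone()[0]
```

This is the point of the split: listing or searching products stays cheap because the wide blob rows are only read on an explicit image fetch.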
RE: Data fields from one database to another
Been trying it out. Seems to be very powerful! Thank you, Stuart --- Tim Hayes [EMAIL PROTECTED] wrote: Yes. Absolutely. MYdbPAL has a complete schema-to-schema table and data field mapping capability that will also let you do things like table splits and joins. It also has inbuilt scripting and data value translation lookups. Tim -Original Message- From: Stuart Felenstein [mailto:[EMAIL PROTECTED] Sent: 28 September 2004 09:57 To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: Data fields from one database to another So it will let me transfer individual fields? Most of the clients allow for data transfer provided the database schema is the same, and then it is the entire record. Stuart --- Tim Hayes [EMAIL PROTECTED] wrote: Try using MYdbPAL - it's a new free program that will do the job plus lots of other goodies. www.it-map.com Tim Hayes -Original Message- From: Stuart Felenstein [mailto:[EMAIL PROTECTED] Sent: 28 September 2004 08:23 To: [EMAIL PROTECTED] Subject: Data fields from one database to another I want to move 2 - 4 fields over from a table in one database to a table in another. The field names are not an exact match but they function identically (i.e. username, password). Whatever I'm using - web development garbage program ;) - won't allow me to connect to 2 different databases, so I can't do an update or insertion through that means. Wondering how I script something like that; I'd probably prefer a way to do it on the fly with each new sign up, or batched on a regular basis - aka every hour, every few hours.
Thank you, Stuart -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Images
Thanks for the link =) I understand what you're saying about MySQL not being designed as a filesystem. I've used this same argument with others. It's just that... well, have performance boosts decreased the margin of difference? Especially when web-page caching is being used - don't the images get cached as actual files? If so, the original method of storage would be a moot point. That's what I'm trying to find out. What do you think? -Ed -Original Message- From: Eric Bergen [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 28, 2004 9:48 AM To: Ed Lazor Cc: DreamWerx; [EMAIL PROTECTED] Subject: Re: Images Grabbing the file was 38 times faster because MySQL was not designed to be a filesystem. There are filesystems out there specifically designed to handle hundreds of thousands of small files. One of the best is ReiserFS http://www.namesys.com If you record the filename in mysql tracking becomes a non issue. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Images
Most people make the mistake of using the biggest blob size to store files.. That blob size is capable of storing just HUGE files.. What we do is store files in 64K (medium blob) chunks.. So if the image was say 200K in size, the metadata for the image would be 1 row in a table, and the image data would be 4 rows in the data table: 3 full 64K rows + 1 partially used row. There is a good article/sample code here on the kind of technique we started with: http://php.dreamwerx.net/forums/viewtopic.php?t=6 Using chunked data, apache/php only needs to pull row by row (64K) and deliver to the client, keeping the resultset size low = memory overhead low. The storage servers (mysql storage) I have tested on the LAN, storing and retrieving data from mysql (using an FTP gateway) at rates of 4600K/sec.. which is I think the fastest speed my laptop network card could deliver. That's pretty fast.. Rare day when most internet users can talk to servers at those speeds. On Tue, 28 Sep 2004 09:19:47 -0700, Ed Lazor [EMAIL PROTECTED] wrote: What do you mean chunked? ... Your mention of chunking sounds promising though. Actually, I'd appreciate any ideas or recommendations you have. Thanks, Ed -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
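The 64K-chunk scheme described above (a 200K image becoming one metadata row plus four data rows) can be sketched like this; sqlite3 stands in for MySQL, and the table names and chunk size are illustrative rather than taken from the poster's actual schema:

```python
import sqlite3

CHUNK = 64 * 1024  # 64K pieces, as in the scheme described above

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT, size INTEGER);
CREATE TABLE file_chunks (file_id INTEGER, seq INTEGER, data BLOB,
                          PRIMARY KEY (file_id, seq));
""")

def store(name, payload):
    """One metadata row, then one 64K row per chunk of the payload."""
    cur = conn.execute("INSERT INTO files (name, size) VALUES (?, ?)",
                       (name, len(payload)))
    fid = cur.lastrowid
    for offset in range(0, len(payload), CHUNK):
        conn.execute("INSERT INTO file_chunks VALUES (?, ?, ?)",
                     (fid, offset // CHUNK, payload[offset:offset + CHUNK]))
    return fid

def fetch(fid):
    """Reassemble chunk by chunk; a web script would echo each 64K piece
    as it arrives instead of holding the whole file in one resultset."""
    rows = conn.execute(
        "SELECT data FROM file_chunks WHERE file_id = ? ORDER BY seq", (fid,))
    return b"".join(row[0] for row in rows)

fid = store("photo.jpg", b"x" * 200_000)  # 200K -> 3 full chunks + 1 partial
```

Each SELECT in the delivery loop returns at most 64K of data, which is what keeps per-request memory overhead low regardless of file size.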
Re: Images
It would be quite easy to drop a squid or similar proxy in front to cache the db images in memory and deliver them for a set cache-time or something.. That would be one way to boost performance.. On Tue, 28 Sep 2004 09:55:54 -0700, Ed Lazor [EMAIL PROTECTED] wrote: ... I also plan on using caching tools, so maybe performance tuning on the backend isn't as significant? What do you think? -Ed -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
increasing mysql/table performance..
hi... i've got a basic question (with probably many answers) i'm creating a php/web app that uses mysql tbls. i have a number of pages that do various selects using 'left joins'/'right joins'/etc... i'm getting to the point where most of the basic logic works. now i want to start figuring out how to speed up the app/table interactions... i've read/seen information regarding indexes within a table. i'm curious as to what i can do to speed up the response time/tbl interactions for the users thanks -bruce ps. if need to, i could provide sample sql statements/table defs... -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
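The usual first step for speeding up SELECTs with joins is adding indexes on the columns your JOIN and WHERE clauses actually use, then confirming with EXPLAIN that the index is picked up. A sketch of that workflow using sqlite3 and its EXPLAIN QUERY PLAN as a stand-in (MySQL's EXPLAIN output has a different shape, but the before/after comparison works the same way; table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}@example.com") for i in range(1000)])

query = "SELECT id FROM users WHERE email = ?"
param = ("user500@example.com",)

# Before the index: the plan is a full table scan (it mentions SCAN).
before = conn.execute("EXPLAIN QUERY PLAN " + query, param).fetchall()

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# After the index: the plan mentions SEARCH ... USING INDEX idx_users_email.
after = conn.execute("EXPLAIN QUERY PLAN " + query, param).fetchall()
```

The same comparison in MySQL is `EXPLAIN SELECT ...` before and after `CREATE INDEX`, watching the `type` and `key` columns change from a full scan to an index lookup.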
RE: Images
Ok... I have some more test results. After optimizing as much as I can think of, without using caching, I've gotten things down to a 13x difference. Using Apache's ab performance test, the image comes from a file at an average of 2ms and from the database (using PHP4) at an average of 28ms. I know... it just reiterates what you were already saying, but it sure is great to see actual numbers measuring the difference. Maybe the difference could be even less if I were properly optimizing MySQL. The big question still outstanding, for me at least, is whether web page caching makes the performance difference a moot point. If caching is storing everything as files, we get the best of both worlds. Plus, I think there may be a little bit of a security benefit. A directory has to be marked as writeable so that scripts can store image files. This isn't necessary when using MySQL. Do you agree with the security benefit? Does webpage caching negate the performance difference? -Ed -Original Message- Grabbing the file was 38 times faster because MySQL was not designed to be a filesystem. There are filesystems out there specifically designed to handle hundreds of thousands of small files. One of the best is ReiserFS http://www.namesys.com If you record the filename in mysql, tracking becomes a non-issue.
Using Visio to diagram MySQL db, export SQL
Hi, I have Visio 2002. I am trying to set it up to use MySQL-specific datatypes (e.g., ENUM) and have some success using the User-Defined Types. But what I really want is something that I can export from Visio to actual SQL statements, and I am not succeeding in finding that at all. If Visio is total toast for this purpose, is there a comfortable open source tool that works under windows that will let me/help me visually set up my entity diagrams as I work out my database schema? ari Ari Davidow [EMAIL PROTECTED] http://www.ivritype.com/
RE: Images
Thanks for the article. I'll check it out. Throughput of 4600K/s is great. How's latency? -Ed -Original Message- So if the image was say 200K in size, the metadata for the image would be 1 row in a table, and the image data would be 4 rows in the data table. 3 full 64K rows + 1 partially used row. There is a good article/sample code here on the kind of technique we started with: http://php.dreamwerx.net/forums/viewtopic.php?t=6 Using chunked data, apache/php only needs to pull row by row (64k) and deliver to the client, keeping the resultset size low = memory overhead low. The storage servers (mysql storage) I have tested on the LAN; they store and retrieve data from mysql (using the FTP gateway) at rates of 4600K/sec.. which is I think the fastest speed my laptop network card could deliver. That's pretty fast.. Rare day when most internet users can talk to servers at those speeds.
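The chunking arithmetic above (a ~200K file becomes three full 64K rows plus one partial row) is easy to check. A minimal sketch of the splitting step, independent of any database:

```python
CHUNK = 64 * 1024  # 64K per database row, as described in the thread

def chunk_blob(data: bytes) -> list[bytes]:
    """Split a file's bytes into 64K pieces, one per database row."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

# A 200K file becomes 4 rows: three full 64K chunks plus one partial chunk.
blob = b"x" * (200 * 1024)
rows = chunk_blob(blob)
```

Reassembly on the way out is just concatenating the rows in order, which is why the server only ever holds one 64K chunk in memory per request.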
Re: Images
I thank you all for this discussion... and for the great information that everyone has provided. Next question (which is part of my original): how do I actually get the images into the Blobs... Additionally, what are the names and capacities of the datatypes that I could use? I have been hearing Blob but are there others? On Tue, 28 Sep 2004 13:39:20 -0400, DreamWerx [EMAIL PROTECTED] wrote: It would be quite easy to drop a squid or similar proxy in front to cache the db images in memory and deliver them for a set cache-time or something.. That would be one way to boost performance.. [snip]
importing data into mysql from oracle
Hi, Could any one of you suggest a good way to move the data in Oracle 7.3 to MySQL 4.0 Classic? Thanks, Narasimha -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Tue 9/28/2004 8:58 PM To: martin fasani Cc: [EMAIL PROTECTED] Subject: Re: Oracle query to mysql Your original Oracle(R) query (slightly reformatted): SELECT IMRTAB.IMR906 AS NUM906 , IMRTAB.IMRFLL AS FLL , SUM(IMRTAB.IMRCLL) AS CLL , ROUND(SUM(IMRTAB.IMRDSC),2) AS DUR , ROUND(SUM(IMRTAB.IMRDSC)/SUM(IMRTAB.IMRCLL),2) AS PRO , SUM(IMRTAB1.IMRCLL) AS CLL_N , ROUND(SUM(IMRTAB1.IMRDSC),2) AS DUR_N , ROUND(SUM(IMRTAB1.IMRDSC)/SUM(IMRTAB1.IMRCLL),2) AS PRO_N , SUM(IMRTAB2.IMRCLL) AS CLL_R , ROUND(SUM(IMRTAB2.IMRDSC),2) AS DUR_R , ROUND(SUM(IMRTAB2.IMRDSC)/SUM(IMRTAB2.IMRCLL),2) AS PRO_R FROM IMRTAB , IMRTAB IMRTAB1 , IMRTAB IMRTAB2 /* the table aliases go here */ WHERE IMRTAB.IMRIDE = IMRTAB1.IMRIDE (+) AND IMRTAB.IMRIDE = IMRTAB2.IMRIDE (+) /* links both aliases by the unique ID */ AND (IMRTAB1.IMRTAR (+) = 'N') /* takes N calls and discards the rest for IMRTAB1 */ AND (IMRTAB2.IMRTAR (+) = 'R') /* takes R calls and discards the rest for IMRTAB2 */ AND (IMRTAB.IMRFLL BETWEEN '01/09/2004' and '10/09/2004') AND (IMRTAB.IMRCLI=2584 AND (IMRTAB.IMR906=803xx )) GROUP BY IMRTAB.IMR906,IMRTAB.IMRFLL ORDER BY IMRTAB.IMR906,IMRTAB.IMRFLL My MySQL translation: SELECT IMRTAB.IMR906 AS NUM906 , IMRTAB.IMRFLL AS FLL , SUM(IMRTAB.IMRCLL) AS CLL , ROUND(SUM(IMRTAB.IMRDSC),2) AS DUR , ROUND(SUM(IMRTAB.IMRDSC)/SUM(IMRTAB.IMRCLL),2) AS PRO , SUM(IMRTAB1.IMRCLL) AS CLL_N , ROUND(SUM(IMRTAB1.IMRDSC),2) AS DUR_N , ROUND(SUM(IMRTAB1.IMRDSC)/SUM(IMRTAB1.IMRCLL),2) AS PRO_N , SUM(IMRTAB2.IMRCLL) AS CLL_R , ROUND(SUM(IMRTAB2.IMRDSC),2) AS DUR_R , ROUND(SUM(IMRTAB2.IMRDSC)/SUM(IMRTAB2.IMRCLL),2) AS PRO_R FROM IMRTAB LEFT JOIN IMRTAB IMRTAB1 ON IMRTAB.IMRIDE = IMRTAB1.IMRIDE AND IMRTAB1.IMRTAR = 'N' /* N calls only for IMRTAB1 */ LEFT JOIN IMRTAB IMRTAB2 ON IMRTAB.IMRIDE = IMRTAB2.IMRIDE AND IMRTAB2.IMRTAR = 'R' 
/* R calls only for IMRTAB2 */ WHERE IMRTAB.IMRFLL BETWEEN '2004-09-01' and '2004-09-10' AND IMRTAB.IMRCLI=2584 AND IMRTAB.IMR906=803xx GROUP BY IMRTAB.IMR906,IMRTAB.IMRFLL ORDER BY IMRTAB.IMR906,IMRTAB.IMRFLL You were using the Oracle syntax , ...(+) to declare your outer joins. The equivalent MySQL form is LEFT JOIN... ON http://dev.mysql.com/doc/mysql/en/JOIN.html I also had to reformat the dates in your WHERE clause to be MySQL formatted: '01/09/2004' (dd/mm/yyyy) = '2004-09-01' (yyyy-mm-dd) http://dev.mysql.com/doc/mysql/en/Date_and_time_types.html http://dev.mysql.com/doc/mysql/en/DATETIME.html Shawn Green Database Administrator Unimin Corporation - Spruce Pine martin fasani [EMAIL PROTECTED] wrote on 09/28/2004 07:23:38 AM: Hi guys, I'm working in a telecom company that has Oracle for the call statistics. Now we export the daily stats to a remote mySql. The daily resume table looks like this:
+------------+--------+--------+--------+-------------+----------+--------+--------+
| IMRFLL     | IMR906 | IMRTER | IMRTAR | IMRDUR      | IMRFAC   | IMRCLI | IMRCLL |
+------------+--------+--------+--------+-------------+----------+--------+--------+
| 2004-06-01 | 803xx  | x      | N      | 446.9166572 | 40355904 | 21     | 26     |
| 2004-06-01 | 803xx  | 0      | R      | 9.414       | 40355904 | 21     | 10     |
+------------+--------+--------+--------+-------------+----------+--------+--------+
What I need is to get a report that joins the table to itself two times to get the Normal tarif ( IMRTAR=N) and the Reduced tarif (IMRTAR=R). In Oracle it is done using outer joins like this: SELECT
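The key move in Shawn's translation is that the tariff filter lives in the LEFT JOIN's ON clause, not in WHERE. A minimal sketch of why that matters, using Python's stdlib `sqlite3` as a stand-in for MySQL with a tiny hypothetical `calls` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE calls (id INTEGER, tarif TEXT);
    INSERT INTO calls VALUES (1, 'N'), (2, 'R');
""")

# Filter in ON: every left-hand row survives; unmatched rows come back NULL.
on_rows = conn.execute("""
    SELECT a.id, b.tarif FROM calls a
    LEFT JOIN calls b ON a.id = b.id AND b.tarif = 'N'
    ORDER BY a.id
""").fetchall()

# Filter in WHERE: the NULL rows are discarded, so the outer join
# silently degenerates into an inner join.
where_rows = conn.execute("""
    SELECT a.id, b.tarif FROM calls a
    LEFT JOIN calls b ON a.id = b.id
    WHERE b.tarif = 'N'
""").fetchall()
```

The first query keeps both call records (with NULL where no 'N'-tariff match exists); the second keeps only the matching one, which is exactly what Oracle's `(+) = 'N'` predicates were avoiding.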
RE: increasing mysql/table performance..
I usually create an index for each criteria being checked against in the SQL statements. For example, for this query Select * from products where ProductID = 'aeg8557' I'd create an index on ProductID. The same thing applies if you're pulling data from multiple tables. For this query: Select products.ID, products.Title, categories.Title from products, categories where products.ID = '5' AND products.CategoryID = categories.ID I'd make sure that products.ID, products.CategoryID, and categories.ID all have an index. -Ed -Original Message- i've read/seen information regarding indexes within a table. i'm curious as to what i can do to speed up the response time/tbl interactions for the users
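Ed's advice above can be sketched in a few lines. This uses Python's stdlib `sqlite3` as a stand-in for MySQL (the CREATE INDEX syntax shown is the same in both); the index names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (ID TEXT, CategoryID INTEGER, Title TEXT)")

# One index per column the WHERE clause checks against.
conn.execute("CREATE INDEX idx_products_id ON products (ID)")
conn.execute("CREATE INDEX idx_products_category ON products (CategoryID)")

# Confirm both indexes exist on the table.
indexes = [r[1] for r in conn.execute("PRAGMA index_list('products')")]
```

In MySQL the equivalent check is `SHOW INDEX FROM products`.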
Re: Images
Great .. I don't have any numbers.. but basically there is quite little latency using the FTP interface, data is streamed in and out of mysql on the fly, inserting and selecting blocks of rows.. The FTP server is written in java using JDBC to talk to the database, so it's quite fast. On Tue, 28 Sep 2004 10:57:09 -0700, Ed Lazor [EMAIL PROTECTED] wrote: Thanks for the article. I'll check it out. Throughput of 4600K/s is great. How's latency? -Ed -Original Message- So if the image was say 200K in size, the metadata for the image would be 1 row in a table, and the image data would be 4 rows in the data table. 3 full 64K rows + 1 partially used row. There is a good article/sample code here on the kind of technique we started with: http://php.dreamwerx.net/forums/viewtopic.php?t=6 [snip]
Re: Images
Getting images/files into a blob is quite easy.. examples could be provided based on what language/interface you want to use... The MySQL site has a great list of data types and capacities, etc... do a search for it.. On Tue, 28 Sep 2004 13:58:16 -0400, GH [EMAIL PROTECTED] wrote: I thank you all for this discussion... and for the great information that everyone has provided. Next question (which is part of my original): how do I actually get the images into the Blobs... Additionally, what are the names and capacities of the datatypes that I could use? I have been hearing Blob but are there others? [snip]
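One way to "get an image into a BLOB", as asked above: read the file as bytes and bind it as a query parameter. This sketch uses Python's stdlib `sqlite3` as a stand-in; with MySQL the same pattern works through any driver that supports parameter binding, and the file contents here are placeholder bytes, not a real JPEG:

```python
import os
import sqlite3
import tempfile

# Create a small stand-in "image" file on disk.
tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
tmp.write(b"\xff\xd8fake-jpeg-bytes")
tmp.close()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (name TEXT, data BLOB)")

# Read the file's raw bytes and insert them via a bound parameter,
# so no escaping of binary data is ever needed.
with open(tmp.name, "rb") as f:
    conn.execute("INSERT INTO images (name, data) VALUES (?, ?)",
                 (os.path.basename(tmp.name), f.read()))

stored = conn.execute("SELECT data FROM images").fetchone()[0]
os.unlink(tmp.name)
```

The BLOB family in MySQL 4.0 is TINYBLOB (255 bytes), BLOB (64K), MEDIUMBLOB (16M), and LONGBLOB (4G), per the BLOB manual page linked earlier in the thread.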
Re: Images
Ed brings up a great point... i would rather not have anyone 'searching' for images in directories and such... On Tue, 28 Sep 2004 10:53:01 -0700, Ed Lazor [EMAIL PROTECTED] wrote: Ok... I have some more test results. After optimizing as much as I can think of, without using caching, I've gotten things down to a 13x difference. [snip]
Multiple Insert Statement?
My query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; Is it possible to do a multiple insert statement like so? INSERT INTO wow.candidate_erp (Candidate_ID, Section_ID, Section_Value) INSERT INTO wow.resume_erp (Candidate_ID, Vendor_ID, etc.) SELECT SQL_CALC_FOUND_ROWS * FROM wow.resume r INNER JOIN wow.candidate c; Or do I have to break out the INSERT statements separately? Thanks! - Eve
Re: increasing mysql/table performance..
On Tuesday 28 September 2004 01:02 pm, Ed Lazor wrote: I usually create an index for each criteria being checked against in the SQL statements. [snip] Um.. Are you serious? that's all you do, create indexes? Jeff
Re: increasing mysql/table performance..
one suggestion would be to get the latest MySQL performance tuning book from O'Reilly. it's pretty good. bruce wrote: hi... i've got a basic question (with probably many answers) i'm creating a php/web app that uses mysql tbls. [snip] -- Jeff Mathis, Ph.D. 505-955-1434 Prediction Company [EMAIL PROTECTED] 525 Camino de los Marquez, Ste 6 http://www.predict.com Santa Fe, NM 87505
Re: Transfer data from mySql to access
You should rather do some scripting. It can be a very simple script - use Perl / Python or whatever suits. If the databases were identical, with only the database, table, or column names different, a simple workaround would have been to do a Find and Replace on those names, but your column counts differ as well. HTH Aman Raheja Kamal Ahmed wrote: Hi, How do I transfer a MySQL database (e.g. Test Case Manager) to MS Access (Test Director)? The table names are different for the two DBs, but the content is similar. Let's say I have a table CASE in Test Case Manager having 10 columns, and I want to transfer individual columns to an Access DB (Test Director) with a table TEST having 5 columns with different names as compared to the ones in Test Case Manager. Thanks, -Kamal.
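The "simple scripting" approach suggested above could look something like this: map the source columns onto the differently named target columns and emit CSV that Access can import. All the column names here are hypothetical stand-ins, since the thread never lists the real schemas:

```python
import csv
import io

# Hypothetical mapping: 10-column CASE table -> 5-column TEST table.
COLUMN_MAP = {            # source column -> target column
    "case_id":   "TestID",
    "case_name": "TestName",
    "owner":     "Author",
    "status":    "Status",
    "notes":     "Description",
}

def export_rows(rows, out):
    """Write source rows (dicts) as CSV with the target column names.

    Unmapped source columns are simply dropped.
    """
    writer = csv.DictWriter(out, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in rows:
        writer.writerow({dst: row.get(src, "") for src, dst in COLUMN_MAP.items()})

buf = io.StringIO()
export_rows([{"case_id": "1", "case_name": "login", "owner": "kamal",
              "status": "pass", "notes": "ok", "extra": "dropped"}], buf)
```

The `rows` would in practice come from a `SELECT` against MySQL; the resulting CSV is then pulled into Access with its import wizard.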
RE: Images
I read through the article and ran some more tests. The new scripts and tables provide similar initial latency, but I think the test results show them to be faster overall. When it comes to latency, direct file access is still the champion without caching. I think you made a good point about throughput, which makes MySQL more appealing for storing larger files. That kind of surprised me actually, because I always figured I'd have to store things like PDFs on disk and control access to them by putting them outside of the document root. There's still a question of whether caching provides the edge and at what cost. I haven't set up caching, so I'm not sure if it's complicated or not. It would provide performance boosts to more than just images though, so it seems worthwhile to explore. That's what I'll be exploring next. =) -Ed -Original Message- Most people make the mistake of using the biggest blob size to store files.. That blob size is capable of storing just HUGE files.. What we do is store files in 64K (medium blob) chunks.. So if the image was say 200K in size, the metadata for the image would be 1 row in a table, and the image data would be 4 rows in the data table. 3 full 64K rows + 1 partially used row. [snip]
Index usage in updates
Hello! Could anyone tell me if updates on multiple tables also make use of table indexes? And if they do, does it work in the same general manner as it does for SELECT clauses? Thanks, Arthur
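A quick way to see that an UPDATE's WHERE clause can use an index, just as a SELECT would: inspect the query plan. This sketch uses Python's stdlib `sqlite3` and its `EXPLAIN QUERY PLAN` for illustration only; in MySQL the usual check was to run EXPLAIN on the equivalent SELECT, and the table/index names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.execute("CREATE INDEX idx_t_id ON t (id)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, "x") for i in range(100)])

# Ask the engine how it will locate the rows to update.
plan = conn.execute(
    "EXPLAIN QUERY PLAN UPDATE t SET val = 'y' WHERE id = 42").fetchall()
detail = " ".join(row[-1] for row in plan)
```

The plan's detail mentions the index, i.e. the UPDATE searches via `idx_t_id` rather than scanning the whole table.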
Re: Multiple Insert Statement?
I know that this is off topic and such... but can you explain the Match / Against that you used in your query? i have never seen syntax like that in SQL On Tue, 28 Sep 2004 14:16:56 -0400, Eve Atley [EMAIL PROTECTED] wrote: My query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; [snip]
Re: Multiple Insert Statement?
At 14:16 -0400 9/28/04, Eve Atley wrote: My query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; Is it possible to do a multiple insert statement like so? No. INSERT INTO wow.candidate_erp (Candidate_ID, Section_ID, Section_Value) INSERT INTO wow.resume_erp (Candidate_ID, Vendor_ID, etc.) SELECT SQL_CALC_FOUND_ROWS * FROM wow.resume r INNER JOIN wow.candidate c; Or do I have to break out the INSERT statements separately? Yes. -- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com
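Since a single statement can't feed two tables, the usual workaround is one INSERT ... SELECT per target table, wrapped in a transaction so both copies succeed or fail together. A sketch using Python's stdlib `sqlite3` as a stand-in for MySQL, with the thread's table names but simplified column lists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resume (Candidate_ID INTEGER, Section_ID INTEGER, Section_Value TEXT);
    CREATE TABLE resume_erp (Candidate_ID INTEGER, Section_ID INTEGER, Section_Value TEXT);
    CREATE TABLE candidate (Candidate_ID INTEGER, Last_Name TEXT);
    CREATE TABLE candidate_erp (Candidate_ID INTEGER, Last_Name TEXT);
    INSERT INTO resume VALUES (13103, 1, 'baan experience');
    INSERT INTO candidate VALUES (13103, 'Atley');
""")

with conn:  # one transaction covering both INSERT ... SELECT statements
    conn.execute("""INSERT INTO resume_erp (Candidate_ID, Section_ID, Section_Value)
                    SELECT Candidate_ID, Section_ID, Section_Value
                    FROM resume WHERE Candidate_ID = 13103""")
    conn.execute("""INSERT INTO candidate_erp (Candidate_ID, Last_Name)
                    SELECT Candidate_ID, Last_Name
                    FROM candidate WHERE Candidate_ID = 13103""")
```

(Note that MyISAM, the default table type in MySQL 4.0, does not support transactions; there the two statements simply run back to back.)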
Re: increasing mysql/table performance..
It is possible to have too many indexes. Usually you tune your indexes to fit the majority of your queries. Definitely index the fields that are used to JOIN your tables. Consider multi-column indexes more than lots of single-column indexes, as MySQL will use only one index per table for any query. The order you use to list the columns in a multi-column index makes a huge difference. I have lots of suggested reading for you: http://dev.mysql.com/doc/mysql/en/CREATE_INDEX.html http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html (the whole chapter) Come back and we can help explain whatever didn't make any sense. Shawn Green Database Administrator Unimin Corporation - Spruce Pine Jeff Smelser [EMAIL PROTECTED] wrote on 09/28/2004 02:19:29 PM: [snip]
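Shawn's point about column order in a multi-column index can be demonstrated with a query plan. This sketch uses Python's stdlib `sqlite3`, whose leftmost-prefix behavior matches MySQL's for this purpose; the table and index names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_a_b ON t (a, b)")  # composite index on (a, b)
conn.executemany("INSERT INTO t VALUES (?, ?, 'x')",
                 [(i % 10, i) for i in range(100)])

def plan(sql: str) -> str:
    """Return the query plan details for a statement as one string."""
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on the leading column(s) can use the index...
both = plan("SELECT payload FROM t WHERE a = 3 AND b = 13")
# ...but filtering only on the trailing column cannot.
only_b = plan("SELECT payload FROM t WHERE b = 13")
```

So an index on `(a, b)` serves queries on `a` or on `a AND b`, but a query filtering only on `b` falls back to a table scan, which is why the column order should follow your most common WHERE clauses.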
Re: Multiple Insert Statement?
That's the syntax used to do a full-text search in MySQL. Here's some light reading: http://dev.mysql.com/doc/mysql/en/Fulltext_Search.html Shawn Green Database Administrator Unimin Corporation - Spruce Pine GH [EMAIL PROTECTED] wrote on 09/28/2004 03:14:21 PM: I know that this is off topic and such... but can you explain the Match / Against that you used in your query? i have never seen syntax like that in SQL [snip]
Re: Problem with insert statement; ERROR 1030 at line 188: Got error 28 from table handler
Hi Sebastian, If the new cds_catalog is created with a primary key, this should produce a duplicate key error on the second iteration over the second table: the data selected is from cds_catalog alone, but joining the two tables forms a Cartesian join (roughly n*m rows, minus those where the prodid values match), so the same set of data gets inserted again and again. reg, Eldo. On Mon, 27 Sep 2004 14:07:54 +0200 (CEST), Tobias Asplund [EMAIL PROTECTED] wrote: On Mon, 27 Sep 2004, Sebastian Geib wrote: I have a huge problem with the following insert statement: INSERT INTO cds_catalog SELECT cds_stage.cds_catalog.* FROM cds.cds_catalog, cds_stage.cds_catalog WHERE cds_stage.cds_catalog.prodid <> cds.cds_catalog.prodid; Look at this query, it will create a huge table as a result; as an approximation, the table created will have the number of rows in both tables multiplied with each other. Has anyone else any idea? I tried all Google resources I could get hands on, but they were all about disk space on the tmp partition or repairing the db, neither of which is the problem here. Are you sure 60GB is enough? Look above: say you have 1000 rows in each table, the result could be up to 1,000,000 rows.
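Tobias's row-count arithmetic is easy to verify on a toy example. This sketch uses Python's stdlib `sqlite3` with two small tables; an inequality join yields nearly the full cross product of n*m rows (so two 1000-row tables can produce up to 1,000,000 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (prodid INTEGER)")
conn.execute("CREATE TABLE b (prodid INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(50)])
conn.executemany("INSERT INTO b VALUES (?)", [(i,) for i in range(40)])

# Unconstrained comma join: the full cross product, n * m rows.
cross = conn.execute("SELECT COUNT(*) FROM a, b").fetchone()[0]

# An inequality predicate removes only the matching pairs, so the
# result is still nearly n * m rows - this is the trap in the thread.
unequal = conn.execute(
    "SELECT COUNT(*) FROM a, b WHERE a.prodid <> b.prodid").fetchone()[0]
```

With 50 and 40 rows, the cross product has 2000 rows and the inequality join has 1960, which is why the original INSERT ran the disk out of space.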
RE: increasing mysql/table performance..
You are correct. It's not necessary to change your SQL statements to take advantage of indices. Redefining your tables may not be necessary, but I can see areas where you might see benefits. One example would be the stateVAL field in the first table. It looks like you're storing an abbreviation of each state's name. I'd just create a table for states and replace stateVAL with a StateID int(2). That way you're leveraging the relational database aspect of MySQL. You could probably go through the rest of the tables and look for this type of optimization, but how you model the data is really going to depend on the data itself, what you're trying to accomplish, and how much data you'll be working with. You may spend more time trying to optimize things than is necessary if you're dealing with a small data set... If you're not familiar with what I'm trying to describe, one good approach is to do a Google search using the words "good sql table design". It will bring up a lot of pages that talk about different approaches to optimizing the tables that you're creating. Here's some of the better links that I found when testing the search: http://dev.mysql.com/doc/mysql/en/Optimizing_Database_Structure.html http://www.onlamp.com/pub/a/onlamp/2001/03/06/aboutSQL.html http://www.sql-server-performance.com/database_design.asp I know that there are references to Microsoft's SQL server, but the SQL concepts apply to MySQL as well. -Ed -Original Message- here's a section of my tbl defs.. and the sql that accesses it.. are you saying that i should simply redefine the tbls, to create an index on the column name. as far as i can tell from reviewing mysql/google, i don't have to do anything differently to the sql, in order to use the 'indexes'... is this correct..??? 
tbl defs: create table universityTBL( name varchar(50) not null default '', repos_dir_name varchar(50) not null default '', city varchar(20) default '', stateVAL varchar(5) not null, userID int(10) not null default'', ID int(10) not null auto_increment, primary key (ID), unique key (name) --unique key (repos_dir_name) )type =bdb; create table university_urlTBL( universityID int(10) not null default '', urltype int(5) not null, url varchar(50) not null default '', -- userID int(10) not null default'', actionID int(5) null default '', status int(5) null default '', ID int(10) not null auto_increment, primary key (ID), --unique key (url, urltype), unique key (url, universityID, urltype) )type =bdb; create table parsefileTBL( university_urlID int(5) not null default '', -- filelocation varchar(50) not null default '', name varchar(50) not null default '', -- urltype int(2) not null, userID int(10) not null default '', -- actionID int(5) null default '', start_status int(1) null default '', dev_status int(1) null default '', test_status int(1) null default '', review_status int(1) null default '', prodtest_status int(1) null default '', prod_status int(1) null default '', op_status int(1) null default '', fileversion varchar(50) not null default '', fileID int(10) not null auto_increment, primary key (fileID), unique key (university_urlID, name) )type =bdb; sql : $query_ = select u1.urltype as type, p1.start_status as status from university_urlTBL as u1 right join parsefileTBL as p1 on u1.ID = p1.university_urlID join latestParseStatusTBL as l1 on p1.fileID = l1.itemID where u1.universityID='$id'; $query_ = select u4.username as user from universityTBL as u1 left join university_urlTBL as u2 on u2.universityID = u1.ID right join parsefileTBL as p1 on p1.university_urlID = u2.ID left join user_rolesTBL as u3 on u3.itemID = u2.ID left join users as u4 on u3.userID = u4.user_id where u2.urltype = u3.itemType and u2.urltype = '$type' and u3.process = '$process' and 
u1.ID='$id' group by date asc limit 1; i'm not sure i understand how the 'index' is supposed to speed up table access/interaction... as an example.. if i run the 1st query.. i get: mysql explain select u1.urltype as type, p1.start_status as status from university_urlTBL as u1 right join parsefileTBL as p1 on u1.ID = p1.university_urlID join
RE: increasing mysql/table performance..
Primary Keys and Unique Keys are still Keys. Key is a synonym for INDEX. So, you already have some indexes. What you need to look at is the possibility of defining other regular keys (just plain indexes) to help specifically with some of your common query situations. Some people also add indexes to solve critical performance hits during some very important queries even if they aren't executed often at all. Shawn Green Database Administrator Unimin Corporation - Spruce Pine bruce [EMAIL PROTECTED] wrote on 09/28/2004 03:28:54 PM: shawn... in my tables... i usually try to create a unique key that's used to link with the other tbls... does creating an index within a table that's already consisting of unique rows give any benefits..??? thanks... -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Tuesday, September 28, 2004 12:17 PM To: Jeff Smelser Cc: [EMAIL PROTECTED] Subject: Re: increasing mysql/table performance.. It is possible to have too many indexes. Usually you tune your indexes to fit the majority of your queries. Definitely index the fields that are used to JOIN your tables. Consider multi-column indexes more than lots of single-column indexes as MySQL will use only one index per table for any query. The order you use to list the columns in a multi-column index makes a huge difference. I have lots of suggested reading for you: http://dev.mysql.com/doc/mysql/en/CREATE_INDEX.html http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html (the whole chapter) Come back and we can help explain whatever didn't make any sense. Shawn Green Database Administrator Unimin Corporation - Spruce Pine Jeff Smelser [EMAIL PROTECTED] wrote on 09/28/2004 02:19:29 PM: On Tuesday 28 September 2004 01:02 pm, Ed Lazor wrote: I usually create an index for each criteria being checked against in the SQL statements. For example, for this query Select * from products where ProductID = 'aeg8557' I'd create an index on ProductID. 
The same thing applies if you're pulling data from multiple tables. For this query: Select products.ID, products.Title, categories.Title from products, categories where products.ID = '5' AND products.CategoryID = categories.ID I'd make sure that products.ID, products.CategoryID, and categories.ID all have an index. Um.. Are you serious? thats all you do, create indexes? Jeff [attachment attonuto.dat deleted by Shawn Green/Unimin]
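To make the multi-column-index advice above concrete, here is a small sketch. The table and index names are invented for illustration; only the principle (equality column first, one index used per table) comes from the thread.

```sql
-- Hypothetical table and query; names are for illustration only.
-- For a query such as:
--   SELECT * FROM products WHERE CategoryID = 5 AND Price < 100;
-- a multi-column index with the equality column first lets MySQL
-- narrow on CategoryID and then range-scan on Price:
CREATE INDEX idx_cat_price ON products (CategoryID, Price);

-- Column order matters: this index also serves queries that filter on
-- CategoryID alone, but NOT queries that filter only on Price.
```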
RE: Multiple Insert Statement?
Then I need help getting on the right track here. What I really want to do is something like the following: INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; INSERT INTO wow.candidate_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT * FROM wow.candidate c WHERE c.Candidate_ID = '13103'; Yet pulled from the resultset in this query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; Perhaps the above isn't set up correctly, as when I attempt these queries: INSERT INTO wow.candidate_erp (Candidate_ID, Section_ID, Section_Value) SELECT SQL_CALC_FOUND_ROWS * FROM wow.candidate; INSERT INTO wow.resume_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT SQL_CALC_FOUND_ROWS * FROM wow.resume; ...it returns an error of 1136: Column count doesn't match value count at row 1. The query INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; is impractical when my results are over 400. 
Thanks, Eve -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
query on a column with xml content
Are there any tools or built-in functions for querying the contents of a column that holds an XML file?
RE: Multiple Insert Statement?
At 02:56 PM 9/28/2004, you wrote: Then I need help getting on the right track here. What I really want to do is something like the following: INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; You need to match up the columns in the Insert to the Select statement (they both have to have the same number of columns and same column types are preferred). So explicitly specify the columns in the Select statement as: INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT Candidate_ID, Section_ID, Section_Value FROM wow.resume r WHERE r.Candidate_ID = '13103'; Using * on your Select statements to fill an Insert is dangerous because the table structure could change in the future. Mike INSERT INTO wow.candidate_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT * FROM wow.candidate c WHERE c.Candidate_ID = '13103'; Yet pulled from the resultset in this query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; Perhaps the above isn't set up correctly, as when I attempt these queries: INSERT INTO wow.candidate_erp (Candidate_ID, Section_ID, Section_Value) SELECT SQL_CALC_FOUND_ROWS * FROM wow.candidate; INSERT INTO wow.resume_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, 
Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT SQL_CALC_FOUND_ROWS * FROM wow.resume; ...it returns an error of 1136: Column count doesn't match value count at row 1. The query INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; is impractical when my results are over 400. Thanks, Eve -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
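Mike's advice boils down to keeping the INSERT column list and the SELECT list in lockstep. A minimal sketch, using the three-column case from the thread:

```sql
-- Explicit columns on both sides: same count, same order,
-- compatible types.
INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value)
SELECT r.Candidate_ID, r.Section_ID, r.Section_Value
FROM wow.resume r
WHERE r.Candidate_ID = '13103';
```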
Re: Multiple Insert Statement?
- Original Message - From: Eve Atley [EMAIL PROTECTED] To: Paul DuBois [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, September 28, 2004 3:56 PM Subject: RE: Multiple Insert Statement? Then I need help getting on the right track here. What I really want to do is something like the following: INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; INSERT INTO wow.candidate_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT * FROM wow.candidate c WHERE c.Candidate_ID = '13103'; Yet pulled from the resultset in this query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; Perhaps the above isn't set up correctly, as when I attempt these queries: INSERT INTO wow.candidate_erp (Candidate_ID, Section_ID, Section_Value) SELECT SQL_CALC_FOUND_ROWS * FROM wow.candidate; INSERT INTO wow.resume_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT SQL_CALC_FOUND_ROWS * FROM wow.resume; ...it returns an error of 1136: Column count doesn't match value count at row 1. 
The query INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; is impractical when my results are over 400. The following simple script illustrates that multiple rows can be copied from one table into another by putting a Select from the source table within the Insert for the target table. #use tmp; #Drop/Create source table drop table if exists source; create table if not exists source (idno smallint not null, surname char(10) not null, primary key(id)); #Populate source table insert into source (idno, surname) values (1, 'Adams'), (2, 'Bailey'), (3, 'Collins'); #Display populated source table select * from source; #Drop/Create target table drop table if exists target; create table if not exists target (id smallint not null, name char(10) not null, primary key(id)); #Populate target table insert into target (id, name) select * from source; #Display populated target table select * from target; The Insert/Select (second last statement in the script) will work as long as the column list, which is '(id, name)' in this case, has the same number of columns as is returned by the Select clause. In this case, the source table has two columns so 'select *' returns two columns so we have satisfied this requirement. Also, the two columns identified in the column list must correspond in datatype and size to the columns listed in the select. In this case, 'select * from source' translates into 'select idno, surname from source'; idno is a smallint as is the corresponding column in the target table, id; surname is a char(10) as is the corresponding column in the target table, name. Therefore, the Insert/Select works. The Insert/Select could also have been written 'insert into target(id, name) select idno, surname from source' and still worked. 
However, this would not have worked: insert into target(id, name) select surname, idno from source; because the column names don't correspond in datatype and length: id does not correspond to surname and name does not correspond to idno. I hope this clarifies the use of Insert/Select for you. Rhino -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
backup question: INSERT statements
Hello, I need to back up a MySQL database in such a way that the output is simply a bunch of INSERT statements. I do not want the database schema as output as well: just the insert statements. This is because I already have a script with CREATE statements and would like to rebuild the database from scratch: since I need to modify the schema and table structure, I prefer this approach to using ALTER TABLE, given that the database data size is small and rebuilding from scratch could add some efficiency. So how do I get this with mysqldump? I just want the insert statements and the data. Thanks!!! Neil -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Multiple Insert Statement?
Sorry, there were a few typos in my reply. I have amended the reply at the bottom Rhino - Original Message - From: Rhino [EMAIL PROTECTED] To: [EMAIL PROTECTED]; Paul DuBois [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, September 28, 2004 5:41 PM Subject: Re: Multiple Insert Statement? - Original Message - From: Eve Atley [EMAIL PROTECTED] To: Paul DuBois [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday, September 28, 2004 3:56 PM Subject: RE: Multiple Insert Statement? Then I need help getting on the right track here. What I really want to do is something like the following: INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; INSERT INTO wow.candidate_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT * FROM wow.candidate c WHERE c.Candidate_ID = '13103'; Yet pulled from the resultset in this query: SELECT * FROM wow.resume r INNER JOIN wow.candidate c WHERE r.Section_ID = '1' AND MATCH (r.Section_Value) AGAINST ('+baan' IN BOOLEAN MODE) AND c.Candidate_ID = r.Candidate_ID; Perhaps the above isn't set up correctly, as when I attempt these queries: INSERT INTO wow.candidate_erp (Candidate_ID, Section_ID, Section_Value) SELECT SQL_CALC_FOUND_ROWS * FROM wow.candidate; INSERT INTO wow.resume_erp (Candidate_ID, Vendor_ID, Last_Name, First_Name, Middle_Initial, Condition_Type, Employer, Country_ID, Visa_Status, Dt_Visa, MMDD_Birth, SSN, CSG_Comments, Working, Available, Start_Date, Location, HoldOnPeriod, Relocation, Tech_Ranking, Comm_Ranking, Availability, Cert_Comments, 
Dt_Submitted, Def_Rate, Def_Rate_Unit, Other_Country, Currency_id, Interview_Availability, Interview_Contact, US_Experience, Location_Country) SELECT SQL_CALC_FOUND_ROWS * FROM wow.resume; ...it returns an error of 1136: Column count doesn't match value count at row 1. The query INSERT INTO wow.resume_erp (Candidate_ID, Section_ID, Section_Value) SELECT * FROM wow.resume r WHERE r.Candidate_ID = '13103'; is impractical when my results are over 400. The following simple script illustrates that multiple rows can be copied from one table into another by putting a Select from the source table within the Insert for the target table. #use tmp; #Drop/Create source table drop table if exists source; create table if not exists source (idno smallint not null, surname char(10) not null, primary key(id)); #Populate source table insert into source (idno, surname) values (1, 'Adams'), (2, 'Bailey'), (3, 'Collins'); #Display populated source table select * from source; #Drop/Create target table drop table if exists target; create table if not exists target (id smallint not null, name char(10) not null, primary key(id)); #Populate target table insert into target (id, name) select * from source; #Display populated target table select * from target; The Insert/Select (second last statement in the script) will work as long as the column list, which is '(id, name)' in this case, has the same number of columns as is returned by the Select clause. In this case, the source table has two columns so 'select *' returns two columns so we have satisfied this requirement. Also, the two columns identified in the column list must correspond in datatype and size to the columns listed in the select. In this case, 'select * from source' translates into 'select idno, surname from source'; idno is a smallint as is the corresponding column in the target table, id; surname is a char(10) as is the corresponding column in the target table, name. Therefore, the Insert/Select works. 
The Insert/Select could also have been written 'insert into target(id, name) select idno, surname from source' and still worked. However, this would not have worked: insert into target(id, name) select surname, idno from source; because the column names don't correspond in datatype and length: id does not correspond to surname and name does not correspond to idno. I hope this clarifies the use of Insert/Select for you. === AMENDED REPLY === Most of what I said above is correct but the script had a mistake. (I started editing the script on the fly to improve it but wasn't able to test the amended version due to a temporary glitch on our server. I sent the note anyway, assuming it was correct, and only discovered
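The amended reply is cut off in the archive, but the mistake is visible in the script itself: the CREATE TABLE for `source` declares `primary key(id)` while the column is named `idno`. A corrected version of that statement would presumably read:

```sql
-- Corrected: the primary key must name an existing column (idno, not id).
create table if not exists source
  (idno smallint not null,
   surname char(10) not null,
   primary key(idno));
```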
Re: backup question: INSERT statements
At 19:47 -0230 9/28/04, Neil Zanella wrote: Hello, I need to backup a mysql database in such a way that the output is simply a bunch of insert statements. I do not want the database schema as output as well: just the insert statements. This is because I already have a script with CREATE statements and would like to rebuild the database from scratch: since I need to modify the schema and table structure I prefer this approach than using ALTER TABLE, given that the database data size is small and rebuilding from scratch could add some efficiency. So how do I get this with myqldump. I just want the insert statements and the data. mysqldump --help shows: -t, --no-create-info Don't write table creation info. -- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
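Putting Paul's answer together with the original question, the full invocation would look something like the following; the database and output file names are placeholders.

```shell
# Dump only the data (INSERT statements), no CREATE TABLE statements:
mysqldump --no-create-info mydatabase > data-only.sql

# -t is the short form of the same option:
mysqldump -t mydatabase > data-only.sql
```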
Re: query on a column with xml content
At 14:17 -0600 9/28/04, Elim Qiu wrote: Is there any tools or built-in functions for querying contents of a column that holds a xml file? If you mean using something like the standard string functions, yes. But I suspect you mean using something like XPath, in which case, no. If you mean something else (it's not clear from your question), please provide more detail. -- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
UTF8 columns and MySQL
Hello, I have a table with the following column among others: text VARCHAR(120) CHARACTER SET utf8 The provider I am using however only offers MySQL version 3.23.58, so I cannot use this statement. At first I thought I was not going to be able to store UTF8. So I changed the column to the following: text VARCHAR(120) However, when I submit data from a web interface, much to my surprise, I was able to store some Chinese characters, which take up multiple bytes in UTF-8. Great!!! Can anyone explain why this is and what exactly is going on under the hood? Just curious, Thanks, Neil -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Connection Error
Egor Egorov wrote: Phillip Mangwiro [EMAIL PROTECTED] wrote: I'm getting an Error 1130, (MyDNSName is not allowed to connect to this machine), whenever I try to connect to MySQL server from any client by using its IP or friendly DNS name other than 127.0.0.1 or localhost. This has happened on three machines so far, is there something I forgot to enable/disable? Are you sure that GRANTs are correct? Take a look at http://dev.mysql.com/doc/mysql/en/GRANT.html Here's what happens: C:\mysql\bin> mysqlc -u root -p -h netadmin Enter password: * ERROR 1130: Host 'netadmin.kptc.org.zw' is not allowed to connect to this MySQL server But if I try it without the host parameter I connect successfully: C:\mysql\bin> mysqlc -u root -p Enter password: * Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 31368 to server version: 4.0.20a-nt Type 'help' for help. mysql> \q Bye root has all privileges but isn't allowed to connect when using the host parameter. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
testing
Testing Karl James (TheSaint) [EMAIL PROTECTED] http://theufl.com/
Re: Connection Error
Phillip Mangwiro wrote: Egor Egorov wrote: Phillip Mangwiro [EMAIL PROTECTED] wrote: I'm getting an Error 1130, (MyDNSName is not allowed to connect to this machine), whenever I try to connect to MySQL server from any client by using its IP or friendly DNS name other than 127.0.0.1 or localhost. This has happened on three machines so far, is there something I forgot to enable/disable? There's something you forgot to read. The privilege system is explained in the manual http://dev.mysql.com/doc/mysql/en/Privileges.html. MySQL users are the combination of username and connecting host. Different host = different user. Are you sure that GRANTs are correct? Take a look at http://dev.mysql.com/doc/mysql/en/GRANT.html Here's what happens: C:\mysql\bin> mysqlc -u root -p -h netadmin Enter password: * ERROR 1130: Host 'netadmin.kptc.org.zw' is not allowed to connect to this MySQL server [EMAIL PROTECTED] does not have permission to connect to mysql. That is, there is no mysql account which matches [EMAIL PROTECTED] But if I try it without the host parameter I connect successfully: C:\mysql\bin> mysqlc -u root -p Enter password: * Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 31368 to server version: 4.0.20a-nt Type 'help' for help. mysql> \q Bye Without -h, you connect via unix socket to the localhost server as [EMAIL PROTECTED], which does have an account, and is allowed to connect. root has all privileges but isn't allowed to connect when using the host parameter. [EMAIL PROTECTED] has all privileges. [EMAIL PROTECTED] doesn't yet exist. In any case, the unix socket is more efficient, as it avoids the tcp/ip overhead, so I'm not sure why you'd want to connect via tcp/ip from the same machine as the server. Michael -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
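Following Michael's explanation, the missing account would be created with GRANT. This is a hedged sketch using the hostname from the thread; the password is a placeholder, and granting ALL to a remote root is shown only because that is what the poster was attempting.

```sql
-- Create a root account that is allowed to connect from that host.
-- WARNING: granting ALL to root@<remote host> is a security trade-off;
-- a restricted account is usually preferable.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'netadmin.kptc.org.zw'
    IDENTIFIED BY 'some_password';
FLUSH PRIVILEGES;
```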
Re: ResultSet NotUpdatabelProblem
Hello: I have recently posted the message attached at the bottom of this one to the mailing list. Since then, I have continued to work through the sporadic and troublesome errors that are described in that attached message on otherwise perfectly working and proven code. I now have some insights that I would like to share with the group and solicit their thoughts and ideas as to what the root cause(s) may be. Update and progress: In continuing to work towards identifying the cause of this problem, I have recently modified my Connection URLs to disable the new (4.1) server based PreparedStatements (i.e. added useServerPrepStmts=false to the JDBC URL string). This appears to be a huge key as all of my sporadic errors as described below have magically gone away with NO other changes to the source code, DB, or environment. From this, I am assuming that there are still some remaining bugs in the 4.1.5-gamma DB and/or both the mysql-connector-java-3.1.4-beta-bin.jar and the mysql-connector-java-3.0.15-ga-bin.jar Connector/J drivers related to server PreparedStatements. For the time being, running without the server PreparedStatements has me working again - however, I'd obviously like to take advantage of them as soon as possible for the performance advantage they can offer. Can anyone confirm this theory and/or shed any light on the issue or expected fixes? Thanks again in advance! Todd Original posting of problem details follows: -- Hello: I am having a problem with an (not)updatable ResultSet that I cannot figure out. The problem is not consistent in that the exact SQL statements and tables involved will work one time and then fail the next. There does not appear to be any rhyme or reason as to when or why it fails. Once it does fail, simply retrying (perhaps several times) gets me back to a working state. I have already looked extensively for any sort of non-closed Connection, ResultSet, etc. and have found nothing.
Additionally, I have isolated this in order to ensure there are no Threading considerations. Here is the table involved: CREATE TABLE OSECRegistration ( id VARCHAR(255) NOT NULL, hostName VARCHAR(255) NOT NULL, platformType VARCHAR(255) NOT NULL, registrationTime BIGINT NOT NULL, createdBy VARCHAR(255) NOT NULL, dateCreated BIGINT NOT NULL, modifiedBy VARCHAR(255) NOT NULL, dateModified BIGINT NOT NULL, PRIMARY KEY( id ), UNIQUE ( hostName, platformType ), INDEX id_index ( id ), INDEX PlatformType_index ( platformType ), FOREIGN KEY PlatformType_key ( platformType ) REFERENCES PlatformType ( platformType ) ) TYPE=InnoDB; The SQL that I am using in a PreparedStatement is as follows: String stmt = "select id, registrationTime, modifiedBy, dateModified from OSECRegistration where ( hostName = ? ) AND platformType = ? for update"; I prepare the statement as follows: ps = con.prepareStatement( stmt, ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE ); I then resolve the '?' values and: ResultSet rs = ps.executeQuery(); and walk through the ResultSet modifying the required fields with statements like: rs.updateLong( i, ((Long) newValue).longValue() ); As far as I can tell, all of the above is correct. Yet, I still inconsistently get errors such as the following: [junit] com.mysql.jdbc.NotUpdatable: Result Set not updatable.This result set must come from a statement that was created with a result set type of ResultSet.CONCUR_UPDATABLE, the query must select only one table, and must select all primary keys from that table. See the JDBC 2.1 API Specification, section 5.6 for more details.
[junit] at com.mysql.jdbc.UpdatableResultSet.generateStatements(UpdatableResultSet.java:1770)
[junit] at com.mysql.jdbc.UpdatableResultSet.syncUpdate(UpdatableResultSet.java:1959)
[junit] at com.mysql.jdbc.UpdatableResultSet.updateLong(UpdatableResultSet.java:1343)
[junit] at com.ensuren.ose.common.registration.OSECRegistrationJDBCDao.updateResultSet(OSECRegistrationJDBCDao.java:2298)
[junit] at com.ensuren.ose.common.registration.OSECRegistrationJDBCDao.readForUpdate(OSECRegistrationJDBCDao.java:724)
[junit] at com.ensuren.ose.server.registration.RegistrationBean.register(RegistrationBean.java:140)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:324)
[junit] at org.jboss.ejb.StatelessSessionContainer$ContainerInterceptor.invoke(StatelessSessionContainer.java:683)
[junit] at
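For reference, the workaround Todd describes (disabling server-side prepared statements) is set as a Connector/J URL parameter. A connection URL with it applied would look like the following; host, port, and database are placeholders:

```
jdbc:mysql://localhost:3306/mydb?useServerPrepStmts=false
```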
Update a portion of text in a field
Hello mysql, Something I have been thinking about without any clue on how I can achieve it. I know how to update a field with update table X set field1='My Text' where tableid = 1 Now, say I have in a table X, the field1 with the value : 'I have been searching that functionalities for several days' and I would like to replace 'functionalities' by 'functionality' Would anyone know how to replace JUST one word or a part of a text in a field without using an external program ? Please advise, thanks Best regards, Jacques Jocelyn -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Different default collation in each connection
P.S2 I store data in one table which has no default collation set only UTF-8 charset. In this table I can have data in different collations. Every table does have a default collation: either you specify it, or it comes from the default collation for your database or the server default. You need to either: 1. set your table to use utf8_polish_ci, or, if you need more than one collation in your table, 2. set the collation for the Polish columns to utf8_polish_ci. Otherwise you are probably using the default collation for utf8, which is either utf8_general_ci or utf8_unicode_ci (I forget which). Collation _can_ be set for individual columns, and it sounds like that is what you need to do. This is also important because each column index is collation-specific and you need your columns to be indexed for the right collation. best regards, Jeremy March -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
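Jeremy's two options can be sketched as ALTER TABLE statements; the table and column names here are invented for illustration.

```sql
-- Option 1: set the table default (and convert existing columns).
ALTER TABLE articles
    CONVERT TO CHARACTER SET utf8 COLLATE utf8_polish_ci;

-- Option 2: set the collation only on the Polish-language columns,
-- leaving the rest of the table alone.
ALTER TABLE articles
    MODIFY title VARCHAR(200) CHARACTER SET utf8 COLLATE utf8_polish_ci,
    MODIFY body  TEXT         CHARACTER SET utf8 COLLATE utf8_polish_ci;
```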
Re: UTF8 columns and MySQL
Hello, I have a table with the following column among others: text VARCHAR(120) CHARACTER SET utf8 The provider I am using however only offers MySQL version 3.23.58, so I cannot use this statement. At first I thought I was not going to be able to store UTF8. So I changed the column to the following: text VARCHAR(120) However, when I submit data from a web interface, much to my surprise, I was able to store some Chinese characters, which take up multiple bytes in UTF-8. Great!!! Can anyone explain why this is and what exactly is going on under the hood? With the earlier versions you are able to store utf8 data but you cannot take advantage of case-insensitive character comparisons or correct sort order etc. You should probably change the Chinese columns from VARCHAR to one of the BLOB types so the data is treated as binary. Just curious, Thanks, Neil -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
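A sketch of the suggested change; the table and column names are hypothetical. Note that a BLOB column stores the bytes verbatim, so comparisons become byte-wise (effectively case-sensitive).

```sql
-- On MySQL 3.23, treat the UTF-8 bytes as opaque binary data:
ALTER TABLE articles MODIFY text_col BLOB;
```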
Re: Update a portion of text in a field
Jacques Jocelyn wrote: Hello mysql, Something I have been thinking about without any clue on how I can achieve it. I know how to update a field with update table X set field1='My Text' where tableid = 1 Now, say I have in a table X, the field1 with the value : 'I have been searching that functionalities for several days' and I would like to replace 'functionalities' by 'functionality' Would anyone knows how to replace JUST one word or a part of a text in a field without using an external program ? MySQL has a 'replace' function for this: update table X set field1=replace(field1, 'functionalities', 'functionality') where tableid=1; http://dev.mysql.com/doc/mysql/en/String_functions.html -- Daniel Kasak IT Developer NUS Consulting Group Level 5, 77 Pacific Highway North Sydney, NSW, Australia 2060 T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989 email: [EMAIL PROTECTED] website: http://www.nusconsulting.com.au -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
merging of two tables using temp tables???
hi i have a situation where i create the following tables via two different select sql statements.

+-----+------+-------+------+-----+
| ID  | type | user  | ID   | uID |
+-----+------+-------+------+-----+
|  40 |    1 | admin |  157 |  40 |
| 102 |    1 | admin |  405 | 102 |
| 257 |    1 | admin | 1025 | 257 |
| 267 |    1 | admin | 1065 | 267 |
| 379 |    1 | admin | 1513 | 379 |
+-----+------+-------+------+-----+
5 rows in set (0.00 sec)

+-----+------+--------+
| ID  | type | status |
+-----+------+--------+
|  40 |    1 |      0 |
|  40 |    2 |      0 |
|  40 |    3 |      0 |
|  40 |    4 |      0 |
| 102 |    1 |      0 |
| 102 |    2 |      0 |
| 102 |    3 |      0 |
| 102 |    4 |      0 |
| 257 |    1 |      0 |
| 257 |    2 |      0 |
| 257 |    3 |      0 |
| 257 |    4 |      0 |
| 267 |    1 |      0 |
| 267 |    2 |      0 |
| 267 |    3 |      0 |
| 267 |    4 |      0 |
| 379 |    1 |      0 |
| 379 |    2 |      0 |
| 379 |    3 |      0 |
| 379 |    4 |      0 |
| 394 |    1 |      0 |
| 394 |    2 |      0 |
| 394 |    3 |      0 |
| 394 |    4 |      0 |
| 460 |    1 |      0 |
| 460 |    2 |      0 |
| 460 |    3 |      0 |
| 460 |    4 |      0 |
| 541 |    1 |      0 |
| 541 |    2 |      0 |
| 541 |    3 |      0 |
| 541 |    4 |      0 |
+-----+------+--------+

i'd like to be able to merge/combine the two tables so that i get

+-----+------+--------+------+-----+-----+
| ID  | type | status | user | ID  | uID |
+-----+------+--------+------+-----+-----+
|  40 |    1 |      0 |      |     |     |
|  40 |    2 |      0 |      |     |     |
|  40 |    3 |      0 |      |     |     |
|  40 |    4 |      0 |      |     |     |
| 102 |    1 |      0 |      |     |     |
| 102 |    2 |      0 |      |     |     |
| 102 |    3 |      0 |      |     |     |
| 102 |    4 |      0 |      |     |     |

with the appropriate information in the various columns/rows... i'm looking to be able to fill the resulting table with the information if it's present, or to have nulls/'0' where the information isn't available... i'd prefer to do this in mysql if possible, as my gut tells me the operation would be faster/more efficient in mysql than if i coded this in php/perl... i believe that i'd need to create a temp table based on each select, and then somehow merge the two temp tables, and finally do a select on the resulting table to get the values i need... looking through google/mysql hasn't shed any light on this one... any ideas/thoughts/comments on how i can do this. thanks... -bruce ps...
the actual select sql used to create the 2 tbls are listed: select u4.username as user, u3.itemID as ID, u1.ID as uID from universityTBL as u1 left join university_urlTBL as u2 on u2.universityID = u1.ID right join parsefileTBL as p1 on p1.university_urlID = u2.ID left join user_rolesTBL as u3 on u3.itemID = u2.ID left join users as u4 on u3.userID = u4.user_id where u2.urltype = u3.itemType and u2.urltype = '1' and u3.process = '20' and (u1.ID='40' or u1.ID='102' or u1.ID='257' or u1.ID='267' or u1.ID='379' or u1.ID='394'); select u1.universityID as ID, u1.urltype as type, p1.start_status as status from university_urlTBL as u1 right join parsefileTBL as p1 on u1.ID = p1.university_urlID join latestParseStatusTBL as l1 on p1.fileID = l1.itemID where u1.universityID='40' or u1.universityID='102' or u1.universityID='257' or u1.universityID='267' or u1.universityID='379' or u1.universityID='394' or u1.universityID='460' or u1.universityID='541' or u1.universityID='560' -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
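One way to do this on MySQL 4.0 (which has no subqueries) is exactly the temp-table route bruce suspects: materialize each SELECT with CREATE TEMPORARY TABLE ... SELECT, then LEFT JOIN the results. This is a hedged sketch; the temp-table names are invented and the two SELECT bodies (the full queries above) are abbreviated to "..." for readability.

```sql
-- Materialize each result set (substitute the two full SELECTs above):
CREATE TEMPORARY TABLE t_users
    SELECT u4.username AS user, u3.itemID AS ID, u1.ID AS uID
    FROM ... ;

CREATE TEMPORARY TABLE t_status
    SELECT u1.universityID AS ID, u1.urltype AS type,
           p1.start_status AS status
    FROM ... ;

-- LEFT JOIN keeps every status row and fills user/uID with NULL
-- where no matching row exists:
SELECT s.ID, s.type, s.status,
       IFNULL(u.user, '') AS user, u.uID
FROM t_status s
LEFT JOIN t_users u ON u.ID = s.ID;
```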
Re[2]: Update a portion of text in a field
Hello Daniel,

DK> MySQL has a 'replace' function for this:

DK> update table X set field1 = replace(field1, 'functionalities',
DK> 'functionality') where tableid = 1;

DK> http://dev.mysql.com/doc/mysql/en/String_functions.html

Awesome! Got it, thanks.

Best regards,
Jacques Jocelyn
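The REPLACE() string function quoted above is easy to try out; SQLite ships an identically-named replace(X, Y, Z) function, so the same UPDATE can be sketched through Python's sqlite3 module (the notes table, its columns, and the sample text are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE notes (tableid INT, field1 TEXT)")
cur.execute("INSERT INTO notes VALUES (1, 'new functionalities added')")

# replace one substring with another, in place, for the matching row only
cur.execute("""
    UPDATE notes
    SET field1 = replace(field1, 'functionalities', 'functionality')
    WHERE tableid = 1
""")

result = cur.execute("SELECT field1 FROM notes").fetchone()[0]
print(result)  # -> new functionality added
```

Note that replace() substitutes every occurrence of the search string within the column value, not just the first one.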
Re: SELECT queries on replicate DB server
Thanks, that is a really good answer. Raises a bunch more questions, but they're good ones.

Jim Grill wrote:

> My question is: why would they deem it necessary to use yet a third server?
> Could it be because the main server and the main slave are constantly being
> updated, and they wouldn't want to overload the main slave (which is not on
> as high a horsepower of a box, I know for a fact)? Could it be because the
> subset of tables that they put on the third server are relatively more
> stable, and hence there aren't so many write-throughs, so it can handle the
> complex selects better? All theories gladly accepted...

I'm not too sure about the third server either, but I do have an idea. It wouldn't make much sense if the third server had different data on it. That would tend to make things difficult to keep up to date - or maybe not. It might be a slave that they only connect to the master every so often.

It's very common to have applications that write to one server and read from a slave server - sometimes many slave servers, since there are typically way more reads than writes. Perhaps they use the third server so that if the master or slave servers die there will always be a spare server for reads.

As far as any difference in the tables on the third server... since it is doing selects only, you can start a slave server with a few options to speed things up, like --skip-innodb, --skip-bdb, --low-priority-updates, and --delay-key-write=ALL, which will force the server to use non-transactional MyISAM tables for better performance.

It's really tough to speculate. Every system administrator would probably do it a different way.

Jim Grill
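The write-to-master / read-from-slave split Jim describes is usually handled in the application layer: every statement is routed to one connection or another depending on whether it modifies data. A minimal hypothetical sketch of such a router (the host strings and the naive SELECT check are invented for illustration; real code would also send SELECT ... FOR UPDATE and statements inside transactions to the master):

```python
import itertools

class SplitRouter:
    """Route writes to the master, spread reads round-robin over slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # endless round-robin iterator

    def route(self, sql):
        # anything that is not a plain SELECT must hit the master
        if sql.lstrip().lower().startswith("select"):
            return next(self._slaves)
        return self.master

router = SplitRouter("master:3306", ["slave1:3306", "slave2:3306"])
print(router.route("SELECT * FROM t"))   # slave1:3306
print(router.route("UPDATE t SET x=1"))  # master:3306
print(router.route("  select 1"))        # slave2:3306
```

Adding Jim's hypothetical third server is then just one more entry in the slaves list, which is why read capacity scales so easily this way.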
Re: merging of two tables using temp tables???
Hello bruce,

Wednesday, September 29, 2004, 6:57:34 AM, you wrote:

b> hi
b>
b> i'd like to be able to merge/combine the two tables so that i get
b>
b> +-----+------+--------+------+-----+-----+
b> | ID  | type | status | user | ID  | uID |
b> +-----+------+--------+------+-----+-----+
b> |  40 |    1 |      0 |
b> |  40 |    2 |      0 | .
b> |  40 |    3 |      0 | .
b> |  40 |    4 |      0 |
b> ...
b>
b> with the appropriate information in the various columns/rows...
b>
b> i'm looking to be able to fill the resulting table with the information if
b> it's present, or to have nulls/'0' where the information isn't available...

I was about to say it's easy ;-) then I saw your ps section :-o

anyway, the idea I had may give a way to start:

insert into NEW_TABLE (ID, type, status, user, ID2, uID)
  select t1.ID, t1.type, t1.status, t2.user, t2.ID, t2.uID
  from table1 as t1
  left join table2 as t2 on ...
  where ...

(the second ID column is renamed ID2, since one table can't have two columns with the same name.)

hope that helps.

Best regards,
Jacques Jocelyn
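Jacques' INSERT ... SELECT idea can be exercised end to end: materialize the LEFT-JOINed result into a temp table, then query that. A sketch using SQLite through Python's sqlite3 module as a stand-in for MySQL (the table names table1/table2/merged and all sample rows are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# toy versions of the two source tables from the original question
cur.execute("CREATE TABLE table1 (ID INT, type INT, status INT)")
cur.execute("CREATE TABLE table2 (ID INT, user TEXT, uID INT)")
cur.executemany("INSERT INTO table1 VALUES (?,?,?)",
                [(40, 1, 0), (40, 2, 0), (460, 1, 0)])
cur.execute("INSERT INTO table2 VALUES (40, 'admin', 40)")

# materialize the merged view into a temp table in one statement;
# the second ID column is aliased to ID2 to keep column names unique
cur.execute("""
    CREATE TEMP TABLE merged AS
    SELECT t1.ID, t1.type, t1.status, t2.user, t2.ID AS ID2, t2.uID
    FROM table1 AS t1
    LEFT JOIN table2 AS t2 ON t2.ID = t1.ID
""")

count = cur.execute("SELECT COUNT(*) FROM merged").fetchone()[0]
print(count)  # 3 - every table1 row survives; ID 460 gets NULL user/uID
```

MySQL 4.x spells the same thing CREATE TEMPORARY TABLE merged ... SELECT ..., or an explicit CREATE TEMPORARY TABLE followed by INSERT INTO ... SELECT.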