Adding values returned by GREATEST
Hello, I have a question about MySQL's GREATEST() function. I would like to add the values returned by GREATEST() within a query. For example, table t has six fields with the values A = 1, B = 3, C = 0, D = 0, E = 1 and F = 0, and I run:

SELECT GREATEST(A, B, C) AS G1,
       GREATEST(D, E, F) AS G2,
       (GREATEST(A, B, C) + GREATEST(D, E, F)) AS Total
FROM t;

The result row I expect is 3, 1, 4, but I get 3, 1, 6. However, when I run the query with literal values, I get the correct total of 4:

SELECT (GREATEST(1, 3, 0) + GREATEST(0, 1, 0)) AS Total;

What I noticed is that as I add results from GREATEST(), the total gains 1 for each GREATEST() call it contains. So if I change my query to:

SELECT GREATEST(A, B, C) AS G1,
       GREATEST(D, E, F) AS G2,
       (GREATEST(A, B, C) + GREATEST(D, E, F) + GREATEST(D, E, F)) AS Total
FROM t;

the result is 3, 1, 8. The Total is calculated as GREATEST(A, B, C) = 3 + 1, GREATEST(D, E, F) = 1 + 1, GREATEST(D, E, F) = 1 + 1, which sums to 8. I have searched online for this type of behaviour, but with no luck. Can anyone please explain this? Many thanks, SK
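For readers following along, the arithmetic the poster expects (summing the two row-wise maxima) can be sketched in Python. This only illustrates the intended result of 3, 1, 4; it does not reproduce or explain the MySQL behaviour reported above.

```python
# Row values from the example: A=1, B=3, C=0, D=0, E=1, F=0
row = {"A": 1, "B": 3, "C": 0, "D": 0, "E": 1, "F": 0}

g1 = max(row["A"], row["B"], row["C"])  # GREATEST(A, B, C)
g2 = max(row["D"], row["E"], row["F"])  # GREATEST(D, E, F)
total = g1 + g2                          # plain sum, no extra +1 per call

print(g1, g2, total)  # 3 1 4
```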
Re: Performance boost by splitting up large table?
Hi, You could split the table into two and avoid code changes by creating a view that matches what the code is looking for. I think loading a few fields versus 254 into memory would make a difference, but if your SELECT statement lists only the specific fields you want rather than the whole row (and given that you have an index on the field you use to find the record), I don't think the split itself will make much difference. I will be looking forward to more answers to your question. Regards.

On Wed, May 14, 2014 at 8:05 AM, Larry Martell larry.mart...@gmail.com wrote:

We have a table with 254 columns in it. 80% of the time, a very small subset of these columns are queried. The other columns are rarely, if ever, queried. (But they could be at any time, so we do need to maintain them.) Would I expect to get a marked performance boost if I split my table up into 2 tables, one with the few frequently queried columns and another with the less frequently queried ones? Doing this will require a lot of code changes, so I don't want to go down this path if it won't be beneficial. Can folks here offer their experiences and learned opinions about this? Thanks!

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql
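The view suggestion above can be sketched with Python's sqlite3 module standing in for MySQL (the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hot table: the handful of frequently queried columns.
cur.execute("CREATE TABLE data_hot (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
# Cold table: the rarely queried columns, one row per data_hot row.
cur.execute("CREATE TABLE data_cold (id INTEGER PRIMARY KEY, notes TEXT)")

cur.execute("INSERT INTO data_hot VALUES (1, 'widget', 'active')")
cur.execute("INSERT INTO data_cold VALUES (1, 'rarely needed details')")

# A view that looks like the original wide table, so existing code keeps working.
cur.execute("""CREATE VIEW data AS
               SELECT h.id, h.name, h.status, c.notes
               FROM data_hot h JOIN data_cold c ON c.id = h.id""")

print(cur.execute("SELECT name, notes FROM data WHERE id = 1").fetchone())
# ('widget', 'rarely needed details')
```

Code that selects from `data` is unchanged, while the hot and cold columns live in separate tables.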
Re: Decode Json in MySQL query
Many thanks for the kind replies. I have decoded it in my code, but was just wondering in case I had missed any solution for decoding via the query.

On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman mdyk...@gmail.com wrote: Short answer, no. There is nothing in MySQL to facilitate this. In general, storing structured data as a blob (JSON, CSV, XML fragment, etc.) is an anti-pattern in a relational environment. There are NoSQL solutions that provide the facility: Mongo comes to mind; there are some others, I am sure.

On Thu, Mar 20, 2014 at 2:59 PM, Karr Abgarian a...@apple.com wrote: Hi, you probably want to perform this conversion on your client. There are JSON parser libraries available for Java, PHP and the like. Cheers, Karr

On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula narula...@gmail.com wrote: Hello, I would like to know if there is a way to decode a JSON string stored in one of the fields as text, without using triggers or stored procedures. What I want to do is, within the query, get one row per element of the JSON string. For example, the JSON string is as follows:

[ { "name": "Abc", "age": 20 }, { "name": "Xyz", "age": 18 } ]

and after the query I want the result as:

Name  Age
Abc   20
Xyz   18

Would this be possible? I greatly appreciate any help regarding this matter. Many thanks, Sukhjinder

-- - michael dykman - mdyk...@gmail.com May the Source be with you.
Decode Json in MySQL query
Hello, I would like to know if there is a way to decode a JSON string stored in one of the fields as text, without using triggers or stored procedures. What I want to do is, within the query, get one row per element of the JSON string. For example, the JSON string is as follows:

[ { "name": "Abc", "age": 20 }, { "name": "Xyz", "age": 18 } ]

and after the query I want the result as:

Name  Age
Abc   20
Xyz   18

Would this be possible? I greatly appreciate any help regarding this matter. Many thanks, Sukhjinder
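The client-side decoding suggested in the replies can be sketched in Python with the standard json module; the sample document is the one from the post, with its quoting restored. (As a side note, later MySQL releases added server-side JSON support: JSON functions in 5.7 and JSON_TABLE in 8.0, which postdate this thread.)

```python
import json

doc = '[{"name": "Abc", "age": 20}, {"name": "Xyz", "age": 18}]'

# One output row per element of the JSON array, as the poster wants.
rows = [(item["name"], item["age"]) for item in json.loads(doc)]

for name, age in rows:
    print(name, age)
# Abc 20
# Xyz 18
```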
Re: a Java-connector
It's called Connector/J (the JDBC driver); please see: http://dev.mysql.com/doc/connector-j/en/connector-j-usagenotes-connect-drivermanager.html Good luck.

On Wed, Dec 11, 2013 at 10:15 AM, h...@tbbs.net wrote: I have MySQL 5.5.8 under Windows Vista, and I am minded to write Java programs to talk to the server. I believe that a connector is needed for that, something with ODBC in the name -- which version is best for my use?
Re: Check for numeric values
Hi, You could use a regular expression to do this. Here are examples, with a reference link that might help. If your data is 'test', 'test0', '111test', '111':

SELECT * FROM myTable WHERE col1 REGEXP '^[0-9]+$';
Result: '111'

In a regex, ^ means beginning and $ means end.

SELECT * FROM myTable WHERE col1 REGEXP '^[0-9]+\.?[0-9]*$';  -- also matches decimals such as 123.12

But to select all records where a number exists anywhere in the value:

SELECT * FROM myTable WHERE col1 REGEXP '[0-9]+';
Result: 'test0', '111test' and '111'

http://stackoverflow.com/questions/5064977/detect-if-value-is-number-in-mysql Regards.

On Tue, Oct 8, 2013 at 7:53 AM, Mike Blezien mick...@frontiernet.net wrote: Hello, I need to select some data from a table where a column is not a numerical value, but I can't seem to get the right syntax for this. Basically we need to do something like this:

SELECT * FROM tablename WHERE column_name (IS NOT A NUMERIC VALUE)

What is the correct syntax to accomplish this? MySQL version: 5.5. Thank you, Mike (mickalo) Blezien, Thunder Rain Internet Publishing, Custom Programming & Web Hosting Services, http://www.thunder-rain.com/
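The three patterns can be checked against the sample data with Python's re module; for these simple character-class patterns, Python's regex syntax behaves closely enough to MySQL's REGEXP for illustration.

```python
import re

data = ["test", "test0", "111test", "111", "123.12"]

# Entirely digits (the poster's "is numeric" check).
print([s for s in data if re.fullmatch(r"[0-9]+", s)])           # ['111']
# Integer or decimal.
print([s for s in data if re.fullmatch(r"[0-9]+\.?[0-9]*", s)])  # ['111', '123.12']
# Contains at least one digit anywhere (REGEXP '[0-9]+').
print([s for s in data if re.search(r"[0-9]+", s)])              # ['test0', '111test', '111', '123.12']
```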
Re: Problem with having
Hi, In your second query you have MIN(date_time), but you are talking about the maximum. So your GROUP BY query is actually pulling the minimum date for this recipe. Regards.

On Mon, Sep 23, 2013 at 3:35 PM, Larry Martell larry.mart...@gmail.com wrote: I want to find the rows from a table that have the max date_time for each recipe. I know I've done this before with GROUP BY and HAVING, but I can't seem to get it to work now: I get the correct row id, but not the correct date_time. I'm sure I'm missing something simple. For the purpose of showing an example, I'll use one recipe, 19166. For that recipe, here's the row I would want:

mysql> SELECT id, MAX(date_time) FROM data_cstmeta WHERE recipe_id = 19166;
+---------+---------------------+
| id      | MAX(date_time)      |
+---------+---------------------+
| 1151701 | 2013-02-07 18:38:13 |
+---------+---------------------+
1 row in set (0.01 sec)

I would think this query would give me that; it gives me the correct id, but not the correct date_time:

mysql> SELECT id, date_time AS MaxDateTime FROM data_cstmeta
       WHERE recipe_id = 19166 GROUP BY recipe_id HAVING MIN(date_time);
+---------+---------------------+
| id      | MaxDateTime         |
+---------+---------------------+
| 1151701 | 2010-12-13 16:16:55 |
+---------+---------------------+
1 row in set (0.01 sec)

How can I fix this? Thanks! -larry
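One standard fix for this groupwise-maximum problem, sketched here against SQLite with invented sample rows (the id of the older row is hypothetical), is to join the table to a per-recipe MAX() subquery so the whole matching row comes back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE data_cstmeta (id INTEGER, recipe_id INTEGER, date_time TEXT)")
cur.executemany("INSERT INTO data_cstmeta VALUES (?, ?, ?)", [
    (1151701, 19166, "2013-02-07 18:38:13"),
    (1000000, 19166, "2010-12-13 16:16:55"),  # hypothetical older row
])

# Join each recipe to its maximum date_time to pull the full matching row.
query = """
SELECT d.id, d.date_time
FROM data_cstmeta d
JOIN (SELECT recipe_id, MAX(date_time) AS max_dt
      FROM data_cstmeta GROUP BY recipe_id) m
  ON m.recipe_id = d.recipe_id AND m.max_dt = d.date_time
"""
print(cur.execute(query).fetchall())  # [(1151701, '2013-02-07 18:38:13')]
```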
Question regarding creating a query
Hello, I have a question about creating a query, as follows. I have several databases (all with the same structure) that I want to query. For instance: db1, db2 and db3 all have table tb1 with fields a and b, and table tb2 with fields flag1 and flag2. I want to query and get field a from tb1 for all the databases. One way to do this is a UNION, i.e.:

SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
UNION
SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
UNION
SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'

But the problem here is that if I add more databases, I have to update the query every time. In addition to the above, I also have a database called common, which has a table called dbnames with a field name that keeps the names of all the databases I have (db1, db2, db3). So what I would like to do is query the common database to get the names of the databases, and then run the SELECT query on each. Here is the pseudocode of what I want to do:

for each (SELECT name AS DbName FROM common.dbnames)
    (SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y') AS CONCAT(DbName, '-', a)

Could you please advise whether this is possible and, if yes, how it can be achieved? Many thanks, SK
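Since a plain SELECT cannot iterate over database names, one common workaround is to build the UNION text in the application. A minimal Python sketch, assuming the database list has already been fetched from common.dbnames:

```python
db_names = ["db1", "db2", "db3"]  # in practice: SELECT name FROM common.dbnames

# One SELECT per database, tagging each value with its database of origin.
parts = [
    f"SELECT CONCAT('{db}', '-', a) AS a FROM {db}.tb1 "
    f"WHERE (SELECT flag1 FROM {db}.tb2) = 'y'"
    for db in db_names
]
query = "\nUNION\n".join(parts)
print(query)
```

Adding a database then only requires adding a row to common.dbnames; the query text regenerates itself.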
Re: MySQL server has gone away
Hi Aveek, I would like to suggest some points here. You could try increasing max_allowed_packet to 128MB. Though you think 16MB is enough, increasing it is not going to hurt the server; it is a dynamic variable, so you can revert the change if you want to. Also make sure log_warnings=2, which will put more error messages in the error log. These are the basic steps for this error; if they don't help, then we should check other options. Regards, Vinodh.k

On Fri, May 27, 2011 at 11:05 AM, Claudio Nanni claudio.na...@gmail.com wrote: 'MySQL server has gone away' can also be a network problem, just to increase complexity :)

On May 26, 2011 11:03 PM, Prabhat Kumar aim.prab...@gmail.com wrote: I have had experience with this type of error; it was due to a lack of resources available to MySQL -- max connections exceeded on the server. You can write a simple script which grabs and stores the output of SHOW PROCESSLIST every minute, and later you can investigate the issue.

On Wed, May 25, 2011 at 3:34 AM, Aveek Misra ave...@yahoo-inc.com wrote: Nothing in the error log or the slow query log suggests that the query size is too large or that it is taking too much time to execute. Thanks, Aveek

On May 25, 2011, at 3:53 PM, Rik Wasmus wrote, quoting Aveek: "failed to execute SELECT * FROM cluster_info WHERE cluster = ?: MySQL server has gone away" -- the error "MySQL server has gone away" is the error from the db handle. Can anyone give me any pointers on why that happens? I looked up the documentation in the MySQL docs, and the most common reasons seem to be that the query size is very large or that there is a timeout. Neither seems to be a probable cause: max_allowed_packet on the server is 16 MB and, as can be seen in the query above, the query is very small and nowhere near the size limit. We also have a timeout setting (wait_timeout) of 10 minutes, and the above query cannot possibly take that amount of time.
In any case, given the same query, it executes correctly 99% of the time (so to speak). It fails intermittently with the above error. What could the reason possibly be? I also looked at the number of connections on the server at that time (around ~80), and it is much less than the limit we have (the limit is 1000). How can I extract more information when this happens? This error message is unhelpful, since it does not tell me what exactly happened. The server version is 5.1.45.

Can you access the error log of the server? That can probably shed more light on the issue... -- Rik Wasmus

-- Best Regards, Prabhat Kumar MySQL DBA My Blog: http://adminlinux.blogspot.com My LinkedIn: http://www.linkedin.com/in/profileprabhat
Re: WHERE does not work on calculated view field
Hi Daniel, Could you check 'myview' once again? I think you meant to create the view as follows:

CREATE VIEW `myview2` AS
SELECT a.*, EMPTY_STRING(b.`Name`) AS `TypeName`
FROM `mytable` a LEFT JOIN `types` b ON a.`ID` = b.`ID`;

Now your SELECT queries will give results as follows:

mysql> SELECT COUNT(*) FROM `myview2` WHERE `TypeName` LIKE '%';
+----------+
| COUNT(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM `myview2` WHERE `TypeName` IS NOT NULL;
+----------+
| COUNT(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM `myview2` WHERE `TypeName` IS NULL;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

Regards, Vinodh.k MySQL DBA

On Sat, Apr 23, 2011 at 1:50 AM, Daniel Kraft d...@domob.eu wrote: Hi Carsten,

On 04/22/11 22:11, Carsten Pedersen wrote: On 22.04.2011 21:37, Daniel Kraft wrote:

DROP DATABASE `test`;
CREATE DATABASE `test`;
USE `test`;

CREATE TABLE `mytable` (`ID` SERIAL, `Type` INTEGER UNSIGNED NULL, PRIMARY KEY (`ID`));
INSERT INTO `mytable` (`Type`) VALUES (NULL);
CREATE TABLE `types` (`ID` SERIAL, `Name` TEXT NOT NULL, PRIMARY KEY (`ID`));
INSERT INTO `types` (`Name`) VALUES ('Type A'), ('Type B');

DELIMITER |
CREATE FUNCTION `EMPTY_STRING` (value TEXT) RETURNS TEXT DETERMINISTIC
BEGIN
  RETURN IF(value IS NULL, '', value);
END|
DELIMITER ;

CREATE VIEW `myview` AS
SELECT a.*, EMPTY_STRING(b.`Name`) AS `TypeName`
FROM `mytable` a LEFT JOIN `types` b ON a.`Type` = b.`ID`;

SELECT COUNT(*) FROM `myview` WHERE `TypeName` IS NULL;
SELECT COUNT(*) FROM `myview` WHERE `TypeName` IS NOT NULL;
SELECT COUNT(*) FROM `myview` WHERE `TypeName` LIKE '%';

(I tried to simplify my problem as far as possible.) When I run this against MySQL 5.0.24a, I get 0 three times as output from the SELECTs at the end -- shouldn't at least one of them match the single row? (Preferably the first and third ones.) What am I doing wrong here? I have no clue what's going on... Thanks a lot!

Hint: What's the output of SELECT * FROM `myview`?
I get:

mysql> select * from myview;
+----+------+----------+
| ID | Type | TypeName |
+----+------+----------+
|  1 | NULL |          |
+----+------+----------+
1 row in set (0.00 sec)

mysql> select *, `TypeName` IS NOT NULL from myview;
+----+------+----------+------------------------+
| ID | Type | TypeName | `TypeName` IS NOT NULL |
+----+------+----------+------------------------+
|  1 | NULL |          |                      1 |
+----+------+----------+------------------------+
1 row in set (0.00 sec)

Should this tell me something? To me, it looks as expected and fine. Cheers, Daniel -- http://www.pro-vegan.info/ Done: Arc-Bar-Cav-Kni-Ran-Rog-Sam-Tou-Val-Wiz To go: Hea-Mon-Pri
Re: Duplicate key name when importing mysql dump file
Hi Jason, You may have to run ANALYZE TABLE ... for the particular table for which you are facing the error, so it will rebuild the indexes. That would be the best way to preserve your data. We can use the method Mr. Isart suggested, but it will ignore the error, which can also lead to data loss. Regards, Vinodh.k

On Sat, Jun 20, 2009 at 12:19 AM, Isart Montane isart.mont...@gmail.com wrote: Hi Jason, if you run mysql with -f it will ignore any errors and continue importing:

cat aac.sql | mysql -f -u root AAC

Isart

On Wed, Jun 17, 2009 at 8:59 PM, Jason Novotny jason.novo...@gmail.com wrote: Hi, I'm trying to import a dump file like so:

cat aac.sql | mysql -u root AAC

It all runs fine until I get something like:

ERROR 1061 (42000) at line 5671: Duplicate key name 'FK_mediaZip_to_zipSet'

Is there a way I can tell it to ignore or replace the key? Thanks, Jason
Re: load data into temporary table
Thank you, but the real problem occurs when you don't know the schema of the table in advance. If data.txt has two columns, how can I still load it into a temporary table? I'm asking because I'd like to add an import-CSV feature to a web application. I know that you can LOAD DATA INFILE into a table without specifying the schema of that table, but it does not look like you can LOAD DATA INFILE into a temporary table. Thank you, Alex

2009/5/19 Janek Bogucki janek.bogu...@studylink.com: Hi,

mysql> create temporary table t(i int);
mysql> \! echo 1 > /tmp/data.txt
mysql> load data infile '/tmp/data.txt' into table t;
Query OK, 1 row affected (0.00 sec)
Records: 1  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from t;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)

Best Regards, -Janek, CMDEV 5.0. StudyLink. Helping People Realise Their Potential. http://studylink.com

On Tue, 2009-05-19 at 02:57 +0100, Alex K wrote: Hello, Would anyone know how to LOAD DATA INFILE into a temporary table? Thank you, Alex
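When the schema is unknown until runtime, the usual workaround is to read the header row first and generate the CREATE statement from it. A sketch using Python's csv module, with SQLite standing in for MySQL; the column names come from the file, and everything is loaded as text for simplicity:

```python
import csv
import io
import sqlite3

# A stand-in for data.txt, whose column count is unknown until runtime.
csv_text = "name,age\nAbc,20\nXyz,18\n"

reader = csv.reader(io.StringIO(csv_text))
header = next(reader)  # first row gives us the column names

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Build the CREATE TEMPORARY TABLE statement from the header row.
cols = ", ".join(f'"{c}" TEXT' for c in header)
cur.execute(f"CREATE TEMPORARY TABLE t ({cols})")

placeholders = ", ".join("?" for _ in header)
cur.executemany(f"INSERT INTO t VALUES ({placeholders})", reader)

print(cur.execute("SELECT * FROM t").fetchall())
# [('Abc', '20'), ('Xyz', '18')]
```

With MySQL, the same idea applies: inspect the file's header in the application, issue a generated CREATE TEMPORARY TABLE, then LOAD DATA or batch-insert the remaining rows.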
load data into temporary table
Hello, Would anyone know how to LOAD DATA INFILE into a temporary table? Thank you, Alex
Happy New Year
Hi All, New Year's Eve is the perfect time to bid goodbye to the year 2008 and welcome the new year 2009. It's the time to rock and roll with friends and family. May the New Year bring with it new hopes and new opportunities to explore! Wishing you all a happy and prosperous New Year! Regards, Arumugam.K
Re: slow in statement?
Hi Ananda, The indexes are present, and I'm passing no more than 10 values. Alex

2008/11/10 Ananda Kumar [EMAIL PROTECTED]: It should not cause any issues, unless you're passing too many values in id IN (1, 2, ... n). Are the indexes present?

On 11/10/08, Alex K [EMAIL PROTECTED] wrote: Hello, It seems to me that the statement id IN (id1, id2 ... idn) is much slower than id=id1 OR id=id2 ... OR id=idn. Or am I doing something wrong? Thank you, Alex
slow in statement?
Hello, It seems to me that the statement id IN (id1, id2 ... idn) is much slower than id=id1 OR id=id2 ... OR id=idn. Or am I doing something wrong? Thank you, Alex
Re: slow in statement?
Here we go: http://pastebin.com/m2439985d

Replace $company_ids with the list of ids from 'companies', $neg_company_ids with -1 * $company_ids, $location_ids with the list of location ids from 'locations', and $all_company_ids with the list of ids from 'all_companies'. The reason I'm using $neg_company_ids is that I want to differentiate between ids from 'companies' and ids from 'all_companies'. What I noticed is that when I replace the IN statements with ORs, it is actually faster. Thank you so much.

2008/11/10 Ananda Kumar [EMAIL PROTECTED]: Can you please show me the explain plan for this SQL and also the table structure?

EXPLAIN select statement;
DESC table_name;

On 11/10/08, Alex K [EMAIL PROTECTED] wrote: Hi Ananda, The indexes are present, and I'm passing no more than 10 values. Alex

2008/11/10 Ananda Kumar [EMAIL PROTECTED]: It should not cause any issues, unless you're passing too many values in id IN (1, 2, ... n). Are the indexes present?

On 11/10/08, Alex K [EMAIL PROTECTED] wrote: Hello, It seems to me that the statement id IN (id1, id2 ... idn) is much slower than id=id1 OR id=id2 ... OR id=idn. Or am I doing something wrong? Thank you, Alex
simple design choice
Hello, I have a table of 1 million users. I want to add a flag called 'deleted' for when a user wants to delete his account. Note that this does not happen a lot. 1) Should I alter my users table and add a deleted flag to it? It's easy to update, but it uses a lot of unnecessary space. 2) Should I create a new table (user_id, flag), already prefilled with all user ids? 3) Should I create a new table called deleted_users that holds a user_id only if that user wants to be deleted? It's a hassle to update, but it takes the sparseness of the data into account. Thank you, Alex
Re: simple design choice
2) is probably cleaner, but a hassle as well, because one needs to make sure all user ids are also in this new table.

2008/10/3 Alex K [EMAIL PROTECTED]: Hello, I have a table of 1 million users. I want to add a flag called 'deleted' for when a user wants to delete his account. Note that this does not happen a lot. 1) Should I alter my users table and add a deleted flag to it? It's easy to update, but it uses a lot of unnecessary space. 2) Should I create a new table (user_id, flag), already prefilled with all user ids? 3) Should I create a new table called deleted_users that holds a user_id only if that user wants to be deleted? It's a hassle to update, but it takes the sparseness of the data into account. Thank you, Alex
Re: simple design choice
That seems like a nice trick. I suppose the flag would just be an int and not an enum in this case.

2008/10/3 Mr. Shawn H. Corey [EMAIL PROTECTED]: On Fri, 2008-10-03 at 09:58 -0700, Rob Wultsch wrote: On Fri, Oct 3, 2008 at 9:49 AM, Alex K [EMAIL PROTECTED] wrote: Hello, I have a table of 1 million users. I want to add a flag called 'deleted' for when a user wants to delete his account. Note that this does not happen a lot. 1) Should I alter my users table and add a deleted flag to it? It's easy to update, but it uses a lot of unnecessary space. 2) Should I create a new table (user_id, flag), already prefilled with all user ids? 3) Should I create a new table called deleted_users that holds a user_id only if that user wants to be deleted? It's a hassle to update, but it takes the sparseness of the data into account. Thank you, Alex

#1: Define "uses a lot of unnecessary space". I would imagine it would add not much more than 1 MB to the size of the db, depending on column choice. A decent choice, I think. #2: Yuck. #3: A compact and clean solution.

If you're going to do #1, make the new column 'status' with two states: active and deleted. In the future you can add more states without redoing your tables again. -- Just my 0.0002 million dollars worth, Shawn "Linux is obsolete." -- Andrew Tanenbaum
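The status-column variant Shawn describes can be sketched with Python's sqlite3 module (SQLite standing in for MySQL; the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
cur.executemany("INSERT INTO users (email) VALUES (?)",
                [("a@example.com",), ("b@example.com",)])

# Option 1 with Shawn's refinement: a status column rather than a boolean
# flag, leaving room for more states later.
cur.execute("ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active'")
cur.execute("UPDATE users SET status = 'deleted' WHERE id = 2")

print(cur.execute("SELECT id FROM users WHERE status = 'active'").fetchall())
# [(1,)]
```

In MySQL the column could be an ENUM('active', 'deleted'), which stores compactly while keeping the values readable.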
Re: History of changed rows
Thanks to all of you. I will test the suggestions and then reply. CPK -- Keep your Environment clean and green.
History of changed rows
Hi all, How can we manage the history of changed rows in the database? I have an idea but have not yet implemented it: using triggers to copy the row being changed to another table in the same db. Is there any way to save only the changed fields' data and field names? Any other ideas? Thanks, CPK -- Keep your Environment clean and green.
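The trigger idea can be sketched with Python's sqlite3 module, since SQLite also supports AFTER UPDATE triggers (the table and column names are invented; MySQL's trigger syntax differs slightly, e.g. it requires FOR EACH ROW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
cur.execute("""CREATE TABLE accounts_history
               (id INTEGER, old_balance INTEGER, changed_at TEXT)""")

# Copy the old row into the history table whenever a row changes.
cur.execute("""CREATE TRIGGER accounts_audit AFTER UPDATE ON accounts
               BEGIN
                 INSERT INTO accounts_history
                 VALUES (OLD.id, OLD.balance, datetime('now'));
               END""")

cur.execute("INSERT INTO accounts VALUES (1, 100)")
cur.execute("UPDATE accounts SET balance = 150 WHERE id = 1")

print(cur.execute("SELECT id, old_balance FROM accounts_history").fetchall())
# [(1, 100)]
```

Saving only the changed fields would require comparing OLD and NEW column by column inside the trigger body, which is why most audit schemes simply store the whole old row.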
[Stored Procedure] - Error handling
Hello folks, Is there any way to retrieve the error code/number from a stored proc? Scenario: calling a stored proc from PHP using mysqli_multi_query(). The stored proc has multiple queries. Let's say one of the queries generates an exception. How do I retrieve the error message within the procedure itself? Or is there any way from PHP to get the last error message? I tried with mysqli_error(), but it did not work. Any inputs will be appreciated. Thanks, Ratheesh
UTF-8 sorting
Hi, I am getting a problem when sorting on any column that has the UTF-8 character set: sorting from a view, or via a query through ODBC, is very slow. I have data in US English and Marathi at the same time. The sorting is correct, but very time-consuming! What can I do to solve this problem? Your help is needed. Thanks, CPK -- Keep your Environment clean and green.
backup questions
Hi guys, Is it safe to perform a backup of a database by copying its MySQL files if the database is: 1) MyISAM and not being used (mysql server turned off)? 2) InnoDB and not being used? 3) MyISAM and being used? 4) InnoDB and being used? I know I can use mysqlhotcopy or mysqldump, but I'm just curious whether it's possible to perform a backup by simply copying the files over under the conditions described above. Thank you so much, Alex
MySQL database synchronization
I have a problem related to synchronizing two MySQL databases. We are using a 256kbps internet connection at our manufacturing site and a 2mbps connection at our head office (HO). We are using MySQL 5.0.45 for our ERP application, and we want to work from both locations at the same time through the ERP software. For this we are trying to synchronize the servers at the site and at the HO. What would be the best solution? 1. Replication (is it possible over a 256kbps connection?) 2. Manual synchronization (using software like Navicat/SQLyog) 3. Using binlogs (applying the binlog to the other db) 4. Anything else. We need urgent help regarding this. Thanks in advance and regards, CPK -- Keep your Environment clean and green.
Re: performance of heterogeneous joins
Cool, it's good to know. Thank you.

On 25/01/2008, Jay Pipes [EMAIL PROTECTED] wrote: Nope, no difference, AFAIK.

Alex K wrote: Any ideas pertaining to this newbie question? Thank you so much. Hi guys, Is there a performance hit when joining across multiple databases as opposed to joining multiple tables in one database? Suppose the same tables are available across all databases. Thank you, Alex
Re: performance of heterogeneous joins
Any ideas pertaining to this newbie question? Thank you so much. Hi guys, Is there a performance hit when joining across multiple databases as opposed to joining multiple tables in one database? Suppose the same tables are available across all databases. Thank you, Alex
Re: select from otherdb.table question?
Hi Brent, ahh of course :) thank you so much for answering though. Alex

On 20/01/2008, Brent Baisley [EMAIL PROTECTED] wrote: When you establish a connection, it's a connection to a server, not a specific DB. You can set a default db so that you don't always have to specify the db name you are working with. So to answer your question: no, a new connection is not established. Brent

On Jan 19, 2008, at 10:19 AM, Alex K wrote: Hi guys, What does the statement SELECT * FROM otherdb.table do if I haven't explicitly connected to otherdb previously? I would assume it connects to otherdb and does the SELECT on the table, but does it create a new connection each time? Is it as efficient as explicitly connecting to otherdb and then querying? I'm using Webware DBUtils for connection pooling. Would these connections also be taken into account? Thank you so much, Alex
performance of heterogeneous joins
Hi guys, Is there a performance hit when joining across multiple databases as opposed to joining multiple tables in one database? Suppose the same tables are available across all databases. Thank you, Alex
select from otherdb.table question?
Hi guys, What does the statement SELECT * FROM otherdb.table do if I haven't explicitly connected to otherdb previously? I would assume it connects to otherdb and does the SELECT on the table, but does it create a new connection each time? Is it as efficient as explicitly connecting to otherdb and then querying? I'm using Webware DBUtils for connection pooling. Would these connections also be taken into account? Thank you so much, Alex
basic style schema question
Hi guys, Let's suppose I have the following table:

create table companies (
    id              int(11) not null auto_increment primary key,
    # user's login and password
    email           varchar(100),
    password        varchar(100),
    # basic information
    company_name    varchar(100),
    contact_name    varchar(100),
    street          varchar(100),
    city            varchar(100),
    state           varchar(7),
    zip             varchar(13),
    phone           varchar(25),
    # user's company description
    description     text,
    category_other  text,
    # localization
    desired_zip     varchar(7),
    latitude        dec(10,7),
    longitude       dec(10,7),
    # user's personalized options
    url             varchar(200) not null unique,
    logo_md5        varchar(32),
    linked_url      varchar(200),
    color_bgrd      varchar(16),
    # content moderation (null if nothing, 1 for new, 2 for updates)
    updates         smallint,
    banned          boolean
);

Would you keep this as one table, or would you split it into multiple tables such as users, company localizations, personalized options and moderation, each holding the fields under one comment together with a company_id? The first way of doing it is easier to update, because I would not have to update all the other linked tables, but the other way seems cleaner. To summarize: one table vs. many tables with one-to-one relations? Thank you, Alex
Re: basic style shema question
Hi Kevin, Well, the basic information, company description, and personalized options will be selected many times (whenever a user submits a query); they are basically shown on the result page of the search engine. The user's login/password is used to log in; the user may then update the company's basic information, description, and personalized options, though these updates happen only sporadically. Once every 3 minutes these fields are selected again in order to update the search engine index. Thank you, Alex

On 18/01/2008, Kevin Hunter [EMAIL PROTECTED] wrote: At 11:44a -0500 on 18 Jan 2008, Alex K wrote: To summarize: one table vs. many tables with one-to-one relations? As per usual, it depends on your needs. For the most flexibility, and to give the DB the best chance of finding the best plan for the requests I might make in the future, I generally tend toward normalization, not denormalization. The question is: what do you want to do with the schema? Kevin
bulk updates/inserts and triggers
Hello, I am facing a problem with triggers and bulk updates/inserts. I have two tables: one holds 'transactions' and the other 'documents'. Each record in the transactions table relates to a document by DocId; foreign keys are created. I have triggers on the transactions table for after insert, after update and before delete. The trigger sums the amounts of all transactions for the document referenced by the DocId of that particular transaction, stores the result in a temporary table, and then updates the amount in the documents table. When I insert 10K records at a time, the insert becomes too slow. Why? Is there any solution to this? I am using MySQL 5.0.45 on Redhat Ent. Linux 5 - 64bit, 4 GB RAM, Xeon processors, with MySQL Connector ODBC 5.1 Beta and MS Access 2003 on Windows XP SP2. I also tried to insert these records directly by passing queries to MySQL, but it is still slow! Please help. Thanks CPK

The scripts are as follows:

--
-- Table structure for cb_canebills
--

CREATE TABLE `cb_canebills` (
  `ID` int(11) NOT NULL auto_increment,
  `AssociateSCPId` int(11) default NULL,
  `PeriodNo` int(11) default NULL,
  `SlipCount` int(11) default NULL,
  `TotalWeight` decimal(9,3) default NULL,
  `NormalWeight` decimal(9,3) default NULL,
  `JalitWeight` decimal(9,3) default NULL,
  `NormalAmount` decimal(15,2) default NULL,
  `JalitAmount` decimal(15,2) default NULL,
  `NormalRate` decimal(15,2) default NULL,
  `JalitRate` decimal(15,2) default NULL,
  `PayRate` decimal(15,2) default NULL,
  `TotalAmount` decimal(15,2) default '0.00',
  `HAmount` decimal(15,2) default NULL,
  `TAmount` decimal(15,2) default NULL,
  `HComissionAmount` decimal(15,2) default NULL,
  `TComissionAmount` decimal(15,2) default NULL,
  `TotalAgainstAmount` decimal(15,2) default '0.00',
  `NetPayAmount` decimal(15,2) default '0.00',
  `BankID` int(11) default '0',
  `BankAccNo` decimal(20,4) default NULL,
  `CaneBillNo` varchar(20) collate utf8_unicode_ci default NULL,
  `CaneBillDate` date default NULL,
  `AssociateType` int(11) default NULL,
  `CrushSeason` int(11) default NULL,
  `ChequeNo` varchar(10) collate utf8_unicode_ci default NULL,
  `ChequeDate` date default NULL,
  `ChequeAmount` decimal(15,2) default NULL,
  `BankOrCashAmount` decimal(15,2) default NULL,
  `InstallmentNo` int(11) default NULL,
  `tmpTS` timestamp NULL default '1999-11-11 11:11:11',
  `CreatedBy` int(11) default NULL,
  `CreatedTimeStamp` datetime default NULL,
  `LastModifiedBy` int(11) default NULL,
  `LastModifiedTimeStamp` datetime default NULL,
  `Locked` tinyint(4) default NULL,
  `CaneBillRemark` varchar(300) collate utf8_unicode_ci default NULL,
  `CoBranch` int(11) default NULL,
  `CoYear` int(11) default NULL,
  `CaneBillStatus` int(11) default NULL,
  `AccVoucherCreated` tinyint(4) default NULL,
  `Approved` tinyint(4) default NULL,
  `ApprovedBy` int(11) default NULL,
  `IsTemplate` tinyint(4) default NULL,
  `ReportH_RCS` int(11) default NULL,
  `ReportF_RCS` int(11) default NULL,
  `CaneBillCurrency` int(11) default NULL,
  `CaneBillExchangeRate` decimal(15,2) default NULL,
  `LastAccDate` date default NULL,
  `Billed` tinyint(4) default NULL,
  `tmpSelect` tinyint(4) default '0',
  `DocType` int(11) default NULL,
  `BillFromDate` date default NULL,
  `BillToDate` date default NULL,
  `Partial` tinyint(4) default NULL,
  `IsTemp` tinyint(4) default NULL COMMENT 'Temporary bill or not',
  `PaidThroughBankAccount` int(4) default NULL COMMENT 'bank account(associates) through which payment is issued',
  `BCId` int(11) default NULL,
  `PaymentThroughLedger` int(11) default NULL,
  `DCLogId` int(11) default NULL,
  `tmpSelectedByUser` int(11) default NULL,
  `AllowAllUsersToView` tinyint(4) default '-1' COMMENT 'view this doc to all while browsing except than created/Last',
  PRIMARY KEY (`ID`),
  KEY `First_billID` (`ID`),
  KEY `First_billP_no` (`PeriodNo`),
  KEY `FKAssociate` (`AssociateSCPId`),
  KEY `Indbcid` (`BCId`)
) ENGINE=InnoDB AUTO_INCREMENT=9455 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

--
-- Table structure for cb_canebilltransactions
--

CREATE TABLE `cb_canebilltransactions` (
  `CaneBillTransactionId` int(11) NOT NULL auto_increment,
  `DocId` int(11) NOT NULL,
  `DocType` int(11) default NULL,
  `AgainstId` int(11) NOT NULL,
  `AgainstAmount` decimal(15,2) unsigned zerofill default '0.00',
  `LineRemark` varchar(300) collate utf8_unicode_ci default NULL,
  `CreatedBy` int(11) default NULL,
  `CreatedTimeStamp` datetime default NULL,
  `LastModifiedBy` int(11) default NULL,
  `LastModifiedTimeStamp` datetime default NULL,
  `Locked` tinyint(4) default NULL,
  `tmpTS` timestamp NULL default '1999-11-11 11:11:11',
  `BankId` int(11) default NULL,
  `LoanSchemeId` int(11) default NULL,
  `DetailsForExtras` varchar(30) collate utf8_unicode_ci default NULL,
  `AddOrSubstract`
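The original trigger bodies were not posted, but the description (re-summing every transaction of the document on each inserted row) makes a 10K-row load do 10K aggregate queries. A sketch of that pattern and an incremental alternative, with all names assumed:

```sql
-- Assumed shape of the posted trigger: per-row re-aggregation,
-- which is what makes the bulk insert slow.
CREATE TRIGGER trg_tx_ai AFTER INSERT ON cb_canebilltransactions
FOR EACH ROW
  UPDATE cb_canebills
     SET TotalAgainstAmount = (SELECT SUM(AgainstAmount)
                                 FROM cb_canebilltransactions
                                WHERE DocId = NEW.DocId)
   WHERE ID = NEW.DocId;

-- Incremental alternative (a sketch, drop the trigger above first):
-- add only the new row's amount instead of re-summing everything.
CREATE TRIGGER trg_tx_ai2 AFTER INSERT ON cb_canebilltransactions
FOR EACH ROW
  UPDATE cb_canebills
     SET TotalAgainstAmount = TotalAgainstAmount + NEW.AgainstAmount
   WHERE ID = NEW.DocId;
```

The update and delete triggers would need the matching incremental adjustments (subtract the old amount, add the new one).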
Replication - urgent
Hello All, I set up replication between 2 servers recently. I need just one db to be replicated, and SHOW SLAVE STATUS shows this:

Relay_Master_Log_File: gyana01-bin.02
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB: tallydb,tallydb

Now the problem is that there is another db named tallydbopextblob, and the tables from this db are also getting replicated, as shown in the line below.

Last_Error: Error 'Table 'tallydbopextblob.TBL_EVAL_PITEM_OP_EXT_DETAILS' doesn't exist' on query. Default database: 'tallydb'. Query: 'INSERT INTO tallydbopextblob.TBL_EVAL_PITEM_OP_EXT_DETAILS( FLD_OP_INPUT_DATA,FLD_OP_INDEX_DATA1,FLD_OP_INSTANCE_ID) VALUES('type=FPCONNECTsource=TALLYrequest_type=ACTIVATIONoffline_flag=0serial=372123675major_version=7minor_version=2major_release=3minor_release=14build_num=rel7.2_3.14_2007-05-21_19.33platform=WINos=WINuser_name=serverhost_name=SERVERlic_ver=2fingerprint63=1mQ0nE8HkaBlNRg==aa,3GAwmgzLMbCGNJyMyaa,3GAwmgzLMbCGNJyMyaa,2NDkxODk0MjUyAA==finger_print=5qZEKE5HgchxNpnOZ3GyZaaserver_id=0activation_code ...

Why is this happening? Thanks regards, Ratheesh
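For context, replicate-do-db filters on the *default* database of each statement, not on the database the statement writes to, so a statement issued while tallydb is the default database still replicates even when it inserts into tallydbopextblob (exactly the case in the error above). A table-level wild filter matches on the written table instead. A hedged my.cnf sketch for the slave (option names are real; the db name is taken from the error):

```ini
# Slave my.cnf sketch: filter on the table actually written,
# instead of the statement's default database.
# replicate-do-db=tallydb        # matches only the default db
replicate-wild-do-table=tallydb.%
```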
Replication - urgent
Hello All, I need to add a couple more tables to the list of tables to be replicated from a particular db in the slave my.cnf. Can this be done without restarting the MySQL server on the slave? Thanks regards, Ratheesh
Replication - urgent
Hello all, I cannot afford to stop my slave server. I have a list of tables on the master that are being replicated to the slave. Now I want to remove a couple of tables from this list without affecting the master or the slave. How is this possible? Thanks regards, Ratheesh
[Replication] - load
@all, Currently we run all our complex reporting queries on a different server. We are not using replication, though. What we do is restore the backup of the live data every night onto the reporting server, so we are running reports on data that is not real time, and we are OK with that. We made this setup just to share the load between the two DB servers. So on the reporting DB server only select queries are running, and all the queries are served from the query cache, which gives a big performance gain. The only updates happen during the night, when the backup from DB server 1 is restored onto DB server 2 (the reporting DB server). So every morning all the queries are slow the first time on DB server 2, and thereafter they are served by the query cache, as they are cached and never invalidated until the night. Now suppose we enable real-time replication between DB server 1 and DB server 2. Then: 1) Will our aim of load balancing be hit, because there will be no overall performance gain, as INSERTs, UPDATEs and DELETEs will still run on DB server 2 due to replication? i.e. the load of the DML statements will still be there. 2) Is the notion correct that replication will provide a performance boost in this context? Isn't the load the same as when there was no replication? 3) And the query cache on DB server 2 will get invalidated when there is real-time replication, so isn't that another performance hit? These questions are just to get a hint of the performance benefit due to replication, because conceptually I feel the load stays the same. On one hand I know that replication is not for load balancing (am I right?), and on the other hand I am doubting why replication at all if the load is the same. Please correct me if I am wrong. Thanks regards, Ratheesh
Re: [Replication] - urgent
Thanks, it helped me a lot. I wanted to know: 1. What are the various scenarios where my replication setup can fail (considering even issues like network failure, server reboot, etc.)? What is the normal procedure to correct the failure when something unpredicted happens? 2. What are the scenarios where the SQL thread stops running, and what are the scenarios where the IO thread stops running? 3. Does SQL_SLAVE_SKIP_COUNTER skip the statement in the master binlog from being replicated to the slave relay log, or has the statement already been copied into the slave relay log and is it skipped from the relay log? 4. How do I know immediately that replication has failed? (I have heard that the enterprise edition has some technique for this.) Thanks regards, Ratheesh - Original Message - From: Jan Kirchhoff [EMAIL PROTECTED] To: Ratheesh K J [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Sent: Tuesday, October 02, 2007 4:16 PM Subject: Re: [Replication] - urgent Ratheesh K J schrieb: Hello all, I issued a create table statement on the master for a table which was not present on the master but present on the slave. I did this purposely to see the error on the slave. I am a newbie to replication. Now when I look at the slave status on the slave machine, it shows that the SQL thread has stopped. When I start the SQL thread it does not start, and gives the error message that the table exists. How do I correct this, and how do I calculate the next position that the slave must start executing from in the relay log? Is there any article on MySQL replication that tells me how to deal with errors when they occur? Thanks regards, Ratheesh You have 2 options: 1. on the slave, enter SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; and then SLAVE START; on the slave. This skips the upcoming entry in the binlog, which is the create table command that causes your problem. 2.
if you don't have any data in the table on the slave, just drop the table and do a SLAVE START; it will then create the table again, as this is the next command in the binlog. Remember: never write on the slave without knowing what you do, and you'll be happy with your replication ;) Jan
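The two recovery options Jan describes can be sketched as statements to run on the slave (option 1 skips the offending event; option 2 lets the slave re-apply it):

```sql
-- Option 1: skip the CREATE TABLE event that stopped the SQL thread.
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

-- Option 2: remove the conflicting (empty) table so the event applies cleanly.
DROP TABLE IF EXISTS the_conflicting_table;   -- name is illustrative
START SLAVE;
```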
MySQL Configuration for a powerful server?
Hello all, What are the best possible values in my.cnf for an 8-processor (quad-core, 2 CPUs), 8 GB RAM machine dedicated to the MySQL server only? No other application will run on this machine. innodb_buffer_pool_size cannot accept values above 2000 MB due to the 32-bit machine constraint. So what other parameters can be tweaked to make the best use of this powerful server? NOTE: All our tables use the InnoDB storage engine.
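A hedged starting point for such a box, using 5.0-era option names; every number here is an assumption to be tuned against the actual workload, not a recommendation from the thread:

```ini
[mysqld]
# 32-bit build caps the buffer pool; stay a bit under 2 GB.
innodb_buffer_pool_size        = 1800M
innodb_log_file_size           = 256M
innodb_flush_log_at_trx_commit = 1    # full durability; 2 trades safety for speed
innodb_thread_concurrency      = 16   # rough heuristic: ~2x the core count

thread_cache_size = 64
table_cache       = 512
query_cache_size  = 64M
key_buffer_size   = 64M               # kept small, since all tables are InnoDB
```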
[Replication] - urgent
Hello all, I issued a create table statement on the master for a table which was not present on the master but present on the slave. I did this purposely to see the error on the slave. I am a newbie to replication. Now when I look at the slave status on the slave machine, it shows that the SQL thread has stopped. When I start the SQL thread it does not start, and gives the error message that the table exists. How do I correct this, and how do I calculate the next position that the slave must start executing from in the relay log? Is there any article on MySQL replication that tells me how to deal with errors when they occur? Thanks regards, Ratheesh
Storing Devnagari unicode data in MySQL
Namaskar, I am using Windows XP SP2, MySQL 5.0.45 and MyODBC 3.51.19 with Microsoft Marathi Indic IME 1 version 5. I am storing data in both languages, i.e. Marathi and English. So I changed the database character set to 'utf8 -- UTF-8 Unicode' and the collation to 'utf8_unicode_ci'. I also changed the field properties charset to 'utf8' and collation to 'utf8_unicode_ci' for all varchar and text fields. I am using MS Access 2003 as a front-end. When I enter data in Marathi I can read and write it in Access tables, but after storing it in MySQL tables I cannot read the data and just see question marks for the characters I entered. This happens not only when storing data with Access as a front-end; I also tried other MySQL GUI tools, but I get the same wrong results. Can anyone please help me? Please give the reasons behind it and the solution to it. Thanks for your sincere help. Regards, CPK -- Keep your Environment clean and green.
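One thing worth checking in this situation (a sketch, not a diagnosis from the thread): question marks usually mean some link in the chain converted the text to latin1, so the connection character set must be utf8 as well, not just the database and column defaults.

```sql
-- Declare the connection's character set before reading or writing
-- Marathi text; the ODBC/GUI client must send utf8 bytes to match.
SET NAMES utf8;

-- Verify nothing in the chain is still latin1:
SHOW VARIABLES LIKE 'character_set%';

-- Columns must also be utf8, e.g.:
ALTER TABLE t MODIFY name VARCHAR(100)
  CHARACTER SET utf8 COLLATE utf8_unicode_ci;
```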
Re: Storing Devnagari unicode data in MySQL
Thanks for your help. I upgraded MyODBC 3.51.19 to 5.1 beta, tried it for Devnagari data entry, and it worked well. But now there is a new problem: a newly added record shows as '#Deleted', and the problem persists even after adding a timestamp field to the table (this problem was solved in 3.51.19). Can anyone please help? Thanks again CPK On 9/28/07, Jerry Schwartz [EMAIL PROTECTED] wrote: Try upgrading to a version 5 of the ODBC connector. It worked for our Chinese data. Regards, Jerry Schwartz The Infoshop by Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341 www.the-infoshop.com www.giiexpress.com www.etudes-marche.com -Original Message- From: C K [mailto:[EMAIL PROTECTED] Sent: Friday, September 28, 2007 1:55 PM To: mysql@lists.mysql.com; [EMAIL PROTECTED] Subject: Storing Devnagari unicode data in MySQL Namaskar, I am using Windows XP SP2, MySQL 5.0.45 and MyODBC 3.51.19 with Microsoft Marathi Indic IME 1 version 5. I am storing data in both languages, i.e. Marathi and English. So I changed the database character set to 'utf8 -- UTF-8 Unicode' and the collation to 'utf8_unicode_ci'. I also changed the field properties charset to 'utf8' and collation to 'utf8_unicode_ci' for all varchar and text fields. I am using MS Access 2003 as a front-end. When I enter data in Marathi I can read and write it in Access tables, but after storing it in MySQL tables I cannot read the data and just see question marks for the characters I entered. This happens not only when storing data with Access as a front-end; I also tried other MySQL GUI tools, but I get the same wrong results. Can anyone please help me? Please give the reasons behind it and the solution to it. Thanks for your sincere help. Regards, CPK -- Keep your Environment clean and green. -- Keep your Environment clean and green.
Data position changes when using 'LOAD DATA INFILE'...
Hi Friend, Today I was testing the 'LOAD DATA INFILE ...' command ( http://dev.mysql.com/doc/refman/5.0/en/loading-tables.html ) on my system, and I was surprised when I ran a select statement on that table. The scenario is as follows. The text file to be loaded contains this data:

3 v,4 a

mysql> desc mytable;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(11) |      | PRI | NULL    | auto_increment |
| foo   | char(1) | YES  |     | NULL    |                |
+-------+---------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

mysql> select * from mytable;
+----+------+
| id | foo  |
+----+------+
|  1 | a    |
|  2 | b    |
+----+------+
2 rows in set (0.01 sec)

mysql> load data infile '/home/mysql/egdata' into table mytable lines terminated by ',';
Query OK, 2 rows affected, 0 warnings (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from mytable;
+----+------+
| id | foo  |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | v    |
|  4 | a    |
+----+------+
4 rows in set (0.00 sec)

mysql> delete from mytable where id in (3,4);
Query OK, 2 rows affected (0.00 sec)

mysql> load data infile '/home/mysql/egdata' into table mytable lines terminated by ',';
Query OK, 2 rows affected, 0 warnings (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select * from mytable;
+----+------+
| id | foo  |
+----+------+
|  1 | a    |
|  2 | b    |
|  4 | a    |
|  3 | v    |
+----+------+
4 rows in set (0.00 sec)

The select query returns the rows with id '3' and '4' in different orders, although they were loaded twice from the same text file. Why is this happening? Is there any reason or algorithm involved in this?
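For what it's worth, SQL gives no ordering guarantee for a bare SELECT: after the DELETE, the storage engine is free to reuse the freed space, so reloading the same file can land the rows in a different physical order. The order can be pinned down explicitly:

```sql
-- Ask for the order instead of relying on physical storage order.
SELECT * FROM mytable ORDER BY id;
```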
Fwd: more options for MySQL tools by MySQL
Hello all, I want to suggest one thing relating to the MySQL Tools for 5.0 (Administrator, Query Browser, etc.): a new feature could be added to use command line options to synchronize structure and data, transfer data in many formats, and schedule the above tasks on Windows/Linux and other OSes. If these features were available, it would be better for users such as me to use only MySQL tools for all their needs. Please reply. Thanks CPK -- Keep your Environment clean and green.
Re: Synchronizing two MySQL servers over slow network
Thank you very much. The VPN bandwidth will be 128/256 kbps. Also, we have to make updates at both locations simultaneously, and our application provides a locking mechanism for updating records and resolving conflicts when the same record is updated. Also, please give the details about the term active/active and how it could be implemented. It would be possible to update the data on both servers and keep another server for reporting purposes only. Is it possible to make HO a master and Reporting its slave, and the site server another master with the same Reporting server as its slave, i.e. Reporting will be the slave of both masters in replication? Thanks again CPK On 8/26/07, Gary W. Smith [EMAIL PROTECTED] wrote: for generating Statutory reports. Also cluster can not be a solution as it requires min. 100 MB network. Says who? But clustering won't help. You are looking for active/active, which could be accomplished, but this would possibly lead to specific conflicts if people are trying to edit the same record. You've also failed to mention the speed you are talking about. My home office is a replication point for our large database at our CO. I do this over cable. You could try active/active, assuming you work out some type of conflict resolution plan. If each site will generally be editing its own data, combined for reporting only, then active/active should be fine, even if you are talking about 20KB/s. Can we generate scripts on windows to sync them manually? Thanks CPK On 8/25/07, Craig Huffstetler [EMAIL PROTECTED] wrote: I would probably recommend replication. It's not that bad to set up, and once it catches up on the slave it will continue to be an easy sync in the future as long as both are running. How big is the database you wish to synchronize? What connection are both servers on? Is there any way possible to disable the VPN between the two DB servers? Perhaps a firewall in between is in use and you can simply set up access rules for both to talk to each other?
The VPN tunnel is most likely a big slow down. On 8/25/07, C K [EMAIL PROTECTED] wrote: Hi, I have a problem as below- We have a MySQL server for our ERP database. Now we have to implement the ERP for Head office. HO is away from the current setup and connection between the two is through VPN at slow speed. How can we synchronize the two MySQL servers? Replication, Cluster, or manually? Thanks for your replies. CPK -- Keep your Environment clean and green. -- Keep your Environment clean and green. -- Keep your Environment clean and green.
Synchronizing two MySQL servers over slow network
Hi, I have a problem as below- We have a MySQL server for our ERP database. Now we have to implement the ERP for Head office. HO is away from the current setup and connection between the two is through VPN at slow speed. How can we synchronize the two MySQL servers? Replication, Cluster, or manually? Thanks for your replies. CPK -- Keep your Environment clean and green.
Re: Synchronizing two MySQL servers over slow network
We are using ODBC and Win Server 2K3. Also, the database size is not so big (i.e. up to 300 MB). The major problem is that the databases at the two servers will be identical in structure and will be used by local groups of users for daily operations. One of them must be treated as the MASTER for data, i.e. accounting will be carried out at both, but only the HO database will be considered for generating statutory reports. Also, cluster cannot be a solution, as it requires a min. 100 MB network. Can we generate scripts on Windows to sync them manually? Thanks CPK On 8/25/07, Craig Huffstetler [EMAIL PROTECTED] wrote: I would probably recommend replication. It's not that bad to set up, and once it catches up on the slave it will continue to be an easy sync in the future as long as both are running. How big is the database you wish to synchronize? What connection are both servers on? Is there any way possible to disable the VPN between the two DB servers? Perhaps a firewall in between is in use and you can simply set up access rules for both to talk to each other? The VPN tunnel is most likely a big slow down. On 8/25/07, C K [EMAIL PROTECTED] wrote: Hi, I have a problem as below- We have a MySQL server for our ERP database. Now we have to implement the ERP for Head office. HO is away from the current setup and the connection between the two is through VPN at slow speed. How can we synchronize the two MySQL servers? Replication, Cluster, or manually? Thanks for your replies. CPK -- Keep your Environment clean and green. -- Keep your Environment clean and green.
Password storage
Friends, I have one question: how do I store passwords in a MySQL database table in a secure way, so that no one can see (understand) the password string? Please help. Thanks CPK -- Keep your Environment clean and green.
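A common sketch using MySQL 5.0-era functions: store a one-way hash rather than the password itself (the table, salt and values here are illustrative, and stronger schemes than a single salted SHA1 exist):

```sql
-- Store only a one-way hash, never the plain password.
CREATE TABLE users (
    login    VARCHAR(100) PRIMARY KEY,
    pwd_hash CHAR(40) NOT NULL          -- SHA1 yields 40 hex characters
);

INSERT INTO users VALUES ('alice', SHA1(CONCAT('some-salt', 'secret')));

-- Login check: recompute the hash and compare.
SELECT COUNT(*) FROM users
 WHERE login = 'alice'
   AND pwd_hash = SHA1(CONCAT('some-salt', 'secret'));
```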
Re: Password storage
Thanks to all, but the problem is that I am using external programs to insert data and I can't use MySQL functions directly. Can I call/implement such functions from MS Access 2003? Thanks CPK The md5 function hashes the input string. - With Warm Regards, Sudheer. S www.binaryvibes.co.in www.lampcomputing.com -- Keep your Environment clean and green.
Fwd: [commercial] MySQL cluster setup and support
-- Forwarded message -- From: C K [EMAIL PROTECTED] Date: Aug 13, 2007 11:37 AM Subject: [commercial] MySQL cluster setup and support To: [EMAIL PROTECTED] Dear all, We are a medium-sized company in India with mfg. facilities about 65 km from Pune and our Head Office in Pune. We are running a MySQL-based ERP system for our operations. Now we are shifting some of our departments to Pune and require support to connect two MySQL database servers through VPN/Leased Line. If any commercial service provider from Pune or nearby cities in INDIA can provide such services related to MySQL cluster/replication etc., please submit your proposals to [EMAIL PROTECTED] Please call 9975844665 for more details. Thank you. CPKulkarni -- Keep your Environment clean and green.
Re: MySQL database synchronizing from 2 locations
Hello, As per your suggestions I tried to find a correct solution for the problem, but there is a big obstacle to replication: the network connection. Though Internet access is available at the mfg. site, it does not have good speed and is not continuous, so replication may not be a good choice. We are using auto-increment fields for each table as the primary key, and physical records are already marked with this PK. Is there any other solution for this? Please give the details. Thanks CPK -- Keep your Environment clean and green.
calling stored procedures through ODBC
Hello, I am using MySQL 5.0.17 and MyODBC 3.51.14, with MS Access 2003 as the front-end. Can I call stored procedures from Access? How? Also, can we use the ODBC driver for calling stored procedures or linking SPs to Access? Please help. Thanks CPK -- Keep your Environment clean and green.
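For reference, a sketch of the server side (the procedure name and body are illustrative; whether a given MyODBC 3.51 release passes CALL through cleanly depends on the driver version):

```sql
DELIMITER //
CREATE PROCEDURE get_company (IN p_id INT)
BEGIN
    SELECT * FROM companies WHERE id = p_id;
END //
DELIMITER ;

-- A pass-through query from the client would then send:
CALL get_company(1);
```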
MySQL database synchronizing from 2 locations
Hello, My client has a mfg. unit 65 km from a city in India. He wants to connect it to his corporate office in the city. Both offices will use the same data and the same ERP system. He is using Win 2K3 server and MySQL 5.0.17. Is it possible to keep them synchronized at particular or regular intervals? How? Please give details. Options I can think of: Replication (is it possible on Windows?), Cluster (is it possible?), or manual sync using Navicat or another tool (which other tools?). Please help. Prior Thanks, CPK -- Keep your Environment clean and green.
Foreign key constraints - Known issues ?
Hello All, I just wanted to know whether there are any known issues in defining and using foreign key constraints in MySQL 4 and MySQL 5. To be specific, are there any issues with using ON DELETE CASCADE and ON UPDATE CASCADE? Would there be any performance issues when we define foreign key constraints? We don't define foreign key constraints here; at the conceptual level we know that there are foreign keys, but we do not create them at the physical level. Is this right? Thanks Regards, Ratheesh
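For concreteness, a minimal sketch of the constraints being asked about (InnoDB tables only, since MyISAM does not enforce foreign keys; table names are illustrative):

```sql
CREATE TABLE parent (
    id INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
    id        INT NOT NULL PRIMARY KEY,
    parent_id INT NOT NULL,
    FOREIGN KEY (parent_id) REFERENCES parent (id)
        ON DELETE CASCADE
        ON UPDATE CASCADE
) ENGINE=InnoDB;

-- Deleting a parent row now removes its children automatically:
DELETE FROM parent WHERE id = 1;
```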
User Administration
Today I tried to set a few user rights for a user. What I need is this: there is a database called 'mydb', and two users for MySQL 5 on a Win 2003 server. One is root and the second is 'systemuser'. 'systemuser' should not have rights for anything in the mysql or information_schema databases, but should have only SELECT, INSERT, UPDATE and DELETE rights on 'mydb'. I tried for a long time but could not get it to work: I had to give SELECT as a global privilege, and likewise set INSERT, UPDATE and DELETE as global privileges. But because of this, 'systemuser' can change data in the 'mysql' database, and I do not want that. If I remove any of the above from the global privileges and set them only on 'mydb', it doesn't work. 'systemuser' will be used as the default username to connect to MySQL for many software users. I need to give only SELECT, INSERT, UPDATE and DELETE rights, on 'mydb' only, while connecting from any location. What can I do? Please help. Thanks, CPK -- Keep your Environment clean and green.
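A sketch of the database-level grant being described; nothing is granted globally, so the mysql system database stays out of reach (host '%' is an assumption standing in for "any location"):

```sql
-- Database-level privileges only; no global grants needed.
CREATE USER 'systemuser'@'%' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'systemuser'@'%';
FLUSH PRIVILEGES;
```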
Blob data
Hello All, I want a clarification. We run a forum wherein people send messages with/without attachments. Attachments may contain images, documents, etc. We are actually storing the attachment in a blob column. Sometimes the attachments are big, and today the table size has grown to 40 GB. This has created a headache for any maintenance task: backup, restoration, etc. I want to know whether this is the right approach, or whether we should store the attachments in directories and just store the attachment path in the database. Kindly suggest the best approach so that I can reduce the database size. Thanks in advance
MySQL cluster for windows
I have read somewhere that MySQL Cluster will be available in the 5.1 release. Will it? Is there any progress in this regard? Thanks CPK -- Keep your Environment clean and green.
row lock
My client hit a strange problem today. MySQL returned error 1205: lock wait timeout exceeded. Why? The error occurred while updating a record, and it is produced for one single record. I checked whether that record was in use while updating, etc., but it was not. I waited for some time and tried to complete the operation again, but it still failed. Then I duplicated the record with a new primary key id, deleted the first record, and changed the new id to the old id, and it worked well. I can't understand why the above error occurred, or what the solution to it is. Can you please help and give the details? Thanks and regards cpk -- Keep your Environment clean and green.
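For reference, error 1205 is InnoDB giving up after innodb_lock_wait_timeout seconds (default 50) while another transaction still holds a lock on the row. A minimal way to reproduce it (two sessions; table and column names are illustrative):

```sql
-- Session 1: take a row lock and leave the transaction open.
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;

-- Session 2: blocks on the same row, then fails with
-- ERROR 1205 once innodb_lock_wait_timeout expires.
UPDATE t SET val = 42 WHERE id = 1;
```

So a forgotten open transaction somewhere (an idle ODBC connection, for example) can make one specific record fail this way while everything else works.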
Re: Data security - help required
Ok. Will it be secure if the data is encrypted? mysqldump will show encrypted data, right? Actually I want to know what the best practice is for such applications. Can I say that encryption alone is sufficient to secure my data, or is there any other strategy used for data protection? - Original Message - From: Chris [EMAIL PROTECTED] To: Ratheesh K J [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Sent: Tuesday, May 15, 2007 2:42 PM Subject: Re: Data security - help required Ratheesh K J wrote: Hello all, I have a requirement to maintain some secret information in the database, and this information should not be visible/accessible to any person but the owner of the data. Whilst I know that encryption/decryption is the solution for this, is there any other level of security that I can provide? Which is the best security technique used in MySQL to store secret information? PS: Even the database admin should not be able to access anybody else's information. Then you're stuffed - *someone* has to be able to see everything so you can do a mysqldump. *Someone* has to be able to see everything so you can grant permissions to the other users too :)
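As a sketch of in-database encryption (table, key and values are illustrative; the hard part is key management, which is glossed over here, and anyone who can see the key, including an admin who can read the application's queries or config, can decrypt):

```sql
CREATE TABLE secrets (
    owner   VARCHAR(100) NOT NULL,
    payload BLOB NOT NULL              -- ciphertext; opaque in a mysqldump
);

INSERT INTO secrets VALUES
    ('alice', AES_ENCRYPT('my secret note', 'per-user-key'));

-- Only a holder of the key can recover the plaintext:
SELECT AES_DECRYPT(payload, 'per-user-key')
  FROM secrets WHERE owner = 'alice';
```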
Re: Installing 2nd instance on windows.
Thanks to all for such a good response. Now I will experiment with it and reply at the earliest. Thanks CPK -- Keep your Environment clean and green.
Re: Installing 2nd instance on windows.
Dear friends, Thank you for your response. The problem is that when I try to install MySQL 5.0 from the Windows .msi installer on Windows XP with MySQL 5.0 already installed, the installer does not show any option for a new installation; I can only repair/remove the existing installation. Why? As I know we can run multiple instances of MySQL on different ports, how do I do this on Windows? I need to run two different mysql servers on the same machine on different ports (3306, 3307, etc.). Is this possible, and how? Thanks again, CPK Keep your Environment clean and green.
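A hedged sketch of what a second Windows instance typically needs (all paths and the service name are assumptions): each instance gets its own datadir, port and option file, and is registered as a separate service pointing at that file.

```ini
; my2.ini - option file for the second instance (paths are illustrative)
[mysqld]
port    = 3307
basedir = C:/mysql
datadir = C:/mysql/data2
```

The second service would then be registered with something like `mysqld-nt --install MySQL2 --defaults-file=C:\my2.ini` (5.0-era binary name; check what your installation ships).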
Installing 2nd instance on windows.
Is it possible to install more than one instance of MySQL 5.0, as on Linux? I am using WinXP SP2 and MySQL 5.0.17. Thanks CPK -- Keep your Environment clean and green.
Error with mysqldump
When my server was running MySQL 4.1 I used mysqldump for backups with no errors (Redhat Linux 9). In fact, when the server crashed and I had to restore the whole box, I used the backups from mysqldump to restore all the databases and tables, including the users. However, the server is now running MySQL 5.0 on Redhat Linux 9, and when I try to run mysqldump I encounter errors as shown below:

[mysqldump -p -u eusers mysql]

--
-- Dumping data for table `columns_priv`
--

/*!40000 ALTER TABLE `columns_priv` DISABLE KEYS */;
LOCK TABLES `columns_priv` WRITE;
UNLOCK TABLES;
/*!40000 ALTER TABLE `columns_priv` ENABLE KEYS */;
mysqldump: mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'columns\_priv'': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'TRIGGERS LIKE 'columns\_priv'' at line 1 (1064)

Does anyone know what is happening here? What am I doing wrong?

-- Peter ___ Life is not measured by the number of breaths we take, but by the moments that take our breath away.
ODBC connector 3.51/5.0
Dear developers from MySQL, Can I know when MySQL Connector/ODBC 5.0 will be released? I am facing a few major problems with 3.51.12 on Windows, such as application crashes and wrong data display. Please reply. Thanks, CPK -- Keep your Environment clean and green.
Re: ODBC connector 3.51/5.0
I have to try it out, but the problem is that I have MySQL 5.0.17 and I think 3.51.06 cannot connect to 5.0.17. I will give it a try. Thanks and regards, CPK
On 2/14/07, Ron Alexander [EMAIL PROTECTED] wrote: CPK, I'm not from MySQL but I had the same issue with MyODBC-3.51.12. The way I resolved the problem was to roll back to MyODBC-3.51.06. It resolved the issue. I hope this helps. Ron
-Original Message- From: C K [mailto:[EMAIL PROTECTED] Sent: Tuesday, February 13, 2007 1:01 PM To: [EMAIL PROTECTED]; mysql@lists.mysql.com Subject: ODBC connector 3.51/5.0
Dear developers from MySQL, can I know when MySQL Connector/ODBC 5.0 will be released? I am facing a few major problems with 3.51.12 on Windows, such as application crashes and wrong data display. Please reply. Thanks, CPK -- Keep your Environment clean and green.
FW: With Ref to the set autocommit =1
Hi Mark, I would appreciate it if we could define "under extreme conditions" as you mentioned at the link below: http://archives.neohapsis.com/archives/mysql/2005-q3/0163.html I would also appreciate a workaround that can eliminate this SET autocommit=1 command being sent to the MySQL DB. Regards, Sandeep

Below is the log generated by MySQL:
-
37 Query SET autocommit=1
37 Query SET autocommit=0
77 Connect [EMAIL PROTECTED] on b_08_11_2006
77 Init DB b_08_11_2006
77 Query SET NAMES latin1
77 Query SET character_set_results = NULL
77 Query select round('inf'), round('-inf'), round('nan')
77 Query SHOW VARIABLES
77 Query SHOW COLLATION
77 Query SET autocommit=1
77 Query select query
77 Quit
37 Query insert into channel(ChannelName,ChannelShortName,ChannelDescription,CreatedBy,ChannelShortDescription,Status,CreatedOn) values('test','t','t','66','t',502, CURRENT_TIMESTAMP())
37 Query insert into channelsPerType(idchannel,idchannelTypes) values((select max(idchannel) from channel),'5')
37 Query select max(idchannel) from channel
37 Query insert into channelMetaDataFields(idchannel,idSystemDataType,MetadatafieldName,shortName,Description,Searchable,Suggest,Status,Sortable,Required,DefaultValue,ParentID,relationshipType,shortDescription) select 45,idSystemDataType,MetadatafieldName,shortName,Description,Searchable,Suggest,501,Sortable,Required,DefaultValue,ParentID,relationshipType,shortDescription from channelMetaDataFields where idchannelTypes = 5 and idchannel is NULL
37 Query commit
37 Query rollback
37 Query SET autocommit=1
37 Query SET autocommit=1
-
[urgent] - Problem with index_merge
*** row 1 ***
table: TFMM
type: range
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 1059133
Extra: Using where

Thanks, Ratheesh K J
Urgent - MySQL 5 - mysqld using a lot of memory consistently
Hello all, recently we upgraded from MySQL 4.1.11 to MySQL 5.0.22, and the queries are taking a lot longer to execute in the newer version: queries which used to finish within 10 secs are now taking more than 100 secs. Running an EXPLAIN on the queries showed that an index_merge optimization is being used, which is a new concept in MySQL 5. My initial suspicion was this, but when I checked top it shows that mysqld is consistently using 59% of memory and 25% of CPU even when there is no load. The SHOW STATUS command in mysql shows:
Threads_created 21863
Threads_cached 1
Threads_connected 38
Connections 5784350
Running SHOW VARIABLES shows:
thread_cache_size 8
It is evident that mysqld is creating a lot of threads... Could this be the problem? Thanks, Ratheesh K J
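A quick way to gauge the thread cache from the counters above (an illustrative sketch; these are standard MySQL status/variable queries, and SHOW GLOBAL STATUS requires 5.0.2 or later):

```sql
-- How often a new thread had to be created instead of reusing a cached one:
SHOW GLOBAL STATUS LIKE 'Threads_created';   -- 21863 in the post above
SHOW GLOBAL STATUS LIKE 'Connections';       -- 5784350 in the post above
SHOW VARIABLES LIKE 'thread_cache_size';     -- 8 in the post above

-- Miss rate = Threads_created / Connections = 21863 / 5784350, roughly
-- 0.4% of connections needed a fresh thread, so thread creation alone
-- is unlikely to explain the slowdown described here.
```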
Re: Urgent - MySQL 5 - mysqld using a lot of memory consistently
Thanks, By how much should I be increasing the thread_cache? currently it is 8... Currently I can provide the EXPLAIN result of a query using index_merge on MySQL 5.0.22. EXPLAIN SELECT IFNULL(TAEM.FLD_FULL_NAME, TFMM.FLD_ASSIGNED_TO) AS ELE1 , TFMM.FLD_ASSIGNED_TO AS KEY_ID, SUM(1) AS ELE2, SUM( IF(TFMM.FLD_ESCALATED_FLAG= 0 , 1 , 0) ) AS ELE3, SUM( IF(TFMM.FLD_ESCALATED_FLAG = 0 AND ( TFMM.FLD_ISSUE_CLOSED IN ( 2, 4,5,1 ) ) , 1, 0) ) AS ELE4, SUM( IF(TFMM.FLD_ESCALATED_FLAG = 0 AND TFMM.FLD_ISSUE_CLOSED = 3, 1, 0) ) AS ELE5, SUM( IF(TFMM.FLD_ESCALATED_FLAG = 1 , 1, 0) ) AS ELE6,SUM( IF(TFMM.FLD_ESCALATED_FLAG = 1 AND ( TFMM.FLD_ISSUE_CLOSED IN ( 2, 4,5,1 ) ) , 1, 0) ) AS ELE7, SUM( IF(TFMM.FLD_ESCALATED_FLAG = 1 AND TFMM.FLD_ISSUE_CLOSED = 3, 1, 0) ) AS ELE8, ROUND(( SUM( IF(TFMM.FLD_ESCALATED_FLAG = 1 , 1, 0) )/SUM(1)) * 100 ,2 ) AS ELE9 FROM TBL_FORUMS_MSG_MAIN TFMM LEFT JOIN TBL_ADMIN_EMP_MASTER TAEM ON TAEM.FLD_EMP_ID = TFMM.FLD_ASSIGNED_TO INNER JOIN TBL_FORUMS_MSG_OP_TRACK TFMOT ON ( TFMM.FLD_MSG_ID=TFMOT.FLD_MSG_ID AND TFMOT.FLD_OP_ID=15 AND TFMOT.FLD_LAST_FLAG=1 ) WHERE TFMM.FLD_ACC_ID IN ( 6, 375 ) AND TFMM.FLD_MARK_AS_DELETED = 0 AND TFMM.FLD_BOUNCED_MAIL_FLAG = 0 AND TFMM.FLD_BLOCK_STATE = 0 AND TFMM.FLD_PARENT_ID = 0 AND TFMM.FLD_ASSIGNED_TO IN ( 
935,805,563,543,1352,670,571,530,655,577,355,885,392,155,1579,693,1577,509,199,770,1535,78,54,993,594,557,132,859,99,1557,645,527,79,181,1520,200,1350,1534,1591,545,70,191,1550,189,726,40,228,97,196,860,303,1321,394,363,1412,597,1013,1377,1250,1299,3,301,756,170,1553,1578,1343,953,593,250,600,1552,494,311,146,664,589,631,495,4,1254,678,511,931,1020,410,592,822,933,1531,1507,858,453,1257,555,897,352,188,546,1544,291,1529,370,765,963,356,1303,1328,354,414,1581,1030,382,1356,1521,227,396,1333,591,1249,760,1334,1034,51,80,1276,794,145,295,934,544,165,1594,886,929,558,685,880,831,1592,882,320,566,174,796,1593,5,1361,1522,435,388,951,1362,369,806,20,1336,330,77,907,754,507,1330,1364,1,202,1501,289,1296,1378,1061,1500,952,1439,1369,1358,373,1548,294,338,30,1351,1575,728,207,1558,406,837,210,970,620,387,450,1586,38,1227,460,455,1347,841,386,318,130,492,961,590,229,463,284,1380,1580,422,362,1337,581,1490,568,950,1083,960,1329,825,532,404,936,1251,552,1089,1585,1225,708,1564,817,260,372,965,305,456,847,192,1465,962,1523,1590,745,180,1540,753,585,890,1537,1099,225,750,230,7,413,1554,578,572,820,1549,883,810,1105,1403,423,1524,969,542,286,797,1582,1301,384,930,308,854,742,1107,1108,1555,1338,1562,947,673,1506,417,236,798,1576,63,27,1210,371,1485,82,272,1274,529,1277,381,1342,689,185,1118,235,1120,37,598,724,205,946,203,608,405,610,19,958,126,307,967,1360,218,954,1525,891,116,135,75,715,1547,431,1138,879,1498,211,1140,1463,6,1528,344,956,595,91,826,1145,1584,1545,1258,443,643,632,1526,267,1530,58,945,314,1470,763,1491,1595,968,385,955,282,684,179,178,666,409,663,390,1447,1341,1546,1587,125,358,173,279,957,50,182,840,107,580,807,133,1248,892,690,513,898,365,821,325,669,121,62,827,106,219,1253,633,41,562,1489,162,101,861,561,839,153,1565,1583,395,447,217,1551,888,1574,1176,743,446,556,787,263,949,1178,1331,209,134,1505,1354,55,306,31,964,1348,850,1252,862,966,100,521,175,709,1542,942,1335,297,445,296,411,525,420,266,102,1559,418,438,109,661,804,662,1543,984,1556,1409,522,195,1471,
439,341,1209,878,838,1464,881,271,36,83,1379,857,944,656,959,538,2,764,1588,672,520,503,531,462,729,528,204,201,93,677,564,426,606,855,234,676,889) AND TFMOT.FLD_OP_DATE_TIME BETWEEN '2007-01-19 00:00:00' AND '2007-01-23 23:59:59' AND TFMM.FLD_MEDIUM IN ( 1 ) AND TFMM.FLD_MSG_ID 0 GROUP BY IFNULL(TAEM.FLD_FULL_NAME, TFMM.FLD_ASSIGNED_TO) ORDER BY IFNULL(TAEM.FLD_FULL_NAME, TFMM.FLD_ASSIGNED_TO);

*** row 1 ***
table: TFMM
type: index_merge
possible_keys: PRIMARY,FLD_MEDIUM,FLD_PARENT_ID,FLD_ASSIGNED_TO,FLD_MARK_AS_DELETED,FLD_ACC_ID,FLD_BLOCK_STATE,FLD_BOUNCED_MAIL_FLAG
key: FLD_BLOCK_STATE,FLD_MEDIUM,FLD_MARK_AS_DELETED,FLD_BOUNCED_MAIL_FLAG,FLD_PARENT_ID
key_len: 2,1,2,2,4
ref: NULL
rows: 34468
Extra: Using intersect(FLD_BLOCK_STATE,FLD_MEDIUM,FLD_MARK_AS_DELETED,FLD_BOUNCED_MAIL_FLAG,FLD_PARENT_ID); Using where; Using temporary; Using filesort
*** row 2 ***
table: TAEM
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: tallydb.TFMM.FLD_ASSIGNED_TO
rows: 1
Extra: NULL
*** row 3 ***
table: TFMOT
type: ref
possible_keys: FLD_MSG_ID,FLD_OP_ID,FLD_OP_DATE_TIME,FLD_LAST_FLAG
key: FLD_MSG_ID
key_len: 4
ref: tallydb.TFMM.FLD_MSG_ID
rows: 1
Extra: Using where

- Original Message - From: Alex Arul To: Ratheesh K J Sent: Tuesday, January 23, 2007 11:57 AM Subject: Re: Urgent - MySQL 5 - mysqld using a lot of memory consistently
your threads connected is 38 but your thread cache has
Re: Urgent - MySQL 5 - mysqld using a lot of memory consistently
Here is the CREATE TABLE statement for the table on which the index_merge is being applied. There will be at least 10 queries always running on this table with an index_merge optimization. We have the max_connections variable set to 100. Also, repeatedly checking the Threads_connected status variable shows it varying between 16 and 40, so I guess increasing thread_cache_size from 8 to 40 will help... or should it be even less?

CREATE TABLE `TBL_FORUMS_MSG_MAIN` (
  `FLD_MSG_ID` int(10) unsigned NOT NULL default '0',
  `FLD_ACC_ID` smallint(5) unsigned NOT NULL default '0',
  `FLD_DOMAIN_ID` tinyint(4) NOT NULL default '0',
  `FLD_TICKET_NUM` varchar(32) NOT NULL default '',
  `FLD_ADD_BY` int(10) unsigned NOT NULL default '0',
  `FLD_ADD_DATE_TIME` datetime NOT NULL default '0000-00-00 00:00:00',
  `FLD_UPDATE_BY` int(10) unsigned default NULL,
  `FLD_UPDATE_DATE_TIME` datetime default NULL,
  `FLD_MSG_DATE_TIME` datetime default '0000-00-00 00:00:00',
  `FLD_THREAD_ID` int(10) unsigned NOT NULL default '0',
  `FLD_PARENT_ID` int(10) unsigned NOT NULL default '0',
  `FLD_TREE_POS` int(10) unsigned NOT NULL default '0',
  `FLD_TREE_LEVEL` int(10) unsigned NOT NULL default '0',
  `FLD_RESTORE_THREAD_ID` int(10) unsigned default '0',
  `FLD_WORKFLOW_TYPE` tinyint(1) unsigned NOT NULL default '0',
  `FLD_MEDIUM` tinyint(1) unsigned NOT NULL default '0',
  `FLD_DIRECTION` tinyint(3) unsigned NOT NULL default '0',
  `FLD_ISSUE_TYPE` tinyint(3) unsigned NOT NULL default '0',
  `FLD_ZONE` int(11) unsigned default '0',
  `FLD_COMPANY_ID` int(11) unsigned default '0',
  `FLD_PRODUCT_ID` tinyint(4) unsigned default '0',
  `FLD_ASSIGNED_TO` int(10) unsigned default '0',
  `FLD_MSG_TYPE` tinyint(3) unsigned default '0',
  `FLD_MSG_INFO_ONLY_STATE` tinyint(1) unsigned default '0',
  `FLD_ATTACHMENT_FLAG` tinyint(1) unsigned default '0',
  `FLD_COUNTRY_TYPE` tinyint(1) unsigned default '0',
  `FLD_NO_SUPPORT_FLAG` tinyint(1) unsigned default '0',
  `FLD_CONTACT_PID` int(9) unsigned zerofill default NULL,
  `FLD_SUB_CONTACT_ID` int(10) unsigned default NULL,
  `FLD_BLOCK_STATE` tinyint(1) unsigned default '0',
  `FLD_MARK_AS_DELETED` tinyint(1) unsigned default '0',
  `FLD_SEEN_FLAG` tinyint(1) unsigned default '0',
  `FLD_REPLY_FLAG` tinyint(1) unsigned default '0',
  `FLD_FWD_FLAG` tinyint(1) unsigned default '0',
  `FLD_USR_SEEN_FLAG` tinyint(1) unsigned default '0',
  `FLD_USR_REPLY_FLAG` tinyint(1) unsigned default '0',
  `FLD_USR_FWD_FLAG` tinyint(1) unsigned default '0',
  `FLD_SUBM_OP_ID` tinyint(1) unsigned default '0',
  `FLD_ISSUE_CLOSED` tinyint(1) unsigned NOT NULL default '0',
  `FLD_TASK_STATE` tinyint(1) unsigned default '0',
  `FLD_ESCALATED_FLAG` tinyint(1) unsigned default '0',
  `FLD_BOUNCED_MAIL_FLAG` tinyint(1) unsigned default '0',
  `FLD_LEAD_ID` int(11) unsigned default '0',
  `FLD_BUG_ID` int(11) unsigned default '0',
  `FLD_EMAIL_IP_ADDRESS` varchar(15) default NULL,
  `FLD_EMAIL_FROM` varchar(150) default NULL,
  `FLD_EMAIL_TO` text,
  `FLD_EMAIL_CC_TO` text,
  `FLD_EMAIL_BCC_TO` text,
  `FLD_SUBJECT` varchar(100) default NULL,
  `FLD_PRIORITY` tinyint(1) unsigned default NULL,
  `FLD_TELCALL_FROMTO_NAME` varchar(128) default NULL,
  `FLD_TELCALL_FROMTO_DESC` varchar(64) default NULL,
  `FLD_CHAT_FROM_NAME` varchar(32) default NULL,
  `FLD_CHAT_FROM_DESC` varchar(64) default NULL,
  `FLD_CHAT_START_DATE_TIME` datetime default '0000-00-00 00:00:00',
  `FLD_CHAT_END_DATE_TIME` datetime default '0000-00-00 00:00:00',
  `FLD_CHAT_SESSION_ID` int(11) default '0',
  `FLD_CSS_INTERACTION_TYPE` tinyint(1) unsigned default '0',
  `FLD_CSS_ISSUE_CATEGORY` tinyint(1) unsigned default '0',
  `FLD_CSS_ISSUE_TAT_QTY` tinyint(1) unsigned default '0',
  `FLD_CSS_ISSUE_TAT_UNIT` varchar(6) default NULL,
  `FLD_CSS_ISSUE_SLA_QTY` tinyint(1) unsigned default '0',
  `FLD_CSS_ISSUE_SLA_UNIT` varchar(6) default NULL,
  PRIMARY KEY (`FLD_MSG_ID`),
  KEY `TREE_POS` (`FLD_TREE_POS`),
  KEY `FLD_MEDIUM` (`FLD_MEDIUM`),
  KEY `FLD_MSG_TYPE` (`FLD_MSG_TYPE`),
  KEY `FLD_PARENT_ID` (`FLD_PARENT_ID`),
  KEY `FLD_ADD_DATE_TIME` (`FLD_ADD_DATE_TIME`),
  KEY `FLD_CONTACT_PID` (`FLD_CONTACT_PID`),
  KEY `FLD_ASSIGNED_TO` (`FLD_ASSIGNED_TO`),
  KEY `FLD_THREAD_ID` (`FLD_THREAD_ID`),
  KEY `FLD_EMAIL_FROM` (`FLD_EMAIL_FROM`),
  KEY `FLD_TICKET_NUM` (`FLD_TICKET_NUM`),
  KEY `FLD_MARK_AS_DELETED` (`FLD_MARK_AS_DELETED`),
  KEY `FLD_ACC_ID` (`FLD_ACC_ID`),
  KEY `FLD_BLOCK_STATE` (`FLD_BLOCK_STATE`),
  KEY `FLD_FWD_FLAG` (`FLD_FWD_FLAG`),
  KEY `FLD_ISSUE_CLOSED` (`FLD_ISSUE_CLOSED`),
  KEY `FLD_REPLY_FLAG` (`FLD_REPLY_FLAG`),
  KEY `FLD_SUBJECT` (`FLD_SUBJECT`),
  KEY `FLD_WORKFLOW_TYPE` (`FLD_WORKFLOW_TYPE`),
  KEY `FLD_PRODUCT_ID` (`FLD_PRODUCT_ID`),
  KEY `FLD_SUB_CONTACT_ID` (`FLD_SUB_CONTACT_ID`),
  KEY `FLD_ESCALATED_FLAG` (`FLD_ESCALATED_FLAG`),
  KEY `FLD_BOUNCED_MAIL_FLAG` (`FLD_BOUNCED_MAIL_FLAG`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

- Original Message - From: Alex Arul To: Ratheesh K
Index_Merge : Very slow
Hello All, our queries were running fine on MySQL 4.1.11. Since we upgraded to MySQL 5.0.22, the same queries are taking a very long time to execute. Running an EXPLAIN on the queries shows index_merge in the type column, with the intersect algorithm shown in the Extra column of the output. Previously (MySQL 4.1.11), EXPLAIN showed the primary key being used as the index. How can this be resolved? Thanks, Ratheesh K J
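In MySQL 5.0 there is no server option to switch off index_merge globally, but a query can be steered away from it with index hints. A sketch using the table and index names from this thread (the WHERE clause is abbreviated to the filter columns that matter here):

```sql
SELECT COUNT(*)
FROM TBL_FORUMS_MSG_MAIN TFMM
     IGNORE INDEX (FLD_BLOCK_STATE, FLD_MEDIUM, FLD_MARK_AS_DELETED,
                   FLD_BOUNCED_MAIL_FLAG, FLD_PARENT_ID)
WHERE TFMM.FLD_ACC_ID IN (6, 375)
  AND TFMM.FLD_PARENT_ID = 0
  AND TFMM.FLD_MARK_AS_DELETED = 0;
-- With the low-selectivity single-column indexes ignored, the optimizer
-- falls back to one index (e.g. FLD_ACC_ID), which is often faster than
-- 'Using intersect(...)' across five flag indexes.
```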
Re: Innodb log sequence error - urgent
Thanks. I have the previous ib_log* files on the app server, and everything in the cnf file was correct; only the ib_log file size was a mismatch. What's the best workaround? Can I copy the log files from the app server to the DB server, change innodb_log_file_size to 256M, and then restart the MySQL server? If I do so, will I lose the updates to the database that happened today? So my actual problem is this: I have two sets of ib_logfile* files.
(a) ib_logfile0, ib_logfile1, ib_logfile2 on the app server, each 257M (when I did a du -sh). In the my.cnf file of the app server, innodb_log_file_size is set to 256M.
(b) ib_logfile0, ib_logfile1, ib_logfile2 on the DB server, each 5M. These log files were created freshly by the MySQL server, as the log files from the app server were not copied to the DB server. In the my.cnf file of the DB server, innodb_log_file_size was set to 5M by mistake. All the other settings were the same as on the app server.
The ibdata1 file is that of the app server, and I get the log sequence errors shown in my previous post. But everything seems to be working fine; there have been no problems accessing the data. What can I possibly do to get everything right? How can I correct the log sequence error? Should the log files of the app server be on the DB server? Thanks, Ratheesh K J
- Original Message - From: Jan Kirchhoff [EMAIL PROTECTED] To: Ratheesh K J [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Sent: Monday, December 11, 2006 1:25 PM Subject: Re: Innodb log sequence error - urgent
Ratheesh K J schrieb: Hello all, yesterday we separated our app server and db server. We moved our 70GB of data from our app server to a new DB server. We installed MySQL 4.1.11 on the DB server. Now the following happened. On the DB server the ibdata1 and all the databases are the old ones (which were copied from the app server).
But when MySQL was installed, ib_logfile0, ib_logfile1 and ib_logfile2 were created freshly on the DB server, each with 5M size; on the app server these files were 256M in size (innodb_log_file_size = 256M), while on the DB server it is innodb_log_file_size = 5M. This morning when I checked the error log, there were a lot of error messages flowing in:
061211 11:41:47 InnoDB: Error: page 203046 log sequence number 87 3002891543
InnoDB: is in the future! Current system log sequence number 86 4025048037.
InnoDB: Your database may be corrupt.
You cannot just copy InnoDB databases to other servers without adjusting your my.cnf: once you have created an InnoDB database, you cannot change parameters like innodb_log_file_size any more. (This is explained in the manual; you should read the chapter about backing up and restoring InnoDB databases.) So when you copy the database to the new server, be sure to copy the settings from the my.cnf too! Jan
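For the record, the usual safe procedure for changing innodb_log_file_size looks like the sketch below. The paths and init-script name are assumptions for a typical Linux install, and taking a full backup first is strongly advised:

```
# 1. Shut the server down cleanly, so all changes are checkpointed
#    into the data files and the old logs can be discarded.
mysqladmin -u root -p shutdown

# 2. Move the old ib_logfile* out of the way; keep them until the
#    server has restarted cleanly.
mv /var/lib/mysql/ib_logfile* /root/ib_logfile_backup/

# 3. Edit my.cnf:  innodb_log_file_size = 256M

# 4. Start the server; InnoDB creates new log files at the new size.
/etc/init.d/mysql start
```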
Innodb log sequence error - urgent
Hello all, yesterday we separated our app server and db server. We moved our 70GB of data from our app server to a new DB server. We installed MySQL 4.1.11 on the DB server. Now the following happened. On the DB server the ibdata1 and all the databases are the old ones (which were copied from the app server). But when MySQL was installed, ib_logfile0, ib_logfile1 and ib_logfile2 were created freshly on the DB server. Each of these log files was created with 5M size; on the app server these files were 256M in size (innodb_log_file_size = 256M). On the DB server it is innodb_log_file_size = 5M. This morning when I checked the error log, there were a lot of error messages flowing in:
061211 11:41:47 InnoDB: Error: page 203046 log sequence number 87 3002891543
InnoDB: is in the future! Current system log sequence number 86 4025048037.
InnoDB: Your database may be corrupt.
061211 11:41:48 InnoDB: Error: page 90766 log sequence number 87 2007657570
InnoDB: is in the future! Current system log sequence number 86 4025048133.
InnoDB: Your database may be corrupt.
061211 11:41:48 InnoDB: Error: page 101643 log sequence number 87 1555755135
InnoDB: is in the future! Current system log sequence number 86 4025048213.
InnoDB: Your database may be corrupt.
061211 11:41:48 InnoDB: Error: page 126123 log sequence number 87 2434816015
InnoDB: is in the future! Current system log sequence number 86 4025048253.
InnoDB: Your database may be corrupt.
061211 11:41:48 InnoDB: Error: page 91391 log sequence number 87 3435504059
InnoDB: is in the future! Current system log sequence number 86 4025048310.
InnoDB: Your database may be corrupt.
061211 11:41:49 InnoDB: Error: page 7520 log sequence number 87 558983226
InnoDB: is in the future! Current system log sequence number 86 4025049185.
InnoDB: Your database may be corrupt.
061211 11:41:49 InnoDB: Error: page 70232 log sequence number 87 3176686221
InnoDB: is in the future! Current system log sequence number 86 4025049185.
InnoDB: Your database may be corrupt.
061211 11:41:49 InnoDB: Error: page 9339 log sequence number 87 3426386305
InnoDB: is in the future! Current system log sequence number 86 4025051173.
InnoDB: Your database may be corrupt.
It's not affecting the database as such till now.
1. What is the actual problem?
2. What is the possible workaround?
No of columns in a table
I need information on the maximum number of columns in a table in MySQL with different engines. Where can I get it? Thanks and regards, CPK
MySQL Error for Blobs
I got an error while inserting a BMP image of 1.7MB size into a mediumblob field through MS Access 2003 with MyODBC 3.51.12 on Windows XP SP2, and also with MySQL Query Browser and the Navicat GUI tool. Navicat returned the error 'Got a packet bigger than max_allowed_packet bytes'. What does this mean? Access and Query Browser simply give the error 'MySQL has gone away'. Why? Please help. Also, if we have to insert images or other objects larger than 1 MB, what do we have to do, using the ODBC driver with or without MS Access / VB.net? Thanks CPK
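For reference, a "packet" here is one client/server message, so a single-row INSERT carrying a 1.7MB BLOB must fit within max_allowed_packet (which defaulted to 1M in servers of that era). A sketch of the fix, using only standard variables:

```sql
-- See the current limit (in bytes):
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it at runtime (needs the SUPER privilege; reverts on restart,
-- and only new connections pick up the new value):
SET GLOBAL max_allowed_packet = 16777216;  -- 16M

-- To make it permanent, set it in my.cnf / my.ini under [mysqld]:
--   max_allowed_packet = 16M
```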
check bugs.
Please check these bug reports and give an answer: Bug ID 24216 and 24216. CPK
Views accessed as table in MS Access
Hello, I am using MySQL 5.0.17 and MyODBC 3.51.12 with MS Access 2003/XP with Jet 4.0. Currently it is not giving the usual Access problem while inserting new records (#deleted), but when a view created in the database is linked in MS Access, it does not give correct results. E.g., there are two tables, 'Accdocs' and 'Accdoctransactions'. Each Accdoctransactions row has a reference in Accdocs (MasterDocId to AccdocId), and I have to access all records from Accdocs and Accdoctransactions. When I create a view for this, it gives correct results when opened in MySQL Query Browser. But if I link that view as a linked table in any MS Access file, it does not give me correct results, even though building the same view as an Access query gives the same results as Query Browser. What is going wrong? If I use MS SQL Server 2000, it gives correct results for the same situation. Is it a MySQL ODBC driver problem, and if so, is it solved in the latest ODBC 5 version? Or is it an Access error? Please help, it's urgent. Thanks, CPK
Which to download
Hello all, I just wanted to know what the difference is between the "Red Hat Enterprise Linux 3 RPM (x86)" downloads and the "Linux x86 generic RPM (dynamically linked)" downloads. Which one should I download for a RHEL 3 system? Thanks, Ratheesh Bhat K J
Re: Which to download
Hi Jacques, thanks for your response. I can see the NDB Cluster storage engine rpms under the "Linux x86 generic RPM (dynamically linked)" downloads but not under the "Red Hat Enterprise Linux 3 RPM (x86)" downloads. What if we need that for our system later? What exactly is the difference between the two sets of rpms, and on which systems are the "Linux x86 generic RPM (dynamically linked)" rpms meant to be installed? Thanks, Ratheesh K J
- Original Message - From: Jacques Marneweck [EMAIL PROTECTED] To: Ratheesh K J [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Sent: Thursday, October 19, 2006 3:16 PM Subject: Re: Which to download
Ratheesh K J wrote: Hello all, I just wanted to know what the difference is between the "Red Hat Enterprise Linux 3 RPM (x86)" downloads and the "Linux x86 generic RPM (dynamically linked)" downloads. Which one should I download for a RHEL 3 system? Hi Ratheesh, Use the RHEL3 one. Regards --jm Thanks, Ratheesh Bhat K J -- Jacques Marneweck http://www.powertrip.co.za/ http://www.powertrip.co.za/blog/ #include std/disclaimer.h
SQL Query help
Friends, I am developing a database for accounting software, and I have a problem calculating daily balances for all ledgers. I am using Access 2003 as the frontend. While designing, I found that maintaining daily balances to the client's requirements is impractical; the only solution I have found is to execute two SQL queries 365 times to calculate opening and closing balances. What I need is a hint/example of a function or SQL statement that runs these queries in a single pass, or in minimum iterations.
Table format:
LedgerID | Opening Credit | Opening Debit | Current Credit | Current Debit | Closing Credit | Closing Debit | Date
The previous date's closing balance should be the opening balance for the next date. Please suggest an answer. Thanks, CPK
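One way to get a year of running balances in a single pass per ledger is MySQL's user variables, carrying each date's closing balance forward as the next date's opening. A sketch; `t_daily` and its columns are illustrative names, not from the original schema, and the technique relies on left-to-right evaluation of user variables, which works in practice on these versions but is not guaranteed by the manual:

```sql
-- Assumed table: t_daily(LedgerID, TxnDate, CurCredit, CurDebit)
SET @bal := 0;

SELECT TxnDate,
       @bal                                 AS Opening,
       CurCredit                            AS CurrentCredit,
       CurDebit                             AS CurrentDebit,
       @bal := @bal + CurCredit - CurDebit  AS Closing
FROM t_daily
WHERE LedgerID = 1
ORDER BY TxnDate;
```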
Mysql Stat - Help required
Hello, I got these stats from MySQL. I want to know if there is something to worry about, especially the number of temp tables.
Key Reads/Key Read Requests = 0.007094 (Cache hit = 99.992906%)
Key Writes/Key Write Requests = 0.239130
Connections/second = 8.741 (/hour = 31467.279)
KB received/second = 0.730 (/hour = 2628.283)
KB sent/second = 1.795 (/hour = 6460.812)
Temporary Tables Created/second = 0.914 (/hour = 3290.991)
Opened Tables/second = 0.616 (/hour = 2217.483)
Slow Queries/second = 0.014 (/hour = 48.639)
% of slow queries = 0.032%
Queries/second = 41.838 (/hour = 150618.094)
We have the tmp_table_size variable set to 64M, and we saw many temp tables created on disk within about 3-5 seconds. How do we determine the optimum value for tmp_table_size? Thanks, Ratheesh Bhat K J
Re: Mysql Stat - Help required
I have checked the queries; they look quite OK. We do have a lot of CREATE TEMPORARY TABLE statements running every second. How else can tmp tables be created? What is the exact relation between a missing index and tmp tables being created? Do you mean in sorting?
- Original Message - From: Chris [EMAIL PROTECTED] To: Ratheesh K J [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Sent: Wednesday, September 27, 2006 12:50 PM Subject: Re: Mysql Stat - Help required
Ratheesh K J wrote: Hello, I got these stats from MySQL. I want to know if there is something to worry about, especially the number of temp tables.
That looks like a sign of missing indexes. Temporary tables shouldn't be created that often if your queries are indexed properly, even though they're not showing up as slow queries. It'll be a pain, but you could write a script to go through your queries, EXPLAIN them, and see which ones are using temp tables and whether they need additional indexes.
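To put a number on the disk-spill rate (the thing tmp_table_size actually controls), compare the two server counters:

```sql
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
-- Created_tmp_disk_tables / Created_tmp_tables = fraction of implicit
-- temporary tables that spilled to disk.  Two caveats: the in-memory
-- limit is min(tmp_table_size, max_heap_table_size), and any temp table
-- containing TEXT/BLOB columns goes to disk regardless of the limit.
```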
key_buffer_size - need help
Hello, we have all our tables as InnoDB. As I understand it, the key_buffer_size variable is used only for MyISAM tables. Currently we have this variable set to 256M on a 4GB RAM machine. Only the temporary tables created using CREATE TEMPORARY TABLE will be of MyISAM type, and there are a lot of temporary tables being created in our system. What is an optimal value for key_buffer_size in such a case? Ratheesh Bhat K J
Moving database to another machine
Hello all, I want to move around 50 GB of data from Machine A to Machine B. Both machines have the same architecture (LAMP). Dumping and restoring takes a lot of time. Is there a faster method? Is there a way to tar the whole thing and untar it on Machine B, or any other way? Ratheesh Bhat K J
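A faster alternative to dump/restore for a like-for-like move is a cold copy of the data directory. This is only valid after a clean shutdown, and only if my.cnf (especially InnoDB settings such as innodb_log_file_size) matches on both machines. A sketch with assumed paths and hostnames:

```
# On machine A: stop MySQL cleanly, then stream the datadir across.
mysqladmin -u root -p shutdown
tar czf - -C /var/lib mysql | ssh machineB 'tar xzf - -C /var/lib'

# On machine B: restore ownership and start the server.
chown -R mysql:mysql /var/lib/mysql
/etc/init.d/mysql start
```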
How to find the no of Inserts and selects
Hello all, I need to find out, to some degree of accuracy, the ratio of inserts/updates to selects. This is because we are thinking of setting up a replication system with one master and one slave. Replication would be optimal only if select queries dominate, so how would I determine this figure? Ratheesh Bhat K J
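The server keeps cumulative per-statement counters that answer this directly. Sampling them twice, say an hour apart, and diffing the values gives the ratio over a representative window rather than since server start:

```sql
SHOW STATUS LIKE 'Com_select';
SHOW STATUS LIKE 'Com_insert';
SHOW STATUS LIKE 'Com_update';
SHOW STATUS LIKE 'Com_delete';
-- read/write ratio over a window =
--   delta(Com_select) /
--   (delta(Com_insert) + delta(Com_update) + delta(Com_delete))
```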
Re: Seperating Application server and Database server
I am a bit confused here! We thought of separating our app server and DB server because we spotted some performance problems. There are queries taking about 40 seconds to fetch about 300-odd rows (I don't know if I can relate this to a fight between app and db for CPU and/or memory). We have 4GB RAM. When we look at the output of top on the box, it shows this:

98 processes: 97 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user  nice  system  irq   softirq  iowait  idle
             total  2.9%  0.0%  0.2%    0.0%  0.0%     10.9%   85.8%
             cpu00  1.8%  0.0%  0.6%    0.0%  0.2%     2.0%    95.4%
             cpu01  4.0%  0.0%  0.2%    0.0%  0.0%     19.8%   76.0%
             cpu02  5.8%  0.0%  0.0%    0.0%  0.0%     2.4%    91.8%
             cpu03  0.0%  0.0%  0.2%    0.0%  0.0%     19.6%   80.2%
Mem: 4114248k av, 4087836k used, 26412k free, 0k shrd, 78148k buff
     3185764k actv, 363900k in_d, 97588k in_c
Swap: 2048248k av, 228832k used, 1819416k free, 2495180k cached

I have a question here: of the available 4GB I can see only 26412k (25M) free, and there are around 30 MySQL threads each showing around 30% mem, as can be seen below:

PID   USER  PRI NI SIZE  RSS  SHARE STAT %CPU %MEM TIME   CPU COMMAND
24583 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 11:11  0   mysqld
24584 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:07   2   mysqld
24585 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:02   0   mysqld
24586 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:05   1   mysqld
24587 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 228:05 1   mysqld
24588 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 2:50   0   mysqld
24589 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 7:07   0   mysqld
24590 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 5:33   3   mysqld
24591 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 6:35   3   mysqld
24594 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 4:45   0   mysqld
27078 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 8:27   1   mysqld
27330 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 16:01  2   mysqld
29496 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 24:27  0   mysqld
26657 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 2:51   0   mysqld
28535 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:03   0   mysqld
30578 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 1:42   3   mysqld
1664  mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:29   3   mysqld
2546  mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:21   2   mysqld
2767  mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:23   3   mysqld
2770  mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:00   0   mysqld
2772  mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:00   2   mysqld
16895 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 2:26   2   mysqld
17058 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 1:35   2   mysqld
17181 mysql 15  0  1222M 1.2G 3296  S    0.8  30.4 1:02   1   mysqld
17182 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 1:15   2   mysqld
17267 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:44   2   mysqld
17300 mysql 15  0  1222M 1.2G 3296  S    0.6  30.4 1:10   3   mysqld
17701 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:24   0   mysqld
18018 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:05   0   mysqld
18019 mysql 15  0  1222M 1.2G 3296  S    0.0  30.4 0:23   1   mysqld

But there are no queries running and no major applications running either. Then how is it that a lot of RAM is being used up? I'm totally clueless as to what is happening. In such a case, should we be thinking of increasing the RAM capacity, or is separating the app and DB servers better? Any suggestion will be helpful. Thanks, Ratheesh K J
- Original Message - From: Brent Baisley [EMAIL PROTECTED] To: Ratheesh K J [EMAIL PROTECTED]; mysql@lists.mysql.com Sent: Monday, August 28, 2006 7:33 PM Subject: Re: Seperating Application server and Database server
In addition to what the others told you, security would be a big reason. If the application server is compromised, whether through your application code or some other service on the box, then they also have local access to the database files. If you set up a database server, you can open just the port(s) that are required to communicate with the database and allow only the IP address of the application server. Yes, if the application server is compromised, it can be used to connect to the database server, but at least it's another hurdle.
Also, by splitting them, it allows much more flexibility, For instance, what if there is a problem with the application server or you want to upgrade it? If everything is on one box, you would need to take the database down with everything else even though you are not changing anything
Re: Anyone tried solidDB for MySQL?
What is this solidDB? Where can I get more info on it? Thanks, Ratheesh Bhat K J
what should be the value of innodb_flush_log_at_trx_commit
Hello all, we do not run transactions at all on our db; all our queries are autocommit. So what should this variable be set to: innodb_flush_log_at_trx_commit? We currently have it set to 1, and all our tables are InnoDB. Since we are not running any transactions at all, is it better to set this variable to 0 or 2? Thanks, Ratheesh Bhat K J
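For reference, the documented meaning of the three values; note the variable applies to every InnoDB write even under autocommit, since each autocommit statement is its own transaction:

```ini
[mysqld]
# 1 = flush the log to disk at every commit: full durability, slowest.
# 2 = write at commit, flush about once per second: can lose up to ~1s
#     of changes if the OS or hardware crashes (a mysqld crash is safe).
# 0 = write and flush about once per second: can lose up to ~1s of
#     changes even on a plain mysqld crash.
innodb_flush_log_at_trx_commit = 2
```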
Seperating Application server and Database server
Hello all, currently our application and MySQL server are on the same machine. When should these be separated? What are the main reasons for having a separate DB server? Ratheesh Bhat K J
Query takes different times for execution...
Hello all, I want to know why a select query takes, say, 18 sec to execute the first time I run it and then, say, 10 sec for every successive execution. I have disabled query caching on the server; if it's not the query cache, what else is causing this variation in execution time?
1) I would also like to know something about table caching (what exactly gets cached there?) and its advantages.
2) What is an optimal join order? Should a smaller result be joined with a bigger one or vice versa, or how does MySQL decide?
Thanks, Ratheesh Bhat K J
Some questions on Storage engine
Hello all, I have a couple of questions on storage engine types with respect to performance: 1) Will there be any performance degradation when we do joins between tables with different storage engines? 2) Where are temporary tables created? (And why not the MEMORY storage engine by default?) Would it help if all the temp tables were created with the MEMORY storage engine? Thanks, Ratheesh Bhat K J
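For the explicit route, a sketch (table and column names are made up for illustration):

```sql
-- Explicitly create an in-memory temporary table
CREATE TEMPORARY TABLE tmp_results (
    id  INT UNSIGNED NOT NULL,
    cnt INT UNSIGNED NOT NULL
) ENGINE = MEMORY;

-- Caveat: MEMORY tables cannot hold BLOB/TEXT columns. If a query's
-- implicit temporary table needs them (or grows past tmp_table_size),
-- MySQL falls back to an on-disk temporary table regardless.

-- Watch how often implicit temp tables spill to disk
SHOW STATUS LIKE 'Created_tmp%';
```

Comparing Created_tmp_disk_tables against Created_tmp_tables over time gives a rough idea of whether on-disk temporary tables are actually a problem on your server.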
Buffer size for innodb tables
Hello All, I wanted to know the best size for the InnoDB buffer pool. We are currently running MySQL 4.1.11, and we have set the buffer size to 1 GB: innodb_buffer_pool_size = 1G. The system has 4 GB RAM. 1) In such a case, is the above setting OK? 2) All the tables are of InnoDB type. 3) We are using the same server for Apache and MySQL. Ratheesh Bhat K J
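A rough way to see whether a 1 GB pool is under pressure, sketched below. One caveat: the per-counter Innodb_buffer_pool_* status variables appeared in MySQL 5.0; on 4.1 the same numbers are available in the BUFFER POOL AND MEMORY section of SHOW INNODB STATUS.

```sql
-- Hit rate: reads served from the pool vs reads that went to disk
SHOW STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW STATUS LIKE 'Innodb_buffer_pool_reads';

-- Free vs total pages in the pool
SHOW STATUS LIKE 'Innodb_buffer_pool_pages_free';
SHOW STATUS LIKE 'Innodb_buffer_pool_pages_total';

-- On MySQL 4.1, use this instead and read the buffer pool section:
SHOW INNODB STATUS;
```

A high ratio of read_requests to reads (and some free pages) suggests the pool is adequate; since Apache shares the box, leaving headroom out of the 4 GB for it and for the OS cache matters as much as the pool size itself.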
Adding index -- Need help
Hello All, I need a suggestion on this. We have tables with a very small number of rows (less than 600), and for a long period of time the row counts are going to remain almost the same. 1) Is it better to index the columns of such tables? 2) There are joins on these tables, and then a search on certain fields in the WHERE clause. Does an index make a difference? Example: TBL_XXX and TBL_YYY are related through FLD_ACC_ID. This field is indexed on both tables; none of the others are. Would adding indexes help on such tables (only 342 rows in both)?

DESC TBL_XXX; -- 342 rows
FLD_EMAIL_ID varchar(150) PRI
FLD_ACC_ID int(11) unsigned PRI 0
FLD_PRIMARY_FLAG tinyint(1) YES 0

DESC TBL_YYY; -- 342 rows
FLD_ACC_ID int(11) unsigned PRI 0
FLD_ACC_NAME varchar(32) YES \N
FLD_MAIN_ZONE tinyint(1) 0
FLD_FOR_ZONE_ID int(10) unsigned YES 0
FLD_STATE tinyint(1) unsigned 0
FLD_DOMAIN_ID tinyint(4) unsigned YES 0
FLD_SCAN_CUST_SERIAL_NUM_FLAG tinyint(1) unsigned YES 0
FLD_SCAN_CUST_NO_SUPPORT_FLAG tinyint(1) unsigned YES 0
FLD_INBOX varchar(64) YES \N
FLD_INBOX_PASSWD varchar(16) YES \N
FLD_INBOX_LOCK_FLAG tinyint(1) YES 0
FLD_INBOX_LOCK_DATE_TIME datetime YES \N
FLD_EMAIL_INBOX_OUTWARD varchar(32) YES \N
FLD_WEBMAIL_INDIVIDUAL_FLAG tinyint(1) unsigned YES 0
FLD_WEBMAIL_INDIVIDUAL_TYPE tinyint(5) unsigned YES 0
FLD_EMAIL_DONT_PROCESS_FLAG tinyint(1) unsigned 0
FLD_ATTACH_MAX_UPLD_SIZE tinyint(2) unsigned YES 0
FLD_ATTACH_MAX_RECV_SIZE tinyint(2) unsigned YES 0
FLD_ADD_BY int(11) 0
FLD_ADD_DATE_TIME datetime -00-00 00:00:00
FLD_UPDATE_BY int(11) YES \N
FLD_UPDATE_DATE_TIME datetime YES \N
FLD_PARENT_ID int(11) unsigned YES 0
FLD_THREAD_ID int(11) unsigned YES 0
FLD_POS tinyint(5) unsigned YES 0
FLD_LEVEL tinyint(5) unsigned YES 0
FLD_OWNER_COMPANY_ID int(10) unsigned 0
FLD_FOR_COMPANY_ID int(10) unsigned YES 0
FLD_NO_DIRECT_SUBMISSION_FLAG tinyint(1) YES 0

Thanks, Ratheesh Bhat K J
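One way to answer this empirically is to let EXPLAIN decide; a sketch using the table names from the mail (the join columns and the FLD_STATE filter are my assumptions about a typical query here):

```sql
-- See how the optimizer handles the join today
EXPLAIN
SELECT x.FLD_EMAIL_ID, y.FLD_ACC_NAME
FROM TBL_XXX x
JOIN TBL_YYY y ON y.FLD_ACC_ID = x.FLD_ACC_ID
WHERE y.FLD_STATE = 1;

-- If the WHERE column shows type=ALL and you want to try an index:
ALTER TABLE TBL_YYY ADD INDEX idx_state (FLD_STATE);
-- then re-run the EXPLAIN and compare.
```

At 342 rows the whole table fits in a page or two, so a full scan is often as cheap as an index lookup and the optimizer may simply ignore the new index; the EXPLAIN output will show whether it gets used at all.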
Cardinality
Hello all, I need an explanation for this. I ran the following: SELECT DISTINCT COLUMN1 FROM TBL_XXX; and got this result: 1 2 3 4 5 7 8 10 11 12 13 14 16 17 18 19 20 21 23 24 25 26 27 28 29 30 -- 26 rows in total. Now, when I looked at the cardinality of this column (COLUMN1) by doing a SHOW INDEX on TBL_XXX, it shows 93. How can this be possible, as cardinality should be the number of distinct values (26 in this case) for that column, right? I also ran ANALYZE TABLE on TBL_XXX; the result is still the same. Thanks, Ratheesh Bhat K J
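A quick way to compare the two numbers side by side, sketched below. The key point is that for InnoDB the Cardinality column of SHOW INDEX is an estimate derived from a few random index dives, not an exact count, so it routinely differs from COUNT(DISTINCT ...) and ANALYZE TABLE only refreshes the estimate:

```sql
-- Exact number of distinct values
SELECT COUNT(DISTINCT COLUMN1) FROM TBL_XXX;

-- Optimizer's estimate (the Cardinality column)
SHOW INDEX FROM TBL_XXX;

-- Refresh the statistics; for InnoDB the result stays approximate
ANALYZE TABLE TBL_XXX;
```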
Table size??
Hello all, I just wanted to know when a table should be considered for partitioning (or should it be archiving?). Almost all of our tables are of InnoDB type. I am looking for an estimate rather than a "depends on the situation" kind of answer. A few of our tables are very large in terms of number of rows: 70 lakh (7 million) rows. Should row count be the factor for table partitioning, or should the actual data size be the factor? 1) In that case, for a system with 80 GB reserved for MySQL, when should we worry about table sizes? 2) We have 3 specific tables of size 5 GB, 3 GB and 1.7 GB respectively. The first two of them have more than 70 lakh (7 million) rows. As a preventive measure, what would be the best way to optimize these tables? Thanks, Ratheesh Bhat K J
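Worth noting that native partitioning only arrived in MySQL 5.1, so on older servers archiving is the realistic option. A common archiving pattern, sketched with made-up table and column names:

```sql
-- Archive table with the same structure as the big one
CREATE TABLE big_table_archive LIKE big_table;

-- Copy old rows out, then remove them from the hot table
INSERT INTO big_table_archive
SELECT * FROM big_table WHERE created_at < '2006-01-01';

DELETE FROM big_table WHERE created_at < '2006-01-01';

-- Reclaim space and refresh index statistics afterwards
OPTIMIZE TABLE big_table;
```

In practice the copy/delete is done in batches (e.g. LIMIT 10000 in a loop) to keep transactions and locking small on a table this size.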
Another question on Cardinality??
Hello all, Another question on cardinality. Consider a table with 1000 rows and columns as below: FLD_1 - int - cardinality 1000 - PRIMARY KEY; FLD_2 - tinyint - cardinality 400; FLD_3 - varchar - cardinality 10; FLD_4 - varchar - cardinality 2; FLD_5 - varchar - cardinality 5. Assuming that cardinality is exactly the number of distinct values for a column, which fields are best to index for this table? Is it meaningful for me to index FLD_3, FLD_4 and FLD_5, knowing that their cardinality is always going to stay the same? There are certain scenarios where I run queries on the table as below: 1) SELECT * FROM table WHERE FLD_4 = 1; 2) SELECT * FROM table WHERE FLD_5 = 3; 3) SELECT * FROM table WHERE FLD_3 1 AND FLD_5 6; Considering all the above cases, what should I conclude? Should I have indexes on these three fields? I am looking for a specific answer rather than a "depends on the situation" kind of answer. Thanks, Ratheesh Bhat K J
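A sketch of how selectivity could guide the decision (the row and cardinality figures come from the mail above; the table name t is a placeholder, and the rule of thumb is an assumption rather than a hard limit):

```sql
-- Selectivity = distinct values / total rows:
-- FLD_2: 400/1000 = 0.40  -> usually worth an index
-- FLD_3:  10/1000 = 0.01  -> marginal
-- FLD_4:   2/1000 = 0.002 -> an index rarely helps: ~500 rows match
--                            each value, so a full scan is comparable

-- Let EXPLAIN decide on the actual data:
EXPLAIN SELECT * FROM t WHERE FLD_4 = 1;
-- type=ALL in the output means the optimizer chose a full scan
-- even if an index on FLD_4 exists.
```

That said, at 1000 rows the whole table is tiny, so for these low-cardinality varchar columns the honest specific answer is usually that the indexes cost maintenance without measurably speeding up the queries.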