Re: Tracking changes in large datasets over time
On 7/19/08, Rob Wultsch [EMAIL PROTECTED] wrote:
> On Sat, Jul 19, 2008 at 6:54 AM, Jake Peavy [EMAIL PROTECTED] wrote:
>> I may be wrong, but I think you could accomplish this through the use of triggers. Triggers are designed to monitor data change activity.
>>
>> -jp
>
> Good thought. Using triggers to keep track of changes in an intermediary table allows for good performance using the approach shown below. I think that should work well, or at least be a good start...

I guess this is where we make assumptions about the OP's schema and the details of his methodology. My assumption is that this is simpler than your solution. The users table currently exists and is seeded with 500,000 users. I would use a straight SELECT to baseline the user_changelog, then install the trigger on users and let it do all the work from then on. The only question is what happens when a new user is created; a second INSERT trigger might be required. Of course, you could also store the pre-UPDATE point balance.

mysql> DROP TABLE IF EXISTS users, user_changelog;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> DROP TRIGGER IF EXISTS user_trigger;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CREATE TABLE users (userid INT PRIMARY KEY AUTO_INCREMENT, point_balance INT DEFAULT 0);
Query OK, 0 rows affected (0.06 sec)

mysql> INSERT INTO users VALUES (1,10*RAND()),(2,10*RAND()),(3,10*RAND()),(4,10*RAND()),(5,10*RAND());
Query OK, 5 rows affected (0.03 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> CREATE TABLE user_changelog (userid INT, point_balance INT, update_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP, INDEX id (userid));
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO user_changelog (userid, point_balance) SELECT * FROM users;
Query OK, 5 rows affected (0.03 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> CREATE TRIGGER user_trigger AFTER UPDATE ON users FOR EACH ROW INSERT INTO user_changelog (userid, point_balance) VALUES (NEW.userid, NEW.point_balance);
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT SLEEP(15);
+-----------+
| SLEEP(15) |
+-----------+
|         0 |
+-----------+
1 row in set (15.03 sec)

mysql> UPDATE users SET point_balance=11 WHERE userid=1;
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM users;
+--------+---------------+
| userid | point_balance |
+--------+---------------+
|      1 |            11 |
|      2 |             7 |
|      3 |             6 |
|      4 |             8 |
|      5 |             0 |
+--------+---------------+
5 rows in set (0.00 sec)

mysql> SELECT * FROM user_changelog;
+--------+---------------+---------------------+
| userid | point_balance | update_time         |
+--------+---------------+---------------------+
|      1 |             7 | 2008-07-20 09:30:57 |
|      2 |             7 | 2008-07-20 09:30:57 |
|      3 |             6 | 2008-07-20 09:30:57 |
|      4 |             8 | 2008-07-20 09:30:57 |
|      5 |             0 | 2008-07-20 09:30:57 |
|      1 |            11 | 2008-07-20 09:31:12 |
+--------+---------------+---------------------+
6 rows in set (0.00 sec)

Anyway, this should be enough to get the OP going.

-jp

--
I hope that someday we will be able to put away our fears and prejudices and just laugh at people.
deepthoughtsbyjackhandy.com
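The two loose ends mentioned in the message above (covering newly created users, and storing the pre-UPDATE balance) could be sketched against the same schema as follows. These trigger bodies are assumptions and were not part of the tested session; the trigger names are made up:

```sql
-- Sketch: log newly created users with a second trigger, so the
-- changelog gets a row the moment a user is inserted.
CREATE TRIGGER user_insert_trigger AFTER INSERT ON users
FOR EACH ROW
  INSERT INTO user_changelog (userid, point_balance)
  VALUES (NEW.userid, NEW.point_balance);

-- Sketch: variant of the UPDATE trigger that stores the
-- pre-UPDATE balance instead; OLD holds the row as it was
-- before the change, NEW holds it as it is after.
CREATE TRIGGER user_update_trigger AFTER UPDATE ON users
FOR EACH ROW
  INSERT INTO user_changelog (userid, point_balance)
  VALUES (OLD.userid, OLD.point_balance);
```

Note that a table can have at most one trigger per timing and event in MySQL 5.0, so the OLD-based trigger here would replace the NEW-based user_trigger, not coexist with it.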
MySQL Magazine Summer Issue Released
The next issue of MySQL Magazine is now available for download. Get it while it is hot! At forty-two great pages it is our biggest and best issue yet. This issue is anchored by the first annual MySQL Usage Survey results. Downloads at the MySQL Magazine homepage: http://www.mysqlzine.net

Thanks to everyone who contributed. I couldn't have done it without you all!!

Keith Murphy
--
Editor, MySQL Magazine
[EMAIL PROTECTED]
http://www.mysqlzine.net
(850) 637-3877

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
MySQL server statistics
Hello,

I've installed MySQL server 5.0 and have written a small statistics script that regularly checks the number of connections and queries to the server, which I can then view in a diagram. But sometimes it only shows that, at some point in time, unusually many connections or queries were made to the server; I cannot see what caused them, neither the user nor the actual queries.

At work I recently came across Oracle Enterprise Manager. I haven't looked at it too closely yet, but I think it could give useful information about each session: what it does and, more importantly, what it did. I have no idea what to search for to get this kind of information from the MySQL server, so I had to ask here first. Is there any method to get those statistics? I don't mean the SHOW PROCESSLIST output; that only contains a snapshot of the very moment when MySQL got to execute my command. I mean information about recent activity, like the last 15 minutes or 2 hours.

--
Yves Goergen "LonelyPixel" [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de
Re: MySQL server statistics
On Sun, Jul 20, 2008 at 1:33 PM, Yves Goergen [EMAIL PROTECTED] wrote:
> Hello, I've installed MySQL server 5.0 and have written a small statistics script that regularly checks the number of connections and queries to the server, which I can then view in a diagram. But sometimes it just says that at a time, unusually many connections or queries have been made to the server. I cannot see what causes them. Neither the user nor the actual queries.
>
> At work I got in touch with the Oracle Enterprise Manager recently. I haven't looked at it too closely yet, but I think it could give useful information about each session, what it does and more importantly what it did. I have no idea what to search for to get this information from the MySQL server. So I had to ask here first. Is there any method to get those statistics? I don't mean the SHOW PROCESSES list, it only contains a snapshot of the very moment when MySQL got to execute my command. I mean information about recent activity, like 15 minutes, 2 hours or so.

Check out the general query log:
http://dev.mysql.com/doc/refman/5.0/en/query-log.html

--
Rob Wultsch
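As a sketch of how that log would be turned on under 5.0: the general query log cannot be enabled at runtime in that version (the general_log system variable only arrived in 5.1), so it goes in the server configuration at startup. The log path below is an example, not a required location:

```ini
# my.cnf -- enable the general query log at server startup.
# MySQL 5.0 has no runtime switch for this; restart required.
[mysqld]
log = /var/log/mysql/mysql-general.log
```

Be aware that on a busy server this logs every statement from every connection, so the file grows quickly; it is usually enabled only while investigating.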
Re: Suggestion Setting For Highload Server
Hi Mike,

I don't understand the part about deleting the 'deleted' rows. Can you explain that to me in more detail?

About the RAM: yes, we are going to upgrade it. The application opens 1000 simultaneous connections to service the requests.

As for the tables, we have 2 identical tables, and the flow is like this. When there is an incoming request, application A stores it in the first table, then parses it and redirects it to application B to get the response. After responding, application B stores the response in the second table. Application A keeps checking the second table for new rows; when it finds one, it copies it to the first table and pushes it to the external requester as the result. In simple words, there will always be 3 records for each transaction, and these 3 records share the same transaction ID.

I heard that the locking and unlocking method is only optimized for the InnoDB table type? Is that right?

Regards,
Willy

--
---
SMS SERVER, SMS BULK, SMS PREMIUM, WEB DESIGN, NETWORKING
Contact [EMAIL PROTECTED], 62 811 923 464, 62 21 944 8282 4
http://www.binbit.co.id
Re: Tracking changes in large datasets over time
On Sat, Jul 19, 2008 at 4:18 AM, Rob Wultsch [EMAIL PROTECTED] wrote:
> On Fri, Jul 18, 2008 at 3:46 PM, Rob Wultsch [EMAIL PROTECTED] wrote:
>> I'm sure there is a way to write this in a single query, but I bet it is ugly as heck, probably with a bunch of derived tables.
>>
>> INSERT INTO balances (userid, points)
>> SELECT users.userid, users.points
>> FROM users
>> LEFT JOIN (
>>     SELECT userid, MAX(timestamp_dump) AS 'timestamp_dump'
>>     FROM balances
>>     GROUP BY userid
>> ) AS b1 USING (userid)
>> LEFT JOIN balances b2 USING (userid, timestamp_dump)
>> WHERE users.points != b2.points OR b2.points IS NULL;
>>
>> This query is a dog. That stored procedure is pretty horrible.
>
> I'm going to play with trying to figure out how to optimize this. Fun stuff.

A better version of the query quoted above, broken up a bit:

DROP TABLE IF EXISTS balances_temp;

CREATE TEMPORARY TABLE balances_temp (
    userid INT,
    timestamp_dump TIMESTAMP,
    INDEX (userid)
)
SELECT userid, MAX(timestamp_dump) AS 'timestamp_dump'
FROM balances
GROUP BY userid;

INSERT INTO balances (userid, points)
SELECT users.userid, users.points
FROM users
LEFT JOIN balances_temp AS b1 USING (userid)
LEFT JOIN balances b2 USING (userid, timestamp_dump)
WHERE users.points != b2.points OR b2.points IS NULL;

This runs in ~30 seconds for 500k users, and will of course work with all versions of MySQL. I did not realize that derived table performance was that bad...

--
Rob Wultsch
Re: Insert ... select ... On Duplicate Update Question
On Sun, Jul 20, 2008 at 12:12 AM, mos [EMAIL PROTECTED] wrote:
> Is there a way to get Insert ... select ... On Duplicate Update to update the row with the duplicate key?

That's what it does.

> Why can't it do this?

What makes you think it can't?

- Perrin
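For reference, a minimal sketch of the statement being discussed; the table and column names here are made up for illustration:

```sql
-- Hypothetical tables: users and staging, both keyed on userid.
-- A row from staging whose userid already exists in users does
-- not fail the INSERT; the ON DUPLICATE KEY UPDATE clause fires
-- for it instead, and VALUES(points) refers to the value that
-- would have been inserted for that row.
INSERT INTO users (userid, points)
SELECT userid, points
FROM staging
ON DUPLICATE KEY UPDATE points = VALUES(points);
```

So an INSERT ... SELECT with ON DUPLICATE KEY UPDATE inserts new rows and updates existing ones in a single statement, exactly as Perrin says.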