RE: Using MySQL to store email
We have used a procmail and PHP combination to achieve this for our department mail management and associated workflow. Sendmail invokes procmail, which delivers the mail to the normal inbox and invokes a PHP script through Lynx. The PHP script opens an IMAP connection to the inbox, reads the mail, and puts it in a text column in the database. This is working successfully for all five of our departments, at 500-1000 mails a day. The exception is large attachments, where we hit limits (to some extent configurable, via max_allowed_packet and similar options) in PHP as well as MySQL.

vikash

-----Original Message-----
From: William R. Mussatto [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 27, 2003 12:47 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Using MySQL to store email

First, which MySQL: the program or the company? Assuming you mean a MySQL database, I think you will need to run the mail through a program which will parse it and load it into the database tables.

> Does anyone know where I can read more on how to redirect incoming email
> (via Sendmail) into MySQL for a given email address? I understand that I
> can use [EMAIL PROTECTED] | /path/mysql ... in the /etc/aliases file to
> do this, but I would like to see if anyone has had success with this. I
> also would like to know if this is not as simple as I may think.
> Thanks for any thoughts on this, Jeff

William R. Mussatto, Senior Systems Engineer
Ph. 909-920-9154 ext. 27, FAX 909-608-7061

-
Before posting, please check:
http://www.mysql.com/manual.php (the manual)
http://lists.mysql.com/ (the list archive)
To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
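For reference, the /etc/aliases pipe approach Jeff asks about can be sketched in a few lines. This is a minimal illustration, not the setup described above: the mail_store table and its schema are invented for the example, sqlite3 stands in for MySQL so the sketch is self-contained, and a real delivery script would read the raw message from sys.stdin.buffer and handle MIME attachments properly.

```python
import email
import sqlite3

def store_message(raw_bytes, conn):
    """Parse one raw RFC 822 message and insert it into mail_store."""
    msg = email.message_from_bytes(raw_bytes)
    body = ""
    # Take the first text/plain part as the body; real mail needs fuller
    # MIME handling (attachments, multipart/alternative, odd encodings).
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True) or b""
            body = payload.decode(part.get_content_charset() or "utf-8",
                                  errors="replace")
            break
    conn.execute(
        "INSERT INTO mail_store (sender, subject, body) VALUES (?, ?, ?)",
        (msg.get("From", ""), msg.get("Subject", ""), body))
    conn.commit()

# Demonstration with an in-memory database and a minimal message.
# A pipe-delivery script would instead call:
#     store_message(sys.stdin.buffer.read(), conn)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mail_store "
             "(id INTEGER PRIMARY KEY, sender TEXT, subject TEXT, body TEXT)")
raw = (b"From: jeff@example.com\r\n"
       b"Subject: test\r\n"
       b"Content-Type: text/plain\r\n"
       b"\r\n"
       b"hello\r\n")
store_message(raw, conn)
```

Note that a script run from /etc/aliases executes as the Sendmail delivery user, so database credentials and file permissions need care.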
resolving multiple ids in a single row from another table
I have a table (TABLE1) that contains multiple user-ID fields. Some examples: the user who created the row (FLD_U1), who last updated it (FLD_U2), who is the current owner (FLD_U3), and so on. If I want to make a query that resolves all these IDs to the corresponding user names, do I need to join TABLE2 (which contains FLD_UID and FLD_UNAME) as many times as there are user-ID fields in the parent table (TABLE1)? Or is there some SQL trick with which I can resolve all of them in a single join? Or is it that any number of joins against the same table does not affect query performance in a significant way? Note that the parent table (TABLE1) may have a million rows, and I'm picking just one page (about 10-15 rows) from it at one go.

Thanks, Vikash
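There is no single-join shortcut in standard SQL: each user-ID column is resolved by joining the lookup table once more under a different alias. With an index on FLD_UID and only 10-15 result rows per page, the repeated joins are cheap. A minimal sketch, using sqlite3 in place of MySQL and the table and column names from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TABLE2 (FLD_UID INTEGER PRIMARY KEY, FLD_UNAME TEXT);
CREATE TABLE TABLE1 (id INTEGER PRIMARY KEY,
                     FLD_U1 INTEGER, FLD_U2 INTEGER, FLD_U3 INTEGER);
INSERT INTO TABLE2 VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
INSERT INTO TABLE1 VALUES (100, 1, 2, 3);
""")

# Join TABLE2 once per user-ID column, each time under a different alias.
row = conn.execute("""
    SELECT t.id,
           c.FLD_UNAME AS created_by,
           u.FLD_UNAME AS updated_by,
           o.FLD_UNAME AS owner
    FROM TABLE1 t
    JOIN TABLE2 c ON c.FLD_UID = t.FLD_U1
    JOIN TABLE2 u ON u.FLD_UID = t.FLD_U2
    JOIN TABLE2 o ON o.FLD_UID = t.FLD_U3
    WHERE t.id = 100
""").fetchone()
print(row)  # (100, 'alice', 'bob', 'carol')
```

If any of the ID columns can be NULL, LEFT JOIN keeps the row while leaving the missing name NULL.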
doc storage as BLOB and its search engine on MYSQL
Hi,

We want to create a knowledge base using PHP and MySQL and, if required, other tools such as a specialized search engine. The knowledge base may consist of normal text content, HTML content, PDF files, DOC files, etc. The preferred OS is Red Hat Linux 7.3. My queries are:

1. Should we store these files as BLOBs in MySQL, or in the file system with only the name stored in the database? Please consider the archival issue in both cases.
2. In either of the cases above, how can I search through non-text documents? Is there any specialized search engine for that, over and above MySQL's full-text search?
3. I have found a limit of 16MB for BLOB storage in MySQL 3.2x. Is that correct, and should we therefore start development assuming 4.x will be available in stable production form in time?

Also, I'm new to mailing lists, so let me know if I'm not in conformance with any rules; I would gladly correct myself.

Thanks, Vikash K Agarwal
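A minimal sketch of the second option from query 1 (files on the filesystem, only the path and metadata in the database), which also sidesteps the BLOB size and packet-size limits discussed elsewhere in this thread. The docs table, the content-addressed file naming, and the sqlite3 stand-in for MySQL are all assumptions of the example:

```python
import hashlib
import os
import sqlite3
import tempfile

DOC_ROOT = tempfile.mkdtemp()  # in production, a fixed archive directory

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, "
             "path TEXT, sha1 TEXT, size INTEGER)")

def store_document(title, data):
    """Write the raw bytes to the filesystem; record only path and metadata."""
    digest = hashlib.sha1(data).hexdigest()
    path = os.path.join(DOC_ROOT, digest)  # content-addressed name avoids clashes
    with open(path, "wb") as f:
        f.write(data)
    conn.execute(
        "INSERT INTO docs (title, path, sha1, size) VALUES (?, ?, ?, ?)",
        (title, path, digest, len(data)))
    conn.commit()
    return path

stored_path = store_document("intro.pdf", b"%PDF-1.4 dummy content")
```

Archiving then means backing up the document directory alongside the database dump; the sha1 column lets you detect a file that went missing or changed out from under the database.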
getting a page of data and total count
Hi,

I have a table holding a set of messages, along with child tables holding more details for them; a normal RDBMS scenario. The typical requirement is to see a set/page (say 10 per page) of such messages along with the details, page by page, and also the total rows available after applying the filters specified by the user, say a date range. The platform/tools/languages are IE, PHP/MySQL/Apache/RH-Linux. The table size may run into 1 million rows in a year, and there will typically be 10-15 users doing this activity. What is the best way to get the total count as well as the paged content? Do I need to fire two queries, or is there a trick which can do this in a single query?

Thanks, Vikash K Agarwal
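The usual answer is two queries sharing the same WHERE clause: one COUNT(*) for the total, one SELECT with LIMIT/OFFSET for the page. A self-contained sketch, with sqlite3 standing in for MySQL and a messages schema invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages "
             "(id INTEGER PRIMARY KEY, sent_on TEXT, subject TEXT)")
conn.executemany("INSERT INTO messages (sent_on, subject) VALUES (?, ?)",
                 [(f"2003-01-{day:02d}", f"msg {day}") for day in range(1, 31)])

def fetch_page(date_from, date_to, page, per_page=10):
    """Query 1 gets the total under the filter; query 2 gets one page of rows."""
    where = "sent_on BETWEEN ? AND ?"
    args = (date_from, date_to)
    total = conn.execute(
        f"SELECT COUNT(*) FROM messages WHERE {where}", args).fetchone()[0]
    rows = conn.execute(
        f"SELECT id, sent_on, subject FROM messages WHERE {where} "
        "ORDER BY sent_on LIMIT ? OFFSET ?",
        args + (per_page, (page - 1) * per_page)).fetchall()
    return total, rows

total, rows = fetch_page("2003-01-05", "2003-01-24", page=2)
print(total, len(rows))  # 20 10
```

If a 4.x server is an option, MySQL also offers SQL_CALC_FOUND_ROWS on the paging SELECT followed by SELECT FOUND_ROWS(), which folds the count into the same scan.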
RE: Performance problems after record deletion.
This is what the MySQL 3.23.41 manual says; maybe it helps you:

OPTIMIZE TABLE should be used if you have deleted a large part of a table or if you have made many changes to a table with variable-length rows (tables that have VARCHAR, BLOB, or TEXT columns). Deleted records are maintained in a linked list and subsequent INSERT operations reuse old record positions. You can use OPTIMIZE TABLE to reclaim the unused space and to defragment the data file.

Rgds, Vikash K Agarwal

-----Original Message-----
From: Gunnar Lunde [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 4:05 PM
To: '[EMAIL PROTECTED]'
Subject: Performance problems after record deletion.

Hi,

We have a problem with a slow database after deleting records, using the MySQL released with Red Hat 7.2 (server version 3.23.41). Here is the short story: we have a table with a lot of data; at the moment there are 85 million records in it. We developed a script that deletes old data during the night. It runs as a cron job that starts each evening and runs through the night, looping over the 10,000 oldest records and testing whether it is OK to delete them (some records are kept even if they are old, when certain criteria are fulfilled). The script builds a list and deletes 10 records at a time.

The script runs fine, but after we have deleted a couple of million transactions the database is dog slow. A SELECT statement that used to complete in seconds now takes 30 minutes, and it locks the table so that everything else must wait. Our best guess is that something happened to the indexes during the delete operation, but an EXPLAIN on the SELECT that locks everything shows it does seem to use the indexes it should. We have read that a SELECT will lock up the table if there are holes in the data file, but the indexes should not be affected? There are incoming inserts running concurrently with the large query.
When the process is locking, it says "Sending data" - nothing about temporary tables. Since we can afford a bit of downtime (we luckily have a backup system), and for debugging purposes, we have started an ANALYZE TABLE job, although we don't think it will make a huge difference. We hope this job will be completed by tomorrow. As a side note: is there any way of monitoring the progress of large jobs of this kind?

If you have experienced the same type of problems, or if anyone has any thoughts on why this is happening and how to fix it, we would really like to hear them.

Best regards, Gunnar Lunde
RE: getting a page of data and total count
Thanks, Adam. But what if concurrent inserts are happening to the table while the users page through the data? The count may change.

vikash

-----Original Message-----
From: Adam Erickson [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 4:48 PM
To: Vikash K Agarwal; [EMAIL PROTECTED]
Subject: RE: getting a page of data and total count

> What is the best way to get the total count as well as the paged
> content? Do I need to fire two queries, or is there a trick which can
> do this in a single query?

I'm doing this on tables with many millions of records. I take a hit once to get a total record count, then cache that value in the user's session. Subsequent requests use that value instead of doing COUNT(*) on every load. LIMIT offset,number handles pagination very well, IMO.

Adam Erickson
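Adam's session-cache idea, with a simple expiry added to bound the staleness Vikash worries about, might look like this. The cache TTL, the session dict (standing in for a PHP session), and sqlite3 in place of MySQL are all assumptions of the sketch:

```python
import sqlite3
import time

CACHE_TTL = 60  # seconds; a count this stale is tolerated for page numbering

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO messages (id) VALUES (?)",
                 [(i,) for i in range(1, 101)])

session = {}  # stands in for the per-user PHP session store

def cached_total(session):
    """Return the cached COUNT(*), refreshing it only when the cache expires.
    Concurrent inserts make the cached value slightly stale, which is usually
    acceptable for a page-count display."""
    now = time.time()
    if "total" not in session or now - session["stamp"] > CACHE_TTL:
        session["total"] = conn.execute(
            "SELECT COUNT(*) FROM messages").fetchone()[0]
        session["stamp"] = now
    return session["total"]

first = cached_total(session)   # hits the database: 100
conn.execute("INSERT INTO messages (id) VALUES (101)")
second = cached_total(session)  # served from cache: still 100
```

Shortening the TTL (or resetting the cache whenever the user changes the filter) trades extra COUNT(*) queries for a fresher total.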
RE: Performance problems after record deletion.
Gunnar, something you can try: export all data (mysqldump, or SELECT ... INTO OUTFILE), import it into a new table (mysqlimport, or LOAD DATA INFILE), create indexes on the new table, and drop the old table. This would remove the holes from the data, and the indexes would be fresh again. It would also not hold anything up, but it would require space and some time.

vikash

-----Original Message-----
From: Gunnar Lunde [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 5:17 PM
To: '[EMAIL PROTECTED]'
Subject: RE: Performance problems after record deletion.

Thank you for your reply, Vikash.

We have decided not to use OPTIMIZE because of the size of the table; it would halt the system too long. So we decided just to reuse the space. Our problem is that after the deletion, a SELECT query that used to take a second now takes 30 minutes and locks up everything, so we suspect that something happened to the indexes and the query does a full table scan. However, an EXPLAIN on the query suggests that the indexes are working. During normal operation reusing the space is fine, and it is fine that the SELECT statement locks up everything, but we cannot live with a SELECT that takes 30 minutes when it used to complete in no time. Ideas to fix this are more than welcome!

Best regards, Gunnar Lunde
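The rebuild Vikash suggests (copy the live rows into a fresh table, index it, swap the tables) can be illustrated end to end. Here sqlite3 stands in for the mysqldump / LOAD DATA INFILE round trip, and the events schema is invented for the example; the principle of removing holes by rewriting the table is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT, payload TEXT)")
conn.executemany("INSERT INTO events (id, ts, payload) VALUES (?, ?, ?)",
                 [(i, f"2003-01-{(i % 28) + 1:02d}", "x" * 10)
                  for i in range(1, 1001)])
conn.execute("DELETE FROM events WHERE id % 2 = 0")  # leaves holes in the data file

# Rebuild: copy surviving rows into a fresh table, index it, then swap names.
conn.executescript("""
CREATE TABLE events_new (id INTEGER PRIMARY KEY, ts TEXT, payload TEXT);
INSERT INTO events_new SELECT * FROM events;
CREATE INDEX idx_events_ts ON events_new (ts);
DROP TABLE events;
ALTER TABLE events_new RENAME TO events;
""")
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 500
```

On a real server, the copy step is the expensive part, but unlike OPTIMIZE TABLE it can run against the new table while the old one stays in service, with only the final drop-and-rename needing exclusive access.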
RE: doc storage as BLOB and its search engine on MYSQL
Thanks for the correction, Paul.

-----Original Message-----
From: Paul DuBois [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 14, 2003 10:17 PM
To: Vikash K Agarwal; [EMAIL PROTECTED]
Subject: Re: doc storage as BLOB and its search engine on MYSQL

At 16:00 +0530 1/14/03, Vikash K Agarwal wrote:

> 2. In either of the cases above, how can I search through non-text
> documents? Is there any specialized search engine for that, over and
> above MySQL's full-text search?

Just a small point: if you want to use full-text searching, you'll need to use a TEXT type rather than a BLOB type. You cannot create a FULLTEXT index on BLOB columns.