[PATCHES] port/ build fix
This trivial patch fixes a missing dependency in src/port/Makefile: if "make install" is run without the "all" target having been built first, the build fails. Per report from Kris Jurka. Barring any objections, I'll apply this to HEAD before the end of the day.

-Neil

--- src/port/Makefile
+++ src/port/Makefile
@@ -36,7 +36,7 @@
 all: libpgport.a libpgport_srv.a
 
 # libpgport is needed by some contrib
-install:
+install: all
 	$(INSTALL_STLIB) libpgport.a $(DESTDIR)$(libdir)
 
 uninstall:

---(end of broadcast)---
TIP 8: explain analyze is your friend
[PATCHES] pg_autovacuum vacuum cost variables patch v2
Ok, here is an updated version of the patch I submitted last night. This patch now sets the appropriate vacuum cost variables for both vacuum commands and analyze commands. In addition, I have added the new vacuum cost options to the win32 InstallService function. Please give it another look and apply if deemed acceptable.

Matthew T. O'Connor

*** ./pg_autovacuum.c.orig	2004-10-26 00:00:00.0 -0400
--- ./pg_autovacuum.c	2004-10-26 23:51:01.453522827 -0400
***************
*** 905,910 ****
--- 905,911 ----
  		PQfinish(db_conn);
  		db_conn = NULL;
  	}
+ 	return db_conn;
  }	/* end of db_connect() */
***************
*** 973,978 ****
--- 974,1044 ----
  	return res;
  }	/* End of send_query() */
  
+ /*
+  * Perform either a vacuum or a vacuum analyze
+  */
+ static void
+ perform_maintenance_command(db_info * dbi, tbl_info * tbl, int operation)
+ {
+ 	char		buf[256];
+ 
+ 	/*
+ 	 * Go ahead and set the vacuum_cost variables; -1 means the option
+ 	 * was not given on the command line, so leave the server default.
+ 	 */
+ 	if (args->av_vacuum_cost_delay != -1)
+ 	{
+ 		snprintf(buf, sizeof(buf), "set vacuum_cost_delay = %i",
+ 				 args->av_vacuum_cost_delay);
+ 		send_query(buf, dbi);
+ 	}
+ 	if (args->av_vacuum_cost_page_hit != -1)
+ 	{
+ 		snprintf(buf, sizeof(buf), "set vacuum_cost_page_hit = %i",
+ 				 args->av_vacuum_cost_page_hit);
+ 		send_query(buf, dbi);
+ 	}
+ 	if (args->av_vacuum_cost_page_miss != -1)
+ 	{
+ 		snprintf(buf, sizeof(buf), "set vacuum_cost_page_miss = %i",
+ 				 args->av_vacuum_cost_page_miss);
+ 		send_query(buf, dbi);
+ 	}
+ 	if (args->av_vacuum_cost_page_dirty != -1)
+ 	{
+ 		snprintf(buf, sizeof(buf), "set vacuum_cost_page_dirty = %i",
+ 				 args->av_vacuum_cost_page_dirty);
+ 		send_query(buf, dbi);
+ 	}
+ 	if (args->av_vacuum_cost_limit != -1)
+ 	{
+ 		snprintf(buf, sizeof(buf), "set vacuum_cost_limit = %i",
+ 				 args->av_vacuum_cost_limit);
+ 		send_query(buf, dbi);
+ 	}
+ 
+ 	/*
+ 	 * If (relisshared = t and database != template1), or if
+ 	 * operation = ANALYZE_ONLY, then only do an analyze.
+ 	 */
+ 	if ((tbl->relisshared > 0 && strcmp("template1", dbi->dbname) != 0) ||
+ 		operation == ANALYZE_ONLY)
+ 		snprintf(buf, sizeof(buf), "ANALYZE %s", tbl->table_name);
+ 	else if (operation == VACUUM_ANALYZE)
+ 		snprintf(buf, sizeof(buf), "VACUUM ANALYZE %s", tbl->table_name);
+ 	else
+ 		return;
+ 
+ 	if (args->debug >= 1)
+ 	{
+ 		sprintf(logbuffer, "Performing: %s", buf);
+ 		log_entry(logbuffer, LVL_DEBUG);
+ 		fflush(LOGOUTPUT);
+ 	}
+ 
+ 	send_query(buf, dbi);
+ 	if (operation == VACUUM_ANALYZE)
+ 		update_table_thresholds(dbi, tbl, VACUUM_ANALYZE);
+ 	else if (operation == ANALYZE_ONLY)
+ 		update_table_thresholds(dbi, tbl, ANALYZE_ONLY);
+ 
+ 	if (args->debug >= 2)
+ 		print_table_info(tbl);
+ }
  
  static void
  free_cmd_args(void)
***************
*** 1015,1027 ****
  	args->port = 0;
  
  	/*
  	 * Fixme: Should add some sanity checking such as positive integer
  	 * values etc
  	 */
  #ifndef WIN32
! 	while ((c = getopt(argc, argv, "s:S:v:V:a:A:d:U:P:H:L:p:hD")) != -1)
  #else
! 	while ((c = getopt(argc, argv, "s:S:v:V:a:A:d:U:P:H:L:p:hIRN:W:")) != -1)
  #endif
  	{
  		switch (c)
--- 1081,1102 ----
  	args->port = 0;
  
  	/*
+ 	 * Cost-Based Vacuum Delay Settings for pg_autovacuum
+ 	 */
+ 	args->av_vacuum_cost_delay = -1;
+ 	args->av_vacuum_cost_page_hit = -1;
+ 	args->av_vacuum_cost_page_miss = -1;
+ 	args->av_vacuum_cost_page_dirty = -1;
+ 	args->av_vacuum_cost_limit = -1;
+ 
+ 	/*
  	 * Fixme: Should add some sanity checking such as positive integer
  	 * values etc
  	 */
  #ifndef WIN32
! 	while ((c = getopt(argc, argv, "s:S:v:V:a:A:d:U:P:H:L:p:hDc:C:m:n:N:")) != -1)
  #else
! 	while ((c = getopt(argc, argv, "s:S:v:V:a:A:d:U:P:H:L:p:hIRN:W:c:C:m:n:")) != -1)
  #endif
  	{
  		switch (c)
***************
*** 1044,1049 ****
--- 1119,1139 ----
  			case 'A':
  				args->analyze_scaling_factor = atof(optarg);
  				break;
+ 			case 'c':
+ 				args->av_vacuum_cost_delay = atoi(optarg);
+ 				break;
+ 			case 'C':
+ 				args->av_vacuum_cost_page_hit = atoi(optarg);
+ 				break;
+ 			case 'm':
+ 				args->av_vacuum_cost_page_miss = atoi(optarg);
+ 				break;
+ 			case 'n':
+ 				args->av_vacuum_cost_page_dirty = atoi(optarg);
+ 				break;
+ 			case 'N':
+ 				args->av_vacuum_cost_limit = atoi(optarg);
+ 				break;
  #ifndef WIN32
  			case 'D':
  				args->daemonize++;
***************
*** 1142,1147 ****
--- 1232,1243 ----
  	fprintf(stderr, "   [-L] logfile (default=none)\n");
+ 	fprintf(stderr, "   [-c] vacuum_cost_delay (default=none)\n");
+ 	fprintf(stderr, "   [-C] vacuum_cost_page_hit (default=none)\n");
+ 	fprintf(stderr, "   [-m] vacuum_cost_page_miss (default=none)\n");
+ 	fprintf(stderr, "   [-n] vacuum_cost_page_dirty (default=none)\n");
+ 	fprintf(stderr, "   [-N] vacuum_cost_limit (default=none)\n");
+ 
  	fprintf(stderr, "   [-U] username (libpq default)\n");
  	fprintf(stderr, "   [-P] password (libpq default)\n");
  	fprintf(stderr, "   [-H] host (libpq default)\n");
***************
*** 1191,1196 ****
--- 1287,1319 ----
lo
Re: [PATCHES] HP-UX PA-RISC/Itanium 64-bit Patch and HP-UX 11.23 Patch
> > > Shinji Teragaito <[EMAIL PROTECTED]> writes:
> > >> I made a patch to let PostgreSQL work in the LP64 data model on
> > >> HP-UX PA-RISC and HP-UX Itanium platform.

I see Shinji's patch changed the library suffix from .sl to .so for ia64. Is that necessary? If so, why?

Thanks,
Ed
[PATCHES] pgstat cleanup: use palloc and AllocateFile
This patch changes pgstat.c to use palloc(), AllocateFile() and FreeFile() rather than malloc(), fopen() and fclose(), respectively. I changed more_tabstat_space() (which is invoked at various times indirectly throughout the backend) to allocate memory in its own private memory context, rather than use malloc() -- we can't just use CurrentMemoryContext because that may not be sufficiently long-lived. Barring any objections, I intend to apply this to HEAD tomorrow.

-Neil

--- src/backend/postmaster/pgstat.c
+++ src/backend/postmaster/pgstat.c
@@ -42,6 +42,7 @@
 #include "miscadmin.h"
 #include "postmaster/postmaster.h"
 #include "storage/backendid.h"
+#include "storage/fd.h"
 #include "storage/ipc.h"
 #include "storage/pg_shmem.h"
 #include "storage/pmsignal.h"
@@ -118,6 +119,7 @@
 static bool pgStatRunningInCollector = FALSE;
 
+static MemoryContext tabstatCxt = NULL;
 static int pgStatTabstatAlloc = 0;
 static int pgStatTabstatUsed = 0;
 static PgStat_MsgTabstat **pgStatTabstatMessages = NULL;
@@ -682,7 +683,7 @@
 /* --
  * pgstat_report_activity() -
  *
- *	Called in tcop/postgres.c to tell the collector what the backend
+ *	Called from tcop/postgres.c to tell the collector what the backend
  *	is actually doing (usually "" or the start of the query to
  *	be executed).
  * --
@@ -988,49 +988,44 @@
 /*
  * Create or enlarge the pgStatTabstatMessages array
  */
-static bool
+static void
 more_tabstat_space(void)
 {
 	PgStat_MsgTabstat *newMessages;
 	PgStat_MsgTabstat **msgArray;
 	int			newAlloc = pgStatTabstatAlloc + TABSTAT_QUANTUM;
 	int			i;
+	MemoryContext oldCxt;
 
+	if (tabstatCxt == NULL)
+		tabstatCxt = AllocSetContextCreate(TopMemoryContext,
+										   "per-backend statistics buffer",
+										   ALLOCSET_DEFAULT_MINSIZE,
+										   ALLOCSET_DEFAULT_INITSIZE,
+										   ALLOCSET_DEFAULT_MAXSIZE);
+
+	oldCxt = MemoryContextSwitchTo(tabstatCxt);
+
 	/* Create (another) quantum of message buffers */
 	newMessages = (PgStat_MsgTabstat *)
-		malloc(sizeof(PgStat_MsgTabstat) * TABSTAT_QUANTUM);
-	if (newMessages == NULL)
-	{
-		ereport(LOG,
-				(errcode(ERRCODE_OUT_OF_MEMORY),
-				 errmsg("out of memory")));
-		return false;
-	}
+		palloc0(sizeof(PgStat_MsgTabstat) * TABSTAT_QUANTUM);
 
 	/* Create or enlarge the pointer array */
 	if (pgStatTabstatMessages == NULL)
 		msgArray = (PgStat_MsgTabstat **)
-			malloc(sizeof(PgStat_MsgTabstat *) * newAlloc);
+			palloc(sizeof(PgStat_MsgTabstat *) * newAlloc);
 	else
 		msgArray = (PgStat_MsgTabstat **)
-			realloc(pgStatTabstatMessages,
-					sizeof(PgStat_MsgTabstat *) * newAlloc);
-	if (msgArray == NULL)
-	{
-		free(newMessages);
-		ereport(LOG,
-				(errcode(ERRCODE_OUT_OF_MEMORY),
-				 errmsg("out of memory")));
-		return false;
-	}
+			repalloc(pgStatTabstatMessages,
+					 sizeof(PgStat_MsgTabstat *) * newAlloc);
 
-	MemSet(newMessages, 0, sizeof(PgStat_MsgTabstat) * TABSTAT_QUANTUM);
 	for (i = 0; i < TABSTAT_QUANTUM; i++)
 		msgArray[pgStatTabstatAlloc + i] = newMessages++;
 	pgStatTabstatMessages = msgArray;
 	pgStatTabstatAlloc = newAlloc;
 
-	return true;
+	Assert(pgStatTabstatUsed < pgStatTabstatAlloc);
+	MemoryContextSwitchTo(oldCxt);
 }
 
 /* --
@@ -1102,14 +1097,7 @@
 	 * If we ran out of message buffers, we just allocate more.
 	 */
 	if (pgStatTabstatUsed >= pgStatTabstatAlloc)
-	{
-		if (!more_tabstat_space())
-		{
-			stats->no_stats = TRUE;
-			return;
-		}
-		Assert(pgStatTabstatUsed < pgStatTabstatAlloc);
-	}
+		more_tabstat_space();
 
 	/*
 	 * Use the first entry of the next message buffer.
@@ -1146,10 +1139,8 @@
 	 * new xact-counters.
 	 */
 	if (pgStatTabstatAlloc == 0)
-	{
-		if (!more_tabstat_space())
-			return;
-	}
+		more_tabstat_space();
+
 	if (pgStatTabstatUsed == 0)
 	{
 		pgStatTabstatUsed++;
@@ -1180,10 +1178,8 @@
 	 * new xact-counters.
 	 */
 	if (pgStatTabstatAlloc == 0)
-	{
-		if (!more_tabstat_space())
-			return;
-	}
+		more_tabstat_space();
+
 	if (pgStatTabstatUsed == 0)
 	{
 		pgStatTabstatUsed++;
@@ -1529,13 +1527,8 @@
 	/*
 	 * Create the known backends table
 	 */
-	pgStatBeTable = (PgStat_StatBeEntry *) malloc(
+	pgStatBeTable = (PgStat_StatBeEntry *) palloc0(
 					sizeof(PgStat_StatBeEntry) * MaxBackends);
-	if (pgStatBeTable == NULL)
-		ereport(ERROR,
-				(errcode(ERRCODE_OUT_OF_MEMORY),
-				 errmsg("out of memory in statistics collector --- abort")));
-	memset(pgStatBeTable, 0, sizeof(PgStat_StatBeEntry) * MaxBackends);
 
 	readPipe = pgStatPipe[0];
@@ -1804,11 +1799,7 @@
 	/*
 	 * Allocate the message buffer
 	 */
-	msgbuffer = (char *) malloc(PGSTAT_RECVBUFFERSZ);
-	if (msgbuffer == NULL)
-		ereport(ERROR,
-				(errcode(ERRCODE_OUT_OF_MEMORY),
-				 errmsg("out of memory in statistics collector --- abort")));
+	msgbuffer = (char *) palloc(PGSTAT_RECVBUFFERSZ);
 
 	/*
 	 * Loop forever
@@ -2416,7 +2412,7 @@
 	 * simply return zero for anything and the collector simply starts
 	 * from scratch with empty counters.
 	 */
-	if ((fpin = fopen(pgStat_fname,
Re: [PATCHES] pg_ctl -D canonicalization
Magnus Hagander wrote:
> It seems pg_ctl calls canonicalize_path() only on the path as being used
> to access for example the pid file, and not the path that is sent along
> to the postmaster.
> Specifically, this causes failure on win32 when a path is passed with a
> trailing backslash, when it's inside quotes such as:
>     pg_ctl start -D "c:\program files\postgresql\data\"
>
> The quotes are of course necessary since there are spaces in the
> filename. In my specific case the trailing backslash is automatically
> added by the Windows installer, but other cases where the backslash is
> added can easily be seen...
>
> Attached patch makes pg_ctl call canonicalize_path() on the path as
> passed on the command line as well.
>
> I have only tested this on win32, where it appears to work fine. I think
> it would be good on other platforms as well, which is why I didn't
> #ifdef it. If not, then please add #ifdefs.

Uh, isn't the proper fix to fix the postmaster's handling of -D paths?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                    |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Re: [PATCHES] pg_autovacuum vacuum cost variables patch
"Michael Paesold" <[EMAIL PROTECTED]> writes:
> And it seems it affects analyze much more than vacuum, at least if there is
> *nothing* to vacuum... (2 seconds -> 8 seconds)

Fixed. The original coding was charging a page fetch cost for each row on each page that analyze looked at :-(

			regards, tom lane
Re: [PATCHES] pg_autovacuum vacuum cost variables patch
Michael Paesold wrote:
> Matthew T. O'Connor wrote:
>> Two questions: 1) It is my understanding that these new GUC vars only
>> affect vacuum. That is, they do NOT have any effect on an analyze
>> command, right? (I ask since I'm only setting the vars before I issue
>> a vacuum command)
>
> No, the vacuum cost settings also affect a plain analyze (cvs tip here):
> (2.5 seconds -> 50 seconds)
> [snip examples...]
> I suggest you also issue the SET commands for analyze.
>
> ISTM that there is also no distinction between VACUUM and VACUUM FULL,
> but I think pg_autovacuum never does a vacuum full, so there is no
> problem with that.

Ok, I'll do that too.
Re: [PATCHES] pg_autovacuum vacuum cost variables patch
Dave Page wrote:
> Hi Matthew,
>
> It doesn't look like you modified the Win32 service installation code
> to write these options to the registry when installing the service
> (see szCommand in InstallService()).

Oops, can you tell I didn't write that part of the code? ;-) I'll take a look at this tonight after work and send in an updated patch.

Matthew
Re: [PATCHES] [HACKERS] ARC Memory Usage analysis
On Tue, 2004-10-26 at 09:49, Simon Riggs wrote:
> On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:
> > The problem is, with a too small directory ARC cannot guesstimate what
> > might be in the kernel buffers. Nor can it guesstimate what recently was
> > in the kernel buffers and got pushed out from there. That results in a
> > way too small B1 list, and therefore we don't get B1 hits when in fact
> > the data was found in memory. B1 hits is what increases the T1target,
> > and since we are missing them with a too small directory size, our
> > implementation of ARC is probably using a T2 size larger than the
> > working set. That is not optimal.
>
> I think I have seen that the T1 list shrinks "too much", but need more
> tests...with some good test results.
>
> > If we would replace the dynamic T1 buffers with a max_backends*2 area of
> > shared buffers, use a C value representing the effective cache size and
> > limit the T1target on the lower bound to effective cache size - shared
> > buffers, then we basically moved the T1 cache into the OS buffers.
>
> Limiting the minimum size of T1len to be 2*max_backends sounds like an
> easy way to prevent overbalancing of T2, but I would like to follow up
> on ways to have T1 naturally stay larger. I'll do a patch with this idea
> in, for testing. I'll call this "T1 minimum size" so we can discuss it.

Don't know whether you've seen this latest update on the ARC idea:

  Sorav Bansal and Dharmendra S. Modha, "CAR: Clock with Adaptive
  Replacement", in Proceedings of the USENIX Conference on File and
  Storage Technologies (FAST), pages 187--200, March 2004.
  [I picked up the .pdf here: http://citeseer.ist.psu.edu/bansal04car.html]

In that paper Bansal and Modha introduce an update to ARC called CART, which they say is more appropriate for databases. Their idea is to introduce a "temporal locality window" as a way of making sure that blocks referenced twice within a short period don't fall out of T1, though don't make it into T2 either. Strangely enough, the "temporal locality window" is made by increasing the size of T1... in an adaptive way, of course.

If we were going to put a limit on the minimum size of T1, then this would put a minimal "temporal locality window" in place, rather than the increased complexity they go to in order to make T1 larger.

I note test results from both the ARC and CAR papers showing that T2 usually represents most of C, so the observation that T1 is very small is not atypical. That implies that the cost of managing the temporal locality window in CART is usually wasted, even though it does cut in as an overall benefit: the results show that CART is better than ARC over the whole range of cache sizes tested (16MB to 4GB) and workloads (apart from 1 out of 22).

If we were to implement a minimum size of T1, related as suggested to the number of users, then this would provide a reasonable approximation of the temporal locality window. This wouldn't prevent the adaptation of T1 to be higher than this when required.

Jan has already optimised ARC for PostgreSQL by the addition of a special lookup on transactionId, required to optimise for the double cache lookup of select/update that occurs on a T1 hit. That seems likely to be removable as a result of having a larger T1.

I'd suggest limiting T1 to be a value of:

	shared_buffers <= 1000		T1limit = max_backends * 0.75
	shared_buffers <= 2000		T1limit = max_backends
	shared_buffers <= 5000		T1limit = max_backends * 1.5
	shared_buffers >  5000		T1limit = max_backends * 2

I'll try some tests with both:
- minimum size of T1
- update optimisation removed

Thoughts?

-- 
Best Regards, Simon Riggs
Re: [PATCHES] pg_autovacuum vacuum cost variables patch
Matthew T. O'Connor wrote:
> Two questions: 1) It is my understanding that these new GUC vars only
> affect vacuum. That is, they do NOT have any effect on an analyze
> command, right? (I ask since I'm only setting the vars before I issue
> a vacuum command)

No, the vacuum cost settings also affect a plain analyze (cvs tip here): (2.5 seconds -> 50 seconds)

test=# SET vacuum_cost_delay TO 0;
SET
Time: 0.308 ms
test=# analyze;
ANALYZE
Time: 2591.259 ms
test=# SET vacuum_cost_delay TO 10;
SET
Time: 0.309 ms
test=# analyze;
ANALYZE
Time: 51737.896 ms

And it seems it affects analyze much more than vacuum, at least if there is *nothing* to vacuum... (2 seconds -> 8 seconds)

test=# SET vacuum_cost_delay TO 0;
SET
Time: 0.261 ms
test=# VACUUM;
VACUUM
Time: 1973.137 ms
test=# SET vacuum_cost_delay TO 10;
SET
Time: 0.236 ms
test=# vacuum;
VACUUM
Time: 7966.085 ms

I suggest you also issue the SET commands for analyze.

ISTM that there is also no distinction between VACUUM and VACUUM FULL, but I think pg_autovacuum never does a vacuum full, so there is no problem with that.

Best Regards,
Michael Paesold
Re: [PATCHES] [HACKERS] ARC Memory Usage analysis
On Mon, 2004-10-25 at 16:34, Jan Wieck wrote:
> The problem is, with a too small directory ARC cannot guesstimate what
> might be in the kernel buffers. Nor can it guesstimate what recently was
> in the kernel buffers and got pushed out from there. That results in a
> way too small B1 list, and therefore we don't get B1 hits when in fact
> the data was found in memory. B1 hits is what increases the T1target,
> and since we are missing them with a too small directory size, our
> implementation of ARC is probably using a T2 size larger than the
> working set. That is not optimal.

I think I have seen that the T1 list shrinks "too much", but need more tests...with some good test results.

The effectiveness of ARC relies upon the balance between the often conflicting requirements of "recency" and "frequency". It seems possible, even likely, that pgsql's version of ARC may need some subtle changes to rebalance it - if we are unlucky enough to find cases where it genuinely is out of balance. Many performance tests are required, together with a few ideas on extra parameters to include; hence my support of Jan's ideas.

That's also why I called the B1+B2 hit ratio "turbulence", because it relates to how much oscillation is happening between T1 and T2. In physical systems we expect the oscillations to be damped, but there is no guarantee that we have a nearly critically damped oscillator. (Note that the absence of turbulence doesn't imply that T1+T2 is optimally sized, just that it is balanced.)

[...and although the discussion has wandered away from my original patch... would anybody like to commit, or decline, the patch?]

> If we would replace the dynamic T1 buffers with a max_backends*2 area of
> shared buffers, use a C value representing the effective cache size and
> limit the T1target on the lower bound to effective cache size - shared
> buffers, then we basically moved the T1 cache into the OS buffers.

Limiting the minimum size of T1len to be 2*max_backends sounds like an easy way to prevent overbalancing of T2, but I would like to follow up on ways to have T1 naturally stay larger. I'll do a patch with this idea in, for testing. I'll call this "T1 minimum size" so we can discuss it. Any other patches are welcome...

It could be that B1 is too small and so we could use a larger value of C to keep track of more blocks. I think what is being suggested is two GUCs: shared_buffers (as is), plus another one, larger, which would allow us to track what is in shared_buffers and what is in the OS cache. I have comments on "effective cache size" below.

On Mon, 2004-10-25 at 17:03, Tom Lane wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
> > This all only holds water, if the OS is allowed to swap out shared
> > memory. And that was my initial question, how likely is it to find this
> > to be true these days?
>
> I think it's more likely than not that the OS will consider shared
> memory to be potentially swappable. On some platforms there is a shmctl
> call you can make to lock your shmem in memory, but (a) we don't use it
> and (b) it may well require privileges we haven't got anyway.

Are you saying we shouldn't, or that we don't yet? I simply assumed that we did use that function - surely it must be at least an option? RHEL supports this, at least. It may well be that we don't have those privileges, in which case we turn off the option. Often we (or I?) will want to install a dedicated server, so we should have all the permissions we need, in which case...

> This has always been one of the arguments against making shared_buffers
> really large, of course --- if the buffers aren't all heavily used, and
> the OS decides to swap them to disk, you are worse off than you would
> have been with a smaller shared_buffers setting.

Not really, just an argument against making them *too* large. Large *and* utilised is OK, so we need ways of judging optimal sizing.
> However, I'm still really nervous about the idea of using
> effective_cache_size to control the ARC algorithm. That number is
> usually entirely bogus. Right now it is only a second-order influence
> on certain planner estimates, and I am afraid to rely on it any more
> heavily than that.

...ah yes, effective_cache_size. The manual describes effective_cache_size as if it had something to do with the OS, and some of this discussion has picked up on that.

effective_cache_size is used in only two places in the code (both in the planner), as an estimate for calculating the cost of a) nonsequential access and b) index access, mainly as a way of avoiding overestimates of access costs for small tables. There is absolutely no implication in the code that effective_cache_size measures anything in the OS; what it gives is an estimate of the number of blocks that will be available from *somewhere* in memory (i.e. in shared_buffers OR the OS cache) for one particular table (the one currently being considered by the planner).

Crucially, the "size" referred
Re: [PATCHES] pg_autovacuum vacuum cost variables patch
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Matthew T. O'Connor
> Sent: 26 October 2004 06:40
> To: pgsql-patches
> Subject: [PATCHES] pg_autovacuum vacuum cost variables patch
>
> Please review and if deemed acceptable, please apply to CVS HEAD.

Hi Matthew,

It doesn't look like you modified the Win32 service installation code to write these options to the registry when installing the service (see szCommand in InstallService()).

Regards, Dave