live website mirroring

2003-09-24 Thread Sagi Bashari
Hello

We are running a website that is written in PHP and is using a MySQL 
database. It runs on Linux, of course.

We're trying to find a way to set up a complete mirror of the server
on a standby server that we can bring up easily if the main server goes
down.

The website is very dynamic and the data changes all the time. I'm
looking for a way to keep the standby server in an identical state to
the master server.

After doing some research I found out that I could use the built-in
MySQL replication for that, but we also have dynamic data directories on
our server - I could set up an rsync script that runs every few minutes,
but it would probably consume too many resources and wouldn't be accurate.
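
For reference, this is roughly the setup I had in mind (the hostnames,
paths and credentials below are just placeholders, and the exact
statements should be checked against the MySQL manual for our version):

  # master my.cnf:  log-bin, server-id = 1
  # standby my.cnf: server-id = 2
  # on the master, create a replication account:
  mysql -u root -p -e \
    "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'standby' IDENTIFIED BY 'secret'"
  # on the standby, point it at the master and start replicating:
  mysql -u root -p -e \
    "CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl',
     MASTER_PASSWORD='secret'; START SLAVE"
  # and for the data directories, a cron entry such as:
  #   */10 * * * *  rsync -az --delete /var/www/data/ standby:/var/www/data/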

I would like to hear the experiences of others who had the same problem 
(and hopefully solved it).

* Both servers are running at the same location right now, so I can
take over the main IP address easily. However, we have considered having
a backup server at another location (where we can't use the same IP
address). Is there any way to do this?

Sagi





Re: live website mirroring

2003-09-24 Thread Dan Fruehauf
On Thursday 25 September 2003 02:42, Sagi Bashari wrote:
Hello,
I have a solution that approaches the problem in a totally different way.
Given the time and resources, what you can do is connect both servers to a
NAS switch, mapping the same storage to both of them.
Then, in case of a failover, the active computer shuts itself down and
releases its NAS mount points so the passive server can take over its
storage and launch MySQL and the web server with the same, identical data.
I know NAS is an expensive solution, but that's mostly what we do in the
enterprise.
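
Very roughly, the hand-over looks something like this (the device name,
mount point and init scripts are made up for the example):

  # on the active node (if it is still alive): stop services, release storage
  /etc/init.d/httpd stop ; /etc/init.d/mysqld stop
  umount /mnt/shared
  # on the passive node: grab the storage and bring everything up
  mount /dev/sdb1 /mnt/shared
  /etc/init.d/mysqld start ; /etc/init.d/httpd start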

Dan.

> After doing some research I found out that I could use the built-in
> MySQL replication for that, but we also have dynamic data directories on
> our server - I could set up an rsync script that runs every few minutes,
> but it would probably consume too many resources and wouldn't be accurate.





Re: live website mirroring

2003-09-24 Thread linux-il
Dan Fruehauf wrote:

> On Thursday 25 September 2003 02:42, Sagi Bashari wrote:
> Hello,
> I have a solution that approaches the problem in a totally different way.
> Given the time and resources, what you can do is connect both servers to
> a NAS switch, mapping the same storage to both of them.
It's a possible solution, but:

1. The NAS is still a single point of failure.
2. They are talking about having a server at a different location,
so they'll need to replicate the NAS as well.
My suggestion for a general direction to investigate:

1. Check rsync more closely - as far as I remember from using it, it
is actually very efficient in CPU, memory, and network usage.
2. What about using a content-management (CM) solution which manages
your site instead of handling files "manually"? Keeping track of your
files through CM software will enable it to copy over every file
when it's updated, and it can also track changes and versions, more or
less. I can't recall a particular piece of software for this; maybe you
can start by looking for WebDAV implementations (e.g. Apache's WebDAV
module).
--Amos





Re: live website mirroring

2003-09-25 Thread Sagi Bashari
On 25/09/2003 10:43, [EMAIL PROTECTED] wrote:

>> I have a solution that approaches the problem in a totally different
>> way.
>> Given the time and resources, what you can do is connect both servers
>> to a NAS switch, mapping the same storage to both of them.
>
> It's a possible solution, but:
>
> 1. The NAS is still a single point of failure.
> 2. They are talking about having a server at a different location,
> so they'll need to replicate the NAS as well.


We would like to do this with our current hardware at this stage; we
already have RAID1 on both servers, so I would just like to keep the
arrays identical.

Right now we will stick to having both servers at the same location; my
question was about changing the IP address dynamically, in case we ever
decide to set up a backup server at another location.

> My suggestion for a general direction to investigate:
>
> 1. Check rsync more closely - as far as I remember from using it, it
> is actually very efficient in CPU, memory, and network usage.


Yes, but I'm not sure that running it every 10 minutes is a good idea, 
especially if you have many files and some of them are being changed 
while rsync is running.

I found a kernel module named "enbd" (Enhanced Network Block Device)
which is supposed to do RAID over the network somehow, but I would like
to hear about others' experience before I start playing with it.

> 2. What about using a content-management (CM) solution which manages
> your site instead of handling files "manually"? Keeping track of your
> files through CM software will enable it to copy over every file
> when it's updated, and it can also track changes and versions, more or
> less. I can't recall a particular piece of software for this; maybe you
> can start by looking for WebDAV implementations (e.g. Apache's WebDAV
> module).


I'm not sure. Basically we store all the data in our database and only
use the filesystem for static files - image files, Flash files, website
root directories, etc.

All the files must be accessible by Apache (it serves the images
directly; we don't read them in our application).

Sagi





Re: live website mirroring

2003-09-25 Thread linux-il
Sagi Bashari wrote:

>> It's a possible solution, but:
>>
>> 1. The NAS is still a single point of failure.
>> 2. They are talking about having a server at a different location,
>> so they'll need to replicate the NAS as well.
>
> We would like to do this with our current hardware at this stage; we
> already have RAID1 on both servers, so I would just like to keep the
> arrays identical.
As far as I understood it, the meaning of "NAS" is that you'll have
common storage accessed over the network ("NAS" = "Network Attached
Storage"). What does this have to do with each server having its own
RAID?
BTW, if you already have RAID 1 (mirroring) and you have more than one
disk on each mirror side (i.e. at least 4 disks total in the mirror),
then you'd better do RAID 0+1 (mirroring over striping) to improve your
access times.
(e.g. http://www.acnc.com/04_01_0p1.html)
> Right now we will stick to having both servers at the same location; my
> question was about changing the IP address dynamically, in case we ever
> decide to set up a backup server at another location.
You had two questions, didn't you?

As for this IP question:

1. If you want the backup server to also serve at "regular times",
then just advertise both addresses as A records in DNS; the DNS server
will alternate their order when resolving the server's address, which
is a very cheap way to load-balance between them.
2. If you want the backup server to be accessed only when the primary
one goes down, then the only way I can think of right now is to set the
A record with a very short time to live (5-15 minutes, as much as you
can stand having an inaccessible server), so your DNS is queried almost
every time the site is accessed. When the primary goes down (which can
be detected with various High Availability solutions), the secondary
updates your DNS server and takes over control - see the nsupdate
sketch below.
See http://linux-ha.org/ for instance.

(1) and (2) can be implemented together (advertise both; when one
server goes down the other deletes its address from DNS, and when the
downed server comes back up it re-registers with the DNS).
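
For the DNS update itself, one way (assuming you run BIND and allow
dynamic updates with a key; all names and addresses below are invented)
is for the standby to push the change with nsupdate once it decides the
primary is dead:

  # contents of /etc/ha.d/dns-takeover.ns (a made-up file name):
  #   server ns1.example.com
  #   zone example.com
  #   update delete www.example.com. A
  #   update add www.example.com. 300 A 198.51.100.20
  #   send
  nsupdate -k /etc/bind/failover.key /etc/ha.d/dns-takeover.ns
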
There are also mailing lists, forums and sites about Linux high
availability; maybe you'll find a more knowledgeable crowd there, though
I for one would be curious to hear what you decide to do.

>> My suggestion for a general direction to investigate:
>>
>> 1. Check rsync more closely - as far as I remember from using it, it
>> is actually very efficient in CPU, memory, and network usage.
>
> Yes, but I'm not sure that running it every 10 minutes is a good idea,
> especially if you have many files and some of them are being changed
> while rsync is running.
Have you conducted your own tests or found results of such tests?
In general, rsync is supposed to cope well with changing files.
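
For example, something along these lines as a starting point (flags from
memory - check the man page; the paths and hostname are made up):

  rsync -az --delete -e ssh /var/www/data/ standby.example.com:/var/www/data/

After the first full copy, the delta-transfer algorithm only sends the
changed parts of changed files over the wire.
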
> I found a kernel module named "enbd" (Enhanced Network Block Device)
> which is supposed to do RAID over the network somehow, but I would like
> to hear about others' experience before I start playing with it.
Never heard of it, though I did see the "Network Block Device" option
in the kernel 2.4 config, which sounds related. I'd expect you can
Google for a community around this module.

> 2. What about using a content-management (CM) solution which manages
> your site instead of handling files "manually"? Keeping track of your
> files through CM software will enable it to copy over every file
> when it's updated, and it can also track changes and versions, more or
> less. I can't recall a particular piece of software for this; maybe you
> can start by looking for WebDAV implementations (e.g. Apache's WebDAV
> module).


> I'm not sure. Basically we store all the data in our database and only
> use the filesystem for static files - image files, Flash files, website
> root directories, etc.
That's what content management is about.

> All the files must be accessible by Apache (it serves the images
> directly; we don't read them in our application).
What I was suggesting (never tried this myself) is that instead of just
writing files directly to the disk and hoping that some sync program
will find them and copy them over, you use a tool which writes them to
the disk (so they're accessible by Apache) but at the same time can send
them (or a change notification) to the backup server. WebDAV is one
standard protocol to achieve the "write a file to a Web server" part.
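
For example (hypothetical URLs, and assuming mod_dav is enabled on both
machines), the upload step in your publishing scripts could simply PUT
each file to both servers:

  curl -u editor:secret -T banner.swf http://www.example.com/dav/images/banner.swf
  curl -u editor:secret -T banner.swf http://standby.example.com/dav/images/banner.swf
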
Also, once you have such tools instead of writing files directly to
disk, you can start using other tools on top of them (QA, testing,
versioning, authentication, authorization, general site management).

--Amos





Re: live website mirroring

2003-09-25 Thread Dan Fruehauf
On Thursday 25 September 2003 11:45, [EMAIL PROTECTED] wrote:
It was probably my mistake - I said NAS, but I meant SAN (Storage Area
Network).
I can see people are showing interest, so I'll give some more details on
what we're doing at work.

Assuming one server is always active and the other one is passive, we
map the same disk drives to both of them through the SAN (while only one
server mounts the storage at a given time).
When a failover happens, the passive server "steals" the IP of the
active server, takes over its storage and mounts it.
After this, the application is started and all should be fine.
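
The IP "stealing" part is nothing fancy, something like this (the
interface and address are just an example):

  ifconfig eth0:0 192.0.2.10 netmask 255.255.255.0 up
  arping -U -I eth0 -c 3 192.0.2.10   # gratuitous ARP so the switch learns the move
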
What's not so good is that sessions are lost, so if your application is
stateful your users will feel the crash, while if it's stateless the
damage will be much smaller.

Dan.


Re: live website mirroring

2003-09-25 Thread Lior Kaplan
As far as I know, the cost of a SAN is very high. It also gives you more
GB than you need, and if you need fewer GB than you are given, you just
pay too much extra per GB.

Also, don't forget the equipment needed to work with fiber optic cables
(controller cards, switches and so on).

For this cost I think you can build an HA solution:
1. A round robin DNS.
2. Heartbeat.
3. Using another computer for the DB/files and two servers to serve the
site (sure, it has lower performance).


Regards,

Lior Kaplan
[EMAIL PROTECTED]
http://www.Guides.co.il

Come to write at the forums: http://www.guides.co.il/forums

- Original Message -
From: "Dan Fruehauf" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, September 25, 2003 7:39 PM
Subject: Re: live website mirroring



Re: live website mirroring

2003-09-25 Thread linux-il
Dan Fruehauf wrote:

> On Thursday 25 September 2003 11:45, [EMAIL PROTECTED] wrote:
> It was probably my mistake - I said NAS, but I meant SAN (Storage Area
> Network).

But still, wouldn't that keep the shared storage as a single point of
failure? Or is it implemented by some HA cluster of servers?
Also, the original author said, as far as I understood it, that when the
backup server moves to a separate location he won't be able to use the
same IP (or maybe he can tunnel the same IP over IP?).

> What's not so good is that sessions are lost, so if your application is
> stateful your users will feel the crash, while if it's stateless the
> damage will be much smaller.

If your application were written as a Java web app you could take
advantage of clustered servlet engines, which enable sharing of sessions
across a cluster, just for this kind of situation (in addition to the
scalability advantage).

Does PHP have such a thing?

--Amos



Re: live website mirroring

2003-09-25 Thread Shachar Shemesh
Lior Kaplan wrote:

> As far as I know, the cost of a SAN is very high. It also gives you
> more GB than you need, and if you need fewer GB than you are given, you
> just pay too much extra per GB.
>
> Also, don't forget the equipment needed to work with fiber optic cables
> (controller cards, switches and so on).
>
> For this cost I think you can build an HA solution:
> 1. A round robin DNS.
> 2. Heartbeat.
> 3. Using another computer for the DB/files and two servers to serve the
> site (sure, it has lower performance).

What you are describing is called "NAS". With a SAN, you can achieve
true "no single point of failure": if your disks are fiber-connected and
you are doing RAID over them, you can have two disks (RAID-1) serving
two computers (though not simultaneously). Voila!

With NAS, you introduce a third computer that serves the files to both
servers (a single point of failure), and you need some way to balance
the load (round robin DNS, in your case, which has serious problems when
trying to cope with a failure).

I'm not even sure that the fiber option is so expensive (quite honestly,
I don't know). In any case, it appears as if the entire industry is
shifting towards NAS (NetApp, special EMC hardware that incorporates an
NFS server, etc.). This means that most IT managers agree with you, on
one level or another.

 Shachar

--
Shachar Shemesh
Open Source integration consultant
Home page & resume - http://www.shemesh.biz/




Re: live website mirroring

2003-09-25 Thread Sagi Bashari
On 25/09/2003 21:53, [EMAIL PROTECTED] wrote:

> But still, wouldn't that keep the shared storage as a single point of
> failure? Or is it implemented by some HA cluster of servers?
> Also, the original author said, as far as I understood it, that when
> the backup server moves to a separate location he won't be able to use
> the same IP (or maybe he can tunnel the same IP over IP?).
For now both servers will be at the same location. I might have confused
you when I asked that question; it wasn't directly connected to the
original question (how to sync the data directories and the database) -
I just wanted to know if such a thing is possible.

You confirmed my original thoughts about the DNS in your previous message.


>> What's not so good is that sessions are lost, so if your application
>> is stateful your users will feel the crash, while if it's stateless
>> the damage will be much smaller.
>
> If your application were written as a Java web app you could take
> advantage of clustered servlet engines, which enable sharing of
> sessions across a cluster, just for this kind of situation (in addition
> to the scalability advantage).
>
> Does PHP have such a thing?


That's not a problem. PHP uses plain files to store session data by
default, so all I need to do is point the session directory at a
location that is shared by both servers.
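
i.e. create the directory on the shared storage and point php.ini at it
(the path is just an example, and the web server user varies by
distribution):

  mkdir -p /mnt/shared/php_sessions
  chown apache:apache /mnt/shared/php_sessions   # or www-data, etc.
  chmod 770 /mnt/shared/php_sessions
  # and in php.ini:
  #   session.save_path = /mnt/shared/php_sessions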

Sagi





Re: live website mirroring

2003-09-25 Thread Sagi Bashari
On 25/09/2003 21:58, Lior Kaplan wrote:

> As far as I know, the cost of a SAN is very high. It also gives you
> more GB than you need, and if you need fewer GB than you are given, you
> just pay too much extra per GB.

Exactly. Those solutions are way out of our current budget - and right
now the files that we store outside the database don't even reach 1GB,
so we won't invest in such an expensive solution.

> 3. Using another computer for the DB/files and two servers to serve
> the site (sure, it has lower performance).

We don't need this; our system runs perfectly well on one server. My
original idea was having one low-end PC as a backup - not as an extra
server (it won't be used while the main server is up).

So going back to my original question: is there a simple way to
synchronize a directory between two Linux servers, like rsync does --
but in real time?

Sagi





Re: live website mirroring

2003-09-26 Thread Oded Arbel
On Friday 26 September 2003 01:14, Sagi Bashari wrote:
> > If your application were written as a Java web app you could take
> > advantage of clustered servlet engines, which enable sharing of
> > sessions across a cluster, just for this kind of situation (in
> > addition to the scalability advantage).
> >
> > Does PHP have such a thing?
>
> That's not a problem. PHP uses plain files to store session data by
> default, so all I need to do is point the session directory at a
> location that is shared by both servers.

Or you can store it in the same database as the rest of the site's data.
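
A minimal table for that could look like this (the table and column
names are just an illustration; PHP's session_set_save_handler() would
then read and write it instead of the session files):

  mysql -u webuser -p website -e \
    "CREATE TABLE php_sessions (id VARCHAR(64) NOT NULL PRIMARY KEY,
     data TEXT, updated TIMESTAMP)"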

-- 
Oded





Re: live website mirroring

2003-09-26 Thread Muli Ben-Yehuda
On Fri, Sep 26, 2003 at 12:21:12AM +0200, Sagi Bashari wrote:

> So going back to my original question: is there a simple way to 
> synchronize a directory between two linux servers, like rsync does -- 
> but in real time?

A directory also has its metadata, which will be a lot harder to
synchronize at run time. If you're willing to go lower than the file
system, to the block device, you can take a look at DRBD (Distributed
Replicated Block Device,
http://www.complang.tuwien.ac.at/reisner/drbd/), which can synchronize a
local and a remote block device at run time. Coupled with the
linux-ha.org software, you can have automatic failover, which sounds
pretty much like what you want.

I haven't used either, but I've been giving them some serious thought
and reading the code, and both look quite reasonable to me. 

Cheers, 
Muli 
-- 
Muli Ben-Yehuda
http://www.mulix.org





Re: live website mirroring

2003-09-26 Thread Oren Held
Hi,

Heartbeat can give you solutions for that. The only real problem is
indeed the storage, which you want to be synchronized.
High Availability clusters should provide a way to have shared storage
(e.g. a SCSI disk / JBOD connected to two servers); while the main node
is down, the other will mount the disk.

(Note that there's a big risk if two nodes mount the same storage at
once. The cluster nodes should be connected by at least two internal
links so each node can be monitored even when the main network is down.)
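
For what it's worth, a minimal heartbeat (v1) setup is only a couple of
config files; the hostnames, address and directives below are from
memory, so double-check them against the documentation:

  # /etc/ha.d/ha.cf (on both nodes)
  node web1.example.com web2.example.com
  bcast eth1            # dedicated crossover link
  serial /dev/ttyS0     # second, non-network heartbeat path
  deadtime 30
  # /etc/ha.d/haresources (identical on both nodes)
  web1.example.com 192.0.2.10 mysqld httpd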

Anyway, you'd better do some reading at www.linux-ha.org. Also note that
Red Hat has its own HA cluster solution (Piranha, I think) which comes
with their RH Advanced Server; I have no experience with it.

- Oren


