[squid-users] Squid / Heartbeat / IPtables

2007-05-01 Thread Paul Fiero

Greetings all, again,
I am back with yet more questions, though hopefully, this time, I
have better information for you.

We have moved past issues with trying to decide how to do our
failover with squid on our new router infrastructure.  We will be
using policy-based routing (PBR) pointing at a cluster of squid nodes.
At this point it's going to be configured for high-availability and
not for load-balancing, yet.

In any case here is my situation now.  :o)
I have my two Squid servers configured with heartbeat so that we
have one active node and one passive node waiting for failover should
the heartbeat be lost.  Given this configuration we have squid
configured as a transparent proxy with the following pertinent
settings as I found them in a couple of different documents on
transparent proxy:
http_port 192.168.1.6:3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

At this point I also ensured that ipv4 ip_forward is set to 1, then I
set up an iptables rule to redirect traffic to the correct port:
iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 80 -j REDIRECT
--to-port 3128
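One detail worth double-checking under heartbeat (a guess based on the
symptoms, not a confirmed diagnosis): if http_port is bound to a single
physical address, Squid won't accept redirected connections once traffic
starts arriving for the heartbeat virtual IP instead. A sketch of a
binding that sidesteps that, keeping port 3128 as above:

```
# squid.conf sketch -- listen on every local address, so the heartbeat
# VIP is covered no matter which node currently holds it
http_port 3128
```

With the REDIRECT target, the kernel rewrites the destination to an
address of the receiving box, so listening on all addresses avoids any
mismatch between the physical IP and the cluster VIP.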

When I had Squid configured this way and was not running it via the
clustering services, everything worked fine, policy-based routes and
all.  It was a sight to behold.  Then, as soon as we reconfigured
everything for use in the cluster, traffic stopped flowing.  It
appears to be getting at least as far as the switch port where the
squid servers are plugged in, so I know that the PBR is working.

Somewhere/somehow I'm pretty sure the issue has to do with the way
heartbeat runs the NICs on the Squid server.

So the question:  Given the above information regarding squid
configuration, ip_forwarding, and iptables can anyone point me to a
source of information for fixing the problem or can you give me the
data I need?

Thanks, all, in advance, for being patient with me.  I don't post
much because our Squid system has been running pretty much flawlessly
since I built it out several years ago.  It's just that times are
changing and I've got to accommodate those changes.

If you need to reply, please do so either privately at
pauldotfieroatgmaildotcom or on the list.  Either one.

--
May have been the losing side...not convinced it was the wrong one.

Keep Flyin'

PFiero


[squid-users] Squid + Policy-Based Routing + Load Balancing/Clustering???

2007-04-28 Thread Paul Fiero

Greetings All,
I have a rather odd situation that has cropped up here that I
would like some help with. For some background information, we
have had a Cisco SE onsite assisting us and we haven't gotten very
far.
Essentially I have two squid routers sitting parallel to our
firewall (they bypass the firewall). They plug into Cisco catalyst
3500 switches (no layer 3 capabilities) both inside and outside of our
network. In the past we have used WCCP on our internet gateway router
to intercept HTTP traffic and send it to our squid farm while the
rest goes to our firewall. It has worked fairly well for us, but we are
upgrading our gateway router and the new one no longer supports WCCP;
instead, I'm told, it uses policy-based routing, which, to my
knowledge, doesn't provide for any sort of fail-over or load-balancing.
After some rough water we managed to get the policy-based routing
working to a single squid server, which leads to the next step. I have
gone down the road of setting up a squid cluster using heartbeat.
I've gotten that configured and working, so all was looking good,
right up until we pointed the policy-based-routing next-hop command
at the virtual IP presented by the squid cluster.
So here is where I can use some help from you all.
1. Is there a better way to provide the HTTP redirection instead
of policy based routing or WCCP?
2. Assuming the policy based routing is best what would be the
better way of providing load-balancing/failover besides the
clustering?
If you feel like you can help me with this but would like a
diagram in order to see the picture a bit more clearly please let me
know and I'll provide you with one.
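On question 2, one option worth weighing: later IOS releases let PBR
track next-hop reachability, which gives failover between two squid
boxes without any clustering layer. A hedged IOS sketch, not verified
against this specific gateway platform, with hypothetical addresses
(syntax varies by release):

```
! intercept outbound HTTP and prefer squid-1, falling back to squid-2,
! assuming 'set ip next-hop verify-availability' is supported here
access-list 110 permit tcp any any eq www
route-map CACHE permit 10
 match ip address 110
 set ip next-hop verify-availability 192.168.1.6 1 track 1
 set ip next-hop verify-availability 192.168.1.7 2 track 2
```

Each tracked next-hop needs a corresponding `track` object (e.g. an ICMP
SLA probe against the squid box); if the first hop goes unreachable, PBR
falls through to the second.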

I can be reached at this e-mail address:
pauldotfieroatgmaildotcom pretty much any time from 5am till 1am
CST so please feel free to ask questions or pass on suggestions here.

Thanks in advance for whatever assistance you can provide. I have
had my current squid deployment in place for close to four and a half
years with little problem, and if it weren't for this system upgrade
I'd be sticking with it. If I can't resolve this problem by this
coming Wednesday I will be forced to deploy a commercial system and
lose one more piece of open source software keeping the door open
for open source in my enterprise network.

PFiero


[squid-users] Squid-2.5.STABLE6 and WCCPv2

2005-08-16 Thread Paul Fiero
Hey all, I am having a bit of a problem here trying to get something
running.  I have the following:

Red Hat Enterprise Linux 4 (kernel 2.6.9-11)

Squid-2.5.STABLE6 (with the wccp buffer-overflow patch and the wccpv2
patch from visolve applied) (installed via Red Hat's RPM
squid-2.5.STABLE6-3.4.E.5.rpm)

When I run the configure command and include --enable-wccp and
--enable-wccpv2, everything compiles and squid.conf shows the options
for the wccpv2 router, but when I run squid it shows WCCP disabled in
the cache.log file.  If I run configure using --disable-wccp and
--enable-wccpv2 I get the same thing.  So at this point I am at a bit
of an impasse.  Can someone explain to me why wccpv2 won't run with
this, or point me to a doc that could explain it?

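Two things may explain the "WCCP disabled" message; both are
assumptions to verify rather than confirmed diagnoses. First, an RPM
install ships a binary built with Red Hat's own flags, so running
configure separately has no effect on it; the fix is rebuilding from the
patched source. Second, WCCP stays off until a router is configured in
squid.conf. A build sketch, with the patch filename hypothetical and the
flag name taken from the post above:

```
# rebuild from patched source rather than using the RPM binary;
# verify the exact configure switch against your patched tree
patch -p1 < ../squid-2.5-wccpv2.patch
./configure --prefix=/usr/local/squid --enable-wccpv2
make && make install
strings src/squid | grep -i wccp    # crude check that WCCP code was built in
```

After that, make sure squid.conf actually names the router (under the
visolve naming this is reportedly a wccp2_router directive; check the
patch's documentation, since the directive name here is an assumption).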
Paul Fiero, RHCE

Information Security Analyst

Communications and Technology Management Office City of Austin

(512) 974-3559 



=== 



The information contained in this ELECTRONIC MAIL transmission is
confidential.  It may also be a privileged work product or proprietary
information. This information is intended for the exclusive use of the
addressee(s). If you are not the intended recipient, you are hereby
notified that any use, disclosure, dissemination, distribution [other
than to the addressee(s)], copying or taking of any action because of
this information is strictly prohibited.



===


Re: [squid-users] Blocking gzipped HTML pages

2005-02-08 Thread Paul Fiero
I brought this up only because of a query from my peers asking about
the possibility of people using gzipped HTML pages as a delivery
vector for viruses.

My concern with this is that we would very likely end up blocking more
legitimate pages than we would protect ourselves against.  My
inclination is not to do this but I wanted to hear from the community
as a whole to see what every one else's opinions were.

PFiero


On Tue, 8 Feb 2005 08:16:15 +0100, Matus UHLAR - fantomas
[EMAIL PROTECTED] wrote:
> On 07.02 14:24, Paul Fiero wrote:
>> I would like to know if anyone can help me with a question.  Is it
>> possible or even advisable to block gzipped HTML content with
>> squid/squidguard?
>
> it is possible but I would not advise it. I discourage it.
> Why at all do you think about it?
> --
> Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> 2B|!2B, that's a question!



[squid-users] Blocking gzipped HTML pages

2005-02-07 Thread Paul Fiero
I would like to know if anyone can help me with a question.  Is it
possible or even advisable to block gzipped HTML content with
squid/squidguard?

Any assistance would be appreciated.

PFiero


[squid-users] Connection reset by peer error

2004-07-02 Thread Paul . Fiero
We are currently running Squid version 2.5 STABLE3 with the following
compile options: '--prefix=/usr/local/squid' '--enable-icmp'
'--enable-useragent-log' '--enable-referer-log' '--enable-wccpv2'
'--enable-linux-netfilter' '--enable-async-io' '--enable-ssl'
'--with-openssl'.  We are running this on a Dell PowerEdge 2350 server
with a 2.8GHz processor, 2GB RAM, and 96GB of RAID5 storage.

I am utilizing SquidGuard for content filtering.

This setup is running on two identical boxes where the installation
process was duplicated from one box to the other.  We are running wccp
on our core Cisco switches to handle failover on those boxes.

This setup has been running fine for over a year, with the occasional
wiping of the cache, a restart using squid -z, and a re-run of the
startup script I wrote.  I had not had a single problem with this
setup until the last two days.  Suddenly I began to get errors when
connecting to Yahoo's mail system (just mail; the other sections of
Yahoo work fine), Hotmail, and some vendor sites that our users have
to access that use .asp pages.  Now, whenever I try to go to one of
those troubled sites, I get a connection reset by peer error in the
browser and nothing at all in the cache.log.

To compound the puzzle if I shut down both squid boxes then all my
traffic goes through our Pix firewall and works just fine (the squid
boxes sit parallel to the Pix not behind it) then as soon as we start up
squid and try it fails with the same errors.

Does anyone have any thoughts or comments?  Like I said, this has run
fine for a year or so, and the problem just started in the last two
days.  Nothing has been done to these servers prior to this problem
starting.

-- 
Paul Fiero
Information Security Analyst
Communications  Technology Office
Enterprise Support Group
(512) 974-3559


[squid-users] no_cache directive

2004-03-10 Thread Paul . Fiero
I am running Squid 2.5.STABLE3 and am using it with SquidGuard.
Currently I am using statements such as the ones below:

acl COAERS url_regex ^http://www.coaers.org.*
no_cache deny COAERS

to keep from caching the site above, however if I look in the access.log I
see the following type traffic:

1078922998.183    226 xxx.xxx.xxx.xxx TCP_MISS/200 9256 GET http://www.coaers.org/ - DIRECT/204.200.192.205 text/html
1078922998.319    210 xxx.xxx.xxx.xxx TCP_MISS/200 8037 GET http://www.coaers.org/images/coaers%20logo.gif - DIRECT/204.200.192.205 image/gif
1078922998.541     69 xxx.xxx.xxx.xxx TCP_MISS/200 6144 GET http://www.coaers.org/images/mailbox.gif - DIRECT/204.200.192.205 image/gif
1078922998.609    131 xxx.xxx.xxx.xxx TCP_MISS/200 1742 GET http://www.coaers.org/_vti_bin/fpcount.exe/? - DIRECT/204.200.192.205 image/gif
1078922998.638    166 xxx.xxx.xxx.xxx TCP_MISS/200 16936 GET http://www.coaers.org/menu/Retirement%20Ofc.%20RGB.jpg - DIRECT/204.200.192.205 image/jpeg

Does the DIRECT statement indicate that the traffic to the site is indeed
being pulled directly from the site and is not cached?

Essentially the problem is that users inside my network say the web
page shows a different "Updated on" date than the one it should,
because outside of our network they see the proper date.  I have shut
down and flushed all my cache files (about 40GB worth on each server),
cleared my local browser cache, and restarted the caching servers, all
to no avail.  The data still seems to be getting cached somewhere by
someone.  Any obvious ideas that I don't seem to be catching?
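Two observations on the trace above, offered as readings of the
standard access.log format rather than a diagnosis: TCP_MISS/200 with
DIRECT means each object was fetched from the origin server, so these
particular requests were not served from this cache's store; and in the
url_regex the unescaped dots match any character, not just literal
dots. A dstdomain acl is a simpler, cheaper way to express the same
intent:

```
# squid.conf sketch: dstdomain avoids regex pitfalls and is faster
acl COAERS dstdomain www.coaers.org
no_cache deny COAERS
```

If the logs already show TCP_MISS/DIRECT for these requests, the stale
date is probably coming from a cache elsewhere in the path (or from the
origin itself), not from these squid boxes.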

PFiero


[squid-users] Disk space over limit

2003-09-26 Thread Paul . Fiero
What would cause me to continue to get a WARNING: disk space over
limit error message?  I have my cache set to about 30GB and I have
90GB free on that disk.  It is causing me no end of problems because
it causes squid not to start back up cleanly.
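One frequent cause of this warning, offered as a guess to check rather
than a confirmed diagnosis: cache_dir takes its size in megabytes, so a
value intended as "30 GB" must be written as 30000, and the swap
watermarks control when squid starts evicting before the limit is hit.
A squid.conf sketch, with the cache path assumed:

```
# size is in MB: 30000 MB is roughly 30 GB
cache_dir ufs /usr/local/squid/cache 30000 16 256
# begin evicting at 90% of the limit, aggressively at 95%
cache_swap_low 90
cache_swap_high 95
```

If the configured size and the on-disk swap state disagree (e.g. after
an unclean shutdown), wiping the cache_dir and re-running squid -z
rebuilds a consistent swap directory.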


Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





Vampireware /n/, a project, capable of sucking the lifeblood out of anyone
unfortunate enough to be assigned to it, which never actually sees the light
of day, but nonetheless refuses to die.




[squid-users] Success

2003-08-28 Thread Paul . Fiero
Well all, I wanted to write to the group and give my thanks and appreciation
for the folks who have helped me get this beast up and running.

Just a lowdown here, but I am running Squid v2.5-STABLE3 on a Dell
PowerEdge server with a 2.4GHz Xeon processor and 1GB RAM.  It has
three 36GB HDDs, with 9GB set up as the OS partition and the remaining
space in a RAID 0 container sized for cache and logging.  With some
assistance from the folks here, the many different FAQs, and some
trial and error, I have been able to get WCCPv2 redirection working
from my Cisco backbone router.

The interesting thing about this setup is that it is providing caching
services for an enterprise network of just about 5,500 PCs connecting
to the internet at 15Mbps.  Currently I see a peak hourly hit count of
between 420,000 and 430,000 hits/hour.  The server is running at about
85% CPU utilization, but this whole setup represents an improvement in
performance over the existing Cacheflow appliance that we have had
running for several years now.

I am still learning much about writing the acls and tweaking the squid
configuration file, but all in all it works like a charm.  Thanks again to
all who have contributed to the application and to those who have helped me
through some of the tight spots.


Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





[squid-users] File Descriptors

2003-08-19 Thread Paul . Fiero
I have finally managed to get WCCPv2 working on my box.  It works
great.

Now I am having errors show up in my log saying I am running out of
file descriptors.

I have checked my /proc/sys/fs/file-max and /proc/sys/fs/inode-nr files and
they both are set pretty high.  When I check the cachemgr runtime
information page it shows squid seeing 1024 file descriptors available.

Do I need to recompile squid to fix this?
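The 1024 figure is the usual tell here (an assumption worth confirming
via the cachemgr page): Squid 2.5 sizes its descriptor table at build
time from the limit in effect when configure ran, so raising
/proc/sys/fs/file-max alone doesn't help. A sketch of the usual fix,
with the limit value an arbitrary example:

```
# raise the per-process limit, then re-run configure and rebuild so the
# higher limit is compiled in; repeat the same ulimit in the startup
# script before launching squid
ulimit -HSn 8192
./configure --prefix=/usr/local/squid && make && make install
```

After the rebuild, the cachemgr runtime information page should report
the higher "Maximum number of file descriptors" value.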


Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





[squid-users] WCCPv2 problems....still

2003-08-15 Thread Paul . Fiero
Well I have managed to get some help from some other very helpful folks and
currently have kernel 2.4.20 patched with the ip_wccp module.  I have
applied the wccpv2 patch from visolve for version 2.5 of squid.  When I ran
the patch I got no errors, when I compiled everything indicated that WCCP
was turned on, but when I run make I get the following:


gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\"/usr/local/squid/etc/squid.conf\" -I. -I.
-I../include -I. -I. -I../include -I../include -I/usr/kerberos/include
-g -O2 -Wall -D_REENTRANT -c `test -f wccpv2.c || echo './'`wccpv2.c
wccpv2.c: In function `wccp2HandleUdp':
wccpv2.c:338: warning: unused variable `tmp'
wccpv2.c: In function `wccp2AssignBuckets':
wccpv2.c:442: warning: unused variable `wccp2_assign_bucket'
wccpv2.c:443: warning: unused variable `buckets_per_cache'
wccpv2.c:444: warning: unused variable `loop'
wccpv2.c:445: warning: unused variable `number_caches'
wccpv2.c:447: warning: unused variable `caches'
wccpv2.c:448: warning: unused variable `offset'
wccpv2.c:449: warning: unused variable `buckets'
wccpv2.c:450: warning: unused variable `buf'

At this point, I can finish compiling squid and installing it.  I can even
run it (with all the configurations necessary for activating WCCPv2), but
when I look at the cache.log files there is no WCCP traffic being generated.

Can someone give me some feedback on this?


> On Wednesday 06 August 2003 04.41, Allen Stringfellow wrote:
>> I am trying to compile Squid-2.5.3 on RedHat Linux 9.0 (kernel
>> 2.4.21 with ip_wccp.patch patched into the precompile includes).  I
>> ran the wccpv2.patch against the precompiled squid source and ran
>> the configure script with --linux-netfilter, --delay-pools,
>> --snmp, and --wccpv2 options enabled.  When I run 'make all' I get
>> the following errors from the wccpv2 mod:
>
> Did you get any errors when you ran the patch command?
>
> Regards
> Henrik

Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





[squid-users] bootstrap.sh

2003-08-15 Thread Paul . Fiero
Can anyone tell me where I can get the bootstrap.sh script from the
CVS tree?  I am lost trying to find it.  I think my WCCP warnings in
'make' (refer to previous posting) come from not having the proper
bootstrap.sh script.


Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





RE: [squid-users] Compile WCCP module optimally

2003-08-14 Thread Paul . Fiero
Okay so if GRE is already compiled and loaded via insmod, and I can set up
the tunnel to the router in question, my next step is to configure the
router to redirect traffic to the squid server via WCCP as per normal?  I
don't have to configure the router for GRE tunneling specifically?

And once that is done, then I have to enable port redirection?  Via iptables
or another app?

Regards and Thanks for all the assistance,
Paul Fiero

-Original Message-
From: Henrik Nordstrom
To: Awie; Squid-users
Sent: 8/13/2003 1:48 AM
Subject: Re: [squid-users] Compile WCCP module optimally

On Wednesday 13 August 2003 04.49, Awie wrote:

> Anyway, should I also activate the IP GRE when I use WCCP (let's say
> the ip_wccp.o module is already loaded by insmod)?

NO.

> I become confused; some documents explained to load WCCP and GRE
> together at the same time.  But I agree with your email that said it
> can only have one installed at a time.

Which document explains to load both modules? If there is such a
document it should be corrected, as it is impossible to have two
different GRE modules loaded at the same time (the ip_wccp module is a
GRE module, only implementing WCCP decapsulation).

Regards
Henrik


RE: [squid-users] Compile WCCP module optimally

2003-08-14 Thread Paul . Fiero
I have this set up using the ip_gre module.  I didn't do anything to patch
it.  How do I do that?  I managed to find a patch for ip_gre at swelltech
but don't have a clue as to how to apply it.

The way I understand it all, the router intercepts the port 80 traffic and
forwards it to the proxy via WCCP.  The proxy receives the traffic via the
ip_gre module where it is grabbed by iptables and redirected to the port
that the proxy is listening on.  Then the proxy does its business and sends
the return traffic to the client via regular traffic and not back through
the gre tunnel.  Is this correct?
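The flow described above matches the usual Linux-side setup. A sketch
of the receiving end, with the router and host addresses hypothetical,
and assuming an ip_gre patched for WCCP decapsulation as discussed in
this thread:

```
# GRE endpoint for WCCP-encapsulated traffic from the router
modprobe ip_gre
ip tunnel add wccp0 mode gre remote 10.0.0.1 local 10.0.0.2 dev eth0
ip link set wccp0 up
# accept decapsulated packets despite the asymmetric return path
echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
# hand port-80 traffic arriving over the tunnel to the proxy port
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 \
    -j REDIRECT --to-port 3128
```

Return traffic then leaves via the normal default route, not back
through the tunnel, exactly as described above.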

Okay, now the big question: is there an easier method of transparent
proxying?  LOL.

Cheers and Thanks for all the help,
Paul Fiero

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 13, 2003 7:16 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: RE: [squid-users] Compile WCCP module optimally


On Wed, 13 Aug 2003 [EMAIL PROTECTED] wrote:

> Okay, so if GRE is already compiled and loaded via insmod, and I can
> set up the tunnel to the router in question, my next step is to
> configure the router to redirect traffic to the squid server via WCCP
> as per normal?  I don't have to configure the router for GRE
> tunneling specifically?

Are you using the ip_gre module? In such case, is the module patched
to support WCCP?

I think you just set up a gre interface with the router as endpoint.
Then the patch takes over. This GRE interface should only be used for
receiving traffic from the router, not for return traffic.

> And once that is done, then I have to enable port redirection?  Via
> iptables or another app?

iptables, just as in the transparent interception case when not using
WCCP.

The purpose of WCCP is just to have the packets routed to the proxy server 
box. It is the responsibility of the proxy server box to intercept the 
traffic and redirect it to the proxy application.

Regards
Henrik


[squid-users] WCCP

2003-08-12 Thread Paul . Fiero
I am looking at some documentation on installing and configuring WCCP for
use with squid.  However those docs are for RedHat Linux v7.1 with a 2.4.9
kernel and squid 2.4.STABLE2.  I am running RedHat Linux 9 with a 2.4.20
kernel and squid 2.5.STABLE3.

I need to get WCCP running and seem to have problems getting my
fingers around the concept.  I have seen a couple of different docs
for older versions of Linux; one says I have to have wccp AND gre
installed, and one just calls for gre.

I am following the docs for the previously mentioned install but I can't
seem to compile the wccp module.

I am, unfortunately, in a time crunch on this, as I have to have it up
and running for a test beginning on Friday the 15th, and this was a
twist I had not previously planned on.  Any assistance would be
appreciated.


Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





[squid-users] Who uses squid?

2003-07-31 Thread Paul . Fiero
I am once again asking, for justification purposes at our location,
whether anyone can give me an idea of any large networks using squid
for their caching requirements.

We are currently considering a move to squid as an alternative to our
current cache appliance.  The powers that be here would like a little bit of
an assurance that installations aren't just limited to small networks and
schools.  Some of you have already responded to a previous query of this
nature and I appreciate the data.  I have those numbers in a document
already.  I am just looking for a bit more.

Please feel free to respond either on or off list.

Thanks so very much,

Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]





Re: [squid-users] Hardware specs

2003-07-23 Thread Paul . Fiero
I have heard that it would be preferable to set up the cache storage
area on a RAID 5 set using high-speed Ultra SCSI drives and a caching
controller.  Does this sound right?  I haven't committed to any
hardware yet but may actually have a budget to get two boxes
(primary / failover) built to my specs.  Currently I am looking at
single 1GHz-processor boxes, 1GB RAM, and three 36GB 15k-rpm drives
attached to a caching RAID controller.

Any suggestions about the failover option?

PFiero



[squid-users] Question for the users....

2003-07-22 Thread Paul . Fiero
I am in the process of trying to put together a list of current squid
users that I could present to my management to convince them that
other people are using it.  If any of you would care to respond to me
off list I would appreciate it.  I would be interested in a) a list of
users that anyone knows of, or b) a response telling me whether you
use it or know of anyone using it.

My e-mail address is [EMAIL PROTECTED]



Paul Fiero
Information Security Analyst
City of Austin Communications and Technology Management
(512) 974-3559
[EMAIL PROTECTED]
