RE: [SLUG] home server on adsl; advice

2003-06-02 Thread Minh Van Le
Correct me if I'm wrong, but having two firewalls is better than one.

One firewall for the DSL modem that is exposed to the internet, and then a separate
firewall for the internal LAN that is only exposed to the DSL firewall, is
better than firewalling everything from one box. It may delay a compromise and
make tracking logs easier.
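
For illustration, here is roughly what the inner firewall's policy would look
like in iptables (interface name and subnet are examples only, not from
anyone's actual setup):

# default-deny; only allow replies plus new connections from the LAN side
iptables -P INPUT   DROP
iptables -P FORWARD DROP
iptables -A INPUT   -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# eth0 = internal LAN side; nothing new is accepted from the DSL-firewall side
iptables -A FORWARD -i eth0 -s 192.168.0.0/24 -j ACCEPT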

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Chris D.
> Sent: Sunday, 1 June 2003 19:10
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] home server on adsl; advice
>
>
> This one time, at band camp, Amanda Wynne wrote:
> >I'm looking at getting an Alcatel Pro. Currently running a P120
> with Freesco
> >via dialup.
>
> I'd recomend the DSL-300 from D-Link. There it maintains the
> authentication and you just plugin a cat5 crossover to your system.
> On the system it's connected to, you just use dhcp to configure the IP
> address on it.
>
> >What I'm thinking of doing, if it's possible (this was going to
> be my next
> >question) is change the Freesco box to bridge mode, feeding the
> alcatel, with
> >my web server (yet another box) hanging off the alcatel. That
> way my Lan is
> >effectively double-firewalled.
>
> 'double-firewalled' is really not going to mean much.
>
> I refuse to say free-->SCO<-- is a good idea.
>
> Cheers,
> Chris
> --
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] IMAP client on Windows

2003-06-02 Thread Stewart
I've used win32 Eudora with uw-imap for ages and had no complaints... 
and it seems to be going fine with cyrus now too.

(OTOH Eudora 5.2 on Mac OS 8.6 seems to have broken TLS/SSL but we 
won't go there ;-)

..S.


On 2/06/2003 7:10 AM +1000 Nik Belajcic wrote:
I know this is not a Linux question in the strict sense, but I am
posting it  here because I think that there may be others who have
already faced the same  problem and might offer suggestions. I have
just finished setting up my mail  server at home (Fetchmail,
Postfix, Procmail, Courier-IMAP) and I am trying  to find something
that would be more or less a Windows clone of KMail -  simple but
functional - to use as IMAP Windows client. I have looked at
Mahogany, Mulberry, Sylpheed, Pocomail3 (beta) and Mozilla mail,
besides the  obvious one that will remain nameless, but they all
have one problem or  another. I just can't believe that there is
nothing that fits this  description, so any suggestions are most
welcome.
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] IMAP client on Windows

2003-06-02 Thread Brett Fenton
I have a very similar setup here and in the office, except mine is
getmail -> sortmail -> uw-imapd.

I also have an issue with finding a decent client to handle IMAP on
Windows. I haven't tried some of those you've mentioned, but can give
advice on those you omitted to name. Outlook is terrible and
continually crashed; Express was notably better but still has its own
quirks. I find myself more often than not using a terminal emulator and
mutt. 

Brett

On Mon, 2003-06-02 at 07:10, Nik Belajcic wrote:
> On Sunday 01 June 2003 07:52 am, Nik Belajcic wrote:
> 
> I know this is not a Linux question in the strict sense, but I am posting it 
> here because I think that there may be others who have already faced the same 
> problem and might offer suggestions. I have just finished setting up my mail 
> server at home (Fetchmail, Postfix, Procmail, Courier-IMAP) and I am trying 
> to find something that would be more or less a Windows clone of KMail - 
> simple but functional - to use as IMAP Windows client. I have looked at 
> Mahogany, Mulberry, Sylpheed, Pocomail3 (beta) and Mozilla mail, besides the 
> obvious one that will remain nameless, but they all have one problem or 
> another. I just can't believe that there is nothing that fits this 
> description, so any suggestions are most welcome.
> 
> Thanks in advance.
> 
> Cheers,
> Nik Belajcic.
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] DHCP/LAN problem: can ping IP cannot ping Hostname

2003-06-02 Thread Laurie Savage
Thanks to all who helped. Problem mostly resolved. Falls on sword!

IAATS (I am ashamed to say) the mozilla proxy settings were not set up to
let me browse to the CUPS server on my home LAN.

The problem still exists, however, of having to comment out lines referring
to my home LAN in the /etc/hosts file when connecting to the Novell 
network at work. GNOME and OO.org hang if the home host is referred to in 
/etc/hosts - "host blackbox does not exist on this network..." stuff.

I have added the line "server-name blackbox" to blackbox's /etc/dhcpd.conf, 
but my client seems to need the reference line in its own /etc/hosts.
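
One thing that may help (a sketch only; the domain name and address are made
up, and it assumes a DNS server for the home LAN actually runs on blackbox)
is having dhcpd hand out the DNS details, so the laptop can resolve blackbox
without an /etc/hosts entry:

# /etc/dhcpd.conf fragment
option domain-name "home.lan";
option domain-name-servers 192.168.0.1;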

Any feedback on setting up laptops to talk to various servers would be 
appreciated.
-- 

Laurie Savage

Physics/Maths/IT Teacher
Pascoe Vale Girls' College
Pascoe Vale, Victoria, AUSTRALIA


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] can someone reply

2003-06-02 Thread Mary
On Sun, Jun 01, 2003, [EMAIL PROTECTED] wrote:
> can anyone enlighten me why are my posts to slug bouncings every time ?
> 
> I've asked the slug-bounce-admin, but got no reply

It's hard to tell; I had a look through the headers we hold for approval
and I can't see your headers in there. Hopefully another one of the
admins will have time to look soon.

PLEASE don't mail these questions to slug@

ONLY the admins have access to the information about who and what is
restricted. (Note to slug@ at large: we're not censoring SLUG, we're
attempting to filter spam - adding restricted headers to mailman's
restricted list was the best way to do this prior to us starting to use
SpamAssassin.)

Mailing these questions to slug@ is (I presume) annoying to subscribers,
since there is no way they can possibly answer your question. If the
admins don't reply, then mail us again. There's no one on slug@ who can
answer your question who is not also on [EMAIL PROTECTED]

-Mary
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] Redundant Web Servers

2003-06-02 Thread Jon Biddell
Hi all,

Our "marketing types" want 24/7 availability of our corporate web 
site - a fair enough request, I guess...

However we have a number of restrictions on what we can do;

1. Must (presently) remain with IIS - moving to a Linux/Apache 
solution may become possible later, but it's "political"

2. Servers must be physically located on different campuses - 
because we connect to the 'net through AARNET, we want them on 
different RNOs.

3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one 
fails. Doing a "shift-reload" in the browser is NOT an option. It 
must be TOTALLY TRANSPARENT.

Keeping the boxes in sync is no problem.

I was thinking of a Linux box with 3 NICs - one to each server and 
one to the 'net, but this will only work if the servers are 
physically located on the same network.

The only other solution I can come up with, given the above anal 
restrictions, is to use a "round robin" DNS setup, but this will 
involve the browser doing a reload, if the primary server fails, to pick 
up the secondary DNS entry.

I'm open to suggestions if anyone knows of a more elegant way of 
doing it - hell, if anyone knows how to make it work, I'll listen 
!!

Jon
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] can someone reply

2003-06-02 Thread Dave Airlie

Received: from sbt.net.au (echidna.sbt.net.au [203.42.34.53])
by maddog.slug.org.au (Postfix) with SMTP id 1412A10A443
for <[EMAIL PROTECTED]>; Sun,  1 Jun 2003 22:41:56 +1000 (EST)
Received: from 210.49.76.118 (echidna.sbt.net.au [203.42.34.53]) by
sbt.net.au
(Hethmon Brothers Smtpd) id 20030601223537-65200-7 ;
Sun, 01 Jun 2003 22:35:37 -1000

the above looks suspicious to me .. the claimed IP and the hostname/IP from
the reverse lookup are different...

Dave.

On Sun, 1 Jun 2003 [EMAIL PROTECTED] wrote:

> can anyone enlighten me why are my posts to slug bouncings every time ?
>
> I've asked the slug-bounce-admin, but got no reply
>
>
>
> Return-Path: [EMAIL PROTECTED]
> Received: from maddog.slug.org.au (slug.progsoc.uts.edu.au [138.25.7.4]) by 
> sbt.net.au
> (Hethmon Brothers Smtpd) id 20030601220617-61409-8 ; Sun, 01 Jun 2003 22:06:17 
> -1000
> Received: from maddog.slug.org.au (localhost [127.0.0.1])
>   by maddog.slug.org.au (Postfix) with ESMTP id C861810A443
>   for <[EMAIL PROTECTED]>; Sun,  1 Jun 2003 22:12:34 +1000 (EST)
> MIME-Version: 1.0
> Content-Type: text/plain; charset="us-ascii"
> Content-Transfer-Encoding: 7bit
> Subject: Your message to slug awaits moderator approval
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Message-ID: <[EMAIL PROTECTED]>
> Date: Sun, 01 Jun 2003 22:12:33 +1000
> Precedence: bulk
> X-BeenThere: [EMAIL PROTECTED]
> X-Mailman-Version: 2.1.1
> List-Id: Linux and Free Software Discussion 
> X-List-Administrivia: yes
> Sender: [EMAIL PROTECTED]
> Errors-To: [EMAIL PROTECTED]
>
> Your mail to 'slug' with the subject
>
> bouncing email ?
>
> Is being held until the list moderator can review it for approval.
>
> The reason it is being held:
>
> Message has a suspicious header
>
> Either the message will get posted to the list, or you will receive
> notification of the moderator's decision.  If you would like to cancel
> this posting, please visit the following URL:
>
>
>
> Voytek
>

-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / [EMAIL PROTECTED]
pam_smb / Linux DecStation / Linux VAX / ILUG person

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] home server on adsl; advice

2003-06-02 Thread Phil Scarratt
It's effectively - in security speak - a DMZ (demilitarized zone) no?

Fil

Minh Van Le wrote:
Correct me if I'm wrong, but having two firewalls is better than one.

One for the DSL modem that is exposed to the internet, and then a separate
firewall for the internal lan that is only exposed to the DSL firewall is
better than firewalling everything from 1 box. It may delay a compromise and
make tracking logs easier.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Chris D.
Sent: Sunday, 1 June 2003 19:10
To: [EMAIL PROTECTED]
Subject: Re: [SLUG] home server on adsl; advice
This one time, at band camp, Amanda Wynne wrote:

I'm looking at getting an Alcatel Pro. Currently running a P120
with Freesco

via dialup.
I'd recomend the DSL-300 from D-Link. There it maintains the
authentication and you just plugin a cat5 crossover to your system.
On the system it's connected to, you just use dhcp to configure the IP
address on it.

What I'm thinking of doing, if it's possible (this was going to
be my next

question) is change the Freesco box to bridge mode, feeding the
alcatel, with

my web server (yet another box) hanging off the alcatel. That
way my Lan is

effectively double-firewalled.
'double-firewalled' is really not going to mean much.

I refuse to say free-->SCO<-- is a good idea.

Cheers,
Chris
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug





--
Phil Scarratt
Draxsen Technologies
IT Contractor/Consultant
0403 53 12 71
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] SQUID logfile parsers/analysers

2003-06-02 Thread Gareth Walters
G'day all,
Now that I have NTLM authentication working, I need a squid logfile
parser/analyser that will handle the usernames.

I am not having much luck, I have been using pwebstats but it doesn't handle
usernames at all.

Has anyone got any recommendations?

Ideally I would like monthly reports (our billing period) and
daily/weekly reports that are as up to date as possible.

---Gareth Walters

***
This information may contain PRIVILEGED AND CONFIDENTIAL information
intended only for the use of the addressee(s). Anyone who receives this
communication in error, should notify us immediately and destroy the
original message without reading, copying or forwarding it to anyone.
***

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Steve Kowalik
At  9:17 am, Monday, June  2 2003, Jon Biddell mumbled:
> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one 
> fails. Doing a "shift-reload" in the browser is NOT an option. It 
> must be TOTALLY TRANSPARENT.
> 
You're going to get one anyway. If the machine falls over, you're not going
to get any more data, and the client will have to re-request.

One solution is mod_backhand with apache, and the IIS servers behind it.

That may conflict with the politics, but whatever.

-- 
   Steve
* StevenK laughs at Joy's connection.
* Joy spits on StevenK 
* StevenK sees the spit coming at him slowly and ducks in time.
 StevenK: how did you do that?  you moved like *them*
 jaiger: Can you fly that thing? *points*
 not yet
 apt-get install libpilot-chopper
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] SQUID logfile parsers/analysers

2003-06-02 Thread Peter Hardy
On Mon, 2 Jun 2003 12:02:11 +1000 Gareth Walters wrote:
> G'day all,
> Now I have the ntlm authentication working I need a squid logfile
> parser/analyser that will
> handle the usernames.
> 
> I am not having much luck, I have been using pwebstats but it doesn't
> handle usernames at all.

I've been playing with SARG (http://web.onda.com.br/orso/sarg.html). It
claims to handle usernames, but I haven't tried that part.

Beyond that, it gives very comprehensive web reports.  Not too sure
exactly what you need, but I'm willing to bet it'll do it for you. :-)

-- 
Pete
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] PHP & dates

2003-06-02 Thread Howard Lowndes
Is there any configuration parameter, or other setting, that will force PHP 
to assume non-US date formats, especially for the strtotime() function, so that 
11/12/2003 is the 11th of December 2003 and not the 12th of November 2003?  The 
doco does not appear to consider this need.
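
For what it's worth, strtotime() is generally documented to assume the US
m/d/y order when the separator is a slash, so one workaround is to parse the
fields yourself (a sketch only; the helper name is made up):

<?php
// treat d/m/Y explicitly instead of relying on strtotime()'s heuristics
function au_strtotime($date)
{
    list($day, $month, $year) = explode('/', $date);
    return mktime(0, 0, 0, (int)$month, (int)$day, (int)$year);
}

echo date('j F Y', au_strtotime('11/12/2003'));   // prints 11 December 2003
?>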

-- 
Howard.
LANNet Computing Associates - Your Linux people 
--
Flatter government, not fatter government - Get rid of the Australian states.
--
I before E except after C. We live in a weird society!

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] SQUID logfile parsers/analysers

2003-06-02 Thread Jeff_Allison%BLACKSHAW

SARG and Webalizer both handle usernames.
Webalizer seems to be better for statistics and SARG for blaming people.
It depends on what you want them for.


Jeff

[EMAIL PROTECTED] wrote on 02/06/2003 12:02:11:

> G'day all,
> Now I have the ntlm authentication working I need a squid logfile
> parser/analyser that will
> handle the usernames.
> 
> I am not having much luck, I have been using pwebstats but it doesn't
handle
> usernames at all.
> 
> Has anyone got any recommendations?
> 
> Ideally I would like monthly reports (our billing period) and
> as up to date as possible reports on a daily/weekly basis.
> 
> ---Gareth Walters
> 
> ***
> This information may contain PRIVILEGED AND CONFIDENTIAL information
> intended only for the use of the addressee(s). Anyone who receives
this
> communication in error, should notify us immediately and destroy the
> original message without reading, copying or forwarding it to anyone.
> ***
> 
> -- 
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] SQUID logfile parsers/analysers

2003-06-02 Thread Simon Bryan
Gareth Walters said:
> G'day all,
> Now I have the ntlm authentication working I need a squid logfile
> parser/analyser that will
> handle the usernames.
>
> I am not having much luck, I have been using pwebstats but it doesn't handle
> usernames at all.
>
> Has anyone got any recommendations?
>
> Ideally I would like monthly reports (our billing period) and
> as up to date as possible reports on a daily/weekly basis.
There are a number of good ones linked to from the Squid FAQ; I use SARG and
Squidalyser. SARG gives excellent daily reports, per user plus a number of other
categories. If looking to tie this to a billing system then look at Squidalyser, as
it stores the data in a database that you can use - we have a number of PHP scripts
that check users' downloads and stick them in a list if they go over the monthly
limit. Squid then uses that list to put them into a very restrictive delay_pool.
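
For anyone wanting to copy the idea, the squid.conf side of it looks roughly
like this (the ACL name, file path and numbers are only illustrative):

# one username per line in the file
acl overquota proxy_auth "/etc/squid/overquota.users"
delay_pools 1
# class 1: a single aggregate bucket per pool
delay_class 1 1
# roughly 4 KB/s sustained with an 8 KB burst
delay_parameters 1 4000/8000
delay_access 1 allow overquota
delay_access 1 deny all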




Simon Bryan
IT Manager
OLMC Parramatta
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread James Gregory
Let me prefix this: I don't really know what I'm talking about, double
check anything I say.

On Mon, 2003-06-02 at 09:16, Jon Biddell wrote:

> 2. Servers must be physically located on different campuses - 
> because we connect tot he 'net through AARNET, we want them on 
> different RNO's.
> 
> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one 
> fails. Doing a "shift-reload" in the browser is NOT an option. It 
> must be TOTALLY TRANSPARENT.

Wow. Well, point 3 makes it pretty hard. As I understand it, that's an
intentional design decision of tcp/ip -- if it were easy to have another
computer interrupt an existing tcp connection and just take it over,
then I'm sure it would be exploited. Thus to keep a tcp connection open
you need to have a certain amount of state information; I think it does
this through so-called "sequence numbers", but I'm not a network ninja,
so I'm not sure. The point is that to be able to have another computer
step in half way through a transaction, you'll need to have state
information being transferred between the two computers constantly.

Now, the other option is to have some sort of proxying server which just
farms requests out to each server, but then you have a single point of
failure and you're right back where you started.

I believe that there are boxes that do this, but they're hugely
expensive. Like hundreds of thousands of dollars.

So, I suppose you need to analyze the risks that you're trying to
minimise. It would be easier to have a single box in a single building
with multiple connections that were arbitrated by bgp. I still think
you'd need to do a reload in most real situations.

I'll be interested to hear what you come up with. Sorry I can't be more
help.

James.


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Peter Chubb
> "James" == James Gregory <[EMAIL PROTECTED]> writes:


>> 2. Servers must be physically located on different campuses -
>> because we connect tot he 'net through AARNET, we want them on
>> different RNO's.
>> 
>> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one
>> fails. Doing a "shift-reload" in the browser is NOT an option. It
>> must be TOTALLY TRANSPARENT.

James> Wow. Well, point 3 makes it pretty hard. As I understand it,
James> that's an intentional design decision of tcp/ip -- if it were
James> easy to have another computer interrupt an existing tcp
James> connection and just take it over, then I'm sure it would be

If you're only serving static content, that's not an issue:  HTTP
version 1 uses a new tcp/ip connexion for each request anyway, and
with round-robin DNS you may end up with different images on the same
page being served from different servers anyway.

Personally I'd go with round-robin DNS, and try to detect failure and
update the DNS fast.  Some people's browsers would appear to hang
for a short while when attempting to access the next page, until the
DNS caught up (this implies using a short timeout on the name).
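
For reference, the zone fragment for that would look something like this
(names and addresses are placeholders; the short TTL is the important bit):

$TTL 60                       ; re-resolve after a minute
www     IN  A   192.0.2.10    ; campus A server
www     IN  A   192.0.2.20    ; campus B server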


Peter C
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread James Gregory
On Mon, 2003-06-02 at 14:49, Peter Chubb wrote:
> > "James" == James Gregory <[EMAIL PROTECTED]> writes:
> 
> 
> >> 2. Servers must be physically located on different campuses -
> >> because we connect tot he 'net through AARNET, we want them on
> >> different RNO's.
> >> 
> >> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one
> >> fails. Doing a "shift-reload" in the browser is NOT an option. It
> >> must be TOTALLY TRANSPARENT.
> 
> James> Wow. Well, point 3 makes it pretty hard. As I understand it,
> James> that's an intentional design decision of tcp/ip -- if it were
> James> easy to have another computer interrupt an existing tcp
> James> connection and just take it over, then I'm sure it would be
> 
> If you're only serving static content, that's not an issue:  HTTP
> version 1 uses a new tcp/ip connexion for each request anyway,
> With round-robin DNS you may end up with different images on the same
> page being served from different servers anyway.

Sure, that's a given. I thought the problem was that it had to happen
without a reload - server crashing halfway through serving a particular
html page. I considered 0 ttl dns as well, but it only works if you can
afford reloads.

James.

> 
> Personally I'd go with round-robin DNS, and try to detect failure and
> update the DNS fast.  Some people's browsers would appear to hang
> for a short while when attempting to access the next page, until the
> DNS caught up (this implies using a short timeout on the name).
> 
> 
> Peter C
> 

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Andrew McNaughton





On Mon, 2 Jun 2003, James Gregory wrote:

> > >> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one
> > >> fails. Doing a "shift-reload" in the browser is NOT an option. It
> > >> must be TOTALLY TRANSPARENT.
> >
> > James> Wow. Well, point 3 makes it pretty hard. As I understand it,
> > James> that's an intentional design decision of tcp/ip -- if it were
> > James> easy to have another computer interrupt an existing tcp
> > James> connection and just take it over, then I'm sure it would be
> >
> > If you're only serving static content, that's not an issue:  HTTP
> > version 1 uses a new tcp/ip connexion for each request anyway,
> > With round-robin DNS you may end up with different images on the same
> > page being served from different servers anyway.
>
> Sure, that's a given. I thought the problem was that it had to happen
> without a reload - server crashing halfway through serving a particular
> html page. I considered 0 ttl dns as well, but it only works if you can
> afford reloads.

I suppose you might be able to hack something together with MIME's
multipart/x-mixed-replace in a proxy which monitored content length and
was ready to fetch a second MIME part where required.  It would be a bit
messy though, not necessarily compatible with all browsers, and the proxy
is still going to be a single point of failure.

Andrew McNaughton



--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Luke Burton
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Monday, June 2, 2003, at 09:16  AM, Jon Biddell wrote:

> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one
> fails. Doing a "shift-reload" in the browser is NOT an option. It
> must be TOTALLY TRANSPARENT.

The "marketing types" have to understand that nothing is perfect, for 
starters. HTTP and browsers aren't intelligent enough to go "oh, this 
feed stopped midway through. Let's see whether there are any secondary 
sites for this". Ultimately, you may end up with broken portions of the 
page, should something halt midway through serving a client.

That being said, they are probably not thinking of it in such a finely 
grained manner. That's worth clarifying though. Don't let them slip one 
past you!!

On to some technical stuff. I'm not really up to speed with exactly how 
squid works, but couldn't a round robin DNS present issues for clients 
accessing through a proxy? If squid has cached a DNS reply, it might 
query a stale IP address. Any squid boffins got comments on that one? 
I'm thinking of say, Telstra's proxy farm that all bigpond people go 
through for instance.

A good compromise might be to have a 'forwarder' machine hosted on a 
highly available, redundant network of your choosing. You make sure 
that the logic in this thing is as simple as possible, so that there is 
a minimised risk of it going wrong. You pay a few $$ to make sure that 
it's on failover hardware, redundant net connections, etc.

Its job is to forward requests to your bulkier, more failure prone IIS 
installations at your two campuses. It will know whether either of them 
have gone down or had performance unacceptably degraded, and start 
forwarding to your other box. There will be two processes - one will be 
a little httpd that executes a simple loop to decide where to forward 
the request; the other will be something that polls your servers to 
determine health (maybe even via SNMP + ping + HTTP GET). The second 
process feeds a small table that the first process uses to make 
decisions on.
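
As a sketch of that second process (hostnames, file path and interval are
made up), something as simple as this would do:

#!/bin/sh
# poll each backend with a plain HTTP GET and record which ones answered
BACKENDS="campus-a.example.edu campus-b.example.edu"
TABLE=/var/run/healthy-backends

while true; do
    : > "$TABLE.tmp"
    for host in $BACKENDS; do
        if wget -q -T 5 -O /dev/null "http://$host/"; then
            echo "$host" >> "$TABLE.tmp"
        fi
    done
    mv "$TABLE.tmp" "$TABLE"    # the forwarding process reads this table
    sleep 10
done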

Yes, this is technically a single point of failure system - but you are 
mitigating that by 1. keeping its job very, very simple; 2. putting it on 
a dedicated, simple Linux machine; 3. hosting it somewhere very highly 
available.

Regards,

Luke.

- --
Luke Burton.
(PGP keys: http://www.hagus.net/pgp)

"Yes, questions. Morphology, longevity, incept dates."


-BEGIN PGP SIGNATURE-
Version: PGP 8.0.2

iQA/AwUBPtruzYCXGdaqw+o1EQKBLACgp4N+fmgkt4EhyZaSevhD+vQpeqEAnjO8
dcC3gDLmv7x7heUkK6XW4AY1
=vpDV
-END PGP SIGNATURE-

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Dave Airlie
> through for instance.
>
> A good compromise might be to have a 'forwarder' machine hosted on a
> highly available, redundant network of your choosing. You make sure
> that the logic in this thing is as simple as possible, so that there is
> a minimised risk of it going wrong. You pay a few $$ to make sure that
> it's on failover hardware, redundant net connections, etc.


I think ideally you round-robin DNS a couple of these also...

Dave.

-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / [EMAIL PROTECTED]
pam_smb / Linux DecStation / Linux VAX / ILUG person

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] Mutt Question

2003-06-02 Thread Dan Treacy
Hey Sluggers,

Just a quick question for the mutt gurus amongst you. I get email to a
number of different addresses. Filtering etc places most of it in the
correct folder and a folder-hook takes care of making sure the mail is
sent from the appropriate user.

But there are a number of times when the mail isn't filtered and the
address it's sent to isn't the mutt default. My question is thus.

Is there any way to get Mutt to reply using the address the mail was
received at? So if we get mail for [EMAIL PROTECTED] and hit reply it uses
[EMAIL PROTECTED] as the from address regardless of what the mutt default is. And
if we get another mail to [EMAIL PROTECTED] it does the same.

Thanks in advance,

Dan.


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Mutt Question

2003-06-02 Thread Jeff Waugh


> Is there any way to get Mutt to reply using the address the mail was
> recieved at . so if we get mail for [EMAIL PROTECTED] and hit reply is uses
> [EMAIL PROTECTED] as the from address regardless of what the mutt default is. And
> if we get another mail to [EMAIL PROTECTED] it does the same.

set reverse_alias = yes

- Jeff

-- 
linux.conf.au 2004: Adelaide, Australia http://lca2004.linux.org.au/
 
   "Odd is good by the way. I knew normal in high school and normal hates
me." - Mary Gardiner
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Mutt Question

2003-06-02 Thread Jamie Wilkinson
This one time, at band camp, Dan Treacy wrote:
>Is there any way to get Mutt to reply using the address the mail was
>recieved at . so if we get mail for [EMAIL PROTECTED] and hit reply is uses
>[EMAIL PROTECTED] as the from address regardless of what the mutt default is. And
>if we get another mail to [EMAIL PROTECTED] it does the same.

No, I've never been able to find a way to do that perfectly.  I've tried
variants of using send-hook to make it work for a subset of cases, though.

Have a look at http://spacepants.org/conf/dot.muttrc and hunt for
'send-hook' near the bottom, which adjusts a few variables based on the
mailing list I'm replying to.

It doesn't do anything when you are composing a new mail, though, and
doesn't work when the mail you're replying to is on a list mutt doesn't know
about.
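
The relevant bits boil down to something like this (the addresses here are
placeholders, not real ones):

# default From:, then override it per list
send-hook '~A' 'my_hdr From: Your Name <you@example.org>'
send-hook '~C some-list@example.org' 'my_hdr From: Your Name <you-lists@example.org>'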

-- 
[EMAIL PROTECTED]   http://spacepants.org/jaq.gpg
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Mutt Question

2003-06-02 Thread Jamie Wilkinson
This one time, at band camp, Jeff Waugh wrote:
>
>
>> Is there any way to get Mutt to reply using the address the mail was
>> recieved at . so if we get mail for [EMAIL PROTECTED] and hit reply is uses
>> [EMAIL PROTECTED] as the from address regardless of what the mutt default is. And
>> if we get another mail to [EMAIL PROTECTED] it does the same.
>
>set reverse_alias = yes

Umm, that's for displaying names instead of email addresses in the index,
nothing to do with from addresses in a reply.

-- 
[EMAIL PROTECTED]   http://spacepants.org/jaq.gpg
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Chris D.
This one time, at band camp, Luke Burton wrote:
>On Monday, June 2, 2003, at 09:16  AM, Jon Biddell wrote:
>
>> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one
>> fails. Doing a "shift-reload" in the browser is NOT an option. It
>> must be TOTALLY TRANSPARENT.

>A good compromise might be to have a 'forwarder' machine hosted on a 
>highly available, redundant network of your choosing. You make sure 
>that the logic in this thing is as simple as possible, so that there is 
>a minimised risk of it going wrong. You pay a few $$ to make sure that 
>it's on failover hardware, redundant net connections, etc.

Just a thought, maybe just an old Pentium box that does port forwarding.

- Chris
[EMAIL PROTECTED]
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Mutt Question

2003-06-02 Thread Jeff Waugh


> Is there any way to get Mutt to reply using the address the mail was
> recieved at . so if we get mail for [EMAIL PROTECTED] and hit reply is uses
> [EMAIL PROTECTED] as the from address regardless of what the mutt default is. And
> if we get another mail to [EMAIL PROTECTED] it does the same.

Sorry, ignore my last email, I copied a bit from my own config but it had
nothing to do with what you were asking. I could swear there is a way, but
after a quick search I haven't re-found it. In the mean time, have a look at
this document:

  http://www.acoustics.hut.fi/~mara/mutt/profiles.html

- Jeff

-- 
linux.conf.au 2004: Adelaide, Australia http://lca2004.linux.org.au/
 
   "Driving Miss Daisy. Best film of 1989. So said the academy. What does
that tell you?" - Spike Lee
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Mutt Question

2003-06-02 Thread Jeff Waugh


> Umm, that's for displaying names instead of email addresses in the index,
> nothing to do with from addresses in a reply.

Yeah (see other mail), but I'm sure there's a simple setting to do it. Done
it before, I'm sure. Can't find/remember the setting. Gar.

- Jeff

-- 
linux.conf.au 2004: Adelaide, Australia http://lca2004.linux.org.au/
 
 "The GPL is good. Use it. Don't be silly." - Michael Meeks
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Robert Collins
On Mon, 2003-06-02 at 09:16, Jon Biddell wrote:
> Hi all,
> 
> Our "marketing types" want 24/7 availability of our corporate web 
> site - a fair enough request, I guess...
> 
> However we have a number of restrictions on what we can do;
> 
> 1. Must (presently) remain with IIS - moving to a Linux/Apache 
> solution may become possible later, but it's "political"

 :}. I suppose windows/Apache is also political?

> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one 
> fails. Doing a "shift-reload" in the browser is NOT an option. It 
> must be TOTALLY TRANSPARENT.

This is (as has already been mentioned) tricky. See below for a
discussion.

> Keeping the boxes in sync is no problem.
> 
> I was thinking of a Linux box with 3 NICs - one to each server and 
> one to the 'net, but this will only work if the servers are 
> physically located on the same network.

That box becomes a single point of failure.

> The only other solution I can come up with, given the above anal 
> restrictions, is to use a "round robin" DNS setup, but this will 
> involve doing a reload if the primary server fails to pick up the 
> secondary DNS entry.

Much more than a reload: if you encounter a flapping situation with both
servers, you may actually increase the perceived downtime (as a worst
case...).

> I'm open to suggestions if anyone knows of a more elegant way of 
> doing it - hell, if anyone knows how to make it work, I'll listen 
> !!

Firstly, you haven't clearly identified to us, your free consultants, the
current greatest failure risks. I.e. if the mean time between failures
for the various components is (using arbitrary figures):
Firewalls 60,000hrs.
Lan switches 100,000 hrs.
IIS 48 hrs.
Windows 200hrs.
Linux front end server 30,000 hrs.

And for simplicity we'll assume that failure here is catastrophic: you
put in a cold spare in the event of a failure. It's easy to see in the
above scenario that anything that encapsulates IIS will give you huge
uptimes relative to the naked beast being directly visible. That said,
you can start to plan how you make it all hang together.

ASSUMING that you are only concerned about IIS, not about NIC failures,
switch failures, firewall or router failures, it's really quite trivial:
front end IIS with squid, with a couple of hacks. The hacks will be to
buffer entire objects before sending full headers to the client; that
way a crashed server can result in squid retrying from the other server,
not in the client receiving an error.
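
A Squid-2.5-style accelerator front end, before any of those hacks, is just
the following (the backend hostname is a placeholder):

http_port 80
httpd_accel_host iis-backend.example.edu
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_uses_host_header off

The object-buffering and retry-on-failure behaviour described above is not
stock squid; that's the "couple of hacks" part.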

If you want to protect against network failures, multihomed connectivity
*at each site* is the way to go. Unless you have a large network, many
core routers won't propagate dual-homed routes (because of the filtering
of long prefixes) - so get your ISP to dual-home you to their network at
each site.

That protects you against transient link failures at each site, and the
multiple sites allow you to fail over. You'll need a hacked DNS setup
to dynamically add and remove virtual servers as each site comes online or
suffers a failure, and that means you'll want your TTL way down. Be sure
to have the DNS servers located far away from your hosting site.

The above will not get you your requirement to 'not have to reload'. To
do that you need another hack to the front end we've introduced - you
need to convert all dynamic content to fixed length content...

Here's why:
1) You cannot realistically force everyone to use HTTP/1.1.
2) HTTP/1.0 treats a TCP connection close as 'EOF' on dynamic content -
unless you have -only- static content, browsers WILL end up with corrupt
files from time to time.

So, the above covers:
- unrecoverable front end server failure mid-transmission (convert all
  responses to static length);
- back end server failure (front end reattempts from the failover server);
- simple router failures (dual homed network links);
- site failures (multiple sites, with DNS updates triggered on link down /
  heartbeat failure (link down is better - faster updates));
- round robin cache time issues (low DNS TTL).
There's more that can be done, but the above should keep you nice and
busy.

Lastly, let me add that in all the large scale sites I've been involved
with (usually web application hosting of some sort), the business folk
do not ACTUALLY want 100% 24/7 availability - which is what all your
requirements add up to - once the cost is detailed (with reasons).
Usually, 4 nines (99.99% uptime - about 1 hour of unscheduled downtime per
year) is more than enough to keep clients paying large $$$ happy. IIRC
the rule of thumb is: for each 9 you add, multiply the total project
cost by 10. And 4 nines is 'trivially' achievable from a single site
with the appropriate resources.

My suggestion for you:
A good ISP with an end to end redundant network (including standby
routers within each lan and redundant switches).
Dual homed connection to them, using two separate exchanges and/or
connection technologies - on two power grids... you may need to rent
facilities to get these two

Re: [SLUG] Mutt Question

2003-06-02 Thread Jamie Wilkinson
This one time, at band camp, Jeff Waugh wrote:
>  http://www.acoustics.hut.fi/~mara/mutt/profiles.html

Looks like a good idea to separate them into profiles, it'd save me a bit of
redundancy in my own .muttrc, but it still doesn't do it automatically! :-)

-- 
[EMAIL PROTECTED]   http://spacepants.org/jaq.gpg
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] home server on adsl; advice

2003-06-02 Thread Amanda Wynne
Yes !

I did some more searching on the web today, and figured that's pretty well 
what DMZ means.

Now, I should be able to set up Apache on a machine in the DMZ, serving up web 
pages to the Internet. And an FTP server on this same machine accessible only 
from the internal Lan to update those pages. Yes? 
With only one network card?

So, it looks kinda like this.

Lan 192.168.0.x (2 workstations, file server, laptop, laser printer)

Freesco bridge eth0 192.168.0.1  
  eth1 192.168.1.3

DMZ with Alcatel pro at 192.168.1.1 to TPG static IP ADSL
   Apache web server at 192.168.1.2
   FTP server at 192.168.1.2

Sorry if I'm boring people with this; I'm just trying to get it straight in my 
own head where I'm going with this.

Amanda


On Monday 02 Jun 2003 10:30 am, Phil Scarratt wrote:
> It's effectively - in security speak - a DMZ (demilitarized zone) no?
>
> Fil
>
> Minh Van Le wrote:
> > Correct me if I'm wrong, but having two firewalls is better than one.
> >
> > One for the DSL modem that is exposed to the internet, and then a
> > separate firewall for the internal lan that is only exposed to the DSL
> > firewall is better than firewalling everything from 1 box. It may delay a
> > compromise and make tracking logs easier.
> >
> >>-Original Message-
> >>From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> >>Behalf Of Chris D.
> >>Sent: Sunday, 1 June 2003 19:10
> >>To: [EMAIL PROTECTED]
> >>Subject: Re: [SLUG] home server on adsl; advice
> >>
> >>This one time, at band camp, Amanda Wynne wrote:
> >>>I'm looking at getting an Alcatel Pro. Currently running a P120
> >>
> >>with Freesco
> >>
> >>>via dialup.
> >>
> >>I'd recomend the DSL-300 from D-Link. There it maintains the
> >>authentication and you just plugin a cat5 crossover to your system.
> >>On the system it's connected to, you just use dhcp to configure the IP
> >>address on it.
> >>
> >>>What I'm thinking of doing, if it's possible (this was going to
> >>
> >>be my next
> >>
> >>>question) is change the Freesco box to bridge mode, feeding the
> >>
> >>alcatel, with
> >>
> >>>my web server (yet another box) hanging off the alcatel. That
> >>
> >>way my Lan is
> >>
> >>>effectively double-firewalled.
> >>
> >>'double-firewalled' is really not going to mean much.
> >>
> >>I refuse to say free-->SCO<-- is a good idea.
> >>
> >>Cheers,
> >>Chris
> >>--
> >>SLUG - Sydney Linux User's Group - http://slug.org.au/
> >>More Info: http://lists.slug.org.au/listinfo/slug

--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] home server on adsl; advice

2003-06-02 Thread Phil Scarratt
At a quick glance it looks OK, as long as the firewall on the public side of 
the web server doesn't allow FTP through, as you say. Effectively, for a DMZ 
you want a firewall in front of and behind the publicly accessible machine.

fil

Amanda Wynne wrote:
Yes !

I did some more searching on the web today, and figured that's pretty well 
what DMZ means.

Now, I should be able to set up Apache on a machine in the DMZ, serving up web 
pages to the Internet. And an FTP server on this same machine accessible only 
from the internal Lan to update those pages. Yes? 
With only one network card?

So, it looks kinda like this.

Lan 192.168.0.x (2 workstations, file server, laptop, laser printer)

Freesco bridge eth0 192.168.0.1  
  eth1 192.168.1.3

DMZ with Alcatel pro at 192.168.1.1 to TPG static IP ADSL
   Apache web server at 192.168.1.2
   FTP server at 192.168.1.2
Sorry if I'm boring people with this, I'm just trying to get it straight in my 
own head where I'm  going with this.

Amanda

On Monday 02 Jun 2003 10:30 am, Phil Scarratt wrote:

It's effectively - in security speak - a DMZ (demilitarized zone) no?

Fil

Minh Van Le wrote:

Correct me if I'm wrong, but having two firewalls is better than one.

One for the DSL modem that is exposed to the internet, and then a
separate firewall for the internal lan that is only exposed to the DSL
firewall is better than firewalling everything from 1 box. It may delay a
compromise and make tracking logs easier.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Chris D.
Sent: Sunday, 1 June 2003 19:10
To: [EMAIL PROTECTED]
Subject: Re: [SLUG] home server on adsl; advice
This one time, at band camp, Amanda Wynne wrote:

I'm looking at getting an Alcatel Pro. Currently running a P120
with Freesco


via dialup.
I'd recomend the DSL-300 from D-Link. There it maintains the
authentication and you just plugin a cat5 crossover to your system.
On the system it's connected to, you just use dhcp to configure the IP
address on it.

What I'm thinking of doing, if it's possible (this was going to
be my next


question) is change the Freesco box to bridge mode, feeding the
alcatel, with


my web server (yet another box) hanging off the alcatel. That
way my Lan is


effectively double-firewalled.
'double-firewalled' is really not going to mean much.

I refuse to say free-->SCO<-- is a good idea.

Cheers,
Chris
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug




--
Phil Scarratt
Draxsen Technologies
IT Contractor/Consultant
0403 53 12 71
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [installation-dev] [Fwd: [SLUG] FIXED! OpenOffice-1.0.3 +Debian(woody) = Aborted]

2003-06-02 Thread Chris Halls
On Sun, Jun 01, 2003 at 07:27:13PM -0400, [EMAIL PROTECTED] wrote:
> You don't happen to have a source line I could feed apt to get OOo-1.1??
> I'm running a pretty pure Debian Woody system with only KDE/Qt and
> OpenOffice back ports the exceptions.

http://openoffice.debian.net/mirrors.html

1.1 is only available from the 'unstable' section so far - we still have
issues to resolve with build dependencies on Woody.

Chris


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Knoppix love/hate...

2003-06-02 Thread Sonia Hamilton
Thanks guys, I'll try those ideas (below). One thing I've found useful
is to force certain versions of packages to install. eg when you try to
install package foo, it'll give an error similar to:

package foo requires bar1.3, but bar1.2 is available

I then do 

apt-get install bar=1.2

(notice the equals) to force bar1.2 to install. This usually causes further
dependency problems, in which case you'll need to do the same again for
another package (very RedHat'ish ;-) ). Eventually you'll get to a
situation where you need to remove the current version of a package and
replace it with an older version.
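
Another way to keep a mixed system from dragging everything to unstable is
apt pinning in /etc/apt/preferences (a sketch only; the priorities are
arbitrary):

Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=unstable
Pin-Priority: 50

Then individual packages can be pulled from unstable only when asked for,
e.g. "apt-get -t unstable install foo".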

--
Sonia.

* David Kempe  wrote:
> yeah I have had a similar experience.
> what sources.list are you using? I think it helps to trim it as much as
> possible and install only packages with as few dependencies as possible.
> For example - it survives getting evolution, but doesn't survive getting
> other gnome type packages - I have forgotten the one that broke my other
> knoppix.
> The other main knoppix user in the office has had a similar experience.
> We still have a copy of the version that had evolution included - it has
> been removed from recent builds.
> 
> dave
> 

* David Kempe  wrote:
> also,
> this might be useful
> 
> HOWTO upgrade to debian unstable
> 
> http://knoppix.net/forum/viewtopic.php?t=2251
> 
> dave
> 

* "Chris D."  wrote:
> This one time, at band camp, Sonia Hamilton wrote:
> >I like having all the latest debian
> >features in Knoppix, but being able to install *anything* at all would
> >be really nice.
> 
> Sounds like your after Debian unstable
> 
> Cheers,
> Chris
> -- 
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug

* Shayne O'Neill  wrote:
> Please note: this list is archived and searchable via the web.
> 
> 
> I suggest doing the update/upgrade tango on the system first. This will
> untie alot of dependency whackyness. That said, there are problems with
> knoppix (particularly in regards to borked kde3lib deps) that do require
> some majikal guessing surgery.
> 
> I *STRONGLY* recomend learning to use 'aptitude' (if it wont apt, get it
> via tarballs). That has some damn fine tools for analysing the situation
> when apt gets it knickers in a knot.
> 
> Shayne.
> 
> 
> "Must not Sleep! Must warn others!"
> -Aesop.
> Shayne O'Neill. Indymedia. Fun.
> http://www.perthimc.asn.au
> 
> On Fri, 30 May 2003, Sonia Hamilton wrote:
> 
> > Please note: this list is archived and searchable via the web.
> >
> > Any of you Knoppix users out there done an apt-get install of anything,
> > and had it totally get in a knot? I like having all the latest debian
> > features in Knoppix, but being able to install *anything* at all would
> > be really nice.
> >
> > Any hints?
> >
> > I'm currently learning more about apt than I ever wanted to know...
> >
> > --
> > SoniaToday's quote from the Jargon File 
> >
> > :MOTOS: /moh-tohs/ n. [acronym from the 1970 U.S. census forms via
> >Usenet: Member Of The Opposite Sex] A potential or (less often) actual
> >sex partner. See {MOTAS}, {MOTSS}, {SO}. Less common than MOTSS or
> >{MOTAS}, which has largely displaced it.
> >
> > ___
> > Catgeek mailing list
> > [EMAIL PROTECTED]
> > http://lists.cat.org.au/cgi-bin/mailman/listinfo/catgeek
> >
> 
> ___
> Catgeek mailing list
> [EMAIL PROTECTED]
> http://lists.cat.org.au/cgi-bin/mailman/listinfo/catgeek
> 


--
Sonia

Today's quote from the Jargon File 

:fontology: n. [XEROX PARC] The body of knowledge dealing with the 
   construction and use of new fonts (e.g., for window systems and 
   typesetting software). It has been said that fontology recapitulates 
   file-ogeny. 

   ... 
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Knoppix love/hate...

2003-06-02 Thread Jeff Waugh


> apt-get install bar=1.2
> 
> (notice equals) to force bar1.2 to install. This usually causes further
> dependency problems, in which case you'll need to do the same again for
> another package (very RedHat'ish ;-) ). Eventually you'll get to a
> situation we're you need to remove the current version of a package, and
> replace it with an older version.

Unfortunately, this is a problem you're only likely to get if you're
installing with Knoppix, or have a system that is half stable and half
unstable. A client had this problem on a system she half upgraded to
unstable recently; she ended up downgrading everything she could to their
stable versions. Unless you completely upgrade to unstable, that's really
the only way to deal with it sanely.

- Jeff

-- 
GU4DEC: June 16th-18th in Dublin, Ireland http://www.guadec.org/
 
   "We are peaking sexually when they are peaking. And two peaks makes a
hell of a good mount." - SMH
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] IBM Z-Series Linux

2003-06-02 Thread Des Wass
I'm not sure if this is the right forum or not...

Does anyone out there have any technical experience with IBM's Z-Series
Linux on Mainframes?

I believe this is an IBM modification of SuSE Linux. I have a friend who
programs mainframes and has some particular questions on running Linux
in multiple Virtual Machines accessing DB2 databases and a whole bunch
of other stuff.

They are only after some technical details in Q&A style at this stage -
I can't say that there will be financial kickbacks at this point. I am
asking about some consulting dollars as we speak.

If you can offer some friendly technical answers to the questions they
have, it'd be great if you can contact me.

Thanks,

--
 |Lanrex Computer Systems Pty Ltd
Desmond Wass |http://www.lanrex.com.au
0411 056 027 |Phone: +61 2 9416 1100
 |Fax: +61 2 9416 9633
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] home server on adsl; advice

2003-06-02 Thread Chris D.
This one time, Amanda Wynne wrote:
>Now, I should be able to set up Apache on a machine in the DMZ, serving up web 
>pages to the Internet. And an FTP server on this same machine accessible only 
>from the internal Lan to update those pages. Yes? 
>With only one network card?
>
>So, it looks kinda like this.
>
>Lan 192.168.0.x (2 workstations, file server, laptop, laser printer)
>
>Freesco bridge eth0 192.168.0.1  
>  eth1 192.168.1.3
>
>DMZ with Alcatel pro at 192.168.1.1 to TPG static IP ADSL
>   Apache web server at 192.168.1.2
>   FTP server at 192.168.1.2

So what you're doing is something like this

 ________________
|   ADSL Router  |
 ----------------
        |
 __________________
| FreeSCO Firewall |
 ------------------
        |         _______________
        |--------| Webserver Box |
        |         ---------------
        |
  ( Rest of LAN )

Right?

If so, on the FreeSCO firewall, you will want to port forward port 80 to
your webserver box.
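
(For the record, FreeSCO has its own port-forwarding setup; on an
iptables-based box the equivalent would be roughly the following, with the
external interface name as a placeholder:)

EXT_IF=eth1                         # internet-facing interface
iptables -t nat -A PREROUTING -i $EXT_IF -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.2:80
iptables -A FORWARD -i $EXT_IF -p tcp -d 192.168.1.2 --dport 80 -j ACCEPT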

- Chris
[EMAIL PROTECTED]
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] home server on adsl; advice

2003-06-02 Thread Phil Scarratt


Chris D. wrote:
This one time, Amanda Wynne wrote:

Now, I should be able to set up Apache on a machine in the DMZ, serving up web 
pages to the Internet. And an FTP server on this same machine accessible only 

from the internal Lan to update those pages. Yes? 

With only one network card?

So, it looks kinda like this.

Lan 192.168.0.x (2 workstations, file server, laptop, laser printer)

Freesco bridge eth0 192.168.0.1  
eth1 192.168.1.3

DMZ with Alcatel pro at 192.168.1.1 to TPG static IP ADSL
 Apache web server at 192.168.1.2
 FTP server at 192.168.1.2


So what you're doing is something like this

 ________________
|   ADSL Router  |
 ----------------
        |
 __________________
| FreeSCO Firewall |
 ------------------
        |         _______________
        |--------| Webserver Box |
        |         ---------------
        |
  ( Rest of LAN )

Right?
I thought it was something more like this...

 ________________
|   ADSL Router  |
 ----------------
        |
 _______________
| WebServer Box |
 ---------------
        |
 __________________
| FreeSCO Firewall |
 ------------------
        |
 _______________
| Rest of LAN   |
 ---------------

In which case, the comment still stands, but for the Alcatel Pro.

Fil

--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] SpamAssassin further question

2003-06-02 Thread Simon Bryan
Hi all,
I have SA happily running on both my mailservers. At first I just had it tag
the mail; now I want to automatically dump it in the user's Trash (the next
step will be to not deliver it to the user at all - we are a school).

On one system I changed spamassassin-spamc.rc to:
:0fw
| /usr/bin/spamc
:0:
* ^X-Spam-Status: Yes
~/INBOX.Trash

which seems to be working - without the ~ it gets put in /var/spool/mqueue/INBOX.trash

On the second system where the mail is in ~/mail and the Trash is ~/mail/Trash I put:
:0fw
| /usr/bin/spamc
:0:
* ^X-Spam-Status: Yes
~/mail/Trash

which should work; however, SA is happily ignoring that completely and still
just tagging the mail and putting it back in the user's INBOX. Yes, I have
stopped and started the spamd daemon and am currently running with -x. I
searched through all the possible directories listed in the INSTALL file as
to where there might be a second set of configuration files, but could not
find any except those in /etc/mail/spamassassin.

For the completeness of the record /etc/procmailrc is:
INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc

All files in /etc/mail/spamassassin are owned by root and in the root group, however
they have been chmod to 777
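
One variant worth testing (an assumption, not a confirmed fix: procmail may
not expand "~" here, whereas $HOME is set per recipient even from
/etc/procmailrc):

:0fw
| /usr/bin/spamc
:0:
* ^X-Spam-Status: Yes
$HOME/mail/Trash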


Simon Bryan
IT Manager
OLMC Parramatta
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] OT: Linux Workshop

2003-06-02 Thread Craig Warner
Come and help get Linux into regional New South Wales by getting involved in the 
Annual General Meeting of Computerbank NSW
and Linux workshop in Armidale this long weekend. 

AGM at 2pm Saturday
Workshop on Sunday

The event is being supported by Computerbank New England. 

Computerbank New England is in need of technical skills transfer as they are about to 
start a project to network the local Aboriginal School.


Also, SLUGgers are encouraged to nominate for the following positions on the 
Committee of Computerbank NSW:
 
3 ordinary members
President
Vice-President
Treasurer
Secretary
 
Please either email nominations to the [EMAIL PROTECTED] list and/or post to PO Box 
380 Surry Hills NSW 2010 

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] home server on adsl; advice

2003-06-02 Thread Amanda Wynne
I think this is a closer stick drawing:

 ________________
|   ADSL Router  |
 ----------------
        |
        |         _______________
        |--------| WebServer Box |
        |         ---------------
        |
 __________________
| FreeSCO Firewall |
 ------------------
        |
 _______________
| Rest of LAN   |
 ---------------

> In which case, the comment still stands but for Alcatel Pro.
>
> Fil

--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Mutt Question

2003-06-02 Thread Andrew Shipton
On Mon, Jun 02, 2003 at 05:32:19PM +1000, Jeff Waugh wrote:
> 
> 
> > Umm, that's for displaying names instead of email addresses in the index,
> > nothing to do with from addresses in a reply.
> 
> Yeah (see other mail), but I'm sure there's a simple setting to do it. Done
> it before, I'm sure. Can't find/remember the setting. Gar.

I have:

# Reply from the address it came in to
set reverse_name

in my .muttrc, which I'm pretty sure does it.
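
That is, roughly this (the addresses are placeholders; note that in older
mutts alternates is a variable rather than a command):

# the addresses mail can arrive at
set alternates="dan@example.org|dan@example.net"
# build the reply From: from whichever of those matched
set reverse_name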

-- 
Andrew Shipton 
"It is inhumane, in my opinion, to force people who have a genuine medical
need for coffee to wait in line behind people who apparently view it as
some kind of recreational activity."-- Dave Barry
-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] Re: SMTP AUTH

2003-06-02 Thread Oscar Plameras
From: "Sychev Maxim" <[EMAIL PROTECTED]>

> > > I use postfix 2.0.10, cyrus-sasl 2.1.13
> > > I set up SMTP AUTH using saslauthd -a pam. Everything works fine,
except
> a
> > > bothering warning in syslog stating that database file
/var/sasl/sasldb
> > > can not be found.
> >
> > Your sasl library tries first the available auxprop methods (sasldb is
one
> > of them) before using saslauthd.
>
> And how to tell it to use saslauthd only?
>
> Authentification in Cyrus-imap 2.1.13 with the same saslauthd -a pam does
> not produce this type of warnings.
>


With your command,

# saslauthd -a pam

you have already told the client application (your postfix) not
to use the sasldb database.

Incidentally, shouldn't your sasldb database be in

/etc/sasldb2 ?
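
If the aim is to use saslauthd only (and silence the sasldb lookup), the
usual place to say so is the application's SASL config file, commonly
/usr/lib/sasl2/smtpd.conf (path varies by distribution; the mechanism list
below is just an example):

pwcheck_method: saslauthd
mech_list: PLAIN LOGIN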


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug