Re: [SLUG] X-MailScanner part of virus ?

2003-08-25 Thread Andrew McNaughton
On Mon, 25 Aug 2003, Voytek Eymont wrote:

> I'm getting literally 100's of mail mssg, all about 100k size, with about 7
> or so different subjects line, I guess these are the blaster or whatever
> 'worms' ?

[...]

> where is the: 'X-MailScanner: Found to be clean ' coming from ?
>
> is that part of the 'virus' ?

Looks that way to me.  I've got it in the procmail rule I added for this
virus, and so far nothing's come through without it.  I'm currently
throwing away about 12MB/hour of this stuff, which is down from about
30MB/hour a few days ago.
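For reference, the header test itself is trivial. Here is the same check as a shell sketch (the header text is taken from the messages above; the sample message body is made up, and a real procmail rule would combine this with size and subject tests):

```shell
# Simulate the header check procmail performs on each incoming message.
msg='Subject: Your details
X-MailScanner: Found to be clean

see the attached file for details'

if printf '%s\n' "$msg" | grep -q '^X-MailScanner: Found to be clean'; then
  echo "discard"
else
  echo "deliver"
fi
```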

Andrew


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] X min. h/ware specs, was: simple graphic utility ?

2003-08-26 Thread Andrew McNaughton
On Tue, 26 Aug 2003, Voytek Eymont wrote:

> what's the minimum RAM and CPU I'd need ?
>
> I've tried with PII-400 and 256MB, and, the GUI got very gluey.
> setting KDE to 'least details' didn't seem to make much difference.
> is there anything else that can be optimized for w/s use ?

What's your memory usage like?  Is swap being used much?  I generally go
for 512M RAM with KDE.  Dunno that that's a 'minimum', but it's about what
I find comfortable, and RAM is cheap.

Andrew




Re: [SLUG] Ping me please!

2003-08-26 Thread Andrew McNaughton

Note that it's probably not a good idea to block ICMP source quench
packets.

Andrew McNaughton


On Tue, 26 Aug 2003, Adam Hewitt wrote:

> Date: Tue, 26 Aug 2003 14:30:21 +0800
> From: Adam Hewitt <[EMAIL PROTECTED]>
> To: David Fisher <[EMAIL PROTECTED]>
> Cc: SLUG List <[EMAIL PROTECTED]>
> Subject: Re: [SLUG] Ping me please!
>
> seems to be working
>
> #ping 202.12.88.42
> PING 202.12.88.42 (202.12.88.42) 56(84) bytes of data.
>
> --- 202.12.88.42 ping statistics ---
> 3 packets transmitted, 0 received, 100% packet loss, time 1999ms
>
> [EMAIL PROTECTED]:~$ ping 202.12.88.106
> PING 202.12.88.106 (202.12.88.106) 56(84) bytes of data.
>
> --- 202.12.88.106 ping statistics ---
> 2 packets transmitted, 0 received, 100% packet loss, time 999ms
>
>
>
>
> On Tue, 2003-08-26 at 14:26, David Fisher wrote:
> > Would some kind person please try pinging the addresses 202.12.88.42 or
> > 202.12.88.106 and let me know the results, please?
> >
> > I need to test the ICMP block on my router from external ping traffic.
> >
> > --
> > David
> >
> > Quidquid latine dictum sit, altum sonatur.
>
>



Re: [SLUG] neat tricks used for the purposes of evil

2003-08-26 Thread Andrew McNaughton
On Tue, 26 Aug 2003, Del wrote:

> (Seriously, through, does anyone actually ever use
> mod_proxy in apache?).

Of course.  It's vastly more versatile than squid, and sometimes that's
what you need.  In particular it's commonly used in combination with
mod_rewrite and mod_perl to make a lightweight front-end server which
handles all the image requests, so the hulking mod_perl processes don't sit
around waiting to serve images to slow modem users.

Apache with mod_perl works pretty well as a highly configurable spooler,
alongside serving simple requests, but it's pretty poor as a caching
proxy for web surfing.  For that you use squid.
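A minimal front end of that shape might look like this (a sketch, not a tested config; the backend port and image extensions are assumptions):

```apache
# Serve images directly from the lightweight front end;
# proxy everything else to the mod_perl backend via mod_rewrite's [P] flag.
RewriteEngine On
RewriteCond %{REQUEST_URI} !\.(gif|jpe?g|png)$
RewriteRule ^/(.*)$ http://localhost:8080/$1 [P]
ProxyPassReverse / http://localhost:8080/
```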

Andrew





Re: [SLUG] /bin/bash: bad interpreter: Permission denied

2003-08-27 Thread Andrew McNaughton
On Wed, 27 Aug 2003 [EMAIL PROTECTED] wrote:

> Voytek Eymont <[EMAIL PROTECTED]> writes:
>
> > what did i stuff up..?
> >
> > # ./backmysql.sh
> > bash: ./backmysql.sh: /bin/bash: bad interpreter: Permission denied
>
> dos2unix backmysql.sh

Also, I recommend you make a habit of using #!/bin/sh - it may not matter
for you just now, but it's important for compatibility across different
unix systems.

* On at least some linux distributions, /bin/sh is (somewhat peculiarly)
provided as a link to bash, but on many systems sh is a much lighter
executable without all the stuff for interactive use.

* Most non-linux systems don't have a file at /bin/bash.  /bin/sh will
always be there.
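Two quick fixes in one sketch: strip the DOS carriage returns (which is all dos2unix does, so `tr` works if it isn't installed), and use the portable shebang. The file name is from the thread; its contents here are a stand-in:

```shell
# Recreate the failure: a script saved with DOS (CRLF) line endings.
printf '#!/bin/sh\r\necho "backup would run here"\r\n' > /tmp/backmysql.sh
chmod +x /tmp/backmysql.sh

# Strip the carriage returns -- equivalent to `dos2unix`:
tr -d '\r' < /tmp/backmysql.sh > /tmp/backmysql.fixed.sh
chmod +x /tmp/backmysql.fixed.sh

/tmp/backmysql.fixed.sh    # runs now: the interpreter path has no trailing \r
```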

Andrew





Re: [SLUG] Ping me please!

2003-08-28 Thread Andrew McNaughton
On Thu, 28 Aug 2003, Glen Turner wrote:

> Fellas, how about using rate limiting.  Linux has marvellous
> QoS features, enough to allow a few ICMP ECHOs for fault
> diagnosis but to deny a ping flood.
>
>  > Note that its probably not a good idea to block ICMP source quench
>  > packets.
>
> Nah, block those suckers. Source Quench is deprecated.

I stand corrected.


> The list is
>
>Block
>  Obsolete
>Source Quench
>Information Request/Reply
>Datagram Conversion
>  Shouldn't cross network boundary
>Address Mask Request/Reply
>Redirect
>Domain Name
>Router Advertisment/Selection
>Required for operation (rate limit these to, say, 10% of bandwidth)
>  Destination Unreachable
>  Time Exceeded
>  Security Failure
>  Parameter Problem
>Required for diagnosis (rate limit these to, say, 1% of bandwidth)
>  Echo Request/Reply
>  Timestamp Request/Reply
>
> Regards,
> Glen


Cheers for the list

Andrew




Re: [SLUG] targeted virus or paranoia central ?

2003-08-29 Thread Andrew McNaughton
On Fri, 28 Aug 2003, Bret Comstock Waldow wrote:

> If there's a definitive way to be sure of the origin of an email, I'd
> like to know that's so, and how to determine it.

Try a little test.

Mail yourself an absolutely minimal message by doing an smtp session
manually and see what arrives.  eg:


[EMAIL PROTECTED] telnet a2.scoop.co.nz 25
Trying 203.96.152.68...
Connected to a2.scoop.co.nz.
Escape character is '^]'.
220 a2.scoop.co.nz ESMTP Sendmail; Fri, 29 Aug 2003 12:11:54 +1200 (NZST)
helo foobar
250 a2.scoop.co.nz Hello eth1383.nsw.adsl.internode.on.net [150.101.203.102], pleased 
to meet you
mail from: <andrew>
553 5.5.4 ... Domain name required for sender address andrew
mail from: <[EMAIL PROTECTED]>
250 2.1.0 <[EMAIL PROTECTED]>... Sender ok
rcpt to: <[EMAIL PROTECTED]>
250 2.1.5 <[EMAIL PROTECTED]>... Recipient ok
data
354 Enter mail, end with "." on a line by itself
.
250 2.0.0 h7T0BsgV076791 Message accepted for delivery
quit
221 2.0.0 a2.scoop.co.nz closing connection
Connection closed by foreign host.



I then receive the following.  Exactly what you receive will depend
somewhat on which mail software you run.



Return-Path: <[EMAIL PROTECTED]>
Received: from foobar (eth1383.nsw.adsl.internode.on.net [150.101.203.102])
by a2.scoop.co.nz (8.12.9/8.12.9) with SMTP id h7T0BsgV076791
for <[EMAIL PROTECTED]>; Fri, 29 Aug 2003 12:12:30 +1200 (NZST)
(envelope-from [EMAIL PROTECTED])
Date: Fri, 29 Aug 2003 12:11:54 +1200 (NZST)
From: [EMAIL PROTECTED]
Message-Id: <[EMAIL PROTECTED]>
To: undisclosed-recipients:;
X-Loop: [EMAIL PROTECTED]
X-Spam: unknown; 0.00; foobar:01 example:12 com:30
X-Bogosity: No, tests=bogofilter, spamicity=0.025957, version=0.13.7.2
X-DCC-SdV-Metrics: a2.scoop.co.nz 1179; Body=0



Looking at the Received header (the top one if there's more than one), you
can tell which machine delivered it to your server (150.101.203.102).
The name it reports for itself (foobar) might as well not be displayed,
and the name found by DNS lookup (eth1383.nsw.adsl.internode.on.net) may
not be reliable if the spammer has control over the appropriate DNS PTR
record.
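That reverse lookup is easy to check by hand. A sketch using dig (the IP is the one from the session above; `-x` does the PTR lookup and `+short` trims the output):

```shell
ip=150.101.203.102
if command -v dig >/dev/null 2>&1; then
  # PTR lookup; empty if the owner published no reverse record
  name=$(dig +short -x "$ip")
  echo "PTR: ${name:-no answer}"
  # forward-confirm: does the claimed name resolve back to the IP?
  { [ -n "$name" ] && dig +short "$name"; } || true
else
  echo "PTR: dig not installed"
fi
```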

The Date, From, To and Message-ID headers here have been added by my
system, but if they were present in the original, then they would have
been passed through unmodified.  They should not be relied upon.
Message-ID used to be a surprisingly good way to catch spammers out, but
that was a long time ago now.

All those X-* headers are added by my procmail rules or things invoked from
there.  Everything else is generated by my mail daemon based on the
limited info it received from the SMTP session.

This is the most important bit: *any* other header that might appear in
another received message was part of the body of the delivered message and
cannot be trusted.  It might be that the message has been relayed through
a basically trustworthy server whose headers you can trust, but then again
those headers might be spoofed.

You really don't have much you can rely on besides the IP of the machine
(from the Received header) which sent the email to your server.  In the
case of Sobig.F, however, this is the IP of the infected machine.  That's
good information, but you still don't have a contact address for the user.
Supposing you want to chase this up, the only thing you can really do is
to track down the owner of that block of IP addresses and ask them to pass
on the message.  They'll need the IP and the time when it happened (for
dynamic IPs).  They probably won't bother with it unless you send full
headers, and even then they get so many of these that they may not bother
anyway.  Don't expect them to tell you what they do or don't do.

Andrew McNaughton






Re: [SLUG] Perl5.8 installed;Apt preconfigures, but fails to install.

2003-08-29 Thread Andrew McNaughton
On Tue, 26 Aug 2003, Angus Lees wrote:

> At Mon, 25 Aug 2003 22:52:32 +1200, Adam Bogacki wrote:
> > In sum, I'm starting to contemplate reinstalling Debian, something I
> > am prepared to go to lengths to avoid because I would lose a lot of
> > hard-won material, because I'm living with an 80 yr old who is only
> > starting to get over her computer-phobia, and because I also have
> > other things to do.
>
> Where/what is this "hard-won material"?
>
> Just copy your important data off somewhere temporarily (maybe just
> another partition) and reinstall.
>
> The alternative is going to be *weeks* of hand hacking stuff.  If
> you'd simply reinstalled you'd be done by now.

Probably right.

It's a good idea to get a firm understanding of which parts of a system
you do and don't modify from what the system sets up for you.

Almost everything you want to keep should be in /etc and /home.  On some
systems you'd include /usr/local/etc in that.  It's not a bad idea to
make a note of any areas outside of that where you modify files.

I know that it's possible under debian to get a list of where files come
from (ie which package), and probably you can also get checksums.  I'm not
sure on the details though.

You also want the list of installed software, which you should normally be
able to get from apt.  Failing that, presumably you can get it by looking
directly at where apt stores its files.
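On Debian the pieces he's reaching for do exist: `dpkg -S` maps a file back to its owning package, and `dpkg --get-selections` dumps the installed-package list. A guarded sketch (the restore line is from memory, so treat it as an assumption):

```shell
if command -v dpkg >/dev/null 2>&1; then
  dpkg -S /bin/ls                           # which package owns /bin/ls
  dpkg --get-selections > /tmp/packages.txt # save the installed list
  echo "saved $(wc -l < /tmp/packages.txt) package selections"
  # after a reinstall of the same release, restore with:
  #   dpkg --set-selections < /tmp/packages.txt && apt-get dselect-upgrade
else
  echo "dpkg not available on this system"
fi
```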

The other thing is that, while it can be time-consuming, starting from
scratch is typically not at all disempowering once the new user gets going
on it.  All the stuff that was mystifying the first time round is at least
somewhat familiar on a repeat run, and provides a sense that they really
have got the hang of a lot of this stuff.





Re: [SLUG] POS Software for a Record Chain

2003-08-29 Thread Andrew McNaughton
On Fri, 29 Aug 2003, Del wrote:

> Kevin Fitzgerald wrote:
> > Hi All
> >
> > One of my clients is a Major Record Chain that is looking at moving from
> > their Windoze Environment to a Redhat onethe problem is their POS
> > Software (A product called Winstore for anyone who knows of it).
> >
> > Migrating all of the users to Linux machines is a No Brainer and will
> > work fine but Migrating their Sales software to a Linux alternative is a
> > little trikier.
> >
> > Does anyone know of a Retail operation like a record company using any
> > such software? Can anyone reccomend some good free POS software for
> > running on Redhat?
>
> I know that there is a large plumbing supplies operation in
> Christchurch, NZ that migrated entirely to Linux.  I think
> they were called "Mastertrade" or something, the setup was
> done by Fujitsu.

Mastertrade is a bit broader than plumbing supplies these days.  They
bought out the cabling supplies place I used to deal with in Wellington.

> You might try googling about a bit, and see what you come up
> with.

Or go to freshmeat.

http://freshmeat.net/browse/79/?topic_id=79

Andrew




Re: [SLUG] neat tricks used for the purposes of evil

2003-08-29 Thread Andrew McNaughton
On Sat, 30 Aug 2003, Robert Collins wrote:

> On Fri, 2003-08-29 at 17:54, Angus Lees wrote:
> > At Tue, 26 Aug 2003 23:40:40 +1000, Robert Collins wrote:
> > > I'm dubious about 'vastly more versatile' - that quite unsubstantiated.
> >
> > For example, you can't use random perl functions to control squid's
> > behaviour.  You can with apache+mod_perl, which in my book counts as
> > "vastly more versatility".
>
> Sure you can. You can use perl, python, shell, smalltalk  anything
> that can sit on an io loop.
>
> You don't have access to -all- of squids innards any more than you do in
> apache, but you most certainly can control the behaviour - access
> control, request rewriting, user identification, bandwidth allocation in
> perl.

One setup I did with apache, mod_proxy and mod_perl was a proxy which sat
in front of a web server and re-wrote the character set of the content
(including http requests) based on the value of a cookie.

Are such things possible with squid?

Andrew McNaughton



Re: [SLUG] neat tricks used for the purposes of evil

2003-09-01 Thread Andrew McNaughton

On Sat, 30 Aug 2003, Robert Collins wrote:

> On Sat, 2003-08-30 at 03:17, Andrew McNaughton wrote:
>
> > One setup I did with apache, mod_proxy and mod_perl was a proxy which sat
> > in front of a web server and re-wrote the character set of the content
> > (including http requests) based on the value of a cookie.
> >
> > Are such things possible with squid?
>
> No, this is currently outside the 'canned' solutions squid can do.
>
> However, in squid3 it is possible via a clientStream module - which
> could using embedded perl if you wanted it to.

Does squid 3 allow url rewriting scripts access to cookies?  That's also
been a problem.

Andrew




Re: html2text ? Re: [SLUG] How do you post a follow up

2003-09-01 Thread Andrew McNaughton

html and other non-plain-text formats should probably be rejected rather
than converted.

However, for palm-pilots, presumably this list isn't the only source of
html mail.  If you're in a position to do so, you should probably figure
out a way to produce palm-friendly versions of your emails at your inbox.

Are you downloading directly onto your palm pilot, or does the mail come
to some (preferably *nix) machine first?

Andrew


On Mon, 1 Sep 2003, Voytek wrote:

> ** Reply to note from "Paul Cameron Davies" <[EMAIL PROTECTED]> Sun, 31 Aug 2003 
> 23:49:37 +1000
>
> would that be a reasonable request, to ask for the SLUG mail list
> application to be set to convert html emails to text ? (perhaps attaching
> original html as an attachement ?)
>
> it's kinda hard reading html email on the Palm's little screen, with all
> the bumff taking most of the screen..
>
>
>  >  "-//W3C//DTD HTML 4.0 Transitional//EN"> > 
> >  > 6.00.2800.1170" nameNERATOR>
> > 
> > 
> > 
> > 
> >  
> >  > posted a couple of messages - in the way that the web site
> >  > href ailto:[EMAIL PROTECTED]">[EMAIL PROTECTED].
> >  
> >  > kind enough to post a follow up.  A follow up appears to be an answer to a
> > query.  These follow ups also appear in my personal mail as well as being
> > posted in the forum - very nice.
> >  
> >  > I haven't seen how to do it mentioned anywhere.  I have replied to the
> > personal emails of those who send a follow up which arrived in my personal
> > email.  My replies to these emails don't appear within the message
> > tree/discussion.
> >  
> >  > whereabouts on the web site that explains how to participate in a
> > discussion - as oppossed to kicking one off.
> >  
> >  
> > 
> >  
> >  
>
>
>
> Voytek Eymont
>



Re: html2text ? Re: [SLUG] How do you post a follow up

2003-09-01 Thread Andrew McNaughton
On Mon, 1 Sep 2003, Tony Green wrote:

> On Mon, 2003-09-01 at 19:37, Voytek wrote:
> > would that be a reasonable request, to ask for the SLUG mail list
> > application to be set to convert html emails to text ? (perhaps attaching
> > original html as an attachement ?)
> >
>
> Forcing people into your preferences isn't the linux way.  Instead you
> should use the tools available to change what you want changed for
> yourself.
>
> Just this in your .procmailrc file :
>
> :0
> * ^Content-Type: text/html
> {
>
> :0 bfW:
> | (echo "[html stripped]"; lynx -dump -force_html -stdin)
>
> :0 ahfw:
> | formail -i"Content-Type: text/plain"
> }

I guess that probably works in this case, since if the html is a mime part
then there's usually a text part already and the problem doesn't arise.

Can someone recommend a good tool for the more general case where there's
a need to process mail based on the headers and content of particular
attachments?  Procmail's a bit too oriented to the top-level mime object
sometimes.

Andrew




Re: [SLUG] Colo somewhere *other* then Sydney CBD/North Sydney.

2003-09-02 Thread Andrew McNaughton
On Tue, 2 Sep 2003, [EMAIL PROTECTED] wrote:

> > Anyone know of any companies which are reasonably priced with good
> > bandwidth - but not in Sydney CBD / North Sydney?  Paramatta would be good.
> Not sure how close it qualifies to North Sydny, but my employer (large,
> faceless corporate monolith-type) uses space in a nice big facility in
> West Pennant Hills. I can't recall offhand who owns/operates it, as my
> team never need to go onsite, but will follow up in the next day or so
> with details.
> I'll find out what I can about their pricing deals, too.

I'm helping out with a group which is providing mostly web and mail
services for a bunch of community and activist organisations.  They need
to move their main server and are looking at options of where to move to.

So, I'd also like to hear about your company's pricing, and whether
they're approachable about special deals for non-profit organisations.

Andrew McNaughton




Re: [SLUG] localhost /server-status access error

2003-09-03 Thread Andrew McNaughton
On Wed, 3 Sep 2003 [EMAIL PROTECTED] wrote:

> I'm trying to access /server-status with lynx on local host, get access
> error:

> [Wed Sep  3 09:46:25 2003] [error] [client 127.0.0.1] client denied by server co
> nfiguration: /home/sbt.net.au/www/server-status

> <Location /server-status>
> SetHandler server-status
> Order deny,allow
> Deny from all
> Allow from localhost
> </Location>

The httpd.conf stuff looks OK to me.  It's more or less identical to the
default configuration.

I wonder if there's something wrong with the way 'localhost' is set up?

Does your setup work if you put '127.0.0.1' in instead of 'localhost'?

What do you get if you type `dig -x 127.0.0.1` ?
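Concretely, the change being suggested is the quoted block with only the Allow line altered:

```apache
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
```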

Andrew McNaughton




Re: [SLUG] Perl configured, but Apt tells me it's not.

2003-08-25 Thread Andrew McNaughton

Been there before.  Debian's persistent failure to keep anything like up to
date with perl versions is my only serious gripe with Debian.  As you're
discovering, trying to get Debian to work with a later version of perl can
be rather trying.

In the past I've installed (from source) a more recent version of perl
than what Debian used.  The problems with apt-get et al. can be dealt
with by making sure that your newly installed perl knows to look in the
directories where debian's stuff is installed.  You should probably do
that as part of compiling perl, but you can often get by with using the
PERL5LIB environment variable.  The problem is that PERL5LIB puts
directories at the head of the list of places to search, where you really
want them at the tail of the list so they act as a fallback for stuff
where newer versions are not found.  So it's better to get it right when
compiling perl.
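You can see the ordering effect directly: directories named in PERL5LIB land at the front of @INC (the /opt/oldlibs path is made up purely for the demonstration):

```shell
# Print the first @INC entry with and without PERL5LIB set.
if command -v perl >/dev/null 2>&1; then
  perl -e 'print $INC[0], "\n"'                        # normal first entry
  PERL5LIB=/opt/oldlibs perl -e 'print $INC[0], "\n"'  # now /opt/oldlibs
fi
```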

You'll also find that there's an ongoing issue due to debian installing
libraries for the old perl.  Often it makes no difference (so long as perl
searches the old library directories), but where there are binary
components, you will need to install the perl modules yourself from CPAN.






On Sun, 24 Aug 2003, Adam Bogacki wrote:

> Hi,
>
>   I'm running in console mode, with 'apt-get update' & Mutt
> functional, after mucking up a file transfer between partitions.
>
> 'Tux:~# apt-get install -f -u dist-upgrade --fix-broken'
>
> aborted with
> "Debconf: Perl may be unconfigured. Can't locate strict.pm in @INC ..."

Looks like whichever perl you are running is not finding its library
directories.  strict is part of the base perl libraries, and if you're not
finding it then most of perl's functionality is probably failing.

`perl -V` should show you where perl is looking for its libraries.  Check
that the libraries are actually where perl thinks they should be.


> I managed to get Lynx working by moving a missing file into its correct
> place and downloaded and configured Perl 5.8.0 from www.cpan.org
> following
> the instructions in /home. The only thing I did not do was 'make
> distclean' & 'make realclean' as I had not built Perl before.
>
> However, when I now try
>
> 'Tux:~# apt-get install -f -u dist-upgrade --fix-broken'
>
> I get "Perl may be unconfigured, can't locate Debconf/Log.pm in @INC
> (@INC contains /usr/local/lib/perl5/5.8.0)."

@INC needs to contain a good deal more than that.  Usually there are five
or six directories.


> ... but I find
>
> ./usr/perl5/Debconf/Log.pm where I inspected it via vi !

Right, so '/usr/perl5' needs to be one of the entries in @INC, preferably
near the end of the list.

Take a look in your /usr/bin and /usr/local/bin directories, and you'll
probably find a few versions of perl.  eg you might find
/usr/bin/perl5.6.0 which debian installed, and /usr/local/bin/perl5.8.0 if
you installed 5.8.0 from source and didn't tell it to put perl somewhere
else.

run perl -V for each of these and see where debian's perl is
looking for libraries.  You'll probably have to re-compile perl5.8.0 and
tell it to add all of these to the list of directories to search for
libraries.

> How do I get
>
> 'Tux:~# apt-get install -f -u dist-upgrade --fix-broken'
>
> to work in order to get the system  working again ?

You might be able to just change the #! line in the script to use the old
perl. eg:

#!/usr/bin/perl5.6.0

Andrew McNaughton





Re: [SLUG] Fwd: CERT Advisory CA-2003-24 Buffer Management Vulnerability in OpenSSH

2003-09-16 Thread Andrew McNaughton
On Wed, 17 Sep 2003, David wrote:

> 2: how do I figure out the version number of ssh there doesn't seem to
> be a -v option of anything equally sensible :(

Telnet to the ssh port just like everyone else out there will be doing.

Andrew
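The daemon announces its version in the first line of the conversation, e.g. `SSH-2.0-OpenSSH_3.6.1p1`, so grabbing the banner is all it takes. A sketch (host and timeout are placeholders; nc stands in for telnet so it can be scripted):

```shell
host=localhost port=22
if command -v nc >/dev/null 2>&1; then
  # read only the first line the server sends, then disconnect
  banner=$(printf '' | nc -w 2 "$host" "$port" 2>/dev/null | head -n 1)
  echo "banner: ${banner:-no response}"
else
  echo "banner: nc not installed"
fi
```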

--
Andrew McNaughton   Currently in Boomer Bay, Tasmania


Re: [SLUG] Help Installing Perl GD Module

2002-11-12 Thread Andrew McNaughton

On Sun, 10 Nov 2002, Louis Selvon wrote:

> Hi Sluggers:
>
> I am trying to install the "GD" module on my server.
>
> I am encountering problems after running "make".
>
> The error I get is shown below
>
> +++
>
> LD_RUN_PATH="/usr/lib:/lib:/usr/X11R6/lib" gcc -o blib/arch/auto/GD/GD.so
> -shared -L/usr/local/lib
> GD.o-L/usr/lib/X11 -L/usr/X11R6/lib -L/usr/local/lib -lgd -lpng -lz
> -lfreetype -ljpeg -lm -lX11
> -lXpm
> /usr/bin/ld: cannot find -lfreetype
> collect2: ld returned 1 exit status
> make: *** [blib/arch/auto/GD/GD.so] Error 1

Sounds like the install process is failing to find the
freetype library.  Is it installed?
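A couple of quick ways to check (sketch; `ldconfig -p` lists the linker's cached libraries on Linux, and the /usr/lib glob is a fallback guess):

```shell
# Look for the freetype library the linker needs to resolve -lfreetype.
if command -v ldconfig >/dev/null 2>&1; then
  ldconfig -p 2>/dev/null | grep -i freetype || echo "freetype: not in linker cache"
else
  ls /usr/lib/libfreetype* 2>/dev/null || echo "freetype: not found in /usr/lib"
fi
```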

Andrew McNaughton



>
> +++
>
> I noticed that when running "Perl Makefile.PL" it said the following:
>
> "NOTICE: This module requires libgd 1.8.4 or higher (shared library version
> 4.X)."
>
> How do I find out what version of "libgd" I have installed ?
>
> Is the error I am getting above related to the version of "libgd" ?
>
> In "Makefile.PL" I got it to avoid the switch "-lfreetype", but after that it
> complains about "-lX11" .
>
> Any suggestions are welcome to help me install this module successfully.
>
> Louis.
>
> --
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug
>




Re: [Re: [Re: [SLUG] Help Installing Perl GD Module]]

2002-11-14 Thread Andrew McNaughton


On Thu, 14 Nov 2002, Louis Selvon wrote:

> Date: Thu, 14 Nov 2002 19:13:57 +1100
> From: Louis Selvon <[EMAIL PROTECTED]>
> To: Andrew McNaughton <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [Re: [Re: [SLUG] Help Installing Perl GD Module]]
>
> Andrew McNaughton <[EMAIL PROTECTED]> wrote:
>
> *** My reply.
>
> On Tue, 12 Nov 2002, Louis Selvon wrote:
>
> > On Sun, 10 Nov 2002, Louis Selvon wrote:
> >
> > > Hi Sluggers:
> > >
> > > I am trying to install the "GD" module on my server.
> > >
> > > I am encountering problems after running "make".
> > >
> > > The error I get is shown below
> > >
> > > +++
> > >
> > > LD_RUN_PATH="/usr/lib:/lib:/usr/X11R6/lib" gcc -o blib/arch/auto/GD/GD.so
> > > -shared -L/usr/local/lib
> > > GD.o-L/usr/lib/X11 -L/usr/X11R6/lib -L/usr/local/lib -lgd -lpng -lz
> > > -lfreetype -ljpeg -lm -lX11
> > > -lXpm
> > > /usr/bin/ld: cannot find -lfreetype
> > > collect2: ld returned 1 exit status
> > > make: *** [blib/arch/auto/GD/GD.so] Error 1
> >
> > >Sounds like the install process is failing to find the
> > >freetype library.  Is it installed?
> >
> > Hi Andrew. I just did a "locate freetype", a few things was returned. Is
> this
> > the one you mean though :
> >
> > /usr/lib/libfreetype.so.6
> > /usr/lib/libfreetype.so.6.0.1
>
> >That looks like libfreetype is at least partially there, but I suspect
> >your installation is incomplete.  At best it's out of date.
>
> >I have the following (freebsd, installed most of a year ago):
>
> >/usr/local/lib/libfreetype.a
> >/usr/local/lib/libfreetype.so -> libfreetype.so.9
> >/usr/local/lib/libfreetype.so.9
>
> >I would expect that libfreetype.so is the file (or link) that perl will
> >be looking for, and you don't have it.  You may be able to get away with
> >adding a symbolic link by that name to the library you have installed,
> >but unless there's a compelling reason not to mess with the existing
> >setup, I'd go for the latest version while you're at it.
>
> *** Where can I get the latest version , and if possible how do I
> install this one I get it ?

For the freetype home page, go to http://www.google.com/ and search for
'freetype'.  I could give you an URL, but it's better to point you at the
tools to find things yourself.

However, if your distribution provides a way to install freetype, you
should probably prefer that rather than working from the freetype standard
distribution.


> >
> > If not where can I get this freetype thing. If yes sounds like I need to
> > modify "Makefile.PL" to get it from "/usr/lib". Right !!
>
> >I wondered about that, but also /usr/lib may be implicitly included.  If
> >there is a problem of this sort you should look into your perl
> >installation's configuration rather than the Makefile.PL file.
>
> *** What is the Perl installation configuration ? Is it like the paths
> included in @INC etc ... ?
>
> > Also when I removed the "freetype" switch I got an error with "/usr/bin/ld:
> > cannot find -lX11"
>
> >Is X Windows installed? (eg it's not on the server which is the only
> >machine I have ready access to from this cafe).  I haven't used GD, but
> >I'd expect that it should be installable on a server with minimal X stuff.
>
> >If X is there (`locate libX11`), then that supports the idea that it's
> >locating the directories that's the problem.
>
> *** Server returned:
>
> [root@ensim admin]# locate libX11
> /usr/X11R6/lib/libX11.so.6
> /usr/X11R6/lib/libX11.so.6.2
> [root@ensim admin]#

This looks like the same problem.  No 'libX11.so'.  This might work, but
if your system doesn't take this approach it might create problems at a
later date:

ln -s libfreetype.so.6 /usr/lib/libfreetype.so
ln -s libX11.so.6 /usr/X11R6/lib/libX11.so


> So it's installed, but again is the version out of date ?
>
> > Also What is the command to find the current version of "libgd" that is
> > installed ?
>
> >`locate libgd` and look at the version numbers in the file?  Depending on
> >your distribution though it's probably better to look to the package
> >management system for answers if you can, because it may have
> >patches applied by your distribution's vendor or contributors.
>
> *** I found out the command to run and 

Re: [SLUG] Home made supercomputer

2002-11-17 Thread Andrew McNaughton


On Fri, 15 Nov 2002, Paul L Daniels wrote:

> > that has me buggered, finding a high compute load task to put such a
> > cluster to the test.
>
> Oh, I've got one, my personal [ but not yet complete ] virtual
> wind-tunnel ( thought it's more of a dust-tunnel ).  The last time I ran
> it was back in '95 with 50,000 particles on a 100MHz Pentium under OS/2
> with a Borland Pascal compiler ( or some Russian all-ASM version ).
>
> Needless to say, it ran _slow_, taking about 1 hour per 'tick'.  The
> most fun was exporting the data to PCX files, then using a FLI animator
> to put it all into a little 'movie'.
>
> ... trouble is now I'm having to rewrite it all in C ...

How far into this sort of programming have you gone?  I know a project
which might need someone with a combination of turbulence modelling and
parallel computing knowledge.

Andrew McNaughton

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] Transparent proxying w/ Squid and iptables.

2002-11-20 Thread Andrew McNaughton
On Wed, 20 Nov 2002 [EMAIL PROTECTED] wrote:

> G'day all...
>
> Just wanting to check something...
>
> When I manually set my browser to proxy via our squid cache, and I type in
> a mangled web address into my browser, I get a squid error message back.
>
> However, when I'm using transparent proxying via nat using iptables, and I
> type in a bogus web address into my browser, I get a browser  error
> message back.
>
> Actual web addresses appear in /var/log/squid/access.log and
> /var/log/squid/store.log but requests for the bogus ones do not.
>
> Is this normal behaviour? Is there a way to adjust this?

Sounds pretty normal.  Given a proxied mangled address, the browser is
successful in reaching the proxy and simply relays the message the proxy
sends.  You can probably mess with the error message the proxy sends, but
it's not going to hide the proxy - the TCP connection to the proxy is
successfully established, even if the browser thinks it's talking directly
to the web server.  It's not till after the TCP handshake is complete that
the proxy can discover that the URL is no good.

If the proxying is transparent, then DNS errors will be caught by the
browser before the TCP connection is established, so that will give you a
browser error.  If there's no answer from the server though, this will be
discovered by the proxy after it has caught the connection from the
browser, so that error will be generated by the proxy.

Andrew

------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] dynamic dns

2002-11-20 Thread Andrew McNaughton
On Wed, 20 Nov 2002, Kevin Waterson wrote:

> I wish to set up some dynamic dns.
> Could some kind soul please point me to a How-To.
> Google has failed me yet again.

Try this:

http://www.google.com/search?q=dynamic+dns+howto

Andrew


------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] dynamic dns

2002-11-20 Thread Andrew McNaughton
On Wed, 20 Nov 2002, Kevin Waterson wrote:

> On Wed, 20 Nov 2002 22:27:55 +1300 (NZDT)
> Andrew McNaughton <[EMAIL PROTECTED]> wrote:
>
> > http://www.google.com/search?q=dynamic+dns+howto
>
> duh! like I would not have tried that first.
> All that returns is other peoples services.

It did seem rather obvious, though lots of people would leave out the
'howto' term.  The results include plenty of howto docs on setting up
dynamic dns which is what you asked for.  If this isn't what you're after,
then perhaps you should be more specific about what you need.

Andrew

------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792




-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] html-php editor

2002-12-05 Thread Andrew McNaughton

I like nedit.  HTML syntax coloring comes as standard.  It wouldn't be too
hard to set up a few extra patterns to cover PHP syntax, and I'm sure
someone must have done it.

nedit is no emacs, but it does have macros, and it allows you to set up a
few hooks to command line stuff.  eg I have stuff set up for uploading via
scp, and for downloading a fresh copy.  Also I run perl filters over my
code a lot.  nedit has regex search and replace which is an essential in
my book.  Key strokes are very much what you'd normally get in a windows
or mac editor.  There's no unicode support which bugs me at times.

Andrew McNaughton

----------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792



On Fri, 6 Dec 2002, Phil Scarratt wrote:

> Date: Fri,  6 Dec 2002 12:37:52 +1100
> From: Phil Scarratt <[EMAIL PROTECTED]>
> To: Robert Maurency <[EMAIL PROTECTED]>
> Cc: "'[EMAIL PROTECTED]'" <[EMAIL PROTECTED]>
> Subject: Re: [SLUG] html-php editor
>
> I've used Kate (KDE) and GEdit (Gnome) before. Both have syntax highlighting (at
> least I think gedit does - been a while since I used it.)
>
> FIl
>
> Quoting Robert Maurency <[EMAIL PROTECTED]>:
>
> > Hi Sluggers
> >
> > Does anyone know of a good editor for html/php?
> > I am used to working with Homesite on Windows (for html, asp, php, js etc),
> > but I'm developing more and more on my Linux laptop installation.
> >
> > I know there are plenty of text editors around, but I'm after one with the
> > pretty (and essential if you do as many typos as I do!) colour coding.
> >
> > Any suggestions? Much appreciated.
> >
> > Robert Maurency
> > IT Department
> > Ascham School
> > +61 2 8356 7004
> > www.ascham.nsw.edu.au
> >
> > *
> > This mail, including any attached files may contain
> > confidential and privileged information for the sole
> > use of the intended recipient(s). Any review, use,
> > distribution or disclosure by others is strictly prohibited.
> > If you are not the intended receipient (or authorised to
> > receive information for the recipient), please contact
> > the sender by reply e-mail and delete all copies of
> > this message.
> > *
> > --
> > SLUG - Sydney Linux User's Group - http://slug.org.au/
> > More Info: http://lists.slug.org.au/listinfo/slug
> >
>
>
> -
> Phil Scarratt
> It Consultant
> 0403 531 271
>
> --
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug
>




-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] Question about text computer game from late seventies

2002-12-16 Thread Andrew McNaughton
On Mon, 16 Dec 2002, Jim Hague wrote:

> On 16-Dec-2002 John Clarke wrote:
> > On Mon, Dec 16, 2002 at 04:47:19PM +1100, Ron Daniel wrote:
> >> Does anybody remember the name of the game which people used to play on
> >> their mainframes at university in the late seventies where you explored
> >
> > Advent?  Also known as "Adventure" or "Colossal Cave".  "You are in a
> > maze of twisty little passages, all alike".
> >
> > Source should be available at:
> >
> > ftp://ftp.wustl.edu/doc/misc/if-archive/games/source/advent.tar.Z
>
> Debianites can apt-get install bsdgames and run /usr/games/adventure. This is
> the 'classic' 350 point version.

While present on BSD systems, it might interest people to know that this
game is much older than its BSD packaging: Will Crowther wrote the original
at BBN in the mid-1970s, and Don Woods expanded it at Stanford in 1977.

Andrew

--
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



RE: [Re: [SLUG] Need Help With LWP and newlines]

2003-01-02 Thread Andrew McNaughton
On Mon, 30 Dec 2002, LS wrote:

> This raises my next question, does LWP has a way to send data
> between text and html format, or this is not even related to
> LWP at all ?
>
> I don't want to have to write something to convert the text
> message in HTML format. There must be some Perl module that
> does this, or something else ?

There's a whole bunch of design decisions to be made in making a text to
html translator, so you probably want to look around a few different
options to find something that fits your requirements.  Try searching
google for 'perl "text to html"'.

> Basically after each paragraph I need this
>
> "CRLFText"
>
> etc

There are some very simple tricks that will do most of what you need with
a few s/// operations, but probably it's better to find something a bit
more developed and just plug it in.
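For instance, a first cut with sed (standing in for the equivalent Perl s///
operations; the sample text and the blank-line paragraph convention are
assumptions):

```shell
#!/bin/sh
# Escape the HTML metacharacters first (ampersand before the others, or the
# later substitutions would double-escape it), then turn each blank line
# into a paragraph break.
printf 'first paragraph\n\nsecond & third < >\n' |
  sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' -e 's/^$/<p>/'
```

A real converter would also need to handle links, wrapping and so on, which
is why a ready-made module is usually the better bet.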

Andrew

------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792




-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] sync filesystem on unconnected machines

2003-01-15 Thread Andrew McNaughton
On Wed, 15 Jan 2003, Colin Humphreys wrote:

> On Wed, Jan 15, 2003 at 05:10:16PM +1100, Matthew Dalton wrote:
> > You could use tar's 'only store files newer than DATE' option (-N) to do
> > this. See the tar manpage for details.
>
> That will probably do for now (and so easy too).  Note that it doesn't
> quite do what I want, in that deleted files on the source system will not
> be deleted on the remote system.

You're also quite dependent on file dates.  If for example someone uploads
a file from another machine and it retains its last-modified date from the
machine it was transferred from, then that file may not be picked up by
tar, so your target system won't receive it.

If you wind up rolling your own system with md5 sums, 'L5' might be a
useful component.  It's available from the COAST security archive.


ftp://coast.cs.purdue.edu/pub/tools/unix/sysutils/l5/L5.tgz

Abstract:
L5 simply walks down Unix or DOS filesystems, sort of like "ls -R"
or "find" would, generating listings of anything it finds there.
It tells you everything it can about a file's status, and adds on
the MD5 hash of it.  Its output is rather "numeric", but it is a
very simple format and is designed to be post-treated by scripts
that call L5.

If you run L5 on each system, then a `diff` of the output listings should
give you a complete listing of what files need to be transferred, deleted
or updated.

Andrew

--
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] Problems with virtual hosts on apache.

2003-02-17 Thread Andrew McNaughton

A couple of possibilities.  I'd probably go for the second one.

1/ Define domain3 as an alias using the 'ServerAlias' directive within the
virtual host definition

2/ use mod_rewrite to redirect requests for short domain names to the
longer name.  This might be simpler if you have a lot of similarly
structured domain abbreviations.  It would also encourage bookmarking and
sharing of the canonical form of the URL, which is generally a good idea.
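Hedged sketches of both options, using the hypothetical
domain3/domain3.hyne.com names from the question below; the DocumentRoot
path and exact flags are illustrative rather than tested config:

```apache
# Option 1: make the short name an alias of the existing virtual host.
<VirtualHost *:80>
    ServerName  domain3.hyne.com
    ServerAlias domain3
    DocumentRoot /www/domain3
</VirtualHost>

# Option 2: a separate virtual host that redirects the short name to the
# canonical FQDN (requires mod_rewrite to be loaded).
<VirtualHost *:80>
    ServerName  domain3
    RewriteEngine On
    RewriteRule ^(.*)$ http://domain3.hyne.com$1 [R=301,L]
</VirtualHost>
```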

Andrew McNaughton

------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792



On Tue, 18 Feb 2003, Matt Hyne wrote:

> Date: Tue, 18 Feb 2003 14:33:35 +1100
> From: Matt Hyne <[EMAIL PROTECTED]>
> To: slug <[EMAIL PROTECTED]>
> Subject: [SLUG] Problems with virtual hosts on apache.
>
>
> Folks, I have an apache problem that is probably quite simple to fix but
> I cannot find the solution.
>
> I have set up several virtual hosts on the box and these work fine,
> however I cannot get the correct virtualhost page if I drop the
> domainname from the URL.
>
> Eg (these are examples, so don't try accessing them):
>
> I have the following virtual domains:
>
> http://domain1.hyne.com
> http://domain2.hyne.com
> http://domain3.hyne.com
>
> Now, If I use the FQDN, I get the correct pages.  However if I just use
> http://domain2 or http://domain3 then I will always get the webpage for
> domain1.
>
> I've played around with the options and searched google to no avail.
>
> Anyone got an ideas ?
>
> Matt
>
> --
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug
>




-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug



Re: [SLUG] Case sensitivity

2003-02-25 Thread Andrew McNaughton
On Wed, 26 Feb 2003, Peter Vogel wrote:

> How do I configure Apache to not be sensitive to case of urls/filenames?
> Or is that a bad idea?

Doing this has drawbacks.  Your content might move through different
locations over its lifetime, and retrospectively fixing case issues is
more of a pain than getting things right the first time.  As well as
browsers, you may want to run various scripts and utilities over
your content and many of these will assume that case should match.  Also
if you start uploading different versions of your scripts with mis-matched
case in filenames, a case sensitive file system will allow the two to
co-exist.

Apache's mod_speling has a nice approach.  In the event that an URL is
slightly mis-spelt, including case errors, it issues a page with a notice
that the URL did not match exactly, and a list of likely alternatives.
In the event that there is only one alternative, it issues a redirect.
Personally I'd rather not have the redirect, just a page with the one
option, because that would prompt fixing of the link without leaving
users stranded in the meantime.

Andrew



------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Searching the slug archives

2003-03-03 Thread Andrew McNaughton

http://slug.org.au/archives.html

Andrew


On Tue, 4 Mar 2003, Bruce Badger wrote:

> Date: 04 Mar 2003 16:26:31 +1100
> From: Bruce Badger <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] Searching the slug archives
>
> Is there any way to search the slug archives.  A search tool, that is,
> rather than just poking thought the archive?
>
> Thanks
>
>
>
> --
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug
>

------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Help I'm MS bound and I'm feeling down

2003-03-04 Thread Andrew McNaughton
On Wed, 5 Mar 2003, Colin Humphreys wrote:

> Date: Wed, 5 Mar 2003 11:50:39 +1100
> From: Colin Humphreys <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] Help I'm MS bound and I'm feeling down
>
> On Wed, Mar 05, 2003 at 10:12:10AM +1000, [EMAIL PROTECTED] wrote:
> > Hi All,
> >
> > Has anyone been able to import  .PST files from Outlook to another email
> > client.
>
> I did it using an imap server. (Setup an imap server on your linux box,
> connect outlook to it, copy up all the mail, then the linux box can
> connect to the imap server also).

I'll second that.  I mucked around with other options for a while and ran
into too many glitches.  Outlook crashed on me several times before I had
it all done, but this IMAP approach was generally much easier.

I think that the University of Washington IMAP server, while it has
security issues (any logged-in user can read any file), probably does
allow you to just dump your mail straight into mbox files rather than
having to do a second IMAP transfer to get your files into a usable form
on your linux box.

Andrew


--
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Searching the slug archives

2003-03-05 Thread Andrew McNaughton
On Thu, 6 Feb 2003, Terry Collins wrote:

> [EMAIL PROTECTED] wrote:
> >
> > Erps... searching seems to be ok now...  
>
> I can confirm that the slug archive search results are sometimes total
> trash.

What goes wrong?  Should a different search tool be used, or is it an
issue with the way things are set up?

Andrew





----------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] BIG IMAGES

2003-03-13 Thread Andrew McNaughton
On Thu, 13 Mar 2003 [EMAIL PROTECTED] wrote:

> On 13 Mar 2003 18:44:49 +1100
> James Gregory <[EMAIL PROTECTED]> wrote:
> >
> > I have some images, they are gifs, and they're approximately 14000
> > pixels square. I'd really like to view them, and ideally perform
> > transformations such as scaling to a less insane size.
>
> I suspect the problem is the uncompressed size
> Assuming they're 24bit, that's 14000 x 14000 x 24/8 = 588,000,000 bytes
> i.e. ~ 588 Mb.  So you might need a Gig of RAM to work on them.
>
> A quick search for an image slicer only turned up windows shareware ...
> but surely there is something for Linux out there somewhere.


libungif, formerly known as giflib might be what you need.
ftp://ftp.ayamura.org/pub/graphics/ has copies of both of them, whereas
they seem to be gone from the primary site.  Could be patent problems?

There's a bunch of utilities in there which require limited memory.  eg
gifrsiz to resize (by simple deletion of pixels I think rather than
averaging over an area).  Also gifburst which will segment the image into
smaller ones.

Incidentally, the main LZW patent expires in June this year.

Andrew

--
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] BIG IMAGES

2003-03-13 Thread Andrew McNaughton
On Thu, 13 Mar 2003, Andrew Bennetts wrote:

> IIRC, gifs are 8 bit (or less).
>
> > i.e. ~ 588 Mb.  So you might need a Gig of RAM to work on them.
>
> So this would be ~196 Mb.  You might still need a lot of RAM, though :)

That's an 8-bit palette size, where each palette colour can take any 24-bit
colour value.

So yes a program could store only the 8 bits needed to identify which
colour from the palette is used for a given pixel, but many programs will
expand it out to 24 bits in memory.  There's also scratch memory used in
any transformation, which might be substantial as well.  It all depends on the
implementation.

A smart program could do a scaling operation without storing the
original image in uncompressed form at all, and only ever storing part of
the resulting image in uncompressed form.  It would cost a little in
performance, but potentially not very much.
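The thread's arithmetic is easy to sanity-check in the shell:

```shell
#!/bin/sh
# Uncompressed in-memory size of a 14000x14000 image:
px=$((14000 * 14000))
echo "8-bit palette indices: $px bytes"          # ~196 MB
echo "24-bit RGB expansion:  $((px * 3)) bytes"  # ~588 MB
```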

Andrew


------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] calling C libs from perl

2003-03-14 Thread Andrew McNaughton
On Fri, 14 Mar 2003, David Fitch wrote:

> Date: Fri, 14 Mar 2003 15:52:34 +1030
> From: David Fitch <[EMAIL PROTECTED]>
> To: "Broun, Bevan" <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SLUG] calling C libs from perl
>
> On Fri, Mar 14, 2003 at 04:14:28PM +1100, Broun, Bevan wrote:
> > I can tell you it's in chapter 18 of the "Advanced Perl Programming"
> > Oreilly book. There is some documentation at perl.com.au, "C and Perl" -
> > the first two look like putting perl in C and the next calling C from perl.
> >
> > It would seem that it's worth while buying the Perl CD bookshelf.
>
> ah thanks, got that book already (but only up to chapt 11 so far)
> so it was under my nose all the time!
> I'd seen that XS stuff but assumed it was for calling perl from
> other languages.

Besides using XS directly, you might want to consider using SWIG or h2xs
to generate the XS code, or perhaps you might want to use Inline.pm to
inline your C code into your perl.

All of these approaches can be quite simple when you're dealing with
simple data types at the interface, but get more involved where you need
to work with perl's data types.

Andrew McNaughton


--
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] TIP: Thesaurus for Linux

2003-04-03 Thread Andrew McNaughton
On Thu, 3 Apr 2003 [EMAIL PROTECTED] wrote:

> Just thought I'd mention that an excellent thesaurus program for Linux
> called aiksaurus (command line), with a very good GTK front end too.

Another excellent thesaurus is wordnet.  This is more oriented to
linguistic applications than to helping humans to write, but the results
are quite readable.  It can be operated via command line, API or various
web front ends.


A Web front end:
http://vancouver-webpages.com/wordnet/

A very funky but less functional front end:
http://www.visualthesaurus.com/

Home page:
http://www.cogsci.princeton.edu/~wn/



Andrew McNaughton




------
Andrew McNaughton   In Sydney and looking for work
[EMAIL PROTECTED]  http://staff.scoop.co.nz/andrew/cv.doc
Mobile: +61 422 753 792


-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


RE: [SLUG] WORD (.doc) to PDF under Linux

2003-06-05 Thread Andrew McNaughton

Is antiword any better?  I use it to look at the plain text, but haven't
used its postscript generation.


On Thu, 5 Jun 2003, Matt Hyne wrote:

> Date: Thu, 5 Jun 2003 18:16:19 +1000
> From: Matt Hyne <[EMAIL PROTECTED]>
> To: 'Jan Schmidt' <[EMAIL PROTECTED]>, 'slug' <[EMAIL PROTECTED]>
> Subject: RE: [SLUG] WORD (.doc) to PDF under Linux
>
>
> I've played around with wvPS (and then ps2pdf) but wvPS pretty much
> destroys the formatting of most of the word files.
>
> Matt
>
> [EMAIL PROTECTED] wrote:
>
> > 
> >
> >> Does anyone know of a reliable way to convert a directory containing
> MS
> >> Word files to PDF files under Linux.  There appears to be plenty of
> >> Windows tools but I cannot find many for Linux.
> >
> > You might care to try 'wvPDF' from the wv package
> > (http://wvware.sourceforge.net)
> >
> > It doesn't do very much by way of preserving formatting, but it might
> > suffice as a scriptable solution.
> >
> >> I want to write a script that will run a WORD->PDF conversion nightly
> so
> >> PDF files can be available from a website.
> >
> > for dude in *.doc; do
> >   OUTFILE=${dude/.doc/.pdf}
> >   wvPDF $dude $OUTFILE
> > done
> >
> > --
> > Jan Schmidt  [EMAIL PROTECTED]
> >
> > Homer: "No TV and No Beer make Homer something something" Marge: "Go
> Crazy?"
> > Homer: "Don't mind if I do! rrrarrgghar!"
>
> --
> SLUG - Sydney Linux User's Group - http://slug.org.au/
> More Info: http://lists.slug.org.au/listinfo/slug
>

--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Redundant Web Servers

2003-06-02 Thread Andrew McNaughton





On Mon, 2 Jun 2003, James Gregory wrote:

> > >> 3. There must be NO DISCERNABLE INTERRUPTION TO SERVICE when one
> > >> fails. Doing a "shift-reload" in the browser is NOT an option. It
> > >> must be TOTALLY TRANSPARENT.
> >
> > James> Wow. Well, point 3 makes it pretty hard. As I understand it,
> > James> that's an intentional design decision of tcp/ip -- if it were
> > James> easy to have another computer interrupt an existing tcp
> > James> connection and just take it over, then I'm sure it would be
> >
> > If you're only serving static content, that's not an issue:  HTTP
> > version 1 uses a new tcp/ip connexion for each request anyway,
> > With round-robin DNS you may end up with different images on the same
> > page being served from different servers anyway.
>
> Sure, that's a given. I thought the problem was that it had to happen
> without a reload - server crashing halfway through serving a particular
> html page. I considered 0 ttl dns as well, but it only works if you can
> afford reloads.

I suppose you might be able to hack something together with MIME's
multipart/x-mixed-replace in a proxy which monitored content length and
was ready to fetch a second MIME part where required.  It would be a bit
messy though, not necessarily compatible with all browsers, and the proxy
is still going to be a single point of failure.

Andrew McNaughton



--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] RE: IMAP and procmail (Regular Expression Question)

2003-06-05 Thread Andrew McNaughton
On Thu, 5 Jun 2003, Joel Heenan wrote:

> Date: Thu, 5 Jun 2003 16:13:59 +1000
> From: Joel Heenan <[EMAIL PROTECTED]>
> Reply-To: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: [SLUG] RE: IMAP and procmail (Regular Expression Question)
>
> > * ^((List-Id|X-(Mailing-)?List):(.*[<]\/[^>]*))
>
> My understanding of this regular expression. Indulge me:
>
> Message must begin with either List-Id or X-(Mailing-)List where the
> Mailing- part is optional.

Assuming we are ignoring everything before the ^, then so far so good.



> It must then be followed by a colon

Yep

>then anything up to a '>'

Not quite.  This will match zero or more characters which are not '>'
after the '<'.

> So help me out a bit, I copied the line from Slug into a file and egrep did
> not match slug's List-Id line!
>
> List-Id: Linux and Free Software Discussion 
>
> Why is the / in there? Why is the < in brackets what purpose does this
> serve?


The [<] bit looks redundant, but in some regex flavours a bare '\<'
matches the beginning of a word rather than a literal '<', so bracketing
it is the safe way to get the literal character.  The '\/' looks to be the
problem that's stumping you: it's procmail's match-extraction token, not
egrep syntax, so egrep hunts for a literal '/' that isn't in the header.
I'd probably insist on the closing '>' as well, but that may be
unnecessary pedantry.
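A quick demonstration (GNU egrep assumed; the angle-bracket address below
is a made-up stand-in for the real List-Id value):

```shell
#!/bin/sh
hdr='List-Id: Linux and Free Software Discussion <slug.slug.org.au>'
# procmail's \/ match-extraction token is not egrep syntax: egrep reads it
# as a literal '/', which the header doesn't contain, so nothing matches.
printf '%s\n' "$hdr" |
  egrep '^((List-Id|X-(Mailing-)?List):(.*[<]\/[^>]*))' || echo 'no match'
# Drop the procmail-only \/ and the same expression matches the header.
printf '%s\n' "$hdr" |
  egrep '^((List-Id|X-(Mailing-)?List):(.*[<][^>]*))' > /dev/null && echo 'match'
```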

Andrew



--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Help I just destroyed my Root filesystem

2003-06-12 Thread Andrew McNaughton
On Fri, 13 Jun 2003, Richard Heycock wrote:

> Hi,
>
> I'm new to this list and I need some help! I just ran grub-install on my
> root partition instead of my boot partition and it appears to have wrecked
> it.

Ouch.

> If I run file -s /dev/hda5 I now get 'x86 boot sector' instead of
> 'ReiserFS V3.6 block...' and I can no longer mount it.
> When the machine tries to boot it loads the kernel (from /dev/hda6) but
> when it comes to mount the root partition it kernel panics as it cannot
> mount the filesystem.
>
> I know at least some of the data is on the partition (`less /dev/hda5`).
> I'm guessing that grub-install has overwritten N number of bytes at the
> beginning of the partition but beyond that I'm at a complete loss at what
> to do.

You're probably not in too bad shape.  The boot sector is just a single
sector, and the rest of the drive should be untouched.  The trick is
recovering that first sector.

Your basic tool for moving stuff around is dd which you can use to copy
just the first sector from one file (ie probably a device file) to
another.
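A file-based sketch of the dd mechanics (on the real system the if=/of=
arguments would be device files such as /dev/hda5; note conv=notrunc,
without which dd truncates a regular output file after the copied sector):

```shell
#!/bin/sh
# Fake a healthy image (a 512-byte 'H' sector, then 512 'x' data bytes) and
# a damaged one whose first sector was clobbered but whose data is intact.
{ head -c 512 /dev/zero | tr '\0' 'H'; head -c 512 /dev/zero | tr '\0' 'x'; } > healthy.img
{ head -c 512 /dev/zero | tr '\0' 'B'; head -c 512 /dev/zero | tr '\0' 'x'; } > damaged.img
# Always back up the sector you are about to overwrite:
dd if=damaged.img of=sector0.bak bs=512 count=1 2>/dev/null
# Copy only the first 512-byte sector across, leaving the rest untouched:
dd if=healthy.img of=damaged.img bs=512 count=1 conv=notrunc 2>/dev/null
cmp -s healthy.img damaged.img && echo 'first sector restored'
```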

I don't know the details of ReiserFS.  There's a good chance though that
the first sector contains only really generic stuff, and/or there are
backups stored elsewhere on the device.  You might even find it's as easy
as copying the first sector (only) from a healthy ReiserFS file system to
your damaged one.  There's nothing on that sector you're trying to save,
so just give it a whirl, but DON'T MOUNT THE FILE SYSTEM IN ANYTHING OTHER
THAN READ-ONLY MODE until you're pretty sure it's OK, and then consider
copying the most important files out of the file system when it becomes
accessible.

If it's not so happily simple and the first sector contains the root of
your file system, device parameters or something similar, then the place
to get help is probably a developers list for the ReiserFS file system.

Andrew



--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] ssh question

2003-06-12 Thread Andrew McNaughton

If you really wanted to do it with passwords, you could wrap ssh with an
expect script.  It's a pretty standard example case for expect, so I'm
sure you'd find the script out there already.  From memory I think perl's
Expect.pm documentation deals with it a bit.
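Something like the following, as a sketch (host, user and password are
placeholders, and embedding a password in a script like this is usually
worse than just setting up keys):

```shell
# Hypothetical expect wrapper that answers ssh's password prompt.
# Requires expect(1) to be installed.
expect -c '
  spawn ssh -l admin 192.168.0.1
  expect "assword:"
  send "secret\r"
  interact
'
```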

Andrew




On Thu, 12 Jun 2003, Kevin Saenz wrote:

> I don't think there is a way except for creating keys and copying the
> public key to ~/.ssh directory and naming the key file as authorize_keys
> Then at least you could do commands like
> ssh -l admin 192.168.0.1 -X -C Eterm
>
> or run other jobs.
>
> > I am trying to ssh into a remote machine - what I want to do is log in
> > with a username and password supplied in the one command - is there a
> > way to this?
> >
> > #ssh -l [EMAIL PROTECTED] passwd test?
> >
> > HELP PLEASE!
>

--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] Help I just destroyed my Root filesystem

2003-06-12 Thread Andrew McNaughton
On Fri, 13 Jun 2003, Richard Heycock wrote:

> Date: Fri, 13 Jun 2003 16:14:13 +1000 (EST)
> From: Richard Heycock <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: Re: [SLUG] Help I just destroyed my Root filesystem
>
> > On Fri, 13 Jun 2003, Richard Heycock wrote:
> >
> >> Hi,
> >>
> >> I'm new to this list and I need some help! I just ran grub-install on
> >> my root partition instead of my boot partition and it appears to have
> >> wrecked it.
> >
> > Ouch.
> >
> >> If I run file -s /dev/hda5 I now get 'x86 boot sector' instead of
> >> 'ReiserFS V3.6 block...' and I can no longer mount it.
> >> When the machine tries to boot it loads the kernel (from /dev/hda6)
> >> but when it comes to mount the root partition it kernel panics as it
> >> cannot mount the filesystem.
> >>
> >> I know at least some of the data is on the partition (`less
> >> /dev/hda5`). I'm guessing that grub-install has overwritten N number
> >> of bytes at the beginning of the partition but beyond that I'm at a
> >> complete loss at what to do.
> >
> > You're probably not in too bad shape.  The boot sector is just a single
>
> Arr nice to hear some positive thoughts :-) I've only been met with stoney
> silence on other lists (debian-users and reiserfs), though I'm told there
> are these things called time zones...
>
> > sector, and the rest of the drive should be untouched.  The trick is
> > recovering that first sector.
>
> When you say sector do you mean the sector as in the disc drive or something
> else?

I mean the first chunk of space on the device.  ie the disk partition.
It's probably only the first 512 bytes which have been clobbered.


> >
> > Your basic tool for moving stuff around is dd which you can use to copy
> > just the first sector from one file (ie probably a device file) to
> > another.
> >
> > I don't know the details of ReiserFS.  There's a good chance though
> > that the first sector contains only really generic stuff, and/or there
> > are backups stored elsewhere on the device.  You might even find it's
> > as easy as copying the first sector (only) from a healthy ReiserFS file
> > system to your damaged one.  There's nothing on that sector you're
> > trying to save, so just give it a whirl, but DONT MOUNT THE FILE SYSTEM
> > IN OTHER THAN READ ONLY MODE until you're pretty sure its OK, and then
> > consider copying the most important files out of the file system when
> > it becomes accessible.
>
> The first thing I did was to dd the entire partition on to another partition
> so I've got some scope for experiment.
>
> I've dd'ed the first 512 bytes from a healthy reiserfs partition but without
> success. I also tried to 'strace grub-install' to see how much data was
> being copied to the disc but I got an error message (something about the
> bios) but I am trying to do this from within Knoppix so things are unlikely
> to be the same.

Try comparing the two of them to see how much is different.

Can you rig something up to look for a duplicate of the first sector of
the healthy system further into the device?  If so, is there more than one
duplicate copy?  Are the blocks at the same offsets the same in the
damaged system?

If it were me, I'd do this with a simple perl script.  YMMV.
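One way to rig up that duplicate-sector search without perl, sketched
with dd and cmp (the device names and the sector range are placeholders):

```shell
# Grab the healthy partition's first sector as a reference, then compare
# it against each 512-byte block of the damaged partition in turn.
dd if=/dev/hdb5 of=/tmp/ref.sector bs=512 count=1
for off in $(seq 0 32767); do
  if dd if=/dev/hda5 bs=512 skip="$off" count=1 2>/dev/null \
      | cmp -s - /tmp/ref.sector; then
    echo "match at sector $off"
  fi
done
```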

Where I've screwed up a file system in the past, I wrote a script which
chugged through the device file, took any blocks of text (byte values
0..127) longer than 1K or so, and put these into successively numbered
files.  It's a long way from putting a system back together, but if all
else fails, it's a way to make at least some of your files recoverable.
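For the text-scavenging part, strings(1) can do much the same job as
that script -- a sketch, with the device name a placeholder:

```shell
# Dump every printable run of 1024 bytes or more, with its decimal
# offset, straight out of the device file.
strings -t d -n 1024 /dev/hda5 > recovered-text-runs.txt
```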

> > If it's not so happily simple and the first sector contains the root of
> > your file system, device parameters or something similar, then the
> > place to get help is probably a developers list for the ReiserFS file
> > system.
>
> I posted a message to this list but I haven't heard anything back yet, I'll
> wait and see what happens this evening. I'm in a bit of a panic at the
> moment so I'm trying everything I can think of!

The thing is to make it clear that you are making an effort to help
yourself.  Give enough detail that someone knowledgeable can set you
straight without having to explain things from the beginning in order to
pick you up along the way somewhere.  Ask questions which are as specific
as you can manage and which are quick to answer, even if that's just
'where can I read up about XYZ'.

Andrew


--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] Meaning of Nonsense in s p a m

2003-06-15 Thread Andrew McNaughton

Talking of which, does anyone know any good or interesting approaches to
identifying these junk strings?

A checksum algorithm based spam system (eg vipul's razor) could be
modified to work with checksums of only the recognized words in an email.
All unrecognized stuff (based on a standard wordlist) would get stripped
before the checksum was generated.  This would help for a while, and I'd
be interested to hear about anything out there, but the spammers could
deal with it easily enough by modifying their approach to just tack on
half a dozen common words selected at random.
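A toy sketch of the stripped-down checksum idea (the wordlist file and
message names are made up, and razor itself doesn't work this way):

```shell
# Keep only words found in a wordlist (one lowercase word per line),
# then checksum what's left: appended junk strings drop out, so two
# copies of a spam differing only in nonsense hash the same.
normalize() {
  tr -cs '[:alpha:]' '\n' < "$1" | tr 'A-Z' 'a-z' | grep -Fxf wordlist
}
normalize message.txt | md5sum
```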

I presume that algorithms have been developed in the area of detecting
copyright violations which look at percentage overlap between different
bits of text, but I'm far from clear on how you could do that efficiently.

Anyone have any pointers?

Andrew



On Sun, 15 Jun 2003 [EMAIL PROTECTED] wrote:

> Date: Sun, 15 Jun 2003 12:25:16 +1000
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] Meaning of Nonsense in  s p a m
>
>
> I've always presumed that it was some type
> of cookie; in the bovious way, to validate the
> email, but also if you complained to the isp
> they would be able to know who complained if
> the isp then showed the complaint to the
> spammer.
>
>
> But yeah, trying to fool filters is probably the
> main purpose.
>
> Matt
>
>
>

--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] Please Help

2003-06-15 Thread Andrew McNaughton
On Mon, 16 Jun 2003, Erik de Castro Lopo wrote:

> On Sun, 15 Jun 2003 08:38:07 +1000
> Erik de Castro Lopo <[EMAIL PROTECTED]> wrote:
>
> > On Sun, 15 Jun 2003 01:34:18 +0800
> > "Edward Maloney" <[EMAIL PROTECTED]> wrote:
> >
> > > I am desperately seeking the ability to record my modem data
> > > transmissions (from phone line recording made w/sound board)
> > > and would like to decode both originating/answer sides of data
> > > transmissions after call has been made. I have no way of
> > > converting this .wav file to ASCII data.
> >
> > Do you mean performing speech recognition on the WAV file to produce a
> > transcript of the recorded voice? If so, then this technology does not
> > really exist yet.
> >
> > I do not know of an speech recognition software for Linux and even the
> > stuff available for windows is rather limited.

Dragon Dictate is reputed to be reasonably good for Windows.  I don't
really know what's around for *nix but a lot of researchers would be using
*nix for their development platform.  DARPA is the major funder of
research in this area (Echelon, etc.) and makes a good search term.

http://www.google.com/search?q=darpa+speech+recognition+software+download

Generally speaking, existing technology allows for pretty good results
with training of the speech recognition system to understand the
particular voice and some care on the part of the speaker to speak
clearly.  Recognising speech in typical recordings of conversational speech
between unknown parties is a much harder task.

Andrew




--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] email attack?

2003-06-18 Thread Andrew McNaughton

I've seen a couple of these come through with 1000s of messages bounced
back to my server, some of them with 100s of addresses per message, all
at the same ISP.

What amazed me about these was how few people actually believed that the
spam came from that address - you basically don't get complaints.  I've
had more trouble from spammers who put urls in their spam which point to
my site because some page said something agreeing with the point of view
they're pushing.

Andrew McNaughton


On Wed, 18 Jun 2003, Brian Robson wrote:

> Date: Wed, 18 Jun 2003 16:28:06 +1000
> From: Brian Robson <[EMAIL PROTECTED]>
> To: SLUG <[EMAIL PROTECTED]>
> Subject: Re: [SLUG] email attack?
>
> The same thing happened to me a couple of years ago, I got about 60
> undelivered emails returned to me.
>
> Some spammer has picked up a real email address on your domain, or has just
> guessed an address like [EMAIL PROTECTED]
>
> All you get are the bounces, so the actual number of emails sent with you as
> the origin is far higher.
>
> Keep a record, just in case some fraud or crime is being committed in your
> name.
>
> Brian
>
> PS: My home page is not currently being used for any actual email addresses,
> but it still gets 250 to 350 SPAMs per week.  Where are you Senator Alston
> when we need you, the man who was going to make SPAM illegal.
>
>
>
>
>
>
>
>
>
>
>
>
>
> At 03:25 PM 18/06/03 +1000, you wrote:
> >
> >I'm concerned that I'm being attacked in some way that I don't understand.
> >I've checked my logs and found over 400 "unknown user" messages for
> ><[EMAIL PROTECTED]>. Then I got the following MAILER-DAEMON email
> >telling me the address is undeliverable.
> >
> >I can't figure out why I should suddenly get this one apparently
> >inappropriate MAILER-DAEMON email.
> >
> >I am a legitimate relay for mydomain.com.au but user "rjnr" doesn't exist
> >and never did.
> >
> >Their are also 2000 other "unknown user" messages for this particular
> >domain in this week's log, so it looks like some spammer has targetted
> >this domain.
> >
> >Am I worrying about nothing?
> >
> >[Woody/Postfix, btw]
> >
> >Date: Tue, 10 Jun 2003 08:50:56 +1000 (EST)
> >From: Mail Delivery System <[EMAIL PROTECTED]>
> >To: [EMAIL PROTECTED]
> >Subject: Undelivered Mail Returned to Sender
> >Parts/Attachments:
> >   1   Shown 13 lines  Text, "Notification"
> >   2   Shown226 bytes  Message, "Delivery error report"
> >   3   Shown1.3 KB Message, "Undelivered Message"
> >   3.1 Shown 22 lines  Text
> >
> >
> >This is the Postfix program at host fast.kenpro.com.au.
> >
> >I'm sorry to have to inform you that the message returned
> >below could not be delivered to one or more destinations.
> >
> >For further assistance, please send mail to 
> >
> >If you do so, please include this problem report. You can
> >delete your own text from the message returned below.
> >
> >The Postfix program
> >
> ><[EMAIL PROTECTED]>: unknown user: "rjnr"
> >
> >[ Part 2: "Delivery error report" ]
> >
> >Reporting-MTA: dns; fast.kenpro.com.au
> >Arrival-Date: Tue, 10 Jun 2003 08:50:55 +1000 (EST)
> >
> >Final-Recipient: rfc822; [EMAIL PROTECTED]
> >Action: failed
> >Status: 5.0.0
> >Diagnostic-Code: X-Postfix; unknown user: "rjnr"
> >
> >[ Part 2: "Delivery error report" ]
> >
> >Reporting-MTA: dns; fast.kenpro.com.au
> >Arrival-Date: Tue, 10 Jun 2003 08:50:55 +1000 (EST)
> >
> >Final-Recipient: rfc822; [EMAIL PROTECTED]
> >Action: failed
> >Status: 5.0.0
> >Diagnostic-Code: X-Postfix; unknown user: "rjnr"
> >
> >
> >[ Part 3: "Undelivered Message" ]
> >
> >Date: Mon, 9 Jun 2003 15:55:28 -0700
> >From: Mail Delivery Subsystem <[EMAIL PROTECTED]>
> >To: [EMAIL PROTECTED]
> >Subject: MAILER-DAEMON Returned mail: User unknown
> >
> >The original message was received at 6/9/2003 3:55:27 PM -0100
> >[218.79.218.34]
> >- The following addresses had permanent fatal errors -
> ><[EMAIL PROTECTED]>
> >(expanded from: <[EMAIL PROTECTED]>)
> >
> >- Transcript of session follows -
> >mail.local

Re: [SLUG] Secondary MX record - To have or not

2003-06-19 Thread Andrew McNaughton



On Fri, 20 Jun 2003, Anth Courtney wrote:

> On Fri, 20 Jun 2003, Matt Hyne wrote:
>
> > There seems to be two camps here - those that do not believe that they
> > are needed (and thus don't provide them) and those that believe that
> > they are a mandatory part of a redundant mail system.
> >
> > I am sitting on the fence (I can see some merits to both sides of the
> > argument)
>
> Out of interest, I'd be interested in hearing some of the arguments for
> why they're not needed - personally, I wouldn't live without one.

In the event that a remote mail server is not immediately contactable,
mail generally just stays on the queue at the sender's end for up to a few
days until it can be delivered.  So If your mail server is offline for a
while then mail's going to get through when your server is back on line
unless you're out of action for several days.

Andrew

--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] Tcpdump - multiple filters to multiple files?

2003-06-23 Thread Andrew McNaughton

FWIW

I don't know any way to do this with existing tools, but it would
presumably not be a particularly difficult task for a c programmer to
modify tcpdump for this purpose.

Depending how much speed you really need, this could also be done in perl
using Net::Pcap.

snort might also be of interest.  I'm not particularly familiar with it,
but this seems like the sort of thing I'd expect it to be able to do.

tcpflow splits traffic by TCP stream.  Not sure if that's useful to you.
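One workaround along those lines, sketched with plain tcpdump (interface
and addresses are examples): it doesn't write the per-IP files live, but
it keeps the capture itself down to a single process.

```shell
# One live capture writes a single file covering all the hosts of interest...
tcpdump -i eth0 -w all.pcap 'host 1.2.3.4 or host 2.3.4.5'

# ...and per-host files are carved out of it afterwards, offline.
for ip in 1.2.3.4 2.3.4.5; do
  tcpdump -r all.pcap -w "$ip.pcap" "host $ip"
done
```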


Andrew



On Mon, 23 Jun 2003, Umar Goldeli wrote:

> Date: Mon, 23 Jun 2003 20:01:17 +1000 (EST)
> From: Umar Goldeli <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] Tcpdump - multiple filters to multiple files?
>
> Howdy,
>
> How are we all? :)
>
> Here's an interesting question that I'm looking for a solution to - quite
> simply, is there a way to run tcpdump to capture different ip addresses
> and output them to different files without running multiple copies of
> tcpdump?
>
> Specifically - something along these lines:
>
> * A single tcpdump process captures packets with source or dest IP:
> 1.2.3.4 and outputs the results to 1.2.3.4.log whilst at the same time
> doing the same for 2.3.4.5 and 2.3.4.5.log respectively.
>
> Ideally - this scales to the 100 mark or so.. and FAST.
>
> I'm pretty sure this can't be done with tcpdump/libpcap - but is there
> another utility?
>
> If none exists - how hard would it be to code such a beast? Also - could
> it be coded portably so it could compile/run on Solaris etc?
>
> Looking forward to hearing your replies...
>
> Thanks in advance. :)
>
> Cheers,
> Umar.
>
>

--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





RE: [SLUG] Opinions sought: Exim vs Sendmail

2003-06-23 Thread Andrew McNaughton
On Mon, 23 Jun 2003 [EMAIL PROTECTED] wrote:

> > -Original Message-
> > From: Andrew McNaughton [mailto:[EMAIL PROTECTED]
> > Sent: Monday, 23 June 2003 5:03 PM
> > To: James Gray
> > Subject: Re: [SLUG] Opinions sought: Exim vs Sendmail
> >
> >
> >
> > It sounds to me like you might want to allow more concurrent processes
> > than you are at present.  Also, the main resource you need more of is
> > going to be memory.  Have your bean counters taken in that an
> > extra 512MB
> > is really petty cash level expenditure?  In any case, is
> > there really any
> > problem with allowing more concurrent accesses?
> >
> > I agree with the suggestion about running your mail filter as
> > a daemon if
> > possible.  If this serializes your filtering then it should
> > help a great
> > deal.
> >
> > Andrew
>
> Spamassassin is already running as a daemon and chews up about 24Mb RAM
> just for the parent process - that's almost 10% of our physical RAM!.
> spamd children (according to vmstat + ps + top) are all reporting
> similar usage (23-27Mb RAM).  As you suggest the lack of RAM is killing

Most of that memory should be shared, so it's probably not quite as bad as
you're suggesting?

> us.  I did some burst load testing on it yesterday and found that
> without limiting child processes for spamd it only took 15 messages in
> under 5 seconds to sent system load to 15!  14 children of spamd caused
> so much paging that the system ground to a halt (load peaked at 19!!).

I don't know the ins and outs of how the spamassassin daemon works, but
this is not how you'd want it to behave.  The daemon should limit the
number of children operating at any given time: enough parallel requests
that your system has work to do during waits for remote DNS and checksum
checks, but not so many that processes all end up competing for CPU.

> So I did some quick calculations and decided 3 spamd children per CPU
> (with 256Mb RAM) would be appropriate (given average time per message,
> RAM, other process requirements etc).  I managed to send 50 messages in
> 7 seconds and system load hit 15 for less than 3 seconds and then
> quickly returned to <1.  Paging was non-existent and no connections were
> refused.  So startup scripts are now "spamd -m 3."

Sounds better.

You've got a fundamental limit on the number of messages you can process
in a given time - each message takes a certain amount of CPU time, which
gives you a pretty good idea of how many messages you can get through in
a given block of time.

At some level of CPU activity you would ideally want to change strategy:
stop doing mail filtering while the remote MTA is connected and start
putting mail into a spool.  You'd want this spool to be processed as
resources are available, and mail either delivered or bounced accordingly.
A bounce isn't as good as giving an error while you've got the remote MTA
still connected, but it allows you to process things at your leisure.

This delayed processing has other advantages too.  Spam checksum
systems take a short while to accumulate spam reports, so delaying
processing can mean improved results.  I've been thinking of doing a
procmail-based version of this on my own mailbox, re-checking messages a
while after delivery.

How you'd set up this spooling I'm not sure.  I would very much like to
hear about such arrangements using exim or postfix.

Andrew


--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] constant hard disk access

2003-06-24 Thread Andrew McNaughton

On Wed, 25 Jun 2003, Ben Donohue wrote:

> Hi Slugs,
> I have a Mandrake 9 box. Turn it on and after a week or so of running
> the hard disk access light seems to stay almost constantly on. This will
> continue for about a week and then stop back to normal ie very low
> activity. Give it another week or two and then another week or so of
> hard access again. If I reboot the box (ah windows training) the access
> is normal till a week or so and then heavy access all over again.

I mostly use FreeBSD, and the tools for monitoring this sort of thing are
a bit different to linux, but these should be useful.

Could be swap that's getting hammered (check your memory use) but the way
the problem comes and goes is a bit odd.  You should have vmstat on Linux,
and that should give you a fair idea of how much swap activity is going
on.  As far as I know, systat is BSD-specific, but if it's available to
you it's quite nice.

Use lsof to have a look at what file handles are open from each process on
your system.  Browsing through there thinking about why things are open
might turn something up.

Use top or ps to see what state processes are in.  You're interested in
any state which suggests a process might be waiting for the disk.  You
could use this info to cut down the number of processes you pay attention
to the lsof output for.
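The checks above might look something like this (the vmstat interval and
count, and the pid, are arbitrary examples):

```shell
# Swap and block-I/O traffic: si/so columns show paging, bi/bo show disk.
vmstat 5 5

# Processes in uninterruptible sleep ("D" state) are usually waiting on disk.
ps axo pid,stat,comm | awk '$2 ~ /D/'

# Open file handles for one suspect process (1234 is an example pid).
lsof -p 1234
```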

Andrew McNaughton


--

No added Sugar.  Not tested on animals.  If irritation occurs,
discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] Opinions sought: Exim vs Sendmail

2003-06-30 Thread Andrew McNaughton
On Mon, 30 Jun 2003 [EMAIL PROTECTED] wrote:

> On 30 Jun, Oscar Plameras wrote:
> >  The reason is as follows:
> >
> >  Number of IPV4 addresses = 255*255*255*255 * 50 bytes (your  allocation)
> >=  4,228Mb * 50 =
> >  202,280MB
>
> A cache isn't a complete copy.  You store what you allow room for, and
> fall back to your normal mechanism if the entry isn't in the cache.
> You use LRU typically after the cache fills.
>
> This is all very standard stuff, and it's the technique that Solaris
> uses to get good performance.  So I can't see why Linux couldn't do the
> same.

It's quite straightforward to implement a DNS cache on Linux -- run bind
on the box.  Isn't this going in circles?  For some reason people wanted
to get the cache off the box.

Personally I haven't found that removing bind gives a performance benefit
(quite the opposite), but different systems use resources in different
combinations, so YMMV.

Andrew


--

No added Sugar.  Not tested on animals.  May contain traces of nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] Opinions sought: Exim vs Sendmail

2003-06-30 Thread Andrew McNaughton
On Tue, 1 Jul 2003, Oscar Plameras wrote:

> From: <[EMAIL PROTECTED]>
> > On 30 Jun, Oscar Plameras wrote:
> > >  The reason is as follows:
> > >
> > >  Number of IPV4 addresses = 255*255*255*255 * 50 bytes (your
> allocation)
> > >=  4,228Mb * 50 =
> > >  202,280MB
> >
> > A cache isn't a complete copy.  You store what you allow room for, and
> > fall back to your normal mechanism if the entry isn't in the cache.
> > You use LRU typically after the cache fills.
> >
>
> Just a point of clarification:
>
> Cache is structured data, or data list, or list, kept in CPU MEMORY
> all the time and may be used by software to locate other information
> or to manipulate information.
>
> Database is structured data, or data list, or list, kept in DISK STORAGE
> and may be used by software to locate other information
> or to manipulate information.

This is simply not true.

A cache may be kept on disk, and commonly is.  eg Squid caches to disk
because the overhead of disk retrieval is less than the overhead of
repeating the network retrieval.

Also, a database may in some cases be implemented as an in-memory
structure, although this is sufficiently unusual that you should always be
clear about what you're talking about if you wish to avoid the assumption
that it is disk-based.

Andrew McNaughton


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] LDAP: perldap

2003-07-06 Thread Andrew McNaughton

This looks like the critical line in the error listing:

API.c:53:23: ldap_ssl.h: No such file or directory

What's supposed to provide that file?  Perhaps there's something you're
supposed to have installed first?

Andrew



On Sun, 6 Jul 2003, Phillipus Gunawan wrote:

> Date: Sun, 6 Jul 2003 16:57:11 -0700 (PDT)
> From: Phillipus Gunawan <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] LDAP: perldap
>
> G'day,
>
> I'm trying to get my hand with LDAP. Does anyone know
> how to set-up ldap for the m$ outlook address book
> database? What fields should be in? Googling only
> directing me to set up the outlook connection to the
> LDAP server, nothing more... :(
>
> I read that webmin has a 3rd party module for LDAP.
> For that purpose, I need to install Netscape LDAP SDK
> (done) and perldap (from www.mozilla.og/directory)
>
> In the how_to_install_perldap, after extracting the gz
> file, I need to do "perl Makefile.PL" if there is no
> error, do "make"
>
> The problem is that I'm having no error with "perl
> Makefile.PL" but I cant do the "make". The terminal
> show me 1 error which I couldn't figure it out what
> happen. I attached a txt file copy_paste from the
> terminal. Could someone please guide me whats wrong
> with it?
>
> Best Regards,
>
>
> Phillipus.
>
> __
> Do you Yahoo!?
> SBC Yahoo! DSL - Now only $29.95 per month!
> http://sbc.yahoo.com

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc




perl_ldap Error
Description: perl_ldap Error


Re: [SLUG] ssh-agent passphrase-on-demand

2003-07-07 Thread Andrew McNaughton
On Tue, 8 Jul 2003, Jamie Wilkinson wrote:

> Hey slugs,
>
> 2 parts to this:
>
> Does anyone know of a way to have a single ssh-agent running on a machine
> per user, so that when they log in on the console, or via {k,g,x}dm, or ssh,
> only one ssh-agent is running?

if you run ssh-agent without giving it a child command to run, then it
outputs a bunch of stuff you can run in a shell command:

SSH_AUTH_SOCK=/tmp/ssh-8e1dxwe3/agent.24927; export SSH_AUTH_SOCK;
SSH_AGENT_PID=24928; export SSH_AGENT_PID;
echo Agent pid 24928;

You could pipe that to a file in the user's home directory which you will
run as part of your login procedure, whether that be through a .xsession,
a .profile, or whatever.

You will need to be able to identify whether the agent is actually
running or not, and start it if necessary.  I'm guessing that
$SSH_AUTH_SOCK disappears when the agent dies.
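Pulled together, that could look something like this in a login script
(~/.ssh-agent-env is an arbitrary choice of filename):

```shell
# Share one ssh-agent per user: source the saved environment, and start
# a fresh agent only if the saved socket is gone.
AGENT_ENV="$HOME/.ssh-agent-env"
[ -f "$AGENT_ENV" ] && . "$AGENT_ENV" > /dev/null
if [ -z "$SSH_AUTH_SOCK" ] || [ ! -S "$SSH_AUTH_SOCK" ]; then
  ssh-agent > "$AGENT_ENV"
  . "$AGENT_ENV" > /dev/null
fi
```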

You also need to think about when the agent should die, and make that
happen.

As always, be aware that anyone who can connect to the agent socket can
authenticate using whatever keys the agent has.  You've got to trust the
root user.  I bring this up because if you're logging in via ssh, then
it's worth thinking whether you should be logging in to somewhere else
from there rather than connecting directly.  Personally I don't like to
put extra machines in the middle of the connection.

> Does anyone know how to have ssh keys loaded into ssh-agent without having
> ssh-add ask for a passphrase, until that key is used?  So I can have all the
> keys I use loaded at ssh-agent start, but I get prompted for a passphrase on
> the key only when ssh tries to use that key?  Or perhaps a way for the key
> to get added to ssh-agent when ssh needs it?

That would be seriously insecure.

Andrew

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc





Re: [SLUG] command Line mailer that can send an attachment

2003-07-08 Thread Andrew McNaughton

My version of mail doesn't do attachments, so I wrote a simple script to
do the job.  It's attached.  The interface is similar to that for mail.

Andrew





On Wed, 9 Jul 2003, Terry Collins wrote:

> Date: Wed, 09 Jul 2003 10:58:11 +1000
> From: Terry Collins <[EMAIL PROTECTED]>
> To: Michael Lake <[EMAIL PROTECTED]>
> Cc: Slug List <[EMAIL PROTECTED]>
> Subject: Re: [SLUG] command Line mailer that can send an attachment
>
> Michael Lake wrote:
> >
> > Terry Collins wrote:
> >
> > > As per subject, need a command line mailer that can send a file as an
> > > attachment.
> >
> > Its called 'mail' and yes it can send attachments.
> > man mail
>
> Thanks.
> hmm, we must have different versions as there is no mention of
> attachments in the version I have (sans rh5.0) as all I can do is send
> it as the message.
>
> I'm wanting to do something like
>
> for o in `ls -1'
>   do
>   mailer -a $o  [EMAIL PROTECTED]
>   done
>
> and prefeerably with an older mailer
>
>
>



#!/usr/local/bin/perl
#
# amail - a command line tool for sending files as mail attachments
# 
#  fairly similar in style to 'mail'
#
# usage: 
#   
#   amail [-s Subject] [-c Cc] [-b  Bcc] [-r Reply-to] 
# [-f From] [[-a attachment_file] ,...] address, ... 
#
#
#  addresses may optionally be preceded by -t, in which 
#  case they may appear anywhere in the command line
#
#
#
# Copyright (C) 2000, Andrew McNaughton.
# Distributed under the terms of the Perl Artistic License  
#

use MIME::Entity;

unless (@ARGV) {
    print <<"END_USAGE";
Usage:
    amail [-s Subject] [-c Cc] [-b Bcc] [-r Reply-to] [-f From] [[-a attachment_file] ,...] address, ...
END_USAGE
    exit;
}


my %types = qw(
    .ai     application/postscript
    .aifc   audio/x-aiff
    .aiff   audio/x-aiff
    .au     audio/basic
    .bin    application/octet-stream
    .c      text/plain
    .c++    text/plain
    .cc     text/plain
    .cdf    application/x-netcdf
    .csh    application/x-csh
    .dump   application/octet-stream
    .dvi    application/x-dvi
    .eps    application/postscript
    .exe    application/octet-stream
    .gif    image/gif
    .gtar   application/x-gtar
    .gz     application/gzip
    .gzip   application/gzip
    .h      text/plain
    .hdf    application/x-hdf
    .hqx    application/mac-binhex40
    .html   text/html
    .jar    application/java-archive
    .jfif   image/jpeg
    .jpe    image/jpeg
    .jpeg   image/jpeg
    .jpg    image/jpeg
    .mime   message/rfc822
    .mpeg   video/mpeg
    .mpg    video/mpeg
    .nc     application/x-netcdf
    .pdf    application/pdf
    .php    text/html
    .pjp    image/jpeg
    .pjpeg  image/jpeg
    .pl     text/x-perl
    .png    image/png
    .ps     application/postscript
    .rgb    image/x-rgb
    .rtf    application/x-rtf
    .saveme application/octet-stream
    .sh     application/x-sh
    .shar   application/x-shar
    .sit    application/x-stuffit
    .snd    audio/basic
    .src    application/x-wais-source
    .tar    application/x-tar
    .tcl    application/x-tcl
    .tex    text/plain
    .text   text/plain
    .tif    image/tiff
    .tiff   image/tiff
    .txt    text/plain
    .uu     application/octet-stream

Re: [SLUG] Converting courier-imap Maildir to Cyrus Maildir structure

2003-07-08 Thread Andrew McNaughton

On Wed, 9 Jul 2003, Gonzalo Servat wrote:

> Date: Wed, 09 Jul 2003 15:02:51 +1000
> From: Gonzalo Servat <[EMAIL PROTECTED]>
> To: Andrew McNaughton <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SLUG] Converting courier-imap Maildir to Cyrus Maildir
> structure
>
> On 9/07/2003 4:50 PM +1200, Andrew McNaughton wrote:
>
> > If you have a mail client talking to both the old and new servers at once,
> > it might be as easy to just move the messages from one folder to another
> > using that client.  The mail would all have to move through the client,
> but it might be faster than writing a script.  If you can do it with a
> > mail client on the mail server then that stands in for your script pretty
> > well.
>
> If it was for only one client, sure (even though not many mail clients copy
> a bunch of folders across IMAP servers) but we're talking 200+ users here :)

Fair enough.

You might still find it convenient to look at IMAP client libraries rather
than thinking in terms of the mail folders.

e.g. if Perl suits you, then Mail::IMAPClient's migrate method looks like
the sort of thing you are after.

You would have to be able to handle the authentication stuff.
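For reference, the migrate-style copy loop can be sketched with Python's standard imaplib (Mail::IMAPClient's migrate method is the Perl equivalent mentioned above). This is only an illustration: the connections are assumed to be already authenticated, and flags/dates are ignored for brevity.

```python
import imaplib

def message_ids(search_data):
    """Split an IMAP SEARCH response like [b'1 2 3'] into individual ids."""
    return search_data[0].split()

def migrate_folder(src, dst, folder="INBOX"):
    """Copy every message in `folder` from src to dst.

    src and dst are authenticated imaplib.IMAP4 connections.
    Returns the number of messages copied."""
    src.select(folder, readonly=True)
    typ, data = src.search(None, "ALL")
    copied = 0
    for num in message_ids(data):
        typ, parts = src.fetch(num, "(RFC822)")
        raw = parts[0][1]                    # raw RFC822 message bytes
        dst.append(folder, None, None, raw)  # flags/date omitted for brevity
        copied += 1
    return copied
```

Per-user authentication still has to be handled before any of this runs, which is the hard part with 200+ accounts.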

Andrew






Re: [SLUG] ASP web pages on Apache/Linux

2003-07-14 Thread Andrew McNaughton

There's a mod_perl based system called
Apache::ASP which lives at http://www.apache-asp.org/

This doesn't mean that you can just grab your Windows ASP code and run it
on *nix, unless the scripting language used within ASP is Perl, in which
case you are presumably not far off.  From the site:

"This is a portable solution, similar to ActiveState's
PerlScript for NT/IIS ASP. Work has been done and will
continue to make ports to and from this implementation
as smooth as possible."

Andrew



On Tue, 15 Jul 2003, Phil Scarratt wrote:

> won't work. You will need to use something like PHP. There is a script
> called asp2php that will do its best to convert ASP to PHP - no idea on
> how successful it is
>
> Bernhard Lüder wrote:
> > Hi,
> >
> > I was wondering, if anyone has any experience with running ASP (Active
> > Server Pages) on an Apache web server on a RedHat OS?
> >
> > Regards
> > Bernhard Lüder
> > BLUE NET - The business communication provider
> > 200 / 4 Young Street
> > Neutral Bay NSW 2089
> > Ph 1300 852 147
> > Fax 02 9908 8090
> >
>
>
>



Re: [SLUG] Apache vhost logs managemt

2003-07-17 Thread Andrew McNaughton

I like httplog <http://www.gnu.org/directory/all/httplog.html>

It doesn't resolve DNS names for you, but it does handle rolling over
files, with suitable naming and gzip compression.

Presumably you could put another program in the output chain to do the DNS
resolution on the fly. That would mean that you wouldn't get the latency
problems associated with resolving IPs before the request is processed,
but you would have resolved IPs in your logs right from when they are
written.

The key thing with DNS resolvers is that to operate at an acceptable speed
they need to cache results, and to run multiple DNS requests in parallel.

The perl script jdresolve <http://jdrowell.com/projects/jdresolve> might
do for the name resolution, but if it were me I'd modify it so that it
only concerns itself with the first field in the HTTP logs.  There must be
something around in C or C++ to do this job well.
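The cache-plus-parallelism idea can be sketched as follows (illustrative Python, not jdresolve itself). It assumes Common Log Format with the client address as the first field; the `lookup` parameter is a hook I've added so a fake resolver can stand in for real DNS.

```python
from concurrent.futures import ThreadPoolExecutor
import socket

def resolve_log_ips(lines, lookup=None, workers=20):
    """Rewrite the first field (client IP) of Common Log Format lines to a
    hostname.  Each distinct IP is looked up once (the cache) and distinct
    IPs are resolved in parallel (the thread pool)."""
    if lookup is None:
        lookup = lambda ip: socket.gethostbyaddr(ip)[0]

    def safe(ip):
        try:
            return lookup(ip)
        except OSError:
            return ip  # leave unresolvable addresses as-is

    # one lookup per distinct IP, run in parallel
    ips = {line.split(" ", 1)[0] for line in lines if line.strip()}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        cache = dict(zip(ips, pool.map(safe, ips)))

    out = []
    for line in lines:
        ip, _, rest = line.partition(" ")
        out.append(cache.get(ip, ip) + " " + rest if rest else line)
    return out
```

A real tool would stream rather than hold the whole log in memory, but the caching and parallel-request structure is the point.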

Andrew McNaughton


On Thu, 17 Jul 2003, Voytek Eymont wrote:

> Date: Thu, 17 Jul 2003 23:57:09
> From: Voytek Eymont <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] Apache vhost logs managemt
>
> I'm setting up Apache 1.3x with name vhost and I need to set up log
> handling for all vhosts; basically, I'd like to is:
>
> process yesterday's logs to resolve IP addresses;
> gzip yesterday's logs (is bzip2 what should be used instead of gzip ?);
> have logs saved with some sort of dd-mm-yy
>
> suggestions and useful scripts welcomed
>
> thanks,
>
> Voytek Eymont
>



Re: [SLUG] anti-nida apache conf mods ?

2003-07-17 Thread Andrew McNaughton

That approach requires mod_perl, though, which makes for a heavyweight
server if you don't already have it.  Try this for the same effect:


SetEnvIfNoCase Request_URI root\.exe|cmd\.exe|default\.ida worm_request
Deny from env=worm_request
CustomLog logs/access_log common env=!worm_request


Note that that last line is a replacement for your existing CustomLog
line.  You'll need to modify every CustomLog entry in your config file.

Andrew


On Fri, 18 Jul 2003, John Clarke wrote:

> Date: Fri, 18 Jul 2003 12:24:08 +1000
> From: John Clarke <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] anti-nida apache conf mods ?
>
> On Thu, Jul 17, 2003 at 11:35:41PM +, Voytek Eymont wrote:
>
> > are there any worthwhile httpd.conf mods to minimize impact or ?
>
> This rejects the request without logging it:
>
> http://www.torkington.com/vermicide.txt
>
>
> Cheers,
>
> John
>



Re: [SLUG] Apache vhost logs managemt

2003-07-17 Thread Andrew McNaughton
On Fri, 18 Jul 2003, John Clarke wrote:

> On Fri, Jul 18, 2003 at 12:05:27PM +1200, Andrew McNaughton wrote:
>
> > The perl script jdresolve <http://jdrowell.com/projects/jdresolve> might
> > do you for the name resolution, but if it was me I'd modify it so that it
> > only concerns itself with the first field in the http logs.  There must be
> > something around in C or C++ to do this job well.
>
> There is.  It's called logresolve and is part of Apache:
>
> logresolve  is  a  post-processing  program to resolve IP-
> adresses in Apache's access logfiles.  To minimize  impact
> on  your  nameserver, logresolve has its very own internal
> hash-table cache. This means that each IP number will only
> be looked up the first time it is found in the log file.

I said to do the job well.

The problem with logresolve is that it doesn't do parallel dns requests,
so it's an order of magnitude slower than jdresolve or other solutions
which parallelize the requests.

I've just had a bit of a look around, and this one looks promising:

http://www.djmnet.org/sw/fastresolve/

Andrew McNaughton






Re: [SLUG] disk activity monitoring

2003-07-21 Thread Andrew McNaughton
On Tue, 22 Jul 2003, Binh Nguyen wrote:

> Date: Tue, 22 Jul 2003 12:15:03 +1000
> From: Binh Nguyen <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] disk activity monitoring
>
> How Wong wrote:
> > HI
> >
> > How do I know what processes are using the disk? if it is the sar
> > program to use then how would I use it?
> > thanks
>
> ps, top?

Those are good for monitoring active processes and CPU usage.  For
monitoring disk activity I'd go for iostat and lsof.

Andrew McNaughton




Re: [SLUG] Remote desktop

2003-07-21 Thread Andrew McNaughton
On Tue, 22 Jul 2003, Binh Nguyen wrote:

> Matthew Palmer wrote:
> > On Tue, Jul 22, 2003 at 12:37:04PM +1000, Rowling, Jill wrote:
> >> If I have an X terminal (ancient Red Hat thing with Gnome and
> >> Enlightenment) and another Linux system (SuSe thing with KDE), how
> >> would you suggest I get the SuSe desktop to appear on the X terminal?
> >
> > XDM query.
> >
> > - Matt
>
> Or SSH tunnel.

Right.  Pass port 5901 through for display remotehost:1, 5902 for :2, and
so forth.

ssh -L5901:remotehost:5901 [EMAIL PROTECTED]

Then make a VNC connection to localhost:1 and you get the VNC server
running on display :1 on the remote host.
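The port arithmetic here is just the VNC convention that display :N listens on TCP port 5900+N; a trivial sketch (function names are mine):

```python
def vnc_port(display: int) -> int:
    """VNC display :N listens on TCP port 5900 + N."""
    return 5900 + display

def ssh_forward_args(display: int, remotehost: str) -> str:
    """Build the -L option used above for a given display number."""
    port = vnc_port(display)
    return f"-L{port}:{remotehost}:{port}"
```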

Andrew





RE: [SLUG] Remote desktop

2003-07-21 Thread Andrew McNaughton


Hmm.  Well, a dedicated X terminal probably is better at running X than
running vnc.

I'm not sure shortage of resources is a big issue though.  VNC runs just
fine on Palm Pilots and the like.  9-bit colour will significantly reduce
the amount of memory it requires too.

Andrew



On Tue, 22 Jul 2003, Rowling, Jill wrote:

> Hi all,
>
> Remember I did say "not using VNC" - this is because VNC cannot be loaded
> onto the X terminal.
> Not enough resources, shall we say (9 bit colour, 32 MB RAM). So VNC is not
> an option.
>
> I gather that you have to run xdm before you run X11 on the X terminal, and
> let it start X with a chooser. That aspect isn't clear from the
> documentation.
> I will experiment a little at home some more.
>
> Just out of curiosity I tried running a Solaris dtsession redisplayed to a
> Red Hat Linux X session but it was not entirely satisfactory -- running the
> individual applications was better as Gnome positions things better when it
> has full control of the desktop (funny bout that).
>
> -Original Message-
> From: Dave Airlie [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, 22 July 2003 2:12 PM
> To: Andrew McNaughton
> Cc: [EMAIL PROTECTED]; Binh Nguyen
> Subject: Re: [SLUG] Remote desktop
>
>
> > Right.  Pass port 5901 through for display remotehost:1, 5902 for :2,
> > and so forth.
> >
> > ssh -L5901:remotehost:5901 [EMAIL PROTECTED]
> >
> > then make a vnc connection to localhost:1 and you get the vnc server
> > running on display :1 on the remote host.
>
> you might also specify encodings when doing this.. as when vnc sees
> localhost it goes wow fast connection.. and tries raw..
>
> Dave.
>



Re: [SLUG] My MySQL mistake & how I recovered

2003-07-22 Thread Andrew McNaughton

You can just stop mysqld and start it up again with the
--skip-grant-tables option.  This turns off all password protection.  You
fix the password and then restart mysqld with password protection back on.

Depending where your server is located you might consider it important to
firewall mysql's TCP port while you do this.

Andrew




On Wed, 23 Jul 2003, Grant Parnell - EverythingLinux wrote:

> Date: Wed, 23 Jul 2003 09:01:06 +1000 (EST)
> From: Grant Parnell - EverythingLinux <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] My MySQL mistake & how I recovered
>
> # mysql mysql
> > update user set password=('hello') where user='root';
> > quit
> # mysqladmin reload
> # mysql mysql --password=hello
> ERROR 1044: Access denied for user: '[EMAIL PROTECTED]'
> to database 'mysql'
>
> At this point I pondered and worked out I should have used
> password=password('hello'). The prospect of finding a password that
> encrypts to the word 'hello' was now rather daunting. I had to find
> another approach... I believe there is a way to reset the root password
> with a special command but for some reason I decided to be a bit more
> creative.
>
> # /etc/init.d/mysqld stop
> # cd /var/lib/mysql
> # cp -a mysql/user.* test/
> # /etc/init.d/mysqld start
> # mysql test
> > update user set password=password('hello') where user='root';
> > quit
> # /etc/init.d/mysqld stop
> # cp -a test/user.* mysql
> # rm -f test/user.*
> # /etc/init.d/mysqld start
> # mysql mysql --password=hello
> > select * from user;
>
> Whew! success. This relies on the fact that by default, under RedHat at
> least, there's a table called 'test' which everybody has access to without
> a password. I suppose this is something to consider if you need a bit more
> security.  Some would argue "once you've got root what else is there?".
>
>  --
> --
> Grant Parnell - senior consultant
> EverythingLinux services - the consultant's backup & tech support.
> Web: http://www.everythinglinux.com.au/services
> We're also busybits.com.au and linuxhelp.com.au.
> Phone 02 8752 6622 to book service or discuss your needs.
>
>



Re: [SLUG] Hard disk

2003-07-22 Thread Andrew McNaughton
On Wed, 22 Jul 2003, Bret Comstock Waldow wrote:

> You need to worry about the mbr (Master Boot Record) as well as the disk
> partitions.
>
> Keep in mind no malware (virus, etc.) does anything on its own - your
> CPU must be tricked into running it.  Just 'cause there's a sequence of
> bytes some where on the disk doesn't mean your system's at risk - it
> must be in the file system so that it can be loaded into memory by your
> OS and the CPU's nose must be pointed at it and told to run it.
>
> I've had circumstances where using fdisk to kill, and then recreate
> partitions allowed me to recover files afterward, but recovery tools
> bypass the directory listings.  Otherwise, if you initialize a
> partition, there's no way for the OS to find the files, load the byte
> sequences, etc., 'cause the facilities the OS uses read the directory
> listing to find the files, which the directory listing says aren't
> there...

Correct.

I did hear about an interesting hack (by a cracker I suppose, but a hack
nonetheless) where a cache of tools was stashed in a hidden file system
located on blocks which the original file system had been told to regard
as bad blocks on the disk.

There's no way to access that cache without already having substantial
control over the system.  It's not the point at which the system becomes
vulnerable.

It's interesting though, which is the sole reason I mention it.

Andrew McNaughton




Re: [SLUG] Image viewer with full-screen slideshow capability

2003-07-24 Thread Andrew McNaughton





On Fri, 24 Jul 2003, Michael Kraus wrote:

> Subject : Re: [SLUG] Image viewer with full-screen slideshow capability
>
> Anyone know if there is an app around that will do the above?

xv -wait 20 -maxpect file1 file2 file3 ...

I'm sure there are plenty of solutions around, but xv is a bit of an old
standard item.

Andrew





Re: [SLUG] Finding all config files... (Debian question))

2003-07-27 Thread Andrew McNaughton

Is what you want explicitly 'config' files, or is it a list of any files
which have been modified from the distribution version?  Would comparing
checksums with the original files do the trick?  Presumably there's not
much need to back up config scripts which are known to be unchanged from
the originally distributed version.

Depending on the system, you can get local configuration stuff worked into
a lot of places the package maintainer may not have expected.  Sometimes
the configuration files don't cover everything you need to customise and
you wind up doctoring a script or something of the sort.

I haven't used these, but debsums or dlocate might do what you need for
comparing checksums.
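The checksum-comparison idea can be sketched as follows (illustrative Python, not debsums itself; `manifest` stands in for whatever per-package checksum data your tool provides):

```python
import hashlib

def file_md5(path):
    """Hex MD5 of a file, read in chunks so large files don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(manifest):
    """manifest maps path -> expected md5 (debsums-style data).
    Returns the paths whose current checksum no longer matches, i.e.
    the locally-modified files worth backing up."""
    return [path for path, want in manifest.items() if file_md5(path) != want]
```

Anything the function returns has been modified since install, which is exactly the set worth putting in the backup.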

Restoring all modified files could have a downside of course in the event
that you are recovering after a security compromise.  You could wind up
recovering exactly the thing you're trying to eradicate by restoring from
backup.

Andrew





On Mon, 28 Jul 2003, Peter Chubb wrote:

> Hi,
>   I'm trying to set up a backup strategy.  What I want to find
> is all the configuration files for the various installed packages.
> Some are obvious (/etc/apache/http.d.conf, /etc/passwd) Others are not
> (/usr/lib/python2.2/site-packages/MoinMoin/config.py).
>
> Is there a way of extracting from the dpkg information a list of all
> changed configuration files?  Dpkg must know this info...
>
> Peter C
>



Re: [SLUG] editing passwd: alters home dir location ? or

2003-07-27 Thread Andrew McNaughton

Be a little careful.  If the user's home directory is accessible from the
web then various dot files in their home directory might become
accessible.  If you don't trust the users then you might not even want to
give them access to what the system thinks of as their home directories.

My preferred solution is to use ncftpd as my ftp server which gives far
more configurability than most ftp servers, and should make it easy for
you to set the default directory on log in, and access to different
directories without messing with the user's home directory in /etc/passwd.

Andrew McNaughton


On Mon, 28 Jul 2003, Voytek Eymont wrote:

> Date: Mon, 28 Jul 2003 12:55:07
> From: Voytek Eymont <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] editing passwd: alters home dir location ? or
>
> If I edit /etc/passwd
>
> from
> user.org.au:x:517:517:user.org.au:/home/user.org.au:/bin/false
> to
> user.org.au:x:517:517:user.org.au:/home/user.org.au/www:/bin/false
>
> am I altering the users' root ? or, the location where ftp will log him to
> ?
>
> what I'm trying to do is:
>
> I have web users, all they need, is to ftp (or, scp) files to their web
> server root, they do not need or have shell/telnet/ssh access.
>
> I now have:
>
> /home/user1.com.au(owned by root)
> /home/user1.com.au/www(owned by user1.com.au)
>
> ftp login takes user to /home/user1.com.au
>
> will I cause any probs editing /etc/passwd so
> ftp login will take user to /home/user1.com.au/www directly ?
>
> (hmmm, now that I typed it up, I'm not sure anymore if I should do that,
> anyhow)
>
> (and, thanks to all for the help so far)
>
> Voytek Eymont
>



Re: [SLUG] editing passwd: alters home dir location ? or

2003-07-27 Thread Andrew McNaughton

If they are just file-upload users, it's possible they don't even need to
be in /etc/passwd.  Most ftp daemons will allow you to use a separate
password file for ftp authentication.  This will mean that the files will
all be owned by a single system user, so be careful that users get
separate areas if necessary to prevent them clobbering each other's stuff.

If users are only using remote ftp, then they will not run bash, kde or
gtk and won't need any of the dot files you list.  I presume the www dir
is wanted?

Andrew


On Mon, 28 Jul 2003, Voytek Eymont wrote:

> Date: Mon, 28 Jul 2003 14:09:51
> From: Voytek Eymont <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] editing passwd: alters home dir location ? or
>
> ** Reply to note from Andrew McNaughton <[EMAIL PROTECTED]> Mon, 28 Jul 2003 
> 15:38:53 +1200 (NZST)
>
>
> > Be a little careful.  If the user's home directory is accessible from the
> > web then various dot files in their home directory might become
> > accessible.  If you don't trust the users then you might not even want to
> > give them access to what the system thinks of as their home directories.
>
> Andrew, Kevin, thanks, yes, I've decided it wasn't such a good idea..
>
> talking about various dor files: if these users are just 'file upload'
> users, can I just delete all the dot files... or, just leave them ?
>
> /..  ³UP--DIR³³
> /.kde³   4096³Jul 16 10:22³
> /www ³   8192³Jul 28 14:06³
>  .bash_logout³ 24³Jul 16 10:22³
>  .bash_profile   ³191³Jul 16 10:22³
>  .bashrc ³124³Jul 16 10:22³
>  .gtkrc  ³118³Jul 16 10:22³
>  ³   ³³
>
>
> Voytek Eymont
>



Re: [SLUG] Sending mail

2003-07-27 Thread Andrew McNaughton



How about `sendmail -bd -q1h -X/var/log/thoughtpolice` ?

It records incoming as well as outgoing mail though.  To get only outgoing
you'd presumably doctor your SMTP mailer definition.

Andrew



On Mon, 28 Jul 2003, Kevin Saenz wrote:

> You can try using procmail
> > Can Any one suggest me how I can send a copy of every outgoing
> > mail to a specific user account ? I am using Redhat 7.1 and sendmail
> > and ipop3 installed.
> >
> >
> > ===
> >
> > Thank You
> > Md. Ashraful Alam
> >
> >
> >
> > __
> > --
> > SLUG - Sydney Linux User's Group - http://slug.org.au/
> > More Info: http://lists.slug.org.au/listinfo/slug
>
>
>



Re: [SLUG] httpd.conf performance options suggestions

2003-07-29 Thread Andrew McNaughton

This stuff tends to be quite dependent on the server.  Among other things
it depends on the kind of traffic pattern you get, the amount of memory
you have, and what's compiled into Apache (especially if you use mod_perl).

My site (www.scoop.co.nz) got slashdotted 3 weeks back, and the main
problem we hit was with sockets left half open.  It's running on FreeBSD,
so bumping up the number of sockets on the system (to about 1) with
sysctl was the first change.  That was enough to stop errors for a short
while after a server restart, but the problems weren't really solved till
we turned KeepAlive on (contrary to stuff I'd heard earlier) and cut the
KeepAliveTimeout to 1 second.  This means that the images on a page will
generally go through the same socket as the html, but that the socket will
get freed in between.  After that we ran quite happily with about 30%
CPU and disk resource utilization.
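A minimal httpd.conf fragment matching the settings described above (values illustrative of this one server's tuning, not a general recommendation):

```apache
# Keep the connection open just long enough for a page's images to
# reuse the html's socket, then free it.
KeepAlive On
KeepAliveTimeout 1
MaxKeepAliveRequests 100
```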

The Scoop server runs with two apache daemon clusters, one with a light
weight apache proxying dynamic stuff on to a heavy weight mod_perl server.
As such the settings for most of the parameters you ask about are
different on each of the two servers, and probably not a good guide to
setting up in your situation.

Andrew McNaughton



On Tue, 29 Jul 2003, Voytek Eymont wrote:

> any thoughts on starting values for httpd.conf options like:
>
> (defaults)
> Timeout 300
> KeepAlive Off
> MaxKeepAliveRequests 100
> KeepAliveTimeout 15
> MinSpareServers 5
> MaxSpareServers 20
> StartServers 8
> MaxClients 150
> MaxRequestsPerChild 1000
>
> (just looking on my OS/2 Apache, for reasons that I no longer recall,
> values were modifed or set as follows: (but that was some years ago, with a
> system somewhat less resources)
>
> Timeout 60
> KeepAlive On
> MaxKeepAliveRequests 100
> KeepAliveTimeout 10
> MinSpareServers 5
> MaxSpareServers 10
> StartServers 5
> MaxClients 150
> MaxRequestsPerChild 100
>
> Voytek Eymont
>



Re: [SLUG] httpd.conf performance options suggestions

2003-07-29 Thread Andrew McNaughton

Are you using mod_perl, or just perl CGI?

I doubt mrtg is running so often as to be a performance bottleneck?

Andrew

On Tue, 29 Jul 2003, Voytek Eymont wrote:

> ** Reply to note from Voytek Eymont <[EMAIL PROTECTED]> Tue, 29 Jul 2003 21:07:47
>
>
> > any thoughts on starting values for httpd.conf options like:
>
> also:
>
> this should be enabled, yes ?
>
> 'CacheNegotiatedDocs'
>
> should I also enable the perl section:
>
> #
> #Alias /perl /var/www/perl
> #
> #SetHandler perl-script
> #PerlHandler Apache::Registry
> #Options +ExecCGI
> #
> #
>
> mrtg uses perl, so, that should help ...?
>
>
>
> Voytek Eymont
>



Re: [SLUG] httpd.conf performance options suggestions

2003-07-29 Thread Andrew McNaughton

On Wed, 30 Jul 2003, Voytek Eymont wrote:

> ** Reply to note from Andrew McNaughton <[EMAIL PROTECTED]> Wed, 30 Jul 2003 
> 12:44:32 +1200 (NZST)
>
> > Are you using mod_perl, or just perl CGI?
> >
> > I doubt mrtg is running so often as to be a performance bottleneck?
>
> Andrew,
>
> at this time, I think, the only perl we use is in mrtg;
> a friend of mine asked me to run a small 'dynamic' site that someone
> is supposed to do in perl, I guess, that might be mod_perl.
>
> again, it will be rather narrow interest site, so, I don't expect much
> usage.

I expect you're using some CGI scripts rather than mod_perl (which builds
perl into the apache engine).  That commented-out code you had has no
effect unless you're using mod_perl.

Andrew


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] httpd.conf performance options suggestions

2003-07-29 Thread Andrew McNaughton
On Wed, 30 Jul 2003, Voytek Eymont wrote:

> ** Reply to note from Andrew McNaughton <[EMAIL PROTECTED]> Wed, 30 Jul 2003 
> 12:43:19 +1200 (NZST)
>
>
> Andrew,
>
>
> > we turned KeepAlive on (contrary to stuff I'd heard earlier) and cut the
> > KeepAliveTimeout to 1 second.
>
> that similar to what we done few years back, set it 'ON', when we had a
> busy site serving like 10 hits/sec and saturating 128 link

If bandwidth is your bottleneck (or is expensive), look at using mod_gzip.
Very occasionally you get a problem because someone's behind a broken
proxy, but for the most part it works like a charm, and cuts your traffic
drastically.  It should only be used on content that's not already
compressed - not your gifs, jpegs, etc.
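A minimal mod_gzip fragment along those lines might look like this (Apache
1.3-era directives, written from memory - treat it as a sketch rather than
a tested config):

```apache
# enable mod_gzip, compress text responses, skip already-compressed media
mod_gzip_on             Yes
mod_gzip_item_include   mime  ^text/
mod_gzip_item_exclude   mime  ^image/
```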

Andrew




--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] mixED CAse to lower.case ?

2003-07-30 Thread Andrew McNaughton


Here's a handy perl script which lets you use perl statements to modify
file names.  It may or may not be the same thing as your rename program -
rename is not a standard item in all distributions.

--
#!/usr/local/bin/perl

# rename script examples from lwall:
#   rename 's/\.orig$//' *.orig
#   rename 'y/A-Z/a-z/ unless /^Make/' *
#   rename '$_ .= ".bad"' *.f
#   rename 'print "$_: "; s/foo/bar/ if <STDIN> =~ /^y/i' *

$op = shift;
for (@ARGV) {
$was = $_;
eval $op;
die $@ if $@;
rename($was,$_) unless $was eq $_;
}
--

To do it recursively, you'd combine it with find and xargs:

find . -print0 | xargs -0 rename 's/\.JPE?G$/.jpg/i'

That will turn .JPG, .jpeg or .JPEG suffixes into .jpg.


Andrew



On Wed, 30 Jul 2003, Voytek Eymont wrote:

> Date: Wed, 30 Jul 2003 21:02:29
> From: Voytek Eymont <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] mixED CAse to lower.case ?
>
> what can I use to recursively change file names/extension to all lower case ?
>
> I have some files and/directories like:
>
> I tried rename few times with little effect:
>
> [EMAIL PROTECTED] photos]# rename .JPG *.jpg
> [EMAIL PROTECTED] photos]# ls
> atomfactory1.JPG  makitapage03-s.jpg  makitapage10-s.jpg
> atomfactory2.JPG  makitapage04-s.jpg  makitapage11-s.jpg
> atomfactory3.JPG  makitapage05-s.jpg  makitapage12-450.jpg
> makitapage01-450.jpg  makitapage06-s.jpg  makitapage12-s.jpg
> makitapage01-n.jpgmakitapage07-s.jpg  shopfront.jpg
>
>
>
>
> Voytek Eymont
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] attaching directories in pine

2003-08-03 Thread Andrew McNaughton
On Mon, 4 Aug 2003, David wrote:

> On 4 Aug 2003, Tony Green wrote:
>
> > On Sun, 2003-08-03 at 23:20, David wrote:
> > > can I attach all the files in a directory in one go? or does one have to
> > > add each file one at a time? If so, is there a short hand way of attaching
> > > directories one at a time?
> ^^^
>  files!
> > >
> >
> > Attach to what??? (I'm assuming an email).
>
> does pine do something other than email?   ;-)
>
> >
> > I'd say no, easiest way would be to tar it up and attach the tar file.
> >
>
> that's what I was afraid of :(
> tar is not an option when sending to brain-dead punters, unfortunately.

Is zip any better for your users?

What would you like to happen?

Do you want to attach all the files in a directory separately?  What do
you want to happen to nested directories?

Andrew McNaughton

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Apache vhost logs and file descriptors limits: individuallogs vs single log

2003-08-03 Thread Andrew McNaughton
On Mon, 4 Aug 2003, Voytek Eymont wrote:

> Date: Mon, 4 Aug 2003 10:53:01
> From: Voytek Eymont <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG]  Apache vhost logs and file descriptors limits:
> individual logs vs single log
>
> looking at some of the Apache docs on vhosting states in part:
> -
> /manual/vhosts/mass.html
>
> ...
> The main disadvantage is that you cannot have a different log file for
> each virtual host; however if you have very many virtual hosts then
> doing this is dubious anyway because it eats file descriptors. It is
> better to log to a pipe or a fifo and arrange for the process at the
> other end to distribute the logs to the customers (it can also
> accumulate statistics, etc.).
> ..
> -
>
> at what point should one get concerned: 100 logs ? 1000 ? ?

When you think you might run out of file handles.  That's very dependent
on your system's setup.  At 100 (or even 20) I'd set this up sooner or
later, but it wouldn't be high priority while other things needed doing.
I'd have it set up long before I tried to run 1000 servers in production
use.

> how does one asses how many file handles are available/free/in use ?

I mostly use Freebsd.  Not sure what the proper way to do this is on
linux, but you could just run `lsof | wc -l` to get the number in use.  The
number available is probably compiled into your kernel, but there would
also be a command to tweak it.
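On Linux the equivalents live under /proc and sysctl (again, writing from
memory here, so verify against your distribution's docs):

```shell
# allocated, free and maximum file handle counts, respectively
cat /proc/sys/fs/file-nr

# the system-wide ceiling; tweakable at runtime without a kernel rebuild
sysctl fs.file-max
```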

Andrew

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Apache vhost logs and file descriptors limits: individual

2003-08-03 Thread Andrew McNaughton
On Mon, 4 Aug 2003, Voytek Eymont wrote:

> ** Reply to note from Andrew McNaughton <[EMAIL PROTECTED]> Mon, 4 Aug 2003 13:42:19 
> +1200 (NZST)
>
>
> > I mostly use Freebsd.  Not sure what the proper way to do this is on
> > linux, but you could just run `lsof|wc` to get the number in use.  The
> > number available is probably compiled into your kernel, but there would
> > also be a command to tweak it.
>
> Andrew, thanks
>
> I'm getting 16 instances per each vhost log:
>
> # lsof|grep "sbt.net.au-access.log" |wc -l
>  16
>
> # lsof|wc -l
>3946
> # lsof|grep "/vhosts/" | wc -l
> 544
>
> this implies my vhosts are already using over 10% of all in-use file handles (no ?)

Hmm. They'll be sharing file handles.

lsof +f f turns on the FILE-ADDR field, which gives you an identifier for
each unique file handle.

There will be a better tool for finding out about available file handles.

Andrew

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Apache vhost logs and file descriptors limits: individuallogs vssingle log

2003-08-03 Thread Andrew McNaughton

If you're worried, then why not just go ahead and set it up so that you
log to a single file and split from there.  split-logfile comes with
apache for this purpose.

http://httpd.apache.org/docs/programs/other.html

Andrew




On Mon, 4 Aug 2003, Voytek Eymont wrote:

> Date: Mon, 4 Aug 2003 13:53:23
> From: Voytek Eymont <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] Apache vhost logs and file descriptors limits:
> individual logs vssingle log
>
> ** Reply to note from "Oscar Plameras" <[EMAIL PROTECTED]> Mon, 4 Aug 2003 13:33:12 
> +1000
>
>
> > There is a way around. You can have separate log files for each
> > virtual host. Assign ip number for each virtual host.
> >
> > You may also need encryption between your server and clients for
> > security, in these times when  users have snippers, tcpdump, and
> > ethereal. So, ip numbers for each virtual host is a must.
>
> thanks, Oscar
>
> all these are name based vhosts, and I don't have spare IP addresses to asign to 
> them.
>
> I'm just trying to scertain whether I should run individual vhost logs vs a single 
> log
> for all vhosts, from the perpective of file handles use.
>
>
>
> Voytek Eymont
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] multiple recursive s'n'replace, what's a good tool ?

2003-08-04 Thread Andrew McNaughton

I generally use:

perl -pi -e 's/string a/string b/g' file1 file2 etc

To recurse directories, use find and xargs to supply the file list on the
command line:

find /some/dir -type f -print0 | xargs -0 perl -pi -e 's/string a/string b/g'

It would only take a few minutes to wrap that up as a single command line
utility if you do it often enough to matter.

Andrew




On Mon, 4 Aug 2003, Voytek Eymont wrote:

> what's the recommendation for a m-r-s-r tool ?
>
> I'm looking to replace multiple string pairs in some web files.
> all I need is simple 'string a' for 'string b' swap, every instance,
>
> replace ? lreplace ?
>
> Voytek Eymont
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Benefits of source distro (Gentoo) somewhat elusive :-)

2003-08-04 Thread Andrew McNaughton
On Tue, 5 Aug 2003, Dave Airlie wrote:

> I'll throw my oar in with Jeff on this one.. (as another FOSS contributer)
>
> using Gentoo or LFS (scary thought) for a production Linux server is
> probably the dumbest thing you'll ever do involving Linux... the
> maintenance nightmare alone... gcc optimisation levels don't make a
> massive difference from a lot of real-world POVs, I'd like to see some
> useful real benchmarks but it still wouldn't be worth the hassle of a
> re-building everything from source just to get that small improvement..
> it would probably have to be worth 10-15% speed to make it worth the
> hassle.. you know you can also re-build RH and Debian with higher
> optimisations you could in theory get all the RH SRC RPM and --rebuild
> them with higher opts on ..

I've no experience with Gentoo, but I regularly build systems from source
with FreeBSD, and have been running production servers this way for years.
Using FreeBSD, this is not a maintenance hassle for a system with a single
experienced sysadmin, but where multiple admins are involved, and
particularly where that includes less experienced admins, flexibility of
approach ceases to be an advantage, and I tend towards using debian in
those cases.

I have had significant problems with debian systems where there has been a
policy of using only the official binary distributions.  Like the time we
had a 3 week wait for a debian apache bugfix which was mission critical
for us in putting a new server into production.  Apache fixed it quick,
but debian was slow to catch up.  That was on a testing rather than stable
release, but then the stable release had a version of perl that was nearly
2 years old, and that would not have worked for us either.

Doing a build, or even an install from source is really not difficult if
the distribution's build system is good.  On a modern machine it takes
less than an hour to compile a freebsd distribution, which is a good deal
larger than the core of most linuxes.  You can spend a bit longer going
through ports, but it's still not all that long.

> I don't even re-compile my kernel nowadays unless there is something
> seriously wrong with it, my standard desktop PC at work runs RH standard
> kernel, my laptop sometimes gets pre-release kernels but that's because I
> like ACPI on it...

It's needed less and less often, but there are some nice things you can do
by compiling with non-standard options, or even with a modified compiler.
Stack guards can save a lot of maintenance time if they prevent someone
running a buffer overflow attack.  Not for everyone, but they have their
place.

> I'm not saying Gentoo et al don't have a place in the world, they do but
> that place is not running anything at a production/maintainable level,
> it's more a desktop for people with too much computing power and time on
> their hands or for someone who wants to learn how Linux distros work.
> I think one point that Jeff may be thinking of saying (he may be yet too
> polite :-), is that you are wasting time that would be better spent doing
> something else with, install RH or Debian and use it for stuff, rather
> than waiting for Gentoo to re-build itself...

Fire it up and then get on with all that other stuff.  It's not something
you do every day, and you don't have to sit there watching it.

Andrew McNaughton


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Problem with network printer

2003-08-08 Thread Andrew McNaughton

Check out the CUPS framework for starters.  Probably once you get going
you'll have questions, but they're not the same ones you have now.

KDE provides wizards for most of the setup.  Gnome very likely does also.

linuxprinting.org has PPD files (printer description files) for most
printers, and forums where people discuss lots of printing stuff.

Andrew


On Fri, 8 Aug 2003, N RamMohan wrote:

> Date: Fri, 08 Aug 2003 17:56:46 +0530
> From: N RamMohan <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] Problem with network printer
>
>
> hai all,
> i have one printer connected to a machine in our network
> how we can access that printer without logging into the machine to which the
> printer is attached?
>
> i know some changes need to be made in /etc/printcap file ..
>
> but i dont have any idea of what changes to be made
>
> plz help me
>
> _
> It's raining movies. Bollywood is flooded.
> http://server1.msn.co.in/features/augustmovies03/index.asp August has
> arrived!
>
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Processing files with spaces

2003-08-14 Thread Andrew McNaughton

use grep's --null option in combination with xargs' -0 option:

ls | grep -i foobar --null | xargs -0 -I{} mv {} /newdir

A few other commands have options for using null separators.  eg

find -print0
perl -0

Thing is, a lot of the time what you want is to use the input you've
already got, which is one argument per line, as in the output from ls.  I used
to have a little script around called lxargs which wrapped xargs and
replaced line ends with nulls on stdin.  I've misplaced it but should
really re-implement it.
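If memory serves, the whole of lxargs was probably something like this
one-liner (a reconstruction, not the original script):

```shell
# lxargs -- treat each input LINE as one argument: convert newlines to
# NULs, then delegate everything else to xargs -0
lxargs() {
    tr '\n' '\0' | xargs -0 "$@"
}

# e.g. names with spaces survive intact:
printf 'a b\nc\n' | lxargs -n1 echo    # prints "a b" then "c"
```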

Andrew





On Sat, 9 Aug 2003, Joel Heenan wrote:

> Hello Sluggers!
>
> I am often moving or coping files with spaces in them. What I would really
> like is be able to get just go
>
> | xargs mv %1 /newdir
>
> or something similar. At the moment I am stuck writing scripts like this
>
> #!/bin/bash
> IFS='
> '
> for f in `/bin/ls | grep -i $1`
> do
> mv -v "$f" $2
> done
>
> What I would like is a general solution where I can say execute this command
> on every line in this file. Should I extend this script so it takes in the
> "mv -v" script on command line and works off standard input or is there an
> easier/better way? Surely this is a simple problem but I can't see the
> simple solution. :-(
>
> Joel
>
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] PHP and includes: outside/inside of web root ?

2003-08-14 Thread Andrew McNaughton
On Tue, 5 Aug 2003, Voytek Eymont wrote:

> looking at variety of php scripts/apps, these come with an 'includes' directory below
> the application directory
>
> (so, a brower could go there.)
>
> I always used to move the 'includes' dir to the
> outside-of-web-server-root php path (and, modify the scripts
> accordingly)
>
> BUT, now, as just about any php app has the 'include' below tha
> application path:
>
> so, is there a need to have php's inc files outside the web server root ??
>
> am I wasting my time moving the inc files and modifying scripts ?
> or, is it still a good idea ?

I prefer to keep related files together, but block direct access to the
scripts.

Several approaches come to mind:

1) change the suffixes of all includes (eg to .inc).  Arrange for apache
to deny access to any .inc files - and while you're at it, deny access to
any other extension not in your mime.types file.  That helps with things
like .php~ files left around by emacs users.

2) deny access to any directory with a path containing '/inc/'.  Maybe add
a few other names as well.

3) drop .htaccess files into appropriate directories with directives to
block access.
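The first approach can be expressed in httpd.conf roughly as follows
(standard Apache 1.3/2.0 directives, though you'd adapt the pattern to
your own naming convention):

```apache
# refuse direct requests for .inc include files
<FilesMatch "\.inc$">
    Order allow,deny
    Deny from all
</FilesMatch>
```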

Andrew




--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] HD Sluggish Problem Again

2003-08-14 Thread Andrew McNaughton
On Mon, 11 Aug 2003, Lyle Chapman wrote:

> I posted a message a couple of weeks ago about sluggish HD performance
> on a new computer - it is a 3ghz P4, Gigabyte MB, 1gb ram and 2 brand
> new 80gb Seagate 7200rpm drives.
>
> Even with this setup I am still getting sluggish performance, when I do
> a copy I am getting a burst of 45meg/second then it slowly dwindles
> down to around 20meg/second average but will jump up and down between
> 3.5meg/second and 20meg/second until the end of the copy.
>
> Any ideas anybody as I do a lot of DV editing and it is really
> defeating the purpose of purchasing a new ubeat go faster computer.

I may be out of date here, as this is stuff I remember from about 10 years
ago, but it used to be that as hard drives came under load they would heat
up, and the disk would expand slightly.  That would require a
re-calibration process which would stop the drive recording briefly.  Not
much of an issue for most system usage, but a substantial issue for
multimedia work.  It used to be that there were special drives designed to
avoid this problem.  It could be that's why you're seeing occasional drops
to 3.5M/s.

The initial speed burst you see is probably because you're filling up the
cache in your HD controller rather than actually writing to disk.

Andrew



--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] recursively change file permissions not directories

2003-08-14 Thread Andrew McNaughton
On Tue, 12 Aug 2003, Norman Gaywood wrote:

> Date: Tue, 12 Aug 2003 12:50:01 +1000
> From: Norman Gaywood <[EMAIL PROTECTED]>
> To: Ram Smith <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SLUG] recursively change file permissions not directories
>
> On Tue, Aug 12, 2003 at 11:32:06AM +1000, Ram Smith wrote:
> > I have a shared directory structure where alot of the files in each
> > directory have permisions of 644 I wanting to change it so that the
> > files are chmod 664 letting all users in the group read and write to the
> > data. without nuking the permissions on the directories along with the
> > files.
>
> The way to do this properly, as others are showing, is with a:
>
>   find . -type f | xargs chmod 644

This is not doing things properly and is highly dangerous. eg:

[EMAIL PROTECTED]  touch ";chmod u+sx"

[ some time later ]

[EMAIL PROTECTED] find . -type f | xargs chmod 644
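The underlying problem is that plain xargs splits its input on whitespace
(and quotes), so awkward file names get mangled into multiple arguments;
find -print0 with xargs -0 avoids this.  A quick demonstration:

```shell
tmp=$(mktemp -d)
touch "$tmp/; chmod u+sx"

# default xargs splits the name at the spaces: three bogus arguments
find "$tmp" -type f | xargs -n1 echo

# NUL-separated: the name arrives as a single intact argument
find "$tmp" -type f -print0 | xargs -0 -n1 echo
```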

Andrew McNaughton




--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] anti UCE, anti virus, RBL, postfix server recomendations

2003-08-14 Thread Andrew McNaughton
On Sun, 10 Aug 2003, Voytek Eymont wrote:

> what are suggestions/recommedations/experiences with:
>
> - 'virus' scanning ?
> - reduction/prevention of UCE/spam
>
> does anyone uses real time black hole, which one ?
> (the one I used in the past ceased to exist)

njabl.org is the one that works best for me.  It automatically does relay
tests on any server which is looked up unless it's already in cache.  That
means that it generally finds spam relays within a minute or so of
spammers starting to use them.

> I'm looking at server-wide measures, rather than 'per user'
>
> and, on 'per user' basis: what's a good anti-spam filter ?
>
> can 'per user' filters be executed on the mail server ?

use procmail.

I use:

DCC and vipul's razor for checksums
bogofilter and spam oracle bayesian filters

spam oracle is the only one that comes up with false positives.  All of
those false positives are from outlook users.

spamassassin is also very popular.

Andrew


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


RE: [SLUG] ftp/shell descripancy, can not access symlinks from ftp

2003-08-14 Thread Andrew McNaughton
On Fri, 8 Aug 2003, Voytek Eymont wrote:

> > Is the FTP server configured to mask the pathnames?
> >
> > Some FTP servers by default "lock" the user inside their home
> > directories, so "/home/username" ends up being "/" in FTP.
>
> thanks, Theo
>
> don't know, it'd wuftpd, and, couldn't find any option like that

I'd strongly recommend not using wuftpd based on its security record.

For configurability, go for proftpd.

Andrew McNaughton



--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


RE: [SLUG] recursively change file permissions not directories

2003-08-14 Thread Andrew McNaughton

yes, find ... -exec is a good deal safer.  It used to be that it was based
on (properly escaped) file names, and there was a lot of discussion over
the implications of this for cleaning out the temp directory.  If it uses
file names then there's a potential race condition which can be used to
substitute a symlink for a deeply nested directory tree so that the actual
exec points to some arbitrary file.

Andrew





On Tue, 12 Aug 2003, Rowling, Jill wrote:

> Date: Tue, 12 Aug 2003 17:27:12 +1000
> From: "Rowling, Jill" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: RE: [SLUG] recursively change file permissions not directories
>
> The sequence find . -exec seems safe enough though.
> I suspect it just uses the inode numbers rather than the file name or
> something (but beware this is on Solaris ;)
> bash-2.05$ find thingy -type f -exec file {} \;
> thingy/one/this:empty file
> thingy/two/; file thingy:   empty file
> thingy/three/that:  empty file
> bash-2.05$ ls -lR thingy
> thingy:
> total 6
> drwxrwxr-x   2 rowling  staff512 Aug 12 17:22 one
> drwxrwxr-x   2 rowling  staff512 Aug 12 17:22 three
> drwxrwxr-x   2 rowling  staff512 Aug 12 17:22 two
>
> thingy/one:
> total 0
> -rw-rw-r--   1 rowling  staff  0 Aug 12 17:22 this
>
> thingy/three:
> total 0
> -rw-rw-r--   1 rowling  staff  0 Aug 12 17:22 that
>
> thingy/two:
> total 0
> -rw-rw-r--   1 rowling  staff  0 Aug 12 17:22 ; file thingy
> bash-2.05$
>
> - Jill
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] X Window Manager config question

2003-08-20 Thread Andrew McNaughton

Afterstep and its derivatives are usually pretty good like this.  I think
windowmaker comes into that category and is probably a bit more developed
than most.

How minimalist do you want?  Afterstep derivatives aren't totally
minimalist, but aren't gnome/kde size either.  I use afterstep on an old
pentium 166MX laptop with 64M memory.  It used to have 32M, but that got
tedious with lots of browser windows.

Andrew


On Thu, 21 Aug 2003, Jonathan Kelly wrote:

> Date: Thu, 21 Aug 2003 12:30:08 +1000
> From: Jonathan Kelly <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: [SLUG] X Window Manager config question
>
> Hi,
>
> does anyone know if of a minimalist X WM that allows configuration of WM
> functions like windows resize?  I have an app that uses
> ALT-right-mouse-button for a key function, and my windows manager (icewm)
> uses that for windows resize, though I think it's a pretty standard WM
> function binding.
>
> Or a way of changing the binding, or cancelling it.
>
> cheers.
> Jonathan.
>
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] mysql ..... issues

2003-08-21 Thread Andrew McNaughton
On Fri, 22 Aug 2003, Craig Mead wrote:

> So...I'm on a learning curve and was playing with some user permissions for
> mysql yesterday and well, as I tend to do with things, I screwed it. Ended
> up changing the user permissions in the users table and now I can't reaccess
> the user permissions table to fix the permissionscause I don't have
> permission!
>
> Yet again, probably a stupid move, but I figured I'd just apt remove it and
> reinstall it and we'd be back at square one. HA! 1/2 my luck. Kept the
> tables and such all there. Anyways, I've now got the box in a bit of a mess
> but no matter what I do I can't seem to get rid of the existing tables
> (theres a few user created tables, but I'm only on the learning process so
> it's just crud data and if it goes, so be it). Anyone got any thoughts on
> how to just blast it back to a completely clean install or reset the user
> table or something!

start mysqld up with the --skip-grant-tables option.  That will turn off
all access restrictions so you can go in and fix the access tables
(erm, that's the mysql database, not access... you know what I mean)

When you're done, kill mysqld and restart it as normal.

It can be a bit of a pain figuring out the command line for mysqld which
is normally handled by safe_mysqld, or whatever your distribution uses
instead.  I recommend using 'ps wwaux|grep mysqld' to get the current
command line options and then entering that with the addition of the
--skip-grant-tables option.

Andrew


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

---
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] Problem using 'perl -MCPAN -e shell'

2003-08-22 Thread Andrew McNaughton



On Fri, 22 Aug 2003, Michael Lake wrote:

> This is a perl question. I have installed CPAN.pm which will pull down
> and install packages via a shell like interface - its very similat to
> apt-get.
>
> Problem: it aint working.
>
> I neeed a perl guru. I had a brief look at cpan.org and their mailing
> list for cpan-interface archive and at the perl FAQ but cant see exactly
> the prob i get here.

[...]

> Scanning cache yes/build for sizes
> gzip: yes/sources/authors/id/A/AN/ANDK/CPAN-1.76.tar.gz: No such file or
> directory

I'd say the CPAN config process asked you for a path where you wanted to
store files while building stuff, and you answered 'yes'.  That's not a
path it can find, so everything screws up from there.

locate CPAN/Config.pm

That file contains the preferences you set up.  You could edit it, or you
could just back it up somewhere and delete it, in which case
`perl -MCPAN -e shell` should ask you to provide the configuration details
again.

Andrew


--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


Re: [SLUG] How To Turning Serving Email Logging Off!!

2003-08-25 Thread Andrew McNaughton

change your cron entries so they redirect output to /dev/null.

Eg

15 0 * * * /foo/bar > /dev/null 2>&1
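As an aside, the order of the two redirections matters - `2>&1` written
before the file redirect still lets stderr through.  A quick sketch:

```shell
# stderr is duplicated to the *old* stdout first, so "err" escapes:
( echo out; echo err >&2 ) 2>&1 > /dev/null     # still prints: err

# stdout goes to /dev/null first, then stderr follows it: silent
( echo out; echo err >&2 ) > /dev/null 2>&1
```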

Alternatively, you could redirect output to a file that you check only
when necessary.

Or, you could just address the problems which are generating the warnings?

Andrew McNaughton




On Sat, 23 Aug 2003, Louis Selvon wrote:

> Hi:
>
> Each time cron run some scripts I keep getting the emails of warnings, status
> etc  sent to the admin address of each virtual site where the script ran
> from.
>
> I was not receiving these emails before. How do I turn this email logging off
> on the Apache server ? And how I turn it back on again when testing other
> scripts in the future ?
>
> The server is still on the old RH 7.3.
>
> Cheers.
>
>
>

--

No added Sugar.  Not tested on animals.  May contain traces of Nuts.  If
irritation occurs, discontinue use.

-------
Andrew McNaughton   In Sydney
Working on a Product Recommender System
[EMAIL PROTECTED]
Mobile: +61 422 753 792 http://staff.scoop.co.nz/andrew/cv.doc



-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug