Re: [SLUG] Sticky bit on /var/tmp

2010-03-24 Thread Craig Dibble

Quoting Joel Heenan :


In the past there have been exploits which relied upon winning a race to
modify files placed in /tmp or /var/tmp in order to gain elevated
privileges. Googling "race tmp exploit" will turn up lots of these. It
is almost certainly bad practice to do this.


Hi Joel,

Given that these systems will only ever process one job at a time and  
have no interactive users, that's not really a concern. Even so, it  
still didn't sit comfortably with me and I couldn't put my finger on  
why, so thanks for that.



The reason for this is that we have a large amount of data moving through
that folder, on the order of 100GB or more.


I think data of that size belongs in /var/cache/ or /var/spool/ or
simply somewhere else entirely.


We don't particularly like the fact that the data is there. It's all  
scratch or cache data: some of it we want to persist for given periods  
of time, some we don't care about or want gone as soon as the current  
job finishes. It gets written by a variety of apps, and moving it  
elsewhere, though the best solution, simply isn't going to happen in  
the short term.


In the end I left the permissions the way they were and made the job  
scheduler the owner of the directory.


Cheers,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Sticky bit on /var/tmp

2010-03-10 Thread Craig Dibble

Quoting Jake Anderson :

Does anyone have any thoughts on removing the sticky bit on the  
/var/tmp directory and setting it to 777?


Something about it doesn't sit quite right with me but I can't so  
far find any negative impact of doing so.


Perhaps look at one of the more advanced access control setups, they  
extend the traditional unix user/group access and let you get fairly  
freeform.


I thought about acls, setfacl might do what we want but I was after  
the simplest possible option. I might still have a look at this one.


Alternately although not pretty could you not add your cleanup user  
to the group of the users? IE add cleanup to the frank group?


That was the first thing I thought of (well, the second, after sudo),  
but group perms won't make a difference on a directory with the sticky  
bit set.


Thanks,
Craig


Re: [SLUG] Sticky bit on /var/tmp

2010-03-09 Thread Craig Dibble

Quoting Daniel Pittman :


If you're curious, this is a large render farm controlled by a homegrown job
scheduler, the users submit jobs and the scheduler takes over - hence our
current problem.


See, this is why I like tools like Condor, PBS, or the Sun Grid Engine.  You
get to let other people pay the big money to build features like cleaning up
for you, so you don't have to.  You get to play more golf.[1]


Heh-heh. Unfortunately we're in full-blown production, so golf is not  
an option. Neither is replacing our systems.



Might it be easier to provide a compatible API to users, and replace the
inside with one of these tools?


Not my call, but I can guarantee it's never going to happen; it is way  
too entrenched!



We have pre and post hooks available though, so maybe doing a chmod or chown
on the directory at the start and end of every job would suffice to keep
everyone happy?


If you can do that, can't you just remove the files as root?  Unless you are
running on some strange platform that allows giving away file ownership to
unprivileged users or something...


Not quite, as the cleanup process runs before the job exits. Anyway, it  
turns out this is not an option (my bad), as the pre and post hooks  
only operate at the farm process start/stop level, not at the  
individual job level. Hey-ho.



Daniel

Footnotes:
[1]  ...for some value of golf attractive to you, of course. ;)


Our job is to make sure these things play nice in our environment.  
That can be a game in itself at times - with the commercial apps more  
so than the homegrown ones most of the time ;-)


Thanks again,
Craig


Re: [SLUG] Sticky bit on /var/tmp

2010-03-09 Thread Craig Dibble

Quoting Daniel Pittman :


Craig Dibble  writes:


Does anyone have any thoughts on removing the sticky bit on the  /var/tmp
directory and setting it to 777?


Why would you want to allow unprivileged users to delete temporary files
created by other unprivileged users?


For the reasons given below. If it was just a case of letting tmpwatch  
take care of it, it wouldn't be a problem, but the amount of data churn  
is just too high.



Something about it doesn't sit quite right with me but I can't so far find
any negative impact of doing so.


Other than the marginally increased security risk (probably mostly theoretical
in these days of one-user-per-machine), there isn't much.


That's good to know. Like I said, they're closed systems and already  
locked down so if that's all there is to it we could live with that.



The reason for this is that we have a large amount of data moving through
that folder, on the order of 100GB or more. We have cleanup scripts which
need to be able to remove files and folders to reclaim space every time a
job finishes, but the files are created by the user who launched the job,
while the control process, and hence the cleanup, runs as a different user.
And there we have a problem: the sticky bit prevents the cleanup from
running, and we have boxes falling over because their disks fill up.


...er, is there any strong reason to run the cleanup script as some
user other than root?


Only that it isn't really a cleanup script per se; it's a process that  
runs deep in the job scheduler code - hence the fact that our original  
and perfectly sensible suggestion of sudo would have been very painful  
to implement.

At the moment it looks like the least painful option is simply to make  
the job scheduler the owner of /var/tmp.


Thanks for your insight.

Cheers,
Craig


Re: [SLUG] Sticky bit on /var/tmp

2010-03-09 Thread Craig Dibble

Quoting Peter Miller :


On Wed, 2010-03-10 at 10:07 +1100, Craig Dibble wrote:

Does anyone have any thoughts on removing the sticky bit on the
/var/tmp directory and setting it to 777?


Don't do it.


Yes, but why not? That's the bit I'm not sure about.


The sticky bit means a user can delete a file in the directory only if
they own that file; in addition, the owner of the directory (and root)
can delete any file in it.


It's possibly worth saying that these are closed systems with no  
interactive users, so we are not worried about users accidentally  
deleting each others' files. My concern, though, is what other impact  
there might be.
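For reference, the semantics being discussed show up in the mode string; a quick sketch on a throwaway directory (GNU stat assumed, safe to run as any user):

```shell
# Create a scratch directory and give it /var/tmp-style permissions.
d=$(mktemp -d)
chmod 1777 "$d"        # the leading 1 sets the sticky bit
stat -c '%A' "$d"      # drwxrwxrwt: the trailing 't' is the sticky bit
chmod 0777 "$d"        # the change proposed in this thread
stat -c '%A' "$d"      # drwxrwxrwx: anyone with write access can now delete
rmdir "$d"
```

With 1777, a file can only be removed by its owner, the directory's owner, or root; with 0777 any user can remove anything.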



I would suggest using a rwxrwxrwt spool directory
in /var/spool/something owned by the uid that process the data.
Any user can spool data for processing, but only the app (and the user
who spooled the data) can remove the data files.


This is a possibility, but an exceedingly difficult one, as any number  
of homespun scripts, plugins and applications can be brought into  
play, along with a large number of commercial applications, and they  
would all need to be told to write to the spool directory. Nice idea  
though, and you have given me another idea - maybe just making the  
control process the owner of /var/tmp would solve the problem?


If you're curious, this is a large render farm controlled by a  
homegrown job scheduler, the users submit jobs and the scheduler takes  
over - hence our current problem. We have pre and post hooks available  
though, so maybe doing a chmod or chown on the directory at the start  
and end of every job would suffice to keep everyone happy?
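If the hooks do pan out, the idea sketches out to something like this (both hooks would need to run as root; "renderctl" and JOB_USER are placeholder names, not anything from our actual setup):

```shell
# Hypothetical pre/post job hooks around each render job.
pre_job() {
    chown "$JOB_USER" /var/tmp         # let the job write its scratch data
}
post_job() {
    chown renderctl /var/tmp           # directory owner can delete despite the sticky bit
    find /var/tmp -mindepth 1 -delete  # reclaim the space before the next job
}
```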


Cheers,
Craig


[SLUG] Sticky bit on /var/tmp

2010-03-09 Thread Craig Dibble

Hi Hive Mind,

Does anyone have any thoughts on removing the sticky bit on the  
/var/tmp directory and setting it to 777?


Something about it doesn't sit quite right with me but I can't so far  
find any negative impact of doing so.


The reason for this is that we have a large amount of data moving  
through that folder, on the order of 100GB or more. We have cleanup  
scripts which need to be able to remove files and folders to reclaim  
space every time a job finishes, but the files are created by the user  
who launched the job, while the control process, and hence the  
cleanup, runs as a different user. And there we have a problem: the  
sticky bit prevents the cleanup from running, and we have boxes  
falling over because their disks fill up.


I'm fairly sure the first response to this will be "use sudo", as that  
was our first response too, or "store the data somewhere else". Both  
of these are possible but difficult, the latter exceedingly so. We've  
tried to think of every sensible alternative, but the simplest fix  
would be to just change the permissions and hope there isn't something  
which is going to bite us as a result.
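For completeness, the sudo route needn't be a blank cheque; a sketch of the sudoers entry it would take, with the account name and command entirely made up for illustration:

```shell
# /etc/sudoers fragment (edit with visudo): allow the scheduler account to
# run exactly one cleanup command as root, and nothing else.
# renderctl ALL=(root) NOPASSWD: /usr/bin/find /var/tmp -mindepth 1 -delete
#
# The cleanup would then invoke:
# sudo /usr/bin/find /var/tmp -mindepth 1 -delete
```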


Any cautionary tales gratefully received.

Cheers,
Craig


Re: [SLUG] Ubuntu friendly PCI/USB WiFi?

2008-09-01 Thread Craig Dibble

Quoting Ben <[EMAIL PROTECTED]>:


I need to buy a PCI or USB WiFi card that works with Ubuntu, and will _keep
working_. I just can't seem to find anything concrete - maybe a market
opening I should be exploiting?


I've got a Netgear 802.11g model at home that "just works", but I  
found I had to switch off network manager and configure it manually,  
otherwise it broke every time I logged in.


Can't remember the model but I can check for you this evening.

HTH,
Craig


[SLUG] Hardware stress test for 64bit arch

2008-08-20 Thread Craig Dibble

Hi all,

I'm after some hardware stress testing utils for 64bit linux -  
specifically network, CPU and memory.


I have a feeling this has come up recently but can't find the  
reference - I know someone suggested bonnie++ on a similar thread  
recently, but as far as I can see it hasn't been updated for 5 1/2  
years, and there are a few similarly unmaintained toolsets out there.


I'm going to have a look at bonnie++, but if anyone has any other  
suggestions I'd be most grateful.


Cheers,
Craig
PS - I'll be testing this on a Centos5 box, if that matters.


[SLUG] OpenNMS Users - Drinks with Tarus Balog

2008-08-18 Thread Craig Dibble

Hi all,

Sorry for the short notice, but I've got Tarus Balog, the 
maintainer/mouth/CEO of OpenNMS out here for a project and he's 
interested in meeting up with any Sydney based users or interested 
parties for informal drinks this Friday night, 22nd August, somewhere in 
the CBD.


If anyone can suggest a venue that would be good, as I've been out of the 
CBD for a while now and I'm not sure where is in favour. I know the James 
Squires Brewhouse used to be popular, but if it's not on the hotlist any 
more I'm open to suggestions.


Thanks, and hope to finally meet some Sluggers if any of you can make it,

Craig


Re: [SLUG] reasonable mail message size ?

2008-07-01 Thread Craig Dibble

Jamie Wilkinson wrote:

2008/6/26 Craig Dibble <[EMAIL PROTECTED]>:


there are any number of online sites that will allow you to
move files around. Can't think of any of the names offhand but I'm sure a
search engine will be your friend here.


yousendit.com and filebucket are two sites that spring to mind.


Yeah, those are the ones I was thinking of. Since I never use them 
myself I obviously had no reason to store the names in any useful part 
of my brain ;-)



Re: [SLUG] BBC iPlayer beta

2008-06-29 Thread Craig Dibble

Quoting Richard Ibbotson <[EMAIL PROTECTED]>:


Some of you might not know about this and so I thought I'd send in
some info



to try out the new BBC iPlayer go to...

http://www.bbc.co.uk/iplayerbeta/


Unfortunately that's not much use for most people on this list, as  
they are outside the UK, so it won't work!


I tried last night as I was trying to watch some of the Glastonbury  
coverage. I didn't expect it to work and what do you know, I was right  
:-(


Craig


Re: [SLUG] reasonable mail message size ?

2008-06-25 Thread Craig Dibble

Quoting Voytek Eymont <[EMAIL PROTECTED]>:


what's a reasonable email size limit that people set on their mail server ?

I have 10MB which I thought was 'reasonable' ?


A lot of (most? all?) mail servers will by default reject messages  
over 10MB. Remember, there are encoding overheads as well, so a 10MB  
attachment will add several more MB to the resulting message.
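To put a rough number on the overhead: base64, the usual attachment encoding, turns every 3 bytes into 4 ASCII characters and adds a line break every 76 characters, so:

```shell
# Approximate on-the-wire size of a 10 MB attachment after base64 encoding.
awk 'BEGIN {
    raw = 10 * 1024 * 1024        # 10 MB attachment
    enc = raw * 4 / 3             # 3 bytes become 4 characters
    enc += enc / 76               # plus one newline per 76-character line
    printf "%.1f MB\n", enc / 1024 / 1024
}'
# prints: 13.5 MB
```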


With most people on superfast connections these days there is a  
tendency to think that it's OK to fling huge attachments around by  
email. In reality, email is a poor way to transfer large files. Aside  
from the encoding issues, there are also SMTP transport/timeout issues  
to contend with.


Hands up everyone here who's constantly trying to educate their  
family not to send them huge emails with unresized digital photos  
attached. I try to tell them that anything over 3MB is really too big  
- even on my connection, currently sitting at 16687 kbps, it can take  
a long time to download a 3MB email.


There are plenty of other ways to transfer larger files these days. If  
FTP isn't an option there are any number of online sites that will  
allow you to move files around. Can't think of any of the names  
offhand but I'm sure a search engine will be your friend here.


HTH
Craig


Re: [SLUG] Is someone is snooping my wireless?

2008-06-23 Thread Craig Dibble

Quoting Jonathan Lange <[EMAIL PROTECTED]>:


More broadly, generating your wireless key with a cryptographically
secure RNG seems to me to be overkill for most people. Buying
specialty dice for it seems plain silly.[1] Flipping a coin eight
times doesn't take much longer than rolling 4d4, 2d16 or rolling 3d8
and dropping a bit, and saves you a trip to the shops.


Sorry, but all this talk of dice reminded me of this:

http://xkcd.com/221/

Just about sums it up really ;-)

Craig




Re: [SLUG] Re: Sending mail from within a highly locked down network

2008-04-20 Thread Craig Dibble

Quoting [EMAIL PROTECTED]:

maybe a quick and nasty shell/python/perl script to  
change/update/swap your configuration file is what you need


Indeed.

I've done it this way in the past, usually just by running the script  
manually, but you could attach it to an if-up script or even your  
.profile to work out where you are and make the changes automatically.


Craig


Re: [SLUG] Sending mail from within a highly locked down network

2008-04-20 Thread Craig Dibble

Quoting Mary Gardiner <[EMAIL PROTECTED]>:


Background: my normal mail setup uses Postfix on my laptop to send
outgoing mail. My university has blocked all outgoing ports except 80
(and they may have a transparent proxy in front of that) and 443 on
their wireless network.


Might be stating the obvious, but webmail?

Not quite so obvious: if you don't have a webmail interface set up, it  
is possible to tunnel an ssh proxy through apache. There are numerous  
sources of info on this; something like this, for example:


http://dag.wieers.com/howto/ssh-http-tunneling/
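A minimal ~/.ssh/config sketch of the idea in that howto, assuming an sshd you control is listening on port 443 and a helper such as corkscrew is installed (all host names here are made up):

```shell
# ~/.ssh/config fragment: reach the outside sshd through the campus web
# proxy, then point the mail client at the local SOCKS port from
# DynamicForward.
# Host outside
#     HostName home.example.org
#     Port 443
#     ProxyCommand corkscrew webproxy.example.edu 8080 %h %p
#     DynamicForward 1080
```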

Hope that helps,
Craig


Re: [SLUG] getting SUSE 10.3 on a Mac PowerPC...

2008-03-18 Thread Craig Dibble

Quoting Stuart Waters <[EMAIL PROTECTED]>:

Does anyone have any tips (instructions?  where to get  
instructions?) on how to install SUSE 10.3 on the powerPC mac?


Perhaps I need to boot from the dvd?  I'm afraid I don't know how to  
do it.  I restart with the dvd in the drive and it opens up normally  
to mac os.


On the old PowerMacs you can boot from CD/DVD by holding down 'c' at startup.

You may find this site useful:

http://en.opensuse.org/POWER%40SUSE

Specific instructions for booting from CD are here (you may have to  
adapt them for the DVD):


http://en.opensuse.org/Booting_on_PowerMac

Good luck,
Craig


Re: [SLUG] hard drive failure, back-up, and other unhappiness

2008-03-05 Thread Craig Dibble

Quoting david <[EMAIL PROTECTED]>:


I've had a back-up hard drive fail today (just the backup drive, not the
original)

Worse still, my son's hard drive failed and then his back-up drive also
failed, so he is in deep doo-doo.

Fail = clicking noises, won't mount or mounts then won't read/write,
etc.


Whilst not an answer to your question, if you're fairly sure the drive  
is terminal it might be time to try a bit of percussive maintenance. I  
remember having similar problems with a drive many years ago, and a  
sharp smack on the side of the desk did actually fix it. Whether the  
drive heads were stuck or something else, I don't know, but it was  
supremely satisfying nonetheless...


File that under the "Please don't try this at home" category ;-)
Craig



Re: [SLUG] /etc/mail/access for a secondary MX

2008-02-13 Thread Craig Dibble

Quoting Nigel Allen <[EMAIL PROTECTED]>:



On 14/02/2008 10:17 AM, Craig Dibble wrote:


I might be missing something, but IIRC you can just list the actual  
[EMAIL PROTECTED] in the access file and filter for allowed users  
that way.



Not rubbish at all - I'm afraid it's just not quite that simple.

Here's what we were using - masked for privacy:

abc.com RELAY
abc.com.au RELAY
abc.net RELAY
xyz.com.au RELAY

[EMAIL PROTECTED]   RELAY
[EMAIL PROTECTED]   RELAY
[EMAIL PROTECTED]RELAY
[EMAIL PROTECTED] RELAY
[EMAIL PROTECTED] RELAY
[EMAIL PROTECTED] RELAY
[EMAIL PROTECTED]   RELAY

To:aaa.com.au  "550 User unknown"

This resulted in everything for aaa.com.au being bounced "We do not relay".


Ok, I got you. Try reversing the logic:

To:aaa.com.au  "550 User unknown" (or simply aaa.com.au REJECT)
[EMAIL PROTECTED]   RELAY
[EMAIL PROTECTED]   RELAY
[EMAIL PROTECTED]RELAY
[EMAIL PROTECTED] RELAY
[EMAIL PROTECTED] RELAY
[EMAIL PROTECTED] RELAY
[EMAIL PROTECTED]   RELAY

I seem to recall it operates on last match, not first.

Craig


Re: [SLUG] /etc/mail/access for a secondary MX

2008-02-13 Thread Craig Dibble

Quoting Nigel Allen <[EMAIL PROTECTED]>:

I want to change the /etc/mail/access from a simple "RELAY" to  
something that will check for valid addresses for that domain and  
reject any BS ones.



Can anyone point me in the right direction please?

sendmail 8-14-1 on FC6.


I might be missing something, but IIRC you can just list the actual  
[EMAIL PROTECTED] in the access file and filter for allowed users that  
way.


Please feel free to ignore me if I'm talking rubbish, it's been quite  
a few years since I was that intimate with Sendmail.


Craig


[SLUG] Which which?

2007-08-02 Thread Craig Dibble
Hi all,

Just an idle curiosity for a Friday afternoon, but does anyone know
which version of which is included in the debianutils package (in Ubuntu
Feisty), and why it is so woefully out of date?

For instance, on one of the FC4 boxes I have to look after, which is the
standalone package:

which-2.16-6

Which is itself somewhat elderly as even though it is still the current
version it was released back in 2003, but it is far more sophisticated
than the totally no frills version in debianutils.

The reason I ask is because I was banging my head against the wall
trying to remember what that command is that can expand aliases in
bashrc files, because, you see, what I was thinking of is the fact that
on other systems which is itself an alias:

[EMAIL PROTECTED]:-$ which which
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot
--show-tilde'

So obviously, the answer to my earlier headbanging was in fact alias,
but the one I was actually thinking of was which, which means I don't
have to remember if a command is an alias or not in the first place.
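Incidentally, bash's `type` builtin answers the "is it an alias?" question directly, with no which-wrapping required (scripts need expand_aliases switched on first):

```shell
shopt -s expand_aliases   # on by default in interactive shells only
alias ll='ls -l'
type -t ll                # prints: alias
type -t cd                # prints: builtin
```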

Plus I wanted to see how many times I could write the word which in
seemingly odd places in one email whilst still being grammatically
correct throughout. Maybe I should just go to the pub instead ;-)

Thanks,
Craig


Re: [SLUG] please unsubscribe all wildtecnology.net / wildit.com.au email addresses from SLUG lists

2007-07-17 Thread Craig Dibble
Timothy Bolot wrote:
> Please remove all wildtechnology.net / wildit.com.au email addresses from 
> your lists.

> 
>> Date: Tue, 17 Jul 2007 14:27:08 +1000> From: [EMAIL PROTECTED]> To: [EMAIL 
>> PROTECTED]> Subject: Re: [SLUG] SpamAssassin - MailMarshall Replacement


I never even noticed at first that I had been singled out as the
cause of his outburst.

I wonder if that means I'm on his hitlist now?

Craig



Re: [SLUG] SpamAssassin - MailMarshall Replacement

2007-07-16 Thread Craig Dibble
Trent Murray wrote:

> I currently have a customer using MailMarshall Email filter - this
> product allows the users to log on via a web client and check for
> messages that have been marked as spam, release mail if necessary and
> amend rules.  Can anyone recommend a similar front end for
> spamassassin that can be used by Jo User without too much
> complication?

This may be a bit overblown for Jo User but have a look at MailWatch for
MailScanner:

http://mailwatch.sourceforge.net/doku.php

With the obvious implication being that you also need to install
MailScanner:

http://www.mailscanner.info/

It's a pretty impressive combo, but it may require you to get your hands a
bit dirtier than you'd like, so it may not be quite suitable.

Craig


Re: [SLUG] LDAP and keepalive errors

2007-03-07 Thread Craig Dibble
Jamie Wilkinson wrote:
> This one time, at band camp, Craig Dibble wrote:
>> ...right up until I deployed our new LDAP servers to production. Now I
>> find that I get intermittent failures from the keepalive script 

> Immediately I am thinking that the problem is somewhere in NSS.  Timeouts
> due to LDAP connection overheads, fd leaks in nss_ldap, nscd's very
> existence, all could be causing something to fail.

This is my thinking too, but I'm at a loss as to how I might debug this.

> Unlike Solaris, POSIX and Linux don't cater to temporary failure, so
> anything that explodes in the pipeline is going to return a failed lookup
> (and if you're using nscd, it'll cache that negative if you're really
> unlucky.)

Again, that's what I thought, but I still get it even with the caches
cleared, disabled, or nscd stopped.

>> [1] As a temporary fix I have put a simple hook in the keepalive script
>> to die if the returned process list is empty. 

> Is there a timeout on the process list command in the keepalive script?

No, that would be my next step in tidying up the script. I'll probably
do that today just so I can see if it is in fact timing out or if there
is some other issue causing the failures. To be honest, I'd rather bin
the script and start again but that won't help me understand why this is
happening.

> Do you get an empty process list when you run it by hand?

> The first thing to try is to replicate the conditions in the script to get a
> repeatable failure of ps.  Once you've done that, you'll have some idea as
> to where to look next.

Like I said, it's intermittent so very hard to replicate. I haven't yet
managed to figure out exactly what else may be occurring at the exact
instant it fails, or indeed how/why it fails.

As far as I have been able to ascertain simply running the ps command by
hand does not seem to fail, but interestingly, with the quick fix I put
in yesterday I put a backticked date command in the 'die' expression to
print a timestamp to the log file (to compare against any potential
further false positives). About 1/3 of the failures overnight in the log
have no timestamp on them. The ps command itself is also contained in
backticks. I'm not sure what that's telling me, but I think it's time to
add some debugging to the script and see what is going on at each stage.

Craig


[SLUG] LDAP and keepalive errors

2007-03-06 Thread Craig Dibble
Hi all,

I have another head scratcher that I hope someone can help shed some
light on:

We have a home-grown perl keepalive script that runs via cron every ten
minutes to monitor various processes on our production systems. It runs
ps -ef and compares the output to a list, restarting processes if they
stop, or killing and restarting them if they run away. This is a legacy
script that has been running quite happily for several years...

...right up until I deployed our new LDAP servers to production. Now I
find that I get intermittent failures from the keepalive script whereby
it reports that some or all of the processes it is monitoring have died,
tries to restart them, and fails.

We eventually determined that the reason it is failing to restart the
processes was that they were in fact still running, and what had
actually happened was that the ps command had returned no output so the
script assumed they were all dead.

Now I know that this comes down to the fact that there is no error
handling in the script, and I could fix this quite easily[1], but what I
need to understand is why this is happening in the first place and how
LDAP could be having this effect on the script. Stop the LDAP servers
and the false warnings stop.

Initially I considered that the ps command might be timing out for some
reason when trying to look up the names of the owners of the processes,
but adding a flag to return only the UIDs made no difference.

I have read some reports of issues with nscd (which may be related to my
first supposition) but either switching it off or disabling the caches
also makes no difference.

Other information that may be useful: OS is FC4, I'm running nss_ldap,
using nisNetgroups and using ACLs for security via PAM's access module.

Finally - the problem doesn't just occur on the LDAP master and slave,
it also occurs on client systems.

Does anyone have any idea what else might be happening here as it's got
me fairly flummoxed right now?

[1] As a temporary fix I have put a simple hook in the keepalive script
to die if the returned process list is empty. This works ok for the
moment but without knowing why it is behaving like this I can't help but
feel all I might be doing is putting a bandaid on a bigger problem.
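For what it's worth, the bandaid is probably the right shape regardless of the root cause; sketched in shell rather than the script's perl, with the log path a placeholder:

```shell
# Treat an empty process listing as "the check failed", never as "everything
# died": restarting live processes is worse than skipping one cron cycle.
procs=$(ps -ef 2>/dev/null)
if [ -z "$procs" ]; then
    echo "$(date): ps returned no output, skipping this run" >> /tmp/keepalive.log
    exit 1
fi
# ...the usual compare-against-the-list logic would follow here...
```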

Thanks in advance for any pointers,
Craig




Re: [SLUG] RAID Performance Oddness - Update

2007-03-01 Thread Craig Dibble
Craig Dibble wrote:
>>> Hand-waving aside, I
>>> think this explanation fits the bill.

Also, the bill in question was me trying to convince the application
developers there was nothing wrong with the hardware ;-)
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RAID Performance Oddness - Update

2007-03-01 Thread Craig Dibble
Robert Collins wrote:

>> Hand-waving aside, I
>> think this explanation fits the bill.
> 
> I dont, because you have ignored the parallelism in each spindle.
> 
> With 10 disks, doing 10 writes, one per disk, should take precisely as
> long as 5 disks, doing 5 writes, one per disk, as long as you have
> bandwidth on your SCSI bus.

Yes, I'm aware of that. Unfortunately it completely blows my theory out
of the water and leaves me right back where I started.

To be honest, the only way I could get an accurate comparison would be
to take down ServerA and rebuild it as RAID10 and see what results I get
as there are simply too many other factors at play in attempting a
direct comparison of the two boxes, and unfortunately at the moment
there is no way I can do that.

Under certain conditions ServerA was indeed slower than ServerB but on
the whole the difference I saw when I changed from RAID10 to RAID5 on
ServerB was significant and consistent.

And like I said, the results I was seeing consistently defied anything I
would have expected.

Craig


Re: [SLUG] RAID Performance Oddness - Update

2007-03-01 Thread Craig Dibble
Hi folks,

After further testing on the new server I think I might be able to
explain the results of the benchmark tests for sequential and random
writes on RAID 1+0 (or RAID 10) as opposed to RAID 5.

Apologies for long (and possibly totally inaccurate) post, and I'll
state up front that most of the figures here are a bit hand-wavey and
probably require you to stand back and squint ;-)

So -

Our initial benchmark results (averaged out over multiple tests):

 ServerA (RAID5)   ServerB (RAID10)
SequentialWrite: 29  seconds   42 seconds
RandomWrite: 650 seconds   82 seconds

Now it gets interesting:

 ServerA (RAID5)   ServerB (RAID5)
SequentialWrite: 29  seconds   31  seconds
RandomWrite: 650 seconds   160 seconds

I did far more tests than this, but these are the salient results. So,
how to explain this?

Let's ignore the RandomWrites for now as, without going into it here,
the results are exactly as I'd expect.

So what is going on with the SequentialWrites, and why is RAID5 faster
than RAID10, and ServerA still faster than ServerB?

An important fact to bear in mind here is that both systems have the
same RAID controller with an onboard cache of 192MB, and both the
controller and the system bios have optimisations which will favour
sequential writes.

Taking ServerA first - we have 8 disks in a RAID5 array, simplistically
speaking that gives us 7+1 disks (7 for data, 1 for parity). So to write
7 blocks of data sequentially we need to do 8 writes (we are assuming
the controller at this point is intelligent enough to handle the parity
calculations and simply write from the cache).

On ServerB with 6 disks in a RAID1+0 array (a striped mirror) to write 6
blocks of data we have to do 12 writes as we are writing the data and
the mirror.

Changing ServerB to a RAID5 array with 6 disks (so 5+1), now to write 5
blocks of data we only have to do 6 writes, so now it only takes 31
seconds as opposed to 42 previously.

So it seems fairly obvious from this alone that RAID1+0 is not optimised
for sequential writes (again, a very hand-wavey generalisation).

But why is ServerA still faster under RAID5?

Putting some figures against this gives us a very interesting picture...

First:
6 writes for 5 blocks gives us: 6/5 = 1.2
8 writes for 7 blocks gives us: 8/7 = 1.14

The ratio between these gives us:

1.2/1.14 = 1.05

Now - look again at the results for the RAID5 sequential writes:

 ServerA (RAID5)   ServerB (RAID5)
SequentialWrite: 29  seconds   31  seconds

So that's:

31/29 = 1.07

That's pretty damn close to the predicted write-overhead ratio for 6
disks as opposed to 8.
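The back-of-envelope numbers above are easy to sanity-check with awk (this is just my hand-wavey model, not a measurement):

```shell
# writes issued per data block for each layout, plus the ratio between
# the two RAID5 arrays
awk 'BEGIN {
    raid5_8disk  = 8 / 7;    # 7 data blocks cost 8 writes (1 parity)
    raid5_6disk  = 6 / 5;    # 5 data blocks cost 6 writes (1 parity)
    raid10_6disk = 12 / 6;   # every block is written twice (mirror)
    printf "RAID5, 8 disks:  %.3f\n", raid5_8disk
    printf "RAID5, 6 disks:  %.3f\n", raid5_6disk
    printf "RAID10, 6 disks: %.3f\n", raid10_6disk
    printf "6-disk vs 8-disk RAID5 ratio: %.2f\n", raid5_6disk / raid5_8disk
}'
```

which lands on 1.05 - within hand-waving distance of the measured 31/29 = 1.07.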

Obviously there are other factors that will influence the results here,
such as CPU usage, load, etc, but I have tried to get an average of the
test results under relatively low load conditions on as close to a level
playing field as it is possible to get.

I think it's fair to say from this that the hardware is actually
performing within reasonable parameters, and RAID1+0 is definitely not
the optimal configuration for sequential writes. Also, if I were to put
two more disks in ServerB then by my reckoning it would indeed be faster
than ServerA in a RAID5 configuration.

It may well be that spelling it out like this makes it look obvious what
the problem is, but since the results I was getting seemed to be
contrary to popular wisdom, and to any documentation I have read, I had
a hard time trying to explain what I was seeing. Hand-waving aside, I
think this explanation fits the bill.

The next step now is for the application developers to tell me how their
application actually works and what configuration they think will be
optimal.

Craig
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RAID Performance Oddness

2007-02-25 Thread Craig Dibble
Peter Chubb wrote:

> I'd suggest having a look at the disk scheduler.  If you're using AS,
> try deadline instead.  Otherwise write performance can suck for this
> kind of setup (AS is really aimed at a single-user machine).
> 
> # cat /sys/block/sda/queue/scheduler
> noop [anticipatory] deadline cfq
>    This can be bad on RAID if you're a server.

Nice thought, and something I hadn't checked, but unfortunately:

# cat /sys/block/cciss\!c0d1/queue/scheduler
noop anticipatory deadline [cfq]

Changing it to deadline showed no appreciable difference in the results,
but neither did any of the others.

I may be missing something (or showing my ignorance), but should I be
expecting to see a difference here given that this is hardware RAID?
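For anyone else poking at this: the bracketed name in that file is the active elevator, and switching it is a plain sysfs write (needs root). A quick sketch, parsing a sample line rather than the live file:

```shell
# the scheduler file lists all elevators, with the active one in brackets,
# e.g. the output of: cat /sys/block/cciss\!c0d1/queue/scheduler
line='noop anticipatory deadline [cfq]'

# pull out just the active one
echo "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p'

# to switch at runtime (as root; takes effect immediately but does not
# persist across a reboot):
#   echo deadline > /sys/block/cciss\!c0d1/queue/scheduler
```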


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RAID Performance Oddness

2007-02-25 Thread Craig Dibble
Tony Sceats wrote:

> I'm really not too sure about the increase in sequential writes, however I
> imagine that this could very well be due to the disks being on the one bus,
> where they were on 2 previously. Can you try putting disks on different
> busses in the RAID 10 system? I think the best way is to have 1 set of
> stripes on each bus, so your mirrors are on different busses.

Yeah, I'm going to schedule that for this week. I think it got built
like that due to a difference in the chassis - the old box had an even
split of 5 slots on each channel. For some reason this box has 8 slots
on channel 1, and just 3 on channel 2, so whoever put the box together
put one system disk on each side and then just shoved all 6 data disks
in the other channel.

I'll post back to the list after the rebuild with the results of running
the tests again, but meantime if anyone has any other ideas please let
me know.

Thanks again,
Craig

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] appliance - run single application

2007-02-25 Thread Craig Dibble
david wrote:

>> * I won't say 'a quick google' as I don't use it ;-)
> 
> Am I always the dumb person who asks the obvious dumb question?
> 
> Why, and what do you use?

I use http://www.alltheweb.com/

Mainly for historical reasons really. Back when I first started in tech
support and search engines were few and far between I found the results
from AllTheWeb to be consistently better than any of the others,
including the fledgling google. Plus it never seems to have suffered
from the various ploys used on google to artificially influence the results.

And before anyone points it out - I am well aware that they have since
been taken over / co-opted by Yahoo and I'm not sure my preference has
any great merit any more, but having been using it for so many years I
can't be bothered making the switch.

Plus I've always been a supporter of the underdog. Macs and Linux
instead of Windows, AMD instead of Intel, anyone other than Michael
Schumacher - that kind of thing ;-)

Craig
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] appliance - run single application

2007-02-22 Thread Craig Dibble
Martin Barry wrote:
> my google-fu is letting me down.
> 
> i want run a single application on a machine, appliance like.
> 
> was looking for a howto for ubuntu or debian but i'm obviously using the
> wrong search terms.

Not sure I understand what you're asking, but do you by any chance mean
kiosk mode? A quick search* on that reveals plenty of links.

Craig
* I won't say 'a quick google' as I don't use it ;-)
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RAID Performance Oddness

2007-02-22 Thread Craig Dibble
Craig Dibble wrote:
> Hi all,
> 
> I have a question for any hardware experts out there: I'm currently
> scratching my head over an unexpected performance issue with a relative
> monster of a new machine compared to its older, supposedly
> superseded counterpart.

By the way - as far as I have been able to ascertain all other things
are equal. Both boxes are using the same HP Smart Array 6402/128
controller and all other settings are equivalent - such as the
cache/Accelerator Ratio settings.

Also, if it matters - the block buffer cache is disabled in the small
test program (which I didn't write).

Craig
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] RAID Performance Oddness

2007-02-22 Thread Craig Dibble
Hi all,

I have a question for any hardware experts out there: I'm currently
scratching my head over an unexpected performance issue with a relative
monster of a new machine compared to its older, supposedly
superseded counterpart.

Brief outline:

Server A: 2 x 3GHz Xeon (with hyperthreading shows as 4 CPUs)
  2GB RAM
  900GB RAID5 array comprised of 8x146GB 7500rpm disks on 2 spindles
with a stripe size of 64k

Server B: 4 x dual-core 2.66GHz Xeon (with HT shows as 16 CPUs)
  3GB RAM
  900GB RAID1+0 comprised of 6x300GB 10krpm disks on 1 spindle with a
stripe size of 128k

Running write tests on both boxes and comparing sequential versus random
writes shows some very unusual results that I'm having trouble
interpreting. The test program creates a 1GB file, then writes to it
again in 8k chunks; each test is run twice to counter any effects from
the state of the controller cache, so the second result should be the
more realistic, or 'better', number:

Server A:

Fri Feb 23 03:00:01 EST 2007

sequentialWrite: 28.96 seconds
sequentialWrite: 28.88 seconds
randomWrite: 659.32 seconds
randomWrite: 701.60 seconds

Server B:

Fri Feb 23 03:00:01 EST 2007

sequentialWrite: 52.76 seconds
sequentialWrite: 41.32 seconds
randomWrite: 81.39 seconds
randomWrite: 82.20 seconds


What I can't explain here is why a sequential write on the new box would
take roughly 1.5 to 2 times as long as on the old box, yet the random
write is around 8 to 8.5 times faster.

Has anyone seen anything like this in the past and would care to hit me
with a cluestick about how I might fix it? Is it possible that a
combination of the stripe size and the single spindle on the new box
could be slowing it down to this extent, or is there something else I am
missing?
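I didn't write the test program so this isn't it, but a scaled-down dd sketch along the same lines (1MB here instead of 1GB, and bash's $RANDOM for the offsets) gives the general flavour - wrap each phase in `time` to compare:

```shell
FILE=/tmp/writetest
BLOCKS=128                     # 128 x 8k = 1MB (the real test used 1GB)

# sequential write: fill the file front to back in 8k chunks
dd if=/dev/zero of="$FILE" bs=8k count="$BLOCKS" conv=fsync 2>/dev/null

# random write: rewrite single 8k blocks at random offsets in the file
i=0
while [ "$i" -lt "$BLOCKS" ]; do
    dd if=/dev/zero of="$FILE" bs=8k count=1 \
       seek=$((RANDOM % BLOCKS)) conv=notrunc 2>/dev/null
    i=$((i + 1))
done
ls -l "$FILE"
```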

TIA for any sage counsel,
Craig
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: DIY networking kit at Aldi.

2007-01-07 Thread Craig Dibble
Byron Hillis wrote:

>> Those little ipod FM transmitters are also technically illegal

> Are they really? I always thought these sort of things were also based
> on signal strength and therefore legislation didn't apply to them.

They certainly were in the UK, it was illegal to sell or use them until
the incredibly strict 1949 Wireless Telegraphy Act legislation was
changed just last month (this also means that CB radio is now finally
legal in the UK without a licence!).

I wasn't aware the same was true over here as they were freely available
for sale.

Craig

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] evolution: how to change font size

2006-11-24 Thread Craig Dibble

david wrote:

I was reading my email in evolution and did an unintentional random
key/click sequence, and now the message frame in evolution has doubled
its font size (the other frames are the normal size).


I would really like to know how to change it back. 


CTRL- should decrease the font size (and CTRL+ to increase it).
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] tailing, following and filtering

2006-11-22 Thread Craig Dibble

Matthew Hannigan wrote:

On Thu, Nov 23, 2006 at 08:23:37AM +1100, Penedo wrote:



What's wrong with "tail -f syslog | grep ..."?


Buffering



more or less
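(For the archives: GNU grep's --line-buffered flag is the usual workaround, flushing after every matching line at some cost in throughput:)

```shell
# without --line-buffered, grep block-buffers its output when writing to
# a pipe, so matches can lag well behind the log; with it, each matching
# line is flushed as it arrives:
#
#   tail -f syslog | grep --line-buffered 'pattern'
#
# quick demonstration that the flag is accepted and matching still works:
printf 'error: disk on fire\ninfo: all fine\n' | grep --line-buffered 'error'
```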


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Connecting to the Internet

2006-11-22 Thread Craig Dibble

Stephen Black wrote:
I have installed Suse Linux on my new computer with a Realtek RTL8111B 
ethernet controller.


I have not had any success in connecting to the net  


I don't yet have a broadband account but I have noticed that the phone line 
will connect to the ethernet port (RJ45) and I was wondering if I could use 
the port to dial into the internet 


Hi,

Unless I have very much misunderstood what you are saying here, if you 
don't have broadband and don't have a dial up modem you can plug in to 
that computer (plugging the phone line in to the ethernet port won't do 
much good), then you're not going to have much luck getting that box 
connected to the net.


If you have a dialup account on another PC have you tried connecting 
that modem to the linux box? Failing that (ie, if it's an internal 
modem), how about getting a small ethernet hub, connecting the two 
computers together and sharing the internet connection that way?


Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] kubuntu / vim key mapping weirdness

2006-11-15 Thread Craig Dibble

Hi all,

I've been scratching my head over this one and can't work it out so
thought I'd throw it out to the hive mind.

I've got two kubuntu boxes, one running 6.06 and the other 6.10.

In my .vimrc I have the following mapping set to automatically add
opening and closing {} for code blocks:

imap <C-F> {<CR>}<Esc>O

I know there are other ways to do this but this was nice and simple and
was working fine on 6.06 using vim.basic 6.4.6.

It doesn't work at all on 6.10, vim.basic 7.0.35. If I hit CTRL-F now I
just get exactly that: ^F.

I also noticed that if I display the imap mappings from within vim
in 6.4.6 I get 11 entries, in 7.0.35 I get just the one, and having a
play around, it seems that I can't get any mappings to work in the later
version.

Has something happened to the mapping functions between these versions
or am I just missing something?

Anyone care to hit me with a cluestick here?

TIA,
Craig


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Edgy au repo 404 not found!

2006-10-12 Thread Craig Dibble

Morgan Storey wrote:

I think the au repo is having a fit I am getting 404 for clam in dapper



ditto in kubuntu all day yesterday, hadn't got round to checking it 
again today.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] wget remembering previous downloads

2006-09-07 Thread Craig Dibble

Raphael Kraus wrote:

G'day all...
 
I'm doing some man'ning to no avail here...
 
Is there a way to have wget (downloading via ftp) to remember what it

has successfully downloaded, and not to download the same file again -
even if the file is deleted from disk?
 
If not, has anyone else had to face this problem before?


wget -nc will stop it downloading an existing file in the same 
directory, but to do what you're suggesting - if you then move or delete 
said file - you would probably need to do something clever like output to 
a logfile (-o <logfile> initially, then -a <logfile> to append to the 
same file), then feed that log back in as an 'exclude' list to skip 
those files. You'd probably need to wrap that in a script to get it to 
work correctly though.
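A rough sketch of such a wrapper (hypothetical - the list file name and the loop are mine, and it assumes one file per URL):

```shell
# keep a list of everything successfully fetched, and skip anything on
# the list even if the local copy has since been moved or deleted
DONE_LIST=downloaded.list
touch "$DONE_LIST"

for url in "$@"; do
    name=$(basename "$url")
    if grep -qxF "$name" "$DONE_LIST"; then
        continue                 # fetched once already - leave it alone
    fi
    wget -nc "$url" && echo "$name" >> "$DONE_LIST"
done
```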


HTH,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] identifying heavy load

2006-08-29 Thread Craig Dibble

Voytek Eymont wrote:


Expand your terminal to full screen and hit 'c' in top and it will show
you the command line that process is running.


thanks, Craig

I need to move the cursor to the 'suspect line', yes ?
how do I do that in putty ?
in full screen putty session I'm still on top's upper line and 'c' gives
me only: 'show command line'


Er, no - c should just toggle the command column on or off to show the 
full command line. If it doesn't work for you then 'h' should show you 
the available options. Failing that, just try something like this:


ps wax -o cmd |grep [p]erl

And you should get a listing of all perl processes running (there are 
numerous different ways you can call that ps command to get the same 
result, before anyone points it out, just pick your favourite if you 
have one).


At a guess I'd say you're running Apache with mod_perl and something's 
running amok.



it's a good idea to see if you can find out why
it's happening in the first place if you can, as in all likelihood it will
recur.


yes, I would like to find that out
as it was, when trying to restart apache, I ended up in a loop where it
failed to start up again, and, under the circumstances, I ended up
restarting the system (as I couldn't recall how to clear the problem...)


If you haven't cleared all the processes it won't restart. Try 
'apachectl stop' first, (or apache2ctl), if that doesn't work, try 
'killall apache' (or httpd, depending on your system), and if that 
doesn't work try:


ps wax |grep [a]pache

and do a kill -9 on the PIDs, which is the brute force way of clearing it.



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] identifying heavy load

2006-08-29 Thread Craig Dibble

Voytek Eymont wrote:

my web/mail server is experiencing an unusually heavy CPU and memory load,
how can I narrow down what it is ? 


I'd suggest here would be a pretty good place to start:


  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 4368 apache    20   0  3356 3348  1120 R    62.7  0.3  75:48 perl


Expand your terminal to full screen and hit 'c' in top and it will show 
you the command line that process is running.


The fact that you seem to have a whole heap of perl commands being run 
by apache definitely points to something being amiss. An apache restart 
will probably fix it, but it's a good idea to see if you can find out 
why it's happening in the first place if you can, as in all likelihood 
it will recur.


HTH
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Pushy Windows

2006-08-22 Thread Craig Dibble

john gibbons wrote:

I had XP and Dapper working together on a partitioned drive. XP decided 
it would not boot any more because of a missing file. So I reinstalled 
it on its own previous drive partition leaving Ubuntu's untouched.  Now 
on boot up I am offered only this unattractive choice: I can select XP 
or I can select XP. Yes, it lists itself twice. No other option.


Can I do something to get Ubuntu back as an option at boot up time or 
must I now reinstall it?


I haven't had to do this for years, but usually you have two choices 
when this happens - you can either find and edit the windows boot.ini 
file and list your ubuntu partition, or probably preferable would be to 
fire up a live CD, mount your partitions, chroot yourself to your grub 
partition and do a grub-install.


Last time I tried anything like that would probably have been back in 
the era of Windows NT, Redhat 5 and lilo, so YMMV.


Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Cron and shutdown command

2006-08-13 Thread Craig Dibble

Dean Hamstead wrote:

fwiw

some places may also recommend 'crontab -e' which will spin you off
into an editor to edit your users crontab file. 


And a word of caution if you are going to do this: Create your crontab 
as suggested, but then do something like this:


$ crontab -l > crontab.out

to take a backup, then when you want to make changes, edit crontab.out 
and reinstall it by typing:


$ crontab crontab.out

That will save you from accidentally typing 'crontab -r' and wiping out 
any trace of your crontab, which strangely is easier than you might 
think, I've seen it happen on a number of occasions and if you have 
custom crontabs running all manner of undocumented tasks it can cause 
real pain (another reason why you might want to think about putting 
scripts in /etc/cron.d instead).


Regards,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Cron and shutdown command

2006-08-13 Thread Craig Dibble

Simon Bowden wrote:


No, it must be crontab.

cron.daily is for generic "make sure it runs at least once a day", with 
no specific time.


crontab is where the crontab(8) format entries with specific times and 
users etc live.


You could also put it in /etc/cron.d/

Packages which need to install cron scripts will generally install them 
to here and it is checked by the cron daemon every minute. Some people 
prefer it to maintaining potentially unwieldy crontab files, some don't, 
YMMV ;-)


I would also be curious about why, or how often you want to do this 
though. As has already been suggested, judicious use of the 'shutdown' 
or 'at' commands may be more appropriate.


Regards,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] DHCP client vs sendmail

2006-08-08 Thread Craig Dibble

Erik de Castro Lopo wrote:


Aw man, you're picking nits that are sitting on the nit
that Steve already picked.

You're almost picking meta-nits.


Heh-heh, true, I should have said nitnitpick, but at least I 
pre-emptively apologised


;-)
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] DHCP client vs sendmail

2006-08-08 Thread Craig Dibble

Jamie Wilkinson wrote:

This one time, at band camp, Steve Kowalik wrote:

On Wed, 9 Aug 2006 12:12:37 +1000, Jamie Wilkinson uttered



Port 993 is POP3S, whereas SSMTP is port 465.


That'll learn me for just making things up... but the important part is that
it's not port 25 and thus not likely to be blocked by zealous ISPs.


Er, sorry to nitpick, but 993 is actually IMAP SSL.

POP3S is 995

Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Apache and SSI

2004-09-20 Thread Craig Dibble
Simon Bryan wrote:
Hi all,
We are trying to use SSI to read the IP address of requests to our
webserver so we can deny some ip addresses access to certain pages.
I have followed the instructions at:
http://httpd.apache.org/docs/howto/ssi.html#configuringyourservertopermitssi
Which method did you use to allow SSI?

We are viewing it using Firefox and IE, when we connect to some remote
servers using this system it displays fine. Just in our case the variable
is always empty.
I'm not quite sure what you mean by this, but if you haven't properly 
configured your SSI then yes, the page will display, but the variable 
will be blank.

I'd suggest you test it using the XBitHack as it's the quickest and 
easiest way to allow SSI (you can take up the issue of security 
implications separately). In which case, make sure you've set the 
executable bit (chmod +x) on the relevant page and you should be fine.
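For the record, the XBitHack recipe looks like this (assuming mod_include is loaded):

```apache
# httpd.conf or .htaccess for the directory in question
XBitHack on
```

Then chmod +x the page itself and put `<!--#echo var="REMOTE_ADDR" -->` in it wherever you want the client's address to appear.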

Unless I've got the wrong end of the stick entirely.
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Hotplug: USB memory stick

2004-05-13 Thread Craig Dibble
Erich Schulz wrote:

usbmodules can be told to return the matching value in the 
/etc/hotplug/usb.usermap file. But I don't know how to extract the 
values from the lsusb or usbview data, into the format of usb.usermap 
file to give me a match


Does anything at all happen when you plug it in?

If lsusb and usbview don't want to play then /var/log/messages should at 
least show what type of usb device it thinks the drive is (but won't 
give you the numbers you're after), and if there's any errors being 
generated /var/log/warn should handily report the error messages along 
with the vendor/product IDs that the system thinks it is dealing with. 
If it's truly unknown (ie not related to anything in the usb.ids file) 
then check your version of the file against the one I gave the link for 
earlier.

I had a similar problem with a usb stick a couple of months ago - a 
brand new 512MB Comsol drive that no matter what tricks I tried I simply 
could get to mount under linux. After a couple of days of head 
scratching (or banging against the wall) I eventually discovered it was 
due to a truly screwed partition table on the drive, so despite it 
working under OSX and windoze and showing up as a single 512MB drive, 
under linux it was showing up as:

/dev/sda      499MB   USB Flash Disk
/dev/sda1    80.4GB   Linux Native
/dev/sda2   891.6GB   Linux Native
/dev/sda3     1.5TB   Win95 Fat32 LBA

Neat trick, huh? Over 2.5 terabytes of storage on a 512MB drive.

I have no idea why this should have been, but deleting the partition 
table and then creating a new partition on the drive fixed it.

Shame really, I'd quite like to have owned the first ever USB Tardis 
Drive...

Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html