[SLUG] Special Interest Group: Asterisk

2006-03-08 Thread craigw-blue . net . au
Is there a user group for Asterisk in Sydney?

If there is, could someone send me the contact details.

If there is not a group, would anyone be interested in getting involved in
one?

Craig


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Special Interest Group: Asterisk

2006-03-08 Thread Howard Lowndes

[EMAIL PROTECTED] wrote:

> Is there a user group for Asterisk in Sydney?
>
> If there is, could someone send me the contact details.
>
> If there is not a group, would anyone be interested in getting involved in
> one?

Yes, but Sydney is a long way away...

> Craig

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Special Interest Group: Asterisk

2006-03-08 Thread Shane Machon
There is an Australian Asterisk user group. Not sure how official it is.

http://groups.yahoo.com/group/asterisk-anz/

There might be enough numbers within this group to start a Sydney-based
group that can actually meet up and have talks, if that is the goal.

Cheers,
Shane.

On Thu, 2006-03-09 at 09:35 +1100, [EMAIL PROTECTED] wrote:
 Is there a user group for Asterisk in Sydney?
 
 If there is, could someone send me the contact details.
 
 If there is not a group, would anyone be interested in getting involved in
 one?
 
 Craig
 


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Special Interest Group: Asterisk

2006-03-08 Thread Christopher Vance

On Thu, Mar 09, 2006 at 10:00:58AM +1100, Shane Machon wrote:

There is an Australian Asterisk user group. Not sure how official it is.

http://groups.yahoo.com/group/asterisk-anz/


Looks like a mailing list with about 1 message per working day.  I'd
never heard of it, but then I try to avoid anything to do with Yahoo.

On the balance of probabilities, I'd guess most of the posters are not
in Sydney, which makes a Sydney meeting likely to have few of these
people.

--
Christopher Vance
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Nominations page (Re: Nominations hotting up)

2006-03-08 Thread Mary Gardiner
On 2006-02-27, Grant Parnell [EMAIL PROTECTED] wrote:
 Once again, I've updated the election page this morning...
 http://www.slug.org.au/~grant/election.html

A couple of suggestions for this page:

 1. Can you put a strike through (HTML <strike></strike>) the entries
    for people who have declined a nomination?

 2. Can you make the entries for people who haven't yet accepted a
    little bit lighter (<span style="color: #bb;"></span> or
    similar) or perhaps bold the not yet accepted bit.

It's really hard at the moment to distinguish people who are running
from people who aren't and this would help.

Also, why is there still a listing for Honourary committee member? As
I recall, this is completely unofficial (ie the constitution does not
provide for such a position) and was only ever there because for a while
it was thought under 18s couldn't be on committee officially. Since we
later decided that they could be ordinary members (but not executive
members because they can't act as signatories), there seems no reason to
keep mentioning it in elections. If the committee needs to be larger,
then we should change the constitution; if not, there's no reason for the
position.

-Mary

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Backup Internet links

2006-03-08 Thread Carlo Sogono



This is quite lengthy so I hope people with a lot of time on their hands 
can help me out... =P

I'm trying to configure Internet redundancy for a medium-sized company 
using Linux as their primary router/firewall. We have a second ADSL line 
connected, from a different ISP, so if one link goes down we have another 
one going. Please note that I am *not* setting up load balancing.

The Linux router's setup is pretty straightforward. It has 3 network 
cards: one goes to ISP1, one to ISP2 and one to the LAN.

* ISP1 is only used for incoming traffic (primary MX, HTTP/HTTPS, etc.)

* ISP2 is only used for outgoing traffic (primary gateway with the lowest 
metric in the routing table -- all traffic like web and outgoing SMTP 
passes through here; this is also configured as the secondary MX, so if 
ISP1 goes down then mail flows through here)

Things I am aware of/have considered:

1. If the Linux system goes down then the net is fully offline.
2. We have tried two Linux machines, one for each ADSL link, but we have 
   a lot of Windows machines whose metric-based routing does not work as 
   expected when one of the links goes down.

Since our priorities are only internal web access and incoming/outgoing 
mail, I have thought of periodically checking whether the immediate 
gateways of both ISPs are online. If the primary gateway is down, then 
run a script that changes the default route. As much as possible I would 
like to avoid using a routing protocol, as these links are not that fast 
and the ISP-supplied routers we have are not Cisco-enterprise-level. I do 
not know of an app that checks if the immediate gateway is online and 
then runs scripts if it is down...
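
A minimal sketch of the check-and-switch step I mean, assuming the iproute2 
tools are available (the gateway addresses below are just placeholders):

#!/bin/sh
# Placeholder gateway addresses -- substitute the real ISP gateways.
PRIMARY_GW=203.0.113.1    # normal default gateway (the ISP2 link above)
BACKUP_GW=198.51.100.1    # fallback gateway (the ISP1 link)

# One ping with a short timeout; if the primary gateway answers, keep
# (or restore) it as the default route, otherwise fail over.
if ping -c 1 -W 2 "$PRIMARY_GW" > /dev/null 2>&1; then
    ip route replace default via "$PRIMARY_GW"
else
    ip route replace default via "$BACKUP_GW"
fi

Something like that could be run from cron every minute or so.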

Any ideas?
Carlo






Carlo Sogono
Consultant
Huon Computer Consultants
PO Box 390
Frenchs Forest, NSW 2086
AUSTRALIA
Ph: +61 2 9975 1077
Fax: +61 2 9452 2359
Email: [EMAIL PROTECTED]
Website: http://www.huoncc.com.au/

This email contains information intended for the 
addressee only. This information is confidential and may be 
privileged. If you are not the intended recipient, you must not read, use, 
copy or distribute this email or its attachments. If you have received 
this in error, please notify us immediately by return email or telephone, and 
delete and destroy this email.

Thank you. 


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

[SLUG] Re: search engine ranking

2006-03-08 Thread Carlo Sogono



> I was asked how to improve a web site to improve its position for search
> engine rankings. I had to reply "I don't have a clue".
> There is a lot of "noise" looking for info on the subject.
> Anybody recommend some URLs to read, so I can pass them on please?

Search engines also rely on the META data in HTML pages, specifically the
META Description and Keywords tags. Mind you, it does not affect page
*rankings* but affects the kind of searches your pages should appear in.
So if you put "microsoft" as your keyword, your page will definitely come
up in a Microsoft search on Google, but whether it's at rank 1 or
1,000,000,000 is where that black art comes into play.

More here: http://searchenginewatch.com/webmasters/article.php/2167931

Carlo

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

[SLUG] Linux application management for tomcat

2006-03-08 Thread Kasim, Yosep








Hi sluggers,

I am looking for software for managing Tomcat servers on multiple machines
in my workplace. Is there any recommendation for a Linux one?

This software should preferably be able to view Tomcat applications,
statistics, load, etc.

Could anybody recommend one?

Many thanks in advance.

Cheers




DISCLAIMER
Email Confidentiality Footer: This message is for the named person's use
only. Privileged/Confidential Information may be contained in this message.
If you are not the addressee indicated in this message (or responsible for
delivery of this message to such person), you may not copy or deliver this
message to anyone. If you receive this correspondence in error, please
immediately delete it from your system and notify the sender. You must not
disclose, copy or rely on any part of this correspondence if you are not
the intended recipient.
Internet communications are not secure and therefore Harvey Norman does not
accept legal responsibility for the contents of this message. Any views or
opinions presented are solely those of the author and do not necessarily
represent those of Harvey Norman.



-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

[SLUG] hard links over the net

2006-03-08 Thread Julio Cesar Ody
Hi all,

I have a backup server in my LAN that keeps some rsnapshot
(www.rsnapshot.org) backups. For those of you who don't know the tool,
rsnapshot works by taking incremental backups, hard linking the files
that don't change from one backup to the next (thus, keeping the
consecutive backups smaller than the first big one.)
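
The trick is roughly the following (a sketch of the idea, not rsnapshot's
exact commands; paths are illustrative only):

# Rotate: clone the newest snapshot as hard links (takes almost no space),
# then let rsync overwrite only the files that actually changed.
cp -al /backup/daily.0 /backup/daily.1
rsync -a --delete /data/ /backup/daily.0/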

I want to send these backups to a remote server every day or so. I
tried scp'ing it, but scp resolves the hard links, causing me
headaches and too much space to be used in my remote server. I then
tried SSHFS (sshfs.sourceforge.net), and the same thing happened. Same
applies to SHFS.

I know NFS would keep the hard links instead of resolving them (cp
-d), but I'm not up to put NFS over the internet since that can be a
bad idea from the security standpoint. So my question is: is there a
way for me to make that transfer and keep the hard links?

Thanks.


--
Julio C. Ody
http://rootshell.be/~julioody
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] hard links over the net

2006-03-08 Thread Erik de Castro Lopo
Julio Cesar Ody wrote:

 I know NFS would keep the hard links instead of resolving them (cp
 -d), but I'm not up to put NFS over the internet since that can be a
 bad idea from the security standpoint. So my question is: is there a
 way for me to make that transfer and keep the hard links?

OpenVPN is trivial to set up and NFS should run over that without
a problem.
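
For a simple point-to-point link, OpenVPN's static key mode is enough; a
rough sketch (hostname, key path and tunnel addresses are placeholders):

# Generate a shared static key once and copy it to both ends:
openvpn --genkey --secret /etc/openvpn/static.key

# Backup server end:
openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret /etc/openvpn/static.key

# Remote end:
openvpn --remote backup.example.com --dev tun \
        --ifconfig 10.8.0.2 10.8.0.1 --secret /etc/openvpn/static.key

NFS can then be exported and mounted over the 10.8.0.x tunnel addresses only.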

Erik
-- 
+---+
  Erik de Castro Lopo
+---+
C is a programming language. C++ is a cult.
-- David Parsons in comp.os.linux.development.apps
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Special Interest Group: Asterisk

2006-03-08 Thread Alexander Samad
On Thu, Mar 09, 2006 at 09:54:38AM +1100, Howard Lowndes wrote:
 [EMAIL PROTECTED] wrote:
 
 Is there a user group for Asterisk in Sydney?
 
 If there is, could someone send me the contact details.
 
 If there is not a group, would anyone be interested in getting involved in
 one?
  
 
 
 Yes, but Sydney is a long way away...
I'm interested and I am in Sydney

 
 Craig
 
 
  
 
 
 -- 
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
 


signature.asc
Description: Digital signature
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

[SLUG] Re: Fine tuning browser-plugin response time

2006-03-08 Thread Grant Parnell - slug

Further update...
I turned on debugging to mozplugger and added a timestamp to the debug 
routine and here's what happened...


PID3097: 1141880606 Same.
PID3097: 1141880606 Checking command: mplayer -really-quiet -nojoystick 
-nofs -zoom -vo xv,x11 -ao esd,alsa,oss,arts,null -osdlevel 0 -xy $width 
-wid $window $file /dev/null

PID3097: 1141880606 Match found!
PID3097: 1141880606 Command found.
PID3097: 1141880608 StreamAsFile
PID3097: 1141880608 NEW_CHILD(/ramdisk/media/1132199391.mov)
PID3097: 1141880608 Forking,

So this mysterious 2 second delay between clicking on the link to the 
movie and having it play the movie is indeed within the plugin system I 
was using.


On Mon, 6 Mar 2006, Grant Parnell - EverythingLinux wrote:

I have a need to improve the time taken to launch a video presentation from a 
web browser in the short term.


The 100MB video file is local (ie on the hard drive of the machine running 
the browser). In fact I've tried putting a smaller 20MB video, the mplayer 
app, its libraries, the plugin manager and its libraries, and the HTML all 
in a ramdisk.


The sort of response I'm getting is that after clicking the link to the video 
it takes about 1 to 2 seconds to kick in. I am not sure if it's the browser 
itself or the plugin manager causing the delay, but if I replace mplayer with 
a shell script that logs the command line parameters, the delay is between 
clicking the URL and the log entry appearing. Running mplayer directly gives 
excellent response, ie before the enter key lifts up.
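
Such a stand-in script can be as simple as this (a sketch only; the log path 
and player path are arbitrary):

#!/bin/sh
# Log a timestamp plus whatever arguments the plugin manager passes,
# then (optionally) hand off to the real player.
echo "$(date +%s) $0 $*" >> /tmp/plugin-launch.log
exec /usr/bin/mplayer "$@"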


I've tried a few browsers and a few plugin managers with varying success 
(ie got it to run or didn't). The common theme seems to be that it wants to 
buffer the video when it probably shouldn't.


Of course the long term plan would probably be to have somebody code up 
something that'll call mplayer or flashplayer or render a web page on cue.
I've heard gstreamer might be able to do something like this but so far I 
thought it was used to process video, not display it. Also not sure if 
annodex could be used here - the presentations could be re-encoded.





--
---GRiP---
Grant Parnell - SLUG President & LPIC-1 certified engineer
EverythingLinux services - the consultant's backup & tech support.
Web: http://www.elx.com.au/support.php
We're also busybits.com.au and linuxhelp.com.au and everythinglinux.com.au.
Phone 02 8756 3522 to book service or discuss your needs
or email us at paidsupport at elx.com.au

ELX or its employees participate in the following:-
OSIA (Open Source Industry Australia) - http://www.osia.net.au
AUUG (Australian Unix Users Group) - http://www.auug.org.au
SLUG (Sydney Linux Users Group) - http://www.slug.org.au
LA (Linux Australia) - http://www.linux.org.au
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Special Interest Group: Asterisk

2006-03-08 Thread Juergen Busam
Alexander Samad wrote:

> On Thu, Mar 09, 2006 at 09:54:38AM +1100, Howard Lowndes wrote:
>> [EMAIL PROTECTED] wrote:
>>> Is there a user group for Asterisk in Sydney?
>>>
>>> If there is, could someone send me the contact details.
>>>
>>> If there is not a group, would anyone be interested in getting involved in
>>> one?
>>
>> Yes, but Sydney is a long way away...
>
> I'm interested and I am in Sydney

me too...

>>> Craig

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] hard links over the net

2006-03-08 Thread dave kempe

Julio Cesar Ody wrote:

Hi all,

I have a backup server in my LAN that keeps some rsnapshot
(www.rsnapshot.org) backups. For those of you who don't know the tool,
rsnapshot works by taking incremental backups, hard linking the files
that don't change from one backup to the next (thus, keeping the
consecutive backups smaller than the first big one.)

I want to send these backups to a remote server every day or so. I
tried scp'ing it, but scp resolves the hard links, causing me
headaches and too much space to be used in my remote server. I then
tried SSHFS (sshfs.sourceforge.net), and the same thing happened. Same
applies to SHFS.

I know NFS would keep the hard links instead of resolving them (cp
-d), but I'm not up to put NFS over the internet since that can be a
bad idea from the security standpoint. So my question is: is there a
way for me to make that transfer and keep the hard links?



You could use rdiff-backup (rdiff-backup.org), which works the same sort 
of way but gives you a few extra features. And of course it works over ssh 
by default.
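
A typical invocation, with placeholder host and paths:

# Push the local snapshot tree to the remote host over ssh; rdiff-backup
# keeps a current mirror plus reverse increments on the receiving side.
rdiff-backup /backup/rsnapshot user@remote.example.com::/backup/rsnapshot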


dave
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] hard links over the net

2006-03-08 Thread Zhasper
rsync (http://www.samba.org/rsync/) should do what you want (and is
probably already installed, and tunnels over an ssh connection for
extra security, and has other features that probably make it a perfect
match for your needs as well :)
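
The flag that matters here is -H/--hard-links; something like the following
should do it (untested; host and paths are placeholders):

# -a preserves permissions and times, -H preserves hard links between
# files in the transferred set, and the whole thing runs over ssh.
rsync -aH --delete /backup/rsnapshot/ user@remote.example.com:/backup/rsnapshot/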

On 3/9/06, Julio Cesar Ody [EMAIL PROTECTED] wrote:
 Hi all,

 I have a backup server in my LAN that keeps some rsnapshot
 (www.rsnapshot.org) backups. For those of you who don't know the tool,
 rsnapshot works by taking incremental backups, hard linking the files
 that don't change from one backup to the next (thus, keeping the
 consecutive backups smaller than the first big one.)

 I want to send these backups to a remote server every day or so. I
 tried scp'ing it, but scp resolves the hard links, causing me
 headaches and too much space to be used in my remote server. I then
 tried SSHFS (sshfs.sourceforge.net), and the same thing happened. Same
 applies to SHFS.

 I know NFS would keep the hard links instead of resolving them (cp
 -d), but I'm not up to put NFS over the internet since that can be a
 bad idea from the security standpoint. So my question is: is there a
 way for me to make that transfer and keep the hard links?

 Thanks.


 --
 Julio C. Ody
 http://rootshell.be/~julioody
 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html




--
There is nothing more worthy of contempt than a man who quotes himself
- Zhasper, 2005
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] hard links over the net

2006-03-08 Thread Michael Chesterton
Julio Cesar Ody [EMAIL PROTECTED] writes:

 I know NFS would keep the hard links instead of resolving them (cp
 -d), but I'm not up to put NFS over the internet since that can be a
 bad idea from the security standpoint. So my question is: is there a
 way for me to make that transfer and keep the hard links?

This doesn't answer your question, but you can set up rsync (from
memory with --backup and --backup-dir) so that you have one full copy
of the latest backup, and in separate directories just the files that
changed each backup run.

Alternatively, just run rsync, then on the remote host, run rsnapshot.
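
From memory the invocation is roughly this (untested; host and paths are
placeholders):

# Keep a full mirror in "current" and move changed/deleted files into a
# dated increments directory on the receiving side.
rsync -a --delete --backup \
      --backup-dir=/backup/increments/$(date +%Y-%m-%d) \
      /data/ user@remote.example.com:/backup/current/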
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] hard links over the net

2006-03-08 Thread O Plameras

Julio Cesar Ody wrote:


Hi all,

I have a backup server in my LAN that keeps some rsnapshot
(www.rsnapshot.org) backups. For those of you who don't know the tool,
rsnapshot works by taking incremental backups, hard linking the files
that don't change from one backup to the next (thus, keeping the
consecutive backups smaller than the first big one.)

I want to send these backups to a remote server every day or so. I
tried scp'ing it, but scp resolves the hard links, causing me
headaches and too much space to be used in my remote server. I then
tried SSHFS (sshfs.sourceforge.net), and the same thing happened. Same
applies to SHFS.

I know NFS would keep the hard links instead of resolving them (cp
-d), but I'm not up to put NFS over the internet since that can be a
bad idea from the security standpoint. So my question is: is there a
way for me to make that transfer and keep the hard links?


OpenAFS (www.openafs.org) is designed and built for this category of
network connectivity and application, with rock-solid security even on
public access networks.

If interested, email me offline and I can send you a set of scripts that
will keep you going rapidly.

Hope this helps.

O Plameras

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Backup Internet links

2006-03-08 Thread James Gray
On Thursday 09 March 2006 12:40, Carlo Sogono wrote:
 Since our priorities are only internal web access and incoming/outgoing
 mails I have thought of periodically checking whether the immediate gateways
 of both ISPs are online. If the primary gateway is down then run a
 script that changes the default route. As much as possible I would like
 to avoid using a routing protocol as these links are not that fast and
 the ISP supplied routers we have are not Cisco-enterprise-level. I do
 not know of such an app that checks if the immediate gateway is online
 then runs scripts if they are down...

What about something like this (totally untested)!!:

---- testnet.sh ----
#!/bin/bash

# Author - James Gray (do I REALLY want my name on this?)

# Genesis - Today...I think.  What IS the date today anyway?!
#   9-Mar-2006

# Purpose - Handle multi-homed gateways' routing tables in a
#   semi-intelligent manner in the case of a link
#   failing.  It handles the reverse too, when the
#   link comes back up.

# Define Gateway IP's
# ..or roll your own to rip it out of the routing table etc.
ISPGW1=1.2.3.4
ISPGW2=2.3.4.5

# Generate the info to find the state (last run status) files.
STATE_PATH=/path/to/them    # note no trailing /
STATE_FILES="ISP1.state ISP2.state ISPALL.state"

# Set all states to safe defaults
STATE1=up
STATE2=up
STATE_ALL=up

# Read in the state files - ie status of links on last run!
for FILE in $STATE_FILES
do
    [ -f "$STATE_PATH/$FILE" ] && . "$STATE_PATH/$FILE"
done

# Function to ping an IP/Host; prints the ping exit status
testnet(){
    ping -c 1 "$1" > /dev/null 2>&1
    echo $?
}

# What to do if ISP1 goes down (or comes back up)
ispgw1_actions(){
    if [ "$1" == "down" ]; then
        # Dump everything in here you want to do
        # if ISP1's Gateway is down, but leave these lines!
        STATE1=down
        echo "STATE1=down" > "$STATE_PATH/ISP1.state"
    else
        # Dump everything in here you want to do
        # when ISP1's Gateway is back up, but leave these lines!
        STATE1=up
        echo "STATE1=up" > "$STATE_PATH/ISP1.state"
    fi
}

# What to do if ISP2 goes down (or comes back up)
ispgw2_actions(){
    if [ "$1" == "down" ]; then
        # Dump everything in here you want to do
        # if ISP2's Gateway is down, but leave these lines!
        STATE2=down
        echo "STATE2=down" > "$STATE_PATH/ISP2.state"
    else
        # Dump everything in here you want to do
        # when ISP2's Gateway is back up, but leave these lines!
        STATE2=up
        echo "STATE2=up" > "$STATE_PATH/ISP2.state"
    fi
}

# What to do if they are BOTH down (or both back up)
all_actions(){
    if [ "$1" == "down" ]; then
        # Dump everything in here you want to do
        # if both Gateways are down, but leave these lines!
        STATE_ALL=down
        echo "STATE_ALL=down" > "$STATE_PATH/ISPALL.state"
    else
        # Dump everything in here you want to do
        # when both Gateways are back up, but leave these lines!
        STATE_ALL=up
        echo "STATE_ALL=up" > "$STATE_PATH/ISPALL.state"
    fi
}

#
# Check ISP1's Gateway
#
if [[ `testnet $ISPGW1` -ne 0 ]]; then
    # Hrm ISP1 gateway isn't responding...see if ISP2 is up
    if [[ `testnet $ISPGW2` -ne 0 ]]; then
        # Both ISP's down - do something useful!
        [ "$STATE_ALL" != "down" ] && all_actions down
    else
        # ISP1 is down - do something with routes etc.
        [ "$STATE1" != "down" ] && ispgw1_actions down
    fi
else
    # ISP1 is up - see if it was down on the last run
    [ "$STATE1" == "down" ] && ispgw1_actions up
fi

#
# Check ISP2's Gateway
#
if [[ `testnet $ISPGW2` -ne 0 ]]; then
    # Hrm ISP2 gateway isn't responding...see if ISP1 is up
    if [[ `testnet $ISPGW1` -ne 0 ]]; then
        # Both ISP's down - do something useful!
        [ "$STATE_ALL" != "down" ] && all_actions down
    else
        # ISP2 is down - do something with routes etc.
        [ "$STATE2" != "down" ] && ispgw2_actions down
    fi
else
    # ISP2 is up - see if it was down on the last run
    [ "$STATE2" == "down" ] && ispgw2_actions up
fi

exit 0
---- testnet.sh ----

Then make it executable, cron it, or put it in a while loop if you need finer 
than 1-minute granularity.

Flaws (among others):
1. Handling previous states (last run status) is rather rudimentary and could
   be done better.

2. Possible race conditions - not sure...looks ok, but I might have messed up
   the logic somewhere <shrug>.  Be careful running this script too often.  If
   you run it with the routing tables in a state of flux/transition, this
   script will break and do horrible things to your system's nether
   regions! ;)

3. You could do more, with less code in Perl/Ruby/Python...but I just couldn't
   be bothered messing with syntax this 

Re: [SLUG] Nominations page (Re: Nominations hotting up)

2006-03-08 Thread Grant Parnell - slug

On Thu, 9 Mar 2006, Mary Gardiner wrote:


On 2006-02-27, Grant Parnell [EMAIL PROTECTED] wrote:

Once again, I've updated the election page this morning...
http://www.slug.org.au/~grant/election.html


A couple of suggestions for this page:

1. Can you put a strike through (HTML <strike></strike>) the entries
   for people who have declined a nomination?

2. Can you make the entries for people who haven't yet accepted a
   little bit lighter (<span style="color: #bb;"></span> or
   similar) or perhaps bold the not yet accepted bit.

It's really hard at the moment to distinguish people who are running
from people who aren't and this would help.


Ok will do.


Also, why is there still a listing for Honourary committee member? As
I recall, this is completely unofficial (ie the constitution does not
provide for such a position) and was only ever there because for a while
it was thought under 18s couldn't be on committee officially. Since we
later decided that they could be ordinary members (but not executive
members because they can't act as signatories), there seems no reason to
keep mentioning it in elections. If the committee needs to be larger,
then we should change the constitution; if not, there's no reason for the
position.


I felt that although the under-18s thing had been resolved there might be 
other reasons, but I guess we can always add it again later. There have been 
no nominations anyway.


Give it another 15-20 mins and I'll have it updated.

--
---GRiP---
Grant Parnell - SLUG President & LPIC-1 certified engineer
EverythingLinux services - the consultant's backup & tech support.
Web: http://www.elx.com.au/support.php
We're also busybits.com.au and linuxhelp.com.au and everythinglinux.com.au.
Phone 02 8756 3522 to book service or discuss your needs
or email us at paidsupport at elx.com.au

ELX or its employees participate in the following:-
OSIA (Open Source Industry Australia) - http://www.osia.net.au
AUUG (Australian Unix Users Group) - http://www.auug.org.au
SLUG (Sydney Linux Users Group) - http://www.slug.org.au
LA (Linux Australia) - http://www.linux.org.au
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html