Re: [SLUG] Wireless Broadband?

2010-01-21 Thread Dean Hamstead




Amos Shapira wrote:
> Part of the motivation for buying a modem outright and using pre-paid
> is that it doesn't tie us to any plan, plus we expect to use the
> pre-paid modem very sporadically - in emergencies that happen when
> the guy on call is out and simply must access the network.



The downside of pre-paid is that the data expires fairly quickly:
a few gigs typically only comes with a 30-day expiry, while larger data blocks
tend to last longer (up to 90 days on Optus).


You can just whip out your credit card and buy a data block, but
that may not sit well with the on-call person... "here's a USB 3G modem,
just add your CC# and expiry date as needed"



Dean
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Wireless Broadband?

2010-01-21 Thread Amos Shapira
2010/1/22 Del :
> Amos Shapira wrote:
>>
>> Since this became a discussion of broadband modems - I got an OK from
>> my workplace to buy the Telstra Turbo USB pre-paid modem (currently
>> costs $149), but so far Google, Whirlpool and ubuntuforums have failed to
>> provide a positive answer about its hardware compatibility with Linux
>> (Ubuntu 9.10).
>>
>> Does anyone here have positive experience with this modem?
>
> No, I can not.  :)
>
> You're better off buying their network gateway for $399, which is the
> BigPond "Elite" network gateway on this page:
>
> http://www.bigpond.com/internet/plans/wireless/wireless_devices/

Thanks for the pointer, but since the main point of having this device
is to allow our people to travel while on call (two of them
happen to be motorcyclists too), it's not practical.

> Bigpond are the biggest wunch(*) on the planet, so you have to be aware.
>  One issue is that although all of their devices are essentially compatible,
> your internet plan is tied to the device so if you get one of their plug in
> modems and decide later you want the gateway, you have to cancel (and pay
> out) your old plan and buy a new plan.  No other internet provider makes you
> do this -- e.g. iinet don't make you cancel your plan if you buy a new ADSL
> modem.

Part of the motivation for buying a modem outright and using pre-paid
is that it doesn't tie us to any plan, plus we expect to use the
pre-paid modem very sporadically - in emergencies that happen when
the guy on call is out and simply must access the network.

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Wireless Broadband?

2010-01-21 Thread Del

Amos Shapira wrote:

Since this became a discussion of broadband modems - I got an OK from
my workplace to buy the Telstra Turbo USB pre-paid modem (currently
costs $149), but so far Google, Whirlpool and ubuntuforums have failed to
provide a positive answer about its hardware compatibility with Linux
(Ubuntu 9.10).

Does anyone here have positive experience with this modem?


No, I can not.  :)

You're better off buying their network gateway for $399, which is the 
BigPond "Elite" network gateway on this page:


http://www.bigpond.com/internet/plans/wireless/wireless_devices/

It works flawlessly, and since it has an internal wifi gateway and a 4-port
switch it doesn't require any configuration with Linux.  I use mine
on the boat with a 12V lead-in from the house batteries, but I've also
run it while travelling off a 12V plug pack powered by a 7Ah sealed
battery of a reasonably common type (Jaycar will have them).


Bigpond are the biggest wunch(*) on the planet, so you have to be aware.
One issue is that although all of their devices are essentially
compatible, your internet plan is tied to the device, so if you get one
of their plug-in modems and decide later you want the gateway, you have
to cancel (and pay out) your old plan and buy a new plan.  No other
internet provider makes you do this -- e.g. iiNet don't make you cancel
your plan if you buy a new ADSL modem.


(*) -- collective term for a group of bankers.

--
Del
Babel Com Australia
http://www.babel.com.au/
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Wireless Broadband?

2010-01-21 Thread Marghanita da Cruz

Amos Shapira wrote:

2010/1/21 Ben Donohue :

From memory you can get them in two general packages... time online or
monthly download.

you really have to watch the downloads of these...


Thanks for the warning but unless Telstra completely redefined the
meaning of "pre-paid" I shouldn't be concerned about over-charges - as
soon as I run out of credit the connection should drop, end of story.
Or should I?

The modem is meant to be rotated between people on call which might
need it to get away from their home computer while on duty.
Luckily all our ops guys have Ubuntu on their laptops so it limits the
search for support.



The Virgin (post-paid) deal is not a drop-dead cut-off but a shaped response
(i.e. a slow-down of bandwidth) once you meet your monthly limit (5G, which I
haven't hit yet). I have heard horror stories from people on post-paid - not
to mention living a life of fear.

The worst one was Telstra, where I think the download limit was met once you
viewed the Telstra homepage a couple of times... and that was the default
homepage for their browser.

With regard to the pre-paid, you still get stung as you are paying a premium
rate for what you do download... though this looks like a reasonable deal for
the infrequent user...

UnclePete wrote:
> BTW the 3 prepaid is a very good deal - $149 for 12GB and 12 months to use
> it. If you don't use it in 12 months, you can rollover the balance. I'm
> mobile between the CBD, western suburbs, south western suburbs and southern
> suburbs and the coverage is pretty good. Also the network is not as
> congested as, say, Optus which can be unusable during normal online time.

but maybe the 12GB will be gone in 3 months, and you are really looking at
$600 a year compared to 12x40 = $480/year capped and shaped from Virgin (and
I think Optus directly now also).


Marghanita
--
Marghanita da Cruz
http://ramin.com.au
Tel: 0414-869202



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ldd report from rkhunter - Update

2010-01-21 Thread Matthew Hannigan
On Thu, Jan 21, 2010 at 05:37:53PM -0600, Rodolfo Martínez wrote:
> Hi Matt,
> 
> rkhunter creates a database (MD5SUMs) of some files; if they change
> for any reason, like a system upgrade/update, it will complain about
> it. rkhunter should be run again to get the new MD5SUMs. This applies
> to any Host Intrusion Detection System (HIDS) (e.g. Tripwire, AIDE,
> etc.).

Ah, thought so, thanks.  I think it would be a worthwhile thing
for systems like AIDE to exclude dpkg/rpm-checkable files from their checks.
Perhaps as an option.

> > Anyway, this reminded me of an interesting article on ldd I read the other 
> > day:
> 
> I did read that article too, but who runs ldd as root? :P

Well, me, until recently :-).  But only with 'trusted' but bizarrely behaving
apps on Solaris.

But running as root doesn't really matter.

A malicious app could just stick an alias for, say, sudo in your .bashrc,
or any number of similar things - it's just the start of a possible penetration.
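
If you do need to poke at an untrusted binary, a safer habit is to read its ELF
headers instead of letting the loader run anything - e.g. (the file name here is
just a placeholder):

    objdump -p ./suspect-binary | grep NEEDED    # lists the shared libraries it wants
    readelf -d ./suspect-binary | grep NEEDED    # same information via readelf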


Matt

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-21 Thread Matthew Hannigan
On Thu, Jan 21, 2010 at 04:28:30PM +0100, justin randell wrote:
> hi,
> 
> 2010/1/21 Peter Rundle :
> > I said it might take a few seconds, I didn't say it was computationally
> > heavy.
> 
> fair enough, but still worth questioning, i think. for a typical app,
> i'd be concerned if a fat apache child process was spending more than
> a quarter to a third of a second servicing a single request.
> 

We have a similar issue here at work, and it just crept up on us.
The code was written to deal with uploads and parse them / process them / stick
bits into the database.

As time goes by the uploads grow from kB to MB and change from a simple format to
vast swathes of @#...@^ xml.  The same job which used to take less than a few
seconds now causes timeouts in Apache (which we've had to raise) and/or in the
browser, which we can't do much about.

This is all in Java, so it's not too difficult to make it asynch with respect to the
original http request, and leave the user with an ajaxy update page.

I know next to nothing about PHP, but can't you spawn another thread unrelated to a
client http request?  Then you can finish this page, redirect them to another page
which just waits for the result to come back - checking now and then via the db or
some file on disk.
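
From the shell side the hand-off can be as simple as something like this (only a
sketch - the worker script and paths here are invented):

    # the web request launches the worker detached, so the response can return at once
    nohup /usr/local/bin/process-upload /var/spool/uploads/job-1234.xml \
        > /var/spool/uploads/job-1234.status 2>&1 &
    # the "please wait" page then polls the status file (or a DB row) until it's done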

The other respondents' cron suggestions fail (AFAICS) to satisfy the responsiveness
that the original poster wants.


Matt

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ldd report from rkhunter - Update

2010-01-21 Thread Rodolfo Martínez
Hi Matt,

rkhunter creates a database (MD5SUMs) of some files; if they change
for any reason, like a system upgrade/update, it will complain about
it. rkhunter should be run again to get the new MD5SUMs. This applies
to any Host Intrusion Detection System (HIDS) (e.g. Tripwire, AIDE,
etc.).
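
For example, after a legitimate upgrade the usual routine is roughly the following
(run as root; treat it as a sketch and check your distro's rkhunter man page for the
exact flags):

    rkhunter --propupd       # re-baseline the stored file-properties database
    rkhunter --check --sk    # re-run the scan; --sk just skips the keypress prompts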


> Anyway, this reminded me of an interesting article on ldd I read the other 
> day:

I did read that article too, but who runs ldd as root? :P


Rodolfo Martínez
Dirección de Proyectos
Aleux México | http://www.aleux.com



2010/1/21 Matthew Hannigan :
> On Fri, Jan 22, 2010 at 09:20:46AM +1100, Alan L Tyree wrote:
>> On Thu, 21 Jan 2010 15:54:01 -0600
>> Rodolfo Martínez  wrote:
>>
>> > Hi Alan,
>> >
>> > You can find what package provides the ldd program, and then verify
>> > the integrity of the package. If it really changed I think you should
>> > look for any suspicious activity in your server.
>> >
>> > I think you can find the package with dpkg -S $(which ldd) and you can
>> > check its integrity with debsum.
>> >
>> > ldd shouldn't change, unless you have updated your system.
>>
>> Just checking the Debian Security site
>> ( http://www.debian.org/security/) I see that it was updated for the
>> amd64 architecture.
>>
>> Thanks for the lesson on how to check out this sort of thing.
>>
>> Cheers,
>> Alan
>
>
> So everything looks fine.  I wonder why rkhunter complained.  Doesn't it
> coordinate with the packaging system?
>
> Anyway, this reminded me of an interesting article on ldd I read the other 
> day:
>
>    http://www.catonmat.net/blog/ldd-arbitrary-code-execution/
>
> Fun
>
> Matt
>
>
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ldd report from rkhunter - Update

2010-01-21 Thread Matthew Hannigan
On Fri, Jan 22, 2010 at 09:20:46AM +1100, Alan L Tyree wrote:
> On Thu, 21 Jan 2010 15:54:01 -0600
> Rodolfo Martínez  wrote:
> 
> > Hi Alan,
> > 
> > You can find what package provides the ldd program, and then verify
> > the integrity of the package. If it really changed I think you should
> > look for any suspicious activity in your server.
> > 
> > I think you can find the package with dpkg -S $(which ldd) and you can
> > check its integrity with debsum.
> > 
> > ldd shouldn't change, unless you have updated your system.
> 
> Just checking the Debian Security site
> ( http://www.debian.org/security/) I see that it was updated for the
> amd64 architecture.
> 
> Thanks for the lesson on how to check out this sort of thing.
> 
> Cheers,
> Alan


So everything looks fine.  I wonder why rkhunter complained.  Doesn't it
coordinate with the packaging system?

Anyway, this reminded me of an interesting article on ldd I read the other day:

http://www.catonmat.net/blog/ldd-arbitrary-code-execution/

Fun

Matt

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Push cron and system E-Mails to RSS (email2rss?)

2010-01-21 Thread Simon Males
Hello,

I'm interested to hear how system administrators handle the plethora
of e-mails that servers tend to generate, especially ones with various cron
jobs.

Is the typical practice to add .forwards and consume the reports
with the rest of your e-mail? How about pushing them to an RSS feed,
which isn't as intrusive?
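
By .forwards I mean nothing fancier than something like this (the address is made
up for the example):

    # ~/.forward - shovel all local mail for this account off to one mailbox
    cron-reports@example.com

    # or the system-wide equivalent in /etc/aliases, followed by running newaliases:
    root: cron-reports@example.com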

I'll occasionally jump onto the system and run mail/mutt and wish I never did.

-- 
Simon Males
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ldd report from rkhunter - Update

2010-01-21 Thread Alan L Tyree
On Thu, 21 Jan 2010 15:54:01 -0600
Rodolfo Martínez  wrote:

> Hi Alan,
> 
> You can find what package provides the ldd program, and then verify
> the integrity of the package. If it really changed I think you should
> look for any suspicious activity in your server.
> 
> I think you can find the package with dpkg -S $(which ldd) and you can
> check its integrity with debsum.
> 
> ldd shouldn't change, unless you have updated your system.

Just checking the Debian Security site
( http://www.debian.org/security/) I see that it was updated for the
amd64 architecture.

Thanks for the lesson on how to check out this sort of thing.

Cheers,
Alan

> 
> Rodolfo Martínez
> Dirección de Proyectos
> Aleux México | http://www.aleux.com
> 
> 
> 
> On Thu, Jan 21, 2010 at 3:27 PM, Alan L Tyree 
> wrote:
> > Dear SLUGGERS,
> >
> > I just got this report from rkhunter on my machine:
> >
> > Warning: The file properties have changed:
> >         File: /usr/bin/ldd
> >         Current inode: 331476    Stored inode: 17196
> >         Current file modification time: 1263451668
> >         Stored file modification time : 1231069314
> >
> >
> > I see that ldd prints the shared libraries required by each program,
> > but I don't understand why it should have been changed or if I
> > should be worried about it.
> >
> > I ran chkrootkit and it showed no warnings. System is Debian Lenny
> > amd64.
> >
> > What does it all mean? Thanks for help.
> >
> > Alan
> >
> >
> > --
> > Alan L Tyree                    http://www2.austlii.edu.au/~alan
> > Tel:  04 2748 6206
> >
> > --
> > SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> > Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
> >
> 


-- 
Alan L Tyree                    http://www2.austlii.edu.au/~alan
Tel:  04 2748 6206

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ldd report from rkhunter

2010-01-21 Thread Alan L Tyree
On Thu, 21 Jan 2010 15:54:01 -0600
Rodolfo Martínez  wrote:

> Hi Alan,
> 
> You can find what package provides the ldd program, and then verify
> the integrity of the package. If it really changed I think you should
> look for any suspicious activity in your server.
> 
> I think you can find the package with dpkg -S $(which ldd) and you can
> check its integrity with debsum.

OK, it is in libc6 and debsums checked out OK.

> 
> ldd shouldn't change, unless you have updated your system.

I accept the regular Lenny security updates. I can't remember if libc6
was one of them or not.

Thanks for your help.

alan

> 
> 
> Rodolfo Martínez
> Dirección de Proyectos
> Aleux México | http://www.aleux.com
> 
> 
> 
> On Thu, Jan 21, 2010 at 3:27 PM, Alan L Tyree 
> wrote:
> > Dear SLUGGERS,
> >
> > I just got this report from rkhunter on my machine:
> >
> > Warning: The file properties have changed:
> >         File: /usr/bin/ldd
> >         Current inode: 331476    Stored inode: 17196
> >         Current file modification time: 1263451668
> >         Stored file modification time : 1231069314
> >
> >
> > I see that ldd prints the shared libraries required by each program,
> > but I don't understand why it should have been changed or if I
> > should be worried about it.
> >
> > I ran chkrootkit and it showed no warnings. System is Debian Lenny
> > amd64.
> >
> > What does it all mean? Thanks for help.
> >
> > Alan
> >
> >
> > --
> > Alan L Tyree                    http://www2.austlii.edu.au/~alan
> > Tel:  04 2748 6206
> >
> > --
> > SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> > Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
> >
> 


-- 
Alan L Tyree                    http://www2.austlii.edu.au/~alan
Tel:  04 2748 6206

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ldd report from rkhunter

2010-01-21 Thread Rodolfo Martínez
Hi Alan,

You can find what package provides the ldd program, and then verify
the integrity of the package. If it really changed I think you should
look for any suspicious activity in your server.

I think you can find the package with dpkg -S $(which ldd), and you can
check its integrity with debsums.
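
Something along these lines should do it (just a sketch - the package name is
whatever dpkg -S reports, which on Debian Lenny turns out to be libc6, and debsums
may need installing first):

    dpkg -S $(which ldd)       # e.g. "libc6: /usr/bin/ldd"
    apt-get install debsums    # if it isn't already there
    debsums -s libc6           # -s = silent: only print files whose md5sums don't match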

ldd shouldn't change, unless you have updated your system.


Rodolfo Martínez
Dirección de Proyectos
Aleux México | http://www.aleux.com



On Thu, Jan 21, 2010 at 3:27 PM, Alan L Tyree  wrote:
> Dear SLUGGERS,
>
> I just got this report from rkhunter on my machine:
>
> Warning: The file properties have changed:
>         File: /usr/bin/ldd
>         Current inode: 331476    Stored inode: 17196
>         Current file modification time: 1263451668
>         Stored file modification time : 1231069314
>
>
> I see that ldd prints the shared libraries required by each program,
> but I don't understand why it should have been changed or if I should
> be worried about it.
>
> I ran chkrootkit and it showed no warnings. System is Debian Lenny
> amd64.
>
> What does it all mean? Thanks for help.
>
> Alan
>
>
> --
> Alan L Tyree                    http://www2.austlii.edu.au/~alan
> Tel:  04 2748 6206
>
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Ldd report from rkhunter

2010-01-21 Thread Alan L Tyree
Dear SLUGGERS,

I just got this report from rkhunter on my machine:

Warning: The file properties have changed:
 File: /usr/bin/ldd
 Current inode: 331476    Stored inode: 17196
 Current file modification time: 1263451668
 Stored file modification time : 1231069314


I see that ldd prints the shared libraries required by each program,
but I don't understand why it should have been changed or if I should
be worried about it.

I ran chkrootkit and it showed no warnings. System is Debian Lenny
amd64.

What does it all mean? Thanks for help.

Alan


-- 
Alan L Tyree                    http://www2.austlii.edu.au/~alan
Tel:  04 2748 6206

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] SLUG January Monthly Meeting

2010-01-21 Thread Tim 'mithro' Ansell
  You can read the full version of this announcement at
 http://slug.org.au/node/122

== Summary ==

Date: Friday the 29th of January (Friday next week).
Start Time: Arrive at 6:15pm for a 6:30pm start
Format: Lightning Talks, BOFs, Pizza Dinner
Where: Google Australia, opposite Star City

== SLUG January Monthly Meeting ==

Instead of running two 45-minute talks, we will be having two sets of
Lightning Talks.

For those who don't know what a Lightning Talk is: it's a very short
talk giving a quick overview of a subject. They are great because if
you are not interested in the current subject you only have to listen for 5
minutes. I personally have found that I always learn something cool,
exciting or new!

The first section will be "General Lightning Talks": each person will
get 5 minutes (and 5 minutes only) to talk about anything, from something as
simple as "The new idea I want help with" to "My summary of this 45
minute talk" or even "5 cool facts about Rusty Russell".

The second section will be "In-depth Lightning Talks": each person
will get 7 minutes to talk about something a bit more in-depth.

*Slots are still available*, so if you are interested in talking, please
sign up at the following URL:
   https://spreadsheets.google.com/ccc?key=t6Is2nMnHr3Q-Xt7cOAHpkQ&hl=en


= Meeting Details =

SLUG is the very mis-named Sydney Linux User Group. We are a general
Open Source interest group which runs our primary event on the last
Friday of every month (except December). Meetings are open to the
general public, and are free of charge.

Our venue is Google, Level 5, 48 Pirrama Road, Pyrmont. It's across the
road from Star City Casino. A map of the area can be found here[1], and
public transit directions are at [2]. Appropriate signage and directions
will be posted around the building.

You will need to sign-in to enter the venue. This can be performed when
you arrive, but to save time we recommend that you do so online
beforehand at Eventbrite[3]. 

If you are unsure, please sign up as a 'maybe'. This allows us to
organise adequate meeting space and facilities. You do not need to
create an account to indicate your attendance.

= Meeting Schedule =

We start at 18.30 but we ask that people arrive at least 15 minutes
early so we can all get into the building and start on time. Please do
not arrive before 18.00, as it may hinder business activities for our
host!

See here[5] for an explanation of the segments.

   * 18.15: Open Doors
   * 18.30: Announcements, News, Introductions
   * 18.45: General Talk
   * 19.30: Intermission
   * 19.45: In-Depth Talk
   * 20.30: Dinner

 BoFs and the Hackerspace run from the time the doors open.

= Bird of a Feather (BoF) Sessions =

The list of BoFs at the moment is:

* SLUGlets - our regular forum for newbies and desktop users

If you would like to run a BoF, please discuss on the SLUG Activities
mailing list[4].

= Hacker Space =

We have heaps of room available to us at Google. If the talks do not
grab you, feel free to come along and hack away on your favourite
project in the designated Hacker Space.

= Dinner =

For dinner, we order in a selection of pizzas. The cost is $10 per
head, and we will be collecting money from the beginning of the
meeting. If you have any particular dietary requirements (e.g.
vegetarian), let us know beforehand. Dinner is a great way to
socialise and learn in a relaxed atmosphere :)

For those who want to continue the conversation after dinner, some of
us will be heading to a pub in the local area.


[1] http://tinyurl.com/ParkingPyrmont
[2] http://wiki.slug.org.au/howtogetthere
[3] http://slug.eventbrite.com/
[4] http://lists.slug.org.au/listinfo/activities
[5] http://www.slug.org.au/meetings/meetingformat

--
SLUG Committee

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-21 Thread justin randell
hi,

2010/1/21 Peter Rundle :
> I said it might take a few seconds, I didn't say it was computationally
> heavy.

fair enough, but still worth questioning, i think. for a typical app,
i'd be concerned if a fat apache child process was spending more than
a quarter to a third of a second servicing a single request.

cheers
justin

>
>
> justin randell wrote:
>>
>> hi,
>>
>> 2010/1/21 Peter Rundle :
>>>
>>> It's alleged Ken Foskey did scribe:
>>>
 You could try closing STDOUT which will tell apache that your script has
 stopped output.
>>>
>>> This is interesting idea, I think I will give that a try if I can find
>>> out
>>> how to get hold of the STDOUT file pointer.
>>>
 In perl I executed a background task with an system( "command &" ); to
 perform the background tasks.  I then emailed a reponse to the client to
 tell them the job was done.
>>>
>>> That's the kinda thing I need to do. I was hoping to avoid doing a system
>>> command because the action I need to do is easily done right away in the
>>> php
>>> (database connection is already open with right privileges etc). I just
>>> need
>>> to let the browser know that there's not gonna be any more output, it's
>>> finished go and render the page and be happy. If I call a system command
>>> I
>>> have to pass all the info I current have in the application open a new
>>> connection to the database in the other process etc. Doable but if I can
>>> just close the network connection that'd be neater.
>>>
>>> Cron jobs aren't the go, this is an event driven task that needs to
>>> happen
>>> when the event occurs, not some minutes/hours later when the cron jobs
>>> wakes
>>> up at the specified interval.
>>
>> i'm interested in the requirements that led to this problem. to be
>> honest, it sounds a bit fishy from a design point of view. maybe it
>> just has to be that way, but requiring big chunks of computation that
>> have to happen straight away, are triggered by network requests (that
>> don't need to see the results of the processing in real time) is not
>> something i'd allow unless absolutely necessary. at the very least,
>> i'd want the resource that triggers that access controlled.
>>
>> sorry if this is a blind alley, but this is a problem i would be
>> trying *not* to solve if possible. any architecture that requires this
>> will be harder to scale and easier to DOS, which might not bite you
>> straight away, but will probably bite you at some point.
>>
>> so, the client request that triggers the processing doesn't see the
>> results. what is it about the app that requires it to happen straight
>> away? is it a consistency issue - no other client should see the site
>> before the processing is done? would it be enough for other clients to
>> just see the site with all or none of the processing finished?
>>
>> cheers,
>> justin
>
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Wireless Broadband?

2010-01-21 Thread Kevin Shackleton
A variety of responses.  I would like to ask - why have a tricky
device-dependent USB driver at all?  The Optus Huawei, the Bigpond
whatever-it-is hub or a 3rd-party device like an Ericsson W25/W35 is a
network device.  Sure, it's not so handy since it requires a power supply
and it's a bit lumpier for cruising the M4, but if you had a back-seat
passenger wanting internet access too you would go for it.

I used to have to keep a remote-site Bigpond Asiasat modem going that
had a USB interface.  It was a frabjous day when Telstra replaced it
with a new modem with ethernet.  There was just a single Windows machine
hanging off it, causing all sorts of driver heartache.  Now it runs
through a wireless hub with several people using it trouble-free.

In my book USB is in the same camp as Bluetooth as a 'solution' to
avoid.

Kevin.

On Wed, 2010-01-20 at 20:44 -0800, j blrown wrote:
> I've been looking at getting a wireless Broadband Prepaid kit from
> either Vodaphone,Optus or Bigpond.
> 
> I just want it to use in addition to my ADSL Broadband connection, and
> will use it with either my Laptop or Netbook.
> 
> I'm running Ubuntu in one form or another, from 8.10 to 9.04.
> 
> Any advice Pros/Cons re the above providers and their supplied modems?
> 
> I've had no experience with wireless connectivity.
> 
> Thanks
> 
> Bill
> -- 
>   j blrown
>   gonz...@fastmail.fm
> 
> -- 
> http://www.fastmail.fm - The way an email service should be
> 

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Wireless Broadband?

2010-01-21 Thread UnclePete
I have a laptop with a Sierra Wireless modem built-in. Stuffed if I can 
get the sodding thing to work on Ubuntu Karmic. The SIM (3 prepaid) 
works fine with other modem dongles on Ubuntu and also works with the PC 
modem on Windoze.


I guess the moral is - check before you buy :)

BTW the 3 prepaid is a very good deal - $149 for 12GB and 12 months to 
use it. If you don't use it in 12 months, you can rollover the balance. 
I'm mobile between the CBD, western suburbs, south western suburbs and 
southern suburbs and the coverage is pretty good. Also the network is 
not as congested as, say, Optus which can be unusable during normal 
online time.


Pete

Ben Donohue wrote:
From memory you can get them in two general packages... time online or 
monthly download.


you really have to watch the downloads of these...

If you go over, you get slugged quite heavily. They are not capped at 
whatever and then shaped. You get hit for every additional MEGABYTE! 
(unless their plans have changed recently...)


If you are not careful, you'll have the finance dept. knocking on your 
door.


Also useful to record the IMEI number and serial number and anything 
else you can, 'cause if you lose it you want to disable it ASAP before 
someone else uses it to death and you get the $$$ bill at the end of 
the month. Treat it like a credit card with the PIN written on it. 
Anyone else who finds/steals it can simply plug it in and use it.


Ben




Amos Shapira wrote:

Since this became a discussion of broadband modems - I got an OK from
my workplace to buy the Telstra Turbo USB pre-paid modem (currently
costs $149), but so far Google, Whirlpool and ubuntuforums have failed to
provide a positive answer about its hardware compatibility with Linux
(Ubuntu 9.10).

Does anyone here have positive experience with this modem?

Thanks

-Amos

On 1/21/10, j blrown  wrote:
 

I've been looking at getting a wireless Broadband Prepaid kit from
either Vodaphone,Optus or Bigpond.

I just want it to use in addition to my ADSL Broadband connection, and
will use it with either my Laptop or Netbook.

I'm running Ubuntu in one form or another, from 8.10 to 9.04.

Any advice Pros/Cons re the above providers and their supplied modems?

I've had no experience with wireless connectivity.

Thanks

Bill
--
  j blrown
  gonz...@fastmail.fm

--
http://www.fastmail.fm - The way an email service should be

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Wireless Broadband?

2010-01-21 Thread Amos Shapira
2010/1/21 Ben Donohue :
> From memory you can get them in two general packages... time online or
> monthly download.
>
> you really have to watch the downloads of these...

Thanks for the warning but unless Telstra completely redefined the
meaning of "pre-paid" I shouldn't be concerned about over-charges - as
soon as I run out of credit the connection should drop, end of story.
Or should I?

The modem is meant to be rotated between people on call which might
need it to get away from their home computer while on duty.
Luckily all our ops guys have Ubuntu on their laptops so it limits the
search for support.

Cheers,

--Amos
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Copying HDD

2010-01-21 Thread Amos Shapira
2010/1/20 Ken Foskey :
> Simply create a new partition and copy  the contents.  Use cp -r /path
> /mounted/new/path/

Preferably "cp -a" - which should preserve more of the original file's
attributes.
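
For example (device name and mount point invented for illustration):

    mount /dev/sdb1 /mnt/newdisk
    cp -a /home/. /mnt/newdisk/    # -a = -dR --preserve=all: recurses and keeps
                                   # ownership, permissions, symlinks and timestamps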

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-21 Thread Peter Rundle

I said it might take a few seconds, I didn't say it was computationally heavy.


justin randell wrote:

hi,

2010/1/21 Peter Rundle :

It's alleged Ken Foskey did scribe:


You could try closing STDOUT which will tell apache that your script has
stopped output.

This is interesting idea, I think I will give that a try if I can find out
how to get hold of the STDOUT file pointer.


In perl I executed a background task with an system( "command &" ); to
perform the background tasks.  I then emailed a reponse to the client to
tell them the job was done.

That's the kinda thing I need to do. I was hoping to avoid doing a system
command because the action I need to do is easily done right away in the php
(database connection is already open with right privileges etc). I just need
to let the browser know that there's not gonna be any more output, it's
finished go and render the page and be happy. If I call a system command I
have to pass all the info I current have in the application open a new
connection to the database in the other process etc. Doable but if I can
just close the network connection that'd be neater.

Cron jobs aren't the go, this is an event driven task that needs to happen
when the event occurs, not some minutes/hours later when the cron jobs wakes
up at the specified interval.


i'm interested in the requirements that led to this problem. to be
honest, it sounds a bit fishy from a design point of view. maybe it
just has to be that way, but requiring big chunks of computation that
have to happen straight away, are triggered by network requests (that
don't need to see the results of the processing in real time) is not
something i'd allow unless absolutely necessary. at the very least,
i'd want the resource that triggers that access controlled.

sorry if this is a blind alley, but this is a problem i would be
trying *not* to solve if possible. any architecture that requires this
will be harder to scale and easier to DOS, which might not bite you
straight away, but will probably bite you at some point.

so, the client request that triggers the processing doesn't see the
results. what is it about the app that requires it to happen straight
away? is it a consistency issue - no other client should see the site
before the processing is done? would it be enough for other clients to
just see the site with all or none of the processing finished?

cheers,
justin

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Copying HDD

2010-01-21 Thread Ken Foskey
Simply create a new partition and copy the contents.  Use cp -r /path
/mounted/new/path/


If you have to use dd then create a partition exactly the same size;
gparted can grow it afterwards.
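
A rough sketch of that route, assuming an ext2/3 filesystem (the device names are
placeholders - check fdisk -l before copying anything):

    # copy the old partition onto a new one of at least the same size
    dd if=/dev/sda1 of=/dev/sdb1 bs=4M conv=noerror,sync
    # then grow the filesystem into whatever extra space the new partition has
    # (gparted does the same job graphically, and can enlarge the partition as well)
    e2fsck -f /dev/sdb1
    resize2fs /dev/sdb1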


Ken Foskey
On the move

On 20/01/2010, at 2:00 PM, Mike Andy  wrote:


I've been thus far unable to do so - maybe you can explain how.

for example, if i do a dd from a 120Gb to a 150Gb and then enter into
something like gparted or fdisk there seems to be no way i can simply
expand the disk beyond the original 120Gb boundaries. If there was
unformatted/unpartitioned space within that 120Gb then things can be
moved around there but not outside the original disk boundaries.

On Tue, Jan 19, 2010 at 10:38 AM, Jake Anderson  
 wrote:

Mike Andy wrote:


from my experience when you use dd you cannot resize after that
because it's made an exact bit by bit clone of that hard drive



which you then can resize with the numerous partition resizing tools out
there.


if you're concerned about how much you're downloading use parted
magic, much smaller than ubuntu and includes both gparted and
clonezilla all in one





--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-21 Thread justin randell
hi,

2010/1/21 Peter Rundle :
> It's alleged Ken Foskey did scribe:
>
>> You could try closing STDOUT which will tell apache that your script has
>> stopped output.
>
> This is interesting idea, I think I will give that a try if I can find out
> how to get hold of the STDOUT file pointer.
>
>> In perl I executed a background task with an system( "command &" ); to
>> perform the background tasks.  I then emailed a reponse to the client to
>> tell them the job was done.
>
> That's the kinda thing I need to do. I was hoping to avoid doing a system
> command because the action I need to do is easily done right away in the php
> (database connection is already open with right privileges etc). I just need
> to let the browser know that there's not gonna be any more output, it's
> finished go and render the page and be happy. If I call a system command I
> have to pass all the info I current have in the application open a new
> connection to the database in the other process etc. Doable but if I can
> just close the network connection that'd be neater.
>
> Cron jobs aren't the go, this is an event driven task that needs to happen
> when the event occurs, not some minutes/hours later when the cron jobs wakes
> up at the specified interval.

i'm interested in the requirements that led to this problem. to be
honest, it sounds a bit fishy from a design point of view. maybe it
just has to be that way, but requiring big chunks of computation that
have to happen straight away, are triggered by network requests (that
don't need to see the results of the processing in real time) is not
something i'd allow unless absolutely necessary. at the very least,
i'd want the resource that triggers that access controlled.

sorry if this is a blind alley, but this is a problem i would be
trying *not* to solve if possible. any architecture that requires this
will be harder to scale and easier to DOS, which might not bite you
straight away, but will probably bite you at some point.

so, the client request that triggers the processing doesn't see the
results. what is it about the app that requires it to happen straight
away? is it a consistency issue - no other client should see the site
before the processing is done? would it be enough for other clients to
just see the site with all or none of the processing finished?

cheers,
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html