Re: [SLUG] SAMBA config experts?

2010-11-16 Thread justin randell
hi,

On 17 November 2010 14:38, DaZZa  wrote:
> Folks.
>
> I'm trying to setup a completely basic SAMBA server on a CentOS box
> which has been delivered for demonstration purposes.
>
> I want something dead simple - one directory, world writable to anyone
> who browses to it.
>
> I've put the following smb.conf file on the box
>
> [global]
>        workgroup = demo
>        server string = SAMBA Server
>        load printers = no
>        log file = /var/log/log.%m
>        max log size = 0
>        security = share
>        encrypt passwords = no
>        unix password sync = no
>        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
>        dns proxy= no
>        host msdfs = no
>        passdb backend = smbpasswd
>
> [transfer]
>        comment = Export
>        path = /home/demo/dirwatched/
>        read only = no
>        public = yes
>        browsable = yes
>        writable = yes

perhaps you need:

   guest ok = yes

and then make sure you have:

   guest account = $UserWhoCanWriteToYourShare
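
as a sketch, that might end up looking something like this (assuming a
local 'demo' user owns /home/demo/dirwatched and can write to it):

[global]
        ...
        guest account = demo

[transfer]
        ...
        guest ok = yes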

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Is Windows XP much faster under hardware virtualization?

2010-10-31 Thread justin randell
hi,

On 1 November 2010 09:16, Daniel Pittman  wrote:
>
>>> Are there different types or levels of hardware virtualization available
>>> off the shelf, or it is one-size-fits-all?
>>
>> Well there’s Intel VT-x and there’s AMD-V, which are the duopoly’s
>> equivalents. Both are supported by VirtualBox, VMware, KVM, etc.
>>
>> Personally, I think that if you buy a new PC with hardware virtualisation,
>> the performance benefit you will see will be coming from the faster hardware
>> more than the VT-x/AMD-V support.
>
> *nod*  Also, keep in mind that one of the biggest factors in VM performance is
> going to be I/O for most users.
>
> That means that the performance of your paravirtualized devices is the key for
> getting better performance - and that usually just means picking a VM solution
> with appropriate "guest" drivers and all.
>
> (Unless you plan on mapping physical hardware into the VM, in which case VT-d
>  or the AMD equivalent makes a difference.)

yep, I/O is normally a killer. at work, all dev machines have (at
least) two physical drives, so VMs can be given a disc separate from
the host OS. we find that's the simplest, best bang-for-buck way to
get good VM performance.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Banning non Australian IP's from Aussie ecommerce site

2010-10-11 Thread justin randell
hi,

On 11 October 2010 17:54, Nick Andrew  wrote:
> On Mon, Oct 11, 2010 at 16:31, justin randell  
> wrote:
>
>> unless there's some really good reason not to, i'd strongly advise
>> securing your ssh so that it's public-key only. i've seen too many
>> places that rely on limiting the amount of ssh attempts get hacked to
>> put any faith in that method any more.
>
> Don't discount defense in depth. Hostile IP addresses found by ssh
> rate-limiting can be blocked from all ports. It doesn't preclude use of
> keys instead of passwords.

discount? how much is defence in depth going for these days? ;-)

perhaps i wasn't clear, but when i said "relying on rate limiting is
bad", i didn't mean to imply "using rate-limiting is evil in all forms
no matter what".

if i had to choose between rate limiting and strong passwords vs keys,
i'd choose keys.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Banning non Australian IP's from Aussie ecommerce site

2010-10-10 Thread justin randell
hi,

On 11 October 2010 15:09, Ben Donohue  wrote:
>  Thanks all,
>
> I'm seeing mostly brute force password attacks on ssh.
>
> I've also found configserver firewall...
>
> Anyway still looking at what is around.

unless there's some really good reason not to, i'd strongly advise
securing your ssh so that it's public-key only. i've seen too many
places that rely on limiting the amount of ssh attempts get hacked to
put any faith in that method any more.
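
the relevant bits of /etc/ssh/sshd_config are just (a minimal sketch -
make sure your key login works before restarting sshd, or you'll lock
yourself out):

   PubkeyAuthentication yes
   PasswordAuthentication no
   ChallengeResponseAuthentication no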

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-06-30 Thread justin randell
hi,

On 30 June 2010 20:17, Richard Ibbotson  wrote:
> On Wednesday 30 June 2010 06:08:42 justin randell wrote:
>> this looks like a wordpress code/mysql issue to me:
>  > some basic things - do you have APC enabled for php? if you don't,
>> apt-get install php5-apc, restart apache, and you'll get immediate
>> performance gains.
>
> Hmmm Google search... "ubuntu 10.04 php5-apc" ...
>
> http://constantshift.com/install-php-fpm-5-3-2-on-ubuntu-10-04-lucid-
> lynx/
>
> Installed that.

sorry, that was a typo, should be

apt-get install php-apc

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-06-29 Thread justin randell
hi,

this looks like a wordpress code/mysql issue to me:

jus...@justinlappy:~$ time wget http://sleepypenguin.homelinux.org/blog/
--2010-06-30 14:54:39--  http://sleepypenguin.homelinux.org/blog/
--- snip ---
2010-06-30 14:54:51 (2.35 KB/s) - `index.html' saved [25659]
real    0m11.861s
user    0m0.000s
sys     0m0.020s

jus...@justinlappy:~$ time wget
http://sleepypenguin.homelinux.org/blog/wp-content/themes/suffusion/style.css
--2010-06-30 14:59:06--
http://sleepypenguin.homelinux.org/blog/wp-content/themes/suffusion/style.css
--- snip ---
2010-06-30 14:59:08 (31.2 KB/s) - `style.css' saved [63253/63253]
real    0m2.780s
user    0m0.000s
sys     0m0.010s

serving a static file is way, way faster, so it's not primarily a
network or dns issue.

some basic things - do you have APC enabled for php? if you don't,
apt-get install php5-apc, restart apache, and you'll get immediate
performance gains.
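
(a quick way to check whether it's already loaded - assuming the cli
and apache share the same extensions - is something like

   php -m | grep -i apc

which should print "apc" once the extension is installed and enabled.)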

i'm not a wordpress dev, so i don't know if they have any devel
modules that can give you info about that sort of stuff, but i'd look
for one and see what it tells you. does wordpress have any basic,
built-in caching you could turn on?

if that's not an option, then you can use xhprof or xdebug to get some
raw numbers. failing that, just try to cut the problem in half a few
times with some simple debug patches to wordpress that just write
times to a log file so you can zero in on the low hanging fruit.
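
the crude version of such a debug patch is only a couple of lines,
something along these lines (the log message is purely illustrative):

   $start = microtime(TRUE);
   // ... the chunk of wordpress code you suspect ...
   error_log(sprintf('suspect chunk took %.3fs', microtime(TRUE) - $start));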

On 30 June 2010 09:37, Richard Ibbotson  wrote:
> On Wednesday 30 June 2010 00:25:24 Mike Lampard wrote:
>> I'd suggest the WP-Cache or WP-Super-Cache plugins, which
>> precompile the php to html so the server doesn't have to recompile
>> the pages on each access. The GZIP-Output wordpress plugin is also
>> recommended.
>
> I'll try those.  Meanwhile I've been hacking the DNS and bind9
> configuration.  Looks like it might have taken a second or two off the
> download time.  More like downloading a web page from planet earth
> rather than the moon :)

if you control all the moving parts, i'd suggest pushing the
compression further up|down the stack (depending on how you look at
it) to apache by using mod_deflate, rather than using php. you'll want
compression for your static files as well, and this is the simplest
and most cpu efficient way to get it.
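
a minimal sketch of that for apache 2.x (enable the module first, e.g.
a2enmod deflate on debian/ubuntu, then something like this in the
vhost or server config):

   AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/x-javascript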

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] WordPress & sessions [Was: WordPress, PHP]

2010-05-09 Thread justin randell
hi,

On 9 May 2010 09:30, Amos Shapira  wrote:
> On 8 May 2010 15:48, Jeff Waugh  wrote:
>>
>> 
>> > was it a desire to use a non-file based store and an aversion to using
>> > custom session handlers? was it a desire to control the strength of the
>> > cookie hash?
>
> Without getting into WordPress or the session storage options it
> provides - In principle I'd prefer none-file-based permanent session
> store simply to allow multiple front server to share the load of
> serving any session from any server.
>
> This usually leads to client-server style databases or things like
> memcachedb in redundant configuration.

yes, every time i see a load-balanced setup with sticky sessions, i
cringe. "i know, now that we've eliminated our single points of
failure, let's do extra work to bring a SPOF back into our redundant
setup."

in drupal land, there's been a bit of interest in mongo db for session
(and other) storage, as a lot more of the backend in drupal 7 is
pluggable. some of the people working on porting examiner.com to
drupal 7 released this, which includes using a mongodb session store:

http://drupal.org/project/mongodb

i'm not sold on replacing relational dbs for most things yet, but a
key-value session store seems like a good fit.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-05-07 Thread justin randell
hi,

On 8 May 2010 14:34, Jeff Waugh  wrote:
> 
>
>> > [Sun May 02 16:17:47 2010] [error] [client 10.0.0.2] PHP Warning:
>> > session_start() [function.session-
>> > start]: Cannot send session cache limiter - h
>> > [Sun May 02 16:17:55 2010] [error] [client 10.0.0.2] PHP Deprecated:
>> > Function set_magic_quotes_runtime() is deprecated in /var/www/blog/wp-
>> > settings.php on line 27, referer: http://sl
>>
>> what that's telling you is that wordpress core code will not run on
>> php 5.3 without throwing heaps of warnings.
>
> That is not the case, however, certainly not with WordPress 2.9 (and I'm
> pretty sure, all the way back to 2.7 and earlier)... in normal operation,
> there should be *no* warnings whatsoever running WordPress core.

i stand corrected. wordpress will throw E_DEPRECATED for php 5.3, not
warnings, so it's possible to adjust your error_reporting settings to
deal with that without ignoring warnings.
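
for example, something like this in php.ini (or the equivalent
error_reporting() call early in the request) keeps warnings visible
while dropping the deprecation noise:

   error_reporting = E_ALL & ~E_DEPRECATED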

> The second last log line, and inaccuracy of "line 27" (given that call is on
> line 18 in WordPress 2.9), seem suspicious to me... sounds like Richard has
> something else running on every request? Notably session_start is not called
> in the WordPress codebase.

well colour me surprised, i'd just ass u me'd that session_start was
wordpress code. learn something new about wordpress every day.

that really got me curious, so i had a poke around the 2.9.2 code
base. jeff, i'm wondering what led to the decision to reimplement php
session handling in custom code? it seems the code that pulls the
$user from a permanent store via an encrypted cookie value is doing
exactly what sessions are for?

was it a desire to use a non-file based store and an aversion to using
custom session handlers? was it a desire to control the strength of
the cookie hash?

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-05-02 Thread justin randell
hi,

On 3 May 2010 11:56, david  wrote:
>
> justin randell wrote:
>
>> can i interest you in a drupal 7 blog? works like a charm on php 5.3 ;-)
>
> What about Drupal 6 and php 5.3??
>
> I can't/don't want to upgrade to Drupal 7 just yet.

and that's fair enough. drupal 6 core is 99% there (there are still
some E_DEPRECATED warnings issued), but will be 100% when the patches
in this issue land:

http://drupal.org/node/360605

as for contrib modules, you'll have to take that module by module.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-05-02 Thread justin randell
hi,

On 3 May 2010 10:25, Richard Ibbotson  wrote:
>  Justin
>
>> i guess the wordpress devs would have read this:
>>
>> http://au.php.net/manual/en/migration53.deprecated.php
>>
>> and just decided that people using 5.3 can just deal with those
>> warnings in their logs on every, single, request.
>
> Was hoping someone was going to give me a simple answer.  I've always
> thought that a good way to learn was by making mistakes.  As long as
> it's on my own Apache box and not somewhere else.
>
> I just have to hope that I can find a way to fix my own Wordpress
> installation.  I was thinking I wouldn't have to re-install from
> scratch but I might have to do that anyway.
>
> Thanks very much :)

given that the current stable ubuntu, the coming stable debian and
redhat, and just about everything else stable in linux land will/does
ship php 5.3, maybe a patch set will develop?

the origins of this policy can be seen here, where matt mullenweg
spells out why wordpress will still run with php4 code:

http://ma.tt/2007/07/on-php/

that was nearly 3 years ago, but i guess it still holds.

can i interest you in a drupal 7 blog? works like a charm on php 5.3 ;-)

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-05-02 Thread justin randell
hi,

On 3 May 2010 01:31, Richard Ibbotson  wrote:
>
> [Sun May 02 16:16:38 2010] [error] [client 10.0.0.2] PHP Warning:
> session_start() [function.session-
> start]: Cannot send session cache limiter - h
> 

you'll have to turn off display_errors in your php.ini or via ini_set
to make wordpress run properly (that is, with working sessions). you
could set error reporting to ignore warnings, but that's probably the
worse of two bad options.
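
concretely, the first option is either

   display_errors = Off

in php.ini, or something like

   ini_set('display_errors', '0');

early in the request, before anything has been sent to the browser.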

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: WordPress, PHP ... Re: Ubuntu 10.04

2010-05-02 Thread justin randell
hi,

On 3 May 2010 01:31, Richard Ibbotson  wrote:
>
> Checked the Flora Relief theme.  Seems to be up to date.  Tried to use
> some other themes.  Same result.  The error.log reveals something but
> I'm not quite sure what it's telling me..
>
> Warning:  Directive 'magic_quotes_gpc' is deprecated in PHP 5.3
> and greater in Unknown on line 0
> [Sun May 02 16:16:38 2010] [error] [client 10.0.0.2] PHP Deprecated:
> Function set_magic_quotes_runtime() is deprecated in /var/www/blog/wp-
> settings.php on line 27, referer: http://sl
> [Sun May 02 16:16:38 2010] [error] [client 10.0.0.2] PHP Warning:
> session_start() [function.session-
> start]: Cannot send session cache limiter - h
> 
> Warning:  Directive 'register_long_arrays' is deprecated in PHP
> 5.3 and greater in Unknown on line 0
> 
> Warning:  Directive 'magic_quotes_gpc' is deprecated in PHP 5.3
> and greater in Unknown on line 0
> [Sun May 02 16:17:14 2010] [error] [client 10.0.0.2] PHP Deprecated:
> Function set_magic_quotes_runtime() is deprecated in /var/www/blog/wp-
> settings.php on line 27, referer: http://sl
> [Sun May 02 16:17:14 2010] [error] [client 10.0.0.2] PHP Warning:
> session_start() [function.session-
> start]: Cannot send session cache limiter - h
> [Sun May 02 16:17:29 2010] [error] [client 10.0.0.2] PHP Deprecated:
> Function set_magic_quotes_runtime() is deprecated in /var/www/blog/wp-
> settings.php on line 27, referer: http://sl
> [Sun May 02 16:17:29 2010] [error] [client 10.0.0.2] PHP Warning:
> session_start() [function.session-
> start]: Cannot send session cache limiter - h
> [Sun May 02 16:17:47 2010] [error] [client 10.0.0.2] PHP Deprecated:
> Function set_magic_quotes_runtime() is deprecated in /var/www/blog/wp-
> settings.php on line 27, referer: http://sl
> [Sun May 02 16:17:47 2010] [error] [client 10.0.0.2] PHP Warning:
> session_start() [function.session-
> start]: Cannot send session cache limiter - h
> [Sun May 02 16:17:55 2010] [error] [client 10.0.0.2] PHP Deprecated:
> Function set_magic_quotes_runtime() is deprecated in /var/www/blog/wp-
> settings.php on line 27, referer: http://sl

what that's telling you is that wordpress core code will not run on
php 5.3 without throwing heaps of warnings.

i guess the wordpress devs would have read this:

http://au.php.net/manual/en/migration53.deprecated.php

and just decided that people using 5.3 can just deal with those
warnings in their logs on every, single, request.

not much fun for those who like to develop with E_ALL so they can
catch errors straight away.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: Ubuntu 10.4

2010-04-30 Thread justin randell
hi,

On 30 April 2010 21:15, Richard Ibbotson
 wrote:
>
>> i know at work we'll be avoiding lucid until we start doing more
>> Drupal 7 work, because while drupal 6 core is 99% ok with php 5.3,
>> contributed modules are a lucky dip.
>
> Yes.  Good idea.  I was thinking that on my own Apache box it wouldn't
> matter a lot.  Mostly used for experimental stuff anyway.  Drupal 7
> seems to be a bit of a mess just now but better than it was a few
> months ago.  Hopefully Wordpress 3 might be usable when it is released
> :)

oh, oh, drupal 7 seems a bit of a mess? last time i help out a
wordpress user... ;-)

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: Ubuntu 10.4

2010-04-30 Thread justin randell
hi,

On 30 April 2010 20:12, Richard Ibbotson
 wrote:
> Hi
>
>> Josh Smith wrote:
>> > It was released this morning. First to update or install, please
>> > report results :)
>
> My Ubuntu server seems to be working fine after the upgrade.  Also, my
> Ubuntu desktop in my kitchen.  However, I now find that my Wordpress
> 2.9.2 blog postings have disappeared.
>
> http://sleepypenguin.homelinux.org/blog/
>
> any suggestions about how to fix this greatly appreciated.  Can't find
> an answer with a Google search.

i don't mess with wordpress much, but lucid ships php 5.3 by default,
and a lot of php webapps (and plugins for those) haven't been
thoroughly vetted to run smoothly on php 5.3.

try googling for wordpress 5.3 issues, and maybe throw in the names of
any plugins you have installed.

i know at work we'll be avoiding lucid until we start doing more
Drupal 7 work, because while drupal 6 core is 99% ok with php 5.3,
contributed modules are a lucky dip.

another option is to pin your php5-* packages to a karmic repository.
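
a rough sketch of that pinning (untested - it assumes you've kept a
karmic line in sources.list), in /etc/apt/preferences:

   Package: libapache2-mod-php5
   Pin: release a=karmic
   Pin-Priority: 1001

with a similar stanza for each php5-* package you have installed.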

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-21 Thread justin randell
hi,

2010/1/21 Peter Rundle :
> I said it might take a few seconds, I didn't say it was computationally
> heavy.

fair enough, but still worth questioning, i think. for a typical app,
i'd be concerned if a fat apache child process was spending more than
a quarter to a third of a second servicing a single request.

cheers
justin

>
>
> justin randell wrote:
>>
>> hi,
>>
>> 2010/1/21 Peter Rundle :
>>>
>>> It's alleged Ken Foskey did scribe:
>>>
>>>> You could try closing STDOUT which will tell apache that your script has
>>>> stopped output.
>>>
>>> This is interesting idea, I think I will give that a try if I can find
>>> out
>>> how to get hold of the STDOUT file pointer.
>>>
>>>> In perl I executed a background task with an system( "command &" ); to
>>>> perform the background tasks.  I then emailed a reponse to the client to
>>>> tell them the job was done.
>>>
>>> That's the kinda thing I need to do. I was hoping to avoid doing a system
>>> command because the action I need to do is easily done right away in the
>>> php
>>> (database connection is already open with right privileges etc). I just
>>> need
>>> to let the browser know that there's not gonna be any more output, it's
>>> finished go and render the page and be happy. If I call a system command
>>> I
>>> have to pass all the info I current have in the application open a new
>>> connection to the database in the other process etc. Doable but if I can
>>> just close the network connection that'd be neater.
>>>
>>> Cron jobs aren't the go, this is an event driven task that needs to
>>> happen
>>> when the event occurs, not some minutes/hours later when the cron jobs
>>> wakes
>>> up at the specified interval.
>>
>> i'm interested in the requirements that led to this problem. to be
>> honest, it sounds a bit fishy from a design point of view. maybe it
>> just has to be that way, but requiring big chunks of computation that
>> have to happen straight away, are triggered by network requests (that
>> don't need to see the results of the processing in real time) is not
>> something i'd allow unless absolutely necessary. at the very least,
>> i'd want the resource that triggers that access controlled.
>>
>> sorry if this is a blind alley, but this is a problem i would be
>> trying *not* to solve if possible. any architecture that requires this
>> will be harder to scale and easier to DOS, which might not bite you
>> straight away, but will probably bite you at some point.
>>
>> so, the client request that triggers the processing doesn't see the
>> results. what is it about the app that requires it to happen straight
>> away? is it a consistency issue - no other client should see the site
>> before the processing is done? would it be enough for other clients to
>> just see the site with all or none of the processing finished?
>>
>> cheers,
>> justin
>
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-21 Thread justin randell
hi,

2010/1/21 Peter Rundle :
> It's alleged Ken Foskey did scribe:
>
>> You could try closing STDOUT which will tell apache that your script has
>> stopped output.
>
> This is interesting idea, I think I will give that a try if I can find out
> how to get hold of the STDOUT file pointer.
>
>> In perl I executed a background task with an system( "command &" ); to
>> perform the background tasks.  I then emailed a reponse to the client to
>> tell them the job was done.
>
> That's the kinda thing I need to do. I was hoping to avoid doing a system
> command because the action I need to do is easily done right away in the php
> (database connection is already open with right privileges etc). I just need
> to let the browser know that there's not gonna be any more output, it's
> finished go and render the page and be happy. If I call a system command I
> have to pass all the info I current have in the application open a new
> connection to the database in the other process etc. Doable but if I can
> just close the network connection that'd be neater.
>
> Cron jobs aren't the go, this is an event driven task that needs to happen
> when the event occurs, not some minutes/hours later when the cron jobs wakes
> up at the specified interval.

i'm interested in the requirements that led to this problem. to be
honest, it sounds a bit fishy from a design point of view. maybe it
just has to be that way, but requiring big chunks of computation that
have to happen straight away, are triggered by network requests (that
don't need to see the results of the processing in real time) is not
something i'd allow unless absolutely necessary. at the very least,
i'd want the resource that triggers that access controlled.

sorry if this is a blind alley, but this is a problem i would be
trying *not* to solve if possible. any architecture that requires this
will be harder to scale and easier to DOS, which might not bite you
straight away, but will probably bite you at some point.

so, the client request that triggers the processing doesn't see the
results. what is it about the app that requires it to happen straight
away? is it a consistency issue - no other client should see the site
before the processing is done? would it be enough for other clients to
just see the site with all or none of the processing finished?

cheers,
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-20 Thread justin randell
hi,

2010/1/20 Peter Rundle :
> Hi Sluggers,
>
> I hope this question is appropriate for this list. I have a PHP web-site
> running on Apache and Linux. A PHP routine produces a page that is sent back
> to the browser, but then it has some house-keeping to do which takes some
> time, perhaps many seconds but the housekeeping doesn't result in any more
> output to the browser (any output from that point on goes to a log).
>
> What I would like to do is end/close the http request so that the browser
> gets the HTTP equivelent of an "EOF" but allow the php script to keep
> running. Now flush() does send the output to date to the browser but the
> browsers "busy" icon keeps running because the http session isn't closed
> until the php ends.

woops, ignore my last post about flush(), should have read the whole
post. i blame the wine and being on holiday in europe...

> I thought of doing a "fork" but the PHP docs say that fork doesn't work when
> php is running under apache. I could write a shell script and invoke that
> with a system/exec call from php and have the shell run into the background
> and do the house-keeping thus allowing the php to finsih, but I'm wondering
> if sluggers know of "a better way (tm)".

how immediate does this need to be? unless this really needs to run
straight away, i'd put the "needs background work" request in a simple
queue and process it via a cron script. IMHO, putting a layer between
a web request and any serious out-of-band processing is the best way
to handle these cases.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Send EOF to Browser from LAMP stack.

2010-01-20 Thread justin randell
hi,

2010/1/20 Peter Rundle :
> Hi Sluggers,
>
>
> I hope this question is appropriate for this list. I have a PHP web-site
> running on Apache and Linux. A PHP routine produces a page that is sent back
> to the browser, but then it has some house-keeping to do which takes some
> time, perhaps many seconds but the housekeeping doesn't result in any more
> output to the browser (any output from that point on goes to a log).

http://php.net/flush

and possibly

http://php.net/ob_flush

are likely the droids you are looking for.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LAMP - researching setup for hosting on multiple servers

2009-12-19 Thread justin randell
hi,

2009/12/19 Peter Rundle :
>> Personally I would try to stop using the sessions, and store the client
>> data in the cookie (assuming its not too heavy)
>> your servers are then just a pool of state machines. You can yank any at
>> any time with no loss of service (assuming you take it out of the load
>> balancer first)
>
> Yes I agree, I think this is the best solution as the application has so
> many queries to do on the database in any case before it presents any pages
> I hardly think that the minimal data stored in the session is worth any
> speed saving. Then it doesn't matter which server you hit for the "next
> query".

there are trade-offs here as well - remember that cookie data goes
back and forth with each request, so this gets painful very quickly.
personally, i think it's best to look at this case by case and use a
mixture of cookies and session data. also, you'll likely want to
encrypt the cookies if you want to put lots of user-data within them.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LAMP - researching setup for hosting on multiple servers

2009-12-18 Thread justin randell
hi,

2009/12/18 Daniel Pittman :
> justin randell  writes:
>> 2009/12/18 Daniel Pittman :
>>> justin randell  writes:
>>>> 2009/12/17 Daniel Pittman :
>
> [...]
>
>>> Anyway, I am curious to know if that is still true: if I can't modify the
>>> PHP code, can I store sessions in a database these days?
>>
>> ah, now i see what you mean. yes, its still true, unless you install a php C
>> extension that defines a session.save_handler for you to write session info
>> to a database, then you need php code.
>
> I guess the last, obvious, question is: has someone written a standard C
> extension that does that, targeting MySQL or PostgreSQL?  Google didn't give
> me a convincing answer, and I am hoping an expert can. :)

no, not that i know of.

the PECL memcache extension defines a session save handler, which is a
very fast, scalable way to do it, provided you don't mind session data
going away when your memcache instances do...
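
if you go that way, it's just a couple of lines of php.ini (assuming a
memcached instance on localhost):

   session.save_handler = memcache
   session.save_path = "tcp://127.0.0.1:11211"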

personally, i'd be more interested in an extension that used a fast,
scalable key-value style db as a session backend, rather than a
relational db with all those fancy ACID features you never use when
pulling out and sticking in session data. for example, django has a
session backend for tokyo cabinet via tokyo tyrant:

http://github.com/ericflo/django-tokyo-sessions/

python people have all the fun ;-)

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LAMP - researching setup for hosting on multiple servers

2009-12-17 Thread justin randell
hi,

2009/12/18 Daniel Pittman :
> justin randell  writes:
>> 2009/12/17 Daniel Pittman :
>>>
>>> Use session affinity in your load balancer.  No, really, with PHP it will
>>> almost certainly hurt less.  Sorry.
>>
>> i'm interested in the war-wounds that made you write that ;-)
>
> Perhaps I should confess to being semi-ignorant about PHP: it could well be
> that this was always easy, and I only found bad documentation about how to get
> it working.
>
> Way back when, during the days that PHP4 was still a going concern, and PHP5
> pretty new, the best mechanism we could find for doing sessions not-on-disk
> with PHP5 was to add a bunch of custom code to each application.
>
> Given we had a pool of something like six custom applications, two commercial
> and obfuscated with some PHP source-code-encrypted widget, the overhead of
> maintaining custom changes to the PHP code for each application was too high
> for either my tastes, or my client.
>
> As far as I could tell it wasn't possible to just change, say, PHP.ini and
> have it take care of storing all session data in the database using the
> standard mechanisms.
>
> So, there you have it: possibly poor choice of PHP applications, not written
> by us, made life painful. :)
>
>> having setup share-nothing php-heads writing session data to a database on
>> several load-balanced architectures without any issues (directly related to
>> that technique, of course), that response seems a bit blanket.
>
> It probably was, even if I noted later that things may have improved since
> I had my painful times. :)
>
> Anyway, I am curious to know if that is still true: if I can't modify the PHP
> code, can I store sessions in a database these days?

ah, now i see what you mean. yes, it's still true: unless you install
a php C extension that defines a session.save_handler for you, writing
session info to a database needs php code.

a simple, per-application way to do this is to set auto_prepend_file
to a file with db-backed session handler functions and a call to
session_set_save_handler().
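
a bare-bones sketch of such a prepend file (the actual db reads and
writes are stubbed out here - fill them in against whatever sessions
table you use):

<?php
// stub db-backed session handlers; replace the bodies with real
// reads and writes against your sessions table.
function sess_open($save_path, $name) { return true; }
function sess_close() { return true; }
function sess_read($id) { return ''; /* SELECT data for $id */ }
function sess_write($id, $data) { return true; /* UPSERT $data for $id */ }
function sess_destroy($id) { return true; /* DELETE the row for $id */ }
function sess_gc($maxlifetime) { return true; /* DELETE expired rows */ }
session_set_save_handler('sess_open', 'sess_close', 'sess_read',
  'sess_write', 'sess_destroy', 'sess_gc');
?>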

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LAMP - researching setup for hosting on multiple servers

2009-12-17 Thread justin randell
hi

2009/12/17 Daniel Pittman :
>
> Use session affinity in your load balancer.  No, really, with PHP it will
> almost certainly hurt less.  Sorry.

i'm interested in the war-wounds that made you write that ;-)

having set up share-nothing php-heads writing session data to a
database on several load-balanced architectures without any issues
(directly related to that technique, of course), that response seems a
bit blanket.

> I doubt you will find anything much more exciting that the above, but I don't
> actually have much useful reference material on hand at the moment.

i threatened to do a slug presentation on this some time ago, but
never came through with it. /me hangs his head in shame...

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LAMP - researching setup for hosting on multiple servers

2009-12-17 Thread justin randell
hi,

2009/12/17 Peter Rundle :
> G'day Sluggers,
>
> I've inherited a LAMP stack which uses the php session stuff to maintain a
> session with an authenticated user. A cookie gets sent to the user and upon
> it's return PHP retrieves the "session" from a file in the "sessions"
> folder.
>
> I would like to change the setup such that the site could be hosted on
> multiple web servers. The PHP sessions would then be badly broken because
> the user could potentially be directed to the "other" server depending on
> what load balancing solution was used, which would not have the matching
> "sessions" file. I do not want to use a solution that has the load balancer
> direct the user back to the same server because a lot of the reason for load
> balancing is for redundancy or to be able to take a server off-line to
> service / upgrade it etc.

right, using a database is the most common way to solve this. what
options do you have code-wise? are you free to write your own session
handlers? if so, there are quite a few ready-baked options:

http://framework.zend.com/manual/en/zend.session.savehandler.dbtable.html

and many others google will find for you.

> The multiple web servers would initially share a a single back-end DBMS
> server, but in future would have their own dedicated back-end DBMS, with the
> DBMS servers using replication to keep in sync.

this is a bit unclear to me. do you want real sync between multiple
dbs, so the data is always consistent, or just fail-over? real sync
requires mysql cluster or similar; even master-master can produce
inconsistency in the data seen by the application. to get real sync, a
write has to make it to all nodes before being visible on any node,
which is non-trivial - and mysql's built-in cluster setup puts serious
limitations on the way you can store your data across different
data-nodes.

here's a tool i've had success with for failover:

http://mysql-mmm.org/

key thing here is that you have to make some trade-off decisions - do
i want things to be always up, always, vs do i want to make sure i
never lose anything, ever.

> I am looking for documentation, user-groups, articles, advice etc that
> describe the pros/cons of different solutions to meeting this requirement
> and thought I might ask the collective wisdom of Slug.

can you give us any more general info about the setup? how much
control do you have over each layer? can you change the application
code? is my assumption this is hitting mysql correct?

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] 64-bit Karmic Koala or not?

2009-11-19 Thread justin randell
hi,

On Thu, Nov 19, 2009 at 5:23 AM, Amos Shapira  wrote:
> Hi,
>
> I'm going to get a new desktop at work and was wondering whether it's
> worth moving to 64-bit.
>
> It'll have 4Gb RAM, which should be enough for my work needs.
>
> Skype is an absolute must.
> I use the system for mostly browsing/ssh/thunderbird (managing a few
> dozens of remote CentOS 5 servers), I might want to have Windows in
> VMware/kvm/whatever and maybe a private virtual CentOS for testing.
>
> I found links like:
> http://blog.dipinkrishna.info/2009/10/how-to-install-skype-on-ubuntu-910.html
> (installing skype)
> and 
> http://technologycrowd.com/2009/11/01/installing-64-bit-flash-player-in-ubuntu-9-10-karmic-koala/
> (installing 64-bit flash) which look encouraging.
>
> What's the collective wisdom/experience on the list? Is it worth
> moving to 64-bit or should I stay away?
>
> I'd also like to move my home desktop to 64 bit when I get around to
> buy extra RAM (it's 2Gb now).

running 64bit here with 9.04 and 9.10, i use skype and flash with
no issues. i'd say go for it.

this is probably unlikely to be an issue anymore, but just in case -
make sure your new work desktop's CPU supports the hardware
virtualisation KVM needs; some intel core duos don't.

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] comments in scripts and source code

2009-01-11 Thread justin randell
hi,

On Mon, Jan 12, 2009 at 11:17 AM, Sebastian  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi all,
>
> recently I've started getting into Python and Django programming as
> well as shell scripting.
>
> I was wondering is there any rule or guide on good practice on how to
> comment code?

i agree with the replies to this thread already.

the only thing i'd add as advice to a new programmer - try to avoid
the need for comments by making the code as human-readable as
possible. yeah, this may seem like not quite what you were asking, but
i think it's part of the same problem.

so:

- try to make variable names, function names, etc, as meaningful as
possible within the constraints of the language and keystroke-sanity
- whenever possible, avoid magic numbers/strings, and define a
variable/constant with a meaningful name (see the sketch below)
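
to make the second point concrete, a trivial (and fully made-up)
illustration in the python you're learning:

   # unclear - what does 86400 mean?
   cache_lifetime = 86400

   # clearer - the intent is in the name
   SECONDS_PER_DAY = 24 * 60 * 60
   cache_lifetime = SECONDS_PER_DAY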

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread justin randell
On Wed, Apr 30, 2008 at 2:32 AM, Howard Lowndes <[EMAIL PROTECTED]> wrote:
> I have a PHP script that inserts around 100K of records into a table on
>  each time that it runs.

maybe pastebin the relevant bits somewhere?

>  It starts off at a good pace but gets progressively slower until it falls
>  over complaining that it cannot allocate sufficient memory.
>
>  I have increased the memory allocation in the script with:
>  ini_set('max_execution_time', '3600');
>  ini_set('memory_limit', '128M');
>   but this only seems to delay the crash.
>
>  I have also tried closing and reopening the database every 10K inserts,
>  but that doesn't seem to speed things up either.
>
>  Any other suggestions?

php4 or php5?

php5 has some nasty memory leaks with objects that reference each other:

http://bugs.php.net/bug.php?id=33595

if this applies to you, there are workarounds in the issue.
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Outputing progress counters with PHP/HTML

2008-04-28 Thread justin randell
On Tue, Apr 29, 2008 at 3:45 PM, Howard Lowndes <[EMAIL PROTECTED]> wrote:
> I have a need to output a progress counter from a PHP script that takes a
>  while to run whilst writing a large number of records out to an SQL
>  database, mainly so that the user knows that things are still happening
>  and not hung.
>
>  It seems a simple thing to do, but when I try it, the progress counter
>  (say, every 100 records) instead of being output at the correct time, gets
>  delayed until the whole process has finished.
>
>  What is the best way to get around this problem.

not necessarily the *best* way:

<?php
$i = 0;
do {
  echo "$i<br />";
  flush();
  sleep(1);
  $i++;
} while ($i < 15);
?>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] perl equivalent for cd $(dirname $0)?

2008-04-24 Thread justin randell
On Wed, Apr 23, 2008 at 5:37 PM, Sonia Hamilton <[EMAIL PROTECTED]> wrote:
> In my bash scripts I often use this (to change to the directory where
>  the script is):
>
>  cd $(dirname $0)
>
>  Is there an equivalent in perl?

might not be exactly what you're after...

http://perldoc.perl.org/FindBin.html

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] multiple domain to one web site

2008-04-22 Thread Justin Randell

Voytek Eymont wrote:


so is this something like this in the virtual host container:

Redirect / http://www.name.com.au


from your description of the problem you probably also want a permanent 
redirect?


Redirect permanent / http://www.name.com.au



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RedHat Cluster Suite as a replacement for linux-ha?

2007-12-02 Thread justin randell
On Dec 1, 2007 9:00 PM, Amos Shapira <[EMAIL PROTECTED]> wrote:
>
> But I don't see from initial browsing of the web sites whether it can
> be used to facilitate fail-over of other things besides virtual IP. I
> mean - I want a heartbeat tool that will notice that the other side is
> down and take over DRBD, for instance, and when the master comes back
> up it will notice that DRBD is already served by the slave and won't
> take it over without coordination.
>
> Is this possible with these tools as they come or will I have to
> program something myself?

you will have to program something yourself. wackamole embeds a perl
interpreter, and has hooks for events related to IP changes etc, so
you can run perl code in response to these.

given your description above, i think wackamole+spread might be on the
too-simple end of "as simple as possible, but no simpler".

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RedHat Cluster Suite as a replacement for linux-ha?

2007-11-30 Thread justin randell
On Nov 30, 2007 6:31 PM, Amos Shapira <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm beginning to give up on making Linux-HA's heartbeat work for my
> environment (CentOS x86_64) and am wondering what other option have I got to
> help me:
> 1. Use IPVS to maintain a cluster of servers serving virtual IP, either
> master/slave or load-balanced.
> 2. Use DRBD in master/slave fashion to keep a home-grown application
> highly-available.

not sure if this is a perfect fit, but check out wackamole and spread.
i've successfully used those to manage active-active load balancer
pairs. much, much simpler than heartbeat, well worth checking out.

http://www.backhand.org/wackamole/
http://www.spread.org/

cheers
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] our Jeff featureing in online ads?

2007-06-13 Thread justin randell

On 6/13/07, Jeff Waugh <[EMAIL PROTECTED]> wrote:



> Check out this disturbing ad I saw today
> http://cdn.fastclick.net/fastclick.net/cid52467/media103572.gif
> Is that our own Jeff Waugh boosting Dada Mobile?

It is entirely more bizarre that three (make that four) people have pointed
this image out to me today. What pr0n sites are you all reading?


we serve this ad on several games sites, so you're now surely famous
with casual gamers...
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] drupal question - external urls in primary menu?

2007-06-05 Thread justin randell

hi,

On 6/5/07, Sonia Hamilton <[EMAIL PROTECTED]> wrote:

Here's a question for the Drupal experts on the list...

I've got an existing Drupal site I've developed (www.example.com, say).
I want to add an online store to it using os-commerce [1], under
www.example.com/store.

How do I get Drupal to ignore URLs under www.example.com/store and pass
them thru to os-commerce? Would I being looking to make settings changes
in Apache or Drupal?


well, probably neither.

are you using the clean urls with the standard drupal rewrite rules?

if so, drupal shouldn't try to handle requests for files that exist:

RewriteCond %{REQUEST_FILENAME} !-f

so requests for say

store/foo.php

should pass through provided there's a foo.php in the store directory.
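
for reference, the relevant chunk of drupal's stock .htaccess looks
roughly like this (check your own copy rather than trusting my memory):

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

the !-f and !-d conditions are what let real files and directories
like /store fall straight through to apache.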

hope that helps.

cheers
justin

ps - does oscommerce still require register_globals? if so, ick, have
you looked at the drupal shop module -
http://drupal.org/project/ecommerce
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] ssh questions

2007-06-04 Thread justin randell

On 6/5/07, Zhasper <[EMAIL PROTECTED]> wrote:


You've already got this quite locked down. You could take it a step
further by not allowing passwords at all, and relying on the SSH key
you carry on your USB stick to authenticate you. Of course, that again
makes things inconvenient for you - if you left the USB stick at home,
you can't log in. If it gets stolen, not only can you not log in, but
you can't even revoke your key until you get home and get your backup
key on the spare usb stick - meanwhile, whoever stole the key has
(potentially) free access to your machine..


yep, i'd second this. i don't allow access without a trusted key to
any machine unless i'm forced to.

just make sure you keep your private keys password protected whenever
possible and look at something like keychain to make that easier to
manage.
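
the keychain setup is typically just a couple of lines in ~/.bashrc,
something like (assuming your key is ~/.ssh/id_rsa):

keychain ~/.ssh/id_rsa
. ~/.keychain/`hostname`-sh

that way you type the passphrase once per reboot rather than once per
connection.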

cheers,
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] PHP include path Q

2007-06-04 Thread justin randell

hi,

On 6/4/07, Rick Welykochy <[EMAIL PROTECTED]> wrote:

Simon Males wrote:

> One reason I have heard is to have DB passwords outside the web root,
> just in case permissions go all weird and are being openly displayed on
> the interweb.

This works only if the web admin has securely sandboxed each
web user from the others. On a shared service, if each user
is not su-exec'd properly, it is child's play to open another
user's scripts and include files and read passwords and other
"privileged" information.


very true, but in no way an argument against keeping such things out
of the webroot. "if you have control of the hosting setup" is the key
phrase here.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] PHP include path Q

2007-06-03 Thread justin randell

hi,

On 6/3/07, Voytek Eymont <[EMAIL PROTECTED]> wrote:

> One reason I have heard is to have DB passwords outside the web root,
> just in case permissions go all weird and are being openly displayed on the
> interweb.

thanks, Simon

yes, that was the reason why I've set it as such in the past;
but, then I look at variety of php apps that are around, and, just about
all of them have includes and db passwords in an inc directory
inside/below web root, so I was thinking, if it's good for them


many php apps are written this way so they can run on shared
hosting with no access to any directories outside the webroot.

if you have control over the hosting setup, then keeping passwords etc
out of the webroot is a good thing.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] vhosts on Apache2 config probs

2007-05-30 Thread justin randell

On 5/30/07, Voytek Eymont <[EMAIL PROTECTED]> wrote:


I've put this in /etc/httpd/httpd.conf:

NameVirtualHost 203.42.34.53:80


ServerName ww.sbt.net.au
DocumentRoot /home/sbt.net.au

Options -Indexes
ErrorDocument 403 /error/noindex.html




what are you trying to do with the location match? perhaps take it out
while you're trying to debug this?


DocumentRoot /home/sbt.net.au/www



---8k snip 8k---


where am I going wrong...?


can't see anything wrong from an apache config point of view, which
makes me suspect something else. is SELinux enabled and
doing something weird here?

one thing that does look odd is apache owning the webroot - is there
any reason you are doing that? it's generally a bad idea to allow
apache write access to any more of the file system than is absolutely
necessary.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Apache reverse proxy not rewriting location header

2007-04-28 Thread justin randell

hi rick,


I have configured apache2 with a reverse proxy to some internal
web servers, also running apache2.

Example:


 ServerName something.whatever.net.au
 ServerAdmin [EMAIL PROTECTED]
 DocumentRoot /var/www/
 
 Order allow,deny
 allow from all
 
 ProxyPass / http://10.11.12.3:80/
 ProxyPassReverse / http://10.11.12.3:80/



the only thing i can see missing from this vhost config is

ProxyRequests Off



Trouble is, when host 10.11.12.3 replies with a Location: header,
e.g.

Location: http://10.11.12.3/test/perl-redirected.html

the reverse proxy does not rewrite the header. I would expect the
above header to reach the client in the following form:

Location: http://something.whatever.net.au/test/perl-redirected.html

The Apache docs here 
indicate that

   "This directive lets Apache adjust the URL in the Location,
Content-Location and URI headers on HTTP redirect responses.
This is essential when Apache is used as a reverse proxy to
avoid by-passing the reverse proxy because of HTTP redirects
on the backend servers which stay behind the reverse proxy."

I'm stumped!


also from the apache 2.2 ProxyPassReverse docs:

"Note that the hostname used for constructing the URL is chosen in
respect to the setting of the UseCanonicalName directive."

do you have UseCanonicalName on or off?

if its off, then that might be your problem, because apache will be
using the reverse proxy as the hostname:

http://httpd.apache.org/docs/2.2/mod/core.html#usecanonicalname

"With UseCanonicalName Off Apache will form self-referential URLs
using the hostname and port supplied by the client if any are supplied
(otherwise it will use the canonical name, as defined above). These
values are the same that are used to implement name based virtual
hosts, and are available with the same clients. The CGI variables
SERVER_NAME and SERVER_PORT will be constructed from the client
supplied values as well."

if this is the issue, then you can either set UseCanonicalName to on,
or use the ProxyPreserveHost directive:

http://httpd.apache.org/docs/2.2/mod/mod_proxy.html#proxypreservehost

"When enabled, this option will pass the Host: line from the incoming
request to the proxied host, instead of the hostname specified in the
ProxyPass line."
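
in config terms that's a single extra line inside the vhost, next to
the ProxyPass/ProxyPassReverse pair:

ProxyPreserveHost On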

hope that helps.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] feedback on SLUG presentation

2007-04-23 Thread justin randell

hi all,

i've volunteered to give a talk at the May SLUG meeting:

"Building a load-balanced, highly-available web site with debian,
pound, apache, spread and wackamole"

this will be my first SLUG talk, and i'm looking for feedback on what
people would like to hear on the subject, or just general advice on
how to make talks in the special interests slot work.

(yes, i've read the speakers guide - http://slug.org.au/meetings/guide.html)

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Could someone please remind me...

2007-04-15 Thread justin randell

On 4/16/07, Howard Lowndes <[EMAIL PROTECTED]> wrote:

...what is the syntax to include one HTML document into another so that
they present as one, similar to the 

do you mean server side includes?
http://httpd.apache.org/docs/2.2/howto/ssi.html

or iframes? http://en.wikipedia.org/wiki/Iframe
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] VMware compiles

2007-04-06 Thread justin randell

hi howard,

On 4/7/07, Howard Lowndes <[EMAIL PROTECTED]> wrote:

Has anyone been successful in getting VMware to compile under the 2.6.19
or 2.6.20 kernels.  I can get it to compile fine under 2.6.18 but not later.


i've been able to get vmware server + the any-any patch to compile on
amd64 and i386 feisty (2.6.20) and FC6 i386 (2.6.19).


Both failures have different messages.


what are the error messages?

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] bash question

2007-03-07 Thread justin randell

hi all,

after reading up on ssh config options, i've gone with some Host
sections in ~/.ssh/config:

Host glebe1
   User johndoe
   Hostname xxx.xxx.xxx.xxx

does what i want without any worries of 'doing it the csh way' ;-)

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] bash question

2007-03-05 Thread justin randell

On 3/6/07, Peter Chubb <[EMAIL PROTECTED]> wrote:

 - Bash evaluates aliases before variable expansion.


ah, that's what i'm missing.


You want shell functions instead.


ok, i'll try that.


In fact, aliases are for people used
to the csh way of doing things .. I'd *never* use them


never used csh before - i'm just a programmer masquerading (badly) as
a sysadmin...

thanks for the pointers.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] bash question

2007-03-05 Thread justin randell

hi all,

i have a bunch of aliases in ~/.bash_aliases like:

alias glebe1='ssh [EMAIL PROTECTED]'
alias glebe2='ssh [EMAIL PROTECTED]'

if i just run any of the aliases from the command line, all is well.

if i try to run:

for BOX in glebe1 glebe2 ; do
   $BOX 
done

bash throws:
bash: glebe1: command not found
bash: glebe2: command not found

what am i missing about the way bash evaluates aliases?

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] dual monitor video card

2006-10-16 Thread justin randell

hi ben,

thanks for the feedback.

On 10/17/06, Ben <[EMAIL PROTECTED]> wrote:

PCIx or AGP (or PCI even)?


PCIx (HP Pavilion t760a, P4 540)


 * If you want to use digital monitors, minimum is a 6600GT, but not
all have dual digital outputs, only the more expensive ones. (but you
said mid range, so that should be ok $ wise).


thanks, i'll check out prices/specs for 6600GTs.
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] dual monitor video card

2006-10-16 Thread justin randell

hi all,

after getting used to it at work, i've decided i can't live without a
dual monitor setup at home any longer.

so, i'm looking for recommendations for a mid-level dual monitor video
card to run with ubuntu edgy.

thanks in advance.
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] ADSL2+ netcomm NB5+

2006-10-04 Thread justin randell

alex,

i just switched over from adsl1 to adsl2+, and i'm using NB5, and i
can confirm bridged mode works just fine.

cheers
justin

On 10/5/06, Alexander Samad <[EMAIL PROTECTED]> wrote:

Hi

Just in the process of checking out tpgs adsl2 +.  I currently have adsl
running in bridged mode and having the adsl session run by oe on my
linux firewall.

I am presuming I am going to be able to do the same with nb5+ and adsl2+


Alex




--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] awk

2006-07-05 Thread justin randell

On 7/5/06, Shane Fishlock <[EMAIL PROTECTED]> wrote:


hi ,

i've been scouring the local (sydney) book places for a couple of days
looking for a good (in depth) book on awk.
i need it asap (always the case eh).
i'm hoping that there may be some (2nd hand is fine) lying 'round in a
member's garage that they're willing to sell.
 (not being forward - just crossed fingers).


does it have to be on dead wood?

if you don't mind reading from it online, safari.informit.com has
O'Reilly's "sed & awk, 2nd Edition" here:

http://safari.informit.com/1565922255

you can get a free 14 day trial account - might be all you need.

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] better visual diff tools?

2006-06-19 Thread justin randell

Anyone know of a gui tool that allows you to do this? I usually use
vimdiff, but I'm looking for an easier to use tool for my (linux)
students.


kde's kompare is very nice for this.
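
e.g. (filenames are made up):

kompare old/main.c new/main.c

it handles whole directories too.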

cheers
justin
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Google Earth available for Linux

2006-06-12 Thread justin randell

hmmm. works fine for me on dapper at home, and FC 5 and 3 at work.


Totally borked.  Segfaults immediately for me on Dapper.  Only the
crashdump functionality works - and even that's half broken: it can't
actually *send* any of them (crashes too soon).

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] escaping a ' in php

2006-06-04 Thread justin randell

On 6/5/06, Voytek Eymont <[EMAIL PROTECTED]> wrote:

I'm trying to insert some text into a php file, the text is enclosed in ' '

I'd like to include a word with an "'s" like "individual's permission"
what the proper way to do this ? (change the outer  ' ' to " " ?)


yes, that will work.

you can also escape the ' (or ") with a backslash:

define('TEXT_INFORMATION', 'escape this \'');

if you have a string that will have both ' and ", and you don't want
to use lots of backslashes, you can use heredoc[1]:

$string = <<<EOT
a string with both 'single' and "double" quotes in it
EOT;

[1] http://php.net/manual/en/language.types.string.php

Re: [SLUG] Apache2 authconfig help, please?

2006-05-29 Thread justin randell

i don't think you can "uninherit" the directives, but you can override
them. so, i think you want:


   Satisfy All  # this should override Satisfy Any in /path/to/app/myapp
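
to make that concrete, something like the below - the paths are made
up, and the auth directives still live in the subdirectory's .htaccess:

<Directory /path/to/app/myapp>
    Satisfy Any
    AllowOverride AuthConfig
    Order deny,allow
    Deny from all
    Allow from my.com.au
    # LDAP stuff as before
</Directory>

# the subdirectory overrides the inherited Satisfy Any, so the
# .htaccess/.htpasswd auth has to succeed even for allowed hosts
<Directory /path/to/app/myapp/protected>
    AllowOverride AuthConfig
    Satisfy All
</Directory>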


if this is not what you want, try here for more info:

http://httpd.apache.org/docs/2.0/mod/core.html#satisfy

cheers

On 5/29/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

I am tweaking the authorisation/access to a set of directories on an
apache2 server:

Assuming I have an Apache Directory:



Satisfy Any
AllowOverride AuthConfig
Order deny,allow
Deny From all
Allow from my.com.au

(LDAP stuff goes here)



Now the above works fine with the Satisfy Any directive and I get the
result I want.

But now I want to have a Second directory under the first, that I want
to reset all the directives for:



AllowOverride AuthConfig



And there is a .htaccess & .htpasswd file in this second sub directory.

When we take the Satisfy directive out of the first Dir, the second
then presents with the passwd popup as required.

Can anyone please explain how to reset the directives for a sub directory.
I know all the options get inherited, but there must be a way to
also uninherit?

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: OSDC (Re: [SLUG] Snakes and Rubies?)

2006-05-22 Thread justin randell

On 5/23/06, Erik de Castro Lopo <[EMAIL PROTECTED]> wrote:

Mary Gardiner wrote:

> On Sun, May 21, 2006, Jeff Waugh wrote:
> > At which point, you're only a couple of steps away from making it an "Open
> > Source Developer's Club" [1] for Sydney folks. Thoughts?
>
> +1 basically. I think there is definitely a group of more language
> agnostic people about who aren't really attracted to a language specific
> interest group.

I wouldn't go so far as to call myself language agnostic, but since
there aren't a lot of ocaml developers around, I would be interested
in a group which is open to all languages (other than VB of course ;-)).


me too.
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] another good one

2005-02-07 Thread justin randell
http://www.funnyville.com/fv/pictures/winrg.shtml
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] just for a laugh

2005-02-07 Thread justin randell
(warning: gratuitous Novell plug at the end of the video.)

http://www.novell.com/linux/windowstolinux/publicservice/
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] PHP, MySQL

2005-01-17 Thread justin randell
hi rob,

i haven't tried php5 + mysql 5, but i think if you post this question
to the sydphp group forum (http://forum.sydphp.org/) you might find
someone who has.

good luck.

justin


On Mon, 17 Jan 2005 19:44:53 +1100, Andrew Robson
<[EMAIL PROTECTED]> wrote:
> 
> G'day all
> 
> Sorry if this is a little off topic.
> 
> But has anyone tried out PHP5 and MySQL5
> Specifically Stored Procedures, so far I have not been able to get a record
> set back.
> 
> Ie. Using "Call sp_functionName()" as the query string doesn't seem to work.
> 
> Can anyone point me to an answer.
> 
> Thanks.
> 
> Robbo.
> 
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] problems getting apt-get through a firewall

2004-12-23 Thread justin randell
thanks dave :-)

i'm sure i tested using a http source already, but whatever,
http://mirror.pacific.net.au/debian works fine now.
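
for anyone finding this in the archives later, that just means a
sources.list line along these lines (guessing at the components you
want):

deb http://mirror.pacific.net.au/debian testing main

and if you'd rather keep an ftp source, apt can be told to use passive
ftp in /etc/apt/apt.conf:

Acquire::ftp::Passive "true";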


On Fri, 24 Dec 2004 09:16:04 +1100, David Kempe
<[EMAIL PROTECTED]> wrote:
> justin randell wrote:
> > Err ftp://debian.ihug.com.au testing/main Packages
> >   Could not connect data socket, connection timed out
> > Get:5 ftp://debian.ihug.com.au testing/main Release [81B]
> > 0% [5 Release 0/81B 0%]
> >
> > any ideas?
> >
> 
> Dont use ftp at all? you probably need to use passive ftp if you want it
> to work. the test is to take your ftp test further by trying to actually
> download something - if it gives you an error, then the high data port
> that ftp is trying to talk on is getting blocked. So you need to issue a
> PASV first. (just switch on passive mode really)
> 
> I would just use an http mirror.
> mirror.pacific.net.au/debian should work fine
> 
> dave
>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] problems getting apt-get through a firewall

2004-12-23 Thread justin randell
> I don't recognise the firewall rule syntax, so I don't know if there's
> anything wrong with it.

neither do i :-) it's cisco IOS stuff i think, but i'm not much of a
network person.

both wget and ftp work fine:

[EMAIL PROTECTED]:/home/justin$ ftp home.exetel.com.au
Connected to el.exetel.com.au.
220 ProFTPD 1.2.9 Server (home.exetel.com.au) [el.exetel.com.au]
Name (home.exetel.com.au:justin): 0295664082
331 Password required for 0295664082.
Password:
230 User 0295664082 logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>

justin-desktop:/home/justin$ wget http://slug.org.au
--08:59:19--  http://slug.org.au/
   => `index.html'
Resolving slug.org.au... 138.25.7.4
Connecting to slug.org.au[138.25.7.4]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5,002 [text/html]

100%[>] 5,002 --.--K/s

08:59:19 (171.84 KB/s) - `index.html' saved [5002/5002]

anyone got any other ideas?

thanks 
justin

On Fri, 24 Dec 2004 08:57:43 +1100, Jan Schmidt <[EMAIL PROTECTED]> wrote:
> On Fri, 2004-12-24 at 08:04 +1100, justin randell wrote:
> > hi all,
> >
> > since the network firewall was tightened where i work, i can't get
> > apt-get to work.
> >
> > i can't figure out why, because i thought apt-get used ftp and http,
> > and both of these are allowed through:
> 

> 
> apt-get should only require http and ftp as you say, so I would try
> using wget and ftp type rules on the commandline to check whether they
> work.
> 
> Cheers,
> Jan
> 
>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] problems getting apt-get through a firewall

2004-12-23 Thread justin randell
hi all,

since the network firewall was tightened where i work, i can't get
apt-get to work.

i can't figure out why, because i thought apt-get used ftp and http,
and both of these are allowed through:

-start snip from the firewall rules-

!
! general permit list for all workstations
!
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq www
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq 443
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq domain
access-list 190 permit udp 192.168.1.0 0.0.0.255 any eq domain
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq 22
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq telnet
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq ftp
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq ftp-data
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq smtp
access-list 190 permit tcp 192.168.1.0 0.0.0.255 any eq pop3

-end snip from the firewall rules-

this is what i get when i run apt-get update:

justin-desktop:/home/justin# apt-get update
Get:1 ftp://debian.ihug.com.au testing/main Packages [3191kB]
Hit http://security.debian.org testing/updates/main Packages
Get:2 http://security.debian.org testing/updates/main Release [111B]
Hit http://security.debian.org testing/updates/contrib Packages
Get:3 http://security.debian.org testing/updates/contrib Release [114B]
Hit http://security.debian.org testing/updates/non-free Packages
Get:4 http://security.debian.org testing/updates/non-free Release [115B]
Err ftp://debian.ihug.com.au testing/main Packages
  Could not connect data socket, connection timed out
Get:5 ftp://debian.ihug.com.au testing/main Release [81B]
0% [5 Release 0/81B 0%]

any ideas?

thanks
justin
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Unwired for Broadband ?

2004-12-13 Thread justin randell
for a bunch of feedback from people using the unwired network via
exetel, go here:

http://forum.exetel.com.au/index.php?c=9


On Tue, 14 Dec 2004 14:04:34 +1100, Pete de Zwart <[EMAIL PROTECTED]> wrote:
> If you are considering going with Unwired, please be aware that they will
> not support you if they smell Linux.
> 
> I've had a lot of DHCP issues as their DHCP server that serves my area
> around Central Station keeps running out of leases so I get stuck waiting
> for a lease to expire for hours on end and they refuse to purge stale leases
> from their database.
> 
> Anyhoo, you may experience a much better service, but I'm happy with its
> bang for buck. However, any sort of real-time gaming is out of the question;
> I got better latency with my 2400baud modem.
> 
> Pete de Zwart. 
> 
> 
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
> Of Jason Rennie
> Sent: Tuesday, 14 December 2004 1:28 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [SLUG] Unwired for Broadband ?
> 
> > As with all wireless stuff, your mileage will vary, but it's probably
> worth trying it out.  It would be interesting to hear more details about the
> disser's experience.  There are lots of factors that can make something
> "slow and crap".
> 
> Thanks everybody who responded.
> 
> I just thought i'd ask because of the negative comment I heard.
> 
> Based on some comments I got at work about the service, it seems like I will
> go with Unwired (or IBurst through Ozemail).
> 
> Jason
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
> 
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html