Re: [SLUG] Download speed

2014-05-22 Thread David Lyon
OK, I patched those links in. Forgive my code posting last time; I'll
answer off-list if I get any further requests.

It has some command line options to specify the size to download, where to
put the logfile, and the interval to wait between downloads.

$ python x_download_test.py 1M -i 5

-Python-

#!/usr/bin/python

import time, optparse, urllib2, csv


if __name__ == "__main__":

    dlurls = {"1M"   : "http://mirror.internode.on.net/pub/test/1meg.test",
              "10M"  : "http://mirror.internode.on.net/pub/test/10meg.test",
              "50M"  : "http://mirror.internode.on.net/pub/test/50meg.test",
              "100M" : "http://mirror.internode.on.net/pub/test/100meg.test",
              "1G"   : "http://mirror.internode.on.net/pub/test/1000meg.test",
              "5G"   : "http://mirror.internode.on.net/pub/test/5000meg.test",
              }

    print("Network Download speed logger. Freeware Licence")

    usage = "usage: %prog [options] [size]"
    parser = optparse.OptionParser(usage=usage)
    parser.add_option("-i", "--interval", action="store", type="float",
                      dest="interval", default=10,
                      help="Interval in minutes between downloads")
    parser.add_option("-l", "--logfile", action="store",
                      dest="logfilename", default="download_times.csv",
                      help="Name of the CSV file to log download times to")

    (options, args) = parser.parse_args()

    download_size = "10M"
    if len(args) > 0:
        download_size = args[0]

    download_interval = options.interval * 60
    download_url = dlurls[download_size]

    # set up a logfile in append mode
    f = open(options.logfilename, 'a')
    writer = csv.writer(f)

    while True:

        print("Downloading %s" % download_size)

        # initial time reading; time.time() is wall-clock time
        # (time.clock() measures CPU time on Unix, not elapsed time)
        start = time.time()

        testfile = urllib2.urlopen(download_url)
        testfile.read()

        elapsed = time.time() - start

        writer.writerow([time.strftime("%c"), download_size, elapsed])
        f.flush()  # make sure each row hits disk; the loop never exits

        print("Pausing for %d minute(s)" % int(download_interval / 60))

        time.sleep(download_interval)  # time in seconds
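
If you only want Edwin's hourly measurement, a plain cron entry can replace
the script's own loop. A minimal sketch using curl's built-in timing (the
crontab line, test file choice and CSV path are assumptions, not part of the
script above; note that '%' must be escaped as '\%' inside a crontab):

# hypothetical crontab entry: one timed download per hour, appending
# total seconds and average bytes/sec to a CSV
0 * * * * curl -s -o /dev/null -w '\%{time_total},\%{speed_download}\n' http://mirror.internode.on.net/pub/test/100meg.test >> $HOME/download_times.csv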



On Thu, May 22, 2014 at 1:40 PM, Rick Welykochy r...@vitendo.ca wrote:

 David wrote:

 On 22/05/14 08:38, Rick Welykochy wrote:

 Edwin Humphries (text) wrote:

 Can anyone suggest a way of testing the download speed of my NBN fibre
 connection every hour and logging it? I have an ostensibly 100Mbps
 connection, but the speed seems to vary enormously, so an automated process
 would be good.


 Download a file of known length, say 1000 MB, from a server
 whose speed you can trust every hour. Time and log each download.
 Also verify the contents of the downloaded file with an md5 or sha
 digest.

 This can be automated with an scp inside a simple (shell) script.


 Westnet used to have a file available for exactly this purpose - I dare
 say other ISPs do too. Perhaps you could ask your own ISP.

 This looks promising:

 http://mirror.internode.on.net/pub/test/

 I found this via a web search for "test download file residing on an isp
 australia".


 cheers
 rickw
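
A minimal shell sketch of that timed-download-plus-digest idea (assuming
wget and md5sum are available, and using the Internode test file above; the
expected checksum is a placeholder you'd record from a trusted first run):

#!/bin/sh
# time one download, then verify its digest before logging
URL=http://mirror.internode.on.net/pub/test/100meg.test
EXPECTED_MD5=PLACEHOLDER    # fill in after a trusted first download
START=$(date +%s)
wget -q -O /tmp/speedtest.bin "$URL"
ELAPSED=$(( $(date +%s) - START ))
SUM=$(md5sum /tmp/speedtest.bin | cut -d' ' -f1)
[ "$SUM" = "$EXPECTED_MD5" ] || echo "checksum mismatch" >&2
echo "$(date),100M,${ELAPSED}s,$SUM" >> download_times.csv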






Re: [SLUG] Best (most efficient method) recursive dir DEL

2014-05-22 Thread James Polley


 On 22 May 2014, at 9:10, Kyle k...@attitia.com wrote:
 
 Hi folks,
 
 I was wondering what is the best (as in most efficient method) for doing an 
 automated, scheduled recursive search and DEL exercise. The scheduled part is 
 just a cron job, no problem. But what's the most efficient method to loop a 
 given structure and remove all (non-empty) directories below the top dir?
 
 The 3 examples I've come up with are:
 
 find top_dir -name "name_to_find_and_DEL" -exec rm -rf {} \;  - what's 
 the '\' for and is it necessary?
 
 rm -rf `find top_dir -type d -name "name_to_find_and_DEL"` - does it 
 actually require the backticks (` `) or are single quotes (' ') good enough?
 
 find top_dir -name 'name_to_find_and_DEL' -type d -delete - or won't 
 this work for a non-empty dir?

How do you define most efficient? Run time? CPU cycles? Memory usage? Forks? 
Disk reads/writes? Readability/maintainability?

My personal guess is that a find command that locates the things you want to 
delete and ends with "-print0 | xargs -0 rm -rf" will satisfy most of those 
criteria.

(xargs will stuff as many file names as it thinks will fit on the command line, 
but sometimes it gets ambitious - you might have to use "xargs -0 -n 100 rm 
-rf" to limit it to 100 file names per invocation of rm)
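
For concreteness, a hedged sketch of that pipeline using Kyle's example
name, with -prune added so find doesn't try to descend into directories the
pipeline is about to remove:

find top_dir -name 'name_to_find_and_DEL' -type d -prune -print0 | xargs -0 rm -rf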

 
 Or is there a more efficient manner which I can slot into a cron job?
 
 Much appreciate the input.
 


Re: [SLUG] Best (most efficient method) recursive dir DEL

2014-05-22 Thread Darragh Bailey
Hi Kyle,

You might find it worth looking at the following invocation of find:

find top_dir -name name_to_del -exec rm -rf {} \+ -prune

the '+' enables argument aggregation, so it works much like xargs in
building up a command line that is passed to rm. You may also need to
specify "{}" to handle whitespace in directory names - untested.


On 22 May 2014 00:10, Kyle k...@attitia.com wrote:

 Hi folks,

 I was wondering what is the best (as in most efficient method) for doing
 an automated, scheduled recursive search and DEL exercise. The scheduled
 part is just a cron job, no problem. But what's the most efficient method
 to loop a given structure and remove all (non-empty) directories below the
 top dir?

 The 3 examples I've come up with are;

 find top_dir -name "name_to_find_and_DEL" -exec rm -rf {} \;  -
 what's the '\' for and is it necessary?


You need to escape ';' from the shell; otherwise the shell treats it as the
end of the command and strips it from what is passed to 'find', which will
in turn exit with an error because it can't work out where the '-exec'
command ends.




 rm -rf `find top_dir -type d -name "name_to_find_and_DEL"` - does it
 actually require the backticks (` `) or are single quotes (' ') good enough?

 find top_dir -name 'name_to_find_and_DEL' -type d -delete - or
 won't this work for a non-empty dir?

 Or is there a more efficient manner which I can slot into a cron job?



As someone else already pointed out, it'll probably depend on what you mean
by "efficient".

-- 
Darragh Bailey
Nothing is foolproof to a sufficiently talented fool


Re: [SLUG] WiFi problem.

2014-05-22 Thread Ben Donohue

Hi Will,

+1 again on below. Just what I was going to say.
However, it may not be a hard switch at all.

I've had switches that are not a button as such: they detect your finger
swiping over them. Is there somewhere on your laptop with lights that you
run your finger over? A swipe there will toggle whatever soft switch is
under the plastic. The surface is smooth; you just move your finger over it
to turn whatever is under it on or off. Run your finger around all the
smooth plastic bits of the laptop. (while it's switched on, of course ;-)


Ben


On 22/05/14 14:39, David wrote:

+1

I've made this mistake several times

On my laptop the hardware switch is small and obscure and easy to not 
even realise it's there.




On 22/05/14 14:37, David Lyon wrote:

First thing to check is that the Wifi button is set to on.

Sometimes it's very easy to accidentally bump them to off without even
realising.


On Thu, May 22, 2014 at 2:18 PM, William Bennett
wrbennet...@gmail.com wrote:


I'm sure someone has seen this before: there doesn't seem to be a problem
posted that nobody knows.

I have a Toshiba Satellite A660, running Ubuntu 14.04.

In the past, I've been able to:

1. tether my smartphone to the laptop
2. go to a coffeeshop that has WiFi and pick it up with the laptop.

Now I can't.

I can click on the “fan” and whilst it will open, nothing WiFi registers.
Not even when the smartphone swears it's emulating a portable hotspot.

Took the laptop to the local computer shop. Was asked whether I'd had
Windows on the laptop in the past. Answer: yes. Well, since the switch is a
Windows switch, it might be a vagrant piece of Windows leftover that turned
it off.

This sounds like Olde Stuffe.

Nevertheless, I can't pick up any WiFi. And Fn-F8 doesn't turn on
anything.

I'm reluctant to believe this is a Toshiba peculiarity (as I've also been
told). I've had it working with earlier versions of Ubuntu.

Any suggestions will be gratefully acted upon.

William Bennett.

Re: [SLUG] multiple grep conditions ?

2014-05-22 Thread Amos Shapira
On 22 May 2014 14:12, li...@sbt.net.au wrote:

 On Wed, May 21, 2014 12:28 pm, pe...@chubb.wattle.id.au wrote:

  As you're not using regular expressions, but just strings, fgrep is
  the way to do it:

  fgrep -q '07/2014
15/06/2014
20/06/2014
25/06/2014' part2 && exit 0

 Peter, thanks

 Amos, you've pre-empted my next Q, thanks


Glad I did :)



 I actually should move it totally out of script, as this list will often
 change, so (I think?) I can enter dates into a file, say 'patterns'

 and, use like

 fgrep -q -f /path/to/pattern part2 && exit 0


Yes this will work. One pattern per line. See man grep. No need to quote
or anything since it's not parsed through the shell.
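
A concrete sketch of that, using V's dates below as the (unquoted) file
contents, one per line:

$ printf '%s\n' 07/2014 15/06/2014 20/06/2014 25/06/2014 > patterns
$ fgrep -q -f patterns part2 && echo match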



 ?

 will I need any quotes in file 'pattern', or simply like:

 07/2014
 15/06/2014
 20/06/2014
 25/06/2014

 thanks again

 V


 On Wed, May 21, 2014 7:24 pm, Amos Shapira wrote:
  It might be more maintainable to keep the list of patterns in a variable
  (line per pattern) then pass it to grep using grep's -f/--file= argument:

  PATTERNS="15/06/2014
20/06/2014
25/06/2014"

  ...
  grep -q -f <(echo "$PATTERNS") file2 && exit 0

  Note the use of double quotes around the variable interpolation in the
  grep command line; they are essential to preserve the newlines in the
  variable's value.

  The (bash-specific, I think) trick here is the use of <(command), which
  causes bash to open a pipe to the command and pass its name as
  /dev/fd/FILE-DESC-NUMBER to grep, so grep thinks it's a regular file to
  read match patterns from while its stdin is still free to read the input
  to match against the patterns. If grep doesn't read its input from stdin
  but from a regular file then you don't need this trick and can just pass
  '-f -' to make grep read the patterns from stdin and the matching text
  from the regular file:

  grep -q -f - file2 <<< "$PATTERNS" && exit 0

  (actually this uses another bash-specific trick; you can do the following
  to get rid of the bash'ism completely:

  echo "$PATTERNS" | grep -q -f - file2 && exit 0 )






-- 
http://www.linkedin.com/in/gliderflyer


Re: [SLUG] Best (most efficient method) recursive dir DEL

2014-05-22 Thread Amos Shapira
On 22 May 2014 19:16, Darragh Bailey daragh.bai...@gmail.com wrote:

 Hi Kyle,

 You might find it worth looking at the following invocation of find:

 find top_dir -name name_to_del -exec rm -rf {} \+ -prune

 the '+' enables argument aggregation, so it works much like xargs in
 building up a command line that is passed to rm. You may also need to
 specify "{}" to handle whitespace in directory names - untested.


Kudos for bringing this up. I wasn't aware of the + option.

1. There is no need to quote the {}; find already passes the file names as
separate arguments, without splitting them on whitespace.
2. As you demo above, but possibly worth stressing: the + form does NOT
take a terminating ';'.

Test (my Mac home directory, which contains a few standard directory names
with spaces):

~$ gfind -maxdepth 1 -type d -exec ls -dF {} \+
./ ./Downloads/ ./Sites/
./.Trash/ ./Google Drive/ ./Snapshots/
./.config/ ./Library/ ./VirtualBox VMs/
./.ssh/ ./Movies/ ./bin/
./.vagrant.d/ ./Music/ ./git-dotfiles/
./Applications/ ./Pictures/ ./macports/
./Desktop/ ./Programs/ ./tmp/
./Documents/ ./Public/

Notice how ls is passed the right directory names for "Google Drive" and
"VirtualBox VMs".

--Amos



-- 
http://www.linkedin.com/in/gliderflyer


Re: [SLUG] WiFi problem.

2014-05-22 Thread Jake Anderson

Sometimes it's a soft switch too:
Fn + F-something (F6, as I recall, seems to be the default).


On 22/05/14 19:30, Ben Donohue wrote:

Hi Will,

+1 again on below. Just what I was going to say.
However, it may not be a hard switch at all.

[snip]


Re: [SLUG] WiFi problem.

2014-05-22 Thread Rick Phillips
Hi William,


 I'm sure someone has seen this before: there doesn't seem to be a problem
 posted that nobody knows.

[snip]

I have had this problem recently on an old HP laptop. On this laptop
the switch is electronic, not mechanical. Try as I might in Linux, I could
not get the wireless to switch on, yet in Windows XP it was fine. It turns
out that the wireless card software in Windows was able to turn the
wireless off permanently, so I had to reset it there. After that, it is
always fine in Linux (Fedora 20). So you might have to look hard at the
software in Windows for the solution to your problem.

Happy hunting.

Regards,

Rick Phillips

Re: [SLUG] Download speed

2014-05-22 Thread Julien Goodwin
On 22/05/14 08:27, Edwin Humphries (text) wrote:
 Can anyone suggest a way of testing the download speed of my NBN fibre
 connection every hour and logging it? I have an ostensibly 100Mbps
 connection, but the speed seems to vary enormously, so an automated
 process would be good.

The speed would be expected to vary enormously across the internet; only to
nearby servers (and that pretty much means NSW, QLD or VIC for a user in
NSW) that themselves have 100Mbit spare would you actually approach
100Mbit.

Reasons you might not get 100Mbit, just off the top of my head (almost
certainly wildly incomplete):

But first there's one important thing to know: the bandwidth/delay
product, which in very simple terms means that as soon as there's any
packet loss at all (and there's always some), the usable throughput is
defined by a function of that loss and the latency of the connection.
This is why achieving 100Mbit to Australian servers isn't too hard, even
with a moderate amount of packet loss, but you can tolerate nearly no
loss on a connection to a European server.

http://en.wikipedia.org/wiki/Bandwidth_delay_product
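
To put rough numbers on that, here is a sketch using the well-known Mathis
approximation for loss-limited TCP throughput, rate <= MSS / (RTT *
sqrt(loss)); the RTT and loss figures below are illustrative assumptions
only:

awk 'BEGIN { mss = 1460 * 8; loss = 0.0001    # 0.01% loss, 1460-byte MSS
             printf "20ms RTT:  %.1f Mbit/s\n", mss / (0.020 * sqrt(loss)) / 1e6
             printf "300ms RTT: %.1f Mbit/s\n", mss / (0.300 * sqrt(loss)) / 1e6 }'

At the same 0.01% loss, a ~20ms domestic path still allows roughly 58
Mbit/s per TCP stream, while a ~300ms path to Europe is capped near 4
Mbit/s.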

Your domain:
1. Client device incapable of 100Mbit transfers, still surprisingly
common. OS, hardware, and client software all matter.

2. Client device to local gateway not capable of 100Mbit transfers.
Wired gigabit ethernet (or *perhaps* 802.11n in an RF quiet environment)
is needed for this. A bad ethernet cable may cause this too, always use
pre-made (moulded strain relief) cables, humans are not reliable enough
to make gigabit ethernet cables. Also, avoid a total cable length above
50m, many lower-end switches/routers can't actually drive ethernet to
the full spec distance.

3. Local gateway (what's commonly called a router; the industry calls it
CPE). Even fairly current-gen gear can struggle at a 100Mbit transfer rate,
especially if there's a high rate of session creation, or the router
suffers from bufferbloat.

NBNco's domain:

1. Media errors on the fibre, this should be monitored by NBNco.

2. Oversubscription on the fibre, the NBNco fibre is (currently) lit
with GPON, which is a 2.5Gb signal (down), normally shared between 32
premises (64 or more is possible, but IIRC not used in .au). This should
almost never occur if everyone's only at 100Mbit, even at higher speeds
it works surprisingly well. Again, NBNco should know if this happens.

3. Oversubscription between the OLT (fibre head) and the ISP. Shouldn't
happen, again monitoring should happen.

(Your) ISP's domain:
(Actually if you're not with one of the major providers there's a couple
of extra interconnect points that can congest as well)

1. Oversubscription on the connection from NBNco to the ISP. This is known
to cause some issues; sadly I can't find the best writeup of it. Actually
monitoring this is a pain: the ISP can do some bits but is unlikely to do
so, and NBNco probably don't either.

2. Congestion within the ISP's access network between the NBNco POI (120
nationally) and the ISP's core POP (usually 1 or 2 per city for all but
the very largest). For large ISPs this is highly unlikely, and the
smaller guys are covered by the wholesalers, who don't save anything
(significant) by getting it wrong.

3. Congestion within an ISP's core. Unlikely, except perhaps on
inter-state links.

4. Congestion between an ISP and the next ISP in the path. Depending on
who's involved this is either very unlikely or guaranteed.

In the middle:

1. Did someone put in a stateful firewall which breaks TCP? This is
pretty much all of them; it makes things a lot worse in high-latency
or high-loss environments.

2. Is the server far away (see bandwidth/delay product above).

The end server:

1. Does it have a path into a decent ISP network at >100Mbit?

2. Can it actually serve at >100Mbit? (Plain static files are fairly
easy; dynamic content can get expensive quickly.) The same caveats about
OS, hardware and software apply as for the clients.


[SLUG] Internet at 500m

2014-05-22 Thread Rick Welykochy

Hi Sluggers,

I have a friend living in near-jungle conditions in a small town
in the Philippines who wishes to span about 400m - 500m from
an Internet connection to his house in the bush.

Ethernet seems limited to 100m.
Wifi seems limited to about 100m - 200m.
Any suggestions for bridging this gap?

thanks,
rickw


--

Rick Welykochy || Vitendo Consulting

If consumers even know there's a DRM, what it is, and how it works, we've 
already failed.
-- Peter Lee, Disney Executive




Re: [SLUG] Best (most efficient method) recursive dir DEL

2014-05-22 Thread Kyle

Thanks to all for the responses.

Interestingly, everyone has come back with find (followed by..) as 
the best option. Perhaps this is simply a reflection of the fact my 3 
examples all used 'find'.


I have always thought (believed) 'find' was a less efficient process 
than 'locate' and kind of hoped 'locate' (or some other cmd I don't 
know) might pop up as a solution. I understand 'locate' depends on an 
updated 'db', but I figured that indexing process was still more 
efficient than 'find' trawling the structure in realtime.


Kyle


On 22-05-2014 19:16, Darragh Bailey wrote:

Hi Kyle,

You might find it worth looking at the following invocation of find:

find top_dir -name name_to_del -exec rm -rf {} \+ -prune

[snip]





Re: [SLUG] Best (most efficient method) recursive dir DEL

2014-05-22 Thread Amos Shapira
Locate only indexes path names, not other attributes (like type, size,
time, etc.).
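
So a locate-based variant would still need a per-path recheck, since the
database can be stale and matches include plain files as well as
directories. A hedged bash sketch, assuming mlocate's -0 flag:

locate -0 'name_to_find_and_DEL' | while IFS= read -r -d '' p; do
    [ -d "$p" ] && rm -rf "$p"    # re-verify before deleting: db may be stale
done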
On 23 May 2014 06:11, Kyle k...@attitia.com wrote:

 Thanks to all for the responses.

 Interestingly, everyone has come back with find (followed by..) as
 the best option.

[snip]



Re: [SLUG] Internet at 500m

2014-05-22 Thread Jason Ball
The old Pringles/Milo tin antenna (yagi) may do the trick. I know people
have had some interesting results, although unimpeded line of sight will be
needed.



On Friday, 23 May 2014, Rick Welykochy r...@vitendo.ca wrote:

 Hi Sluggers,

 I have a friend living in near jungle conditions in a small town
 in the Philippines who wishes to span about 400m - 500m from
 an Internet connection to his house in the bush.

 Ethernet seems limited to 100m.
 Wifi seems limited to about 100m - 200m.
 Any suggestions for bridging this gap?

 thanks,
 rickw





-- 
--
Teach your kids Science, or somebody else will :/

ja...@ball.net
vk2...@google.com vk2f...@google.com
callsign: vk2vjb


Re: [SLUG] Internet at 500m

2014-05-22 Thread David Lyon
Have they seen products like this?
http://www.aliexpress.com/item/150Mbps-high-power-outdoor-wi-fi-wireless-outdoor-wireless-access-point-cpe-equipment/1489776809.html



On Fri, May 23, 2014 at 2:57 AM, Rick Welykochy r...@vitendo.ca wrote:

 Hi Sluggers,

 I have a friend living in near jungle conditions in a small town
 in the Philippines who wishes to span about 400m - 500m from
 an Internet connection to his house in the bush.

 Ethernet seems limited to 100m.
 Wifi seems limited to about 100m - 200m.
 Any suggestions for bridging this gap?

 thanks,
 rickw





Re: [SLUG] Internet at 500m

2014-05-22 Thread Ken Foskey
There is a reflector that uses tin foil and cardboard to focus the signal.
It uses a parabola, like headlights.

First, get a good wireless router. I had a WRT54G and it was excellent;
everything since has been rubbish, and I did my research.

Sydney Wireless was getting up to 2 km for wireless, but they were boosting
signals and using specialist aerials.

Use different channels to span multiple hops.

On 23 May 2014 2:57:44 AM AEST, Rick Welykochy r...@vitendo.ca wrote:
Hi Sluggers,

I have a friend living in near jungle conditions in a small town
in the Philippines who wishes to span about 400m - 500m from
an Internet connection to his house in the bush.

Ethernet seems limited to 100m.
Wifi seems limited to about 100m - 200m.
Any suggestions for bridging this gap?

thanks,
rickw



-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [SLUG] Internet at 500m

2014-05-22 Thread Jason Ball
The high-gain antenna would work; just feed it with some decent coax so
there actually is some signal. LMR400 or similar.

J.




On Fri, May 23, 2014 at 7:59 AM, Ken Foskey kfos...@tpg.com.au wrote:

 There is a reflector that uses tin foil and cardboard to focus the signal.
 It uses a parabola, like headlights.

[snip]




-- 
--
Teach your kids Science, or somebody else will :/

ja...@ball.net
vk2...@google.com vk2f...@google.com
callsign: vk2vjb


Re: [SLUG] Internet at 500m

2014-05-22 Thread Edwin Humphries (text)

Rick

Last time I checked (and I'm going from memory), there were high-power
access points that put out 20-25dBm, instead of the 8-12dBm that is
standard (that's 500mW, instead of around 18mW).

There are also commercial directional antennas, such as Yagi and
backfire antennas, with gains of up to 16dBi. But, as Jason said, they
need to be line-of-sight. If that's not possible, you may need an antenna
with circular polarisation; these are more expensive. But most are not
suited for dual-frequency signals (and in any event you probably won't
get a 5GHz signal to propagate 500m).

Of course, you should plan on two matched antennas.

Regards
Edwin Humphries

On 23/05/14 02:57, Rick Welykochy wrote:

Hi Sluggers,

I have a friend living in near jungle conditions in a small town
in the Philippines who wishes to span about 400m - 500m from
an Internet connection to his house in the bush.

Ethernet seems limited to 100m.
Wifi seems limited to about 100m - 200m.
Any suggestions for bridging this gap?

thanks,
rickw





Re: [SLUG] Internet at 500m

2014-05-22 Thread David

Hi...

I'm the proud possessor of a pair of Ultrawap access points. They were
used for years over a 200-metre line-of-sight connection and worked
perfectly. I believe you can attach yagis to them to extend their range
if the standard aerial isn't good enough.


They are now surplus to my requirements if you are interested.



On 23/05/14 02:57, Rick Welykochy wrote:

Hi Sluggers,

I have a friend living in near jungle conditions in a small town
in the Philippines who wishes to span about 400m - 500m from
an Internet connection to his house in the bush.

Ethernet seems limited to 100m.
Wifi seems limited to about 100m - 200m.
Any suggestions for bridging this gap?

thanks,
rickw




--
David McQuire
0418 310312



Re: [SLUG] Internet at 500m

2014-05-22 Thread Rick Welykochy

Hey Penguinistas,

Thanks to all for loads of useful info on extending my friend
Andy's digital reach in the Philippine jungle. I will pass all
replies on to him.

He is a Linux developer. I've worked with him on a Python + Qt
project. So you are helping one of the converted.

cheers
rickw




Rick Welykochy wrote:

Hi Sluggers,

I have a friend living in near jungle conditions in a small town
in the Philippines who wishes to span about 400m - 500m from
an Internet connection to his house in the bush.

Ethernet seems limited to 100m.
Wifi seems limited to about 100m - 200m.
Any suggestions for bridging this gap?

thanks,
rickw





--

Rick Welykochy || Vitendo Consulting

If consumers even know there's a DRM, what it is, and how it works, we've 
already failed.
-- Peter Lee, Disney Executive

