Re: [SLUG] trivial, but banging head on wall ...

2013-12-04 Thread Mark Suter
James,  

> First I think that having spaces in filenames is like wearing a
> tee shirt saying "hit me!".

Please remember that a pathname on a valid POSIX filesystem may
contain any character except the null character.

> I'm trying to backup all my wife's pictures and although I can
> do any one file on CLI doing a script is humbling me. If anyone   
> can help I'd be grateful. Thanks  

If possible, just back everything up.  I'd much rather waste a
bit of disk space than have to tell someone that I didn't back
something up because they didn't ask for it.

That said, here's a quick command based on your script - try putting
this into http://explainshell.com/ if you don't grok the mechanics: 

  find . -type f \( -iname \*.jpg -o -iname \*.tif -o -iname \*.jpeg \
      -o -iname \*.qrf -o -iname \*.nef \) -print0 |
  cpio --null --format=crc --create |
  ssh j...@dvr.home cd /mnt/photos \; \
      cpio --make-directories --preserve-modification-time --extract

The first command, find, just lists all the matching files, with a
null character between each filename.  This will handle all kinds of
weird characters in the filenames.
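
To see this in action, here's a quick throwaway test in a scratch
directory (the filenames are made up for the demo):

  $ cd "$(mktemp -d)"
  $ touch 'holiday snaps 2013.jpg' 'beach (1).jpg'
  $ find . -type f -print0 | xargs -0 ls -l    # both files listed, spaces and all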

The second command, cpio, reads a list of filenames from standard
input, expecting them to be separated by null characters, and
creates an archive on standard output.
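
If you want to inspect what would go over the wire before involving
the remote machine, you can write the archive to a scratch file and
list it back (/tmp/photos.cpio is just an example name, and I've
shortened the -iname list for brevity):

  find . -type f \( -iname \*.jpg -o -iname \*.nef \) -print0 |
  cpio --null --format=crc --create > /tmp/photos.cpio
  cpio --list < /tmp/photos.cpio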

The third command, ssh, executes the given command on the remote
system.  That command is in two parts: first change directory   
into /mnt/photos and then extract the archive.  
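
The escaped semicolon is what keeps both halves running on the far
side; without the backslash, your local shell would swallow the
semicolon and run the second command locally.  A quick way to see the
difference (user@example.host is a placeholder for any box you can
ssh into):

  $ ssh user@example.host cd /tmp \; pwd   # prints /tmp - both ran remotely
  $ ssh user@example.host cd /tmp ; pwd    # prints your local directory - pwd ran locally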

If you wanted to trade off CPU and RAM to save network bandwidth, this
might be a suitable variant, adding compression and decompression on
either side of the ssh connection:

  find . -type f \( -iname \*.jpg -o -iname \*.tif -o -iname \*.jpeg \
      -o -iname \*.qrf -o -iname \*.nef \) -print0 |
  cpio --null --format=crc --create |
  xz -9 --compress |
  ssh j...@dvr.home cd /mnt/photos \; \
      xz --decompress \| cpio --make-directories --preserve-modification-time --extract
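
One thing to keep in mind: the JPEGs are already compressed, so xz may
not shrink them much; it mainly helps with any uncompressed TIFF or
raw files.  If you'd rather not manage both ends of the compression
yourself, ssh's own -C option (zlib compression over the connection)
is a simpler, if gentler, alternative - roughly:

  find . -type f \( -iname \*.jpg -o -iname \*.tif -o -iname \*.jpeg \
      -o -iname \*.qrf -o -iname \*.nef \) -print0 |
  cpio --null --format=crc --create |
  ssh -C j...@dvr.home cd /mnt/photos \; \
      cpio --make-directories --preserve-modification-time --extract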

--  
Mark Suter http://zwitterion.org/ | I have often regretted my   
email addr  | speech, never my silence.   
mobile 0411 262 316  gpg FB1BA7E9 | Xenocrates (396-314 B.C.)   
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] help with proxy code...

2013-09-30 Thread Mark Suter
Ken,

> Fantastic response Mark,  thank you.  

You're welcome. 

In the code provided, the HTTP response code will be from the proxy
itself or passed through from the origin server.  To be safe, if you do
not get a 200 success code, then just accept that it's an error.
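
If you want to see exactly which status the proxy is handing back,
curl can print just the code (proxy.example:3128 below is a
placeholder for your real proxy):

$ curl --silent --output /dev/null --write-out '%{http_code}\n' \
    --proxy http://proxy.example:3128/ http://www.example.com/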

> For the record the generic C# connections for Microsoft are apparently:   

Are these the ones you're talking about?

http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx   

http://msdn.microsoft.com/en-us/library/system.net.http.httpclient.aspx 

> a) restricted to 2 at a time. 

http://www.danielroot.info/2009/02/improve-net-web-client-performance-by.html   

> b) try and resolve the proxy every single time using the complex  
> proxy settings under windows. 

http://stackoverflow.com/questions/4415443/system-net-webclient-unreasonably-slow
   

> c) just generally perform badly.  

You should not need to resort to sockets in order to do a basic web 
request.  There are good packages in any real production language, for  
example, Go, Perl, Python and Java: 

http://golang.org/pkg/net/http  

http://search.cpan.org/perldoc?WWW%3A%3AMechanize   

http://docs.python.org/3/library/urllib.request.html

http://hc.apache.org/   

If this code is for "production", I'd find a proper package to  
handle all the complexity of http.  Otherwise, you'll find that 
over time you end up writing more than you want to maintain ;)  

> I grabbed some code off the internet with direct connection using
> sockets and run times dropped from approx 23 seconds to 11 seconds.

Even 11 seconds sounds way too long.  What's the time for a     
simple curl on the command line?
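
Something along these lines gives a baseline to compare your code
against (same placeholder proxy as above):

$ time curl --silent --output /dev/null \
    --proxy http://proxy.example:3128/ http://www.example.com/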

--  
Mark Suter http://zwitterion.org/ | I have often regretted my   
email addr  | speech, never my silence.   
mobile 0411 262 316  gpg FB1BA7E9 | Xenocrates (396-314 B.C.)   


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

Re: [SLUG] help with proxy code...

2013-09-29 Thread Mark Suter
> I have to code on the dark side and the code from microsoft   
> is really slow to the point where replacing it with socket
> connections directly is about a 1/3 of the time.  

Are you permitted to share example code?

> Does anyone know how a proxy works if I directly call a connect?  Is  
> it the same http codes just connecting to the proxy?  

I've written the explanation below assuming you're talking about the
CONNECT method (it's a standard verb, like GET, POST and HEAD).  Please 
re-ask if you meant something else. 

The proxy will give an HTTP response to the CONNECT method request.  If
that response is anything other than 200, you don't get a tunnel and
your second question wouldn't matter.   

For a 200 response, the proxy gets out of the way and just passes bytes 
over the established tunnel.  For an HTTP request over the tunnel, expect
HTTP response codes.  For a TLS request, expect a TLS response ;)

Here's an example of a HEAD request to google.com over a tunnel.  The   
first response (lines 1 & 2) is from the local squid proxy and the
rest of the response lines are from Google.

## Install squid and permit a CONNECT to port 80.   
$ sudo apt-get install squid
[ ... ] 
$ sudo perl -i -pe 'm/acl SSL_ports port 443/ and print "acl SSL_ports port 80\n"' /etc/squid/squid.conf
$ sudo service squid reload 
Reloading Squid configuration files.
done.   
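
If you want to confirm the proxy is up before trying the requests
below, a quick check (3128 is squid's default port):

## Optional: check squid is answering on its default port.
$ nc -z localhost 3128 && echo "squid is listening"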

## Bogus port - an error from the proxy.
$ echo -ne "CONNECT google.com:25 HTTP/1.0\n\n" | tee /dev/stderr | nc localhost 3128 | perl -pe 's/^/$.: /'
CONNECT google.com:25 HTTP/1.0  

1: HTTP/1.0 403 Forbidden   
2: Server: squid/2.7.STABLE9
[elided]

## Bogus URL - an error from the target server. 
$ ( echo -ne "CONNECT google.com:80 HTTP/1.0\n\n"; sleep 1; echo -ne "HEAD /does-not-exist HTTP/1.0\n\n" ) | tee /dev/stderr | nc localhost 3128 | perl -pe 's/^/$.: /'
CONNECT google.com:80 HTTP/1.0  

1: HTTP/1.0 200 Connection established  
2:  
HEAD /does-not-exist HTTP/1.0   

3: HTTP/1.0 404 Not Found   
4: Content-Type: text/html; charset=UTF-8   
[elided]

## A valid, if boring, request. 
$ ( echo -ne "CONNECT google.com:80 HTTP/1.0\n\n"; sleep 1; echo -ne "HEAD / HTTP/1.0\n\n" ) | tee /dev/stderr | nc localhost 3128 | perl -pe 's/^/$.: /'
CONNECT google.com:80 HTTP/1.0  

1: HTTP/1.0 200 Connection established  
2:  
HEAD / HTTP/1.0 

3: HTTP/1.0 302 Found   
4: Location: http://www.google.com.au/[elided]  
5: Cache-Control: private   
6: Content-Type: text/html; charset=UTF-8   
[elided]

## Do the cleanup.  
$ sudo apt-get purge --auto-remove squid
$ sudo rm -r /var/spool/squid   

--  
Mark Suter http://zwitterion.org/ | I have often regretted my   
email addr  | speech, never my silence.   
mobile 0411 262 316  gpg FB1BA7E9 | Xenocrates (396-314 B.C.)   


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html