Re: [lopsa-tech] C coding

2010-10-09 Thread Phil Pennock
On 2010-10-08 at 08:07 -0700, Andrew Hume wrote:
 now i would have just said
   hour = t/(3600 * 1000); // remember t is in milliseconds
   // if i wanted hr in 00..23 range: hour = hour%24;
 
 bill said
   xx = t/1000;
   strncpy(hr_str, ctime(&xx) + 11, 2);
   hr_str[2] = 0;
   hour = atoi(hr_str);
 
 ignoring logic errors (we really wanted hour from the epoch, and not
 hour of the day) and time errors (ctime does localtime, not UTC);
 my question is, where does someone learn this technique?
 i am gobsmacked.

In his defence: if you had wanted hour of the day, it's far better to
use system libraries, which are well debugged, than to roll your own
date calculations and discover the need to deal with things like
leap-seconds, etc.  And for hour-of-the-day, in those cases where that's
actually needed, localtime is normally what's wanted by non-sysadmins.

Now, still better to just use localtime() and access the tm_hour field
from the resulting struct tm, sure.  But I've seen self-calculations
snowball out of control as people just add a little bit here or there,
until you end up with something almost as bad as the extraneous string
intermediate steps that Bill came up with.
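
For the record, a minimal sketch of the struct tm route; the variable
names are mine, and t is taken to be milliseconds since the epoch as in
Andrew's example:

  #include <stdio.h>
  #include <time.h>

  int main(void) {
      long long t = 1286580000000LL;          /* milliseconds since the epoch */

      /* hour since the epoch, Andrew's way */
      long long hour = t / (3600LL * 1000LL);

      /* hour of the day, via the system libraries */
      time_t secs = (time_t)(t / 1000);
      struct tm tm;
      localtime_r(&secs, &tm);                /* gmtime_r() if you want UTC */

      printf("%lld %d\n", hour, tm.tm_hour);  /* tm_hour is already 00..23 */
      return 0;
  }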

So the logic errors are really communication failures at the
requirements stage.  Now, the string munging, that's another story
entirely; someone who'd do that, I can believe, just wasn't paying
attention to the requirements as they were specified, because they were
already off mentally solving the problem.

-Phil


Re: [lopsa-tech] The FPGA and the NFS mount: A tale of bursty writes

2010-09-26 Thread Phil Pennock
On 2010-09-25 at 02:03 -0500, Brad Knowles wrote:
 Yup.  The mmap() system call on NFS is undefined, which means that Berkeley 
 DB, and any other software that uses mmap() cannot safely be used on NFS.  
 This is supposedly fixed in the latest versions of NFS, but I have yet to 
 have that claim actually demonstrated to me.

My recollection is that it's actually the flushing behaviour of writes
to mmap()'d memory that is undefined: you can't reliably flush them, so
they can't be combined with locks, of any kind, for protecting access
to data.  You also can't predict order-of-writes, etc.

So mmap() works just fine, as long as you're very certain that only one
host is ever going to be accessing the file at any time.  So a single
dedicated host, with manual failover to a hot standby after taking down
the normal server, would work.
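
As a sketch of what that single-host case looks like (the path and size
here are made up, and the msync() is exactly the piece whose NFS
semantics you can't rely on across hosts):

  #include <fcntl.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/mnt/nfs/data.db", O_RDWR);
      if (fd < 0) return 1;

      size_t len = 4096;
      char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (p == MAP_FAILED) { close(fd); return 1; }

      memcpy(p, "hello", 5);    /* ordinary stores into the mapping */
      msync(p, len, MS_SYNC);   /* flush; *when* this reaches the NFS
                                   server is the undefined part */
      munmap(p, len);
      close(fd);
      return 0;
  }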

I'm not saying that I recommend this, but knowing the actual limitations
can make a difference when you're trying to duct-tape your way out of a
problem.

-Phil


Re: [lopsa-tech] Compression Algorithms

2010-06-29 Thread Phil Pennock
On 2010-06-29 at 17:14 -0400, Edward Ned Harvey wrote:
 Using default settings, 7-zip & lzma are much slower than bzip2.  However,
 if you specify --fast, then lzma is both 2x faster and 2x stronger than any
 level of bzip2, which IMHO obsoletes bzip2.
 
 I noticed my LZO results were screwed up.  Updated results are attached.

You've given figures for a typical case.  Without worst-case figures,
it's premature to declare anything else obsolete, as some of us have to
worry about people deliberately choosing to interact via worst-cases to
see what fun they can have.

You also haven't touched on memory stability -- do some situations, while
streaming, suddenly cause the available implementations to balloon in
memory requirements?
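
As an illustration of the property I mean: a streaming API with
caller-supplied buffers keeps memory bounded no matter what the input
tries to do.  This sketch uses zlib, not one of the codecs you measured,
but the API shape is what I'd want demonstrated for each of them:

  #include <stdio.h>
  #include <zlib.h>

  /* Decompress stdin to stdout using fixed 16KiB buffers. */
  int main(void) {
      unsigned char in[16384], out[16384];
      z_stream zs = {0};
      int ret = Z_OK;

      if (inflateInit(&zs) != Z_OK) return 1;
      do {
          zs.avail_in = fread(in, 1, sizeof in, stdin);
          if (zs.avail_in == 0) break;
          zs.next_in = in;
          do {
              zs.avail_out = sizeof out;
              zs.next_out = out;
              ret = inflate(&zs, Z_NO_FLUSH);
              if (ret == Z_STREAM_ERROR || ret == Z_NEED_DICT ||
                  ret == Z_DATA_ERROR || ret == Z_MEM_ERROR) {
                  inflateEnd(&zs);
                  return 1;
              }
              fwrite(out, 1, sizeof out - zs.avail_out, stdout);
          } while (zs.avail_out == 0);
      } while (ret != Z_STREAM_END);
      inflateEnd(&zs);
      return 0;
  }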

There's a lot more than "how fast" and "how small" to choosing a
compression algorithm, for many situations besides just "shrink some
files somewhere under my homedir".

-Phil


Re: [lopsa-tech] iSCSI HBA

2010-06-28 Thread Phil Pennock
On 2010-06-28 at 12:26 -0700, Brent Chapman wrote:
 Everybody says "but what if the switch somehow leaks packets from one VLAN
 to another?"  Well, what if the switch ACLs didn't work, and passed traffic
 that it shouldn't?  Those would both be major security bugs, drawing a quick
 response from the vendors in question.

You might think that.  For us, at $former_employer, it just led us to
stop buying Alcatel switches when we couldn't get them to see the
problem with having broadcast traffic leak across all VLANs.

In this day and age, I'm inclined to agree that switches should do as
you say, but I'd still want to stress-test before trusting their VLAN
logic to be an unreinforced part of a security perimeter.

-Phil


Re: [lopsa-tech] Anybody with a netapp?

2010-04-23 Thread Phil Pennock
On 2010-04-23 at 08:15 -0400, Edward Ned Harvey wrote:
 I was concluding precisely the opposite.  Namely: to have the .snapshot
 directory in every directory is nice and convenient for user access, but if
 a user can "mv" some directory and suddenly it's not being backed up
 anymore, not included in the previous snaps of the parent directory ...
 
 I think it sounds like the Netapp implementation is something I would call
 "broken albeit more convenient" from a user perspective.  Personally I
 choose reliability over convenience any day.  Especially when it comes to
 backups.

My recollection, from when I admin'd NetApps, was that *both* the
mount-point .snapshot dir and the per-directory .snapshot work, so that
when there aren't directory renames happening, these are all identical:

  /.snapshot/$snapshot_name/foo/bar/baz.txt
  /foo/bar/.snapshot/$snapshot_name/baz.txt
  /foo/.snapshot/$snapshot_name/bar/baz.txt

For backups, you'd use the mount-point .snapshot, which by default is
exposed to readdir().  For users, you'd use the most convenient
directory to get to it, and if there was a rename, you'd use a .snapshot
from an ancestor directory above the rename and then drill back down the
directories.

Directories are snapshotted too; .snapshot is simply a way of vectoring
off and choosing which direction to go before continuing down a stored
directory tree.

You might also want to grab the na_* manpages from NetApp, if you have a
support contract, and read up on the snap command there.

-Phil


Re: [lopsa-tech] Backing up sparse files ... VM's and TrueCrypt ... etc

2010-02-19 Thread Phil Pennock
On 2010-02-18 at 22:10 -0600, Brad Knowles wrote:
 On Feb 18, 2010, at 8:43 PM, Edward Ned Harvey wrote:
 
  The ultimate goal is to have the job completed quickly.  But that can only
  be done if the number of blocks sent is minimized.  Presently, rsync reads
  the whole local file, and also reads the whole remote file to diff them, and
  sends only the changed blocks.  "Read the entire remote file" is the fault
  here.  You could write the entire remote file, faster and with less traffic,
  than reading it and sending changes.
 
 If you have rsyncd running on the remote machine instead of mounting
 it as a remote filesystem on the local client, then the rsync local
 client will communicate with the remote daemon, and they will each
 calculate their own respective checksums, which can then be compared.

Actually, I believe this happens anyway.

The normal mode of operation, where you use ssh/rsh to connect to a
remote host, invokes { rsync --server --sender } on the remote side.

I'm not aware of any mode of operation in which rsync pulls the entire
contents of a file from the remote side in order to minimise what it
sends there; that seems counter-intuitive, but I'm no expert in the
workings of rsync.  (I looked into it a little a few years ago, to have
restricted rsync-over-ssh via command="rsync --server [...]" in
authorized_keys, to permit only very restricted rsync access with a
dedicated ssh key; worked nicely.)
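
For the curious, that sort of authorized_keys entry looks roughly like
the following; the path, the extra no-* restrictions and the key
material are illustrative, and the exact --server arguments rsync sends
vary by version and invocation:

  command="rsync --server --sender . /srv/backup/",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza...rest-of-key... rsync-pull-key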

-Phil


Re: [lopsa-tech] IPv6 on OS X

2010-02-17 Thread Phil Pennock
On 2010-02-17 at 22:44 +, Colm Buckley wrote:
 So, I've been learning all about IPv6; lots of good stuff.  Got myself a
 SixXS tunnel, set up a routable subnet, set up radvd, configured DNS...
  everything works great.
 
 Slight strangeness with the OS X laptops, though - they seem to neither
 consistently select IPv6 over IPv4 when both are available, nor have a knob
 to tweak to change the behaviour.  It seems to be pretty random whether an
 IPv4 or IPv6 address is returned by the native resolver routines.
 
 Anyone know whether there's a button to press to make it prefer IPv6 over
 IPv4 when both are available?

This came up on the ipv6-ops list recently, which is why I know of this,
not from first-hand experience.

Ron Broersma notes that:
} The confusing behavior you're seeing in OSX 10.6 is most likely the
} mDNSResponder issue reported earlier by Bernhard.  Apple
} switched to mDNSResponder for unicast DNS resolution in 10.6, but that
} broke IPv6 address selection because mDNSResponder takes
} whatever is the first response (A or AAAA) and drops (rejects) all other
} answers.  So, depending on how you ask, and depending on
} what address comes back first, there is no way for the rest of the
} system or applications to see all the responses and make
} appropriate address choices.  This seriously impacts IPv6 interactions,
} because it no longer deterministically prefers AAAA over A,
} but rather uses only the response that arrives first.
} 
} A detailed bug report (#733104) was filed last October.
} 
} As always, multiple bug reports to Apple will get their attention.
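
If you want to see for yourself what the resolver hands back, and in
what order, a small getaddrinfo() probe suffices; www.example.com below
is just a placeholder for some dual-stacked name you care about:

  #include <stdio.h>
  #include <netdb.h>
  #include <sys/socket.h>

  int main(int argc, char **argv) {
      const char *host = argc > 1 ? argv[1] : "www.example.com";
      struct addrinfo hints = {0}, *res, *p;

      hints.ai_family = AF_UNSPEC;      /* ask for both A and AAAA */
      hints.ai_socktype = SOCK_STREAM;
      if (getaddrinfo(host, "80", &hints, &res) != 0) return 1;

      /* print the record type of each result, in the order returned */
      for (p = res; p != NULL; p = p->ai_next)
          printf("%s\n", p->ai_family == AF_INET6 ? "AAAA" :
                         p->ai_family == AF_INET  ? "A" : "other");
      freeaddrinfo(res);
      return 0;
  }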

Regards,
-Phil


Re: [lopsa-tech] Platform bashing (was Re: Email naming convention)

2009-10-26 Thread Phil Pennock
On 2009-10-26 at 19:41 -0700, Robert Hajime Lanning wrote:
 DNS can very easily not work.  You just need every site managing their
 own, while not listening to the One Who Knows DNS(TM).
 
 You should see the horrible mess some zone files are.

Wait, you mean those people actually updated DNS?  Oh frabjous joy!

I still recall the admin at $previous_employer who thought that if he
ran out of IPs for the DHCP pool, the appropriate action was to ping
some IPs and if there was no response, grab that IP.

Look in DNS zonefiles for allocations, or update DNS?  Who bothers with
that?

Fortunately, one of the IPs which he grabbed was for the laptop of the
General Manager.  Never had an easier time reverting stupidity ...

-Phil


Re: [lopsa-tech] Email naming convention

2009-10-23 Thread Phil Pennock
On 2009-10-23 at 20:12 -0600, Jan L. Peterson wrote:
 On Fri, 2009-10-23 at 22:00 -0400, Edward Ned Harvey wrote:
   you could do most of this today with plus addressing like
   david+catch...@lang.hm
  
  How?  I don't know of any option in exchange or gmail to enable such a
  feature.
 
 Actually, Gmail supports this out of the box:
 http://gmailblog.blogspot.com/2008/03/2-hidden-ways-to-get-more-from-your.html

Disclaimer: speaking in a personal capacity

Also, Gmail canonicalises away any dots, so jan.l.peter...@gmail.com ==
janlpeter...@gmail.com == jan.l.pete.r@gmail.com.  So you can use
plus sub-addressing where supported, and where it's not supported you
can insert some extra dots in the address.  Not so convenient, but
you're working around buggy code elsewhere, and I know some people who
like having the option anyway.

Also, Google Apps for your Domain supports catch-all addresses.  Go to
manage the domain, "Email settings", main config page under "Email
routing" -- you can choose what to do with "Unknown account messages";
the default is "Discard" but you can choose to route them to a
catch-all address.  I just checked this on the family domain account.
It might be a premium feature, I don't recall.

For large domains, enabling a catch-all is almost certainly a mistake.
The volumes are prohibitive, even after spam-filtering.  For small
domains, *shrug*.

On my personal email which goes to my colo-box, I used to have a
catchall address.  When I transitioned mail to my colo-box from the
machine of a friend who'd helped me out for a while, I enabled catchall on
the older domain.  That lasted not-very-many minutes and proved to be
unwise.  For a newer domain like spodhuis.org, I could get away with it
for a little while.

However, there are harvesters which don't understand the difference
between an email address and a message-id, or which break at the hyphen
in lopsa-tech.  And then I got the joe-jobs from random
left-hand-sides, resulting in bounces.  So after a while, I gave in and
demoted the catchall to "works in a pinch" status -- I configured
my MTA so that the catchall address only exists if the SMTP Envelope
Sender is not empty.  Since I never send from a catchall, this works,
but it does break some sign-ups.
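
One way to express that in Exim is a router along these lines; the
router name and destination are made up, and this is illustrative
rather than my actual config:

  catchall:
    driver = redirect
    domains = spodhuis.org
    # an empty envelope sender means a bounce: no catchall for those
    condition = ${if !eq{$sender_address}{}}
    data = real-mailbox@localhost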

I could get away with this because I had configured my system so that if
a Shared Folder in Cyrus was created, then that left-hand-side springs
into existence.  I.e., by creating Shared Folders/spodhuis/bert, the
address b...@spodhuis.org becomes valid and is delivered to the shared
folder with no further configuration.  With that, my wife could just
create a new folder for a new LHS and things would work.

But these days, she mostly just uses Gmail anyway.  She finds it much
easier to use than Thunderbird.

Regards,
-Phil


Re: [lopsa-tech] auto outbound ssh from windows

2009-10-11 Thread Phil Pennock
On 2009-10-11 at 23:41 -0400, Edward Ned Harvey wrote:
 In openvpn, each individual client gets a private key or a private
 certificate.  If that key or cert were acquired by anyone other than the
 intended end user, it would be possible for someone unauthorized to get in.
 
 Basically, this is a one-stage authentication, and I don't think openvpn
 supports two or more.  Ideally, authentication will require both something
 physical and something known.  (A preshared key or a certificate which is
 too large for a human to be expected to memorize or type it in, can be used
 as a substitute for something physical as a key.)  Ideally, no user could
 even receive the password prompt unless they've already passed the physical
 authentication.

You can combine the "something physical" + "something known" in one
step, by subverting the DRM capabilities of a laptop to do what DRM was
actually advertised to do -- support the users, not the OS vendors.  :)

IPsec, using http://sourceforge.net/projects/opencryptoki/ as the
PKCS#11 module to access a Thinkpad's DRM chip, does this.  I've not
been involved in the setup/admin so can't help beyond pointing out that
it's possible and works; it does impose limits on which vendors you buy
laptop hardware from for Linux users, though.

The "something you have" is, theoretically, then tied to the actual
laptop as a physical item rather than just some data; if someone can get
hold of the laptop for long enough to open it up and shove probes onto
the DRM chip then there are probably various attacks to extract the data
from it (Cambridge University has done lots of work there), but AFAIK the
attacks involve triggering authentications, proving that it's difficult
to defend against a malicious holder of the item.  But in this case,
there's also the "something you know": the account password for
sudo/whatever to get access to the chip.

So you're still vulnerable if your employee leaks their password to the
people with the laptop, or the employee is malicious and wants to let
lots of other people in using their account, with audit trails pointing
to ... them.  So really, you're still vulnerable to rubber-hose
cryptanalysis.  But otherwise, it should be fairly solid, barring
attacks where an OS compromise can use some undocumented DRM chip
backdoor to subvert the DRM and get the signing key out.

-Phil


Re: [lopsa-tech] Swap sizing in Linux HPC cluster nodes.

2009-09-04 Thread Phil Pennock
On 2009-09-04 at 12:27 -0700, da...@lang.hm wrote:
 for historical reasons (since it used to be the limit on swap partition 
 size), I have fallen in the habit of creating a 2G swap partition on all 
 my systems. If I was going to change it I would probably shrink it down 
 (by the time a system is using 512M to 1G of swap, it's probably slowed to
 unusable levels anyway and so I would just as soon have the system crash
 so that my clustering HA solution can kick in instead)

While I mostly agree about the limited utility of swap, on FreeBSD I
still went (go, on my personal box) for swap = RAM for one simple
reason: kernel crash dumps.

If you want to be able to figure out *why* a kernel has fubar'd, it's
good to be able to get a crash dump and since the swap partition is used
for writing that out, you need enough swap to hold the contents of RAM.
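
On FreeBSD the setup is just a couple of rc.conf lines pointing the dump
device at swap; the device name here is an example:

  # /etc/rc.conf
  dumpdev="/dev/ad0s1b"   # a swap partition, or "AUTO" for the first one
  dumpdir="/var/crash"    # where savecore(8) puts the dump after reboot

After a panic the kernel writes RAM out to that partition, and on the
next boot savecore(8) recovers it into /var/crash for analysis with
kgdb.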

I've debugged a few issues this way.  Given the tendency of the awkward
problems to only show up in production systems, no matter *how* good
your staging and load test environments, I'd be very loath to give it
up.

I tend to peruse Linux Weekly News to keep vaguely up-to-date on what's
going on in Linux kernel work, and I understand that there's a project
working on Linux kernel crash dumps too.  A search engine yielded:
  http://lkcd.sourceforge.net/

So, given that you're unlikely to be using all the disk on the systems,
it might be worth creating the swap partition even if you don't enable
it now; two years from now, that saves you having to re-partition the
whole cluster just to get the dumps you need to debug the strange
problem that keeps killing nodes.

Regards,
-Phil


Re: [lopsa-tech] shared network disks - vs gfs - vs distributed filesystem - vs ...

2009-07-02 Thread Phil Pennock
On 2009-07-02 at 11:31 -0400, Edward Ned Harvey wrote:
 Until today, I only had one idea which came close - google gfs (not
 global gfs) does exactly what I want except that it always writes to 3
 or more peers.  If google gfs is available for use (can install and be
 used on linux) and if it's configurable to take that number down to 1,
 then it might do what I want.

I've double-checked the published paper on this, to be sure I'm not
revealing anything not already published, but am just drawing your
attention to things you've perhaps overlooked.

GFS almost certainly doesn't do what you want.  It gets some of its
advantages by not being a regular file-system: you can't mount it, and
if you try to hack in support via FUSE then you can find some normal
POSIX ops being rather unhealthy for the GFS cell.

You'd need all of your applications to link against GFS client
libraries.

 Clarification - Suppose I have local raid 0+1, and I do random
 read/write to those disks.

Google GFS is not what you want.  Files are append-only.  You can hack
around that, but it's probably more development work than you want to
spend on it.

-Phil, who has run GFS cells for a living


Re: [lopsa-tech] Any SCO MMDF gurus on the list?

2009-01-10 Thread Phil Pennock
On 2009-01-10 at 22:48 -0500, John  BORIS wrote:
 Thanks. No, we didn't fill any disks. I have about 18 messages hung on
 each of the 20 servers. I am about to just delete all of them and
 restart things. I did a restart of the mmdf scripts but that didn't
 help. In the past the main relay server used to stop accepting mail
 because of a bad or malformed email. Like you said, it is flaky. I
 learned this, and when mail used to hang on the relay server I would
 find the culprit and delete it. This time it is all 20 servers that
 sort of got in unison to be stubborn and stop sending the mail. The
 relay server is sending email like a champ.

Oh yes, I had mercifully forgotten about how flaky it got with junk
files clogging things up, and about the clean-up scripts I wrote.  They
were for my employer and I no longer have copies.

Why are you sticking with MMDF -- just the cost of migration, or are
you stuck using SCO anyway?

If your main limitation is the mailstore format for delivered mail, note
that Exim also supports that format, so it can be used as a drop-in
replacement with appropriate configuration.
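
A sketch of the relevant Exim configuration; the transport name and
spool path are illustrative, and mailstore_format is the appendfile
option that selects the mailstore layout:

  local_delivery:
    driver = appendfile
    directory = /usr/spool/mail/$local_part
    mailstore_format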

(I believe Exim also compiles on SCO -- there's Makefile and OS header
support for the platform but I don't know how extensively it's tested).

(Yes, there are other MTAs, but I don't know if they have mailstore
support.)


Re: [lopsa-tech] Vuln: OpenSSL DSA/ECDSA server checks invalid

2009-01-07 Thread Phil Pennock
On 2009-01-07 at 11:05 -0800, Phil Pennock wrote:
 The new hole above looks as though it's useful for a direct
 man-in-the-middle, but for as long as you accept certificates where a
 path in the trust chain uses MD5 signatures you're also up a dark creek
 without a paddle.  Expecting users to start checking the hash algorithms
 for bank sites, etc, is a definite non-starter.  Until the NIST
 competition yields a new standard hash algorithm it looks as though
 we're using the less-broken SHA1 for certs.

Sorry to follow up to myself, but before pedants start pointing it out:
yes, I know we have the SHA2 family of hash functions, such as
SHA-256/SHA-512, but in practice AIUI they're not widely supported in
older browsers, so a CA which wants to issue widely usable certs has a
choice of MD5 or SHA1.

My understanding, from following what cryptographers write, is that the
SHA2 stuff is better than SHA1 but shares some of the same theoretical
basis, so there is cause for concern, and the current NIST competition
could ideally have started sooner.

So when checking your certs and replacing MD5 certs, and when checking
your code, keep good notes and treat this as a practice run.  In the
next couple of years, you're likely to need to repeat this at least
once, possibly twice (MD5 -> SHA1 -> SHA-512 -> new SHA-3 algorithm).
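
Checking what you currently have is quick with the openssl CLI;
cert.pem here stands in for whichever certificate you're auditing:

  $ openssl x509 -in cert.pem -noout -text | grep 'Signature Algorithm'
      Signature Algorithm: md5WithRSAEncryption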

I'm trying to find authoritative sources on which hash algorithms are
supported by which browsers, hoping to skip a step and go MD5 -> SHA-512
or the like; if anyone wants to see the candidates, they're in RFCs 3279
and 4055.  3279 gives us MD2, MD5 and SHA1.  4055 pulls in the PKCS#1
updates (RFC 3447) and allocates the codes for SHA-224, SHA-256, SHA-384
and SHA-512.  That happened in 2005.

If anyone has pointers to decent information on what is supported by the
crypto used in various browsers, that would be appreciated.  :)

Thanks,
-Phil


[lopsa-tech] SOHO Wireless AP recommendations revisited

2008-11-09 Thread Phil Pennock
Back in 2006, Hal Pomeranz asked for SOHO wifi AP recommendations.  I'd
like to re-open the topic, asking for recommendations for 2008/2009;
reply off-list and I'll summarise.

Requirements:
 * no client OS dependencies for configuration (see below)
 * WPA2 PSK (personal); more is better
 * IPv6 router, 6to4 support
 * 802.11a, or draft-n at 5GHz, or a/b/g/n or whatever
Nice to have:
 * file server via USB-connected disks

I live in an apartment complex and can currently see 17 networks, which
is on the low side for what I normally see, so 5GHz 802.11a is a must
for reliable wifi.

I currently use an Apple Airport Extreme and am, on the whole, fairly
happy with it.  However, come December I will be refreshing my work
laptop, which is the only MacOS client I have; I have no intention of
sticking with MacOS (Leopard is disappointing, especially given some of
the things $work does to Leopard laptops; no, waiting 2 minutes for the
machine to become responsive after joining the VPN, while the LDAP
directory resynchronises, is not acceptable when you're oncall and
trying to respond to a page).

So, I'll be using some kind of Linux distribution and need to be able to
admin my base-station.  The closest I can find to support for the
current base-station is something last updated in 2002, which doesn't
support all of the current features and has no clearly legal approach to
getting firmware for security updates.  I cannot rely upon something so
tenuous for my wireless connectivity.  Thus it's time to get a new
base-station before I lose my MacOS client.

I have a WRT54GL for b/g connectivity for devices limited to that; it
tends to degrade rather fast for reasons described above, but it is
adequate in a pinch.

I use IPv6 extensively and am stuck behind a cable provider with no IPv6
for customers, so I need 6to4.  I'd much rather see this provided at the
wifi router stage, so that there's IPv6 on the network for all clients
without special configuration.

Recommendations appreciated.

Thanks,
-Phil