Re: [lopsa-tech] Cloud Databases?

2010-10-07 Thread Nathan Hruby
On Thu, Oct 7, 2010 at 6:26 PM, Atom Powers  wrote:

>> You also mentioned file access time - could you describe that problem a bit 
>> more?
>
> Files are primarily served through either CIFS (samba) or HTTP (Web
> CMS) and files can be very large, up to 1GB. Rough estimates show that
> a 200ms latency increases the download time by up to 20%,
> proportionally larger for smaller files.

"CIFS" makes me think many of these things are internal facing apps?

Have you looked at putting WAN accelerators on the links in-between
sites?  At $lastjob we used Steelheads to turn cruddy T1s in the
middle of nowhere into respectable links, and for DS3s between major
sites they made CIFS traffic feel nearly next-door.  Short of that,
tuning/fixing MTUs, pMTUD, and VPN settings to cope with latency, and
dealing with bandwidth hogs, always tend to help.  For me, cloudifying
existing entrenched enterprise-level apps isn't my idea of fun, so I'd
ensure that the entire network path between $far and $near is as
optimal as possible before doing major surgery on the application
stack.
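
As a starting point on the MTU side, a quick hand-check of the path
MTU between sites might look like this rough sketch (assumes a Linux
host; $far_host is a hypothetical peer on the far side of the link):

    # Probe with DF set; 1472 bytes of payload + 28 bytes of headers = 1500
    ping -c 3 -M do -s 1472 $far_host
    # If that fails, step the size down until one passes:
    for size in 1472 1400 1300 1200; do
      ping -c 1 -M do -s $size $far_host >/dev/null 2>&1 \
        && echo "path MTU >= $((size + 28))" && break
    done
    # tracepath also reports the discovered path MTU hop by hop:
    tracepath $far_host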

After that I'd start looking for slack in the edges of the apps that
can be solved with CDN-like or proxy-like infrastructure at sites to
reduce calls across the long links.  It may be that the 20k of HTML
gets to the client very quickly, but the 75k of images in 4k chunks
takes forever to load.  (Note: some WAN accelerators will do this sort
of thing for you too.)
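
One cheap way to see where that kind of time goes is curl's timing
variables; a sketch (the URL is hypothetical) you'd run from both a
near and a far client and compare:

    url=http://cms.example.com/big-page.html
    curl -o /dev/null -s -w \
      'dns:%{time_namelookup} connect:%{time_connect} first-byte:%{time_starttransfer} total:%{time_total}\n' \
      "$url"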

Another way to solve this kind of issue for people in really, really
far-flung places is with Citrix/RDP/VNC/NoMachine, letting the user
remote in to a machine closer to the data.  These kinds of services
stream and compress pretty well, and for the office of users 6k miles
away who need to run some critical reports once a reporting period,
they can be a perfect fit.

-n


Re: [lopsa-tech] Recommendations for VPS providers

2010-08-10 Thread Nathan Hruby
On Tue, Aug 10, 2010 at 12:18 PM, Yves Dorfsman  wrote:

> You don't tell what OS you need?

Doh!  Yes, Linux would be ideal, though a Solaris zone may be doable
as well.  We're currently all Linux though and would prefer to keep it
that way if possible.

> I believe they are hosted in the US, I'm not sure if you don't want anything
> based in the US, or meant you are open to other locations.

We have two locations in the US presently, and would like 1-2 more US
locations (one on the East Coast and another perhaps in the Midwest) --
I think Linode is on our list, but I don't have it handy at the
moment.

Primarily, though, we're looking for hosts in non-US locations so we
can get a more accurate picture of site performance from a global
perspective.

Thanks!

-n


[lopsa-tech] Recommendations for VPS providers

2010-08-10 Thread Nathan Hruby
Hi,

We have a project whereby we'd like to stand up a number of small
virtual machines, geographically distributed around the globe.  On
these we would like to run some small, simple, custom monitoring
scripts to give us some very specific performance information.  We
might be able to use a 3rd-party monitoring provider with a suitable
amount of data manipulation to get their data into our format, but
frankly we have this code in-house already.  Having some shells
available in far-flung places is an added plus for diagnosis runs,
which makes rolling our own little monitoring solution seem like the
best option.
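
For flavor, the sort of probe script meant here could be as trivial as
this sketch (all names and paths are hypothetical):

    #!/bin/bash
    # Time an HTTP fetch from this node and append a CSV row.
    TARGET=${1:-http://www.example.com/}
    NODE=$(hostname -s)
    TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
    T=$(curl -o /dev/null -s -w '%{time_total}' "$TARGET")
    echo "$TS,$NODE,$TARGET,$T" >> /var/log/site-probe.csv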

We've looked at a few VPS providers and haven't really found anyone we
truly like, so I'm reaching out to the LOPSA community to see if
anyone has VPS providers they like and/or recommend.  At this point we
might be willing to engage a number of smaller regional providers to
get the best deal, versus one large global provider, so give a shout
out to your favorite non-US small VPS provider if you have one as
well!

Thanks!

-n


Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Nathan Hruby
On Fri, Jul 2, 2010 at 8:06 AM, Nick Silkey  wrote:
> tech@ --
>
> $work has many svn repos hosted atop direct-attached disk. As time
> rolls on, we are encountering space constraints where they are stored.
> Rather than take an outage to resize disk/move repos/etc where this
> could potentially happen again over time, we're looking to move this to
> our 3170 filers where disk is air (need more space?  *poof*).
>
> It appears fsfs-formatted svn repos are indeed NFS safe, but I wanted
> to ask the audience.  Anyone with experience doing this (good, bad,
> otherwise), let me know.  I welcome responses like 'yeah, it can be
> done.  i did it, but the performance stunk!' ... not just the simple
> 'yes' or 'no'.

We have many tens of thousands of svn repos on NFS, with no
NFS-specific tuning for the repo usage; mostly it's perfectly happy
stuff.

You can see performance issues with httpd+mod_dav_svn and large repos
(where large is both in size of files stored and in number of commits)
when your clients are separated from the server by slow or
high-latency links.  For example, "svn up" to pull 1,000 changes via
DAV on a client with a DSL link on the other side of the globe from
the server tends to work poorly, because the command channel times out
while other threads are doing pull operations.  If that sort of thing
is a big use case for you, I'd recommend looking into using svnserve
as well as, or instead of, DAV.
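
For reference, standing svnserve up alongside DAV can be as simple as
this sketch (the repo root and hostname are hypothetical):

    # Run svnserve as a daemon, rooted at the repository tree:
    svnserve -d -r /srv/svn --listen-port 3690
    # Clients then use the svn:// scheme instead of http://:
    svn checkout svn://svnhost.example.com/myrepo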

HTH,

-n


Re: [lopsa-tech] Help with Kickstart policy

2010-05-18 Thread Nathan Hruby
[I seem to have missed this thread until now]

On Tue, May 4, 2010 at 4:45 PM, Matt Lawrence  wrote:
> On to the issue.  I have finally gotten access to the kickstart files
> that are used to install most of the systems.  The one I am looking at
> right now is 1648 lines long, with about 1600 of them in the %post clause.
> I am of the opinion this is a bad idea, a kickstart shouldn't do much more
> than get a system up, running and able to talk to a configuration
> management system.  Naturally, there is no configuration management system
> and systems are left as initially installed for years.

ZOINKS!

Correct: kickstart puts bits on disk and that's about it.  %post is
for tweakage to make first boot successful and to kick off at boot
what you really need to happen.  It's not JumpStart, or any of the
other Solaris install options, which is what this kickstart setup
appears to be emulating.  There are some fundamental differences
between the two, and one would be wise to play to the strengths of
each system.

Searching around the Red Hat docs (possibly the anaconda README and
anaconda list as well) should reveal several dire warnings about
things not to do in %post (like patching), mainly because you're
merely chroot'ed into the installed environment.  You're still running
under a kernel that's very different from the boot kernel, and in a
device environment that may be very different from what you will see
at first boot.  Minimally, %post should write the bulk of those 1,600
lines into a script to do the config you need and trigger that script
to run at boot.
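
The shape of that pattern, as a minimal sketch (the service name,
runlevels, and script path are all hypothetical, and the details vary
by RHEL release):

    %post
    # Drop a one-shot init script; the real work lives elsewhere.
    cat > /etc/rc.d/init.d/firstboot-config <<'EOF'
    #!/bin/sh
    # chkconfig: 345 99 01
    # description: one-shot post-install configuration
    /usr/local/sbin/site-config.sh && chkconfig firstboot-config off
    EOF
    chmod 755 /etc/rc.d/init.d/firstboot-config
    chkconfig --add firstboot-config
    # site-config.sh is where the bulk of the old 1600 lines would go,
    # running under the real kernel at first boot.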

For the pragmatic future: once things are out of %post and running
under a real system, it becomes easier to break those 1,600 lines into
a number of smaller, independent, single-purpose scripts that are
easier to tune and SCM-ify.  After that it's not a large leap to
importing those smaller scripts into a configuration management system
as snippets.  That seems like a nice, sane way to bootstrap some
better CM tools into your environment.  Authoring configs with a nod
to the fact that Linux isn't the only UNIX also helps, as does proper
and honest monitoring :)

For selling your vision outward and upward, I think starting with some
of the concepts in the LISA '04 paper by Paul Anderson and Alva Couch
[1] might be useful.  I also suggest looking at the business to see
where people are having problems and experiencing pain, and addressing
those issues first, even if that's not the most efficient method
overall.  The goal shouldn't be to prove the other guy wrong or to
have things your way, but to make the business run better.

Frankly, and 100% IMHO, doing an end-run around entrenched culture
rarely works, and even more rarely works well for the runner.  Biting
your tongue to work your way in/up and attacking from the inside at
opportunistic moments, or playing a very Machiavellian game between
people and politics, seem to be your only options.  If neither of
those suits you or your personality, I suggest keeping your nose clean
and looking for other work.

-n

1 - http://www.usenix.org/event/lisa04/tech/talks/couch.htm



Re: [lopsa-tech] DNS proxy to execute suffix search order

2009-10-21 Thread Nathan Hruby
On Wed, Oct 21, 2009 at 8:16 AM, Jeremy Charles  wrote:
> I actually downloaded dnsmasq and read the docs for how to configure it, but 
> that feature doesn't seem to be in there.  One of my coworkers thought that 
> too, but when he also read the docs, he changed his mind.
>
> Can you provide some details to help show me what I'm missing?
>

See the -S switch, which allows specification of upstreams for a
particular domain, or series of domains:
- http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html

The FAQ also has a section on doing reverses:
- http://www.thekelleys.org.uk/dnsmasq/docs/FAQ
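
By way of illustration, a sketch of that usage (the domains and
addresses are hypothetical):

    # Route lookups for specific internal domains to specific upstreams:
    dnsmasq -S /corp.example.com/10.1.1.53 -S /lab.example.com/10.2.2.53
    # or, equivalently, in dnsmasq.conf:
    #   server=/corp.example.com/10.1.1.53
    #   server=/lab.example.com/10.2.2.53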

HTH,

-n


Re: [lopsa-tech] DNS proxy to execute suffix search order

2009-10-21 Thread Nathan Hruby
On Tue, Oct 20, 2009 at 10:49 AM, Jeremy Charles  wrote:
>
> I'm wondering if anyone is aware of software that acts like a DNS server, 
> accepting DNS queries from clients and then executing a DNS suffix search 
> order behind the scenes.
>

dnsmasq will do this on *nixy systems.  Not sure about Windows tools
that will, though I'm sure there's something somewhere.

-n


Re: [lopsa-tech] DNS functional testing

2009-10-20 Thread Nathan Hruby
On Tue, Oct 20, 2009 at 4:37 PM, Tracy Reed  wrote:
> But having learned from the past I am very afraid of taking on any
> such cleanup because that A record which everyone agrees isn't used
> anymore actually serves some hidden critical function.
>
> I am wondering if there are any tools out there which can make this
> easier. For example I am thinking that if I had a way to capture a
> month's worth of DNS traffic and then replay that against the new name
> server and make sure that any queries which returned responses on the
> old setup also returned the same responses on the new server that
> would make things much better.
>
> Does such a tool exist? Is this a good idea? Any better way?

Others have mentioned query logging, which is a great thing and should
help more quickly than packet captures.  When we had to attack this
problem at a previous job, we took an iterative, outside-to-inside
approach to cleaning things up in small(er), digestible chunks.  IIRC,
it roughly went like this:

1) Fix delegations of subdomains and your zone assignments with your
registrar.  Use a tool like zonecut's DNS Bajaj and other delegation
and zone checkers to find problems and get them addressed.

2) Fix secondary servers and monitor them to ensure zones are never
terribly behind.  Set up IXFR and NOTIFY, or take appropriate actions
with your infrastructure tools, to ensure secondaries are up-to-date
and redundant (eg: 3 nameservers in the same vmware cluster are not
redundant).

3) Use tools like BIND's named-checkzone, nslint, etc. to walk your
zones and address cruft and warnings (see the sketch after this list).
Chained CNAMEs, missing or incorrect glue records, missing or
incorrect reverses, inconsistent or misleading formatting, wrong
serials -- there's a ton of stuff that creeps in over time that can
and should be addressed.  Look for places where techniques like
RFC2317 classless delegation can help later on.  Dropping the zone
TTLs and RR TTLs before doing this work is possibly a good idea, so
problems pop up faster.

4) Do a quick pass to fix/remove the obvious broken things.  The MX
record that points at a dead/gone IP is already broken -- verify that
the IP really is dead and pitch the record.  The RP record that points
at your predecessor's predecessor needs updating.  You'll find a lot
of funk here, including dead/broken apps and configs that will need
fixing.  This will also raise a lot of questions; be sure to write
these down.

5) Fix up your zone maintenance tools.  If you need to do a huge
cleanup, then there are probably places in your tools that need some
love so these problems don't keep happening.  Minimally, integrate
some of the checking tools/processes you found/used in step 3 above,
so that changes are vetted as they are checked in or made, and
rejected if they don't pass the tests.  Optimally, ensure that as much
as possible gets auto-generated (reverses, serials, hostmaster
addresses, etc.).  Adding monitoring to capture data on query rates
and the like is also helpful, so you can correlate problems and get a
better idea of trending.  This is where we did our cutover from the
old DNS master server to the new one, mainly because the tool changes
were sizable enough to make in-place upgrades really hard, and we
needed to move to new hardware anyway.

6) At this point you should have a much better DNS infrastructure and
a lot less cruft and noise to cause false positives.  Raise TTLs to a
normal value again, enable query logging, and do a daily compare of
requests versus zone files to see what's not being used and can be
removed, and what you may be missing and should be added.  This should
also give you some data about setting up views and where they are
needed, moving recursive services, adding local caching name servers
on app servers, etc.

7) Purge, add, fix, enhance, etc... using the data/questions/ideas
from previous steps.
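
The step-3 zone walk mentioned above (which can double as the step-5
commit check) can be mechanized with something as small as this
sketch; the zone file layout is hypothetical:

    # Run named-checkzone over every zone file; flag failures.
    for zf in /var/named/zones/*.zone; do
      zone=$(basename "$zf" .zone)
      named-checkzone "$zone" "$zf" || echo "FAILED: $zone"
    done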

HTH,

-n



Re: [lopsa-tech] Is there an OpenLDAP Doctor in the house?

2008-11-26 Thread Nathan Hruby
On Wed, Nov 26, 2008 at 10:12 AM, Gilbert Wilson <[EMAIL PROTECTED]> wrote:
>
> I'm having a problem with persistent corruption in Apple's Open
> Directory.  I believe this corruption is related to OpenLDAP and the
> BerkeleyDB.  I was hoping that folks here might be able to help me
> track down whether this is the problem or not.

The last time I ran Open Directory (10.3-ish), passwords were stored
in a separate facility called PasswordServer, which LDAP, etc. used by
way of some Apple Magic Smoke.  If that's still how they do it, you
may want to check that PasswordServer is operating normally and that
its databases are fine as well.

Additionally, enable debugging in slapd, and check the expiration of
the SSL certs and SSL CA certs you're using with Open Directory, as
well as any replication setups and/or Kerberos setups you may have
created.
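
A sketch of both checks (the cert paths are hypothetical, and 255 is
just a conveniently chatty debug bitmask):

    # Run slapd in the foreground with verbose debugging:
    /usr/libexec/slapd -d 255
    # Check certificate expiry dates:
    openssl x509 -noout -enddate -in /etc/certificates/odserver.crt
    openssl x509 -noout -enddate -in /etc/certificates/odca.crt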

-n


Re: [lopsa-tech] Backup Systems Mashup?

2008-11-26 Thread Nathan Hruby
On Wed, Nov 26, 2008 at 8:38 AM, Elizabeth Schwartz
<[EMAIL PROTECTED]> wrote:
> Am I correct in thinking that amanda uses /etc/amandates, bacula uses
> /etc/dumpdates,  Legato uses its own database, and rsync checks the
> files on disk against the existing archive, so none of these systems
> would run interference with each other?

Bacula uses its own catalog, backed by a SQL database (MySQL,
Postgres), and figures out what needs to happen at job start time by
looking at the schedule, the job options, and the catalog (eg: if
there's no suitable Full found in the catalog it'll promote a job to
Full; if the option is set to do so, it'll try to grab a missed job of
a higher level; etc.).  I don't think it would cause problems with
other backup software.

> Am I missing any obvious good choices?

BackupPC: uses rsync or tar from disk-to-disk and "dedupes" identical
files across the store.  You may want to back up everything with
BackupPC and then just Legato the BackupPC store offsite.

Of course, this kind of defeats the purpose of outsourcing your
backups (eg: you still have to burn time, gear, and money on it).  I
would advocate working with the Legato team first to find a schedule
that's right for you -- you're a customer, and they should be able to
fit your needs, esp. if their service comes with a premium price tag.
Bringing in the person on your side who signs the checks might be
helpful too, if they are on the same page as you.

-n



Re: [lopsa-tech] Desireable pop up windows in *nix console sessions

2008-11-20 Thread Nathan Hruby
On Wed, Nov 19, 2008 at 11:45 PM, unix_fan <[EMAIL PROTECTED]> wrote:
>
> I'm not keen on creating a new service for something like this, but I 
> appreciate the idea. One guy suggested forcing every *nix user to log on to 
> Lotus Notes Sametime (maybe covers Linux, not sure if Solaris will work), but 
> I think that's someone being a wise guy, not serious.
>

Gaim/Pidgin has a Sametime client in it and AFAIK works on Solaris,
though I've never tried the Sametime client there.

-n


Re: [lopsa-tech] Small business backups over the internet

2008-11-18 Thread Nathan Hruby
On Tue, Nov 18, 2008 at 10:48 AM, Christophe Kalt <[EMAIL PROTECTED]> wrote:
> i'm used to doing backups internally, but i've recently been asked twice for
> backup recommendations for small business setups with relatively small data
> sets to backup.  There are quite a few options out there, does any one have
> any recommendation for this?  Things i've played with previously were more
> geared towards individuals than businesses.

The two commercial offerings I know about (but have not used) are:
- http://www.symantec.com/business/online-backup # they also seem to
have a plugin for BackupExec
- http://mozy.com/free?ref=451c76aa # this is an EMC hosted service

Zmanda seems to have S3 integration:
- http://www.zmanda.com/backup-Amazon-S3.html

Additionally, there are probably a few ways to get Bacula to write to
Amazon S3, though I don't see a definitive guide:
- s3pipe -> http://landonf.bikemonkey.org/code/s3
- posting about bacula's vchanger for s3 ->
http://www.mail-archive.com/[EMAIL PROTECTED]/msg27594.html
- script to dupe bacula file volumes to s3 ->
http://tracyreed.org/blog/archive/2007/06/28/s3-backuppy
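
The crudest version of the last idea is just copying closed file
volumes off with an S3 client; a sketch assuming s3cmd, a hypothetical
bucket, and a hypothetical volume path (no substitute for a real
vchanger integration):

    # Push Bacula file volumes to S3 after jobs complete:
    s3cmd sync /var/bacula/volumes/ s3://example-backup-volumes/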

Please let us know what else you find!

-n


Re: [lopsa-tech] Live Sync / Backup / Sync without crawling

2008-11-03 Thread Nathan Hruby
On Mon, Nov 3, 2008 at 8:34 AM, Yves Dorfsman <[EMAIL PROTECTED]> wrote:
> Edward Ned Harvey wrote:
>
>>> Hmmm... rsync is so efficient that I have to wonder what kind of
>>> extreme case would make this attractive. I'd be so afraid that one
>>> transaction
>>
>> For backup and redundancy purposes, I have an NFS server which I rsync to
>> local disk every night.  It takes approx 6 hours for 1TB over direct
>> attached GB.
>
> Just curious here: do you think it takes so long because you have a zillion
> files in there, or because it transfers a lot of data over a relatively slow
> link ?
>

The former.  I've had multiple experiences where an rsync of a large
FS on a busy or slow machine would die, or take out the machine,
before any actual data was transferred.  rsync has some scale issues
and isn't the appropriate tool for every situation.

-n


Re: [lopsa-tech] Live Sync / Backup / Sync without crawling

2008-11-02 Thread Nathan Hruby
On Sun, Nov 2, 2008 at 9:28 AM, Yves Dorfsman <[EMAIL PROTECTED]> wrote:
> Edward Ned Harvey wrote:
>>>
>>> http://code.google.com/p/lsyncd/
>>
>> Yup, that one fits the description.  It looks really cool!  :-)
>
> Hmmm... rsync is so efficient that I have to wonder what kind of extreme
> case would make this attractive. I'd be so afraid that one transaction get
> missed, and then because "notification" has been done, it would never get
> sync'ed again... That here and there, and over a long enough time period,
> you have two different file system.

I have the same fear as you, which may be mitigated by using a more
robust notification and transmission system on top of inotify (eg:
AMQP), though lsyncd looks like a winner too :)
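
A rough sketch of the inotify-driven approach, using inotify-tools
(paths and host are hypothetical; a real system would queue events
durably -- eg: via AMQP -- rather than piping, deletes need separate
handling, and this assumes the mirror's directory tree already
exists):

    # Watch a tree and push changed files to a mirror as they close:
    inotifywait -m -r -e close_write,moved_to,create \
        --format '%w%f' /srv/data |
    while read -r path; do
      rsync -a "$path" "remotehost:/srv/data-mirror${path#/srv/data}"
    done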

However, I can also see the utility of not using rsync in the case
where you're rsync'ing a fs with several million files -- rsync keeps
that file list in memory while building it, and you can hit some
ungood edge cases.  Additionally, rsync isn't exactly the right tool
for something closer to synchronous replication, because of the thrash
it can cause on a high-use fs.

Other options I can think of for a random "would like some kind of
replication thingy without thrashing my filesystem regularly" need:
- Your SAN probably has a replication engine; use that (for vast
quantities of random unstructured end-user data, this is probably the
best/easiest method)
- Chop up the rsync into smaller parts that can run in
parallel/different times/based on some other notifier (see the sketch
after this list)
- Replicated backups (eg: stick to your normal backup routine, and
clone/dupe/copy from the backup system)
- Append/update to a tar file, then sync that tarfile
- OS native-ish replication:
  - csync2 -> http://oss.linbit.com/csync2/
  - drbd -> http://www.drbd.org/
  - On Windows 2003R2 (and up), DFSR replicates based on the NTFS
journal and is much easier to use than FRS
  - FreeBSD has something called ggated (?)
- AFAIK, most of the varied "cluster filesystems" aimed at the HPC
crowd also offer replication/duplication of data for additional
throughput/redundancy
- Don't do that: put your data in a more structured container that has
replication (RDBMS, Hadoop, Hypertable)
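
A sketch of the "chop up the rsync" idea, splitting by top-level
directory and capping concurrency (paths and host are hypothetical):

    # One rsync per top-level directory, at most 4 at a time, so no
    # single run holds the whole multi-million-entry file list:
    cd /srv/bigfs || exit 1
    for d in */; do
      rsync -a --delete "$d" "remotehost:/srv/bigfs-mirror/$d" &
      while [ "$(jobs -r | wc -l)" -ge 4 ]; do sleep 1; done
    done
    wait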

I suspect that there's probably a different way to efficiently do
replication for every combination of data/OS/need out there, and a lot
of what one would need to do is look at the specifics of the situation
to figure out the best method.  Though, I guess that's true for
everything, isn't it?

> I have been using those scripts on my laptop, but I think eventually (once
> I've got the deletes worked out) I'll put that on all my machine, because
> then, it means that everybody can work while the server is down, it also
> means that I can suspend/hybernate desktop while not in use (hybernate and
> automount are NOT friends :-), etc...
>
> I've looked at CODA, but it was designed a long time ago, and does not work
> for today's sizes + their authentication mechanism is a headache.
>
> Anybody's been giving thought to this ?

Heh.  CODA.  Have you looked at unison?
http://www.cis.upenn.edu/~bcpierce/unison/  I suspect that its
multi-way merge is probably closer to what you want, if I understand
your use case correctly.
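
Basic unison usage is one command per pair of replicas; a sketch with
hypothetical paths and hostname:

    # Two-way sync between a local tree and the same tree on a server:
    unison /home/me ssh://server//home/me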

-n