Re: [lopsa-tech] The FPGA and the NFS mount: A tale of bursty writes

2010-09-23 Thread Daniel Pittman
Patrick Cable  writes:

> I have a device that sends out information at 4.7 Megabytes a second.
> I have a desktop that receives the data from this device that runs Red Hat
> Enterprise Linux 5.5. They are on the same switch, a 24-port Juniper EX2200.
>
> When I write the data to the desktop on the local filesystem, there's no
> dropped information. When I write the data to an NFS share, the device
> reports dropped packets.

[...]

> I find it hard to believe that a machine on the same (recent, gigabit)
> switch can't write out 4.7MB/sec. Am I wrong?

No, it should be fine doing that.  One test that might be illuminating, if
painful, would be to try running 'while sleep 1; do sync; done' while doing
the capture and see if that will smooth things out.

While I would hope more modern systems have improved things, back in the bad
old days when I was working with DV capture on Linux the system had plenty of
average bandwidth to write the stream, but would batch up work until the
write bursts blocked for long enough to drop frames.

That hack was my cheap test for confirming that having the kernel or the
application flush more often, and so lowering the peak write requirement,
would fix things.

If it does, having your application flush the output while writing should help
sort things out.  (Traditionally, fsync from another thread works...)
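
If the kernel end is easier to reach than the application, shortening the
writeback intervals should have the same effect.  A minimal sketch for a
2.6-era kernel; the values are illustrative, not tuned recommendations:

    # expire dirty pages after 1s rather than 30s, and wake the
    # writeback threads every 0.5s rather than every 5s
    sysctl -w vm.dirty_expire_centisecs=100
    sysctl -w vm.dirty_writeback_centisecs=50
    # start background writeback at a smaller fraction of RAM
    sysctl -w vm.dirty_background_ratio=1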

Regards,
Daniel


Re: [lopsa-tech] wifi - ethernet

2010-09-04 Thread Daniel Pittman
Andrew Hume  writes:

> for my home network, i want to connect my kids pc's to my home network. the
> transport from the access point to their pc's is wifi, but for
> unsatisfactory reasons, the wifi adapter cards for their pc's are a bust.
> so i thought to get a (unknown term - maybe bridge) that is a wifi at one
> end and ethernet at the other.
>
> is bridge the right term?

Yes.

> and can anyone specifically recommend a brand or disrecommend a brand?

Yeah: the Linksys WET54G is pretty good at 802.11[bg] bridging.  I think they
still mostly sell it to connect "game consoles" to the network, but it is an
Ethernet port and a wireless network *client* in a little box.

They work just fine for getting an Ethernet-only device attached to wireless
without much complexity, and given how commoditized this class of device is
these days, pretty much any vendor offering something similar would probably
be fine.

They are usually single Ethernet port devices, but you could attach a switch
without any huge hardship or anything.


Alternatively, USB wireless adapters usually work fairly robustly on Windows
or MacOS-X hosts, so one of those might be an alternative to investing in a
dedicated bridge device.

> i am contemplating a linksys wrt54gl

That is a wireless "Access Point" device, and I don't off-hand know if the
stock firmware can act as a client.  It probably can, though.

Daniel


[lopsa-tech] Splunk and CLI searching from workstations.

2010-08-30 Thread Daniel Pittman
G'day.

We have a Splunk 4.1 trial deployment that has proved successful enough that
users are asking us for more features.  The main one they are after is the
ability to perform a Splunk search and feed the results into a Unix pipeline
easily.

I know we can do that with the splunk binary, but installing Splunk on every
workstation just to get that feature seems overkill; does anyone have a simple
CLI search tool, probably against the REST API, that we can just pick up and
use?
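
For the record, the sort of thing I mean, as a rough and untested curl sketch
against what I understand the 4.x REST API to offer; the endpoint and
parameter names are from memory, so treat them as assumptions to verify:

    # run a search synchronously and stream the results back as CSV;
    # host, port and credentials are placeholders
    curl -k -u admin:changeme \
        https://splunk.example.com:8089/services/search/jobs/export \
        -d search='search index=main error | head 100' \
        -d output_mode=csv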

Regards,
Daniel


Re: [lopsa-tech] whole disk encryption

2010-08-25 Thread Daniel Pittman
Paul Graydon  writes:
> On 08/24/2010 02:25 PM, Doug Hughes wrote:
>>
>> You're right. I confused secure erasing (which no longer requires many
>> passes, even though it remains part of the common cargo-cult lore), with
>> recovery under normal circumstances. It is plausible, that normal
>> non-erased data could be recovered with a controller change on different
>> commonly used drive models of similar types.
>
> That one whizzed past my head with an audible whooshing sound.  Since when
> does secure erasing no longer require multiple passes?

The current Australian standards only require a single pass for disks made
after 2001, or larger than 15GB: an arbitrary pattern followed by read-back
verification.  (...or an approved degausser.)

This covers everything including top secret labeled storage; see §6.2.92 of
the ISM Sep 09, although I understand it has been this way for a while.

http://www.dsd.gov.au/library/infosec/acsi33.html
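
As a concrete illustration of what a compliant single pass looks like; the
device name is hypothetical, and the standard, not this sketch, is the
authority:

    # one pass of a fixed pattern over the whole device
    dd if=/dev/zero of=/dev/sdX bs=1M oflag=direct
    # read-back verification: cmp prints the first difference if any,
    # and just "EOF on /dev/sdX" once every byte has compared equal
    cmp /dev/zero /dev/sdX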

Daniel


Re: [lopsa-tech] whole disk encryption

2010-08-24 Thread Daniel Pittman
Doug Hughes  writes:
> Edward Ned Harvey wrote:
>>
>> The disadvantage of the HD pass is: You have to constantly enter the HD
>> pass.  Every time you power-on, or wake up.  The drive is not encrypted;
>> just locked.  Which means data could be recovered from it by disassembling
>> it, or maybe by swapping the electronic circuit.  Also, the HD pass would
>> be subject to a brute-force attack.  If you lose your password, there's
>> nothing you can do about it.
>
> Well, swapping the electronics will not help anybody. A modern drive is
> carefully factory calibrated with its particular electronics and heads.

er, are you aware that one of the recovery techniques used by the
companies that do this stuff is to swap the electronics between drives?

> There is nothing that you can swap between drives to make them useful.
> (in fact, it is said that breaking the electronics on a drive is as good as
> crushing the drive for data protection from all but those with large
> resources, advanced, expensive electron-microscope technology and a lot of
> time on their hands).

At least, last I checked they were going to charge me about $3,000 to do just
that in either Melbourne or Sydney, if they had appropriate hardware on hand.
More extensive recovery would be a little more, but not that much.

You /can/ make it harder, but either the vendors were lying to us, or this
isn't quite as delicate and fragile as you state here.

Daniel


[lopsa-tech] ATA over Ethernet experience...

2010-08-07 Thread Daniel Pittman
G'day.

I was wondering what sort of experiences y'all had with ATA over Ethernet in a
primarily Linux environment.  This would be operated as backing storage for
KVM based virtual machines, Linux host and guest.

We would be looking, at this stage, to attach the AoE devices to the host,
layer LVM on top of that, and then store the KVM guest disks as raw LVM
logical volumes.[1]

The deployment would be Gigabit Ethernet, typically using a dedicated SAN NIC
in each host — but we have a bunch of single NIC legacy hardware that I would
be loath to throw out just for this, so comments on the cost of sharing that
port with regular TCP would be interesting.


Anyway, I am interested in feedback in terms of:

Performance of the Linux ATAoE client for "extra" disk storage, rather than
storage needed to actually boot the system.  I am happy with a couple of local
disks for storing the OS.

Performance of the vblade and ggaoed software ATAoE target implementations,
hosted on Linux.  Backing would be solid local storage on LSI hardware RAID,
and currently these give great performance, so I am happy we have plenty of
IOPS and disk bandwidth to play with there.
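
For anyone unfamiliar with the software targets, the export side is
pleasantly small.  An untested sketch with hypothetical names, exporting one
LV as shelf 0, slot 1 over a dedicated NIC:

    # on the target: serve the logical volume over eth1 as e0.1
    vbladed 0 1 eth1 /dev/vg0/vm-disk
    # on the initiator: load the AoE driver and the device appears
    modprobe aoe
    ls /dev/etherd/e0.1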

Performance and manageability of the Coraid hardware — and, ideally, how well
it plays in a mixed environment alongside the software targets.


Comments are much appreciated.

Regards,
Daniel

Footnotes: 
[1]  This isn't set in stone, but is the model most easily integrated with
 both our current practice, and with our virtualization toolchain, plus it
 offers some advantages like live data migration with pvmove and friends,
 or encryption by layering dmcrypt over the LV.



Re: [lopsa-tech] Ticketing system

2010-08-03 Thread Daniel Pittman
Martin Markovski  writes:

> Looking for a ticketing system that is preferably open source.
> I've been looking into RT and OTRS but they were rejected as a solution.
>
> The requirements are the following:
> - ability to have multiple queues
> - have a web fronted where customers can create new and check the
>   status of their tickets
> - able to create reports based on time and % of answered/closed tickets
> - ability to interact with the tickets via email

FWIW, I would be really interested in hearing suggestions that meet these
needs, and also:

 - ability to have RT style security for users: if you don't have rights to
   the individual ticket and/or queue you don't get to even touch it.

Super-nice to have would be something that helped track project planning,
like an XP "velocity" style system, but hardly essential.

Daniel



Re: [lopsa-tech] disabling VT and for VT-d performance?

2010-07-21 Thread Daniel Pittman
Yves Dorfsman  writes:
> On 10-07-21 01:22 PM, Matt Simmons wrote:
>
>> I know that ESX(i) has a setting that allows you to take advantage of
>> hyperthreading if it's enabled in the bios. I haven't done any tests
>> to determine if it actually improves performance or not, but since
>> there's a setting for it, I'm blindly guessing that it doesn't hurt
>> (assuming you have the little box checked)
>
> Ah! I guess my original message wasn't clear.
>
> What I am trying to find out is, if you are NOT going to run anything
> virtual, and just install an OS (RHEL) directly on the hardware, no VM on
> top of RHEL, will it impact your performance, gain or loss, if you disable
> VT and VT-d?

No.
Daniel



Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Daniel Pittman
Matt Lawrence  writes:
> On Thu, 1 Jul 2010, Matt Lawrence wrote:
>> On Thu, 1 Jul 2010, Aleksey Tsalolikhin wrote:
>>
>>> Do you have HTTP pipelining enabled?  So you can download all 60
>>> images over a single TCP/IP connection?
>>>
>>> This is "KeepAlive On" in Apache httpd httpd.conf
>>
>> Good point.  I just asked and it is turned off.  Since Apache configs are
>> out of my jurisdiction, I have passed along the recommendation that it be
>> turned on.  Thanks for the pointer.
>
> I have done more research.  The reason it is turned off is that the number
> of unique visitors is so high that Apache winds up with too many idle
> threads eating up too much memory which also crashes the box.

Idle, as controlled by MaxSpareServers, or active, as controlled by
MaxClients or so?  In my experience it is usually the latter, but I have
watched a number of my staff spend inordinate amounts of time over the years
trying to tune the former to fix this.
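
For reference, the knobs in question live in httpd.conf; the values below are
illustrative only, but a short KeepAliveTimeout is usually what makes
KeepAlive affordable on a busy box:

    KeepAlive On
    # a small timeout bounds how long an idle kept-alive connection
    # can tie up a worker
    KeepAliveTimeout 2
    <IfModule prefork.c>
        MaxClients       256
        MinSpareServers  5
        MaxSpareServers  20
    </IfModule>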

Daniel



Re: [lopsa-tech] File access auditing

2010-06-23 Thread Daniel Pittman
Jonathan  writes:

> I need to provide a file server, with a good level of auditing.  At a
> minimum I need to audit file creation, file access, and file deletion.
>
> The clients will be Windows, so it would be easiest to offer the data as a
> CIFS share, though the PCs also have a Novell client on them. (The auditing
> needs to be done at the server end.)
>
> I am neutral about server OS.  Has anyone tried this?

Yes.  The Samba "VFS" plugin for auditing generated quite effective records,
which I was then able to read and report on easily enough with a fairly
trivial Perl script.

Which probably gives you some idea of the level of complexity of my reporting
needs; if yours are greater, this may be less desirable.  However, the Samba
facility was very effective for generating the audit records themselves.
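
For illustration, the configuration is roughly the following; the share name
and parameter values are from memory, so verify them against the full_audit
documentation for your Samba version:

    [data]
        path = /srv/data
        vfs objects = full_audit
        # prefix every record with who and from where
        full_audit:prefix = %u|%I
        # operations to log on success; 'none' keeps failure noise down
        full_audit:success = open write unlink mkdir rmdir rename
        full_audit:failure = none
        full_audit:facility = LOCAL5
        full_audit:priority = NOTICE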

    Daniel



Re: [lopsa-tech] ZeroMQ, STOMP, or...

2010-04-22 Thread Daniel Pittman
Tracy Reed  writes:
> On Thu, Apr 22, 2010 at 07:25:43AM +1000, Daniel Pittman spake thusly:
>
>> At work we are running into a need to deal with message passing in more
>> places, and I have a general policy of picking up existing, standard tools
>> rather than letting the developers invent their own messaging-on-HTTP
>> layer.
>
> This is an area I've sometimes wondered about ever since I heard about
> midleware years ago but never really understood what it did... What is
> message passing used for?
>
> Can you give me some general or generic examples of what sort of application
> would use message passing and how it is architected?

Andrew gave a good answer to some of this, but the things that I want to avoid
having to build ourselves are:

Message routing: having the developer stick an address on their message, or
subscribe to an address, rather than having to know which machine is handling
the content.

Message efficiency: for broadcast or publish/subscribe messages, aside from
the cost of having to implement that at all, being able to use the middleware
to send one copy of the message across the WAN rather than one-per-subscriber.

Message reliability: being able to provide a strong, well defined assurance
about how reliable delivery of a message will be and, when messages must be
persistent, avoiding developers best-guessing their own implementation.

Logging and metrics: being able to know what messages are moving, what volume,
how long they are in flight, or in the queue for processing, in a *standard*
way makes it easier for me to spot problems.


...and which applications benefit from this?  Anything that needs to
communicate between two or more machines *can* benefit, because the
developers don't have to invent their own mechanism; they just use the
standard one.

Daniel

Don't mistake a message queue system like this for something more than a low
level message passing tool, though: you can't deploy one and get a "batch
processing system" for free or anything like that.



[lopsa-tech] ZeroMQ, STOMP, or...

2010-04-21 Thread Daniel Pittman
G'day.

At work we are running into a need to deal with message passing in more
places, and I have a general policy of picking up existing, standard tools
rather than letting the developers invent their own messaging-on-HTTP layer.


So, at the moment ZeroMQ is looking like a pretty good candidate for what we
need, especially as it doesn't impose formatting on the messages — so we can
lay convenient formats over them easily.

The two questions I have are:

If y'all run ZeroMQ, does it perform as it says on the box, and is it
generally low maintenance and easy to troubleshoot?


Do any of y'all know of a STOMP "adapter" for ZeroMQ?  We are looking at
several promising tools built to target STOMP, and a bunch of other things
seem to like it ... but there is no native ZeroMQ support for it.[1]


Alternatively, do any of y'all have a better suggestion?  By "better" I mean
one that you have actually tested, compared to ZeroMQ, and found to be less
trouble overall.


Our preferred platform is Debian, but I am not especially worried by having to
package whatever tool we choose to the platform.

Daniel

Footnotes: 
[1]  Which makes perfect sense given their goals and the level they operate at.



Re: [lopsa-tech] code suggestion

2010-04-21 Thread Daniel Pittman
Adam Tauno Williams  writes:
> On Wed, 2010-04-21 at 08:30 -0400, Andrew Hume wrote:
>
>> i realise this is a tad off subject, but i am an optimist!
>
>> i am tired of reinventing a particular wheel, namely that of a library
>> supporting a server with multiple threads servicing multiple inbound and
>> outbound sockets.  the language needs to be C/C++ and it will run on redhat
>> linux.  i am considering the zeromq library.  can anyone recommend some
>> (hopefully public library) code to do this?
>
> doesn't libevent do this?

Yup.  I prefer libev[1], but there are any number of equivalent libraries for
doing the same thing.  Typically C libraries, but adding C++ isn't too hard.

> I'd suspect that something named zeromq is several tiers up from managing
> sockets.

A long way; another user recently spent some time asking the same questions
on the ZeroMQ list, and was told that ZeroMQ is great if you just want binary
messaging, but isn't going to play nice with exposing FDs to you; it owns
those and runs threads in the background to service them for you.

    Daniel

Footnotes: 
[1]  http://software.schmorp.de/pkg/libev.html



Re: [lopsa-tech] Windows SBS 2003 Lost 2 Months Of Data - Help!

2010-04-16 Thread Daniel Pittman
Richard Maloley II  writes:

> Earlier this week I went to a client’s business to finish some work on their
> server.  Something went horribly wrong and I could use some help figuring
> out what happened.
>
> Some background information first… This is a small business with about 15
> workstations and a single Server 2003 SBS installation. Their C: drive is a
> mirror of two 80GB drives using a Silicon Graphics SATA RAID card (add on
> card, not an original part of the server).

I assume you mean "Silicon Image", who make SATA fakeRAID cards, not Silicon
Graphics, here.  Given later comments about low-cost "RAID", though, I think
that is a pretty safe assumption.

[...]

> I unplugged all the SATA cables and put it all back together. I am fairly
> confident that the two drives for the OS were reconnected to the proper
> ports on the RAID card, otherwise I feel the RAID BIOS would have given me
> an error message.

You have a lot more confidence in the BIOS than my experience supports.

[...]

> I checked the event logs – same thing! Log files are all blank from
> 2/11/2010 until 4/14 /2010.

[...]

> It appears as though Windows lost two months of data. I’m at a loss as to
> how this could have happened.

Well, my guess is that you hit the same problem that I had a handful of
clients hit in the past:

The Windows vendor-supplied software RAID drivers for things like the Silicon
Image "RAID" controllers are terrible, and are prone to things like...

> The only initial thought is that I swapped the SATA cables on the OS mirror
> set… but it’s a mirror, so it should be in sync 100%, not 2 months behind!

...dropping a drive out of the array, so that you discover well after the
fact that one disk was not up to date.  Usually the system reboots and
decides to use the out-of-date disk for some reason — often a failure of the
other disk.

Worse, our experience is that the RAID BIOS or software RAID driver usually
ends up resynchronizing one disk over the other to "fix" the broken array
after a reboot, and it can pick the wrong source, overwriting the up-to-date
disk with the stale data.

> Since this is a consumer level card it has no usable tools or log files that
> I can see.  Swapping the cables again sounds dangerous to me – I don’t
> believe that it would be a safe thing to do.
>
> Has anyone heard or experienced anything like this?

Sure.  Several times, on which basis I pretty much started telling clients
that they had better maintain excellent, and well-tested, backups if they
intend to run critical systems on any of the low-end "RAID" hardware.

Intel were the least-worst vendor for software RAID solutions, but I have very
little faith in their tools.  At least they put a little more effort into
keeping them up to date than many of the other vendors.

> Does the community have any other options that I might be able to try?

You could *try* pulling the second disk and reading it in another machine, to
see if it contains the extra data, and then hand-integrate that back into the
primary device.

I wouldn't hold my breath, however: the odds are good that the controller
overwrote the second (good) disk with the old data, in my experience.

Daniel



Re: [lopsa-tech] Custom APT repository

2010-03-01 Thread Daniel Pittman
Ben Poliakoff  writes:
> * John Stoffel  [20100301 09:11]:
>> 
>> Skylar> Does anyone have any tips/tools for managing custom apt
>> Skylar> repositories?  Here's the requirements I have:
>> 
>> Skylar> 1. Take a list of packages and fetch all those packages plus their
>> Skylar> dependencies. I could download the entire mirror but that's a lot of
>> Skylar> disk space I'd rather not dedicate to the task.
>> 
>> Sounds like what you want here is the apt-cacher stuff.  Point all
>> your internal hosts at the cacher and let it fetch the .debs as needed
>> and cache them locally.  
>
> I've found it simplest to have one repo dedicated to an apt caching
> service (I use "approx"), and another separate one for locally generated
> packages, backports etc.

I can also recommend this strategy; having used a variety of options, I found
this the least trouble and most effective solution in the longer term.  While
my proxy of choice is 'apt-cacher-ng', any of them should be fine.

>
>> Skylar> 2. Maintain multiple architectures with one tool (currently i386 and
>> Skylar> amd64, but potentially more in the future).
>> 
>> Should be able to do this just fine.
>> 
>> Skylar> 3. Add custom packages easily.
>> Skylar> 4. Rebuild indices (e.g. Packages.gz and Sources.gz) automatically.
>
> Index rebuilds could be automatically triggered based on some sort of
> periodic find command using the Packages.gz as the timestamp file

For your own stuff, reprepro is the least-worst solution I have found.
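
A minimal sketch of the reprepro side, with hypothetical paths and package
names; it regenerates the Packages and Sources indices for you on every
include:

    # /srv/apt/conf/distributions
    Codename: lenny
    Architectures: i386 amd64 source
    Components: main

    # add a locally built package; indices are rebuilt automatically
    reprepro -b /srv/apt includedeb lenny mytool_1.0-1_amd64.deb
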
Daniel



Re: [lopsa-tech] Virtualization: Death of Commodity Hardware?

2009-09-19 Thread Daniel Pittman
Luke S Crawford  writes:
> Lamont Granquist  writes:
>> On Tue, 15 Sep 2009, Daniel Pittman wrote:
>
>> > For what it is worth, I have found that "container" solutions often scale
>> > better to the actual workload than "pretend hardware" virtualization
>> > does, because it shares RAM between the containers much more easily for
>> > these tiny, almost nothing, applications.

[...]

> A previous poster suggested os-level virtualization like OpenVZ; that is
> sort of a middle of the road solution between consolidating the old
> fashioned way and fully virtualizing.

[...]

> (personally, I prefer paravirtualization for this reason.  I host untrusted
>  users, and I don't mind throwing away some ram if it means I don't have to
>  worry about it when some joker decides to run mprime, or when someone tries
>  to run a giant webapp on the smallest plan.)

For what it is worth, having used Virtuozzo (the commercial parent of OpenVZ)
in an environment where we hosted untrusted users, I found it did just fine
in addressing these constraints.

I further understand that Solaris soft partitioning can achieve the same
results, in terms of guaranteed CPU, memory and other resource bounding.

So, even if OpenVZ itself does poorly here, I suggest that is an
implementation issue with the tool, not an architectural one.

Regards,
Daniel



Re: [lopsa-tech] SSD's - really any better performance?

2009-09-15 Thread Daniel Pittman
Edward Ned Harvey  writes:

> Today I poked around looking at specs of SSD’s and 7.2krpm SATA hard disks.
> I just sampled a bunch of whitepapers on various drives and averaged the
> results together.  Also, when there wasn’t an apples-to-apples measurement
> to compare, I had to calculate, as evidenced by the IOPS versus avg seek
> time.

[...]

> Write latency: SSD somewhat slower (12ms vs 8.5ms) (which I derived from 84
> IOPS and 8.5ms avg seek time)

Given the performance differences noted in reviews between the Intel and
non-Intel SSD implementations, I would be interested to know which vendor(s)
you were looking at here...

Regards,
    Daniel



Re: [lopsa-tech] Virtualization: Death of Commodity Hardware?

2009-09-14 Thread Daniel Pittman
Lamont Granquist  writes:

> Here we were back in 2001-2003 buying up cheap 1U dual proc ~4GB RAM server
> with a 1-4 drives in them and putting them all over the datacenter.  It all
> made a lot of sense in the push to having lots of smaller, cheaper
> components.
>
> Now with virtualization it seems we're back to buying big Iron again.  I'm
> seeing a lot more 128GB RAM 8-core servers with 4 GigE drops and FC
> attatched storage to SANs.
>
> Has anyone really costed this out to figure out what makes sense here?
>
> An awful lot of our virtualization needs at work *could* be solved simply by
> taking some moderately sized servers (we have lots of 16GB 4-core servers
> lying around) and chopping them up into virts and running all the apps that
> we have 4 copies of that do *nothing at all*.  Lots of the apps we have
> require 1% CPU, 1% I/O, 1% network bandwidth and maybe an image with
> 512MB-1GB or RAM (and *never* peak above that) -- and I thought the idea
> behind virtualization was to take existing hardware and just be more
> efficient with it.

For what it is worth, I have found that "container" solutions often scale
better to the actual workload than "pretend hardware" virtualization does,
because they share RAM between the containers much more easily for these
tiny, almost-nothing applications.

Daniel



Re: [lopsa-tech] UNIX/Linux Site Management Tool?

2009-09-05 Thread Daniel Pittman
"Michael D. Parker"  writes:

> I have just started a gig and have inherited a large collection of
> heterogeneous UNIX systems of the following flavors all running: hpux
> (11.11, 11.31), aix, mpras, sun solaris (sun 8 9 10), redhat (as3, as4, as5)
> , and suse (9 10 11). What would be the ideal from management's point of
> view is to have all of these systems configuration controlled and managed
> from hopefully one program. It is understood that each of these operating
> systems will have different base configurations. The items to be managed
> include patches, packages, and configuration files.
>
> I have just started looking at cfengine, and looking at some type of
> do-it-yourself hybrid using subversion.
>
> Management would prefer to use a commercial package if possible and was
> wondering if you all had any ideas of this type of application or vendors?

I would strongly suggest that, if you are looking at the open source options,
puppet is a vastly better choice than either of the two you listed.
(...and, yes, I /do/ still have the scars from using both in production.)

You can also purchase commercial support for puppet, although it doesn't have
the standard "giant company" attached that your management probably wants.
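
To give a feel for the model, a minimal manifest (resource names are
hypothetical), where the same declaration drives the appropriate package and
service providers on each of those platforms:

    # ensure ntp is installed, configured, and running everywhere
    package { "ntp": ensure => installed }

    file { "/etc/ntp.conf":
      source  => "puppet:///modules/ntp/ntp.conf",
      require => Package["ntp"],
    }

    service { "ntp":
      ensure    => running,
      subscribe => File["/etc/ntp.conf"],
    }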

Regards,
Daniel


Re: [lopsa-tech] Virtualization that doesn't suck

2009-09-05 Thread Daniel Pittman
Edward Ned Harvey  writes:

> I can’t believe, after all these years, and countless deployments, I’m still
> so dramatically dissatisfied with virtualization.  Wondering if anyone knows
> something that doesn’t suck.  I may have some incomplete or incorrect
> information, so please – comment your heart out.  ;-)
>
> It is worth note, that my target is small office, small business.  I’m not
> looking at any crazy awesome enterprise products.

You may find that investigating a "container" solution like OpenVZ brings you
more joy than pain for the use-case you are talking about: a single kernel
image, with each container looking enough like an independent OS that you can
reasonably deploy anything inside them.  I find it a vastly better solution
than "pretend hardware" or paravirtualized solutions for almost all of what
I need to do.

(*BSD Jails are more or less the same sort of solution, and Solaris has an
 equivalent, Zones, I understand, in recent releases.)
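
For a flavour of the day-to-day handling, assuming the OpenVZ tools and an OS
template cache are installed; IDs, addresses and the template name are
hypothetical:

    # create, configure, start and enter a container
    vzctl create 101 --ostemplate debian-5.0-x86_64
    vzctl set 101 --hostname guest1 --ipadd 192.0.2.10 --save
    vzctl start 101
    vzctl enter 101
    # resource limits adjust on the fly, no reboot required
    vzctl set 101 --cpulimit 25 --save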

[...]

> 2.   VMWare ESXi (free)
>
> a.   I have not yet actually used this, so tell me where I’m wrong.
>  Based on what I read on the internet …
>
> b.  You install bare metal.  It’s supposedly a version of RHEL.  So you
> must be using RHEL supported hardware in order to install.

That would be ESX; ESXi is much, much smaller and doesn't have a real OS in
the management partition, just the absolute bare-bones.  This means that
unlike ESX you *don't* get to install random (out-of-date) Linux software.

This bites when, for example, you can't get RAID hardware monitoring because
VMWare say the vendor needs to ship them a version of the monitoring tools
specific to ESXi, and the vendor doesn't care enough.  (*cough* Adaptec)

ESXi is small enough that some Dell hardware, and perhaps other vendors, ship
with it in firmware though, so a diskless ESXi server is completely possible.

> c.   The console client “vsphere client” is windows-only; you can’t
>  manage your VM’s from a mac or linux

There is a completely disastrous admin thing at the console, but this is more
or less true.

Regards,
Daniel


Re: [lopsa-tech] Experience with the Condor batch processing system on Linux.

2009-07-19 Thread Daniel Pittman
Richard Chycoski  writes:
> Narayan Desai wrote:
>> On Thu, 16 Jul 2009 12:16:14 -0400 Doug Hughes wrote:
>>
>>   Doug> Narayan Desai wrote:
>>   Doug> > On Thu, 16 Jul 2009 11:15:48 -0400 Edward Ned Harvey wrote:
>>   Doug> >
>>   Doug> >   Ned> > I am interested in soliciting experiences deploying,
>>   Doug> >   Ned> > using and maintaining the Condor batch processing
>>   Doug> >   Ned> > system, especially under Linux / Debian.  Our use
>>   Doug> >   Ned> > would predominantly be many small jobs, rather than a
>>   Doug> >   Ned> > few large jobs, with runtimes measured in a few hours.
>>   Doug> >   Ned> > Probably only a handful of nodes, on the order of half
>>   Doug> >   Ned> > a dozen, in total.[1]
>>   Doug> >
>>   Doug> >   Ned> I don't know anything about condor, or torque.  The
>>   Doug> >   Ned> obvious choice to me would be SGE.  I wonder what
>>   Doug> >   Ned> advantage there is to using something other than SGE?
>>   Doug> >
>>   Doug> > Well, the area where condor is pretty much the undisputed king
>>   Doug> > is in the scavenger arena. The basic idea is that you could
>>   Doug> > deploy condor on top of your regular desktops and jobs would be
>>   Doug> > deployed to use wasted cycles (during idle periods or on a set
>>   Doug> > schedule, etc).  -nld
>>
>>   Doug> Doesn't it also excel at the whole state/migration thing? E.g.
>>   Doug> you can take a node out for maintenance and migrate a running job
>>   Doug> off to another node by saving the memory state and performing the
>>   Doug> migration and then resuming the job. (May only work for some job
>>   Doug> configurations)
>>
>> So I hear. I don't have any direct experience with the
>> checkpointing/migration stuff. I gather they are starting to use VMs for
>> this sort of thing as well as library-based checkpointing.
>
> This depends on the purpose of the batch jobs. If you're looking for simple
> load sharing/cloud computing, we've used LSF in our engineering environment
> for a long time.

Thanks.  With the number of recommendations I will definitely take a closer
look at the facilities and cost of LSF — though I fear our budget won't
stretch that far, so a "free" starting point will likely be the solution.

> It has the option of consuming unused desktop cycles, but we found this to
> be unreliable and problematic - not because LSF was bad, but because
> individuals had messed around with their desktops in such a way as to mangle
> any jobs distributed to them.

*nod*  Even with Condor I would be looking to deploy on semi-dedicated server
hardware, not end-user machines, so while they may also have other load it
would be fairly predictable.

[...]

> I work in a group whose main purpose is to provide automation, especially
> for the batch processing environment at $WORK. You're welcome to ping me -
> here on the list or privately - if you would like more help.

Thank you; I appreciate the offer.  At this stage it looks likely that Condor
will be the tool of choice, and I will be looking to deploy a small trial
cluster in the near future.

At least this new environment adds variety and spice to the job. ;)

Regards,
Daniel


Re: [lopsa-tech] Experience with the Condor batch processing system on Linux.

2009-07-19 Thread Daniel Pittman
Richard Chycoski  writes:
> Edward Ned Harvey wrote:
>>
>> By default, it will distribute jobs every 15 sec, but it can be configured
>> down to 1sec.  So if you have jobs of 2sec, the efficiency might not be
>> great.
>>
>> You can have job dependencies, but I'm not sure how much knowledge it has
>> that's relevant to your situation.  If you submit a job, let's say jobid is
>> 500, and you submit another job, let's say 501, you can make job 501 depend
>> on 500.  There isn't any conditional detection of "pass/fail" on 500
>> ... just a scheduling delay to ensure 501 runs after 500.

*nod*  That is actually a pretty big strike against SGE compared to Condor,
which at least offers "success before the next job" level dependencies without
an additional intelligent scheduler we would have to write.

I think most of our jobs would be pretty independent, but at least a couple of
the current uses do have dependencies internal to them.
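
For the archives: Condor expresses those through DAGMan.  A sketch with
hypothetical job names, where 'report' only starts once 'extract' has exited
successfully:

    # pipeline.dag
    JOB extract extract.sub
    JOB report  report.sub
    PARENT extract CHILD report

    # submit the whole DAG; DAGMan handles the ordering and retries
    condor_submit_dag pipeline.dag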

[...]

> The comment above about 'seconds' and 'distribute' is important. A typical
> load sharing system is working hard to spread work across all of the
> machines that it 'owns'. Typical uses are software builds. For the second
> comment - builds have few dependencies - these are usually resolved in a
> 'make' or similar process.

Mmm.  I think our use case is more firmly in the "load sharing" camp, which
is terminology I had not run into before this.  Thank you for the pointer.

[...]

> If you need load sharing, it would seem that systems like SGE and LSF are
> what you are looking for. If you need batch processing, Autosys, BMC, Orsyp,
> Tidal, and Tivoli are the places to look for answers. These are the
> commercial (or near-commercial solutions), I haven't worked with the Open
> Source alternatives in this field - which for batch processing is common
> since most companies want some company 'on the hook' if their batch
> processing system (usually inextricably linked to their 'bread-and-butter')
> fails. For load sharing, I'd certainly look over open systems like SGE.

Well, this work is to a reasonable degree our "bread and butter", but not at
the level where it would be easy to get much funding behind this.  So, open
source alternatives it probably is.

Regards,
Daniel


Re: [lopsa-tech] Experience with the Condor batch processing system on Linux.

2009-07-16 Thread Daniel Pittman
Edward Ned Harvey  writes:

>> I am interested in soliciting experiences deploying, using and maintaining
>> the Condor batch processing system, especially under Linux / Debian.
>> 
>> Our use would predominantly be many small jobs, rather than a few large
>> jobs, with runtimes measured in a few hours.  Probably only a handful of
>> nodes, on the order of half a dozen, in total.[1]
>
> I don't know anything about condor, or torque.  The obvious choice to me
> would be SGE.

OK.  I have not looked at that previously, but will look into it now.

So, you presumably have experience running the Sun Grid Engine[1]; how does it
stack up in the scenario that I outlined?

> I wonder what advantage there is to using something other than SGE?

My question was based on ignorance of the tool, actually, so I have no
particular opinion on that question.

Regards,
Daniel


Footnotes: 
[1]  ...assuming that is the SGE you mean here.



[lopsa-tech] Experience with the Condor batch processing system on Linux.

2009-07-15 Thread Daniel Pittman
G'day.

I am interested in soliciting experiences deploying, using and maintaining the
Condor batch processing system, especially under Linux / Debian.

Our use would predominantly be many small jobs, rather than a few large jobs,
with runtimes measured in a few hours.  Probably only a handful of nodes, on
the order of half a dozen, in total.[1]


My key concerns are:

1. How stable is Condor on Linux, and especially Debian?

2. Is it reasonably easy to manage over time, especially when software
   upgrades are required to Condor, or to the underlying platform?

3. We presently have Etch on most of our systems, and are looking to migrate
   to Lenny in the future; is mixing underlying distributions going to give
   Condor heartburn?

4. How efficient is it when used for jobs with processing requirements ranging
   from a few seconds[2] through to a few hours?  Are we going to need to put
   in substantial work to avoid running tiny jobs through Condor, and only
   push the big ones out?

5. How do you find Condor for dependent jobs, both from the point of view of
   one job spawning multiple subsequent (or sub) jobs, and from the point of
   view of "dependency based" solutions: deliver result X, where X requires Y
   first, and Y requires A and B, etc.


I am also interested in experiences with torque, on the same sort of criteria,
and any suggestions y'all might have for other tools that merit consideration.


Oh, the jobs are very mixed: from "mail merge" type applications, through to
sequential and branching data processing for reporting, through to data
validation.

We do have a mixture of tools, languages and data I/O requirements for the
applications, so something that requires rebuilding these into something
other than Perl is unlikely to happen; on the other hand, being tied only to
Perl would also be a negative.

Regards,
    Daniel


Re: [lopsa-tech] Help with bastion host

2009-05-16 Thread Daniel Pittman
Lois Bennett  writes:

>> Are you absolutely sure you don't want to forward port 22/tcp to the
>> inside machine, and so make your system a tiny bit simpler?
>
> I am not sure.  The idea is to protect the inner system.

I strongly suspect that you will get absolutely no improvement in
security from the bastion host in this case, but you obviously need to
do your own risk analysis to be confident.

> It may be that a simple port forwarding would accomplish that but I am
> not sure I can convince my boss.  If I were to do a simple port
> forwarding this bastion machine would only have port 22 open to the
> outside world and then a port to the inner system.

*nod*

> A user will not login to it but only connect to it.  I will look into
> port forwarding. Thanks

No worries.  My logic, if it helps, is:

If your bastion host is blindly forwarding these connections on no
stronger authority than a user logging in, and if it doesn't impose
stronger security than the inner host, you gain nothing.

If one of those conditions isn't true then you /might/ gain something, but it
probably isn't worth the risk and cost of another host to manage. :)

>> In any case, can you explain what isn't working?  "being
>> recalcitrant" isn't the most descriptive failure in the world, and
>> the examples in the manual page are fairly straight forward for
>> running commands...
>>
> Sorry, I have copious debug output but I hesitated to put that in since
> I was really looking for pointers to online guides.  I did see lots of
> good examples in the man pages and other places, all for commands.

*nod*  That is a fair call.  If you keep having trouble, though, more
detail is probably helpful. :)

[...]

> Thanks, Daniel, this was helpful.  I will go and amend something and
> if it doesn't work I will send some more details.

No worries.  Good luck.

Regards,
Daniel


Re: [lopsa-tech] Help with bastion host

2009-05-15 Thread Daniel Pittman
Lois Bennett  writes:

> I need help with setting up a bastion host that will only allow users
> to ssh through.  I know I should use the force command option in the
> sshd_config file but it is being recalcitrant.  Can anyone point me to a
> good tutorial on setting this up.  I keep finding info about how to
> set up ssh tunneling for personal use but not how to set it up as the
> server default.  The goal is a machine in the DMZ that users ssh into
> which does nothing but ssh them into the login server inside the
> firewall.

Are you absolutely sure you don't want to forward port 22/tcp to the
inside machine, and so make your system a tiny bit simpler?

In any case, can you explain what isn't working?  "being recalcitrant"
isn't the most descriptive failure in the world, and the examples in the
manual page are fairly straight forward for running commands...

My guess is that you are setting the forced command to 'ssh ...', which is
failing because it doesn't have access to the user's public key, and/or
because it doesn't have access to a pty, but guessing is ...
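
For what it is worth, the shape I would expect to work, assuming OpenSSH 4.4
or later for the Match block; host and group names are placeholders, and the
second hop still needs keys or agent forwarding available:

    # sshd_config on the bastion
    Match Group jumpusers
        ForceCommand ssh -qt login.internal

    # or skip the forced command and do the hop client-side, with nc
    # installed on the bastion:
    #   ssh -o 'ProxyCommand ssh bastion.example.com nc %h %p' login.internal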

Regards,
Daniel


Re: [lopsa-tech] Linux 32 vs 64 BIT and the future of 32 BIT

2009-03-16 Thread Daniel Pittman
da...@lang.hm writes:
> On Tue, 17 Mar 2009, Brad Knowles wrote:
>> on 3/16/09 11:20 PM, da...@lang.hm said:
>>
>>>> It depends on what you're doing.  Some applications have not yet been
>>>> ported to 64-bit, and may not run correctly on a machine that has a
>>>> 64-bit kernel.  Other applications may run better in 64-bit mode.  You
>>>> need to know your specific application.
>>> 
>>> do you have any specific examples? the last userspace program that I
>>> ran into with this sort of bug was the ipchains binary, and that was
>>> fixed several years ago.
>>
>> I've heard of no end of problems with precompiled binaries provided
>> by vendors for things like Flash, Nvidia drivers, etc

The binary video drivers are now as stable on 64-bit as the binary video
drivers ever are, which is to say that they are relatively reliable and
cause no more failures or corruptions than the 32-bit versions.

> using a 32 bit flash as a plugin to a 64 bit browser is an issue. that
> issue ended up getting resolved by people figuring out that you could
> use ndiswrapper to run flash (and several distros do this by default
> now).

You mean "nspluginwrapper"; ndiswrapper is a "Windows wireless NIC
driver" wrapper that runs inside the Linux kernel.

[...]

> there is a 64 bit flash in alpha status right now. I'm running it on
> my laptop and it crashes once in a while, taking the browser down with
> it.

The most recent nspluginwrapper versions include a "native" mode, where
they run the 64-bit flash plugin out-of-process, so a crash doesn't take
Firefox with it.

This is the model Opera have used for years and, for that, it works
reliably.


That said, the 64-bit Flash requires a CPU with lahf_lm support since
their native code JIT engine generates the extended instructions without
regard to the actual capabilities of the CPU.
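
That, at least, is easy to check before rolling it out, since the flag shows
up in the CPU capability list:

    # lahf_lm appears in the flags field on CPUs that support it
    grep -q lahf_lm /proc/cpuinfo && echo supported || echo unsupported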

So, you know, not all that crash-hot yet, really, in terms of platform
support.

[...]

Overall, 64-bit Linux on the server seems fine, and on the desktop is a
PITA until you find the combination of features that happen to work
... and never upgrade again.

Maybe next year it will be ready to face the end user reliably.

Daniel


Re: [lopsa-tech] something like getdropbox

2009-01-29 Thread Daniel Pittman
Tom Limoncelli  writes:
> On Wed, Jan 28, 2009 at 10:53 PM, Edward Ned Harvey  
> wrote:

[...]

>> Does anybody know of anything like getdropbox, which you can install on your
>> own server and maintain yourself?
>
> If you have Windows or a Mac, your SAMBA or WebDAV servers can all be
> mounted as a disk.  I'm sure there are similar things for Linux boxes
> (oh wait... NFS would do the trick, right?)

davfs2 or fusedav should work on Linux (and, I believe, FreeBSD[1]), to
provide a mounted WebDAV filesystem just like the other platforms.

Which is to say that I wouldn't try to use the Windows WebDAV redirector,
and I probably wouldn't try those in production either, even if they were
both vaguely successful in my personal trials of 'em.
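
A sketch of the davfs2 route, with a hypothetical URL and mount point:

    # one-off mount of a WebDAV share
    mount -t davfs https://dav.example.com/files /mnt/dav

    # or in /etc/fstab, so unprivileged users can mount it themselves
    https://dav.example.com/files  /mnt/dav  davfs  rw,user,noauto  0  0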

Regards,
Daniel

Footnotes: 
[1]  http://fuse4bsd.creo.hu/



Re: [lopsa-tech] how to tell what's chewing up CPU time on a Linux system... what's my top system call?

2008-12-01 Thread Daniel Pittman
[EMAIL PROTECTED] writes:
> On Mon, 1 Dec 2008, Aleksey Tsalolikhin wrote:
>
>> Hi.  I've got a Linux system that is spending more CPU time in system
>> than in user.  sometimes more than double.
>>
>> I'd like to see what is the system call(s) that is chewing up the CPU.
>>
>> Is there any way to do that in Linux out of the box?  I am afraid the
>> answer is no but I am hoping it's just a gap in my knowledge.  Are
>> people using SystemTap?  I really don't want to install some
>> experimental kernel module...
>
> OProfile is one way to tell what system calls are being made. it's
> part of the standard kernel (although your distro may or may not have
> enabled it by default)

I have, previously, found this very valuable.  It takes a bit of setup
effort, but can apply across more or less anything, and gives good
performance details.
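
For a quick first look, strace's counter mode is the real out-of-the-box
answer, and the OProfile incantation is only slightly longer; the PID is a
placeholder:

    # per-syscall counts and system time for a running process
    strace -c -p 1234

    # system-wide profile, kernel included
    opcontrol --no-vmlinux    # or --vmlinux=... for kernel symbols
    opcontrol --start
    sleep 60
    opcontrol --shutdown
    opreport --symbols | head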

To my surprise, I have also found the 'powertop' tool valuable for finding
out what is causing activity on the system, and reasoning from that about
where the load is coming from.

[...]

> what's your disk activity like? that can eat a _lot_ of CPU time.

My first guess was a PIO disk or similar, also.

Daniel


Re: [lopsa-tech] Live Sync / Backup / Sync without crawling

2008-11-02 Thread Daniel Pittman
Yves Dorfsman <[EMAIL PROTECTED]> writes:
> Edward Ned Harvey wrote:
>>> 
>>> http://code.google.com/p/lsyncd/
>> 
>> Yup, that one fits the description.  It looks really cool!  :-)
>
> Hmmm... rsync is so efficient that I have to wonder what kind of
> extreme case would make this attractive.

Anything where the cost in memory, IOPS or other resources was
undesirable?  Seriously, rsync has some small scaling issues, even if
the rsync 3 release helps with some.
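
The crawl is the point: a pass like the one below walks the whole tree every
run, so the cost grows with file count even when almost nothing has changed,
which is exactly what an inotify-driven tool like lsyncd avoids:

    # full-tree synchronisation pass, repeated from cron or a loop
    rsync -a --delete /data/ backup-host:/data/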

[...]

>> I'm not trying to solve any particular problem specifically.  This is
>> really for the sake of discussion and understanding of what new
>> technologies are out there, for possible future use.
>
> One thing I am playing with is disconnected filesystems, and right now
> nothing beats rsync...

The revolution of lowered expectations wins again. ;)

[...]

> My intention is to write a fuse module to keep track of deletes and
> apply them automatically.

You might find it more useful to consider GlusterFS, implemented with FUSE,
which already does what you are talking about:
http://www.gluster.org/

> I have also been thinking of using git, but that would be quite
> involved (every machine is a branch, give the user the possibility to
> merge the branch at a latter time).

That would require some significant changes to git to allow pruning
revisions out of the history, or manual commits, either of which has
some nasty failure modes, I would have thought.

Regards,
Daniel