On Tue, Jul 12, 2011 at 7:47 PM, Mike Meyer <m...@mired.org> wrote:
>> Another misunderstanding. With many developers working at one
>> physical, co-located computer, the keyboard and monitor act as "a
>> single global lock". In the terminal server case there could be finer
>> locking granularity. As for "still developed and used", what for?
>
> What makes you think a computer can have only a single
> keyboard/monitor?

And yet another misunderstanding. If it has more, it's a terminal
server of some sort rather than just having its one local console. And
there's that "s" word again. :)

> But this is all irrelevant

Hardly.

> from the point of view of an application, it doesn't make any difference if
> someone is issuing commands from a device directly connected to the
> hardware, from a device connected to a terminal server, or sitting at
> a second computer and connected back to the single computer where the
> work is being done.

The application's perspective is what's irrelevant. It just means the
sysadmin has to secure a telnetd/sshd/whatever rather than a version
control server.

> Which is why such things are still being developed. There's
> fundamentally no difference between many developers running commands
> on the single computer to manipulate the data and many developers
> running clients that talk to apache running on that single computer
> and causing it to issue those commands.

Well, actually, there is. You see, when using a terminal server to
talk to a central unix box, a) all of the CPU and memory resources for
separate editing sessions, as well as those for file access and doing
the version control work (if any), are consumed on the server, and b)
all of your developers are stuck using vi and/or emacs in text mode on
a crummy little 80x24 display.

When using regular version control servers, on the other hand, a) the
server is only responsible for the CPU and memory needed for file
access and version control work, while editing and everything else
runs on the developers' own desktop machines, and b) the developers
get to use whatever editors, IDEs, and whatnot they're most
comfortable with, GUIs included, so they have far more choice.

The security situation is also starkly different. There's a server to
secure in either case; but in the first case you also have a bunch of
people running around with shell accounts on the server, whereas in
the second case their interactions with the server are far more
constrained. On the other hand, you can more tightly control the
source code in the first case -- all other things being equal (say
the server, and the only machines that can talk to it, all sit in a
locked-down part of a military base), using dumb terminals instead of
smart desktop computers means the source code isn't copied onto a lot
of general-purpose computers but stays on just the one. Spying means
actually taking pictures of the terminal screen while scrolling,
rather than just smuggling in a thumb drive or something; and the
server can sit in an even more tightly locked-down room, with even
fewer people having access to it than to the terminals.

So, for the standard threat model (this big open source project is a
likely target for hackers out to bring the web site down for kicks or
to maliciously sneak bad stuff such as security holes into the product
we're building; we have thousands of potential developers and can't
vet them all; and we're operating on a budget), remote, smart clients
and a dumb server work better. For the military-paranoia threat model
(Iran will try very hard to get ahold of the source code for our
next-generation Predator drones; we have 18 developers, all unix
wizards with top security clearance; and we have thirty billion
black-budget Pentagon dollars to spare on beefing up the server to
handle high loads), dumb terminals and a smart server work better,
given that the whole lot sits on a closed, local network behind
physically locked and guarded doors.

>> Which means it's not really case 4 at all.
>
> Well, it's very clearly not cases 1, 2 or 3.

No, it's case zero: standard multi-developer, multi-computer, single
canonical master copy on one computer/cluster somewhere. The thing
cases 1 through 4 were *alternatives* to.

>> Except that it has an official build repository with more stringent
>> criteria for what gets in there, so not really.
>
> Half right.

All right.

> As I said, it's got one repository that the official
> builds come from. Other people are free to use builds from their own
> repositories, and often do

Same as any case-zero, open source development effort.

> I don't think any of the GNU/Linux
> distributions actually use binaries built by Linus. Instead, they each
> have their own "master" repository from which they do their "official"
> builds.

Forks, each one its own example of case zero.

> However, the criteria for what gets into that so-called "master"
> repository are no more stringent than for any other repository in the
> project: only patches the owner wants get in.

Technically true, but meaningless. The master gets tens of zillions of
submissions, versus next to none for the typical random Linux hacker
who has his own repository of the kernel code. Its owner has to be
proportionately more selective just to avoid spending 27 hours a day
patching the kernel, with minus three left over for eating, sleeping,
and all of that stuff. :)

-- 
Protege: What is this seething mass of parentheses?!
Master: Your father's Lisp REPL. This is the language of a true
hacker. Not as clumsy or random as C++; a language for a more
civilized age.
