Re: [9fans] security questions

2009-04-17 Thread Bruce Ellis
As another data point I'll offer IW9P2009-Bondi - involved a lot of
beer and beach/camping but we wrote a shit-load of code. And it was
fun. Not much sleep. Had to eat too, but time-sharing coding and
cooking went well.

brucee

On Fri, Apr 17, 2009 at 3:52 PM, andrey mirtchovski
mirtchov...@gmail.com wrote:
 5. No code is ever implemented by anyone

 extremely efficient, from a SLOC point of view, no?

 it  also leaves a lot of time for drinking belgian beer, which is nice.





Re: [9fans] security questions

2009-04-17 Thread Eris Discordia

Plan 9 itself makes a great platform on which to construct
virtualisation.


I don't know what Inferno is but the phrase 'virtual machine' appears 
somewhere in the product description. Isn't Inferno the 'it' you're 
searching for?


--On Friday, April 17, 2009 6:48 AM +0200 lu...@proxima.alt.za wrote:


One can indirectly (and more consistently) limit the number of
allocated resources in this fashion (indeed, the number of open file
descriptors) by charging the memory consumed by that resource in
proportion to the resource's size. If I as a user
have 64,000 allocations of type Foo, and struct Foo is 64 bytes, then
I hold 1,000 Foos.


And by this, I clearly mean 64,000 bytes of allocated Foos.
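
A minimal sketch of this byte-quota idea in portable C (the User
struct, uallocb/ufreeb and the 64,000-byte ceiling are all invented
for illustration; this is not Plan 9 kernel code):

#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical per-user accounting: every allocation is charged
 * against the owning user's byte quota, so the number of objects a
 * user may hold follows from their sizes rather than from a
 * per-type count.
 */
typedef struct User User;
struct User {
	char	*name;
	size_t	used;	/* bytes currently charged */
	size_t	quota;	/* byte ceiling */
};

void*
uallocb(User *u, size_t n)
{
	void *p;

	if(u->used + n > u->quota)
		return NULL;	/* over quota: refuse the allocation */
	p = malloc(n);
	if(p != NULL)
		u->used += n;
	return p;
}

void
ufreeb(User *u, void *p, size_t n)
{
	free(p);
	u->used -= n;
}

int
main(void)
{
	User u = { "glenda", 0, 64000 };
	int nfoo;

	/* with 64-byte Foos, the quota admits exactly 1,000 of them */
	for(nfoo = 0; uallocb(&u, 64) != NULL; nfoo++)
		;
	printf("%s held %d Foos before hitting the quota\n", u.name, nfoo);
	return 0;
}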


Purely from a spectator's perspective, I believe that if one needs to
add considerable complexity to Plan 9 in the form of user-based kernel
resource management, one may as well look carefully at the option of
adding self-virtualisation to the Plan 9 kernel and manage resources
in the virtualisation layer.

Plan 9 has provided a wide range of sophisticated, yet simple
techniques to solve a wide range of computer/system problems, but I'm
of the opinion that it missed virtualisation as one of these
techniques.  I may be dreaming, but I've long been of the opinion that
Plan 9 itself makes a great platform on which to construct
virtualisation.

++L










Re: [9fans] security questions

2009-04-17 Thread Richard Miller
 having the potential for running out of memory in an interrupt
 handler might be a sign that a little code reorg is in order, if you
 are worried about this sort of thing.  (and even if you're not.)

To begin with:

   grep -n '.((iallocb)|(qproduce))' /sys/src/9/^(port pc)^/*.c

(in rc, ^(port pc)^ distributes the concatenation, so this searches
/sys/src/9/port/*.c and /sys/src/9/pc/*.c for the calls to iallocb
and qproduce that can allocate at interrupt time.)




Re: [9fans] security questions

2009-04-17 Thread lucio
 I don't know what Inferno is but the phrase 'virtual machine' appears 
 somewhere in the product description. Isn't Inferno the 'it' you're 
 searching for?

No, Inferno resembles - very superficially, as you will discover if
you study the literature - a Java interpreter surrounded by its own
operating system.  There are so many clever things about Inferno, it
is hard to do it justice.  But it is not a virtualiser.  More's the
pity, of course.  A virtualiser with Inferno's good features would be
a very useful device.

Actually, I have long had a feeling that there is a convergence of
VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
Lguest, etc.), but it's one of these intuition things that I cannot
turn into anything concrete.

++L




Re: [9fans] security questions

2009-04-17 Thread lucio
 Unlike
 securitization in the hedge fund world.

Actually, it is a lot safer to provide something like securitisation
(hm, make that s a z, it is no doubt a native American word) in a
virtualised environment; you're much less likely to bring down the
entire system's economy, then.

++L




Re: [9fans] security questions

2009-04-17 Thread Steve Simon
I am interested in the idea of adding some kind of resource limits
to plan9. If they existed I would probably open it up to external
users; however, various things would worry me:

CPU use
Implement the Fair share scheduler

User memory
Working swap would do to fix this for me, but sadly rlimits would probably
be easier to implement. 

Network bandwidth
Again a FSS type algorithm delaying or dropping packets could rate
control the network well I think.

Dialing remote ports
I don't want to become a spam relay, so some restriction must be in place;
I guess this would require a minor modification to the IP stack.

Fork bombs
Erik's mod would help, but add a second threshold where, after 15 seconds,
you kill the proc that failed the most fork() calls (a sketch follows this
list); the danger here is that a spam storm may cause listen(1) to be killed.

Running out of kernel memory
I don't perceive this as a problem, though this could be my lack of vision.
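
A rough sketch in portable C of the second-threshold idea (every name
and number is invented for illustration; this is not a patch against
the Plan 9 kernel):

#include <stdio.h>

/*
 * Hypothetical fork-bomb damper: count failed fork()s per process
 * and, when the 15-second window expires with resources still
 * exhausted, kill the worst offender.  The danger noted above
 * remains: under a spam storm the worst offender may be listen(1).
 */
enum { Window = 15 };	/* seconds */

typedef struct Proc Proc;
struct Proc {
	int	pid;
	int	forkfails;	/* failed fork() calls this window */
};

Proc*
worstoffender(Proc *p, int n)
{
	Proc *worst;
	int i;

	worst = &p[0];
	for(i = 1; i < n; i++)
		if(p[i].forkfails > worst->forkfails)
			worst = &p[i];
	return worst;
}

int
main(void)
{
	Proc procs[] = { {100, 2}, {101, 8841}, {102, 0} };
	Proc *w;

	w = worstoffender(procs, 3);
	printf("after %ds: would kill pid %d (%d failed forks)\n",
		Window, w->pid, w->forkfails);
	return 0;
}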

My 2¢ worth.

-Steve



Re: [9fans] security questions

2009-04-17 Thread Mechiel Lukkien
On Fri, Apr 17, 2009 at 11:29:47AM +0100, Steve Simon wrote:
 I am interested in the idea of adding some kind of resource limits
 to plan9. If they existed I would probably open it up to external
 users; however, various things would worry me:
 
 CPU use
 Implement the Fair share scheduler
 
 User memory
 Working swap would do to fix this for me, but sadly rlimits would probably
 be easier to implement. 
 
 Network bandwidth
 Again a FSS type algorithm delaying or dropping packets could rate
 control the network well I think.
 
 Dialing remote ports
 I don't want to become a spam relay, so some restriction must be in place;
 I guess this would require a minor modification to the IP stack.
 
 Fork bombs
 Erik's mod would help, but add a second threshold where, after 15 seconds,
 you kill the proc that failed the most fork() calls; the danger here is that
 a spam storm may cause listen(1) to be killed.
 
 Running out of kernel memory
 I don't perceive this as a problem, though this could be my lack of vision.

of all the resource capping on a public plan 9 server, i would say the
limits should be per user.  not per-process (group) limits or similar.
i don't know how feasible that (accounting) is.

e.g. make sure a single user gets at most 50% of all available
resources (memory, procs, cpu time).  seems fairest to me.  leftover cpu
time can be given to active users.  leftover memory should probably just
go unused (unless you want to start with swap, which lets you scale a
bit further but has limits too).  if the per-user memory is too low,
just add more memory so it won't be.  then at least multiple users can
use the system and a single one cannot lock it up.

dialing to the outside is perhaps easiest with an external firewall
(e.g. on the adsl modem; they all have one nowadays).  same for bandwidth
limiting.  that won't fairly share the network bandwidth among the users
of the cpu server, though, but it will leave your home connection usable.

then there is none.  anyone can become none, and services run as
none (at least initially).  with per-user limits, anyone can hog none's
resources, leaving none left for network services (which other users
need in order to log in).  perhaps this is the reason per-user limits won't work?
or what would be the impact of disallowing becoming none for
non-hostowners?  normal users might not need it?

mjl



Re: [9fans] security questions

2009-04-17 Thread Eris Discordia
I see. Thanks for the edification :-) I found--still find--it hard to 
understand what Inferno is/does. I actually read 
http://www.vitanuova.com/inferno/papers/bltj.html but it isn't very 
direct about what it is that Inferno does for a user or what a user can do 
with it; what distinguishes it from other (operating?) systems. I've 
decided to try it because documentation says it will readily run on Windows.


As a side note, I found a short passage in the Inferno paper that confirmed 
something I had pointed out previously on this list in almost identical 
wording (and been ridiculed for):



The Styx protocol lies above and is independent of the communications
transport layer; it is readily carried over TCP/IP, PPP, ATM or various
modem transport protocols.


--On Friday, April 17, 2009 11:47 AM +0200 lu...@proxima.alt.za wrote:


I don't know what Inferno is but the phrase 'virtual machine' appears
somewhere in the product description. Isn't Inferno the 'it' you're
searching for?


No, Inferno resembles - very superficially, as you will discover if
you study the literature - a Java interpreter surrounded by its own
operating system.  There are so many clever things about Inferno, it
is hard to do it justice.  But it is not a virtualiser.  More's the
pity, of course.  A virtualiser with Inferno's good features would be
a very useful device.

Actually, I have long had a feeling that there is a convergence of
VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
Lguest, etc.), but it's one of these intuition things that I cannot
turn into anything concrete.

++L






Re: [9fans] security questions

2009-04-17 Thread erik quanstrom
 What if each user can have a separate IP stack, separate
 (virtualized) interfaces and so on?

already possible, but you do need 1 physical ethernet
per ip stack if you want to talk to the outside world.

 But you'd have to implement some sort of limits on
 oversubscribing (ratio of virtual to real resources). Unlike
 securitization in the hedge fund world.

this would add a lot of code and result in the same problem
as today — you can be run out of a critical resource.

- erik



Re: [9fans] security questions

2009-04-17 Thread lucio
 Erik's mod would help, but add a second threshold where, after 15 seconds,
 you kill the proc that failed the most fork() calls; the danger here is that
 a spam storm may cause listen(1) to be killed.

You could put the rate limiting in listen(8) first; you may have
noticed that inetd(8) has this feature, at least in NetBSD, enabled by
default, in contravention of the POLA.

++L




Re: [9fans] security questions

2009-04-17 Thread lucio
 Working swap would do to fix this for me, but sadly rlimits would probably
 be easier to implement. 

There's an intrinsic belief that there cannot be anything wrong with
Plan 9's swap.  Having encountered the rather tightly embedded use of
swap/segmentation/etc.  in the Plan 9 kernel, but without having
explored it to any extent, I'm beginning to see where the faith
principle comes from.  Before anyone can be convinced to fix swap,
it is imperative to be able to supply a reproducible error case.  The
virtual memory management is too persuasive to be broken in any
significant way.

++L




Re: [9fans] web server

2009-04-17 Thread maht



How difficult would it be to use rails or merb in plan9? Is it feasible?
Not Rails or merb or anything non-Plan 9, but a few of us are building an 
rc-shell-based system that works anywhere CGI and Plan 9 / plan9port is 
available.


http://werc.cat-v.org/






Re: [9fans] security questions

2009-04-17 Thread lucio
 what it is that Inferno does for a user or what a user can do 
 with it; what distinguishes it from other (operating?) systems. I've 
 decided to try it because documentation says it will readily run on Windows.

Let's start with the fact that Inferno is a small-footprint, hosted
operating environment with its own, complete development tool set.  As
such it is strictly portable across many architectures with all the
advantages of such portability as well as all the useful features
Inferno inherited from Plan 9.  Not least of these is Limbo, a
programming language based on the mourned Alef and, conveniently,
interpreted by the Limbo virtual machine, not dissimilar from, but
much better thought out than, the Java virtual machine.

You can pile on any number of additional great attributes of Inferno
and Limbo that make them highly useful.  There is also the option to
run Inferno natively on some architectures (I've never dug any deeper
than the PC for this, so off the top of my head I can provide no
exciting examples) with all the drawbacks of needing device drivers
for all sorts of inconsiderate platforms.

In a way, I guess Inferno is a slightly different Plan 9 with built-in
virtualisation for a wide range of platforms.  But the differences are
notable even if the philosophy is the same between the two
environments.

++L




Re: [9fans] web server

2009-04-17 Thread Rudolf Sykora
2009/4/17 maht mattmob...@proweb.co.uk:

 How difficult would it be to use rails or merb in plan9? Is it feasible?

 Not Rails or merb or anything non-Plan 9, but a few of us are building an
 rc-shell-based system that works anywhere CGI and Plan 9 / plan9port is
 available.

 http://werc.cat-v.org/

Yes, I've noticed the existence of werc. I'll take a look at that, sure.
However, I have just discovered 'seaside' web framework and am looking
at it now. It seems to be pretty interesting. Based on Smalltalk and
using a different (and to me appealing) philosophy than MVC.

Thanks
ruda



Re: [9fans] security questions

2009-04-17 Thread Devon H. O'Dell
2009/4/17 Bakul Shah bakul+pl...@bitblocks.com:
 On Thu, 16 Apr 2009 22:19:21 EDT Devon H. O'Dell devon.od...@gmail.com  
 wrote:
 2009/4/16 Bakul Shah bakul+pl...@bitblocks.com:
  Why not give each user a virtual plan9? Not like vmware/qemu
  but more like FreeBSD's jail(8), done more elegantly[TM]!
  To deal with potentially malicious users you can virtualize
  resources, backed by limited/configurable real resources.

 I saw a talk about Mult at DCBSDCon. I think it's a much better idea
 than FreeBSD jail(8), and its security is provable.

 See also: http://mult.bsd.lv/

 But is it elegant?

Rather.

 [Interviewer: What do you think the analog for software is?
  Arthur Whitney: Poetry.
  Interviewer: Poetry captures the aesthetics, but not the precision.
  Arthur Whitney: I don't know, maybe it does.
  -- ACM Queue Feb/Mar 2009, page 18.
    http://mags.acm.org/queue/20090203]

 Perhaps Plan9's model would be easier (and more fun) to
 extend to accomplish this. One can already have a private
 namespace.  How about changing proc(3) to show only your
 login process and its descendants? What if each user can have
 a separate IP stack, separate (virtualized) interfaces and so
 on?  But you'd have to implement some sort of limits on
 oversubscribing (ratio of virtual to real resources). Unlike
 securitization in the hedge fund world.





Re: [9fans] security questions

2009-04-17 Thread maht



If you want true isolation between the users you should give
them each a VM, not a Plan 9 account.

Russ

  

So we chose to use a VM; now we have two problems

http://tinyurl.com/cuul2m

or

http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=operating_systems&articleId=9131647&taxonomyId=89&intsrc=kc_top









Re: [9fans] security questions

2009-04-17 Thread erik quanstrom
 Dialing remote ports
 I don't want to become a spam relay, so some restriction must be in place;
 I guess this would require a minor modification to the IP stack.

does ip/hogports solve your problem?

- erik



Re: [9fans] security questions

2009-04-17 Thread Devon H. O'Dell
2009/4/17 erik quanstrom quans...@quanstro.net:
 What if each user can have a separate IP stack, separate
 (virtualized) interfaces and so on?

 already possible, but you do need 1 physical ethernet
 per ip stack if you want to talk to the outside world.

I'm sure it wouldn't be hard to add a virtual ``physical'' interface,
even though that seems a little bit pervasive, given the already
semi-virtual nature due to namespaces. Not sure how much of a hassle
it would be to make multiple stacks bindable to a single interface...
but perhaps that's the better way to go?

 But you'd have to implement some sort of limits on
 oversubscribing (ratio of virtual to real resources). Unlike
 securitization in the hedge fund world.

 this would add a lot of code and result in the same problem
 as today — you can be run out of a critical resource.

Oversubscribing is the root of the problem. In fact, even if it was
already done, on a terminal server, imagmem is also set to kpages. So
if someone found a way to blow up the kernel's draw buffer, boom. I
don't know how far reaching that is, as I've never really seen the
draw code.

Unfortunately, that's what you have to do unless you can afford to
invest in more hardware, or have a small userbase. Or find some middle
ground -- and maybe that's what the `virtualization' would address.

 - erik

--dho



Re: [9fans] security questions

2009-04-17 Thread Charles Forsyth
Conceptually, anyway. Why is everyone always so hell-bent on hair-splitting? :P

probably the other options suggested by the careers advisor were theology and 
hairdressing.



Re: [9fans] security questions

2009-04-17 Thread erik quanstrom
 The
 virtual memory management is too persuasive to be broken in any
 significant way.

do you mean pervasive?  if you do, i don't buy the argument.
it's easy to get lucky when doing concurrent programming with
locks, as in the plan 9 kernel.  it's easy to get lucky in many cases,
and yet have completely bogus locking.  (as i rediscovered this
morning.)

- erik



Re: [9fans] web server

2009-04-17 Thread Uriel
 How difficult would it be to use rails or merb in plan9? Is it feasible?

 Very difficult. No, not feasible. You would have to port Ruby. And
 then possibly rails, too. Plan 9 isn't UNIX, or UNIX-like, or POSIX
 (or POSIX-like). APE helps with some stuff, but not all the way.

And then you would need some hideous SQL database.

As ken said: "we have persistent objects, they are called files"; and
that is what werc uses.

Writing the core of a blog engine in three lines of rc is hard to
beat, plus you get the benefit of being able to manipulate and manage
all your data using the tools any self respecting Unix user loves.

uriel



Re: [9fans] web server

2009-04-17 Thread Rudolf Sykora
 Writing the core of a blog engine in three lines of rc is hard to
 beat, plus you get the benefit of being able to manipulate and manage
 all your data using the tools any self respecting Unix user loves.

 uriel

well, I haven't thought about it deeply yet, but what I guess could be
a problem with your approach is that many features would have to be
somehow implemented first for it all to be usable. I mean e.g. ajax-style
page content refresh, session management, perhaps the POST method
too.

ruda



Re: [9fans] web server

2009-04-17 Thread Devon H. O'Dell
2009/4/17 Rudolf Sykora rudolf.syk...@gmail.com:
 Writing the core of a blog engine in three lines of rc is hard to
 beat, plus you get the benefit of being able to manipulate and manage
 all your data using the tools any self respecting Unix user loves.

 uriel

 well, I haven't thought about it deeply yet, but what I guess could be
 a problem with your approach is that many features would have to be
 somehow implemented first for it all to be usable. I mean e.g. ajax-style
 page content refresh, session management, perhaps the POST method
 too.

Not really. There's nothing magical about AJAX. It's just HTTP
requests. As long as you support those, your pages can use AJAX.
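
A tiny illustration in C, assuming a plain CGI environment (werc's
real handlers are rc scripts; the program below is invented for the
example): an AJAX POST is just an ordinary HTTP request with a body,
so anything that reads CONTENT_LENGTH bytes from stdin can answer it.

#include <stdio.h>
#include <stdlib.h>

/*
 * Minimal CGI sketch: an XMLHttpRequest POST and a plain form
 * submission look identical down here -- read the body, write a
 * response.
 */
int
main(void)
{
	char buf[4096];
	char *len;
	long n;

	len = getenv("CONTENT_LENGTH");
	n = len != NULL ? atol(len) : 0;
	if(n < 0 || n >= (long)sizeof buf)
		n = 0;
	n = (long)fread(buf, 1, n, stdin);	/* the POST body */
	buf[n] = '\0';

	/* answer with JSON for the page's script to consume */
	printf("Content-Type: application/json\r\n\r\n");
	printf("{\"echo\": \"%s\"}\n", buf);
	return 0;
}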

--dho

 ruda





Re: [9fans] noweb and literal programming

2009-04-17 Thread Aharon Robbins
I have used it also.  Circa 10.5 years ago there was a race condition in
the scripts that ran it with troff, which I fixed and sent back in; I think
the fixes got into the dist.

Literate programming is a lot of fun and works well if you have the mindset
for it.

Arnold

In article dd6fe68a0904111628h20406a52xd702d276bf278...@mail.gmail.com,
Russ Cox r...@swtch.com wrote:
Noweb has a nice simple interface (if literate programming
is what you want) and runs on Plan 9.  It's somewhere:
I'm sure if you dig around you can find it.  Maybe it's in
/n/sources/extra.  I used it quite a bit with latex.  I don't
remember whether I ever used it with troff.

Russ



-- 
Aharon (Arnold) Robbins   arnold AT skeeve DOT com
P.O. Box 354              Home Phone: +972  8 979-0381
Nof Ayalon                Cell Phone: +972 50  729-7545
D.N. Shimshon 99785 ISRAEL



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Balwinder S Dheeman
On 04/15/2009 05:22 PM, Pietro Gagliardi wrote:
 On Apr 15, 2009, at 4:26 AM, Eris Discordia wrote:
 
 Plan 9 is not intended for home or home office.
 
 True, but that doesn't mean it can't be used in such an environment. I
 type all my reports up in Plan 9.

Please set aside rare cases and let us know who, except for students,
teachers and/or researchers, uses Plan9 and/or Inferno in offices,
homes and/or cafes, and for what?

The Plan9 project started in 1980, took around 9 years to be solid
enough to be usable, and that too only by internal and/or lab people
[http://plan9.bell-labs.com/sys/doc/9.html]. Whereas the FreeBSD
and/or Linux (though not an OS or Unix variant in a sense) came into
existence later, in 1993 and 1991 respectively, and are more popular than
any other variants of Unix.

IMHO, Plan9 and/or Inferno are just failed attempts and have no
real and/or viable commercial and/or industrial use in the absence of
hardware drivers and/or, if not killer, at least some useful applications.

Moreover, the user interface and/or window manager, i.e. rio, is too
technical for an average user to put to good use. It lacks the usual
buttons for minimizing (hiding), maximizing and controlling windows. You
can't even send a window to the background, and even though Inferno's wm has
some of these, including title bars, the meanings and/or behavior of the
same are quite different from other popular GUI systems.

-- 
Balwinder S bdheeman DheemanRegistered Linux User: #229709
Anu'z li...@home (Unix Shoppe)Machines: #168573, 170593, 259192
Chandigarh, UT, 160062, India Plan9, T2, Arch/Debian/FreeBSD/XP
Home: http://cto.homelinux.net/~bsd/  Visit: http://counter.li.org/



Re: [9fans] a bit OT, programming style question

2009-04-17 Thread Balwinder S Dheeman
On 04/10/2009 05:08 AM, Eris Discordia wrote:
 this is the space-shuttle dichotomy.  it's a false one.  it's a
 continuum. its ends are dangerous.
 
 So somewhere in the middle is the golden mean? I have no objections to
 that. *BSD systems very well represent a silver, if not a golden,
 mean--just my idea, of course.
 
 it is interesting to me that some software manages to run off both
 ends of this continuum at the same time.  in linux your termcap
 from 1981 will still work, but software written to access /sys last
 year is likely out-of-date.
 
 While I won't vouch for Linux as a good OS (user-land and kernel
 combined) I understand what you see as its eccentricity is merely a
 side-effect of openness. Tighten the development up and you get a
 BSD-style system (committer/contributor/maintainer/grunt/user
 highest-to-lowest ranking, with a demiurge position for Theo de Raadt).
 Tighten it even further up with in-ken shared among a core group of
 old-timers and thoroughbreds transmitted only to serious researchers and
 you get Plan 9.
 
 You are right, after all. It all lies on a continuum. Actually, more
 tightly regulated Linux distros such as Slackware readily demonstrate
 that; they easily beat all-out all-open distros like Fedora (whose
 existence is probably perceived at Red Hat as a big brainstorming project).
 
 your insinuation that *bsd is a real serious system and plan 9 is
 a research system doesn't make any historical sense to me.  they
 both started as research systems.  i am not aware of any law that
 prevents a system that started as a research project from becoming
 a serious production system.
 
 What I am insinuating is more like this: any serious system will sooner
 or later have to grow warts and/or contract herpes. That's an
 unavoidable consequence of social life. If you do insist that Plan 9 has
 no warts, or far less warts than the average, or that it has never seen
 a cold sore on its upper lip then I'll happily conclude it has never
 lived socially. And I haven't really ever used Plan 9 or been into it.
 The no-herpes indicator is that strong.

I for one could not resist adding that, no doubt, Plan9 and *BSD are
quite clean and well maintained systems, but IMHO they are of only
little use to an average user, because of a noticeable scarcity of
hardware drivers and real applications. Hence, who cares about a cow
gone dry.

Years ago, in an article, 'Program Design in the UNIX Environment', Rob Pike
and Brian W. Kernighan discussed the UNIX programming environment, program
design, tools and some problems introduced by users after UNIX
commercially became a success. Have they mentioned, and/or do they know,
who indeed is behind that success?

 i know of many thousands of plan 9 systems in production right
 now.

Erik, you might want to know how many *million* people use Linux ;)
Won't you?

-- 
Dr Balwinder S bsd Dheeman  Registered Linux User: #229709
Anu'z li...@home (Unix Shoppe)Machines: #168573, 170593, 259192
Chandigarh, UT, 160062, India Plan9, T2, Arch/Debian/FreeBSD/XP
Home: http://cto.homelinux.net/~bsd/  Visit: http://counter.li.org/



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Jim
On Apr 14, 7:15 pm, szhil...@gmail.com (Sergey Zhilkin) wrote:
  My wireless card is not listed in Plan9.ini. Does that mean there's no
  way for me to connect with that card?

  Hi !

 What type of wireless card do you have?

 --
 With best regards
 Zhilkin Sergey

Sorry, I forgot to say! It's Atheros AR5001X+.



Re: [9fans] web server

2009-04-17 Thread erik quanstrom
On Fri Apr 17 08:33:12 EDT 2009, urie...@gmail.com wrote:
 And then you would need some hideous SQL database.
 
 As ken said: we have persistent objects, they are called files; and
 that is what werc uses.

i feel compelled to defend one of my favorite quotes
of all time from misapplication.  i'm sure that werc is
well-engineered for its domain, but the mistake i see
is generalizing this into "sql sucks".

just as a point of pedantry, in a standard sql database,
there are no objects.

sql does not suck.  here's why.  sql databases are really
good at keeping relationships between rows (here's the
important part) with no locking visible to the client.
even better in the face of non-static requirements,
more relationships can be added on the fly.  it's hard
to do this with flat files, and file-based locking (like
upas does for mbox files) is pretty tricky.
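
to make the trickiness concrete, here is a lock-file sketch in
portable C, in the spirit of mbox locking (not upas's actual code;
names and the retry policy are invented):

#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

/*
 * Creating mbox.lock with O_EXCL gates the critical section.  The
 * retry loop, stale locks left by crashed holders, and network file
 * system semantics are what make this tricky; an sql database hides
 * all of it from the client.
 */
int
lockmbox(const char *lockname)
{
	int fd, tries;

	for(tries = 0; tries < 30; tries++){
		fd = open(lockname, O_WRONLY|O_CREAT|O_EXCL, 0444);
		if(fd >= 0){
			close(fd);
			return 0;	/* we hold the lock */
		}
		if(errno != EEXIST)
			return -1;	/* real error */
		sleep(1);		/* held by someone; wait */
	}
	return -1;	/* give up; or is the holder dead? */
}

void
unlockmbox(const char *lockname)
{
	unlink(lockname);
}

int
main(void)
{
	if(lockmbox("mbox.lock") == 0){
		/* append to the mbox safely here */
		unlockmbox("mbox.lock");
	}
	return 0;
}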

- erik



Re: [9fans] a bit OT, programming style question

2009-04-17 Thread erik quanstrom
  i know of many thousands of plan 9 systems in production right
  now.
 
 Erik, you might want to know how many *million* people use Linux ;)
 Won't you?

the criticism of plan 9 that i was responding to was that
plan 9 was not used for anything serious or capable of
being used in production.

i was specifically *not* making an appeal to the majority.

maybe you misunderstood, but from where i sit your
argument consists of putting words in my mouth and
a logical fallacy.

- erik



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread erik quanstrom
 The Plan9 project started in 1980, took around 9 years to be solid
 enough to be usable, and that too only by internal and/or lab people
 [http://plan9.bell-labs.com/sys/doc/9.html].

unless one is speaking in geologic terms, there's a significant difference
between the mid-1980s and 1980.

in fact the quote is "Plan 9 began in the late 1980's and ... by
1989 the system had become solid enough that some of us began
using it as our exclusive computing environment."

i'd encourage you to read your source material.

- erik



Re: [9fans] a bit OT, programming style question

2009-04-17 Thread Devon H. O'Dell
Wait, am I on the wrong mailing list? Since when was this Fans of BSD
and Linux Talk about why Plan 9 Sucks Donkey Shit?

(I use FreeBSD and Linux. OTOH, I'm not on freebsd-general@ and centos
mailing lists talking about how our private namespaces and 9p are so
much shinier than VFS)



Re: [9fans] security questions

2009-04-17 Thread Steve Simon
My understanding is that it would prevent people listening and pretending to
offer services on my behalf, but would not stop them dialing SMTP ports
on other machines and sending them spam.

-Steve



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Devon H. O'Dell
2009/4/17 Eris Discordia eris.discor...@gmail.com:
 It's like I'm seeing an apparition of myself back more than a year ago. No
 wonder 9fans got to dislike me so much. Do 9fans get nuisances like me at
 regular intervals?

From time to time :)

We have a high conversion rate, though.

--dho



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Eris Discordia
It's like I'm seeing an apparition of myself back more than a year ago. No 
wonder 9fans got to dislike me so much. Do 9fans get nuisances like me at 
regular intervals?


--On Friday, April 17, 2009 1:14 PM + Balwinder S Dheeman 
bdhee...@gmail.com wrote:



On 04/15/2009 05:22 PM, Pietro Gagliardi wrote:

On Apr 15, 2009, at 4:26 AM, Eris Discordia wrote:


Plan 9 is not intended for home or home office.


True, but that doesn't mean it can't be used in such an environment. I
type all my reports up in Plan 9.


Please set aside rare cases and let us know who, except for students,
teachers and/or researchers, uses Plan9 and/or Inferno in offices,
homes and/or cafes, and for what?

The Plan9 project started in 1980, took around 9 years to be solid
enough to be usable, and that too only by internal and/or lab people
[http://plan9.bell-labs.com/sys/doc/9.html]. Whereas the FreeBSD
and/or Linux (though not an OS or Unix variant in a sense) came into
existence later, in 1993 and 1991 respectively, and are more popular than
any other variants of Unix.

IMHO, Plan9 and/or Inferno are just failed attempts and have no
real and/or viable commercial and/or industrial use in the absence of
hardware drivers and/or, if not killer, at least some useful applications.

Moreover, the user interface and/or window manager, i.e. rio, is too
technical for an average user to put to good use. It lacks the usual
buttons for minimizing (hiding), maximizing and controlling windows. You
can't even send a window to the background, and even though Inferno's wm has
some of these, including title bars, the meanings and/or behavior of the
same are quite different from other popular GUI systems.

--
Balwinder S bdheeman DheemanRegistered Linux User: #229709
Anu'z li...@home (Unix Shoppe)Machines: #168573, 170593, 259192
Chandigarh, UT, 160062, India Plan9, T2, Arch/Debian/FreeBSD/XP
Home: http://cto.homelinux.net/~bsd/  Visit: http://counter.li.org/





Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Steve Simon
 The Plan9 project started in 1980, took around 9 years to be solid
 enough to be usable, and that too only by internal and/or lab people
 [http://plan9.bell-labs.com/sys/doc/9.html].

I was using plan9 outside of bell labs in 1993 - not very aggressively
I admit, but I didn't have the skills then that I do now. It was solid
and usable at the time.

 Whereas the FreeBSD
 and/or Linux (though not an OS or Unix variant in a sense) came into
 existence later, in 1993 and 1991 respectively, and are more popular than
 any other variants of Unix.

I first remember seeing references to Linux as a reworking of the Minix project
in 1988. BSD has been around forever.

 IMHO, Plan9 and/or Inferno are just failed attempts and have no
 real and/or viable commercial and/or industrial use in the absence of
 hardware drivers and/or, if not killer, at least some useful applications.

You are, of course, entitled to your own opinion; it's a shame you didn't
do more research, however.

 Moreover, the user interface and/or window manager, i.e. rio, is too
 technical for an average user to put to good use.

Too technical? Really? 

 It lacks the usual
 buttons for minimizing (hiding), maximizing and controlling windows. You
 can't even send a window to the background, and even though Inferno's wm has
 some of these, including title bars, the meanings and/or behavior of the
 same are quite different from other popular GUI systems.

Here we agree

-Steve  Registered Plan9 User #954854834843




Re: [9fans] web server

2009-04-17 Thread Rudolf Sykora
2009/4/17 maht mattmob...@proweb.co.uk:

 well, I haven't thought about it deeply yet, but what I guess could be
 a problem with your approach is that many features would have to be
 somehow implemented first for it all to be usable. I mean e.g. ajax-style
 page content refresh, session management, perhaps the POST method
 too.

 ruda


 never say it is impossible to the man busy doing it

have I ?
r



Re: [9fans] security questions

2009-04-17 Thread Eris Discordia
Very nice of you to go to such lengths describing Inferno to a non-techie. 
Thank you. Just got the Fourth Edition ISO and will try it. Maybe even 
learn some Limbo in the long term.


--On Friday, April 17, 2009 1:55 PM +0200 lu...@proxima.alt.za wrote:


what it is that Inferno does for a user or what a user can do
with it; what distinguishes it from other (operating?) systems. I've
decided to try it because documentation says it will readily run on
Windows.


Let's start with the fact that Inferno is a small-footprint, hosted
operating environment with its own, complete development tool set.  As
such it is strictly portable across many architectures with all the
advantages of such portability as well as all the useful features
Inferno inherited from Plan 9.  Not least of these is Limbo, a
programming language based on the mourned Alef and, conveniently,
interpreted by the Limbo virtual machine, not dissimilar from, but
much better thought out than, the Java virtual machine.

You can pile on any number of additional great attributes of Inferno
and Limbo that make them highly useful.  There is also the option to
run Inferno natively on some architectures (I've never dug any deeper
than the PC for this, so off the top of my head I can provide no
exciting examples) with all the drawbacks of needing device drivers
for all sorts of inconsiderate platforms.

In a way, I guess Inferno is a slightly different Plan 9 with built-in
virtualisation for a wide range of platforms.  But the differences are
notable even if the philosophy is the same between the two
environments.

++L










Re: [9fans] security questions

2009-04-17 Thread gdiaz
hello 

you might want to take a look at the vitanuova resources page for other
inferno flavours than the official release.

inferno-os.googlecode.com
acme-sac.googlecode.com


slds.

gabi



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread hiro
 It lacks the usual
 buttons for minimizing (hiding), maximizing and controlling windows. You
 can't even send a window to the background, and even though Inferno's wm has
 some of these, including title bars, the meanings and/or behavior of the
 same are quite different from other popular GUI systems.

 Here we agree

Huh? Rio works fine here: you can resize, move and hide windows; also
a click brings the window to the front.
I prefer tiling window managers, but rio comes just afterwards in my
list of preferences.

I agree that inferno's attempt to imitate popular GUIs failed ;)



Re: [9fans] security questions

2009-04-17 Thread Robert Raschke
On Fri, Apr 17, 2009 at 2:08 PM, Eris Discordia
eris.discor...@gmail.com wrote:
 Very nice of you to go to such lengths describing Inferno to a non-techie.
 Thank you. Just got the Fourth Edition ISO and will try it. Maybe even learn
 some Limbo in the long term.

Also note there's a new book out that includes Inferno as a major
example, essentially explaining OS principles in general, in Inferno,
and in Linux:

Principles of Operating Systems: Design and Applications
by Brian Stuart

( http://www.amazon.co.uk/exec/obidos/ASIN/1418837695 )

I've only just started reading it, so can't really comment on how good
it is yet. Looks promising so far though.

Robby



[9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 Actually, I have long had a feeling that there is a convergence of
 VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
 Lguest, etc.), but it's one of these intuition things that I cannot
 turn into anything concrete.

This brings to mind something that's been rolling around
in the back of my head for a while.  It was about 20 years
between the earliest UNIX and early Plan 9, and it's been
about 20 years since early Plan 9.  One of the major drivers
of Plan 9 was the change in the computing landscape that
UNIX had adapted to at best clunkily.  In particular, unlike
1969, 1) networks were nearly universal, 2) significant computing
was done at the terminal rather than back at the mini
at the other end of the RS-232 line, and 3) graphical interfaces
were quite common.  None of these were part of the UNIX
model and the ways they were accommodated by 1989
were, in allusion to Kidder, like paper bags taped on the
side of the machine.  Don't get me wrong, given the constraints
and uncertainties of the times, the early networking, GUI,
and distributed techniques were pretty good first cuts.
But by 1989, they were well-understood enough that
it made sense to reconsider them from scratch.

So what about today?  It seems to me there are also three
major aspects of the milieu that have changed since 1989.
- First, the gap between the computational power at the
terminal and the computational power in the machine room
has shrunk to the point where it might no longer be significant.
It may be worth rethinking the separation of CPU and terminal.
For example, I'm typing this in acme running in a 9vx terminal
booted using a combined fs/cpu/auth server for the
file system.  But I rarely use the cpu server capability of
that machine.
- Second, network access has since become both ubiquitous
and sporadic.  In 1989 being on the network meant sitting
at a fixed machine tethered to the wall by Ethernet.  Today,
one of the most common modes of use is the laptop that
we use to carry our computing world around with us.  We
might be on the network at home, at work, at a hotel, at
Starbucks, or not at all, even all in the same day.  So how
can a laptop and a file server play nice?
- Third, virtualization is no longer the domain of IBM big
iron (VM) and low-performance experiments (e.g. P-machines).
The current multi-core CPUs practically beg for virtualized
environments.

Am I suggesting another start-from-scratch project?  Not
necessarily, but I don't want to reject that out of hand,
either.  I tend to think that Plan 9 and Inferno can be a
good base that can adapt well to these changes.  Though
I'm inclined to think that there's an opportunity to create
a better hypervisor, inspired by these systems we know
and love.  As an example of the kind of rumination that
would be part of this process, is it possible to create a
hypervisor where the resources of one VM can be imported
by another with minimal (or better, no) modification to
the mainstream guests?  This would allow any OS to
leverage the device drivers written for another.

I've gone on long enough.  Those of you who have not
recently been laid off don't need to spend too much time
on my musings.  But the question in my mind for a while
has been, is it time for another step back and rethinking
the big picture?

BLS




Re: [9fans] security questions

2009-04-17 Thread lucio
 Very nice of you to go to such lengths describing Inferno to a non-techie. 
 Thank you. Just got the Fourth Edition ISO and will try it. Maybe even 
 learn some Limbo in the long term.

My pleasure.  I just hope no one decides to confront me on all the
inaccuracies that are likely to have crept in :-)

++L




Re: [9fans] Help for a home user discovering Plan 9

2009-04-17 Thread blstuart
Oops: sent too early...  Here's the rest

 It would be nice if someone could point me to some step-by-step
 instructions for Plan 9 dummies,

I don't think such a thing currently exists, but if you keep
notes as you go along, you could provide the welcome service
of writing one...

But there are some general direction to point you in for
these specific things:

 for a wireless connection to a DHCP
 router network,

ip/ipconfig looks for DHCP if you don't give it an explicit
address, so that part is easy.  The real challenge is in
the device driver for any given wireless card.  Because
our community is small, we don't have an army of
device driver writers.  So the easiest way to do this is
run Plan 9 along with something else using 9vx, qemu,
Xen, lguest, kvm, virtualbox, vmware, ...

 changing the display resolution or the Acme font,

When running natively, the resolution is set by vga(8)
from the vgasize= parameter in plan9.ini.  Acme
takes two command-line font parameters: -f and -F.
Usually, it's started reading from an acme.dump file
where the desired font has already been recorded.

 browsing the Web,

Ah, the web; our thorn in the flesh :)  There is abaco
and Inferno's charon, but neither supports the java/flash/
extension of the week that so many sites seem to assume.
It'd be great if someone wrote a browser that did support
them, but that's not an interesting problem for most of
the people here.

 and accessing files and running applications on a
 Vista laptop.

There's aquarela, which is a CIFS server, but I'm not sure
about a client.  I seem to remember it being worked on at
one point, but I'm not sure if it was ever completed.

 I'd also welcome any other ideas about learning to use
 Plan 9.

I'll have to leave that to others.  I tend to be interested in
things as objects of study, rather than as things to use.
I just happen to use my objects of study along the way.

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread tlaronde
On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstu...@bellsouth.net wrote:
 - First, the gap between the computational power at the
 terminal and the computational power in the machine room
 has shrunk to the point where it might no longer be significant.
 It may be worth rethinking the separation of CPU and terminal.
 For example, I'm typing this in acme running in a 9vx terminal
 booted using using a combined fs/cpu/auth server for the
 file system.  But I rarely use the cpu server capability of
 that machine.

I'm afraid I don't quite agree with you.

The definition of a terminal has changed. In Unix, the graphical
interface (X11) was a graphical variant of the text terminal interface,
i.e. the articulation (link, network) was put in the wrong place,
the graphical terminal (X11 server) being a kind of dumb terminal (a
little above a frame buffer), leaving all the processing, including the
handling of the graphical interface (generating the image,
administrating the UI, the menus) on the CPU (Xlib and toolkits run on
the CPU, not the X server).

A terminal is not a device with no processing capabilities (a dumb
terminal): it can be a full terminal, that is, able to handle the
interface, the representation of data and commands (wandering in a menu
shall be terminal stuff; other users should not be impacted by a user's
wandering through the UI).

More and more, for administration, using light terminals without
software installations is the way to go (fewer resources in TCO). Green
technology. Dataless terminals for security (one loses a terminal, not
the data), and dataless for safety (data is centralized and protected).


Secondly, one is accustomed to a physical user being several distinct
logical users (accounts), for managing different tasks, or accessing
different kinds of data.

But (to my surprise), the converse is true: a collection of individuals
can be a single logical user, having to handle concurrently the very
same rw data. Terminals are then just distinct views of the same data
(imagine in a CAD program having different windows, different views of a
file; this is the same, except that the windows are on different
terminals, with different instances of the logical user in front of
them).

The processing is then better kept on a single CPU, handling the
concurrency (and not the fileserver trying to accommodate). The views are
multiplexed, but not the handling of the data.

Thirdly, you can have a slow/loose link between a CPU and a terminal
since the commands are only a small fraction of the processing done.
You must have a fast or tight link between the CPU and the fileserver.

In some sense, logically (but not efficiently: read the caveats in the
Plan9 papers; a processor is nothing without tightly coupled memory, so
memory is not a remote sharable pool---Mach!), even today on an
average computer one has this articulation: a CPU (with an FPU
perhaps); tightly or loosely connected storage (SATA or SAN);
graphical capacities (terminal): GPU.

-- 
Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread erik quanstrom
 In some sense, logically (but not efficiently: read the caveats in the
 Plan9 papers; a processor is nothing without tightly coupled memory, so
 memory is not a remote sharable pool---Mach!), 

if you look closely enough, this kind of breaks down.  numa
machines are pretty popular these days (opteron, intel qpi-based
processors).  it's possible with a modest loss of performance to
share memory across processors and not worry about it.

there is such an enormous difference in network speeds
(4 orders of magnitude: 1mbps dsl/wireless up to 10gbps)
that it's hard to generalize, but i don't see why tightly
coupled memory is absolutely necessary.  you could
think of the network as 1/10th- to full-speed quickpath.
it may still be a big win.

 even today on an
 average computer one has this articulation: a CPU (with an FPU
 perhaps); tightly or loosely connected storage (SATA or SAN);
 graphical capacities (terminal): GPU.

plan 9 can make the nas/dasd dichotomy disappear.

import -E ssl storage.coraid.com '#S' /n/bigdisks

- erik



[9fans] Security, take 2.

2009-04-17 Thread Devon H. O'Dell
Given the feedback from the list, I've come up with two alternatives.
(Well, one of them was actually Mechiel's brainchild).

Idea #1 (From Mechiel)
Instead of doing typed allocations, give every user an allocation
pool, from which all kernel allocations will take place. To extend on
this, the size of the pool is somewhat dynamic -- as new users log in,
all users' ability to consume kernel resources goes down by a fair
percentage. (Except for eve.) As users log out, all users gain the
percentage of resources back. The number is based on a 90% resource
allocation -- i.e. the kernel may keep 10% of its initial resources
for things it needs to do all by itself, without users interfering.
When a malloc occurs under up, its size is added to a counter. The
proc also holds per-proc information, so that a username change can
intelligently move only the resources from that proc over to the new
user, instead of everything from the old user (which would clearly be
wrong).

This implementation has one magic number: 0.9. The fair share is
percentage-based from that, but I'm not experienced in fair-share
algorithms, so maybe there's a better way to do that. Also, the
security of this implementation is provable.
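
A minimal sketch of the dynamic pool sizing in portable C (the 0.9
factor is from the proposal above; Kpages, usershare() and the page
counts are invented for illustration):

#include <stdio.h>

/*
 * The kernel keeps 10% of its pages for itself; the remaining 90%
 * is split evenly among logged-in users, so every login shrinks
 * everyone's share and every logout grows it back.
 */
enum { Kpages = 1024*1024 };	/* pretend: 4GB worth of 4K pages */

unsigned long
usershare(unsigned long total, int nusers)
{
	if(nusers < 1)
		nusers = 1;
	return (unsigned long)(total*0.9) / nusers;
}

int
main(void)
{
	int n;

	for(n = 1; n <= 4; n++)
		printf("%d user(s): %lu pages each\n",
			n, usershare(Kpages, n));
	return 0;
}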

Downside is that this implementation is somewhat intrusive:
introducing 9/port/kreslimit.c and touching 9/port/portdat.h,
9/port/portfns.h, 9/port/alloc.c, 9/port/proc.c, and 9/port/devcap.c

I only have one question about implementation: where in process
creation is the process username set? In newproc(), I see p->user set
to "*nouser"; I can only assume this is `fixed' later, but I don't
know where. I ask, because for natively started processes (i.e. not a
user logging in from drawterm, that's handled through devcap), I need
to incref on the proc structure that holds the user's pool info. I
know I need to do this in e.g. #c/user, and devcap. But when a user
starts a new process after logging in, it needs to add a ref. Where in
proc.c (or elsewhere) is it finally determined who the user is? (Is
that in renameuser()? I wasn't sure).

Idea #2
Implement something similar to mult.bsd.lv. This would be implemented as
a device and would give you a `blank' Plan 9 system:
echo {cpushare} {maxmem} {newroot} > /dev/virtual/new

I haven't thought a whole lot about how this would work, but I'm
guessing at least the maxmem would be implemented similarly, by
creating a new pool. I'd have to learn more about the scheduler to do
the CPU limiting, and newroot would be as easy as a bind(). This would
also be somewhat intrusive (but maybe not more so than Kreslimit), but
has hard values, and provable security. (And the advantage of being
able to spawn `new' Plan 9 instances from Plan 9.) More like jail(8)
in FreeBSD (but much cleaner, due to its provability), less like
vkernel(8) in DragonFly BSD.

--dho



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread tlaronde
On Fri, Apr 17, 2009 at 01:29:09PM -0400, erik quanstrom wrote:
  In some sense, logically (but not efficiently: read the caveats in the
  Plan9 papers; a processor is nothing without tightly coupled memory, so
  memory is not a remote sharable pool---Mach!), 
 
 if you look closely enough, this kind of breaks down.  numa
 machines are pretty popular these days (opteron, intel qpi-based
 processors).  it's possible with a modest loss of performance to
 share memory across processors and not worry about it.

NUMA are, from my point of view, tightly connected.

By loosely, I mean memory accessed by non-dedicated processor
hardware means (if this makes sense): moving data between different
memories via some IP-based protocol or worse. But all in all,
finally a copy is put in the tightly connected memory, whether huge
caches or dedicated main memory.

The disaster of Mach (I don't know if my bad English is responsible for
this, but in the Plan9 paper the research or university OS that is
implicitly gibed at is Mach) is a kind of example.

NUMA are sufficiently special beasts that the majority of huge computing
facilities have been built as clusters (because it was easier for
software-only organizations).

This definitely doesn't mean NUMA has no raison d'être. On the
contrary, this is a supplementary argument for the distinction
between the UI (terminals) and the CPU.

-- 
Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C



Re: [9fans] security questions

2009-04-17 Thread Bakul Shah
On Fri, 17 Apr 2009 08:14:12 EDT Devon H. O'Dell devon.od...@gmail.com  
wrote:
 2009/4/17 erik quanstrom quans...@quanstro.net:
  What if each user can have a separate IP stack, separate
  (virtualized) interfaces and so on?
 
  already possible, but you do need 1 physical ethernet
  per ip stack if you want to talk to the outside world.
 
 I'm sure it wouldn't be hard to add a virtual ``physical'' interface,
 even though that seems a little bit pervasive, given the already
 semi-virtual nature due to namespaces. Not sure how much of a hassle
 it would be to make multiple stacks bindable to a single interface...
 but perhaps that's the better way to go?

You'd have to add a packet classifier of some sort.  Packets
to host A get delivered to logical interface #1, packets to host B
to #2 and so on.  Going out is not a problem.
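
A sketch of such a classifier in C (the table, addresses and names
are invented for illustration; this is not the Plan 9 IP stack):

#include <stdio.h>

/*
 * Steer each incoming packet to a per-host logical interface by
 * destination address: host A's packets go to #1, host B's to #2,
 * anything else to the default interface 0.
 */
typedef unsigned int uint32;

typedef struct Logif Logif;
struct Logif {
	uint32	dstip;	/* host address owning this interface */
	int	ifno;	/* logical interface number */
};

static Logif iftab[] = {
	{ 0xC0A80001, 1 },	/* host A: 192.168.0.1 -> #1 */
	{ 0xC0A80002, 2 },	/* host B: 192.168.0.2 -> #2 */
};

int
classify(uint32 dst)
{
	int i;

	for(i = 0; i < (int)(sizeof iftab/sizeof iftab[0]); i++)
		if(iftab[i].dstip == dst)
			return iftab[i].ifno;
	return 0;	/* default interface */
}

int
main(void)
{
	printf("192.168.0.2 -> logical interface #%d\n",
		classify(0xC0A80002));
	return 0;
}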

Alternatively, put each virtual host on a different VLAN (if
your ethernet controller does VLANs).

  But you'd have to implement some sort of limits on
  oversubscribing (ratio of virtual to real resources). Unlike
  securitization in the hedge fund world.
 
  this would add a lot of code and result in the same problem
  as today — you can be run out of a critical resource.
 
 Oversubscribing is the root of the problem. In fact, even if it was
 already done, on a terminal server, imagmem is also set to kpages. So
 if someone found a way to blow up the kernel's draw buffer, boom. I
 don't know how far reaching that is, as I've never really seen the
 draw code.

If you are planning to open up a system to the public, then
provisioning for the peak use of your system will result in a
lot of waste (even if you had the resources to so provision).
Even your ISP uses oversubscription (probably by a factor of
100, if not more. If his upstream data pipes give him N bps,
he will give out 100N bps of total bandwidth to his
customers.  If you want guaranteed bandwidth, you have to
shell out a lot more for a gold service level agreement).

What I meant is
a) you need to ensure that a single user can't exceed his resource limits,
b) enforce a sensible oversubscription limit (if you oversubscribe
   by a factor of 30, don't let in the 31st concurrent user; see the
   sketch after this list), and
c) very likely you also want to put these users in different
   login classes (ala *BSD) and disallow each class from
   cumulatively exceeding its configured resource limit (*BSD
   doesn't do this) -- this is where I was thinking of CBQ.
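
The admission check in (b) is simple enough to sketch in a few lines
of C (the factor of 30 and the names are illustrative only):

#include <stdio.h>

/* admit at most Oversub concurrent users; the 31st is turned away */
enum { Oversub = 30 };

static int nusers;

int
admit(int userid)
{
	if(nusers >= Oversub){
		printf("refusing user%d: oversubscription limit reached\n",
			userid);
		return -1;
	}
	nusers++;
	return 0;
}

int
main(void)
{
	int i;

	for(i = 1; i <= 31; i++)
		admit(i);	/* only user 31 is refused */
	return 0;
}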



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 The definition of a terminal has changed. In Unix, the graphical

In the broader sense of terminal, I don't disagree.  I was
being somewhat clumsy in talking about terminals in
the Plan 9 sense of the processing power local to my
fingers.

 A terminal is not a device with no processing capabilities (a dumb
 terminal): it can be a full terminal, that is, able to handle the
 interface, the representation of data and commands (wandering in a menu
 shall be terminal stuff; other users should not be impacted by a user's
 wandering through the UI).

Absolutely, but part of what has changed over the past 20
years is that the rate at which this local processing power
has grown has been faster than the rate at which the processing
power of the rack-mount box in the machine room has
grown (large clusters notwithstanding, that is).  So the
gap between them has narrowed.

 The processing is then better kept on a single CPU, handling the
 concurrency (and not the fileserver trying to accommodate). The views are
 multiplexed, but not the handling of the data

That is part of the conversation the question is meant
to raise.  If cycles/second isn't as strong a justification
for separate CPU servers, then are there other reasons
we should still have the separation?  If so, do we need
to think differently about the model?

 In some sense, logically (but not efficiently: read the caveats in the
 Plan9 papers; a processor is nothing without tightly coupled memory, so

The flip side is actually what intrigues me more, namely
machines where the connection to the file system is
even more loosely coupled than sharing Ethernet.  I'd
like to have my usage on the laptop sitting in Starbucks
to be as much a part of the model as using one of
the BlueGene machines as an enormous CPU server
while sitting in the next room.

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread erik quanstrom
 Absolutely, but part of what has changed over the past 20
 years is that the rate at which this local processing power
 has grown has been faster than the rate at which the processing
 power of the rack-mount box in the machine room has
 grown (large clusters notwithstanding, that is).  So the
 gap between them has narrowed.

or, we have miserably failed as of late in putting every cycle we
can dream about to good use; we'd care more about the cycles
of a cpu server if we were better at using them up.

every cycle's perfect, every cycle's great
if one cycle's wasted, god gets quite irate

that, plus the fact that the mhz wars are dead and
gone.

- erik



Re: [9fans] Help for a home user discovering Plan 9

2009-04-17 Thread Steve Simon
 There's aquarela, which is a CIFS server, but I'm not sure
 about a client.  I seem to remember it being worked on at
 one point, but I'm not sure if it was ever completed.

cifs(1) (cifs client) is alive and well at contrib/install steve/cifs

I use it every day at work; its only (known) limitation
is that its DFS client can only follow intra-server links.
You can work around this by mounting servers as you need them
and bind(1)ing over the broken DFS link.

-Steve



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 if you look closely enough, this kind of breaks down.  numa
 machines are pretty popular these days (opteron, intel qpi-based
 processors).  it's possible with a modest loss of performance to
 share memory across processors and not worry about it.

Way back in the dim times when hypercubes roamed
the earth, I played around a bit with parallel machines.
When I was writing my master's thesis, I tried to find
a way to dispel the idea that shared-memory vs interconnection
network was as bipolar as the terms multiprocessor and
multicomputer would suggest.  One of the few things
in that work that I think still makes sense is characterizing
the degree of coupling as a continuum based on the ratio
of bytes transferred between CPUs to bytes accessed in
local memory.  So C.mmp would have a very high degree
of coupling and SETI@home would have a very low degree
of coupling.  The upshot is that if I have a fast enough
network, my degree of coupling is high enough that I
don't really care whether or how much memory is local
and how much is on the other side of the building.
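
To make the metric concrete, here is a toy computation of that ratio;
the byte counts are invented for illustration, not measurements:

	#include <u.h>
	#include <libc.h>

	/* degree of coupling: bytes moved between CPUs divided by
	 * bytes accessed in local memory (illustrative numbers only) */
	double
	coupling(uvlong remote, uvlong local)
	{
		if(local == 0)
			return 0.0;
		return (double)remote / (double)local;
	}

	void
	main(void)
	{
		/* C.mmp-like: most accesses cross the interconnect */
		print("tight: %f\n", coupling((uvlong)900*1024*1024, (uvlong)1024*1024*1024));
		/* SETI@home-like: a trickle of work units, heavy local compute */
		print("loose: %f\n", coupling((uvlong)1024*1024, (uvlong)1024*1024*1024));
		exits(nil);
	}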

Of course, until recently, the rate at which CPU fetches
must occur to keep the pipeline full has grown much
faster than network speeds.  So the idea of remote
memory hasn't been all that useful.  However, I wouldn't
be surprised to see that change over the next 10 to 20
years.  So maybe my local CPU will gain access to most
of its memory by importing /dev/memctl from a memory
server (1/2 :))

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 Absolutely, but part of what has changed over the past 20
 years is that the rate at which this local processing power
 has grown has been faster than the rate at which the processing
 power of the rack-mount box in the machine room has
 grown (large clusters notwithstanding, that is).  So the
 gap between them has narrowed.
 
 or, we have miserably failed as of late in putting every cycle we
 can dream about to good use; we'd care more about the cycles
 of a cpu server if we were better at using them up.

What?  Dancing icons and sound effects for menu selections
are good use of cycles? :)

   every cycle's perfect, every cycle's great
   if one cycle's wasted, god gets quite irate

I often tell my students that every cycle used by overhead
(kernel, UI, etc) is a cycle taken away from doing the work
of applications.  I'd much rather have my DNA sequencing
application finish in 25 days instead of 30 than to have
the system look pretty during those 30 days.

 that, plus the fact that the mhz wars are dead and
 gone.

Does that mean we're all playing core wars now? :)

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread erik quanstrom
On Fri Apr 17 14:21:03 EDT 2009, tlaro...@polynum.com wrote:
 On Fri, Apr 17, 2009 at 01:29:09PM -0400, erik quanstrom wrote:
   In some sense, logically (but not efficiently: read the caveats in the
   Plan9 papers; a processor is nothing without tightly coupled memory, so
   memory is not a remote pool sharable---Mach!), 
  
  if you look closely enough, this kind of breaks down.  numa
  machines are pretty popular these days (opteron, intel qpi-based
  processors).  it's possible with a modest loss of performance to
  share memory across processors and not worry about it.
 
 NUMA are, from my point of view, tightly connected.
 
 By loosely, I mean memory accessed by non-dedicated processor
 hardware means (if this makes sense). Moving data from different
 memories via some IP-based protocol or worse. But all in all,
 finally a copy is put in the tightly connected memory, whether huge
 caches, or dedicated main memory.

why do you care what gives you the illusion of a large, flat
address space?  that is, what is special about having a quick
path network instead of, say, infiniband or ethernet?

why does networking imply ip networking?

my point is that i think we need to recognize that there are vast differences
in performance between, say, local memory, memory across the quickpath
bus, and memory on the next machine, and these differences may vary
greatly between one set of machines and another.

then, the 64¢ question is, how does one use this to one's advantage
without assuming ahead of time what's faster than what.

(one could easily imagine a 40gbps ethernet connection being competitive
with a 3-hop numa connection.)
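
for illustration, one way to answer that empirically is the usual
pointer-chasing trick -- a sketch in plan 9 c, nothing specific to
quickpath or ethernet; point the buffer at whatever memory you want
to measure:

	#include <u.h>
	#include <libc.h>

	enum { N = 1<<20 };	/* ~4MB of indices: larger than cache */

	/* each load depends on the previous one, so the loop times
	 * raw access latency for wherever buf happens to live */
	void
	main(void)
	{
		ulong *buf, i, j, t, p;
		vlong t0;

		buf = malloc(N*sizeof buf[0]);
		if(buf == nil)
			sysfatal("malloc: %r");
		for(i = 0; i < N; i++)
			buf[i] = i;
		srand(1);
		for(i = N-1; i > 0; i--){	/* sattolo: one big cycle */
			j = nrand(i);
			t = buf[i], buf[i] = buf[j], buf[j] = t;
		}
		p = 0;
		t0 = nsec();
		for(i = 0; i < N; i++)
			p = buf[p];	/* serialized dependent loads */
		print("%lld ns/access (%lud)\n", (nsec()-t0)/N, p);
		exits(nil);
	}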

- erik



Re: [9fans] Help for a home user discovering Plan 9

2009-04-17 Thread blstuart
 There's aquarela which is a CIFS server, but I'm not sure
 about client.  I seem to remember it being worked on at
 one point, but I'm not sure if it was ever completed.
 
 cifs(1) (cifs client) is alive and well at contrib/install steve/cifs

I happily stand corrected.

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread lucio
 But the question in my mind for a while
 has been, is it time for another step back and rethinking
 the big picture?

Maybe, and maybe what we ought to look at is precisely what Plan 9
skipped, with good reason, in its infancy: distributed core
resources or the platform as a filesystem.

What struck me when first looking at Xen, long after I had decided
that there was real merit in VMware, was that it allowed migration as
well as checkpoint/restarting of guest OS images with the smallest
amount of administration.  Today, to me, that means distributed
virtualisation.  So, back to my first impression: Plan 9 would make a
much better foundation for a virtualiser than any of the other OSes
currently in use (limited to my experience, there may be something in
the league of IBM's 1960s VMS (do I remember right?  sanctions made
IBM a little scarce in my formative years) out there that I don't know
about).  Given a Plan 9 based virtualiser, are we far from using
long-running applications and migrating them in flight from whichever
equipment may have been useful yesterday to whatever is handy today?

The way I see it, we would progress from conventional utilities strung
together with Windows' crappy glue to having a single profile
application, itself a virtualiser's guest, which includes any
activities you may find useful online.  It sits on the web and follows
you around, wherever you go.  It is engineered against any possible
failures, including security-related ones and is always there for you.
Add Venti to its persistent objects and you can also rewind to a
better past state.

Do you not like it?  It smacks of Inferno and o/mero on top of a
virtualiser-enhanced Plan 9.  Those who might prefer the conventional
Windows/Linux platforms may have to wait a little longer before they
figure out how to catch up :-)

++L




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread erik quanstrom
 I often tell my students that every cycle used by overhead
 (kernel, UI, etc) is a cycle taken away from doing the work
 of applications.  I'd much rather have my DNA sequencing
 application finish in 25 days instead of 30 than to have
 the system look pretty during those 30 days.

i didn't mean to imply we should not be frugal with cycles.
i meant to say that we simply don't have anything useful to
do with a vast majority of cycles, and that's just as wasteful
as doing bouncing icons.  we need to work on that problem.

 that, plus the fact that the mhz wars are dead and
  gone.
 
 Does that mean we're all playing core wars now? :)

yes it does.  i've got $50 that says that in 2011 we'll be
saying that this one goes to eleven (cores).

- erik



[9fans] Plan9 - the next 20 years

2009-04-17 Thread Steve Simon
I cannot find the reference (sorry), but I read an interview with Ken
(Thompson) a while ago.

He was asked what he would change if he were working on plan9 now,
and his reply was something like "I would add support for cloud computing".

I admit I am not clear exactly what he meant by this.

-Steve



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Gorka Guardiola
On Fri, Apr 17, 2009 at 3:14 PM, Balwinder S Dheeman bdhee...@gmail.com wrote:

 Please set aside rare cases and let us know who except for the students,
 teachers and, or researchers uses Plan9 and, or Inferno in the offices,
 homes and, or cafes and for what?

 The Plan9 project started in 1980, took around 9 years to be solid
 enough to be usable and that too by the internal and, or lab people
 [http://plan9.bell-labs.com/sys/doc/9.html] only. Whereas, the FreeBSD
 and, or Linux (though not an OS or Unix variant in a sense) came into
 existence later, in 1993 and 1991 respectively, and are more popular than
 any other variants of Unix.

That is the difference between coming up with a design and rethinking the
system, and just copying one and porting software already written. Linux
started mostly using all the gnu stuff and copied all the design from already
existing Unix things. That of course takes less time than rethinking
everything carefully from scratch. For example UTF, among other things.

That said, what is the point of this discussion?  Use whatever you want
and have fun. I use 4 or 5 operating systems for different things.
One of them is Plan 9, not only for teaching but as infrastructure.
For example, this is the CMS for our courses:
http://lsub.org/magic/group?o=ig=c
And we run several labs which run diskless for teaching and so on.
This infrastructure serves hundreds of students. I can even have 100 computers
running diskless with students, with daily automatic incremental backups (venti),
using the CMS (yes, with abaco), and compiling and running programs
at the same time against one file server. Try that with *any* other
operating system (and our hardware infrastructure).

Then again, that may not be solid enough for you. I happen to work
at a University, sorry.

I also run Mac OS and use it for web browsing; Windows for several
devices (like a USB sniffer) for which I don't have drivers, nor do I
feel like writing them; Linux on my iLiad ebook reader;
and inferno/octopus for integrating all this stuff into a usable environment.
And sometimes even others.

If Plan 9 is not useful for you, and you don't see how it could be, fine: don't use it.

For me it is.
-- 
- curiosity sKilled the cat



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread tlaronde
On Fri, Apr 17, 2009 at 01:31:12PM -0500, blstu...@bellsouth.net wrote:
 
 Absolutely, but part of what has changed over the past 20
 years is that the rate at which this local processing power
 has grown has been faster than the rate at which the processing
 power of the rack-mount box in the machine room has
 grown (large clusters notwithstanding, that is).  So the
 gap between them has narrowed.

This is a geek attitude ;) You say that since I can buy something more
powerful (if I do not change the programs for fatter ones...) for the
same amount of money or a little more, I have to find something to do
with that.

My point of view is: if my terminal works, I keep it. If not, I buy
something cheaper, including in TCO, for happily doing the work that has
to be done ;)

I don't have to buy expensive things and try to find something to do
with them.

I try to have hardware that matches my needs.

And I prefer to put money into a CPU: more powerful, far from average
user creativity, and the only beast I have to manage.

 
  The processing is then better kept on a single CPU, handling the
  concurrency (and not the fileserver trying to accommodate). The views are
  multiplexed, but not the handling of the data
 
 That is part of the conversation the question is meant
 to raise.  If cycles/second isn't as strong a justification
 for separate CPU servers, then are there other reasons
 we should still have the separation?  If so, do we need
 to think differently about the model?

The main point I have discovered very recently is that giving access to 
the system resources is a centralized thing, and that a logical user
can have several distinct sessions on several distinct terminals, but
these are just views: the data opened, especially for random rw, is
opened by a single program.

Fileservers have only to provide what they do provide:

1) Random read/write for a unique user.

2) Append only for shared data. (In KerGIS for example, some attributes
can be shared among users. So distinct (logical) users can open a file
rw, but they only append/write, and the semantics of the data are such
that appending record n+1 doesn't invalidate records [0,n]---records are
partitions, there is no overlapping. Changing the records (random
access) is possible but the cases are rare, and the stuff is done by the
user manager (another logical user)).

So the semantics of the data and the handling of users are such that a
user can randomly read/write (not sharable), a group can append/write
but without modifying records, and others can only (perhaps) read.
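
As a sketch of case 2 (file name and record layout invented): on Plan 9
an append-only file lets the fileserver itself enforce that group
writers add records without being able to touch [0,n]:

	#include <u.h>
	#include <libc.h>

	typedef struct Rec Rec;
	struct Rec
	{
		ulong	key;
		char	val[60];
	};

	/* DMAPPEND makes the server force every write to the end of
	 * the file, so concurrent appenders cannot clobber old records */
	void
	main(void)
	{
		int fd;
		Rec r;

		fd = open("/usr/kergis/attrs", OWRITE);
		if(fd < 0)	/* first writer creates it append-only */
			fd = create("/usr/kergis/attrs", OWRITE, DMAPPEND|0664);
		if(fd < 0)
			sysfatal("attrs: %r");
		r.key = 42;
		strecpy(r.val, r.val+sizeof r.val, "record n+1; [0,n] untouched");
		if(write(fd, &r, sizeof r) != sizeof r)
			sysfatal("write: %r");
		exits(nil);
	}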

-- 
Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread tlaronde
On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
 I cannot find the reference (sorry), but I read an interview with Ken
 (Thompson) a while ago.
 
 He was asked what he would change if he were working on plan9 now,
 and his reply was something like "I would add support for cloud computing".

 I admit I am not clear exactly what he meant by this.

My interpretation of cloud computing is precisely the split done by
plan9 with terminal/CPU/FileServer: a UI running on a Terminal, with
actual computing done somewhere, on data stored somewhere.

Perhaps tools for migrating tasks or managing the thing. But I have the
impression that the Plan 9 framework is the best for such a scheme.
-- 
Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C



Re: [9fans] Rails? (was Re: web server)

2009-04-17 Thread Tom Lieber
On Thu, Apr 16, 2009 at 2:51 PM, erik quanstrom quans...@quanstro.net wrote:
 without some constraints on the data, you can't show that your design
 works.  without some idea of what the data could be, how do you pick
 appropriate algorithms?

The point of the model is to enforce constraints. It is the gateway to
the data store. The algorithms are part of the model code.

 and then two weeks later the director of marketing would be in my office
 talking about his new idea.  it was uncanny how it managed to always ask
 for something we just couldn't do.

Rails' model library gets bigger and bigger all the time!

-- 
Tom Lieber
http://AllTom.com/



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 3:43 PM,  tlaro...@polynum.com wrote:
 On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
 I cannot find the reference (sorry), but I read an interview with Ken
 (Thompson) a while ago.

 He was asked what he would change if he were working on plan9 now,
 and his reply was something like "I would add support for cloud computing".

 I admit I am not clear exactly what he meant by this.

 My interpretation of cloud computing is precisely the split done by
 plan9 with terminal/CPU/FileServer: a UI running on a Terminal, with
 actual computing done somewhere, on data stored somewhere.

The problem is that the CPU and Fileservers can't be assumed to be
static. Things can and will go down, move about, and become
temporarily unusable over time.


 Perhaps tools for migrating tasks or managing the thing. But I have the
 impression that the Plan 9 framework is the best for such a scheme.
 --
 Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
                 http://www.kergis.com/
 Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C





Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread Eris Discordia

even today on an average computer one has this articulation: a CPU (with
a FPU perhaps); tightly or loosely connected storage (?ATA or SAN);
graphical capacities (terminal): GPU.


It so happens that a reversal of specialization has really taken place, as 
Brian Stuart suggests. These terminals you speak of, GPUs, contain such 
vast untapped general processing capabilities that new uses and a new 
framework for using them are being defined: GPGPU and OpenCL.


http://en.wikipedia.org/wiki/OpenCL
http://en.wikipedia.org/wiki/GPGPU

Right now, the GPU on my low-end video card takes a huge burden off of the 
CPU when leveraged by the right H.264 decoder. Two high definition AVC 
streams would significantly slow down my computer before I began using a 
CUDA-enabled decoder. Now I can easily play four in parallel.


Similarly, the GPUs in PS3 boxes are being integrated into one of the 
largest loosely-coupled clusters on the planet.


http://folding.stanford.edu/English/FAQ-highperformance

Today, even a mere cellphone may contain enough processing power to run a 
low-traffic web server or a 3D video game. This processing power comes 
cheap so it is mostly wasted.


I'd like to add to Brian Stuart's comments the point that previous
specialization of various boxes is mostly disappearing. At some point in
the near future all boxes may contain identical or very similar powerful
hardware--even probably all integrated into one black box. So cheap that
it doesn't matter if one or another hardware resource is wasted. To put
such a computational environment to good use, system software should stop
incorporating a role-based model of various installations. All boxes,
except the costliest, most special ones, shall be peers.


--On Friday, April 17, 2009 7:11 PM +0200 tlaro...@polynum.com wrote:


On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstu...@bellsouth.net wrote:

- First, the gap between the computational power at the
terminal and the computational power in the machine room
has shrunk to the point where it might no longer be significant.
It may be worth rethinking the separation of CPU and terminal.
For example, I'm typing this in acme running in a 9vx terminal
booted using a combined fs/cpu/auth server for the
file system.  But I rarely use the cpu server capability of
that machine.


I'm afraid I don't quite agree with you.

The definition of a terminal has changed. In Unix, the graphical
interface (X11) was a graphical variant of the text terminal interface,
i.e. the articulation (link, network) was put in the wrong place,
the graphical terminal (X11 server) being a kind of dumb terminal (a
little above a frame buffer), leaving all the processing, including the
handling of the graphical interface (generating the image,
administrating the UI, the menus) on the CPU (Xlib and toolkits run on
the CPU, not the Xserver).

A terminal is not a device with no processing capabilities (a dumb
terminal): it can be a full terminal, that is, one able to handle the
interface, the representation of data and commands (wandering in a menu
shall be terminal stuff; other users must not be impacted by a user's
wandering through the UI).

More and more, for administration, using light terminals, without
software installations, is a way to go (less resources in TCO). Green
technology. Dataless terminals for security (one loses a terminal, not
the data), and dataless for safety (data is centralized and protected).


Secondly, one is accustomed to a physical user being several distinct
logical users (accounts), for managing different tasks, or accessing
different kind of data.

But (to my surprise), the converse is true: a collection of individuals
can be a single logical user, having to handle concurrently the very
same rw data. Terminals are then just distinct views of the same data
(imagine in a CAD program having different windows, different views of a
file ; this is the same, except that the windows are on different
terminals, with different instances of the logical user in front of
them).

The processing is then better kept on a single CPU, handling the
concurrency (and not the fileserver trying to accommodate). The views are
multiplexed, but not the handling of the data.

Thirdly, you can have a slow/loose link between a CPU and a terminal
since the commands are only a small fraction of the processing done.
You must have a fast or tight link between the CPU and the fileserver.

In some sense, logically (but not efficiently: read the caveats in the
Plan9 papers; a processor is nothing without tightly coupled memory, so
memory is not a remote pool sharable---Mach!), even today on an
average computer one has this articulation: a CPU (with a FPU
perhaps); tightly or loosely connected storage (?ATA or SAN);
graphical capacities (terminal): GPU.

--
Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C









Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 2:59 PM, Eris Discordia
eris.discor...@gmail.com wrote:
 even today on an average computer one has this articulation: a CPU (with
 a FPU perhaps); tightly or loosely connected storage (?ATA or SAN);
 graphical capacities (terminal): GPU.

 It so happens that a reversal of specialization has really taken place, as
 Brian Stuart suggests. These terminals you speak of, GPUs, contain such
 vast untapped general processing capabilities that new uses and a new
 framework for using them are being defined: GPGPU and OpenCL.

 http://en.wikipedia.org/wiki/OpenCL
 http://en.wikipedia.org/wiki/GPGPU

 Right now, the GPU on my low-end video card takes a huge burden off of the
 CPU when leveraged by the right H.264 decoder. Two high definition AVC
 streams would significantly slow down my computer before I began using a
 CUDA-enabled decoder. Now I can easily play four in parallel.

 Similarly, the GPUs in PS3 boxes are being integrated into one of the
 largest loosely-coupled clusters on the planet.

 http://folding.stanford.edu/English/FAQ-highperformance

 Today, even a mere cellphone may contain enough processing power to run a
 low-traffic web server or a 3D video game. This processing power comes cheap
 so it is mostly wasted.

I can't find the link, but a recent article described someone's
efforts at CMU to develop what he calls FAWN, a "Fast Array of Wimpy
Nodes". He basically took a bunch of eeePC boards and turned them into
a single computer.

The performance per watt of such an array was staggeringly higher than
that of a monster computer with Xeons and disks.

So hopefully in the future, we will be able to have more fine-grained
control over such things and fewer cycles will be wasted. It's time
people realized that CPU cycles are a bit like employment. Sure
UNemployment is a problem, but so is UNDERemployment, and the latter
is sometimes harder to gauge.


 I'd like to add to Brian Stuart's comments the point that previous
 specialization of various boxes is mostly disappearing. At some point in
 the near future all boxes may contain identical or very similar powerful
 hardware--even probably all integrated into one black box. So cheap that
 it doesn't matter if one or another hardware resource is wasted. To put
 such a computational environment to good use, system software should stop
 incorporating a role-based model of various installations. All boxes, except
 the costliest, most special ones, shall be peers.

 --On Friday, April 17, 2009 7:11 PM +0200 tlaro...@polynum.com wrote:

 On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstu...@bellsouth.net wrote:

 - First, the gap between the computational power at the
 terminal and the computational power in the machine room
 has shrunk to the point where it might no longer be significant.
 It may be worth rethinking the separation of CPU and terminal.
 For example, I'm typing this in acme running in a 9vx terminal
 booted using a combined fs/cpu/auth server for the
 file system.  But I rarely use the cpu server capability of
 that machine.

 I'm afraid I don't quite agree with you.

 The definition of a terminal has changed. In Unix, the graphical
 interface (X11) was a graphical variant of the text terminal interface,
 i.e. the articulation (link, network) was put in the wrong place,
 the graphical terminal (X11 server) being a kind of dumb terminal (a
 little above a frame buffer), leaving all the processing, including the
 handling of the graphical interface (generating the image,
 administrating the UI, the menus) on the CPU (Xlib and toolkits run on
 the CPU, not the Xserver).

 A terminal is not a device with no processing capabilities (a dumb
 terminal): it can be a full terminal, that is, one able to handle the
 interface, the representation of data and commands (wandering in a menu
 shall be terminal stuff; other users must not be impacted by a user's
 wandering through the UI).

 More and more, for administration, using light terminals, without
 software installations, is a way to go (less resources in TCO). Green
 technology. Dataless terminals for security (one loses a terminal, not
 the data), and dataless for safety (data is centralized and protected).


 Secondly, one is accustomed to a physical user being several distinct
 logical users (accounts), for managing different tasks, or accessing
 different kind of data.

 But (to my surprise), the converse is true: a collection of individuals
 can be a single logical user, having to handle concurrently the very
 same rw data. Terminals are then just distinct views of the same data
 (imagine in a CAD program having different windows, different views of a
 file ; this is the same, except that the windows are on different
 terminals, with different instances of the logical user in front of
 them).

 The processing is then better kept on a single CPU, handling the
 concurrency (and not the fileserver trying to accommodate). The views are
 multiplexed, but not the handling of the data.

 Thirdly, you 

Re: [9fans] security questions

2009-04-17 Thread John Barham
Robert Raschke wrote:
 Also note there's a new book out that includes Inferno as a major
 example, essentially explaining OS principles in general, in Inferno,
 and in Linux:

 Principles of Operating Systems: Design and Applications
 by Brian Stuart

 ( http://www.amazon.co.uk/exec/obidos/ASIN/1418837695 )

 I've only just started reading it, so can't really comment on how good
 it is yet. Looks promising so far though.

I recently bought this book and have read most of it.  It's especially
good at bridging the gap between OS theory and the gritty details of
implementation with clear explanations of selected source code
extracts from the Inferno and Linux kernels.  The chapter on Inferno
process management and its scheduler is especially illuminating.

Although it focuses on the implementation of Inferno I've also found
it helpful for understanding the Plan 9 kernel since it covers the
Inferno device driver model, viz. embedded 9p/Styx servers.  It also
reviews the Inferno implementation of kfs, which is written in Limbo,
but the mental translation to C is easy.

  John



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread Eric Van Hensbergen
On Fri, Apr 17, 2009 at 2:43 PM,  tlaro...@polynum.com wrote:
 On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
 I cannot find the reference (sorry), but I read an interview with Ken
 (Thompson) a while ago.


 My interpretation of cloud computing is precisely the split done by
 plan9 with terminal/CPU/FileServer: a UI running on a Terminal, with
 actual computing done somewhere, on data stored somewhere.


That misses the dynamic nature which clouds could enable -- something
we lack as well with our hardcoded /lib/ndb files -- there are no
provisions for cluster resources coming and going (or failing) and no
control facilities given for provisioning (or deprovisioning) those
resources in a dynamic fashion.  Lucho's kvmfs (and to a certain
extent xcpu) seem like steps in the right direction -- but IMHO more
fundamental changes need to occur in the way we think about things.  I
believe the file system interfaces While not focused on cloud
computing in particular, the work we are doing under HARE aims to
explore these directions further (both in the context of Plan
9/Inferno as well as broader themes involving other platforms).

For hints/ideas/whatnot you can check the current pubs (more coming
soon): http://www.research.ibm.com/hare

  -eric



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread John Barham
Steve Simon wrote:
 I cannot find the reference (sorry), but I read an interview with Ken
 (Thompson) a while ago.

 He was asked what he would change if he were working on plan9 now,
 and his reply was something like "I would add support for cloud computing".

Perhaps you were thinking of his "Ask a Google engineer" answers at
http://moderator.appspot.com/#15/e=c9t=2d, specifically the question
"If you could redesign Plan 9 now (and expect similar uptake to UNIX),
what would you do differently?"



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread Benjamin Huntsman
Speaking of NUMA and such though, is there even any support for it in
the kernel?  I know we have a 10gb Ethernet driver, but what about
cluster interconnects such as InfiniBand, Quadrics, or Myrinet?  Are
such things even desired in Plan 9?

I'm glad to see process migration has been mentioned.

Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 I often tell my students that every cycle used by overhead
 (kernel, UI, etc) is a cycle taken away from doing the work
 of applications.  I'd much rather have my DNA sequencing
 application finish in 25 days instead of 30 than to have
 the system look pretty during those 30 days.
 
 i didn't mean to imply we should not be frugal with cycles.
 i meant to say that we simply don't have anything useful to
 do with a vast majority of cycles, and that's just as wasteful
 as doing bouncing icons.  we need to work on that problem.

I gotcha.  I guess it depends on what you're doing.  I remember
years ago running a simulation on the 11/750 we had.  It
simulated a DSP chip running 2 seconds of real time.  It
ran for over a week.  (While it was running, I took the time
to write another, faster simulator that was able to run
the simulation in about 2 hours.)  For something like that,
we can certainly use all the cycles we can get.  On the other
hand, I might have looked for a faster way to compile a kernel
a while back, but now it compiles fast enough on most
any machine that I'm not too concerned about where to
use the cycles.  (I'm speaking of a Plan 9 or Inferno kernel
here; not a *BSD or Linux kernel.)  But I suspect that
virtualization and Dis-style VMs are a pretty good use of
cycles we have to spare.

  that, plus the fact that the mhz wars are dead and
  gone.
 
 Does that mean we're all playing core wars now? :)
 
 yes it does.  i've got $50 that says that in 2011 we'll be
 saying that this one goes to eleven (cores).

Excellent.  I never expected to see core wars and Spinal
Tap in the same discussion about Plan 9.

BLS




Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 4:14 PM, Eric Van Hensbergen eri...@gmail.com wrote:
 On Fri, Apr 17, 2009 at 2:43 PM,  tlaro...@polynum.com wrote:
 On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
 I cannot find the reference (sorry), but I read an interview with Ken
 (Thompson) a while ago.


 My interpretation of cloud computing is precisely the split done by
 plan9 with terminal/CPU/FileServer: a UI running on a Terminal, with
 actual computing done somewhere, on data stored somewhere.


 That misses the dynamic nature which clouds could enable -- something
 we lack as well with our hardcoded /lib/ndb files -- there are no
 provisions for cluster resources coming and going (or failing) and no
 control facilities given for provisioning (or deprovisioning) those
 resources in a dynamic fashion.  Lucho's kvmfs (and to a certain
 extent xcpu) seem like steps in the right direction -- but IMHO more
 fundamental changes need to occur in the way we think about things.  I
 believe the file system interfaces While not focused on cloud
 computing in particular, the work we are doing under HARE aims to
 explore these directions further (both in the context of Plan
 9/Inferno as well as broader themes involving other platforms).

Vidi also seems to be an attempt to make Venti work in such a dynamic
environment. IMHO, the assumption that computers are always connected
to the network was a fundamental mistake in Plan 9.


 For hints/ideas/whatnot you can check the current pubs (more coming
 soon): http://www.research.ibm.com/hare

      -eric





Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 What struck me when first looking at Xen, long after I had decided
 that there was real merit in VMware, was that it allowed migration as
 well as checkpoint/restarting of guest OS images with the smallest
...
 
 The way I see it, we would progress from conventional utilities strung
 together with Windows' crappy glue to having a single profile
 application, itself a virtualiser's guest, which includes any
 activities you may find useful online.  It sits on the web and follows

I guess I'm a little slow; it's taken me a little while to get
my head around this and understand it.  Let me see if I've
got the right picture.  When I login I basically look up a
previously saved session in much the same way that LISP
systems would save a whole environment.  Then when I
log off my session is suspended and saved.  Alternatively,
I could always log into the same previously saved state.

 you around, wherever you go. ...
 
 Do you not like it? 

If I understand it, I at least find it interesting.  (I think I'd
have to try using it before I decided on preference.)  I can
easily see different saved environments that I use depending
on whether I'm at home or at work or wherever.  But what
happens if I'm not on any network at all?  The more I think
about it, the more I think this could be handled with the
same mechanism that handles better integration of laptops
and file servers.

 It smacks of Inferno and o/mero on top of a
 virtualiser-enhanced Plan 9. 

Hmmm.  It might be pretty easy to whip up a prototype
based on Inferno.  I must give this some thought...

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 Absolutely, but part of what has changed over the past 20
 years is that the rate at which this local processing power
 has grown has been faster than the rate at which the processing
 power of the rack-mount box in the machine room has
 grown (large clusters notwithstanding, that is).  So the
 gap between them has narrowed.
 
 This is a geek attitude ;) You say that since I can buy something more
 powerful (if I do not change the programs for fatter ones...) for the
 same amount of money or a little more, I have to find something to do
 with that.

I'm not sure I follow.  The point where I would do something
special to get a more powerful system is several years past.
For example, a little over a year ago, the hinges on my work
laptop broke.  When ordering a new one, there was no need
to get a quote for one more powerful than the corporate standard
ones.  The ones in the catalog were powerful enough to do
pretty much anything I needed.  This is partly because the
performance has grown faster than my need and because the
performance gap with larger systems has closed.  In '89, a
desktop box would be something along the lines of an early
SPARCstation.  There was a pretty large gap between its
power and that of a large SGI machine one might use for a
CPU server.  Today, the difference between a base-model
machine and a single machine CPU server isn't as big as it
once was.

 My point of view is: if my terminal works, I keep it. If not, I buy
 something cheaper, including in TCO, for happily doing the work that has
 to be done ;)

I don't disagree.  For that matter, pretty much all the machines
I use here at home are ones that were surplus and I rescued.
But once you get to the point where the cheapest one you can
find has more than enough capability, performance ceases to
be a motivator for a separate CPU server.

Again, that's not to say that there aren't other valid motivators
for some centralized functionality.  It's just that in my opinion,
we're at the point where if it's raw cycles we need, we'll have
to be looking at a large cluster and not a simple CPU server.

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread blstuart
 I'd like to add to Brian Stuart's comments the point that previous 
 specialization of various boxes is mostly disappearing. At some point in
 the near future all boxes may contain identical or very similar powerful
 hardware--even probably all integrated into one black box. So cheap that 

The domination of the commodity reminds me a lot of the
parallel processing world.  At one time, the big honkin'
machines had very custom interconnect designs and often
custom CPUs as well.  But by the time commodity CPUs
got to the point where they were competitive with what
you could do custom, and Ethernet got to the point where
it was competitive with what you could do custom, it
became very rare that you could justify a custom machine.
It was much more cost-effective to build a large cluster
of commodity machines.

For me, personally, this is leading to a point where my
home network is converging on a collection of laptops,
some get used the way most laptops get used, and some
just sit closed on shelves in the rack.  The primary hardware
differences between servers and terminals is that servers
have bigger disks and the lids on terminals tend to stay
open where on servers they tend to stay closed.  It's
getting farther away from the blinkin lights I miss, but
it sure makes my office more comfortable in the summer
both in terms of heat and noise.  Now if I could just get
that Cisco switch to be quieter...

BLS




Re: [9fans] security questions

2009-04-17 Thread blstuart
 Principles of Operating Systems: Design and Applications
 by Brian Stuart

 ( http://www.amazon.co.uk/exec/obidos/ASIN/1418837695 )

 I've only just started reading it, so can't really comment on how good
 it is yet. Looks promising so far though.
 
 I recently bought this book and have read most of it.  It's especially
 good at bridging the gap between OS theory and the gritty details of
 implementation with clear explanations of selected source code
 extracts from the Inferno and Linux kernels.  The chapter on Inferno
 process management and its scheduler is especially illuminating.
 
 Although it focuses on the implementation of Inferno I've also found
 it helpful for understanding the Plan 9 kernel since it covers the
 Inferno device driver model, viz. embedded 9p/Styx servers.  It also
 reviews the Inferno implementation of kfs, which is written in Limbo,
 but the mental translation to C is easy.

Thank you.  I'm glad you're finding it useful.

BLS




Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread tlaronde
On Fri, Apr 17, 2009 at 04:25:40PM -0500, blstu...@bellsouth.net wrote:
 
 Again, that's not to say that there aren't other valid motivators
 for some centralized functionality.  It's just that in my opinion,
 we're at the point where if it's raw cycles we need, we'll have
 to be looking at a large cluster and not a simple CPU server.

Well there is perhaps a hint about what we disagree about. I'm not using
CPU with the strict present meaning in Plan 9 but as a _logical_
processing unit (this can actually be, in this scheme, a cluster or
whatever).

This does not invalidate the logical difference between a terminal and
a CPU. A node can be both a CPU (resp. member of a CPU) and a terminal
etc. The plan 9 distinction, on the usage side and on the topology,
between FileServer, CPU and Terminal is sound and fundamental IMHO.

Enough for me at the moment since, even if I have some things on the
application side, for the rest my discussion of cloud computing could
be a discussion about vapor computing ;)
-- 
Thierry Laronde (Alceste) tlaronde +AT+ polynum +dot+ com
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread Francisco J Ballesteros
Well, in the octopus you have a fixed part, the pc, but all other  
machines come and go. The feeling is very much that your stuff is in  
the cloud.


I mean, not everything has to be dynamic.

On 17/04/2009, at 22:17, eri...@gmail.com wrote:


On Fri, Apr 17, 2009 at 2:43 PM, tlaro...@polynum.com wrote:

On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
I cannot find the reference (sorry), but I read an interview with Ken
(Thompson) a while ago.



My interpretation of cloud computing is precisely the split done by
plan9 with terminal/CPU/FileServer: a UI running on a Terminal, with
actual computing done somewhere, on data stored somewhere.



That misses the dynamic nature which clouds could enable -- something
we lack as well with our hardcoded /lib/ndb files -- there are no
provisions for cluster resources coming and going (or failing) and no
control facilities given for provisioning (or deprovisioning) those
resources in a dynamic fashion. Lucho's kvmfs (and to a certain
extent xcpu) seem like steps in the right direction -- but IMHO more
fundamental changes need to occur in the way we think about things. I
believe the file system interfaces While not focused on cloud
computing in particular, the work we are doing under HARE aims to
explore these directions further (both in the context of Plan
9/Inferno as well as broader themes involving other platforms).

For hints/ideas/whatnot you can check the current pubs (more coming
soon): http://www.research.ibm.com/hare

-eric

[/mail/box/nemo/msgs/200904/38399]




Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread ron minnich
if you want to look at checkpointing, it's worth going back to look at
Condor, because they made it really work. There are a few interesting
issues that you need to get right. You can't make it 50% of the way
there; that's not useful. You have to hit all the bits -- open /tmp
files, sockets, all of it. It's easy to get about 90% of it but the
last bits are a real headache. Nothing that's come along since has
really done the job (although various efforts claim to, you have to
read the fine print).

ron



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 6:15 PM, ron minnich rminn...@gmail.com wrote:
 if you want to look at checkpointing, it's worth going back to look at
 Condor, because they made it really work. There are a few interesting
 issues that you need to get right. You can't make it 50% of the way
 there; that's not useful. You have to hit all the bits -- open /tmp
 files, sockets, all of it. It's easy to get about 90% of it but the
 last bits are a real headache. Nothing that's come along since has
 really done the job (although various efforts claim to, you have to
 read the fine print).

 ron



Amen. Linux is currently having a seriously hard time getting C/R
working properly, just because of the issues you mention. The second
you mix in non-local resources, things get pear-shaped.

Unfortunately, even if it does work, it will probably not have the
kind of nice Plan 9-ish semantics I can envision it having.



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread ron minnich
On Fri, Apr 17, 2009 at 3:35 PM, J.R. Mauro jrm8...@gmail.com wrote:

 Amen. Linux is currently having a seriously hard time getting C/R
 working properly, just because of the issues you mention. The second
 you mix in non-local resources, things get pear-shaped.

it's not just non-local. It's local too.

you are on a node. you open /etc/hosts. You C/R to another node with
/etc/hosts open. What's that mean?

You are on a node. you open a file in a ramdisk. Other programs have
it open too. You are watching each other's writes. You C/R to another
node with the file open. What's that mean?

You are on a node. You have a pipe to a process on that node. You C/R
to another node. Are you still talking at the end?

And on and on. It's quite easy to get this stuff wrong. But true C/R
requires that you get it right. The only system that would get this
stuff mostly right that I ever used was Condor. (and, well the Apollo
I think got it too, but that was a ways back).

ron



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread Mechiel Lukkien
On Fri, Apr 17, 2009 at 04:25:40PM -0500, blstu...@bellsouth.net wrote:
 Again, that's not to say that there aren't other valid motivators
 for some centralized functionality.  It's just that in my opinion,
 we're at the point where if it's raw cycles we need, we'll have
 to be looking at a large cluster and not a simple CPU server.

exactly.

the main use of a cpu server for me (and many others i suspect) is running
network services.  it's still nice to have a machine that's always on for
that (my terminals are not stable/always on enough for providing services
to others).  perhaps cpu server is a wrong name name.  service server
anyone? ;)

mjl



Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Robert Raschke
On 4/17/09, Balwinder S Dheeman bdhee...@gmail.com wrote:
 Please set aside rare cases and let us know who except for the students,
 teachers and, or researchers uses Plan9 and, or Inferno in the offices,
 homes and, or cafes and for what?

At the risk (or maybe honour :-) of being branded as a rare case (I'm
neither student, nor teacher, nor hobbyist), I use Plan 9 to
maintain my own network, email, web server and wiki, and as a remote
editing facility (ftpfs); in terms of tools, I use acme a lot wherever I go. I
also use it as a handy way to store stuff centrally, for easy
worldwide access via drawterm. I would classify myself as slightly
paranoid, in that I don't really feel comfortable with letting Google
have at it willy nilly. Storing stuff at home may be more prone to
loss, but makes me feel better.

Plan 9 satisfies my curiosity in that I can understand and learn
things within it quite easily. Every time I have to use something like
Linux or MS, I feel overwhelmed by the sheer complexity of it all.
That's fine if it's for work (I get paid for that, after all), but not
for my private life.

Robby



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 7:01 PM, ron minnich rminn...@gmail.com wrote:
 On Fri, Apr 17, 2009 at 3:35 PM, J.R. Mauro jrm8...@gmail.com wrote:

 Amen. Linux is currently having a seriously hard time getting C/R
 working properly, just because of the issues you mention. The second
 you mix in non-local resources, things get pear-shaped.

 it's not just non-local. It's local too.

 you are on a node. you open /etc/hosts. You C/R to another node with
 /etc/hosts open. What's that mean?

 You are on a node. you open a file in a ramdisk. Other programs have
 it open too. You are watching each other's writes. You C/R to another
 node with the file open. What's that mean?

 You are on a node. You have a pipe to a process on that node. You C/R
 to another node. Are you still talking at the end?

 And on and on. It's quite easy to get this stuff wrong. But true C/R
 requires that you get it right. The only system that would get this
 stuff mostly right that I ever used was Condor. (and, well the Apollo
 I think got it too, but that was a ways back).

 ron



Yeah, the problem's bigger than I thought (not surprising since I
didn't think much about it). I'm having a hard time figuring out how
Condor handles these issues. All I can see from the documentation is
that it gives you warnings.

I can imagine a lot of problems stemming from open files could be
resolved by first attempting to import the process's namespace at the
time of checkpoint and, upon that failing, using cached copies of the
file made at the time of checkpoint, which could be merged later.

But this still has the 90% problem you mentioned.



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread ron minnich
On Fri, Apr 17, 2009 at 7:06 PM, J.R. Mauro jrm8...@gmail.com wrote:

 Yeah, the problem's bigger than I thought (not surprising since I
 didn't think much about it). I'm having a hard time figuring out how
 Condor handles these issues. All I can see from the documentation is
 that it gives you warnings.

the original condor just forwarded system calls back to the node it
was started from. Thus all system calls were done in the context of
the originating node and user.
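
A minimal sketch of that shape (an invented wire format, not Condor's
actual protocol): the migrated process marshals the call to its shadow
on the home node and uses whatever answer comes back:

	#include <u.h>
	#include <libc.h>

	enum { Topen = 1 };	/* one call type is enough for a sketch */

	/* runs on the migrated side: instead of opening locally, ship
	 * the request over netfd to the shadow process, which executes
	 * it in the context of the originating node and user */
	int
	remoteopen(int netfd, char *path, int mode)
	{
		char msg[256];
		int n;

		n = snprint(msg, sizeof msg, "%d %d %s", Topen, mode, path);
		if(write(netfd, msg, n+1) != n+1)
			return -1;
		n = read(netfd, msg, sizeof msg - 1);
		if(n <= 0)
			return -1;
		msg[n] = 0;
		return atoi(msg);	/* a handle valid on the home node, or -1 */
	}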


 But this still has the 90% problem you mentioned.

it's just plain harder than it looks ...

ron



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 10:39 PM, ron minnich rminn...@gmail.com wrote:
 On Fri, Apr 17, 2009 at 7:06 PM, J.R. Mauro jrm8...@gmail.com wrote:

 Yeah, the problem's bigger than I thought (not surprising since I
 didn't think much about it). I'm having a hard time figuring out how
 Condor handles these issues. All I can see from the documentation is
 that it gives you warnings.

 the original condor just forwarded system calls back to the node it
 was started from. Thus all system calls were done in the context of
 the originating node and user.

Best effort is a good place to start.



 But this still has the 90% problem you mentioned.

 it's just plain harder than it looks ...

Yeah. Every time I think of a way to address the corner cases, new ones crop up.



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread erik quanstrom
 I can imagine a lot of problems stemming from open files could be
 resolved by first attempting to import the process's namespace at the
 time of checkpoint and, upon that failing, using cached copies of the
 file made at the time of checkpoint, which could be merged later.

there's no guarantee to a process running in a conventional
environment that files won't change underfoot.  why would
condor extend a new guarantee?

maybe i'm suffering from lack of vision, but i would think that
to get to 100% one would need to think in terms of transactions
and have a fully transactional operating system.
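
a minimal sketch of that flavor (record format invented, nowhere near
a real transactional os): the commit flag is written only after the
body, so a reader or a restarted checkpoint never counts a
half-written record:

	#include <u.h>
	#include <libc.h>

	typedef struct Txn Txn;
	struct Txn
	{
		uchar	committed;	/* readers skip records still at 0 */
		uchar	body[63];
	};

	/* append the body first, then flip the flag last */
	int
	txnwrite(int fd, uchar *data, int n)
	{
		Txn t;
		vlong off;

		if(n < 0 || n > (int)sizeof t.body)
			return -1;
		memset(&t, 0, sizeof t);
		memmove(t.body, data, n);
		off = seek(fd, 0, 2);	/* append a pending record */
		if(off < 0 || write(fd, &t, sizeof t) != sizeof t)
			return -1;
		t.committed = 1;
		if(pwrite(fd, &t.committed, 1, off) != 1)
			return -1;
		return 0;
	}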

- erik



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread erik quanstrom
 Vidi also seems to be an attempt to make Venti work in such a dynamic
 environment. IMHO, the assumption that computers are always connected
 to the network was a fundamental mistake in Plan 9.

on the other hand, without this assumption, we would not have 9p.
it was a real innovation to dispense with underpowered workstations
with full administrative burdens.

i think it is anachronistic to consider the type of mobile devices we
have today.  in 1990 i knew exactly 0 people with a cell phone.  i had
a toshiba orange screen laptop from work, but in those days a 9600
baud vt100 was still a step up.

ah, the good old days.

none of this is to detract from the obviously good idea of being
able to carry around a working set and sync up with the main server
later without some revision control junk.  in fact, i was excited to
learn about fossil — i was under the impression from reading the
paper that that's how it worked.

speaking of vidi, do the vidi authors have an update on their work?
i'd really like to hear how it is working out.

- erik



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 11:37 PM, erik quanstrom quans...@quanstro.net wrote:
 I can imagine a lot of problems stemming from open files could be
 resolved by first attempting to import the process's namespace at the
 time of checkpoint and, upon that failing, using cached copies of the
 file made at the time of checkpoint, which could be merged later.

 there's no guarantee to a process running in a conventional
 environment that files won't change underfoot.  why would
 condor extend a new guarantee?

 maybe i'm suffering from lack of vision, but i would think that
 to get to 100% one would need to think in terms of transactions
 and have a fully transactional operating system.

 - erik


There's a much lower chance of files changing out from under you in a
conventional environment. If the goal is to make the unconventional
environment look and act like the conventional one, it will probably
have to try to do some of these things to be useful.



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Fri, Apr 17, 2009 at 11:56 PM, erik quanstrom quans...@quanstro.net wrote:
 Vidi also seems to be an attempt to make Venti work in such a dynamic
 environment. IMHO, the assumption that computers are always connected
 to the network was a fundamental mistake in Plan 9

 on the other hand, without this assumption, we would not have 9p.
 it was a real innovation to dispense with underpowered workstations
 with full adminstrative burdens.

 i think it is anachronistic to consider the type of mobile devices we
 have today.  in 1990 i knew exactly 0 people with a cell phone.  i had
 a toshiba orange screen laptop from work, but in those days a 9600
 baud vt100 was still a step up.

 ah, the good old days.

Of course it's easy to blame people for lack of vision 25 years later,
but with the rate at which computing moves in general, cell phones as
powerful as workstations should have been seen to be on their way
within the authors' lifetimes.

That said, Plan 9 was designed to furnish the needs of an environment
that might not ever have had iPhones and eeePCs attached to it even if
such things existed at the time it was made.

But I'll say that if anyone tries to solve these problems today, they
should not fall into the same trap, and look to the future. I hope
they'll consider how well their solution scales to computers so small
they're running through someone's bloodstream and so far away that
communication in one direction will take several light-minutes and be
subject to massive delay and loss.

It's not that ridiculous... teams are testing DTN, which hopes to
spread the internet to outer space, not only across this solar system,
but also to nearby stars. Now there's thinking forward!


 none of this is do detract from the obviously good idea of being
 able to carry around a working set and sync up with the main server
 later without some revision control junk.  in fact, i was excited to
 learn about fossil — i was under the impression from reading the
 paper that that's how it worked.

 speaking of vidi, do the vidi authors have an update on their work?
 i'd really like to hear how it is working out.

 - erik





Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread erik quanstrom
 But I'll say that if anyone tries to solve these problems today, they
 should not fall into the same trap,  [...]

yes.  forward thinking was just the thing that made multics
what it is today.

it is equally a trap to try to prognosticate too far in advance.
one increases the likelihood of failure and the chances of being
dead wrong.

- erik



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread erik quanstrom
 On Fri, Apr 17, 2009 at 11:37 PM, erik quanstrom quans...@quanstro.net 
 wrote:
  I can imagine a lot of problems stemming from open files could be
  resolved by first attempting to import the process's namespace at the
  time of checkpoint and, upon that failing, using cached copies of the
  file made at the time of checkpoint, which could be merged later.
 
  there's no guarantee to a process running in a conventional
  environment that files won't change underfoot.  why would
  condor extend a new guarantee?
 
  maybe i'm suffering from lack of vision, but i would think that
  to get to 100% one would need to think in terms of transactions
  and have a fully transactional operating system.
 
  - erik
 
 
 There's a much lower chance of files changing out from under you in a
 conventional environment. If the goal is to make the unconventional
 environment look and act like the conventional one, it will probably
 have to try to do some of these things to be useful.

* you can get the same effect by increasing the scale of your system.

* the reason conventional systems work is not, in my opinion, because
the collision window is small, but because one typically doesn't do
conflicting edits to the same file.

* saying that something isn't likely in an unquantifiable way is
not a recipe for success in computer science, in my experience.

- erik



Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread erik quanstrom
 Speaking of NUMA and such though, is there even any support for it in the 
 kernel?
 I know we have a 10gb Ethernet driver, but what about cluster interconnects 
 such as InfiniBand, Quadrics, or Myrinet?  Are such things even desired in 
 Plan 9?

there is no explicit numa support in the pc kernel.
however it runs just fine on standard x86-64 numa
architectures like intel nehalem and amd opteron.

we have two 10gbe ethernet drivers: the myricom driver
and the intel 82598 driver.  the blue gene folks have
support for a number of blue-gene-specific networks.

i don't know too much about myrinet, infiniband or
quadrics.  i have nothing against any of them, but
10gbe has been a much better fit for the things i've
wanted to do.

- erik



Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread lucio
 I guess I'm a little slow; it's taken me a little while to get
 my head around this and understand it.  Let me see if I've
 got the right picture.  When I login I basically look up a
 previously saved session in much the same way that LISP
 systems would save a whole environment.  Then when I
 log off my session is suspended and saved.  Alternatively,
 I could always log into the same previously saved state.

The session would not be suspended; it would continue to operate as
your agent and identity and, typically, accept mail on your behalf,
perform background operations such as paying your accounts and in
general represent you to the web to the extent that security (or lack
thereof, for many unsophisticated users) permits.  Nothing wrong with
me having a private search bot to look for particular pornography or
art or documentation while I'm asleep; the trick is to run it on
whatever platform(s) are suitable at the time.

Take my situation, for example.  I am at the dial-up end of an ISDN
BRA connection (2 x 64kbps channels for all intents and purposes, one
of them reserved for voice calls) which costs me a nominal amount to
stay connected (when the powers that be allow it) from 19:00 to 07:00
each weekday and from Friday evening to Monday morning (and a fortune
during what the Telco calls "peak time").  The rest of the time, I
find it preferable to use GPRS (3G is not yet available) for on-demand
connections because I pay per volume and not for connect time.
Naturally, that makes my network a roaming one.  Having my mail
exchanger et al.  in Cape Town permanently on line at a client's
premises provides the visibility I need all the time, but it is not
something I will continue to be able to afford as my involvement with
that client will eventually stop.  I'm not sure I can afford hosting
thereafter, but that is a separate issue.

The other organisation I am associated with has a hosted Linux server
I may use, or I may piggyback on their hosting contract, but I get too
little choice of platform on which to operate and even the hosting
structure may not suit me for a number of technical and political
reasons.

My dream is to be able to virtualise not so much the platform as the
application, where "application" means whatever I feel like using at
the time.  Including being able to access, on a low-speed line, the
stuff that is, say, strictly text based.  Or, as I often do, download
big volume items overnight, while I sleep.  But most of all, I want to
walk away from the workstation and pick up where I left off anywhere
else, including accessing the profile using the local resources, not
necessarily the extraordinary features I may have built for myself in
my workshop (I wish!).

Most of all, it must be possible for me to enhance my profile
wherever I am, teach it new tricks whenever I discover them, make it
aware that they may only work in specific locations.

Do you get my drift?

The crucial bit is that it depends heavily on being on the network
insofar as it means having access to resources you cannot possibly be
expected to carry on your laptop (now you need Windows, just now you need
MacOS, say, or connection to your burglar alarm).  In fact, my idea of
security is to deploy my mobile phone as the key, GPRS allows me a
very inexpensive, always on-line tool to provide, say, encryption keys
that have my identity firmly attached to them, practically anywhere in
South Africa and in most places in Africa, nevermind Europe
(connectivity was superb in Italy and Greece, last October) or the
USA.  Given the access key, any terminal ought to be able to provide
at least part of the experience I'm likely to need.
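
The key-carrying half of this is already close to what factotum
does.  A minimal sketch, in which mybox.example, the user name and
the password are all invented for illustration (see factotum(4) for
the key format):

   # load an identity into the local factotum
   echo 'key proto=p9sk1 dom=mybox.example user=lucio !password=secret' >/mnt/factotum/ctl
   # any terminal holding that key can then pick up the session
   cpu -h tcp!mybox.example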

In passing, a device that struck me as being extremely handy is the
3G USB dongle that is highly popular here; you may be more familiar
with it than I: it contains a simulated CD-ROM that it uses to install
its software.  I thought that was particularly clever, especially if you
transform it into a Plan 9 or Inferno boot device.

I'm sorry if I'm throwing around too many ideas with too little
flesh; I must confess that I find this particular discussion very
exciting, and I have never really had occasion to look at these ideas
as carefully as I am doing now.

I was going to address the issue of being disconnected and I note that
to some extent I have, because once you treat your mobile phone as a
factor, being disconnected becomes a non-issue.  But if you do land in
a dead spot, for real, then, sure, you need much of your profile on
your portable.  How much lives in your phone (no matter how, that has
to be connected to a computing device or _be_ a computing device) and
how much, say, on your laptop, is not important, as both have to be
with you, ideally they ought to be the same device and most likely
will be.  In fact, in a Plan 9 paradigm, the phone is the
CPU/fileserver, the laptop is the terminal (now you got me thinking!).
Replication is another issue that needs careful thought, although once
again, it gets resolved 

Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread lucio
 Every time I have to use something like
 Linux or MS, I feel overwhelmed by the sheer complexity of it all.

Possibly OT, my main beef with Linux and Windows is that they keep
wanting to update themselves, and the effort to manage these updates
is enormous (less so with Ubuntu, but still great).  With Plan 9, I
find I can control the updating process and do not feel I'm leaving
myself exposed whenever I do.  Of course, the factors involved are
very different, but I have a suspicion that with Windows and Linux one
relinquishes control at too deep a level and the continual updates are
a particularly visible case of this loss of control.

++L
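
For what it's worth, the controlled update amounts to one explicit
command.  A sketch, assuming the stock replica configuration shipped
with the distribution (see replica(1) for the exact flags):

   replica/pull -n /dist/replica/network   # report what would change
   replica/pull /dist/replica/network      # apply it when you choose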




Re: [9fans] Plan9 - the next 20 years

2009-04-17 Thread J.R. Mauro
On Sat, Apr 18, 2009 at 12:16 AM, erik quanstrom quans...@quanstro.net wrote:
 But I'll say that if anyone tries to solve these problems today, they
 should not fall into the same trap,  [...]

 yes.  forward thinking was just the thing that made multics
 what it is today.

 it is equally a trap to try to prognosticate too far in advance.
 one increases the likelihood of failure and the chances of being
 dead wrong.

 - erik



I don't think what I outlined is too far ahead, and the issues
presented are all doable as long as a small bit of extra consideration
is given.

Keeping your eye only on the here and now was just the thing that
gave Unix a bunch of tumorous growths like sockets and X11, and made
Windows the wonderful piece of hackery it is.

I'm not suggesting we consider how to solve the problems we'll face
when we're flying through space and time in the TARDIS and shrinking
ourselves and our bioships down to molecular sizes to cure someone's
brain cancer. I'm talking about making something scale across
distances and magnitudes that we will become accustomed to in the next
five decades.