Re: [9fans] killing processes
[EMAIL PROTECTED] wrote: On Thu Sep 15 11:40:59 EDT 2005, rminnich@lanl.gov wrote: ... Meant to be shared, by lots of folks, hence that ' ... big boys' comment in the startup code, reserving more kernel memory since there would be more users on a cpu than on a terminal. life has changed. ron Actually, that code RESTRICTS the amount of kernel memory on a machine that has lots of physical memory if it is being used as a cpu server. --jim ah, then, I misread it. OOPS. ron
Re: [9fans] killing processes
ozinferno is not plan9 and they are diverging rapidly. once the drivers were interchangeable. i'll try and release something soon if i can find the appropriate spare cycles. an auth server on a $40 router is not out of the question. BTW qantas uses $100 pcs from china running inferno for their airport displays. and if you can't find an auth-server-compliant throw-out on the street then you live in the wrong suburb. i found three this week. brucee On 9/16/05, Uriel [EMAIL PROTECTED] wrote: On Fri, Sep 16, 2005 at 09:34:39AM -0400, erik quanstrom wrote: you know, i was thinking the linux folks have hacked those linksys wireless routers. now that would be an excellent auth server. ;-) Rumor has it that those things run great with OzInferno, now if we could only convince Brucee to release it... ;) These days they are dirt cheap, either from ebay or new. when mechiel was working here we had this idea to build a mini-inferno-cluster out of a dozen of those things... but then Dell delayed the OzInferno release ... damn Hell ;P uriel
Re: [9fans] killing processes
Just to be clear, Sape is saying that if you boot and sometimes you use one file server as root and sometimes you use a different one, then cfs will use the data cached on behalf of the first one when you're using the other one. Is there not a simple mechanism to clear the cache on boot (manually, possibly), or perhaps a way to give cfs enough information to pick a cache based on the root file server? I mean, this sounds more like a bug than a feature. ++L
Re: [9fans] killing processes
Just to be clear, Sape is saying that if you boot and sometimes you use one file server as root and sometimes you use a different one, then cfs will use the data cached on behalf of the first one when you're using the other one. That was indeed what I meant. Is there not a simple mechanism to clear the cache on boot (manually, possibly), or perhaps a way to give cfs enough information to pick a cache based on the root file server? I mean, this sounds more like a bug than a feature. Yes, you can tell cfs to clear the cache on startup, but then you lose a lot of speed during the early phases of running. Sape
Re: [9fans] killing processes
You can do that with plan9.ini: set up two different [whatever] sections with root= and cfs= lines. It's only when you're typing the root at the root is from: prompt that you get in trouble, because there is no cache is from: prompt. In other words, Sape's complaint is more a surmountable drawback than a real suggestion that caching isn't usable, though he gave the impression that he'd stopped using caching because of this. I guess in a high-bandwidth environment the additional administration required wouldn't be justified, but as bandwidth becomes more scarce, caching increases in value. ++L
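Lucio's two-section scheme could be sketched in plan9.ini roughly as follows. The [menu]/menuitem syntax is from plan9.ini(8), and the root= and cfs= lines are the ones he mentions; the section names, server names, and cache partitions below are made-up placeholders. The point of the sketch is that each configuration gets its own cache partition, so cfs never mixes data from the two file servers:

```ini
[menu]
menuitem=homefs, root from the home file server
menuitem=workfs, root from the work file server

[homefs]
root=il!homefs!9fs
cfs=#S/sdC0/cache.home

[workfs]
root=il!workfs!9fs
cfs=#S/sdC0/cache.work
```

Keeping the caches separate would sidestep the stale-cache problem Sape describes without having to clear a shared cache on every boot.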
Re: [9fans] killing processes
One day I logged into our file server and ran it out of swap by mistake. although you can now run all components on one cpu server, it's still best to scrounge the extra machine to keep the file server and cpu server separate, and to run a limited set of services on the file server. i don't see why it can't be the auth server as well. i put the file servers on a cheap UPS as well, to reduce anxiety about scrambled or lost data in venti archives (although on wednesday morning i bumped into someone else in a shared machine room who was crouched down changing some batteries that had gone bad on their UPS).
Re: [9fans] killing processes
I didn't say that wasn't the main reason, just that proximity to the file server was also a factor. I can't find the quote I'm looking for, which I think was more explicit, but from http://www.cs.bell-labs.com/sys/doc/9.html The effect of running a cpu command is therefore to start a shell on a fast machine, one more tightly coupled to the file server And I wasn't so much trying to make a history remark as trying to point out an important and often overlooked feature of 'cpu' servers. I think this tighter coupling is exactly the reason why I cpu from home to work instead of just mounting work-fs at home: the connection home-work is fast enough to do remote editing, but limited enough to make local (at home) compilation of files residing on the remote (work) fs more painful. Axel.
Re: [9fans] killing processes
the connection home-work is fast enough to do remote editing, but limited enough to make local (at home) compilation of files residing on the remote (work) fs more painful. Are you using caching? Would/does it make a difference? What speed is the link? Just to get an idea of the options... ++L
Re: [9fans] killing processes
the connection home-work is fast enough to do remote editing, but limited enough to make local (at home) compilation of files residing on the remote (work) fs more painful. just to be complete: the home machine takes root from local disk (I have been using a diskless setup in the past where the home machine took root from work fs. that worked too, but application startup was slower, of course.) Are you using caching? not that I'm aware of. Would/does it make a difference? good question. experience, anyone? What speed is the link? cable modem, I think it is 1024/256. Just to get an idea of the options... Axel.
Re: [9fans] killing processes
Are you using caching? not that I'm aware of. I have. Would/does it make a difference? good question. experience, anyone? not for compilation. as it creates object files and binaries it necessarily needs to write absolutely everything past the cache and onto the mounted file server. on a sufficiently fast connection one actually sees a slowdown when compared with non-cached compilations. for any other general access caching really helps. not all the news is bad though -- one can boot from a remote file server (presumably with caching), edit the files locally (with all the benefits of a very small response time) and cpu to a remote cpu server just for the compilation. i, for example, run 'win' in acme, cpu close to the file server and keep a compile string handy at the top of the window. this all assumes that the uplink (writing to the file server) is the slow part of the connection, which is the case with most home dsl/cable providers.
Re: [9fans] killing processes
On Fri, Sep 16, 2005 at 09:34:39AM -0400, erik quanstrom wrote: you know, i was thinking the linux folks have hacked those linksys wireless routers. now that would be an excellent auth server. ;-) Rumor has it that those things run great with OzInferno, now if we could only convince Brucee to release it... ;) These days they are dirt cheap, either from ebay or new. when mechiel was working here we had this idea to build a mini-inferno-cluster out of a dozen of those things... but then Dell delayed the OzInferno release ... damn Hell ;P uriel
Re: [9fans] killing processes
Are you using caching? not that I'm aware of. Would/does it make a difference? good question. experience, anyone? What speed is the link? cable modem, I think it is 1024/256. I've connected through cfs for years but gave it up now that I have to connect to more than one file server — cfs isn't good at tracking what server you use and will interpret cached data from one server as belonging to another. Not good. Cfs gives a fair amount of speedup. It still contacts the server on every open, but, if the file hasn't changed, won't need to fetch it from the server again. Sape
Re: [9fans] killing processes
although you can now run all components on one cpu server, it's still best to scrounge the extra machine to keep the file server and cpu server separate, and to run a limited set of services on the file server. That's what made sense to me. And I was hoping for a slick trick resulting in the file server accepting logins only from people in group sys. Thanks for the other suggestions. Dave Eckhardt
Re: [9fans] killing processes
On Sep 15, 2005, at 11:50 AM, Ronald G Minnich wrote: I want a backpack full of cpu servers, a laptop with no disk, and a fossil in my pocket (maybe an ipod? Or see the blackdog device -- can't turn on fossil until it takes your thumbprint). Blackdog? Nice, but it runs Linux. I have a fingerprint USB token that runs Inferno so I can connect with a grid from anywhere. OK, well it doesn't really run on the token, it uses the PC's processor. But that'll change shortly when I get a new chip from Atmel. I believe Blackdog uses Atmel's chipset. Wes
Re: [9fans] killing processes
If you are running as hostowner you can use Kill (capital K), which chmods the file first: chmod 666 /proc/868/ctl; echo kill > /proc/868/ctl This will not work for factotum, as it marks itself as private [perhaps this is a bug - being private need not mean that chmod of ctl is impossible?]. It's not much of an issue, as if you kill the user's rio then their factotum will lose its stdin and exit anyway. -Steve
Re: [9fans] killing processes
it seems a bit restrictive to stop use of the cpu server. anyhow, if you're hostowner (eg, bootes) try using Kill instead of kill. it chmods the ctl file so the hostowner has permissions.
Re: [9fans] killing processes
Fco. J. Ballesteros wrote: Well, we could use Kill as said here, or even reboot the machine on saturdays 5am to make it clean, etc. that's the one nice thing about a cluster node. You have lots of 'em, they can be single user. So just let one cpu user in to one cluster node at a time, and when they leave, reboot the node. If it's linuxbios the node is back in 10 seconds or so, and if it is a linuxbios+plan 9 node running xcpu, even faster than that (Plan 9 xcpu nodes boot in 1 second in Xen). However, the PCs have so much CPU today that they don't even feel the need for a CPU server. And that's the fun part. The relative power relationship of terminal/cpu server got inverted about 10 years ago. In the kernel there is this comment about ' ... for the big boys'. But, nowadays, the desktop is way more powerful than any individual cluster node (well, if by nowadays, you mean, starting in 1992...). So the big boy is on your desk, and the toy computer is in your rack. It's just that there are so MANY toy computers in the racks ... the ants overwhelm the elephant. And on the really Big Boy, i.e. the BG/L machine at livermore, the individual CPUs are running at clock rates that are SO 1990s -- 600 MHz! But, given 65K of them, well, you don't mind that they're slow. In that sense, the 'cpu server' is outdated nomenclature. ron
Re: [9fans] killing processes
: In that sense, the 'cpu server' is outdated nomenclature. Yep. In Plan B we don't have CPU servers, actually. (We made an experiment but its result was not clear). We have permanent terminals, though. If you own a machine, you can arrange for remote omeros to browse/exec on it. I wonder, how many 9fans are *actually* using CPU servers? [do not count a CPU server that runs your fossil as such, it's a file server, isn't it?]
Re: [9fans] killing processes
I wonder, how many 9fans are *actually* using CPU servers? [do not count a CPU server that runs your fossil as such, it's a file server, isn't it?] i'm using a cpu server 100% of the time -- my terminal is a drawterm session.
Re: [9fans] killing processes
I wonder, how many 9fans are *actually* using CPU servers? [do not count a CPU server that runs your fossil as such, it's a file server, isn't it?] [haven't followed the discussion closely, sorry if this is off target] I'm using a cpu server (even as we speak) that I drawterm into from my office (have a sun on my desk), and cpu into from my home plan 9 machine. At home I could instead mount the fs from work, but since I'm mostly editing and compiling cpu works better. However, to support the slow/fast cpu/terminal case, the home machine _is_ faster than the office cpu. Axel.
Re: [9fans] killing processes
Hi Is there any way to know whether a user is connected or not? A relation between netstat and the processes run by the user, maybe? Or something easier I missed? gabi 2005/9/15, andrey mirtchovski [EMAIL PROTECTED]: I wonder, how many 9fans are *actually* using CPU servers? [do not count a CPU server that runs your fossil as such, it's a file server, isn't it?] i'm using a cpu server 100% of the time -- my terminal is a drawterm session.
Re: [9fans] killing processes
These CPU servers you use to drawterm into, are shared with other users? Or do you own the machine?
Re: [9fans] killing processes
On Thu, Sep 15, 2005 at 04:44:46PM +0200, Fco. J. Ballesteros wrote: : In that sense, the 'cpu server' is outdated nomenclature. I wonder, how many 9fans are *actually* using CPU servers? [do not count a CPU server that runs your fossil as such, it's a file server, isn't it?] I think it's already mentioned in the original papers that one of the main reasons for 'cpu' servers is bandwidth/proximity to the file server(s), so in a way it has always been a misnomer. uriel
Re: [9fans] killing processes
I think it's already mentioned in the original papers that one of the main reasons for 'cpu' servers is bandwidth/proximity to the file server(s), so in a way it has always been a misnomer. A good point. Fossil does provide, at a price, the features of both worlds and in fact encourages being used in both roles. Until now, I had considered the Fossil host as strictly out of bounds for computation. But _that_ is obsolete thinking; my approach ought to be to enhance its resources as far as possible. Unfortunately, it is no longer possible, in this type of scenario, to identify clearly which of two otherwise distinguishable needs is not being met when the Fossil server runs out of steam. It makes more sense to cluster CPU servers around it and alleviate its computational load, if feasible, reverting to the obsolete model. I dunno, it sounds like too much of a judgement call. ++L
Re: [9fans] killing processes
Well, we use a separate CPU server to provide web,mail,dhcp, etc., and try to keep the file server undisturbed. I wouldn't call this `obsolete', we no longer have file server blockouts when spammers find a way to really overload our smtpd/httpd. : Until now, I : had considered the Fossil host as strictly out of bounds for : computation. But _that_ is obsolete thinking, my approach ought to be : to enhance its resources as far as possible.
Re: [9fans] killing processes
Uriel wrote: I think it's already mentioned in the original papers that one of the main reasons for 'cpu' servers is bandwidth/proximity to the file server(s), so in a way it has always been a misnomer. yeah, but ... those cpu servers, IIRC, were big ol' Power Challenge machines. Big fat SMP, faster than the terminals, much more memory, etc. I remember Rob's talk at '89 usenix (or some such) and it was clear at the time that the cpu servers really were where you did computing, not on your weakling terminal. Meant to be shared, by lots of folks, hence that ' ... big boys' comment in the startup code, reserving more kernel memory since there would be more users on a cpu than on a terminal. life has changed. ron
Re: [9fans] killing processes
And, by the way, cpu servers are the only way I use Plan 9 these days. Thought provoking. My experiences with early drawterm were not promising (NetBSD as opposed to Windows or Linux; the multithreading is still not entirely adequate), so I use VNCviewer on NetBSD and VNCS on a CPU server. Probably masochistic of me; I'll have to try drawterm once again. At home it's worse, as I have to use VNCV on my Plan 9 workstation to access the NetBSD hosts, so I have one VNCV session and I do all the remote work through it. I guess it's good to get the occasional jolt and investigate the options. To be sure, an X server session or, even better, some way of getting rio and X clients to communicate seamlessly would be exactly what I believe I need. ++L
Re: [9fans] killing processes
Lucio De Re wrote: In particular, the 100MHz Cyclone connection between CPU server and file server always suggests a reality check to me. It's my turn to miss the point, can you expound on this a bit more? thanks ron
Re: [9fans] killing processes
I don't see any point in computing on a file server. CPUs are so cheap. Just throw as many of them as you need at a problem until the problem succumbs to them. ron
Re: [9fans] killing processes
On Thu Sep 15 11:40:59 EDT 2005, rminnich@lanl.gov wrote: ... Meant to be shared, by lots of folks, hence that ' ... big boys' comment in the startup code, reserving more kernel memory since there would be more users on a cpu than on a terminal. life has changed. ron Actually, that code RESTRICTS the amount of kernel memory on a machine that has lots of physical memory if it is being used as a cpu server. --jim
Re: [9fans] killing processes
I understood that on terminals you want more memory for images and the like; on CPUs it seems they wanted more memory for user processes (more users, more processes).
Re: [9fans] killing processes
life has changed. ron The only thing we can be sure of is that it will change again. When we get 256 CPUs on a die and optical CPUs, we may find the cpu server model fits again. -Steve
Re: [9fans] killing processes
On Thu, Sep 15, 2005 at 11:27:51AM -0400, Russ Cox wrote: This just isn't true. The cpu server lets you use its cpu. And in the early days, it was a lot easier to buy a really fast cpu server than it was to buy a really fast terminal. It's still more cost-effective. I didn't say that wasn't the main reason, just that proximity to the file server was also a factor. I can't find the quote I'm looking for, which I think was more explicit, but from http://www.cs.bell-labs.com/sys/doc/9.html The effect of running a cpu command is therefore to start a shell on a fast machine, one more tightly coupled to the file server And I wasn't so much trying to make a history remark as trying to point out an important and often overlooked feature of 'cpu' servers. uriel
Re: [9fans] killing processes
I don't see any point in computing on a file server. CPUs are so cheap. Just throw as many of them as you need at a problem until the problem succumbs to them. That's the view from a particular location. Think in terms of limited resources, like electricity, for example. Or networking bandwidth. Or cooling. And, lastly, multitasking programming skills. ++L
Re: [9fans] killing processes
And in the early days, it was a lot easier to buy a really fast cpu server than it was to buy a really fast terminal. It's still more cost-effective. indeed, and you can put collections of cpu and file servers in cooling rooms, keeping a smaller, cooler laptop that you can actually put on your lap without risking burns or setting fire to your trousers...
Re: [9fans] killing processes
Lucio De Re wrote: That's the view from a particular location. Think in terms of limited resources, like electricity, for example. Or networking bandwidth. Or cooling. And, lastly, multitasking programming skills. gotcha. thanks, good point. ron
Re: [9fans] killing processes
Charles Forsyth wrote: indeed, and you can put collections of cpu and file servers in cooling rooms, keeping a smaller, cooler laptop that you can actually put on your lap without risking burns or setting fire to your trousers... I've been planning to put a cluster of embedded (HOT!) Pentium Ms in a wine cooler for some time now. Tasteful design, nice glass door, quiet, the height of elegance! ron
Re: [9fans] killing processes
These CPU servers you use to drawterm into, are shared with other users? Or do you own the machine? i own the machine but it is shared with other users. it's hidden in a server room at ucalgary and has been up for three months.
Re: [9fans] killing processes
I have only two boxen: - auth-server - cpu-server and drawterm is used to login into cpu-server -ishwar On Thu, 15 Sep 2005, Fco. J. Ballesteros wrote: We forbid them to cpu into a cpu server. They run their own diskless terminals, which they reboot when they are done. You might do the same. : Students leave around running processes on the : system. Is there a way to kill these? : echo kill > /proc/868/note : says permission denied (which makes sense as : I am trying to kill them logged in as bootes).
Re: [9fans] killing processes
We forbid them to cpu into a cpu server. Ok, I'll ask this question which I've been meaning to look into: what is the easiest/cleanest way to restrict logins to our file server to certain people (to avoid, say, it running out of swap) while allowing everybody to log into our CPU server? Dave Eckhardt
Re: [9fans] killing processes
On Thu, Sep 15, 2005 at 01:03:43PM +0100, Steve Simon wrote: If you are running as hostowner you can use Kill (capital K) which chmods the file first: chmod 666 /proc/868/ctl;echo kill /proc/868/ctl This will not work for factotum as it marks itself as private, [perhaps this is a bug - private need not chmod of ctl is impossible?] I noticed that, too. Any user process can make its memory private, and then the hostowner has to reboot the machine in order to kick it out. Am I wrong about this ? Regards, Adi
Re: [9fans] killing processes
We forbid them to cpu into a cpu server. Ok, I'll ask this question which I've been meaning to look into: what is the easiest/cleanest way to restrict logins to our file server to certain people (to avoid, say, it running out of swap) while allowing everybody to log into our CPU server? Your file server swaps when too many people log in? Russ
Re: [9fans] killing processes
I'm drawterm'ed in from a client site. If I'm someplace where the link is very slow, I boot up Plan9 under VMWare and import parts of the namespace I need from the cpu. As a standalone computing service, they're still relevant. Also it's important to consider them service nodes, serving parts of the namespace (fossil) or performing specific functions (auth, mail).
Re: [9fans] killing processes
I forgot to mention that echo Kill > /proc/pid/note does not help. I will have to set up a time for rebooting the cpu-server. -ishwar On Thu, 15 Sep 2005, ISHWAR RATTAN wrote: I have only two boxen: - auth-server - cpu-server and drawterm is used to login into cpu-server -ishwar On Thu, 15 Sep 2005, Fco. J. Ballesteros wrote: We forbid them to cpu into a cpu server. They run their own diskless terminals, which they reboot when they are done. You might do the same. : Students leave around running processes on the : system. Is there a way to kill these? : echo kill > /proc/868/note : says permission denied (which makes sense as : I am trying to kill them logged in as bootes).
Re: [9fans] killing processes
I think you are confused:
% cat /bin/Kill
#!/bin/rc
for(i){
	ps | sed -n '/ '^$i^'$/s%^[^ ]* *([^ ]*).*%chmod 666 /proc/\1/ctl;echo kill>/proc/\1/ctl%p'
}
On Thu, Sep 15, 2005 at 03:55:10PM -0400, ISHWAR RATTAN wrote: I forgot to mention that echo Kill > /proc/pid/note does not help. I will have to set up a time for rebooting the cpu-server. -ishwar On Thu, 15 Sep 2005, ISHWAR RATTAN wrote: I have only two boxen: - auth-server - cpu-server and drawterm is used to login into cpu-server -ishwar On Thu, 15 Sep 2005, Fco. J. Ballesteros wrote: We forbid them to cpu into a cpu server. They run their own diskless terminals, which they reboot when they are done. You might do the same. : Students leave around running processes on the : system. Is there a way to kill these? : echo kill > /proc/868/note : says permission denied (which makes sense as : I am trying to kill them logged in as bootes).
Re: [9fans] killing processes
I've been planning to put a cluster of embedded (HOT!) Pentium Ms in a wine cooler for some time now. Tasteful design, nice glass door, quiet, the height of elegance! the heat will ruin the burgundy, though.
Re: [9fans] killing processes
Ok, I'll ask this question which I've been meaning to look into: what is the easiest/cleanest way to restrict logins to our file server to certain people (to avoid, say, it running out of swap) while allowing everybody to log into our CPU server? Split the authentication domain into two: one for ordinary users, in which our CPU server and the file server (the fossil processes) run, and another in which the file server (the box itself) boots and runs. my memo: http://p9c.cc.titech.ac.jp/plan9/secp9.html#secventi Hope this helps. --
Re: [9fans] killing processes
life has changed. life is always changing. I'm getting older, and I have to find a new way for the rest of my life, etc... By the way, I agree the role of CPU servers is changing. Here, it's only for inter/intranet daemon processes. Terminals are powerful enough these days for our work here. Kenji
Re: [9fans] killing processes
On Sep 15, 2005, at 1:27 PM, Charles Forsyth wrote: I've been planning to put a cluster of embedded (HOT!) Pentium Ms in a wine cooler for some time now. Tasteful design, nice glass door, quiet, the height of elegance! the heat will ruin the burgundy, though. And the acid from the wine will do bad things to the PCB traces! Not to mention what the sugar from the cooler component will do to the cleanup effort :-( --lyndon
Re: [9fans] killing processes
Lyndon Nerenberg wrote: On Sep 15, 2005, at 1:27 PM, Charles Forsyth wrote: I've been planning to put a cluster of embedded (HOT!) Pentium Ms in a wine cooler for some time now. Tasteful design, nice glass door, quiet, the height of elegance! the heat will ruin the burgundy, though. And the acid from the wine will do bad things to the PCB traces! Not to mention what the sugar from the cooler component will do to the cleanup effort :-( Drink enough wine and you won't care anyway. The supercomputer wine cooler comes with a built-in morale booster in liquid form. ron
Re: [9fans] killing processes
Split the authentication domain into two: one for ordinary users, in which our CPU server and the file server (the fossil processes) run, and another in which the file server (the box itself) boots and runs. I remember reading about that. To be honest, I was wondering if there might be a simpler way, without having to run a second auth server. For example (and I haven't tried either): * arrange for the cpu/ncpu listener to run in a namespace where /bin/rc is mode 750, so only members of the designated group can run it * put a group-membership check in some early /bin/rc startup file Dave Eckhardt
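Dave's second option, a group-membership check early in startup, might look roughly like this. This is a sketch in POSIX shell for illustration only; on Plan 9 it would be written in rc (placement in /bin/cpurc or an early profile is an assumption), and the id:name:leader:member1,member2,... line format is fossil's /adm/users user table. The table contents and user names below are made up:

```shell
# Build a throwaway copy of an /adm/users-style table (made-up entries).
users=$(mktemp)
cat >"$users" <<'EOF'
-1:adm:adm:bootes,glenda
1:sys:sys:bootes,glenda
2:glenda:glenda:
3:none:none:
EOF

# in_group user group file: exit 0 iff user appears in the group's member list.
in_group() {
    awk -F: -v u="$1" -v g="$2" '
        $2 == g { n = split($4, m, ","); for (i = 1; i <= n; i++) if (m[i] == u) ok = 1 }
        END { exit !ok }' "$3"
}

# The startup check itself: refuse the login unless the user is in group sys.
user=glenda
if in_group "$user" sys "$users"; then
    echo "$user: login permitted"
else
    echo "$user: not in group sys, login refused"
    exit 1
fi
```

On a real system the check would read /adm/users directly and run before any shell is handed to the user, so anyone outside group sys is cut off before reaching a prompt. Note this is advisory, not cryptographic: unlike Russ's split-authdom approach, a user who can get code running some other way isn't stopped by it.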
Re: [9fans] killing processes
You don't need to run a second authentication server, just a second authentication domain. The way to do this is to start the fossil as normal but then replace the usual aux/listen command with
@{
	rfork n
	auth/factotum
	read -m new.factotum >/mnt/factotum/ctl
	aux/listen tcp
}
and then the listeners will be using the new factotum. If you put in new.factotum (which should be handled some other way but so be it) a key like
key proto=p9sk1 user=davide dom=other.cs.cmu.edu !password=asdf
then you will find that cpu'ing into that machine will prompt for a key from other.cs.cmu.edu, and your account will be the only one that works (any others would require an authentication server). Russ