Re: [9fans] threads vs forks

2009-03-03 Thread andrey mirtchovski
> the ssd drives we've
> (coraid) tested have been spectacular --- reading at > 200mb/s.

you know, i've read all the reviews and seen all the windows
benchmarks. but this info, coming from somebody on this list, is much
more reassuring than all the slashdot articles.

the tests didn't involve plan9 by any chance, did they? ;)



Re: [9fans] threads vs forks

2009-03-03 Thread erik quanstrom
> >
> > Both AMD and Intel are looking at I/O because it is and will be a limiting
> > factor when scaling to higher core counts.

i/o starts sucking wind with one core.  
that's why we differentiate i/o from everything
else we do.

> And soon hard disk latencies are really going to start hurting (they
> already are hurting some, I'm sure), and I'm not convinced of the
> viability of SSDs.

i'll assume you mean throughput.  hard drive latency has been a big deal
for a long time.  tanenbaum integrated knowledge of track layout into
his minix elevator algorithm.

i think the gap between cpu performance and hd performance is narrowing,
not getting wider.

i don't have accurate measurements on how much real-world performance
difference there is between a core i7 and an intel 5000.  it's generally not
spectacular, clock-for-clock. on the other hand, when the intel 5000-series
was released, the rule of thumb for a sata hd was 50mb/s.  it's not too hard
to find regular sata hard drives that do 110mb/s today.  the ssd drives we've
(coraid) tested have been spectacular --- reading at > 200mb/s.  if you want
to talk latency, ssds can deliver 1/100th the latency of spinning media.
there's no way that the core i7 is 100x faster than the intel 5000.

- erik



Re: [9fans] threads vs forks

2009-03-03 Thread erik quanstrom
> Now there is another use that would at least be intellectually interesting
> and possibly useful in practice.  Use the transistors for a really big
> memory running at cache speed.  But instead of it being a hardware
> cache, manage it explicitly.  In effect, we have a very high speed
> main memory, and the traditional main memory is backing store.
> It'd give a use for all those paging algorithms that aren't particularly
> justified at the main memory-disk boundary any more.  And you
> can fit a lot of Plan 9 executable images in a 64MB on-chip memory
> space.  Obviously, it wouldn't be a good fit for severely memory-hungry
> apps, and it might be a dead end overall, but it'd at least be something
> different...

ken's fs already has the machinery to handle this.  one could imagine
a cachefs that knew how to manage this for venti.  (though venti seems
like a poor fit.)  there are lots of interesting uses of explicitly managed,
hierarchical caches.  yet so far hardware has done its level best to hide
this.

- erik



Re: [9fans] threads vs forks

2009-03-03 Thread John Barham
> I believe GIL is as present in Python nowadays as ever. On a related
> note: does anybody know any sane interpreted languages with a decent
> threading model to go along? Stackless python is the only thing that
> I'm familiar with in that department.

Check out Lua's coroutines: http://www.lua.org/manual/5.1/manual.html#2.11

Here's an implementation of the sieve of Eratosthenes using Lua
coroutines similar to the Limbo one:
http://www.lua.org/cgi-bin/demo?sieve



Re: [9fans] threads vs forks

2009-03-03 Thread blstuart
> it's interesting that parallel wasn't cool when chips were getting
> noticeably faster rapidly.  perhaps the focus on parallelization
> is a sign there aren't any other ideas.

Gotta do something with all the extra transistors.  After all, Moore's
law hasn't been repealed.  And pipelines and traditional caches
are pretty good examples of diminishing returns.  So multiple cores
seem a pretty straightforward approach.

Now there is another use that would at least be intellectually interesting
and possibly useful in practice.  Use the transistors for a really big
memory running at cache speed.  But instead of it being a hardware
cache, manage it explicitly.  In effect, we have a very high speed
main memory, and the traditional main memory is backing store.
It'd give a use for all those paging algorithms that aren't particularly
justified at the main memory-disk boundary any more.  And you
can fit a lot of Plan 9 executable images in a 64MB on-chip memory
space.  Obviously, it wouldn't be a good fit for severely memory-hungry
apps, and it might be a dead end overall, but it'd at least be something
different...

BLS




Re: [9fans] threads vs forks

2009-03-03 Thread David Leimbach
On Tue, Mar 3, 2009 at 5:54 PM, J.R. Mauro  wrote:

> On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom 
> wrote:
> >> I should have qualified. I mean *massive* parallelization when applied
> >> to "average" use cases. I don't think it's totally unusable (I
> >> complain about synchronous I/O on my phone every day), but it's being
> >> pushed as a panacea, and that is what I think is wrong. Don Knuth
> >> holds this opinion, but I think he's mostly alone on that,
> >> unfortunately.
> >
> > it's interesting that parallel wasn't cool when chips were getting
> > noticeably faster rapidly.  perhaps the focus on parallelization
> > is a sign there aren't any other ideas.
>
> Indeed, I think it is. The big manufacturers seem to have hit a wall
> with clock speed, done a full reverse, and are now just trying to pack
> more transistors and cores on the chip. Not that this is evil, but I
> think this is just as bad as the obsession with upping the clock
> speeds in that they're too focused on one path instead of
> incorporating other cool ideas (i.e., things Transmeta was working on
> with virtualization and hosting foreign ISAs)


Can we bring back the Burroughs? :-)


>
>
> >
> > - erik
> >
> >
>
>


Re: [9fans] threads vs forks

2009-03-03 Thread David Leimbach
On Tue, Mar 3, 2009 at 10:11 AM, Roman V. Shaposhnik  wrote:

> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
> > My knowledge on this subject is about 8 or 9 years old, so check with
> your local Python guru
> >
> >
> > The last I'd heard about Python's threading is that it was cooperative
> > only, and that you couldn't get real parallelism out of it.  It serves
> > as a means to organize your program in a concurrent manner.
> >
> >
> > In other words no two threads run at the same time in Python, even if
> > you're on a multi-core system, due to something they call a "Global
> > Interpreter Lock".
>
> I believe GIL is as present in Python nowadays as ever. On a related
> note: does anybody know any sane interpreted languages with a decent
> threading model to go along? Stackless python is the only thing that
> I'm familiar with in that department.


I'm a fan of Erlang.  Though I guess it's technically a compiled virtual
machine of sorts, even when it's "escript".

But I've had an absolutely awesome experience over the last year using it,
and so far only wishing it came with the type safety of Haskell :-).

I love Haskell's threading model actually, in either the data parallelism or
the forkIO interface, it's pretty sane.  Typed data channels even between
forkIO'd threads.


>
>
> Thanks,
> Roman.
>
>
>


Re: [9fans] threads vs forks

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 11:44 PM, James Tomaschke  wrote:
> erik quanstrom wrote:
>>>
>>> I think the reason why you didn't see parallelism come out earlier in the
>>> PC market was because they needed to create new mechanisms for I/O.  AMD did
>>> this with Hypertransport, and I've seen 32-core (8-socket) systems with
>>> this.  Now Intel has their own I/O rethink out there.
>>
>> i think what you're saying is equivalent to saying
>> (in terms i understand) that memory bandwidth was
>> so bad that a second processor couldn't do much work.
>
> Yes bandwidth and latency.
>>
>> but i haven't found this to be the case.  even the
>> highly constrained pentium 4 gets some mileage out of
>> hyperthreading for the tests i've run.
>>
>> the intel 5000-series still use a fsb.  and they seem to
>> scale well from 1 to 4 cores.
>
> Many of the circuit simulators I use fall flat on their face after 4 cores,
> say.  However, I blame this on their algorithms, not the hardware.
>
> I wasn't making an AMD vs Intel comment, just that AMD had created HTX along
> with their K8 platform to address scalability concerns with I/O.
>
>> are there benchmarks that show otherwise similar
>> hypertransport systems trouncing intel in multithreaded
>> performance?  i don't recall seeing anything more than
>> a moderate (15-20%) advantage.
>
> I don't have a 16-core Intel system to compare with, but:
> http://en.wikipedia.org/wiki/List_of_device_bandwidths#Computer_buses
>
> I think the reason why Intel developed their Common Systems Interconnect
> (now called QuickPath Interconnect) was to address its shortcomings.
>
> Both AMD and Intel are looking at I/O because it is and will be a limiting
> factor when scaling to higher core counts.

And soon hard disk latencies are really going to start hurting (they
already are hurting some, I'm sure), and I'm not convinced of the
viability of SSDs.


There was an interesting article I came across that compared the
latencies of accessing a register, a CPU cache, main memory, and disk,
which put them in human terms. As much as we like to say we understand
the difference between a millisecond and a nanosecond, seeing cache
access expressed in terms of moments and a disk access in terms of
years was rather illuminating, if only to me.

The same article also put a google search at only slightly higher latency
than a hard disk access. The internet really is becoming the computer, I
suppose.

>
>>
>> - erik
>>
>>
>
>
>



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 9:15 PM, Rob Pike  wrote:
> .,.1000
>
> and then snarf.
>
> It's a different model from the one you are familiar with.  That is
> not a value judgment either way, but before pushing too hard in
> comparisons or suggestions it helps to be familiar with both.

I understand, I didn't really want to press the issue because I'm not
that familiar with Acme. It just seemed to me that having both modes
available would be useful. As much as cursor address movement
practically goes out the window with the ability to use a mouse, it
does seem natural to go "move forward 5 sentences/paragraphs/blocks",
but I suppose you'd need modal editing for that, which would clash
pretty badly with Acme.

>
> -rob
>



Re: [9fans] threads vs forks

2009-03-03 Thread James Tomaschke

erik quanstrom wrote:
>> I think the reason why you didn't see parallelism come out earlier in
>> the PC market was because they needed to create new mechanisms for I/O.
>> AMD did this with Hypertransport, and I've seen 32-core (8-socket)
>> systems with this.  Now Intel has their own I/O rethink out there.
>
> i think what you're saying is equivalent to saying
> (in terms i understand) that memory bandwidth was
> so bad that a second processor couldn't do much work.

Yes, bandwidth and latency.

> but i haven't found this to be the case.  even the
> highly constrained pentium 4 gets some mileage out of
> hyperthreading for the tests i've run.
>
> the intel 5000-series still use a fsb.  and they seem to
> scale well from 1 to 4 cores.

Many of the circuit simulators I use fall flat on their face after 4
cores, say.  However, I blame this on their algorithms, not the hardware.

I wasn't making an AMD vs Intel comment, just that AMD had created HTX
along with their K8 platform to address scalability concerns with I/O.

> are there benchmarks that show otherwise similar
> hypertransport systems trouncing intel in multithreaded
> performance?  i don't recall seeing anything more than
> a moderate (15-20%) advantage.

I don't have a 16-core Intel system to compare with, but:
http://en.wikipedia.org/wiki/List_of_device_bandwidths#Computer_buses

I think the reason why Intel developed their Common Systems Interconnect
(now called QuickPath Interconnect) was to address its shortcomings.

Both AMD and Intel are looking at I/O because it is and will be a
limiting factor when scaling to higher core counts.

> - erik







Re: [9fans] threads vs forks

2009-03-03 Thread erik quanstrom
> I think the reason why you didn't see parallelism come out earlier in 
> the PC market was because they needed to create new mechanisms for I/O. 
>   AMD did this with Hypertransport, and I've seen 32-core (8-socket) 
> systems with this.  Now Intel has their own I/O rethink out there.

i think what you're saying is equivalent to saying
(in terms i understand) that memory bandwidth was
so bad that a second processor couldn't do much work.

but i haven't found this to be the case.  even the
highly constrained pentium 4 gets some mileage out of
hyperthreading for the tests i've run.

the intel 5000-series still use a fsb.  and they seem to
scale well from 1 to 4 cores.

are there benchmarks that show otherwise similar
hypertransport systems trouncing intel in multithreaded
performance?  i don't recall seeing anything more than
a moderate (15-20%) advantage.

- erik



Re: [9fans] threads vs forks

2009-03-03 Thread James Tomaschke

J.R. Mauro wrote:
> On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom  wrote:
>
>>> I should have qualified. I mean *massive* parallelization when applied
>>> to "average" use cases. I don't think it's totally unusable (I
>>> complain about synchronous I/O on my phone every day), but it's being
>>> pushed as a panacea, and that is what I think is wrong. Don Knuth
>>> holds this opinion, but I think he's mostly alone on that,
>>> unfortunately.
>>
>> it's interesting that parallel wasn't cool when chips were getting
>> noticeably faster rapidly.  perhaps the focus on parallelization
>> is a sign there aren't any other ideas.
>
> Indeed, I think it is. The big manufacturers seem to have hit a wall
> with clock speed, done a full reverse, and are now just trying to pack
> more transistors and cores on the chip. Not that this is evil, but I
> think this is just as bad as the obsession with upping the clock
> speeds in that they're too focused on one path instead of
> incorporating other cool ideas (i.e., things Transmeta was working on
> with virtualization and hosting foreign ISAs)


Die size has been the main focus for the foundries; reduced transistor
switch time is just a side benefit.  Digital components work well
here, but analog suffers, and creating a stable clock at high frequency
is done in the analog domain.

It is much easier to double the transistor count than it is to double
the clock frequency.  You also have to consider the power/heat/noise
costs of increasing the clock.


I think the reason why you didn't see parallelism come out earlier in
the PC market was because they needed to create new mechanisms for I/O.
AMD did this with Hypertransport, and I've seen 32-core (8-socket)
systems with this.  Now Intel has their own I/O rethink out there.


I've been trying to get my industry to look at parallel computing for
many years, and it's only now that they are starting to sell parallel
circuit simulators, and still they are not that efficient.  A
traditionally week-long sim now takes a single day when run on
12 cores.  I'll take that 7x over 1x anytime, though.


/james



Re: [9fans] threads vs forks

2009-03-03 Thread John Barham
On Tue, Mar 3, 2009 at 4:54 PM, erik quanstrom  wrote:
>> I should have qualified. I mean *massive* parallelization when applied
>> to "average" use cases. I don't think it's totally unusable (I
>> complain about synchronous I/O on my phone every day), but it's being
>> pushed as a panacea, and that is what I think is wrong. Don Knuth
>> holds this opinion, but I think he's mostly alone on that,
>> unfortunately.
>
> it's interesting that parallel wasn't cool when chips were getting
> noticeably faster rapidly.  perhaps the focus on parallelization
> is a sign there aren't any other ideas.

That seems to be what Knuth thinks.  Excerpt from a 2008 interview w/ InformIT:

"InformIT: Vendors of multicore processors have expressed frustration
at the difficulty of moving developers to this model. As a former
professor, what thoughts do you have on this transition and how to
make it happen? Is it a question of proper tools, such as better
native support for concurrency in languages, or of execution
frameworks? Or are there other solutions?

Knuth: I don’t want to duck your question entirely. I might as well
flame a bit about my personal unhappiness with the current trend
toward multicore architecture. To me, it looks more or less like the
hardware designers have run out of ideas, and that they’re trying to
pass the blame for the future demise of Moore’s Law to the software
writers by giving us machines that work faster only on a few key
benchmarks! I won’t be surprised at all if the whole multithreading
idea turns out to be a flop, worse than the "Itanium" approach that
was supposed to be so terrific—until it turned out that the wished-for
compilers were basically impossible to write."

Full interview is at http://www.informit.com/articles/article.aspx?p=1193856.



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rob Pike
.,.1000

and then snarf.

It's a different model from the one you are familiar with.  That is
not a value judgment either way, but before pushing too hard in
comparisons or suggestions it helps to be familiar with both.

-rob



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread Federico G. Benavento
I consider that a feature, especially when starting something new:
I just touch the files I think I will need and put them in the mkfile

it just works!

On Tue, Mar 3, 2009 at 5:03 PM, Enrique Soriano  wrote:
> term% cd /tmp
> term% ls nothing.c
> ls: nothing.c: 'nothing.c' file does not exist
> term% touch nothing.c
> term% 8c -FVw nothing.c
> term% 8l -o nothing nothing.8
> term% echo $status
>
> term% ls -l nothing
> --rwxrwxr-x M 8 glenda glenda 0 Mar  3 21:49 nothing
> term% ./nothing
> ./nothing: exec header invalid
>
>
> Why does the loader work?
> Is there any reason to not report an error?
>
> Regards,
> Q
>
>



-- 
Federico G. Benavento



Re: [9fans] threads vs forks

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 7:54 PM, erik quanstrom  wrote:
>> I should have qualified. I mean *massive* parallelization when applied
>> to "average" use cases. I don't think it's totally unusable (I
>> complain about synchronous I/O on my phone every day), but it's being
>> pushed as a panacea, and that is what I think is wrong. Don Knuth
>> holds this opinion, but I think he's mostly alone on that,
>> unfortunately.
>
> it's interesting that parallel wasn't cool when chips were getting
> noticeably faster rapidly.  perhaps the focus on parallelization
> is a sign there aren't any other ideas.

Indeed, I think it is. The big manufacturers seem to have hit a wall
with clock speed, done a full reverse, and are now just trying to pack
more transistors and cores on the chip. Not that this is evil, but I
think this is just as bad as the obsession with upping the clock
speeds in that they're too focused on one path instead of
incorporating other cool ideas (i.e., things Transmeta was working on
with virtualization and hosting foreign ISAs)

>
> - erik
>
>



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 7:56 PM, Rob Pike  wrote:
>  Do you see utility in counting/movement commands if they are not
>  combined with regular expressions?
>
> If you want to make a substitution to the thousandth match of a
> regular expression on a line, try
>
>   s1000/[^ ]+/yyy/
>
> But to navigate to that place is not as straightforward. Counting only
> works for characters and lines.

I meant more along the lines of adding in movement commands, so you
could do something like 1000w to move to the 1000th
(whitespace-delimited) word. There are also other "abstractions" like
blocks, sentences, pages, etc.

I'm not extremely adept with Acme and Sam, but it might also be useful
to have something like 1000y to copy 1000 lines without having to
select and scroll across them all. Of course this is a contrived
example, and as I said, I'm a bit ignorant as to whether the editors
already have something like this, but that's the thing I was wondering
if you found useful, or if you have a better alternative available in
Acme and Sam.

>
> -rob
>
>



Re: [9fans] threads vs forks

2009-03-03 Thread erik quanstrom
> I should have qualified. I mean *massive* parallelization when applied
> to "average" use cases. I don't think it's totally unusable (I
> complain about synchronous I/O on my phone every day), but it's being
> pushed as a panacea, and that is what I think is wrong. Don Knuth
> holds this opinion, but I think he's mostly alone on that,
> unfortunately.

it's interesting that parallel wasn't cool when chips were getting
noticeably faster rapidly.  perhaps the focus on parallelization
is a sign there aren't any other ideas.

- erik



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rob Pike
> Do you see utility in counting/movement commands if they are not
> combined with regular expressions?

If you want to make a substitution to the thousandth match of a
regular expression on a line, try

   s1000/[^ ]+/yyy/

But to navigate to that place is not as straightforward. Counting only
works for characters and lines.

-rob



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 7:31 PM, Rob Pike  wrote:
> Sam and Acme use a simple, pure form of regular expressions.  If they
> had the counting operations, this would be a trivial task, but to add
> them would open the door to the enormous, ill-conceived complexity of
> (no longer) regular expressions as the open source community thinks of
> them.

Do you see utility in counting/movement commands if they are not
combined with regular expressions?

>
> So yes: use other tools, with my apologies.
>
> -rob
>
>



Re: [9fans] threads vs forks

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 6:54 PM, Devon H. O'Dell  wrote:
> 2009/3/3 J.R. Mauro :
>> Concurrency seems to be one of those things that's "too hard" for
>> everyone, and I don't buy it. There's no reason it needs to be as hard
>> as it is.
>
> That's a fact. If you have access to The ACM Queue, check out
> p16-cantrill-concurrency.pdf (Cantrill and Bonwick on concurrency).

Things like TBB and other libraries to automagically scale up repeated
operations into parallelized ones help alleviate the problems with
getting parallelization to work. They're ugly, they only address
narrow problem sets, but they're attempts at solutions. And if you
look at languages like LISP and Erlang, you're definitely left with a
feeling that parallelization is being treated as harder than it is.

I'm not saying it isn't hard, just that there are a lot of people who
seem to be throwing up their hands over it. I suppose I should stop
reading their material.

>
>> And nevermind the fact that it's not really usable for every (or even
>> most) jobs out there. But Intel is pushing it, so that's where we have
>> to go, I suppose.
>
> That's simply not true. In my world (server software and networking),
> most tasks can be improved by utilizing concurrent programming
> paradigms. Even in user interfaces, these are useful. For mathematics,
> there's simply no question that making use of concurrent algorithms is
> a win. In fact, I can't think of a single case in which doing two
> lines of work at once isn't better than doing one at a time, assuming
> that accuracy is maintained in the result.

I should have qualified. I mean *massive* parallelization when applied
to "average" use cases. I don't think it's totally unusable (I
complain about synchronous I/O on my phone every day), but it's being
pushed as a panacea, and that is what I think is wrong. Don Knuth
holds this opinion, but I think he's mostly alone on that,
unfortunately.

Of course for mathematically intensive and large-scale operations, the
more parallel you can make things the better.

>
> --dho
>
>



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rob Pike
Sam and Acme use a simple, pure form of regular expressions.  If they
had the counting operations, this would be a trivial task, but to add
them would open the door to the enormous, ill-conceived complexity of
(no longer) regular expressions as the open source community thinks of
them.

So yes: use other tools, with my apologies.

-rob



Re: [9fans] threads vs forks

2009-03-03 Thread Devon H. O'Dell
2009/3/3 J.R. Mauro :
> Concurrency seems to be one of those things that's "too hard" for
> everyone, and I don't buy it. There's no reason it needs to be as hard
> as it is.

That's a fact. If you have access to The ACM Queue, check out
p16-cantrill-concurrency.pdf (Cantrill and Bonwick on concurrency).

> And nevermind the fact that it's not really usable for every (or even
> most) jobs out there. But Intel is pushing it, so that's where we have
> to go, I suppose.

That's simply not true. In my world (server software and networking),
most tasks can be improved by utilizing concurrent programming
paradigms. Even in user interfaces, these are useful. For mathematics,
there's simply no question that making use of concurrent algorithms is
a win. In fact, I can't think of a single case in which doing two
lines of work at once isn't better than doing one at a time, assuming
that accuracy is maintained in the result.

--dho



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread Uriel
"Unix never says 'please'"

Nor is it supposed to keep users from doing stupid things... thank
God, or I could not use it.

uriel

On Wed, Mar 4, 2009 at 12:45 AM, andrey mirtchovski
 wrote:
>> Or perhaps, since the user went to trouble of making sure the file
>> didn't exist and then creating the empty file, the compiler and linker
>> felt it would be rude if they didn't do something with it?
>
> you can call Plan 9 whatever you'd like, but don't call it "impolite" :)
>
>



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread andrey mirtchovski
two things: the linker doesn't only produce binaries, it has options
for producing other output in which a null object file may be
applicable; furthermore, it takes more than a single file, so you can
see how a #ifdef-ed C file compiles to nothing (even if it's bad
practice) but then is linked with other parts of the program just
fine.

so, yes. nil input should result in nil output, at least in this case.

would you like a warning? warnings are fun!



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread andrey mirtchovski
> Or perhaps, since the user went to trouble of making sure the file
> didn't exist and then creating the empty file, the compiler and linker
> felt it would be rude if they didn't do something with it?

you can call Plan 9 whatever you'd like, but don't call it "impolite" :)



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread Anthony Sorace
i could see this going either way, but from my perspective the linker
did what you told it. it didn't see anything it couldn't recognize,
and didn't find any symbols it wasn't able to resolve. it's a weird
case, certainly, but it doesn't strike me as wrong.

if i were inclined to submit a patch, it'd be to add a note to the
BUGS section of the man page.



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 6:37 PM, andrey mirtchovski
 wrote:
>> Does it have any sense to create a 0 byte executable file?
>> Success or failure? Can you execute it?
>
> "Garbage-in, Garbage-out"

Or perhaps, since the user went to trouble of making sure the file
didn't exist and then creating the empty file, the compiler and linker
felt it would be rude if they didn't do something with it?



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread andrey mirtchovski
> Does it have any sense to create a 0 byte executable file?
> Success or failure? Can you execute it?

"Garbage-in, Garbage-out"



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Anthony Sorace
i agree complaining about the formats is pointless. and hey, at least
it's text. last plain text format with slightly awkward lines i had to
play with, they went and changed the next version to be ASN.1.

but i don't think the suggestions here for how to make it play well
with Acme are all that bad. personally, i'd go rog's route of writing
a little program to pop out the address, as having things jump around
when changing tabs for newlines and back would be kind of jarring. Acme's
not ideally suited to the task at hand, but it's not an awful fit,
either, and has many other nice benefits that likely make up for the
disconnect (or would for me, anyway).

and, as they say, if you want vim you know where to find it.

oh, wait, maybe you don't:
:; contrib/list stefanha/vim
stefanha/vim: Vim: enhanced vi editor
never used it myself, but it exists. i find vi(1) more fun, myself.

anthony



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread Enrique Soriano

On Mar 3, 2009, at 11:54 PM, andrey mirtchovski wrote:


> if nobody replies to your email, would you report an error?
>
> or, if you prefer:
>
> if a linker has nothing to link (in the forest), should everybody
> hear about it?
>
> :)



Commands are expected to be loud on errors and silent
on success.

Does it have any sense to create a 0 byte executable file?
Success or failure? Can you execute it?


Q






Re: [9fans] threads vs forks

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 6:15 PM, Uriel  wrote:
> You are off. It is doubtful that the GIL will ever be removed.

That's too bad. Things like that just reinforce my view that Python is a hack :(

Oh well, back to C...

>
> But that really isn't the issue, the issue is the lack of a decent
> concurrency model, like the one provided by Stackless.
>
> But apparently one of the things stackless allows is evil recursive
> programming, which Guido considers 'confusing' and won't allow in
> mainline python (I think another reason is that porting it to jython
> and .not would be hard, but I'm not familiar with the details).

Concurrency seems to be one of those things that's "too hard" for
everyone, and I don't buy it. There's no reason it needs to be as hard
as it is.

And nevermind the fact that it's not really usable for every (or even
most) jobs out there. But Intel is pushing it, so that's where we have
to go, I suppose.

>
> uriel
>
> On Wed, Mar 4, 2009 at 12:08 AM, J.R. Mauro  wrote:
>> On Tue, Mar 3, 2009 at 1:11 PM, Roman V. Shaposhnik  wrote:
>>> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>>>
>>>> My knowledge on this subject is about 8 or 9 years old, so check with your
>>>> local Python guru
>>>>
>>>>
>>>> The last I'd heard about Python's threading is that it was cooperative
>>>> only, and that you couldn't get real parallelism out of it.  It serves
>>>> as a means to organize your program in a concurrent manner.
>>>>
>>>>
>>>> In other words no two threads run at the same time in Python, even if
>>>> you're on a multi-core system, due to something they call a "Global
>>>> Interpreter Lock".
>>>
>>> I believe GIL is as present in Python nowadays as ever. On a related
>>> note: does anybody know any sane interpreted languages with a decent
>>> threading model to go along? Stackless python is the only thing that
>>> I'm familiar with in that department.
>>
>> I thought part of the reason for the "big break" with Python 3000 was
>> to get rid of the GIL and clean that threading mess up. Or am I way
>> off?
>>
>>>
>>> Thanks,
>>> Roman.
>>>
>>>
>>>
>>
>>
>
>



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 5:13 PM, ron minnich  wrote:
> This discussion strikes me as coming from a different galaxy. It seems
> to me that Acme and Sam clearly don't match the task at hand. We're
> trying to use a screwdriver when we need a jackhammer .
>
> I don't see the point in complaining about file formats. The
> scientists in this case don't much care what we think. They're not
> going to rewrite their formats so someone can use Acme.
>
> Here's my suggestion:
>
> vi

I seem to remember an interview with Rob Pike where he said that the
reason he hated vi and emacs was that they didn't support mouse
placement, and other than that, they were fine. I wonder if he would
find Vim usable these days, given that you can mouse around in it
without sacrificing the really powerful command syntax that Rudolf
seems to be missing here.

It's definitely efficient to be able to just "point and say 'here'",
which I think is the wording Rob used, but it's also useful to be able
to just say "go down 5 lines and then forward 5 words", which vi
excels at. I really like Acme, but (and maybe it's just me being a
newbie) I find myself really missing some of the vi
movement/delete/yank syntax.



Just my $.02 USD, which, due to inflation, is pretty worthless these days.

>
> ron
>
>



Re: [9fans] threads vs forks

2009-03-03 Thread Uriel
You are off. It is doubtful that the GIL will ever be removed.

But that really isn't the issue, the issue is the lack of a decent
concurrency model, like the one provided by Stackless.

But apparently one of the things stackless allows is evil recursive
programming, which Guido considers 'confusing' and won't allow in
mainline python (I think another reason is that porting it to jython
and .NET would be hard, but I'm not familiar with the details).

uriel


On Wed, Mar 4, 2009 at 12:08 AM, J.R. Mauro  wrote:
> On Tue, Mar 3, 2009 at 1:11 PM, Roman V. Shaposhnik  wrote:
>> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>>
>>> My knowledge on this subject is about 8 or 9 years old, so check with your 
>>> local Python guru
>>>
>>>
>>> The last I'd heard about Python's threading is that it was cooperative
>>> only, and that you couldn't get real parallelism out of it.  It serves
>>> as a means to organize your program in a concurrent manner.
>>>
>>>
>>> In other words no two threads run at the same time in Python, even if
>>> you're on a multi-core system, due to something they call a "Global
>>> Interpreter Lock".
>>
>> I believe GIL is as present in Python nowadays as ever. On a related
>> note: does anybody know any sane interpreted languages with a decent
>> threading model to go along? Stackless python is the only thing that
>> I'm familiar with in that department.
>
> I thought part of the reason for the "big break" with Python 3000 was
> to get rid of the GIL and clean that threading mess up. Or am I way
> off?
>
>>
>> Thanks,
>> Roman.
>>
>>
>>
>
>



Re: [9fans] threads vs forks

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 1:11 PM, Roman V. Shaposhnik  wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
>
>> My knowledge on this subject is about 8 or 9 years old, so check with your 
>> local Python guru
>>
>>
>> The last I'd heard about Python's threading is that it was cooperative
>> only, and that you couldn't get real parallelism out of it.  It serves
>> as a means to organize your program in a concurrent manner.
>>
>>
>> In other words no two threads run at the same time in Python, even if
>> you're on a multi-core system, due to something they call a "Global
>> Interpreter Lock".
>
> I believe GIL is as present in Python nowadays as ever. On a related
> note: does anybody know any sane interpreted languages with a decent
> threading model to go along? Stackless python is the only thing that
> I'm familiar with in that department.

I thought part of the reason for the "big break" with Python 3000 was
to get rid of the GIL and clean that threading mess up. Or am I way
off?

>
> Thanks,
> Roman.
>
>
>



Re: [9fans] 8c, 8l, and empty files

2009-03-03 Thread andrey mirtchovski
if nobody replies to your email, would you report an error?

or, if you prefer:

if a linker has nothing to link (in the forest), should everybody hear about it?

:)



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread ron minnich
This discussion strikes me as coming from a different galaxy. It seems
to me that Acme and Sam clearly don't match the task at hand. We're
trying to use a screwdriver when we need a jackhammer.

I don't see the point in complaining about file formats. The
scientists in this case don't much care what we think. They're not
going to rewrite their formats so someone can use Acme.

Here's my suggestion:

vi

ron



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Arvindh Rajesh Tamilmani
> You can double-click at the beginning of the line and then execute
>
> s//\n/g
> .-0+1000
> u
>
> that will show you what the 1000th word is

it is useful to note down the address here.

s//\n/g
.-0+1000
=#
u

the output of '=#' can then be 'sent' to the
sam window to reach the 1000th word.

setting the address mark (k) doesn't
seem to work in this case.

arvindh



[9fans] 8c, 8l, and empty files

2009-03-03 Thread Enrique Soriano

term% cd /tmp
term% ls nothing.c
ls: nothing.c: 'nothing.c' file does not exist
term% touch nothing.c
term% 8c -FVw nothing.c
term% 8l -o nothing nothing.8
term% echo $status

term% ls -l nothing
--rwxrwxr-x M 8 glenda glenda 0 Mar  3 21:49 nothing
term% ./nothing
./nothing: exec header invalid


Why does the loader work?
Is there any reason to not report an error?

Regards,
Q



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rudolf Sykora
> Using a text editor to manipulate files with lines that are thousands
> of words long seems like a not very good idea to me.

1st: I don't see why. I had a feeling there was some tendency (at
least R. Pike may have had one) not to look at a file as a list of
lines, but as a linear stream of bytes. I find it really frustrating
when I see comments, like the one above, that e.g. sed has problems
with 'too-long' lines.

2nd: as long as you communicate with people who use common measuring
instruments, you just have to edit such files. They are plain-text
files but have long lines. That doesn't mean they are extraordinarily
big; they may have only a few lines. And moreover, the structure of
those files is sensible.

3rd: awk might be a good instrument (although e.g. Raymond argues it
is flawed) for analysis carried out automatically by machines, but it
is, for me, not an editor for manual, human, interactive work.

In light of the aforementioned: vim's ability to work in nowrap mode
and to repeat commands (or regexps) [and I could also add the
existence of column blocks] make it superior to both sam and acme
when editing such files. As usual, this is the tax for sam/acme's
simplicity. And, understand, I am for simple things. I can somehow
understand why sam and acme don't have a nowrap mode and,
consequently, no column blocks. It is, as far as I know, due to the
stream-like character of sam/acme's view of files. What I can't
understand is why I can't repeat my commands / regexps. Either one
would be enough to do my task easily, and many more. Correct me if I
am wrong, but even the simplest regexps used in linux can have a
number denoting repetition. Why doesn't plan9's regexp(7) have it?

Thanks
Ruda



Re: [9fans] threads vs forks

2009-03-03 Thread Bakul Shah
On Tue, 03 Mar 2009 10:11:10 PST "Roman V. Shaposhnik"   wrote:
> On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:
> 
> > My knowledge on this subject is about 8 or 9 years old, so check with your 
> > local Python guru
> > 
> > 
> > The last I'd heard about Python's threading is that it was cooperative
> > only, and that you couldn't get real parallelism out of it.  It serves
> > as a means to organize your program in a concurrent manner.  
> > 
> > 
> > In other words no two threads run at the same time in Python, even if
> > you're on a multi-core system, due to something they call a "Global
> > Interpreter Lock".  
> 
> I believe GIL is as present in Python nowadays as ever. On a related
> note: does anybody know any sane interpreted languages with a decent
> threading model to go along? Stackless python is the only thing that
> I'm familiar with in that department.

Depends on what you mean by "sane interpreted language with a
decent threading model" and what you want to do with it, but
check out www.clojure.org.  Then there is Erlang.  Its
wikipedia entry has this to say:
Although Erlang was designed to fill a niche and has
remained an obscure language for most of its existence,
it is experiencing a rapid increase in popularity due to
increased demand for concurrent services, inferior models
of concurrency in most mainstream programming languages,
and its substantial libraries and documentation.[7][8]
Well-known applications include Amazon SimpleDB,[9]
Yahoo! Delicious,[10] and the Facebook Chat system.[11]



Re: [9fans] threads vs forks

2009-03-03 Thread Roman V. Shaposhnik
On Tue, 2009-03-03 at 07:19 -0800, David Leimbach wrote:

> My knowledge on this subject is about 8 or 9 years old, so check with your 
> local Python guru
> 
> 
> The last I'd heard about Python's threading is that it was cooperative
> only, and that you couldn't get real parallelism out of it.  It serves
> as a means to organize your program in a concurrent manner.  
> 
> 
> In other words no two threads run at the same time in Python, even if
> you're on a multi-core system, due to something they call a "Global
> Interpreter Lock".  

I believe GIL is as present in Python nowadays as ever. On a related
note: does anybody know any sane interpreted languages with a decent
threading model to go along? Stackless python is the only thing that
I'm familiar with in that department.

Thanks,
Roman.




Re: [9fans] threads vs forks

2009-03-03 Thread ron minnich
On Tue, Mar 3, 2009 at 8:28 AM, hugo rivera  wrote:

> It is a small cluster, of 6 machines. I think each job runs for a few
> minutes (~5), takes some input files and generates a couple of files (I
> am not really sure how many output files each process
> generates). The size of the output files is ~1Mb.

for that size cluster, and jobs running a few minutes, fork ought to be fine.
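For that size of job, the fork-and-wait pattern is only a few lines of
Python. A rough sketch, using os.fork directly and throttling the number
of live children; the toy jobs (writing one small file each) are invented
stand-ins for the real astronomical calculations:

```python
import os, tempfile

def run_jobs(jobs, width):
    """Fork one child per job, keeping at most `width` children alive."""
    active = 0
    for job in jobs:
        if active >= width:
            os.wait()            # block until some child exits
            active -= 1
        if os.fork() == 0:       # child: do the work, then exit
            job()
            os._exit(0)
        active += 1
    while active:                # reap the remaining children
        os.wait()
        active -= 1

# toy stand-ins for the real jobs: each writes one small output file
outdir = tempfile.mkdtemp()
def make_job(i):
    def job():
        with open(os.path.join(outdir, 'out%d' % i), 'w') as f:
            f.write('done\n')    # file is closed (flushed) before os._exit
    return job

run_jobs([make_job(i) for i in range(8)], width=3)
print(sorted(os.listdir(outdir)))   # all eight output files present
```

The child must use os._exit, not sys.exit, so it never runs the parent's
cleanup code; anything buffered has to be flushed before that point.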

ron



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Uriel
Using a text editor to manipulate files with lines that are thousands
of words long seems like a not very good idea to me.

But all you need is two awk one liners to automate such task. Get desired word:

awk -v w=1000 -v ORS=' ' -v 'RS= ' 'NR==w { print } '

Replace it with a new value:

awk -v w=1000  -v nw='NewValue' -v ORS=' ' -v 'RS= ' 'NR==w { print
nw; next } { print } '

And so on for any other similar tasks.

A script that prompts you for line and word number, prints it, and
lets you enter a new value should be under a dozen lines of rc.
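The core of such a script, done here in Python rather than rc (the
interactive prompting is omitted; this just shows the replacement step,
and assumes TAB-separated words as in the original file, with 1-based
line and word numbers):

```python
def replace_word(text, lineno, wordno, new):
    """Replace the wordno'th TAB-separated word (1-based) on the
    lineno'th line (1-based) and return the edited text."""
    lines = text.split('\n')
    words = lines[lineno - 1].split('\t')
    words[wordno - 1] = new
    lines[lineno - 1] = '\t'.join(words)
    return '\n'.join(lines)

# a made-up two-line file with 2000 tab-separated words on line 2
line = '\t'.join('w%d' % i for i in range(1, 2001))
text = 'header\n' + line
edited = replace_word(text, 2, 1000, 'NEW')
print(edited.split('\n')[1].split('\t')[999])   # NEW
```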

uriel

On Tue, Mar 3, 2009 at 5:31 PM, roger peppe  wrote:
> 2009/3/3 Russ Cox :
>> s//\n/g
>> .-0+1000
>> u
>>
>> that will show you what the 1000th word is, and then you
>> can go back to it after the undo.  It's not ideal, but you asked.
>
> watch out though... that actually takes you to the 1001st word!
>
>



Re: [9fans] threads vs forks

2009-03-03 Thread John Barham
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera  wrote:

> I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs some astronomical
> calculations and I am not pretty sure if using fork is the best answer
> here.
> First of all, all the programming is done in python and c...

Take a look at the multiprocessing package
(http://docs.python.org/library/multiprocessing.html), newly
introduced with Python 2.6 and 3.0:

"multiprocessing is a package that supports spawning processes using
an API similar to the threading module. The multiprocessing package
offers both local and remote concurrency, effectively side-stepping
the Global Interpreter Lock by using subprocesses instead of threads."

It should be a quick and easy way to set up a cluster-wide job
processing system (provided all your jobs are driven by Python).
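A minimal sketch of the Pool pattern that package provides; the calc
function is an invented stand-in for a real calculation:

```python
from multiprocessing import Pool

def calc(n):
    # invented stand-in for one CPU-bound astronomical calculation
    return sum(i * i for i in range(n))

# NB: on platforms that spawn rather than fork (e.g. Windows), this
# setup code must live under an `if __name__ == '__main__':` guard.
pool = Pool(4)                              # four worker processes
results = pool.map(calc, [10, 100, 1000])   # GIL does not apply across processes
pool.close()
pool.join()
print(results)                              # [285, 328350, 332833500]
```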

It also looks like it's been (partially?) back-ported to Python 2.4
and 2.5: http://pypi.python.org/pypi/processing.

  John



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread roger peppe
2009/3/3 Russ Cox :
> s//\n/g
> .-0+1000
> u
>
> that will show you what the 1000th word is, and then you
> can go back to it after the undo.  It's not ideal, but you asked.

watch out though... that actually takes you to the 1001st word!



Re: [9fans] threads vs forks

2009-03-03 Thread hugo rivera
2009/3/3, ron minnich :
>
> lots of questions first.
>
>  how  many cluster nodes. how long do the jobs run. input files or
>  args? output files? how big? You can't say much with the information
>  you gave.

It is a small cluster, of 6 machines. I think each job runs for a few
minutes (~5), takes some input files and generates a couple of files (I
am not really sure how many output files each process
generates). The size of the output files is ~1Mb.

-- 
Hugo



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rudolf Sykora
> s//\n/g
> .-0+1000
> u
> Russ

Either I don't understand or this can't help me much. It's true that I
can see the 1000th word with this, but I need to edit that word then.
Just seeing it is not enough. The very same word can be on the very
line many times.

Anyway the idea is quite the same as of the others.

Thanks
Ruda



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rudolf Sykora
Thanks for the suggestions. Basically you propose breaking the line
into many lines, navigating by line, editing, and then going back.
That's possible and manageable.

There is probably no need for something simple for this particular
task; however, thinking about it more generally, being able to repeat
either a regex or a command a known number of times might be quite
useful for me. (Then the task would be trivial.)

Thanks
Ruda

PS.:
Btw., as I said some time ago, files like mine really do appear often
in any (perhaps non-computer) science --- physics, medicine,
biology... There they are not weird, but a necessity.



> Or you could also substitute the newline for whatever you want, so you
> don't have to copy/paste to another window, eg:
>
> Edit ,x/[\n]+/a/ENDOFLINE/
> Edit ,x/[       ]+/a/\n/
>
> Now you can go to the 1000 word with
> :/ENDOFLINE/+1000
>
> and once you are done:
> Edit ,x/\n/d
> Edit ,x/ENDOFLINE/c/\n/
>
> If you are sure you don't have blank fields you don't need ENDOFLINE
> and can use ^$ instead (don't forget to use the g command when you
> remove the new lines). A bit awkward, but I don't think there is
> (there should be?) a simple way to do such a weird task.
>
> hth,



Re: [9fans] threads vs forks

2009-03-03 Thread hugo rivera
2009/3/3, Uriel :

>  Oh, and as I mentioned in another thread, in my experience if you are
>  going to fork, make sure you compile statically, dynamic linking is
>  almost as evil as pthreads. But this is lunix, so what do you expect?
>

not much. Wish I could get it done with plan 9.

-- 
Hugo



Re: [9fans] my /dev/time for Linux

2009-03-03 Thread J.R. Mauro
On Tue, Mar 3, 2009 at 9:42 AM, Chris Brannon  wrote:
> J.R. Mauro writes:
>> > Two things. First, I had to include  to get this to
>> > build on my machine with 2.6.28 and second, do you have any plans to
>> > get this accepted upstream?
>
> Thanks for the report.  This is fixed in the latest code, available from
> the same URL: http://members.cox.net/cmbrannon/devtime.tgz
> That's just a snapshot of whatever happens to be current.

Cool

>
> I hadn't really thought about submitting it to the kernel developers, as I
> don't know the process.  It needs some tidying as well.
> Anant wants to include it in Glendix.

I can help with the tidying; should just be a matter of running
checkpatch.pl/Indent. I'll send you a patch personally if I get to it
in the next couple days.

As far as submitting it goes, I've been working with Anant at getting
the Glendix stuff in. I also spurred the developer of the Plan 9
authentication device to get that code included. I've been working
with Greg Kroah-Hartman for a while on the Staging tree, so I can get
things fast-tracked into the kernel if that's wanted. Ashwin's Plan 9
capability device will be available in 2.6.29 when it comes out as an
experimental option. If you want to post your code to LKML and see if
someone picks it up, that's a good route, but if no one looks
interested, just let me know and I can pretty much guarantee it will
get in, possibly even for 2.6.29.

>
> -- Chris
>
>



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Russ Cox
> I just had to edit a file which has very long lines having >1000
> 'words' separated e.g. with a TAB character. I had to find, say, the
> 1000th word on such a line.
>
> In vim, it's easy. You use '1000W' command and there you are.
> Can the same be achieved in sam/acme? The main problem for me is the
> repetition --- i.e. how to do something a known number of times...

You can double-click at the beginning of the line and then execute

s//\n/g
.-0+1000
u

that will show you what the 1000th word is, and then you
can go back to it after the undo.  It's not ideal, but you asked.

Russ



Re: [9fans] netbook ( no cd ) install help

2009-03-03 Thread Latchesar Ionkov
You can try /n/sources/contrib/lucho/usbinst9.img.gz.

Just dd it to a USB flash drive and try booting from it.

Thanks,
Lucho

On Sun, Mar 1, 2009 at 11:37 PM, Ben Calvert  wrote:
> ya, that would be great
> On Mar 1, 2009, at 2:45 PM, Latchesar Ionkov wrote:
>
>> Booting from a USB flash drive is possible (if the BIOS supports
>> booting from USB), but a bit tricky. I had to make a few small changes
>> in 9load. I have an image somewhere, if anybody is interested in
>> trying it I can try to find it.
>>
>> Thanks,
>>   Lucho
>>
>> On Sun, Mar 1, 2009 at 6:27 AM, Steve Simon  wrote:
>>>
>>> You can install from a local fat partition if you put the plan9.iso file
>>> in the partition - don't unpack it, just put the single big file there
>>> and the installer scripts will allow you to choose it as a source for the
>>> full install.
>>>
>>> A bigger problem is you have to boot the installer kernel, normally this
>>> comes from either a floppy disk or the install CDROM (which contains a
>>> floppy disk image). If you don't have either of these you may be able to
>>> do
>>> some tricks by creating a bootable partition by hand on your hard disk,
>>> though this is going to get painful.
>>>
>>> Maybe someone else can think of some other technique.
>>>
>>> -Steve
>>>
>>>
>>>
>>
>
>
>



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread yy
2009/3/3 roger peppe :
> 2009/3/3 Rudolf Sykora :
>>> I would do it with awk myself, Much depends on what you want to
>>> do to the 1000'th word on the line.
>>
>> Say I really want to get there, so that I can manually edit the place.
>
> if i really had to do this (as a one-off), i'd probably do it in a
> few stages:
>
> copy & paste the line to a New blank window.
> in the new window:
> Edit ,x/[       ]+/a/\n/
> :1000
>
> edit as desired
> Edit ,x/\n/d
>
> copy and paste back to the original window.
>
> if you were going to do this a lot, you could easily make a little
> script to tell you the offset of the 1000th word.
>

Or you could also substitute the newline for whatever you want, so you
don't have to copy/paste to another window, eg:

Edit ,x/[\n]+/a/ENDOFLINE/
Edit ,x/[   ]+/a/\n/

Now you can go to the 1000 word with
:/ENDOFLINE/+1000

and once you are done:
Edit ,x/\n/d
Edit ,x/ENDOFLINE/c/\n/

If you are sure you don't have blank fields you don't need ENDOFLINE
and can use ^$ instead (don't forget to use the g command when you
remove the new lines). A bit awkward, but I don't think there is
(there should be?) a simple way to do such a weird task.

hth,


-- 
- yiyus || JGL .



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Uriel
Ok, I'm a moron for not reading the original post before answering. Never mind.

uriel

On Tue, Mar 3, 2009 at 4:58 PM, Uriel  wrote:
> awk '{n=n+NF} n>1000 {print ":"NR; exit}'
>
> That will print something you can plumb and go to the line you want.
>
> Should be obvious enough how to generalize into a reusable script.
>
> (Typed from memory and not tested.)
>
> uriel
>
> On Tue, Mar 3, 2009 at 4:40 PM, roger peppe  wrote:
>> 2009/3/3 Rudolf Sykora :
>>>> I would do it with awk myself, Much depends on what you want to
>>>> do to the 1000'th word on the line.
>>>
>>> Say I really want to get there, so that I can manually edit the place.
>>
>> if i really had to do this (as a one-off), i'd probably do it in a
>> few stages:
>>
>> copy & paste the line to a New blank window.
>> in the new window:
>> Edit ,x/[       ]+/a/\n/
>> :1000
>>
>> edit as desired
>> Edit ,x/\n/d
>>
>> copy and paste back to the original window.
>>
>> if you were going to do this a lot, you could easily make a little
>> script to tell you the offset of the 1000th word.
>>
>> e.g.
>> sed 's/[ \t]+/&\n/' | sed 1000q | tr -d '\012' | wc -c
>>
>> actually that doesn't work 'cos sed has line length issues.
>> so i'd probably do it in C - the program would take the line
>> as stdin and could print out address
>> of the word in acme-friendly notation, e.g. :-++#8499;+#6
>>
>> it'd only be a few minutes to write.
>>
>> another option would be to write a little script that used the
>> addr file repeatedly to find the nth match of a regexp.
>>
>>
>



Re: [9fans] threads vs forks

2009-03-03 Thread ron minnich
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera  wrote:

> You see, I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs some astronomical
> calculations and I am not pretty sure if using fork is the best answer
> here.


lots of questions first.

how  many cluster nodes. how long do the jobs run. input files or
args? output files? how big? You can't say much with the information
you gave.

ron



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Uriel
awk '{n=n+NF} n>1000 {print ":"NR; exit}'

That will print something you can plumb and go to the line you want.

Should be obvious enough how to generalize into a reusable script.

(Typed from memory and not tested.)

uriel

On Tue, Mar 3, 2009 at 4:40 PM, roger peppe  wrote:
> 2009/3/3 Rudolf Sykora :
>>> I would do it with awk myself, Much depends on what you want to
>>> do to the 1000'th word on the line.
>>
>> Say I really want to get there, so that I can manually edit the place.
>
> if i really had to do this (as a one-off), i'd probably do it in a
> few stages:
>
> copy & paste the line to a New blank window.
> in the new window:
> Edit ,x/[       ]+/a/\n/
> :1000
>
> edit as desired
> Edit ,x/\n/d
>
> copy and paste back to the original window.
>
> if you were going to do this a lot, you could easily make a little
> script to tell you the offset of the 1000th word.
>
> e.g.
> sed 's/[ \t]+/&\n/' | sed 1000q | tr -d '\012' | wc -c
>
> actually that doesn't work 'cos sed has line length issues.
> so i'd probably do it in C - the program would take the line
> as stdin and could print out address
> of the word in acme-friendly notation, e.g. :-++#8499;+#6
>
> it'd only be a few minutes to write.
>
> another option would be to write a little script that used the
> addr file repeatedly to find the nth match of a regexp.
>
>



Re: [9fans] threads vs forks

2009-03-03 Thread Uriel
Python 'threads' are the same pthreads turds all other lunix junk
uses. The only difference is that the interpreter itself is not
threadsafe, so they have a global lock which means threads suck even
more than usual.

Forking a python interpreter is a *bad* idea, because python's start
up takes billions of years. This has nothing to do with the merits of
fork, and all with how much python sucks.

There is Stackless Python, which has proper CSP threads/procs and
channels, very similar to limbo.

http://www.stackless.com/
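For readers who haven't seen Stackless: its channels give a blocking
send/receive between tasklets, much like Limbo channels. A very loose
imitation of that shape, using only stdlib threads and a bounded queue
(this is not how Stackless actually works; its tasklets are cheap,
cooperatively scheduled coroutines, not OS threads):

```python
import threading, queue

class Channel:
    """Rough stand-in for a Stackless-style channel."""
    def __init__(self):
        self.q = queue.Queue(maxsize=1)   # near-rendezvous, not truly synchronous
    def send(self, value):
        self.q.put(value)                 # blocks while the slot is full
    def receive(self):
        return self.q.get()               # blocks until a value arrives

ch = Channel()
out = []

def producer():
    for i in range(3):
        ch.send(i)
    ch.send(None)                         # sentinel: no more values

def consumer():
    while True:
        v = ch.receive()
        if v is None:
            break
        out.append(v)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(out)                                # [0, 1, 2]
```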

But that is too sane for the mainline python folks obviously, so they
stick to the pthreads turds, ...

My advice: unless you can use Stackless, stay as far away as you can
from any concurrent python stuff. (And don't get me started on twisted
and their event based hacks).

Oh, and as I mentioned in another thread, in my experience if you are
going to fork, make sure you compile statically, dynamic linking is
almost as evil as pthreads. But this is lunix, so what do you expect?

uriel

On Tue, Mar 3, 2009 at 4:19 PM, David Leimbach  wrote:
>
>
> On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera  wrote:
>>
>> Hi,
>> this is not really a plan 9 question, but since you are the wisest
>> guys I know I am hoping that you can help me.
>> You see, I have to launch many tasks running in parallel (~5000) in a
>> cluster running linux. Each of the task performs some astronomical
>> calculations and I am not pretty sure if using fork is the best answer
>> here.
>> First of all, all the programming is done in python and c, and since
>> we are using os.fork() python facility I think that it is somehow
>> related to the underlying c fork (well, I really do not know much of
>> forks in linux, the few things I do know about forks and threads I got
>> them from Francisco Ballesteros' "Introduction to operating system
>> abstractions").
>
> My knowledge on this subject is about 8 or 9 years old, so check with your local Python guru
> The last I'd heard about Python's threading is that it was cooperative only,
> and that you couldn't get real parallelism out of it.  It serves as a means
> to organize your program in a concurrent manner.
> In other words no two threads run at the same time in Python, even if you're
> on a multi-core system, due to something they call a "Global Interpreter
> Lock".
>
>>
>> The point here is if I should use forks or threads to deal with the job at
>> hand?
>> I heard that there are some problems if you fork too many processes (I
>> am not sure how many are too many) so I am thinking to use threads.
>> I know some basic differences between threads and forks, but I am not
>> aware of the details of the implementation (probably I will never be).
>> Finally, if this is a question that does not belong to the plan 9
>> mailing list, please let me know and I'll shut up.
>> Saludos
>
> I think you need to understand the system limits, which is something you can
> look up for yourself.  Also you should understand what kind of runtime model
> threads in the language you're using actually implements.
> Those rules basically apply to any system.
>
>>
>> --
>> Hugo
>>
>
>



[9fans] clump magic number is 0

2009-03-03 Thread Fco. J. Ballesteros
Hi,

just saw this in our venti server:

err 2: clump has bad magic number=0x != 0xd15cb10c
err 2: loadclump worm13 194683771: clump has bad magic number=0x0


Some time ago this message was scary but not serious. My question is:
is that still the case? :)

I was just putting more stuff into our venti, as a result of a pull from 
sources.

thanks



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread roger peppe
2009/3/3 Rudolf Sykora :
>> I would do it with awk myself, Much depends on what you want to
>> do to the 1000'th word on the line.
>
> Say I really want to get there, so that I can manually edit the place.

if i really had to do this (as a one-off), i'd probably do it in a
few stages:

copy & paste the line to a New blank window.
in the new window:
Edit ,x/[   ]+/a/\n/
:1000

edit as desired
Edit ,x/\n/d

copy and paste back to the original window.

if you were going to do this a lot, you could easily make a little
script to tell you the offset of the 1000th word.

e.g.
sed 's/[ \t]+/&\n/' | sed 1000q | tr -d '\012' | wc -c

actually that doesn't work 'cos sed has line length issues.
so i'd probably do it in C - the program would take the line
as stdin and could print out address
of the word in acme-friendly notation, e.g. :-++#8499;+#6

it'd only be a few minutes to write.

another option would be to write a little script that used the
addr file repeatedly to find the nth match of a regexp.
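That offset-printing helper is also only a few lines of Python. A
sketch under the assumption of tab/space-separated words; it emits an
acme-style character address (`#n` means "character offset n"), and
the sample line is invented:

```python
import re

def word_addr(line, n):
    """Return an acme-style character address (#offset) for the start
    of the n'th whitespace-separated word on `line` (1-based)."""
    for i, m in enumerate(re.finditer(r'[^ \t]+', line), start=1):
        if i == n:
            return '#%d' % m.start()
    raise ValueError('line has fewer than %d words' % n)

# made-up sample line: word1 ... word10, tab-separated
line = '\t'.join('word%d' % i for i in range(1, 11))
print(word_addr(line, 3))   # #12
```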



Re: [9fans] threads vs forks

2009-03-03 Thread hugo rivera
thanks a lot guys.
I think I should study this issue in greater detail. It is not as easy
as I thought it would be.

2009/3/3, David Leimbach :
>
>
> On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera  wrote:
> > Hi,
> > this is not really a plan 9 question, but since you are the wisest
> > guys I know I am hoping that you can help me.
> > You see, I have to launch many tasks running in parallel (~5000) in a
> > cluster running linux. Each of the task performs some astronomical
> > calculations and I am not pretty sure if using fork is the best answer
> > here.
> > First of all, all the programming is done in python and c, and since
> > we are using os.fork() python facility I think that it is somehow
> > related to the underlying c fork (well, I really do not know much of
> > forks in linux, the few things I do know about forks and threads I got
> > them from Francisco Ballesteros' "Introduction to operating system
> > abstractions").
>
> My knowledge on this subject is about 8 or 9 years old, so
> check with your local Python guru
>
> The last I'd heard about Python's threading is that it was cooperative only,
> and that you couldn't get real parallelism out of it.  It serves as a means
> to organize your program in a concurrent manner.
>
> In other words no two threads run at the same time in Python, even if you're
> on a multi-core system, due to something they call a "Global Interpreter
> Lock".
>
> >
> > The point here is if I should use forks or threads to deal with the job at
> hand?
> > I heard that there are some problems if you fork too many processes (I
> > am not sure how many are too many) so I am thinking to use threads.
> > I know some basic differences between threads and forks, but I am not
> > aware of the details of the implementation (probably I will never be).
> > Finally, if this is a question that does not belong to the plan 9
> > mailing list, please let me know and I'll shut up.
> > Saludos
> >
>
> I think you need to understand the system limits, which is something you can
> look up for yourself.  Also you should understand what kind of runtime model
> threads in the language you're using actually implements.
>
> Those rules basically apply to any system.
>
> >
> > --
> > Hugo
> >
> >
>
>


-- 
Hugo



Re: [9fans] threads vs forks

2009-03-03 Thread David Leimbach
On Tue, Mar 3, 2009 at 3:52 AM, hugo rivera  wrote:

> Hi,
> this is not really a plan 9 question, but since you are the wisest
> guys I know I am hoping that you can help me.
> You see, I have to launch many tasks running in parallel (~5000) in a
> cluster running linux. Each of the task performs some astronomical
> calculations and I am not pretty sure if using fork is the best answer
> here.
> First of all, all the programming is done in python and c, and since
> we are using os.fork() python facility I think that it is somehow
> related to the underlying c fork (well, I really do not know much of
> forks in linux, the few things I do know about forks and threads I got
> them from Francisco Ballesteros' "Introduction to operating system
> abstractions").


My knowledge on this subject is about 8 or 9 years old, so check with
your local Python guru

The last I'd heard about Python's threading is that it was cooperative only,
and that you couldn't get real parallelism out of it.  It serves as a means
to organize your program in a concurrent manner.

In other words no two threads run at the same time in Python, even if you're
on a multi-core system, due to something they call a "Global Interpreter
Lock".
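A tiny illustration of that concurrency-not-parallelism point: two
CPU-bound threads both finish correctly (they are real OS threads and
do interleave), but because the GIL lets only one execute Python
bytecode at a time, timing such a program shows roughly serial wall
time. The assertion here is only on correctness, since timings vary:

```python
import threading

def count(n, out, i):
    # busy loop standing in for CPU-bound work
    total = 0
    for _ in range(n):
        total += 1
    out[i] = total

N = 100000
out = [0, 0]
threads = [threading.Thread(target=count, args=(N, out, i)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)   # [100000, 100000]; wall time is ~serial under the GIL
```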


>
> The point here is whether I should use forks or threads to deal with the job
> at hand.
> I heard that there are some problems if you fork too many processes (I
> am not sure how many are too many), so I am thinking of using threads.
> I know some basic differences between threads and forks, but I am not
> aware of the details of the implementation (probably I never will be).
> Finally, if this is a question that does not belong on the Plan 9
> mailing list, please let me know and I'll shut up.
> Saludos
> Saludos
>

I think you need to understand the system limits, which is something you can
look up for yourself.  Also you should understand what kind of runtime model
the threads in the language you're using actually implement.

Those rules basically apply to any system.
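
One concrete way to look those limits up from Python itself: a sketch using the
stdlib resource module (assumption: Linux; RLIMIT_NPROC is not exposed on every
platform). These are the numbers that decide how many forked children are "too
many".

```python
# Sketch: inspect the limits that cap how many processes you can fork.
import resource

# maximum number of processes the current user may have
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print('NPROC  soft=%s hard=%s' % (soft, hard))

# each child you talk to over a pipe also costs file descriptors
soft_fd, hard_fd = resource.getrlimit(resource.RLIMIT_NOFILE)
print('NOFILE soft=%s hard=%s' % (soft_fd, hard_fd))
```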


>
> --
> Hugo
>
>


Re: [9fans] my /dev/time for Linux

2009-03-03 Thread Chris Brannon
J.R. Mauro writes:
> > Two things. First, I had to include  to get this to
> > build on my machine with 2.6.28 and second, do you have any plans to
> > get this accepted upstream?

Thanks for the report.  This is fixed in the latest code, available from
the same URL: http://members.cox.net/cmbrannon/devtime.tgz
That's just a snapshot of whatever happens to be current.

I hadn't really thought about submitting it to the kernel developers, as I
don't know the process.  It needs some tidying as well.
Anant wants to include it in Glendix.

-- Chris



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rudolf Sykora
> It's horribly inelegant, but I have occasionally done the following:
> Suppose I want to repeat the command xyz 64 times.  I type xyz,
> snarf it and paste it three times.  Then I snarf the lot of them,
> and paste three times.  Then I snarf that and paste three times.
> Ugly as hell, but it does work.

... :)

sounds like pioneering a way to hell
Ruda



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread John Stalker
> I just had to edit a file which has very long lines, having >1000
> 'words' separated e.g. by a TAB character. I had to find, say, the 1000th
> word on such a line.
> 
> In vim, it's easy. You use the '1000W' command and there you are.
> Can the same be achieved in sam/acme? The main problem for me is the
> repetition --- i.e. how to do something a known number of times...
> 
> Thanks
> Ruda

It's horribly inelegant, but I have occasionally done the following:
Suppose I want to repeat the command xyz 64 times.  I type xyz,
snarf it and paste it three times.  Then I snarf the lot of them,
and paste three times.  Then I snarf that and paste three times.
Ugly as hell, but it does work.
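
The doubling above (1, 4, 16, 64 copies) can also be done outside the editor;
a throwaway shell sketch of my own (the command name "xyz" is just a stand-in)
that emits the 64 copies ready to paste into sam's command window:

```shell
# emit "xyz" 64 times, one per line, then paste the output into sam
yes xyz | sed 64q
```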
-- 
John Stalker
School of Mathematics
Trinity College Dublin
tel +353 1 896 1983
fax +353 1 896 2282



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Rudolf Sykora
> I would do it with awk myself. Much depends on what you want to
> do to the 1000th word on the line.

Say I really want to get there, so that I can manually edit the place.

Ruda



Re: [9fans] command repetition in sam/acme

2009-03-03 Thread Steve Simon
I would do it with awk myself. Much depends on what you want to
do to the 1000th word on the line.
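
A sketch of that awk route, assuming the words are tab-separated and using a
hypothetical file name, longlines; it prints the 1000th field of each line so
you can at least see what you would be jumping to:

```shell
# print the 1000th tab-separated field of every line of longlines
awk -F'\t' '{ print $1000 }' longlines
```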

In sam you can even play with your awk script in the command window: edit it,
submit it, and if it's wrong you just Undo and try again. Similar things can
be done in acme, I believe, but I don't know it well enough to say.

-Steve



[9fans] command repetition in sam/acme

2009-03-03 Thread Rudolf Sykora
Hello,

I just had to edit a file which has very long lines, having >1000
'words' separated e.g. by a TAB character. I had to find, say, the 1000th
word on such a line.

In vim, it's easy. You use the '1000W' command and there you are.
Can the same be achieved in sam/acme? The main problem for me is the
repetition --- i.e. how to do something a known number of times...

Thanks
Ruda



[9fans] threads vs forks

2009-03-03 Thread hugo rivera
Hi,
this is not really a plan 9 question, but since you are the wisest
guys I know I am hoping that you can help me.
You see, I have to launch many tasks running in parallel (~5000) in a
cluster running Linux. Each of the tasks performs some astronomical
calculations, and I am not quite sure whether fork is the best answer
here.
First of all, all the programming is done in Python and C, and since
we are using Python's os.fork() facility, I think it is somehow
related to the underlying C fork (well, I really do not know much
about forks in Linux; the few things I do know about forks and threads
I got from Francisco Ballesteros' "Introduction to operating system
abstractions").
The point here is whether I should use forks or threads to deal with the job
at hand.
I heard that there are some problems if you fork too many processes (I
am not sure how many are too many), so I am thinking of using threads.
I know some basic differences between threads and forks, but I am not
aware of the details of the implementation (probably I never will be).
Finally, if this is a question that does not belong on the Plan 9
mailing list, please let me know and I'll shut up.
Saludos

-- 
Hugo



Re: [9fans] AC97 Driver Bug (w/FIX)

2009-03-03 Thread James Tomaschke

Apologies for the vcard, first post noobness.