Re: [9fans] dup(3)

2009-10-19 Thread hugo rivera
Thanks for your feedback.

-- 
Hugo



Re: [9fans] utf-8 text files from httpd

2009-10-19 Thread Akshat Kumar
new/sendfd.c:243 c old/sendfd.c:243

---
> /*
new/sendfd.c:246 c old/sendfd.c:246

---
> */

(context: text/plain -> text/plain; charset=utf-8)

Now my text files can be read in the proper encoding
by default, and are not interpreted by browsers (and
certain other applications) as whack ASCII.

Is the output of file(1) appropriate for this purpose?
Shouldn't your sample file also be sent as UTF-8?

Thank you for the input, Mr. Arisawa. I agree with
Erik in this case: since you wouldn't be doing much with
files of other encodings on Plan 9 (well, prior to a pass
through tcs(1)), you really only need to worry about
getting UTF-8 across.

The point about file handling being up to browsers is
appropriate. However, I'd like to push as much standard
behaviour from the server as I can. If there's an explicit
account of the encoding and type of a file, then there
ought to be no ambiguity.
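
For what it's worth, the whole-file check a server would need is small.
Here is a rough sketch in C (the isutf8() helper and the main() harness
are mine, purely illustrative, not from sendfd.c): it validates UTF-8
sequence structure and prints the corresponding Content-Type line.

#include <stdio.h>

/* return 1 if buf[0..n) parses as UTF-8, 0 otherwise.
   checks sequence lengths and continuation bytes only,
   which is enough to tell UTF-8 text from Latin-1 or binary. */
static int
isutf8(unsigned char *buf, long n)
{
	long i;
	int j, k;
	unsigned char c;

	for(i = 0; i < n; i += k){
		c = buf[i];
		if(c < 0x80)
			k = 1;			/* plain ASCII */
		else if((c & 0xE0) == 0xC0)
			k = 2;
		else if((c & 0xF0) == 0xE0)
			k = 3;
		else if((c & 0xF8) == 0xF0)
			k = 4;
		else
			return 0;		/* stray continuation or 0xFE/0xFF */
		if(i + k > n)
			return 0;		/* truncated sequence at end of file */
		for(j = 1; j < k; j++)
			if((buf[i+j] & 0xC0) != 0x80)
				return 0;	/* bad continuation byte */
	}
	return 1;
}

int
main(int argc, char **argv)
{
	static unsigned char buf[1<<20];	/* first megabyte is plenty */
	long n;
	FILE *f;

	if(argc != 2 || (f = fopen(argv[1], "rb")) == NULL)
		return 1;
	n = fread(buf, 1, sizeof buf, f);
	fclose(f);
	printf("Content-Type: text/plain%s\n",
		isutf8(buf, n) ? "; charset=utf-8" : "");
	return 0;
}

(Pure ASCII is a subset of UTF-8, so labelling it utf-8 is harmless,
which is Erik's point.)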


Thanks,
ak



Re: [9fans] Barrelfish

2009-10-19 Thread erik quanstrom
> At the hardware level we do have message passing between a
> processor and the memory controller -- this is exactly the
> same as talking to a shared server and has the same issues of
> scaling etc. If you have very few clients, a single shared
> server is indeed a cost effective solution.

just to repeat myself in a context that hopefully makes things
clearer:  sometimes we don't admit it's a network.  and that's
not always a bad thing.

- erik



Re: [9fans] Barrelfish

2009-10-19 Thread matt




> The misinterpretation of Moore's Law is to blame here, of course: Moore
> is a smart guy and he was talking about transistor density, but pop culture
> made it sound like he was talking speed-up. For some time the two were
> in lock-step. Not anymore.

I ran the numbers the other day based on speed doubling every 2 years: a
60MHz Pentium would be running at 16GHz by now.

I think it was the 1GHz part that should be 35GHz.




Re: [9fans] ape/psh (still) can't exec in 9vx

2009-10-19 Thread ron minnich
On Mon, Oct 19, 2009 at 1:57 AM, Dmitry Golubovsky <golubov...@gmail.com> wrote:

> /bin/sh: XXX: cannot execute - Access denied
>
> XXX is any program I am trying to run from ape/psh.
>
> Has anyone else run into this recently?


it all works for me.

ron



Re: [9fans] Barrelfish

2009-10-19 Thread Sam Watkins
On Fri, Oct 16, 2009 at 12:18:47PM -0600, Latchesar Ionkov wrote:
> How do you plan to feed data to these 31 thousand processors so they
> can be fully utilized? Have you done the calculations and checked what
> memory bandwidth you would need for that?

I would use a pipelining + divide-and-conquer approach, with some RAM on chip.
Units would be smaller than a 6502, more like an adder.

Sam



Re: [9fans] ape/psh can't exec in 9vx

2009-10-19 Thread Russ Cox
> /bin/sh: uname: cannot execute - Access denied

I believe that if you build a new binary from the sources
instead of using the pre-compiled binary, this bug is fixed.
The binaries are lagging behind the actual source code.

Russ



[9fans] rides

2009-10-19 Thread ron minnich
confusion reigns. People I thought I was giving a ride to are coming
in at different times.

So, let's try again.

I have one rider: maht.
I can take two more. Roger?

Anyway, I get in at 8:35PM on the 20th. I can take 2 people besides maht,
so let me know.

ron



Re: [9fans] ape/psh (still) can't exec in 9vx

2009-10-19 Thread andrey mirtchovski
% ape/psh
$ uname
/bin/sh: uname: cannot execute - Access denied
% cd /sys/src/ape
% mk install
mk lib.install
mk cmd.install
mk 9src.install
[...]
cp 8.tar /386/bin/ape/tar
% ape/psh
$ uname
Plan9
$



Re: [9fans] Barrelfish

2009-10-19 Thread andrey mirtchovski
> I would use a pipelining + divide-and-conquer approach, with some RAM on chip.
> Units would be smaller than a 6502, more like an adder.

you mean like the Thinking Machines CM-1 and CM-2?

it's not like it hasn't been done before :)



Re: [9fans] drawterm tearing

2009-10-19 Thread erik quanstrom
> what changed recently?
> not drawterm.

no.  not drawterm.  not the video card.
i've recently upgraded x.

; ls -l `{which drawterm}
-rwxr-xr-x 1 quanstro users 1226424 Mar 12  2009 
/home/quanstro/bin/amd64/drawterm

but that may be a red herring, see below

> can you tell us more about the bug?
> if you resize the window, forcing faces to redraw,
> does steve's face get put back together?
> is it always faces?
> is it always small screen regions?
> etc.

scrolling does not fix the problem.  the framebuffer
is wrong.  forcing faces to redraw does fix the problem.
i don't have enough screen real estate so faces is sometimes
partially obscured by acme.  the tear appears to be in line
with the top of acme's window.  in testing just now it
happened 4/4 times with faces partially obscured and 0/3
times with faces not obscured.

- erik



Re: [9fans] Barrelfish

2009-10-19 Thread ron minnich
On Mon, Oct 19, 2009 at 8:26 AM, Sam Watkins <s...@nipl.net> wrote:
> On Fri, Oct 16, 2009 at 12:18:47PM -0600, Latchesar Ionkov wrote:
>> How do you plan to feed data to these 31 thousand processors so they
>> can be fully utilized? Have you done the calculations and checked what
>> memory bandwidth you would need for that?
>
> I would use a pipelining + divide-and-conquer approach, with some RAM on chip.
> Units would be smaller than a 6502, more like an adder.

I'm not convinced. Lucho just dropped a well known hard problem in
your lap (one he deals with every day) but your reply sounds like
handwaving.

This stuff is harder than it sounds. Unless you're ready to come up
with a simulation of your claim -- and it had better be a pretty good
one -- I don't think anybody is going to be buying.

If you're going to just have adders, for example, you're going to have
to explain where the instruction sequencing happens. If there's only
one sequencer, then you're going to have to explain why you have not
just reinvented the CM-2 or similar MPP.

Again, this stuff is quantifiable. A pipeline implies a clock rate.
Divide and conquer implies fanout. Where are the numbers?

ron
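
For one concrete number on the fanout: a binary divide-and-conquer tree
over 20 trillion units is about 45 levels deep. A quick check (a sketch;
the unit count and the 1GHz clock are Sam's figures, not mine):

#include <math.h>
#include <stdio.h>

int
main(void)
{
	double units = 20e12;	/* Sam's claimed processing-unit count */
	double hz = 1e9;	/* Sam's assumed clock */
	double depth = ceil(log2(units));	/* stages in a binary fanout tree */

	printf("tree depth: %.0f stages\n", depth);	/* 45 */
	printf("one traversal at 1GHz: %.0f ns\n", depth/hz*1e9);	/* 45 ns, best case */
	return 0;
}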



Re: [9fans] Barrelfish

2009-10-19 Thread Sam Watkins
On Sat, Oct 17, 2009 at 07:45:40PM +0100, Eris Discordia wrote:
> Another embarrassingly parallel problem, as Sam Watkins pointed out, arises
> in digital audio processing.

The pipelining + divide-and-conquer method which I would use for parallel
systems is much like a series of production lines in a large factory.

I calculated roughly that encoding a 2-hour video could be parallelized by a
factor of perhaps 20 trillion, using pipelining and divide-and-conquer, with a
longest path length of 10^8 operations in series.  Such a system running at
1GHz could encode a single 2-hour video in 1/10 second (latency), or 2
billion hours of video per second (throughput).

Details of the calculation: 7200 seconds * 30fps * 12*16 (50*50 pixel chunks) *
50 elementary arithmetic/logical operations in a pipeline (unrolled).
7200*30*12*16*50 = 20 trillion (20,000,000,000,000) processing units.
This is only a very rough estimate and does not consider all the issues.

The slow latency of 1/10 second to encode a video is due to Amdahl's Law,
assuming a longest path of 10^8 operations.  The throughput of 2 billion hours
of video per second would be achieved by pipelining.  The throughput is not
limited by Amdahl's Law, as a longer pipeline/network holds more data.

Amdahl's Law gives us a lower limit for the time taken to perform a task with
some serial components, but it does not limit the throughput of a pipelined
system; the throughput is simply one data unit per clock cycle.
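
To spell out the latency/throughput split in code (a sketch; the 1GHz
clock and the 10^8-operation longest path are the assumptions above):

#include <stdio.h>

int
main(void)
{
	double hz   = 1e9;	/* assumed clock: 1GHz */
	double path = 1e8;	/* assumed longest serial path, in operations */
	double len  = 2.0;	/* hours of video per encode */

	/* latency: the longest dependency chain bounds a single encode */
	printf("latency per video: %.2f s\n", path/hz);	/* 0.10 s */

	/* throughput: a full pipeline retires one video per clock cycle */
	printf("throughput: %.1e hours of video per second\n", hz*len);	/* 2e+09 */
	return 0;
}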

In reality, it would be hard to build such a system, and one would prefer a
system with much less parallelization.  However, the human brain does contain
100 billion neurons, and electronic units can be smaller than neurons.

My point is, one can design systems to solve practical problems that use almost
arbitrarily large numbers of processing units running in parallel.

Sam



Re: [9fans] Barrelfish

2009-10-19 Thread Sam Watkins
On Sun, Oct 18, 2009 at 01:12:58AM +, Roman Shaposhnik wrote:
> I would appreciate if the folks who were in the room correct me, but if I'm
> not mistaken Ken was alluding to some FPGA work/ideas that he had done
> and my interpretation of his comments was that if we *really* want to
> make things parallel we have to bite the bullet, ditch multicore and rethink
> our strategy.

Certainly, I agree that normal multi-core is not the best approach; FPGA
systems or similar could run a lot faster.

Sam



Re: [9fans] Barrelfish

2009-10-19 Thread erik quanstrom
> Details of the calculation: 7200 seconds * 30fps * 12*16 (50*50 pixel chunks) *
> 50 elementary arithmetic/logical operations in a pipeline (unrolled).
> 7200*30*12*16*50 = 20 trillion (20,000,000,000,000) processing units.
> This is only a very rough estimate and does not consider all the issues.

could you do a similar calculation for the memory
bandwidth required to deliver said instructions to
the processors?

if you add that to the memory bandwidth required
to move the data around, what kind of memory architecture
do you propose to move this much data around?
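
for scale, here is the operand traffic alone, before any instruction
fetch, taking the 20 trillion figure at face value and assuming one
32-bit word per unit per cycle (a sketch, not a design):

#include <stdio.h>

int
main(void)
{
	double units = 20e12;	/* sam's unit count */
	double hz    = 1e9;	/* sam's assumed clock */
	double word  = 4;	/* assumed: one 32-bit operand per unit per cycle */

	/* aggregate bandwidth if every unit loads one word per cycle */
	printf("%.1e bytes/sec\n", units*hz*word);	/* 8.0e+22 */
	return 0;
}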

- erik



Re: [9fans] Barrelfish

2009-10-19 Thread erik quanstrom
> I ran the numbers the other day based on speed doubling every 2 years: a
> 60MHz Pentium would be running at 16GHz by now.
> I think it was the 1GHz part that should be 35GHz.

you motivated me to find my copy of _high speed
semiconductor devices_, s.m. sze, ed., 1990.

there might be one or two little problems with
chips at that speed that have nothing to do with
power — make that cooling.

0.  frequency ∝ electron mobility ∝ 1/(effective bandgap).
unfortunately there's a lower limit on the band gap —
kT, the thermal energy.

1.  p. 8: "the most promising devices are quantum effect
devices."  (none are currently in use in processors.)

2.  p. 192: "... device size will continue to be limited by
hot-electron damage."  oops.

that fills one with confidence, doesn't it?
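
to put numbers on point 0, compare kT at room temperature with
silicon's bandgap (a quick check; constants from the usual tables):

#include <stdio.h>

int
main(void)
{
	double k  = 1.380649e-23;	/* boltzmann constant, J/K */
	double q  = 1.602177e-19;	/* joules per electron-volt */
	double T  = 300;		/* room temperature, K */
	double eg = 1.12;		/* silicon bandgap, eV */
	double kT = k*T/q;		/* thermal energy, eV */

	printf("kT at %gK: %.4f eV\n", T, kT);	/* 0.0259 eV */
	printf("Si bandgap/kT: %.0f\n", eg/kT);	/* ~43 */
	return 0;
}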

- erik



Re: [9fans] Barrelfish

2009-10-19 Thread Russ Cox
> My point is, one can design systems to solve practical problems that use almost
> arbitrarily large numbers of processing units running in parallel.

design != build

russ



Re: [9fans] ape/psh (still) can't exec in 9vx

2009-10-19 Thread Dmitry Golubovsky
On Oct 19, 11:50 am, mirtchov...@gmail.com (andrey mirtchovski) wrote:
> % ape/psh
> $ uname
> /bin/sh: uname: cannot execute - Access denied
> % cd /sys/src/ape
> % mk install
> mk lib.install
> mk cmd.install
> mk 9src.install
> [...]
> cp 8.tar /386/bin/ape/tar
> % ape/psh
> $ uname
> Plan9
> $

OK, maybe I just did not complete the whole thing.

I'll try this.

Thanks.



Re: [9fans] drawterm tearing

2009-10-19 Thread erik quanstrom
> great. now that you have a reproducible test case, try this:
>
> in drawterm/gui-x11/x11.c:/^xdraw it says
>
> /*
>  * drawterm was distributed for years with
>  * return 0; right here.
>  * maybe we should give up on all this?
>  */
>
> if((dxm = dst->X) == nil)
> 	return 0;
>
> try adding an unconditional return 0;
> right there and see if the problem goes away.
> if so, problem solved, or at least pinned on
> some combination of the drawterm x11 code
> and the new x11 server you have.
> that code is trying to do a good job when
> x11 is on the other end of a network connection,
> but that case is getting less and less important.

it does fix it!

curiosity: is this a locking problem or an arithmetic
problem?

- erik



Re: [9fans] ape/psh can't exec in 9vx

2009-10-19 Thread Dmitry Golubovsky
On Oct 19, 11:50 am, r...@swtch.com (Russ Cox) wrote:
>> /bin/sh: uname: cannot execute - Access denied
>
> I believe that if you build a new binary from the sources
> instead of using the pre-compiled binary, this bug is fixed.
> The binaries are lagging behind the actual source code.
>
> Russ

OK, I'll try both this and what Andrey recommended.

Thanks.



Re: [9fans] utf-8 text files from httpd

2009-10-19 Thread lucio
> 2009/10/19 erik quanstrom <quans...@quanstro.net>:
>> why try that hard?  just call it utf-8.  i can't think of
>> any browsers that would have a problem with that today.
>
> the instance of the problem that i had was when
> adding an attachment to a upas mail.
> file -m is useful when the attachment might be
> binary.

Why not enhance file -m so that it can be instructed to read the entire
file, then?  Knowing the context, adding, say, a "b" option (for
"big") would not do any damage, right?

++L




Re: [9fans] rides

2009-10-19 Thread Latchesar Ionkov
I will arrive at 1pm on Wednesday and will have a car. Let me know if
you would like to wait until then.

Lucho

On Mon, Oct 19, 2009 at 2:56 PM, Anthony Sorace <ano...@gmail.com> wrote:
> i arrive around 9am on wednesday. anyone have a ride
> arranged and have an extra seat?





Re: [9fans] Parallelism is over a barrel(fish)?

2009-10-19 Thread Lyndon Nerenberg (VE6BBM/VE7TFX)
From last week's ACM Technews ...

Why Desktop Multiprocessing Has Speed Limits
Computerworld (10/05/09) Vol. 43, No. 30, P. 24; Wood, Lamont

Despite the mainstreaming of multicore processors for desktops, not
every desktop application can be rewritten for multicore frameworks,
which means some bottlenecks will persist.  "If you have a task that
cannot be parallelized and you are currently on a plateau of
performance in a single-processor environment, you will not see that
task getting significantly faster in the future," says analyst Tom
Halfhill.  Adobe Systems' Russell Williams points out that performance
does not scale linearly even with parallelization on account of memory
bandwidth issues and delays dictated by interprocessor communications.
Analyst Jim Turley says that, overall, consumer operating systems
"don't do anything smart with multicore architecture."  "We have to
reinvent computing, and get away from the fundamental premises we
inherited from von Neumann," says Microsoft technical fellow Burton
Smith.  "He assumed one instruction would be executed at a time, and
we are no longer even maintaining the appearance of one instruction at
a time."  Analyst Rob Enderle notes that most applications will operate
on only a single core, which means that the benefits of a multicore
architecture only come when multiple applications are run.  "What we'd
all like is a magic compiler that takes yesterday's source code and
spreads it across multiple cores, and that is just not happening,"
says Turley.  Despite the performance issues, vendors prefer multicore
processors because they can facilitate a higher level of power
efficiency.  "Using multiple cores will let us get more performance
while staying within the power envelope," says Acer's Glenn Jystad.

http://www.computerworld.com/s/article/342870/The_Desktop_Traffic_Jam?intsrc=print_latest
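
Williams's non-linear-scaling point is Amdahl's law in action. A quick
illustration (the 95% parallel fraction is an arbitrary assumption):

#include <stdio.h>

int
main(void)
{
	double p = 0.95;	/* assumed parallelizable fraction of the work */
	int n;

	/* Amdahl's law: speedup on n cores = 1/((1-p) + p/n) */
	for(n = 1; n <= 64; n *= 2)
		printf("%2d cores: %5.2fx\n", n, 1.0/((1.0 - p) + p/n));
	return 0;
}

Even with 95% of the work parallel, 64 cores deliver only about a 15x
speedup; the serial 5% dominates.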




Re: [9fans] Parallelism is over a barrel(fish)?

2009-10-19 Thread W B Hacker

Lyndon Nerenberg (VE6BBM/VE7TFX) wrote:

> From last week's ACM Technews ...
>
> Why Desktop Multiprocessing Has Speed Limits
> Computerworld (10/05/09) Vol. 43, No. 30, P. 24; Wood, Lamont
> [...]




'The usual talking-through-their-anatomy suspects.'

But they miss the point. Work will always be found for 'faster' devices, but the
majority of the 'needed' benefit has been accomplished until entirely new
challenges surface. Computer games and digital video special-effects are just candy.


Even a dual-core allows moving the overhead, housekeeping, I/O, interrupt
servicing, et al. out of the way of a single-core-bound application. OS/2 Hybrid
Multi-Processing did this - even with unmodified Win 3.x era apps.


Beyond that it matters little.

Given a 'decent' (not magical) [1] OS and environment, the apps that actually
*matter* to 99% of the population are more than fast enough on the likes of a
VIA C6 -- nano/Geode/Atom [2], embedded PPC [3], or even an ARMish single-core
[4] - with or without DSP etc. on-substrate.


Faster storage and networks now matter far more than faster local CPU.

The ratio of these 'goodies', and their benefits to the population in general,
to the count of supercomputers [5] and near-real-time video-stream processors
[6] is - and will remain - extremely lopsided in favor of the small 'appliance'.


Those hyping multi-multi-core for the consumer 'PC' market are locked into an
obsolete behaviour pattern.


Lower power consumption, smaller form-factors, better display and input
interfaces, and faster networking are where the need lies.


Nothing yet shipped can match the effectiveness of an experienced Wife or 
Personal Assistant (human) at the other end of an ordinary phone line when (s)he 
has *anticipated* your needs and called you *before* you recognized the need 
yourself.


Code THAT into silicon, teach it to cook, and you still have a lousy 
bed-partner...


Bill Hacker


[1] Anything not horribly wasteful (e.g. not Windows), such as Plan9, any *BSD,
the leaner Linuxes (Vector/Slackware), Haiku - all make a more than fast enough
desktop on any single-core of 700 MHz or better, even if dragging X-Windows and
the like around as a boat-anchor.


[2] Laptops and Netbooks

[3] Embedded high-end. Game boxen, Ford and other motor cars

[4] A large percentage of PDAs and telecoms handhelds

[5] Devilishly hard to substitute for, SETI et al notwithstanding, but needed in 
relatively small numbers vs, for example, a mobile phone or automobile 
fuel/pollution reduction system.


[6] Given the preponderance of dreck spewed from television and cinema, 
civilization could well be better-off if all such devices on the planet went on 
a long holiday and humans returned to actually paying attention to one another.




Re: [9fans] Barrelfish

2009-10-19 Thread matt

erik quanstrom wrote:



> you motivated me to find my copy of _high speed
> semiconductor devices_, s.m. sze, ed., 1990.


which motivated me to dig out the post I made elsewhere:

Moore's law doesn't say anything about speed or power. It says that
manufacturing costs will fall through technological improvements such that
the reasonably-priced transistor count in an IC doubles every 2 years.


And here's a pretty graph 
http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2008.svg


The misunderstanding leads people to say such twaddle as "Moore's Law,
the founding axiom behind Intel, means that chips get exponentially faster."


If we pretend that 2 years = double speed then roughly:
The 1993 66MHz P1 would now be running at 16.9GHz
The 1995 200MHz Pentium would now be 25.6GHz
The 1997 300MHz Pentium would now be 19.2GHz
The 1999 500MHz Pentium would now be 16GHz
The 2000 1.3GHz Pentium would now be 20GHz
The 2002 2.2GHz Pentium would now be 35GHz
The 2002 3.06GHz Pentium would be going on 48GHz by Xmas

If you plot speed vs year for Pentiums you get two straight lines with a
change in gradient in 1999, with the introduction of the P4.
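
Here's the arithmetic behind that list, if anyone wants to rerun it (a
sketch; doublings are rounded down to whole 2-year steps up to 2009, so
the last two entries come out lower than the figures above, which
project a year or so further out):

#include <stdio.h>

int
main(void)
{
	/* the popular misreading: clock speed doubles every 2 years */
	struct { int year; double mhz; } p[] = {
		{1993, 66}, {1995, 200}, {1997, 300},
		{1999, 500}, {2000, 1300}, {2002, 2200}, {2002, 3060},
	};
	int i, now = 2009;

	for(i = 0; i < 7; i++){
		int doublings = (now - p[i].year)/2;	/* whole 2-year steps */
		printf("%d %7.0fMHz -> %6.1fGHz\n", p[i].year, p[i].mhz,
			p[i].mhz*(1 << doublings)/1000.0);
	}
	return 0;
}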






Re: [9fans] Barrelfish

2009-10-19 Thread matt

Eris Discordia wrote:


>> Moore's law doesn't say anything about speed or power.
>
> But why'd you assume people in the wrong (w.r.t. their understanding
> of Moore's law) would measure speed in gigahertz rather than MIPS or
> FLOPS?



because that's what the discussion I was having was about



Re: [9fans] Barrelfish

2009-10-19 Thread erik quanstrom
>> you motivated me to find my copy of _high speed
>> semiconductor devices_, s.m. sze, ed., 1990.
>
> which motivated me to dig out the post I made elsewhere:
>
> Moore's law doesn't say anything about speed or power. It says that
> manufacturing costs will fall through technological improvements such that
> the reasonably-priced transistor count in an IC doubles every 2 years.

this is quite an astounding thread.  you brought
up clock speed doubling and now refute yourself.

i just noted that 48ghz is not possible with silicon
non-quantum-effect tech.

- erik



Re: [9fans] Barrelfish

2009-10-19 Thread matt



> this is quite an astounding thread.  you brought
> up clock speed doubling and now refute yourself.
>
> i just noted that 48ghz is not possible with silicon
> non-quantum-effect tech.
>
> - erik

I think I've been misunderstood. I wasn't asserting the clock speed
increase in the first place; I was hoping to demonstrate what would have
happened if Moore's law were the often-misquoted "speed doubles every 2
years" when measured in GHz (not FLOPS, as noted by Eris).