Re: [9fans] plan9port rendering garbled

2016-08-30 Thread Eris Discordia

Look into hinting and subpixel hinting modes of Freetype2.

For different people different combinations of modes (full, medium, 
slight, none for hinting; 0/1/2 for subpixel hinting) give optimal results.


Hinting is usually set via symlinks under /etc/fonts/conf.d/ pointing to 
what's available under /etc/fonts/conf.avail. Subpixel hinting is set via the 
environment variable FT2_SUBPIXEL_HINTING.
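
On a typical Ubuntu setup that amounts to something like the following (a 
rough sketch only; the conf.avail file names vary between distributions and 
fontconfig versions, so check what /etc/fonts/conf.avail actually contains):

  cd /etc/fonts/conf.d
  sudo rm -f 10-hinting-*.conf                       # drop the current hinting choice
  sudo ln -s ../conf.avail/10-hinting-slight.conf .  # or -none, -medium, -full
  export FT2_SUBPIXEL_HINTING=1                      # 0, 1 or 2, per taste

Then restart the application (or log out and back in) so the new fontconfig 
configuration and the environment variable are picked up.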



On 08/30/2016 05:08 PM, Skip Tavakkolian wrote:
Sorry to post here; I've not been able to post the issue to plan9port 
on github.


after updating to ubuntu 16.04 (from 14.04), i see garbled fonts in 
sam/acme/9term, etc. it doesn't seem to matter if native or host fonts 
are used (via fontsrv). attached is what it looks like in acme.


i've been looking at p9p's devdraw and i'm suspecting something with 
Xlib calls has changed slightly.






Re: [9fans] OT: What linux has become

2014-08-12 Thread Eris Discordia
That's mainly interpersonal politics. Poettering probably pounded him 
too hard one time.


He isn't giving a technical refutation of systemd, even though one is 
actually quite possible.


Why shouldn't someone turn the ranter on to LFS instead? Someone with 20 
years of so-called loyalty to, and evangelism of, Debian would surely know 
the horrors of Debian's SysV init script maze and should be able to roll 
his own, free of all distro nastiness.


On 08/13/2014 04:53 AM, Aharon Robbins wrote:

http://lkml.iu.edu//hypermail/linux/kernel/1408.1/02496.html

Someone should turn this guy on to Plan 9. :-)

Arnold






Re: [9fans] Small 9pcram update

2013-03-20 Thread Eris Discordia

It's the ccTLD for South Georgia and the South Sandwich Islands. Of course,
that's a fancy domain he is using. I'm tempted to reserve ho.gs and clo.gs.

On Wed, 2013-03-20 at 11:40 +0200, lu...@proxima.alt.za wrote:
> > [Postscript:  Sorry for the paid placements.  Bandwidth costs money.]
> 
> I kinda like the paid placements, they are refreshing, if a bit
> incoherent.  Make what you like of that, sadly I won't be able to
> contribute :-(
> 
> ++L
> 
> PS: where's ".gs"?
> 
> 






Re: [9fans] Plan 9 on VIA C7

2011-07-09 Thread Eris Discordia
Again, great point. An S3C2440-based evaluation board I experimented with 
worked surprisingly well at 50+ degrees Celsius ambient temperature, with no 
ventilation, running continuously for over two months until somebody turned 
it off. And it wasn't even near industrial grade. Linux support for that 
SoC is quite mature, too.



--On Friday, July 08, 2011 21:13 -0700 ron minnich  
wrote:



On Fri, Jul 8, 2011 at 5:26 PM, Eris Discordia 
wrote:

If given a choice I'd go with something that does not generate
the heat in the first place.


agree. Get an ARM :=)

ron





Re: [9fans] Plan 9 on VIA C7

2011-07-08 Thread Eris Discordia
Very good point. And an extremely tempting experiment you have introduced 
me/us to out of your mighty rucksack. It could prove to be my downfall, 
buying a few more PC-104 (they don't need to be PC-104+, right?) Geode boards 
(I already have one based on the LX 800). Thank you :-)


Then again, even without active cooling, heat does flow more rapidly from a 
hotter source to the ambient reservoir than from a cooler one. However, even 
if you manage to get as cool as the cooler source by throwing in lightweight 
active cooling, you have barely arrived at the starting line. Besides, as you 
know far, far better than I, with final temperatures equal the heat dissipated 
still shows up on your electricity bill. (It may be negligible when running 
just a few such cores.) If given a choice I'd go with something that does not 
generate the heat in the first place.



--On Friday, July 08, 2011 14:59 -0700 ron minnich  
wrote:



Systems that get hot as hell can need a surprisingly small amount of
air motion to cool down.

I built this minicluster that got incredibly hot, i.e. you could burn
yourself on it:
http://tinyurl.com/3o229ho

What you see strapped to it is a 12V fan from a dell desktop which I
ran at 5V, not 12V (a trick I learned from John DeGood). Very little
air had to move, it was noiseless, and it all cooled right down. You
don't need huge noisy fans in all cases.

ron





Re: [9fans] Plan 9 on VIA C7

2011-07-08 Thread Eris Discordia
Despite being touted as fanless, and despite most C7-based boards being 
equipped only with heatsinks, they get hot as hell. Right now I'm 
experimenting on a board (custom form factor) built around a 1.2 GHz VIA 
Eden with the CX700 chipset, running FreeBSD 8.2-RELEASE (1 GB RAM, 8 GB IDE 
SSD, networked, and an external HDD over USB). The following make it 
undesirable:


1. the temperature when it's passively cooled,

2. frequent unexplained spontaneous reboots (botched ACPI? missed IRQs?),

3. a low-quality NIC chip (RTL-81xx family, in this case the 8139, a.k.a. 8100).

Lowering the CPU frequency to 400 MHz does not help, either.

I suggest getting an Atom-based board instead, if it makes sense for you.

P.S. As a last resort I'm trying to go without ACPI to see if the thing 
will stay up for more than a week.


P.P.S. Various steppings of the C3 family, too, caused various headaches 
(with Longhaul or VIA Padlock). Overall, VIA's track record in this field 
is not remarkable.



--On Wednesday, July 06, 2011 11:34 -0700 Akshat Kumar 
 wrote:



Looking to get the following motherboard:

Jetway VIA C7 1.5GHz CN700

It would work well to house 2 IDE and 1 SATA
drives. Has anyone tried Plan 9 on this, before
I commit $100 to it?


Thanks,
ak





Re: [9fans] sheevaplug catatonic

2011-01-16 Thread Eris Discordia

Another possibility -- the omap "on chip" firmware will use SD-based
uboot image if it finds it, instead of flash. If the plug plays by the
same rules, you might be able to put a known-good u-boot on an SD and
use that image.


Marvell chips lack that. TI and Samsung were a tad more thoughtful.


--On Sunday, January 16, 2011 12:03 -0800 ron minnich  
wrote:



On Sun, Jan 16, 2011 at 11:41 AM, erik quanstrom 
wrote:


i haven't cracked the case.  does anyone know if there's
a jtag port at all in a sheevaplug?


Another possibility -- the omap "on chip" firmware will use SD-based
uboot image if it finds it, instead of flash. If the plug plays by the
same rules, you might be able to put a known-good u-boot on an SD and
use that image.

ron









Re: [9fans] sheevaplug catatonic

2011-01-16 Thread Eris Discordia

no output from uboot at all.  no output of any kind.


Assuming it isn't hardware death the only option I can see is re-flashing.


i haven't cracked the case.  does anyone know if there's
a jtag port at all in a sheevaplug?


Even if there's no ready-to-use JTAG port on the board, there will almost 
surely be a site on the PCB where you can add a pin header or box header 
(requires some soldering). From there it should be straightforward: identify 
the site and pinout (ARM JTAG comes in 10- and 20-pin varieties), then 
connect your JTAG board/box.


I have tried that with a bricked Patriot NAS that used a Marvell 88F5182. 
The device had neither a serial port nor JTAG, but both facilities had been 
provided for on the PCB. Now it has a serial console and can be un-bricked 
at will.


P.S. The other two replies show Marvell has been more considerate than Patriot.



--On Sunday, January 16, 2011 14:41 -0500 erik quanstrom 
 wrote:



On Sun Jan 16 14:38:27 EST 2011, eris.discor...@gmail.com wrote:

Is it configured to use a serial console? Tried checking the output
there?


yes.  that's how i set it up to pxe from the auth server.


What's the output from uBoot, if any? As a last resort, tried
re-flashing  it over JTAG?


no output from uboot at all.  no output of any kind.

i haven't cracked the case.  does anyone know if there's
a jtag port at all in a sheevaplug?

- erik









Re: [9fans] sheevaplug catatonic

2011-01-16 Thread Eris Discordia
Is it configured to use a serial console? Tried checking the output there? 
What's the output from uBoot, if any? As a last resort, tried re-flashing 
it over JTAG?


--On Saturday, January 15, 2011 15:09 -0500 erik quanstrom 
 wrote:



has anyone else had a sheevaplug go catatonic?
mine reset yesterday and now no longer responds
to the usb/serial interface and the ethernet lights
are stuck.

- erik





Re: [9fans] Google code-in?

2010-11-12 Thread Eris Discordia
The compound 'code-in' follows the pattern of 'be-in' as in 'Human be-in.' 
You can google that.


--On Friday, November 05, 2010 15:23 -0400 Jacob Todd 
 wrote:




Code-in? Could you elaborate?
On Nov 5, 2010 1:22 PM, "EBo"  wrote:

Google just announced a code-in. Is Plan9 participating?

EBo --










Re: [9fans] quote o' the day

2010-03-28 Thread Eris Discordia

In fact, we have both printed on paper hanging from the wall of the
corridor near our office. Let's hope they learn.


Learn to...

1. ... not comment their code?

2. ... not include usage instructions?

3. ... not heed that their code might need to compile on any one of a 
number of platforms that are far from glitch-free?


4. ... not include a preamble introducing their file, automatically 
assuming they work in "clean environs" where nobody except people they know 
on a face-to-face basis commits to their code repository?


5. ... not accommodate their user base insisting they know better what's 
good for the users thereby dramatically cutting down the number of people 
who may want to merely use, and not hack, their code?


6. ... forget to see past appearances in others' code instead of simply and 
rationally counting the lines of code in the body of function 'simple_cat' 
for a proper comparison of equivalent functionality between a feature-heavy 
'cat' and a minimalist 'cat' each with its own merits?


7. ... avoid provisioning for a time when 'coreutils,' in order to become 
feature-heavy, will inevitably contain copious amounts of code that needs to 
be amenable to automated testing and documentation?


8. ... avoid any secondary optimization of their first solution under the 
illusion that every optimization counts as the dreaded "premature 
optimization?"


9. ... condescendingly refuse to write or maintain code that is capable of 
cooperation with a dominant archaic design which can only be phased out 
gradually?


10. ... allow themselves to be flattered by agreement from the close-knit 
community of like-minded developers, fully closing their minds to the 
potential merits of functionally rival software?



Never mind my trolling. I just needed to attention-whore. Continue, please.



--On Thursday, March 25, 2010 22:17 +0100 Francisco J Ballesteros 
 wrote:



As a example for our students we use

http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/cat.c;hb
=HEAD

versus

http://plan9.bell-labs.com/sources/plan9/sys/src/cmd/cat.c

In fact, we have both printed on paper hanging from the wall of the
corridor near our office. Let's hope they learn.


On Thu, Mar 25, 2010 at 7:51 PM,   wrote:

in a similar vein, there's this handy guide on how to make your life
really hard in 11 easy steps:

http://www.pixelbeat.org/docs/unix_file_replacement.html

make sure you check out the final copy.c linked at the bottom of the
page


It's a sign of the apocalypse.  The configuration of the 6th edition
kernel Lions presented was about 10,000 lines of code.  This version
of cp is nearly 1/4 of that, and the function copy_internal() is over
1000 lines long.  I'm clearly not smart enough to function in a world
where cp is that complex...

Back to real work...again...for real this time...I promise...
BLS













Re: [9fans] pppoe on Plan 9

2010-02-22 Thread Eris Discordia
The IP address is probably that of the service provider's DNS server, for 
use on machines other than the one that establishes the PPPoE connection.


--On Monday, February 22, 2010 10:02 -0800 Russ Cox  wrote:


I got a username, a password, and an IP address from the Internet
provider. How do I give this information to ip/pppoe?


It should just prompt you (via factotum) for the username/password.
The IP address should be irrelevant - the ppp server
will tell you the IP address anyway.

Russ





Re: [9fans] dataflow programming from shell interpreter

2010-01-22 Thread Eris Discordia

i don't think a direct mapping of COM to Plan 9 fs model is
necessary.  for example, instead of mapping every control or
configuration interface and method to synthetic directories and files,
a single ctl file will do.


It didn't occur to me at all that anyone would want to implement DirectShow 
or anything like that on Plan 9. Anyhow, I suppose if anyone's going to do 
that they should probably first work on fast display drivers that leverage 
modern cards' overlay capabilities, plus a facilitating media infrastructure 
equivalent to DirectX. On run-of-the-mill PCs good video work depends a 
lot on software support for the video hardware, of course.


The logic inside most DirectShow filters either is open source (like 
ffdshow) or has good open source equivalents. The interfacing (COM), as you 
have noted, and input/output, which is hardware-dependent and therefore 
probably weakly developed in Plan 9 (I don't really have an idea, just 
guessing), are the missing bits from a DirectShow-like (multi-pipe) video 
processing pipeline on Plan 9.



--On Thursday, January 21, 2010 13:36 -0800 Skip Tavakkolian 
<9...@9netics.com> wrote:



Aren't DirectShow filter graphs and programs like GraphStudio/GraphEdit
one  possible answer to the video processing question? Filter graphs can
be  generated by any program, GUI or CLI, and fed to DirectShow provided
one  learns the ins and outs of generating them.


DirectShow is COM; source/mux/transform/sink filters must provide a
number of interfaces (e.g.  IFileSinkFilter); other components
(e.g.  GraphBuilder) are there to make it easier to hook them
together.

i don't think a direct mapping of COM to Plan 9 fs model is
necessary.  for example, instead of mapping every control or
configuration interface and method to synthetic directories and files,
a single ctl file will do.  something like this seems sufficient:

/ctl        # e.g. accepts run, stop, etc.  returns: paused, #outputs, config, etc.
/event      # instead of callback notification
/ipin/clone
/ipin/n/ctl
/ipin/n/event
/ipin/n/data
/opin/clone
/opin/n/ctl
/opin/n/event
/opin/n/data

for a special purpose kernel one could add a driver and a fancy new
hook syscall (similar to pushssl and '#D') that would hook two fd's
together to eliminate the need for a user proc to transfer between
ipin/?/data and opin/?/data.






Re: [9fans] dataflow programming from shell interpreter

2010-01-20 Thread Eris Discordia
Aren't DirectShow filter graphs and programs like GraphStudio/GraphEdit one 
possible answer to the video processing question? Filter graphs can be 
generated by any program, GUI or CLI, and fed to DirectShow provided one 
learns the ins and outs of generating them.


The OP's question, too, finds one answer in MS PowerShell where instead of 
byte streams .NET objects are passed between various tools and a C#-like 
shell language is used for manipulating them. .NET objects can at any point 
be serialized/deserialized to/from XML using stock classes and routines in 
System.Xml.Serialization namespace.


Just a note that at least some implementations of both ideas exist in 
production settings.



--On Tuesday, January 19, 2010 15:40 + Steve Simon  
wrote:



The PBM utilities (now net pbm) did something similar for bitmaps.
I think V10 also had some pipeline utils for manipulating images.


Indeed, however I make a firm distinction between image processing (2d)
and video processing (3d).

In video processing the image sequences can be of arbitrary length, the
processing is often across several fields, and, because we want our
results ASAP tools should present the minimum delay possible (e.g. a
gain control only needs a one pixel buffer).

Additionally, image processing pipelines often have nasty things like
feedback loops and mixing different paths with differing delays which all
need special care.

We have a package of good old unix tools developed jointly by us and the
BBC which works as you might expect

cat video-stream | interpolate -x 0.7 -y 0.3 | rpnc - 0.5 '*' | display

however this can get quite ugly when the algorithm gets complex.

We need to cache intermediate results - processing HD (let alone 2k 3d)
can get time consuming so we want an environment which tee's off
intermediate results automagicially and uses them if possible - sort of
mk(1) combined with rc(1).

It is also a pain that its not easy to work at different scales i.e.
writing expressions to operate at the pixel level and using large blocks
like interpolate, the rpnc is an attempt to do this but its interpreted
(slow).

a restricted rc(1)-like language which supports pipelines,
and scalar (configuration) variables combined with a JIT compiler
(in the vein of popi) looks like a solution but I have never got further
than wishful thinking.

-Steve









Re: [9fans] grëp (rhymes with creep) and cptmp

2009-11-30 Thread Eris Discordia

$ time grëp Obergruppenfuhrersaal *


Touché :-)


--On Monday, November 30, 2009 01:52 -0600 Jason Catena 
 wrote:



hey, this is great stuff!  i really like the approach.


Thank you.  It evolved from wanting to cut-and-paste character
classes, to automatically applying them to test them.  I suppose the
character classes file could be useful in other applications that
selectively don't want to care about accents.

I added a dash-and-hyphen class, keyed to the hyphen-minus as the
first character (since it's overused), so I had to change the sed
command.

sed '/^\[.+-/d;...

I also now "rm $classes" at the end, of course, though I guess it now
doesn't exit with the exit status of grep.  I should probably save
$status after the grep command, and exit with it.  Or, save the
expanded regex in a new shell variable, rm $classes, then grep with
the new shell variable so the grep is the last command.


the patterns get really big in a hurry.


Agreed.  Part of grep's job is to be a regex engine, so I thought in
general it would be okay to push it here.


i played with this a little bit, but quickly ran into problems.



"reasonable" re size limits of say 300 characters
just don't work if you're doing expansion.  expanding "cooperate"
results in a 460-byte string!


Where does this 300-character limit come from?  If you code them by
hand I agree that a 300 character regex could be hard to fully
understand.  The regexes this script generates are very simple in
structure and (ahem) regular, so I'd be inclined to allow them past a
size restriction based on style.  As far as time and space required to
wade through the character sets, I haven't yet run into performance
problems or actual failures in my tests.

$ which grep
/usr/local/plan9/bin/grep

$ wc *|tail -1
  17655  118910  774237 total

$ time grëp Obergruppenfuhrersaal *
wewelsburg:155: (1938–1943): The "Obergruppenführersaal" (SS Generals'
Hall) and wewelsburg:161: floor of the "Obergruppenführersaal" lie on
this axis.  Both redesigned
wewelsburg:180: The "Obergruppenführersaal" (SS Generals' Hall).  On the
ground wewelsburg:181: floor the "Obergruppenführersaal" (literally
translated: wewelsburg:236: castle, in the so-called
Obergruppenführersaal
("Obergruppenführer
0.00u 0.03s 0.03r grëp Obergruppenfuhrersaal 0–31acme 0–31i850 1920s ...

0.03 was the biggest result I got in practice.  The first run had 0.02
user time.  This seems negligible to me, so I'm not yet pushing its
performance boundaries with this string (lots of vowels and other
characters with bigger classes) on this data set (a collection of
notes largely cut-and-pasted from the web).


- erik


Jason Catena









Re: [9fans] Go

2009-11-11 Thread Eris Discordia

arabic numeral 9 is very close: ۹


Puny pedantry: that's a Hindi/Indic numeral. 9 is already an "Arabic 
numeral."


If playing on numerals is allowed why shouldn't they call it IXgo or even 
Kyuugo?



--On Tuesday, November 10, 2009 22:47 -0800 Skip Tavakkolian 
<9...@9netics.com> wrote:



Another thorny
issue is what to name the package, since you can't start a
package name with a digit.


arabic numeral 9 is very close: ۹










Re: [9fans] Pictures from IWP9?

2009-11-07 Thread Eris Discordia

Well, stretching things a little, but there's a certain element to
cultures stretching from West Africa to East India.


Backwardness? Terrorism? Misogyny? Pedophilia? Goat herding? Deserts? Brown 
people?


"West Africa to East India," you said:

<http://upload.wikimedia.org/wikipedia/commons/2/21/Stoddard_race_map_1920.jpg>




--On Saturday, November 07, 2009 10:54 + Ethan Grammatikidis 
 wrote:




On Thu, 05 Nov 2009 23:41:35 +, "Eris Discordia"
 said:

> Ah, bad middle-eastern humour. :} I haven't heard any of this since my
> father passed away. ;)

When exactly did India subscribe to the Mideast mailing list?


Well, stretching things a little, but there's a certain element to
cultures stretching from West Africa to East India.




--On Thursday, November 05, 2009 16:48 + Ethan Grammatikidis
 wrote:

>
> On Thu, 5 Nov 2009 09:35:08 GMT, "Balwinder S Dheeman"
>  said:
>> On 11/05/2009 01:11 AM, Michaelian Ennis wrote:
>> > On Mon, Nov 2, 2009 at 4:56 AM, Jonas A 
>> > wrote:
>> >> Does anyone have pictures from the workshop?
>> >
>> > Ok I didn't take as many as I thought either.  Here's a link to my
>> > photos.
>> >
>> > http://snipurl.com/t25kq
>>
>> Hmm... Now I guess well... Why Plan9 geeks feel the need of wife?
>>
>> Since all Plan9 geeks possess a large fortune db and according to the
>> Law of Jane Austen, It is a universal truth that a single man with a
>> large fortune is indeed definitely needs of a wife ...
>
> Ah, bad middle-eastern humour. :} I haven't heard any of this since my
> father passed away. ;)
>











Re: [9fans] Announcing ninefs for win32

2009-11-05 Thread Eris Discordia
'dokan /i a' fails to install the (kernel mode) driver but succeeds in 
installing the mounter service. There's a build of Dokan for Vista x64 but 
none for XP x64.


Thanks for the tip. I won't post here on this matter anymore (unless I find 
a definite solution in which case I'll post a succinct description for 
whoever may be interested).




--On Thursday, November 05, 2009 14:33 -1000 Tim Newsham  
wrote:



I just downloaded the binaries and tried:


ninefs -cDd sources.lsub.org z


This resulted in:


<<< Tversion tag 65535 msize 8216 version '9P2000.u'
>>> Rversion tag 65535 msize 8216 version '9P2000'
<<< Tattach tag 0 fid 0 afid -1 uname nobody aname
>>> Rattach tag 0 qid  (0002 5cabc3 'd')
Dokan: debug mode on
Dokan: use stderr
Dokan Error: CreatFile Failed : 2
dokan main: fffd


What am I doing wrong? :(


This is prob best done on the ninefs mailing list to
save the 9fans who aren't interested in windows stuff.

Most likely when you ran "dokan /i a" (if you did, at all)
it failed.  If you're running on windows xp x64, you will
need a dokan.sys that is compiled for your platform.  The
prebuilt one is for the 32-bit kernel.

Tim Newsham | www.thenewsh.com/~newsham | thenewsh.blogspot.com









Re: [9fans] Announcing ninefs for win32

2009-11-05 Thread Eris Discordia

This is great news, I guess.

I just downloaded the binaries and tried:


ninefs -cDd sources.lsub.org z


This resulted in:


<<< Tversion tag 65535 msize 8216 version '9P2000.u'
>>> Rversion tag 65535 msize 8216 version '9P2000'
<<< Tattach tag 0 fid 0 afid -1 uname nobody aname
>>> Rattach tag 0 qid  (0002 5cabc3 'd')
Dokan: debug mode on
Dokan: use stderr
Dokan Error: CreatFile Failed : 2
dokan main: fffd


What am I doing wrong? :(

I know 'ninefs.exe' successfully resolves the host and attempts to connect 
to it on port 564 but have no idea what happens afterwards that results in 
this error and no Z: for me.


(This is on Windows XP x64.)



--On Thursday, November 05, 2009 11:53 -1000 Tim Newsham  
wrote:



I'd like to announce ninefs for win32.  This is a Dokan
based 9p filesystem driver for win32 systems built with
npfs.  This is an early release intended for the bolder
user.  I've set up a mailing list for the project so
please direct feedback there.

   http://code.google.com/p/ninefs/
   http://ninefs.googlecode.com/files/README.txt
   http://groups.google.com/group/ninefs

Tim Newsham | www.thenewsh.com/~newsham | thenewsh.blogspot.com









Re: [9fans] Pictures from IWP9?

2009-11-05 Thread Eris Discordia

Ah, bad middle-eastern humour. :} I haven't heard any of this since my
father passed away. ;)


When exactly did India subscribe to the Mideast mailing list?


--On Thursday, November 05, 2009 16:48 + Ethan Grammatikidis 
 wrote:




On Thu, 5 Nov 2009 09:35:08 GMT, "Balwinder S Dheeman"
 said:

On 11/05/2009 01:11 AM, Michaelian Ennis wrote:
> On Mon, Nov 2, 2009 at 4:56 AM, Jonas A  wrote:
>> Does anyone have pictures from the workshop?
>
> Ok I didn't take as many as I thought either.  Here's a link to my
> photos.
>
> http://snipurl.com/t25kq

Hmm... Now I guess well... Why Plan9 geeks feel the need of wife?

Since all Plan9 geeks possess a large fortune db and according to the
Law of Jane Austen, It is a universal truth that a single man with a
large fortune is indeed definitely needs of a wife ...


Ah, bad middle-eastern humour. :} I haven't heard any of this since my
father passed away. ;)









Re: [9fans] sed question (OT)

2009-10-30 Thread Eris Discordia

Listing of file 'sedscr':


# prepend a space and append a lowercase-to-uppercase lookup table
s/^/ /;
s/$/aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ/;
# twice: find a space followed by a lowercase letter, locate that letter's
# last occurrence (inside the table) and copy the character that follows it
# there, i.e. its uppercase twin, over the word's first letter
s/ \([a-z]\)\(.*\1\)\(.\)/ \3\2\3/;
s/ \([a-z]\)\(.*\1\)\(.\)/ \3\2\3/;
# strip the 52-character table and the leading space
s/.\{52\}$//;
s/ //;


$ echo This is a test | sed -f sedscr
This Is a test
$ echo someone forgot to capitalize | sed -f sedscr
Someone Forgot to capitalize

This works with '/usr/bin/sed' from a FreeBSD 6.2-RELEASE installation.

Above sed script stolen from:



With a minor change: first three words to first two words.




--On Thursday, October 29, 2009 15:41 + Steve Simon 
 wrote:



Sorry, not really the place for such questions but...

I always struggle with sed, awk is easy but sed makes my head hurt.

I am trying to capitalise the first two words on each line (I could use
awk as well but I have to use sed so it seems churlish to start another
process).

capitalising the first word on the line is easy enough:

h
s/^(.).*/\1/
y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/
x
s/^.(.*)/\1/
x
G
s/\n//

Though there may be a much easier/more elegant way to do this,
but for the 2nd word it gets much harder.

What I really want is sam's ability to select a letter and operate on it
rather than everything being line based as sed seems to be.

any neat solutions? (extra points awarded for use of the branch operator
:-)

-Steve









Re: [9fans] sed question (OT)

2009-10-30 Thread Eris Discordia
The script has a small "bug," one might say: it capitalizes the first two 
words on a line that are _not_ already capitalized. So if one of the first 
two words is already capitalized, the third one gets capitalized instead.
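
A possible fix (a sketch I have only traced through on paper, so test it 
before trusting it) is to anchor the two capitalizing substitutions so that 
the first can only touch the first word and the second only the second word:

s/^/ /;
s/$/aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ/;
s/^ \([a-z]\)\(.*\1\)\(.\)/ \3\2\3/;
s/^\( [^ ][^ ]* \)\([a-z]\)\(.*\2\)\(.\)/\1\4\3\4/;
s/.\{52\}$//;
s/ //;

If I've traced the backreferences right, this still turns 'someone forgot to 
capitalize' into 'Someone Forgot to capitalize' and 'This is a test' into 
'This Is a test', while leaving 'This Is a test' untouched.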


--On Thursday, October 29, 2009 15:41 + Steve Simon 
 wrote:



Sorry, not really the place for such questions but...

I always struggle with sed, awk is easy but sed makes my head hurt.

I am trying to capitalise the first two words on each line (I could use
awk as well but I have to use sed so it seems churlish to start another
process).

capitalising the first word on the line is easy enough:

h
s/^(.).*/\1/
y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/
x
s/^.(.*)/\1/
x
G
s/\n//

Though there may be a much easier/more elegant way to do this,
but for the 2nd word it gets much harder.

What I really want is sam's ability to select a letter and operate on it
rather than everything being line based as sed seems to be.

any neat solutions? (extra points awarded for use of the branch operator
:-)

-Steve









Re: [9fans] go to this site

2009-10-27 Thread Eris Discordia
I was curious (not that I had any hope of understanding what's going on) so 
I visited the place. I got this:



HTTP/1.0 500 Internal Server Error
Date: Tue, 27 Oct 2009 12:58:31 GMT
Server: Apache/2.2.11 (Debian) PHP/5.2.6-0.1+b1 with Suhosin-Patch
mod_python 3.3.1 Python/2.5.2 mod_wsgi/2.3
Content-Type: text/html; charset=utf-8
[...]
[...]
[...]
[...]
Via: 1.0 nebula.nasa.gov, [...]
Connection: close
12:59:25 ERROR 500: Internal Server Error.


And this,


You broke it. Arg.


in the response body, when I visited it through an open proxy server in 
another country.


But visiting from my own place results in,


HTTP/1.1 200 OK
Date: Tue, 27 Oct 2009 12:56:33 GMT
Server: Apache/2.2.11 (Debian) PHP/5.2.6-0.1+b1 with Suhosin-Patch
mod_python/3.3.1 Python/2.5.2 mod_wsgi/2.3
Vary: Cookie
Content-Type: text/html; charset=utf-8
Via: 1.0 nebula.nasa.gov
Connection: close


And a "normal" index.html.

Wonder why the response depends on location.



--On Monday, October 26, 2009 19:39 -0700 ron minnich  
wrote:



On Mon, Oct 26, 2009 at 6:52 PM, Latchesar Ionkov 
wrote:

Did anybody come up with cloud management software called Zeus or
Jupiter yet?


interesting. I got
arg. you broke it.

and you guys got the web page and I just did.

And, yes, it's another !@@#$! cloud.

But why is USG competing with amazon?

ron









Re: [9fans] Barrelfish

2009-10-19 Thread Eris Discordia

"Moore's law doesn't say anything about speed or power.


But why'd you assume "people in the wrong" (w.r.t. their understanding of 
Moore's law) would measure "speed" in gigahertz rather than MIPS or FLOPS?




--On Tuesday, October 20, 2009 02:38 +0100 matt  
wrote:



erik quanstrom wrote:



you motivated me to find my copy of _high speed
semiconductor devices_, s.m. sze, ed., 1990.




which motivated me to dig out the post I made elsewhere :

"Moore's law doesn't say anything about speed or power. It says
manufacturing costs will lower from technological improvements such that
the reasonably priced transistor count in an IC will double every 2 years.

And here's a pretty graph
http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_20
08.svg

The misunderstanding makes people say such twaddle as "Moore's Law,
the founding axiom behind Intel, that chips get exponentially faster".

If we pretend that 2 years = double speed then roughly :
The 1993 66Mhz P1 would now be running at 16.9Ghz
The 1995 200Mhz Pentium now would be 25.6Ghz
The 1997 300Mhz Pentium now would be 19.2Ghz
The 1999 500Mhz Pentium now would be 16Ghz
The 2000 1.3Ghz Pentium now would be 20Ghz
The 2002 2.2Ghz Pentium would now be 35Ghz
The 2002 3.06Ghz Pentium would be going on 48Ghz by Xmas

If you plot speed vs year for Pentiums you get two straight lines with a
change in gradient in 1999 with the introduction of the P4"








Re: [9fans] utf-8 text files from httpd

2009-10-19 Thread Eris Discordia
The decision whether to open in place or save to disk based on MIME type is 
up to the browser. For example, I set my browsers to ask to save to disk 
application/pdf documents (rather than opening them with Adobe Acrobat's 
problematic plugin). A MIME type of text/plain (without any specification of 
encoding) is correct (and expected by any mainstream browser) for text 
files. Opera opens those by default but can be set to do any one of a 
variety of tasks when encountering text/plain. All mainstream browsers also 
include encoding autodetection routines which may or may not fail depending 
on your file's contents. All mainstream browsers also allow you to select 
an encoding to decode and view your document in.


Assuming the right bytes arrive at your client it is always possible to 
read the file in the right encoding. The encoding specified in the response 
header has no say in the bytes that are transmitted.


If your "any browser" includes Opera try Preferences > Advanced > Downloads 
(Uncheck "Hide file types opened with Opera") > Quick Search text/plain > 
Edit > Action: Open with Opera (if the setting has been altered). Then 
retry visiting your remote file. Even if the response header declares the 
wrong encoding (ISO-8859-1, EUC-KR, whatever) or no encoding at all, Opera 
should retrieve the document and display it. If the display is wrong, 
try View > Encoding > Unicode > UTF-8.


The behavior you describe of "having to download the file" and "characters 
being garbled" is not "any browser" sort of behavior. Neither Opera, nor 
Firefox, nor Chrome display such behavior for the example I have supplied 
below.


If all else fails... why not wget -S [URI] and check (and probably post) 
the response header?


This resource, for example:



results in this response header:


  HTTP/1.1 200 OK
  Date: Sun, 18 Oct 2009 10:45:56 GMT
  Server: Apache
  X-Powered-By: PHP/5.2.8-pl2-gentoo
  Cache-Control: no-store, no-cache
  Connection: close
  Content-Type: text/plain


And there's no problem whatsoever with its display in either Opera, Chrome, 
or Firefox. Opera Info Panel says, by the way:



Encoding (used by Opera):
- not supplied - (windows-1252)
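
Of course, if the server could be persuaded to declare the charset 
explicitly, i.e. to send

  Content-Type: text/plain; charset=utf-8

no autodetection would be needed at all. That's plain HTTP; whether Plan 9's 
httpd has a knob for it I don't know, so take it as a general note rather 
than an httpd recipe.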





--On Sunday, October 18, 2009 20:34 -0400 Akshat Kumar 
 wrote:



I'm trying to put up a plain text file containing UTF-8
characters from httpd, but when viewing it from any
browser, it comes off as an ASCII file that needs to
be downloaded (so, those characters are garbled).
Is this due to some behaviour of httpd?

ak





Re: [9fans] Barrelfish

2009-10-18 Thread Eris Discordia

Could be wrong, but I think he's referring to the SPURS Engine:
http://en.wikipedia.org/wiki/SpursEngine


I had never seen that but I had encountered news on the Leadtek card based 
on it.


--On Saturday, October 17, 2009 16:18 -0500 Eric Van Hensbergen 
 wrote:



Could be wrong, but I think he's referring to the SPURS Engine:
http://en.wikipedia.org/wiki/SpursEngine

  -eric

On Oct 17, 2009, at 4:07 PM, Steve Simon wrote:


I'm a tiny fish, this is the ocean. Nevertheless, I venture: there
are
already Cell-based expansion cards out there for "real-time"
H.264/VC-1/MPEG-4 AVC encoding. Meaning, 1080p video in, H.264
stream out,
"real-time."


Interesting, 1080p? you have a link?

-Steve












Re: [9fans] Barrelfish

2009-10-18 Thread Eris Discordia

Interesting, 1080p? you have a link?


The one I read long ago:


First Google "sponsored link:"

(This one's an industrial rackmounted machine. No expansion card.)

BadaBoom is just software that uses CUDA:


"Real-time" performance with CUDA can be achieved on (not-so-)recent 
Cell-based GPUs.


BadaBoom did make a boom in fansubbing community. Every group wants an 
"encoding officer" with either an i7 or a highly performing GPU. Custom 
builds of x264 (the most widely used software codec at the moment) already 
can take advantage of multi-core in encoding.



--On Saturday, October 17, 2009 22:07 +0100 Steve Simon 
 wrote:



I'm a tiny fish, this is the ocean. Nevertheless, I venture: there are
already Cell-based expansion cards out there for "real-time"
H.264/VC-1/MPEG-4 AVC encoding. Meaning, 1080p video in, H.264 stream
out,  "real-time."


Interesting, 1080p? you have a link?

-Steve





Re: [9fans] Barrelfish

2009-10-17 Thread Eris Discordia

There is a vast range of applications that cannot
be managed in real time using existing single-core technology.


please name one.


I'm a tiny fish, this is the ocean. Nevertheless, I venture: there are 
already Cell-based expansion cards out there for "real-time" 
H.264/VC-1/MPEG-4 AVC encoding. Meaning, 1080p video in, H.264 stream out, 
"real-time." I can imagine a large market for this in broadcasting, 
netcasting, and simulcasting industries. Simulcasting in particular is a 
prime application. Station X in Japan broadcasts a popular animated series 
in 1080i, while the US licensor of the same content simulcasts for subscribers 
through its web interface. This applies all the more to live feeds.


What seems to go ignored here is the class of embarrassingly parallel 
problems which--while they may or may not be important to CS people, I 
don't know--appear in many areas of applied computing. I know one person 
working at an institute of the Max Planck Society who regularly runs a few 
hundred instances of the same program (doing some sort of matrix 
calculation for a problem in physics) with different input. He certainly 
could benefit from a hundred cores inside his desktop computing platform 
_if_ fitting that many cores in there wouldn't cause latencies larger than 
the network latencies he currently experiences (at the moment he uses a job 
manager that controls a cluster). "INB4" criticism, his input matrices are 
small and his work is compute-intensive rather than memory-intensive.
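
In shell terms the pattern amounts to nothing more than this (a generic 
sketch, not his actual setup; 'solve' and the directory layout are made up 
for illustration):

  # one independent job per input file; output name mirrors input name
  for f in inputs/*
  do
          ./solve "$f" > "results/$(basename "$f")" &
  done
  wait    # all jobs run concurrently; nothing is shared between them

with the job manager's real contribution being to spread those independent 
runs over cluster nodes (and throttle them) instead of forking them all 
locally at once.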


Another embarrassingly parallel problem, as Sam Watkins pointed out, arises 
in digital audio processing. I might add to his example of applying a 
filter to sections of one track the example of applying the same or 
different filters to multiple tracks at once. Multitrack editing was/is a 
killer application of digital audio. Multitrack video editing, too. I 
believe video/audio processing software were among the first applications 
for "workstation"-class desktops that were parallelized.


By the way, I learnt about embarrassingly parallel problems from that same 
Max Planck research fellow who runs embarrassingly parallel matrix 
calculations.




--On Thursday, October 15, 2009 09:27 -0400 erik quanstrom 
 wrote:



On Thu Oct 15 06:55:24 EDT 2009, s...@nipl.net wrote:

task.  With respect to Ken, Bill Gates said something along the lines of
"who would need more than 640K?".


on the other hand, there were lots of people using computers with 4mb
of memory when bill gates said this.  it was quite easy to see how to use
more than 1mb at the time.  in fact, i believe i used an apple ][ around
that time that had ~744k.  it was a weird amount of memory.


There is a vast range of applications that cannot
be managed in real time using existing single-core technology.


please name one.

- erik






Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-21 Thread Eris Discordia
Upon reading further into that study, it seems the Wikipedia editor has 
drawn a distorted conclusion:



In our data sets, the replacement rates of SATA disks are not worse than
the replacement rates of SCSI or FC disks. This may indicate that
disk-independent factors, such as operating conditions, usage and
environmental factors, affect replacement rates more than component
specific factors. However, the only evidence we have of a bad batch of
disks was found in a collection of SATA disks experiencing high media
error rates. We have too little data on bad batches to estimate the
relative frequency of bad batches by type of disk, although there is
plenty of anecdotal evidence that bad batches are not unique to SATA
disks.


-- the USENIX article

Apparently, the distinction made between "consumer" and "enterprise" is 
actually between technology classes, i.e. SCSI/Fibre Channel vs. SATA, 
rather than between manufacturers' gradings, e.g. Seagate 7200 desktop 
series vs. Western Digital RE3/RE4 enterprise drives.


All SATA drives listed have MTTF (== MTBF?) of > 1.0 million hours which is 
characteristic of enterprise drives as Erik Quanstrom pointed out earlier 
on this thread. The 7200s have an MTBF of around 0.75 million hours in 
contrast to RE4s with > 1.0-million-hour MTBF.
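
For a rough sense of what those figures mean per year, assuming the usual 
exponential failure model AFR = 1 - exp(-8760/MTBF) (my simplification, not 
the study's):

  awk 'BEGIN {
          # 8760 = hours in a year; the two MTBF figures are the ones above
          for (mtbf = 750000; mtbf <= 1000000; mtbf += 250000)
                  printf "MTBF %d h -> AFR %.2f%%\n", mtbf, 100 * (1 - exp(-8760 / mtbf))
  }'

which works out to roughly 1.2% per drive-year at 0.75 million hours and 
0.9% at 1.0 million hours.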




--On Tuesday, September 22, 2009 00:35 +0100 Eris Discordia 
 wrote:



What I haven't found is a decent, no frills, sata/e-sata enclosure for a
home system.


Depending on where you are, where you can purchase from, and how much you
want to pay you may be able to get yourself ICY DOCK or Chieftec
enclosures that fit the description. ICY DOCK's 5-bay enclosure seemed a
fine choice to me although somewhat expensive (slightly over 190 USD, I
seem to remember).

-
---

Related to the subject of drive reliability:


A common misconception is that "server-grade" drives fail less frequently
than consumer-grade drives. Two different, independent studies, by
Carnegie Mellon University and Google, have shown that failure rates are
largely independent of the supposed "grade" of the drive.


-- <http://en.wikipedia.org/wiki/RAID>

The paragraph cites this as its source:

--
<http://searchstorage.techtarget.com/magazineFeature/0,296894,sid5_gci125
9075,00.html>
(full text available only to registered users; registration is free,
which begs the question of why they've decided to pester penniless
readers with questions about their "corporation's" number of employees and IT
expenses)

which has derived its content from this study:

<http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.
html>

I couldn't find the other study, "independent" from this first.



--On Monday, September 21, 2009 15:07 -0700 Bakul Shah
 wrote:


On Mon, 21 Sep 2009 16:30:25 EDT erik quanstrom 
wrote:

> > i think the lesson here is don't buy cheap drives; if you
> > have enterprise drives at 1e-15 error rate, the fail rate
> > will be 0.8%.  of course if you don't have a raid, the fail
> > rate is 100%.
> >
> > if that's not acceptable, then use raid 6.
>
> Hopefully Raid 6 or zfs's raidz2 works well enough with cheap
> drives!

don't hope.  do the calculations.  or simulate it.


The "hopefully" part was due to power supplies, fans, mobos.
I can't get hold of their reliability data (not that I have
tried very hard).  Ignoring that, raidz2 (+ venti) is good
enough for my use.


this is a pain in the neck as it's a function of ber,
mtbf, rebuild window and number of drives.

i found that not having a hot spare can increase
your chances of a double failure by an order of
magnitude.  the birthday paradox never ceases to
amaze.


I plan to replace one disk every 6 to 9 months or so. In a
3+2 raidz2 array disks will be swapped out in 2.5 to 3.75
years in the worst case.  What I haven't found is a decent,
no frills, sata/e-sata enclosure for a home system.














Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-21 Thread Eris Discordia

What I haven't found is a decent, no frills, sata/e-sata enclosure for a
home system.


Depending on where you are, where you can purchase from, and how much you 
want to pay you may be able to get yourself ICY DOCK or Chieftec enclosures 
that fit the description. ICY DOCK's 5-bay enclosure seemed a fine choice 
to me although somewhat expensive (slightly over 190 USD, I seem to 
remember).




Related to the subject of drive reliability:


A common misconception is that "server-grade" drives fail less frequently
than consumer-grade drives. Two different, independent studies, by
Carnegie Mellon University and Google, have shown that failure rates are
largely independent of the supposed "grade" of the drive.


-- 

The paragraph cites this as its source:

--

(full text available only to registered users; registration is free, which 
begs the question of why they've decided to pester penniless readers with 
questions about their "corporation's" number of employees and IT expenses)


which has derived its content from this study:



I couldn't find the other study, "independent" from this first.



--On Monday, September 21, 2009 15:07 -0700 Bakul Shah 
 wrote:



On Mon, 21 Sep 2009 16:30:25 EDT erik quanstrom 
wrote:

> > i think the lesson here is don't buy cheap drives; if you
> > have enterprise drives at 1e-15 error rate, the fail rate
> > will be 0.8%.  of course if you don't have a raid, the fail
> > rate is 100%.
> >
> > if that's not acceptable, then use raid 6.
>
> Hopefully Raid 6 or zfs's raidz2 works well enough with cheap
> drives!

don't hope.  do the calculations.  or simulate it.


The "hopefully" part was due to power supplies, fans, mobos.
I can't get hold of their reliability data (not that I have
tried very hard).  Ignoring that, raidz2 (+ venti) is good
enough for my use.


this is a pain in the neck as it's a function of ber,
mtbf, rebuild window and number of drives.

i found that not having a hot spare can increase
your chances of a double failure by an order of
magnitude.  the birthday paradox never ceases to
amaze.


I plan to replace one disk every 6 to 9 months or so. In a
3+2 raidz2 array disks will be swapped out in 2.5 to 3.75
years in the worst case.  What I haven't found is a decent,
no frills, sata/e-sata enclosure for a home system.









Re: [9fans] Simplified Chinese plan 9

2009-09-14 Thread Eris Discordia
I've been, for the time being, officially p9-gagged due to "core-dumping" 
on the list. But thanks anyway for the information. And yes, the Latin 
alphabet does function.


--On Monday, September 14, 2009 09:33 + Paul Donnelly 
 wrote:



eris.discor...@gmail.com (Eris Discordia) writes:


http://thinkzone.wlonk.com/Language/Korean.htm


Interesting. I used to think Korean, too, uses a syllabary. Turns out
it's expressed alphabetically. Expressing Japanese that way would
create some space for confusion as there are certain sounds that never
combine with certain other sounds, e.g. there are 'sa,' 'se,' 'so,'
and 'su' syllables in which 's' is heard just like 's' in 'say' but
there's no 'si'--there's only 'shi.'


Actually, I believe that in Korean, "si" (시, if that displays for you at
all) is pronounced "shi". :P


If there existed an 's' character and also characters for vowels the
invalid combination 'si' could be created in writing. I wonder if
Korean alphabet can be used to make invalid combinations or all
possible combinations correspond to existing phonetic constructs.


Some combinations don't occur. Especially there are diphthongs that don't
occur. But that's not really strange or a problem. Consider the word:
qimk. It doesn't work in English, but the Latin alphabet still
functions.





Re: [9fans] Chaucer on 9fans

2009-09-13 Thread Eris Discordia

"Aye an' a bit of Mackeral settler rack and ruin
Ran it doon by the haim, 'ma place.
Well I slapped me and I slapped it doon in the side and
I cried, cried, cried."
[...]
"Aye! A roar he cried frae the bottom of his heart
That I would nay fall but as dead, dead as 'a can be by his feet;
De ya ken?...
And the wind cried back."



Namoore of this, for Goddes dignitee!!!


It's safe to say, I guess: okay.


--On Sunday, September 13, 2009 20:27 +1200 Andrew Simmons 
 wrote:



Namoore of this, for Goddes dignitee!!!

Myn eres aken of thy drasty speche!

'By God', quod he, 'for pleynley, at a word
Thy drasty posting is nat worth a toord!'





Re: [9fans] Simplified Chinese plan 9

2009-09-12 Thread Eris Discordia

i think you need to read some chaucer.  you are
the boiling frog in a pot of words.


English isn't my native tongue. It's a bit too much to expect me to read 
14th century "stuff" only to understand what probably amounts to an 
affront. You tell me what is "the boiling frog in a pot of words."


--On Saturday, September 12, 2009 10:27 -0400 erik quanstrom 
 wrote:



> These are novel and amusing orthographies and in-crowd jargon and
> nothing more [...]

I think we agree there: I said they were fad.


i think you need to read some chaucer.  you are
the boiling frog in a pot of words.

- erik









Re: [9fans] Simplified Chinese plan 9

2009-09-12 Thread Eris Discordia

Once again, words you use recklessly turn out to have actual definitions.


I am aware of those definitions. Please refer to the Jared Diamond lecture 
titled "The Great Leap Forward" to (gracefully) understand what I am 
talking about. It is supposed in the discussion of language evolution I 
referred to (and Diamond beautifully explains in that lecture) that pidgins 
and creoles may be clues to the "universal language/grammar" contained 
in human genetic heritage: the innate linguistic capability of humankind. 
Those two categories of "proto-languages" show the emergent nature of 
language and that when confronted with a new medium--on a plantation in a 
community of slaves and masters of various origins or in an electronic 
messaging system--humans tend to rework from scratch or from whatever 
available material a complete language guided by their inborn universal 
language. A few generations is all it takes to go from "proto-language" to 
language.


My argument was that in the case of electronic messaging systems the 
"proto-language," while creating new symbols and even new syntax, never 
evolves into a full-blown language no matter how many generations use it 
(to date, at least two consecutive generations). In fact, because it is 
bound to subcultures that come and go, and because it is used to set up 
"cliques" within larger communities of users of the medium, its usage never 
becomes effortless and "natural." The effort required to learn and keep up 
with the flavor of the month is part of the price one pays to stay in the 
clique. Hence, what I wrote: "they aren't subject to the same dynamism, 
particularly same constraints, the core of language is."



Namely, I don't think you could discover a systemic grammatical deviation
from English in leet or text-speak or whatever.


"Doesn't afraid of anything," eh? Or "inb4 pr0n?" "Amirite desu?" I have 
encountered dozens of those consistently-used constructs but you've been 
coding too much and IRCing too little, apparently, which is appreciable but 
undermines your judgment about "text-speak."


(Just in case, that third example performs at least three contortions at 
once: combines the Japanese SOV sentence order with English's SVO, uses a 
Japanese word in a semantically wrong, subculture-specific manner, and 
employs a "cool" version of "am I right" with only a subset of connotations 
that "am I right" can carry. Syntactic, lexical, and semantic.)



These are novel and amusing orthographies and in-crowd jargon and nothing
more [...]


I think we agree there: I said they were fad.

I also doubt that we'll have the kind of technology you're talking about 

[...]

I cannot guarantee things but I can tell you this: expect speech synthesis 
from neural readings for the motor-incapacitated (think Stephen Hawking) in one 
decade or less. And, of course, I have my doubts, too, but I also have my 
hopes _and_ my thought experiments.




--On Saturday, September 12, 2009 02:39 -0600 Daniel Lyons 
 wrote:




On Sep 12, 2009, at 1:05 AM, Eris Discordia wrote:


There's a discussion of evolution of languages that involves a
language going from pidgin to creole to full-blown. Maybe "text-ese"
is some sort of pidgin, or more leniently creole, that draws on the
"speakers'" native language but the point here is that it will never
evolve into a full-blown language.



Once again, words you use recklessly turn out to have actual definitions.
From Wikipedia:

"A pidgin language is a simplified language that develops as a means of
communication between two or more groups that do not have a language in
common..."

"A creole language, or simply a creole, is a stable language that
originates from a mixture of various languages. The lexicon of a creole
usually consists of words clearly borrowed from the parent languages,
except for phonetic and semantic shifts. On the other hand, the grammar
often has original features and may differ substantially from those of
the parent languages."

I'm sure you'll provide us with the definitions from Merriam-Webster as
well.

In other words, a pidgin is what you get when you have two groups without
a common language being forced to communicate. A creole is what you get
when their kids learn the pidgin as a first language. Linguists and
physicists have a bad habit of making their jargon colorful so I'll only
deduct half the usual points.

I agree with your conclusion, but I disagree with a couple steps in your
reasoning. Namely, I don't think you could discover a systemic
grammatical deviation from English in leet or text-speak or whatever.
These are novel and amusing orthographies and in-crowd jargon and nothing
more—people pronounce ROFL and LOL to be ironic and cute, not because
they

Re: [9fans] Simplified Chinese plan 9

2009-09-12 Thread Eris Discordia

i believe this distinction between "natural" and "artificial"
languages is, uh, arbitrary.


Well, I don't think this is true. The distinction is strong enough for 
everyone to be able to immediately tell apart a language from a 
non-language. Actually, I think the term "artificial language" is kind of a 
courtesy. Natural language, to which the term "language" is most properly 
applied, is way different in how much more redundant, imprecise, and 
semantically potent it is.


Still, final judgment, or any judgment, in this matter is really linguists' 
to make so I guess I should better suspend my own while listening to them 
:-D



these are largely unpronounceable.  and i've only heard a few ever
pronounced at all.  (rofl comes to mind, though that term predates my
knowledge of text messaging).


They fall into the category of stenography. Circumstances, e.g. 
technological burden or limitations, inspire the trend of their creation. 
"Coolness" factor creates new ones and sustains some. After many years of 
IM (or SMS) they continue to be ad hoc and bound to subcultures--have you 
yet seen 'inb4' or 'caek' used? I have--which is why I think their features 
can't be used to draw inferences about language (they may be studied for 
other purposes, of course). They aren't subject to the same dynamism, 
particularly same constraints, the core of language is. Precisely because 
they aren't used in actual conversation or any type of text that is worth, 
to the writer, more than a throw-away note.



natural languages never have sharp boundaries and are pretty dynamic.
when did "byte" become a word? when did "gift" become a verb?  look how
fast text-ese has evolved.


Sharp boundaries with what? That's some question ;-) Natural languages are 
immediately discernible from most communication protocols used by non-human 
entities. Byte has a long and confused story that doesn't quite make it 
clear what it [byte] was initially meant to mean. Merriam-Webster dates 
'gift' as a transitive verb to ca. 1550 CE.


There's a discussion of evolution of languages that involves a language 
going from pidgin to creole to full-blown. Maybe "text-ese" is some sort of 
pidgin, or more leniently creole, that draws on the "speakers'" native 
language but the point here is that it will never evolve into a full-blown 
language. All of its "speakers" are speakers of much stronger native 
languages. Most of them share proper English as a language of global 
communication. "Text-ese" and its (often self-professed) importance seem 
like a fad to me. Do you think it will survive fast and reliable 
speech-to-text and/or brain-to-computer interfaces, i.e. a time when the 
technical burden of typing is removed without one having to expose one's 
voice to the insecure Internet and complete strangers (as in voice chat)? I 
know English will (because people think in it) but I seriously doubt 
"text-ese," essentially required by technological limitations and peer 
pressure among teens, will. Teen and other subculture languages, of course, 
will continue to exist. Ain't it "magical and rad?"




--On Friday, September 11, 2009 21:46 -0400 erik quanstrom 
 wrote:



> i'm not a linguist, but the linguists i know subscribe to the
> viewpoint that the written and spoken language are separate.
> and evolve separately.  i would derive from this that writability
> is independent of pronouncability.

If a sequence of symbols corresponds to something from a natural
language  then it must be pronounceable since it must have been uttered
at some time.  The same rule may not apply to "extensions" to natural
language (acronyms,  stenography) or artificial languages (mathematics,
computer programs).


i believe this distinction between "natural" and "artificial"
languages is, uh, arbitrary.  think of the symbols that people
im each other with.  these are largely unpronounceable.  and
i've only heard a few ever pronounced at all.  (rofl comes to mind,
though that term predates my knowledge of text messaging).

i also am not sure that there is such a thing as an extension to
a language.  natural languages never have sharp boundaries
and are pretty dynamic.  when did "byte" become a word?
when did "gift" become a verb?  look how fast text-ese has
evolved.

my concept of a language looks more like a standard deviation
than a box.

- erik





Re: [9fans] Simplified Chinese plan 9

2009-09-11 Thread Eris Discordia

your first problem was whether japanese would have some sort of
new or unique problem with an alphabet given the absence of certain
syllables (like shi) from the language. the answer is, of course, no:
the language would fall into either of the two extant conventions for
dealing with the syllable: always write "shi", or write "si" and just
change the pronunciation.


You're right. There wouldn't be any "new or unique" problems but there 
might have been some space for confusion, which is what I asserted. A 
gojuuon (kana table) contains all permitted syllables (kana representatives 
of _families_ of syllables, actually) while an alphabet would allow many 
invalid combinations. For a syllabic-moraic language where there are almost 
as many invalid combinations as there are valid ones this method makes good 
sense.



no written language stands independent of its pronunciation rules.
alphabets need a somewhat larger set of rules than syllabaries, but
that's true independent of language.


Um, "no written language" would be too strong. Avestan script was invented 
to make obsolete pronunciation rules by containing a large enough, but not 
too large, set of basic symbols that were to be in one-to-one 
correspondence with phonetic constructs of the language(s) that mattered to 
its inventors. Since there were no exceptions there was no need for rules 
beyond the correspondence between symbols and phonetic constructs. Of 
course, the script itself became obsolete in due time. Modern day IPA is a 
better informed attempt with an expanded albeit similar goal, although it 
still needs to "approximate" sounds of some languages and it is extremely 
hard to learn and use for non-phoneticians; or phoneticians for that 
matter, but at least learning IPA is part of their job.


**


i'm not a linguist, but the linguists i know subscribe to the
viewpoint that the written and spoken language are separate.
and evolve separately.  i would derive from this that writability
is independent of pronouncability.


If a sequence of symbols corresponds to something from a natural language 
then it must be pronounceable since it must have been uttered at some time. 
The same rule may not apply to "extensions" to natural language (acronyms, 
stenography) or artificial languages (mathematics, computer programs).




--On Friday, September 11, 2009 17:59 -0400 Anthony Sorace 
 wrote:



that's a whole different problem, though.

your first problem was whether japanese would have some sort of
new or unique problem with an alphabet given the absence of certain
syllables (like shi) from the language. the answer is, of course, no:
the language would fall into either of the two extant conventions for
dealing with the syllable: always write "shi", or write "si" and just
change the pronunciation.

no written language stands independent of its pronunciation rules.
alphabets need a somewhat larger set of rules than syllabaries, but
that's true independent of language.





--On Friday, September 11, 2009 18:16 -0400 erik quanstrom 
 wrote:



That's true but isn't exactly the same thing. "Irregularly" pronounced
combinations are still valid combinations. I'd say the universal example
for languages that are written in Latin alphabet or a variation thereof
would be the (notorious) 'fgsfds.' It's an invalid combination because
there is _no_ pronunciation at all--except 'figgis-fiddis' which is a
really recent, and ground-breaking, invention ;-)


by this definition, one could devise a valid input method
with which it would be impossible to type "xyzzy".


no written language stands independent of its pronunciation rules.
alphabets need a somewhat larger set of rules than syllabaries, but
that's true independent of language.


i'm not sure they are fully dependent.  consider acronyms.  or even
variable names.  (sometimes these need to be referred to
in speech.)  there are special hacks for making these
pronouncable.  in mathematics the same symbol can
have many pronunciations that depend entirely on the
context.

i'm not a linguist, but the linguists i know subscribe to the
viewpoint that the written and spoken language are separate.
and evolve separately.  i would derive from this that writability
is independent of pronouncability.

trying to think as a linguist, i would consider spoken acronyms
to be cognates from the written language.

as an homage to j. arthur seebach i'd say, "english is *neat*".

- erik





Re: [9fans] Simplified Chinese plan 9

2009-09-11 Thread Eris Discordia

lots of romance languages have exactly that characteristic, though
(maybe other languages, too). see C and G in italian. "ci" is simply
pronounced "correctly" as "chi".


That's true but isn't exactly the same thing. "Irregularly" pronounced 
combinations are still valid combinations. I'd say the universal example 
for languages that are written in Latin alphabet or a variation thereof 
would be the (notorious) 'fgsfds.' It's an invalid combination because 
there is _no_ pronunciation at all--except 'figgis-fiddis' which is a 
really recent, and ground-breaking, invention ;-)


With Japanese syllabaries one cannot produce unpronounceable sequences. 
Nonsense, yes, but nothing that cannot be uttered.


--On Friday, September 11, 2009 15:53 -0400 Anthony Sorace 
 wrote:



lots of romance languages have exactly that characteristic, though
(maybe other languages, too). see C and G in italian. "ci" is simply
pronounced "correctly" as "chi".





Re: [9fans] Simplified Chinese plan 9

2009-09-11 Thread Eris Discordia

http://thinkzone.wlonk.com/Language/Korean.htm


Interesting. I used to think Korean, too, uses a syllabary. Turns out it's 
expressed alphabetically. Expressing Japanese that way would create some 
space for confusion as there are certain sounds that never combine with 
certain other sounds, e.g. there are 'sa,' 'se,' 'so,' and 'su' syllables 
in which 's' is heard just like 's' in 'say' but there's no 'si'--there's 
only 'shi.' If there existed an 's' character and also characters for 
vowels the invalid combination 'si' could be created in writing. I wonder 
if Korean alphabet can be used to make invalid combinations or all possible 
combinations correspond to existing phonetic constructs.
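
For what it's worth, the modern encoding answers the combinatorial half of that 
question: Unicode's precomposed Hangul block simply enumerates every 
initial-vowel-final combination, 19 x 21 x 28 of them, whether or not a given 
block happens to be a real Korean word. A minimal C sketch of the arithmetic 
(the jamo indices below are just illustrative inputs, not any kind of IME):

#include <stdio.h>

/* Compose a precomposed Hangul syllable from jamo indices and print it
 * as UTF-8.  Formula from the Unicode standard:
 *   code point = 0xAC00 + (L*21 + V)*28 + T
 * where L = initial consonant index (0..18), V = vowel index (0..20),
 * T = final consonant index (0 = none, 1..27). */
static void
putsyllable(int L, int V, int T)
{
	unsigned cp = 0xAC00 + (L*21 + V)*28 + T;

	/* Hangul syllables all lie in the 3-byte UTF-8 range. */
	putchar(0xE0 | (cp >> 12));
	putchar(0x80 | ((cp >> 6) & 0x3F));
	putchar(0x80 | (cp & 0x3F));
}

int
main(void)
{
	/* 한 = ㅎ(L=18) + ㅏ(V=0) + ㄴ(T=4); 글 = ㄱ(L=0) + ㅡ(V=18) + ㄹ(T=8) */
	putsyllable(18, 0, 4);
	putsyllable(0, 18, 8);
	putchar('\n');
	return 0;
}

Every triple of indices yields a displayable syllable block, so orthographically 
nothing is "invalid"; whether the result is an actual Korean morpheme is a 
separate, phonological question.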




--On Friday, September 11, 2009 13:49 -0400 erik quanstrom 
 wrote:



I don't know anything about Korean writing system or IMEs but since CJK
ideographs (most importantly Han characters) are involved similar
statements may apply.


for korean per se, there are only 24 characters:

http://thinkzone.wlonk.com/Language/Korean.htm

one would imagine that han input methods would work
well for han in korean text.

- erik









Re: [9fans] Simplified Chinese plan 9

2009-09-11 Thread Eris Discordia

anyway, the general idea is that it can compose kanji from strings of
hiragana. it's also been used for other languages (although my memory of
that says it was mostly for the transliteration function, rather than the
compositing function). is it possible to do something similar for the
hanzi, composing them up from roots/stems? i've seen reference to the
idea in chinese dictionaries, but have no idea if its use is widespread.


Kana to kanji conversion is peculiar to Japanese and that's basically how 
all Japanese IMEs work. You input a series of kana (in Roman/Latin letters 
converted on-the-fly), then either assert them as they are or accept a 
corresponding kanji the IME offers. It's called inline conversion. 
Conversion may also be explicitly requested from the software when for some 
reason inline conversion results are unsatisfactory. It takes really good 
UI design to make the process practical.
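
A toy illustration of the first, mechanical stage of that pipeline (the 
on-the-fly Roman-letters-to-kana step, long before any dictionary or kanji 
candidates enter the picture) might look like the sketch below. The table is 
deliberately tiny, the kana are plain UTF-8 string literals, and the greedy 
longest-match rule is my own assumption, not how any particular IME is 
implemented:

#include <stdio.h>
#include <string.h>

/* A deliberately tiny romaji -> hiragana table (UTF-8 strings). */
static struct {
	char *romaji;
	char *kana;
} tab[] = {
	{"ka", "か"}, {"sa", "さ"}, {"shi", "し"}, {"su", "す"},
	{"ta", "た"}, {"na", "な"}, {"ni", "に"}, {"n",  "ん"},
};

/* Convert an ASCII romaji string to kana by greedy longest match;
 * anything the table doesn't know is passed through unchanged. */
static void
romaji2kana(char *s)
{
	int i, best, bestlen, len;

	while(*s){
		best = -1;
		bestlen = 0;
		for(i = 0; i < (int)(sizeof tab / sizeof tab[0]); i++){
			len = strlen(tab[i].romaji);
			if(len > bestlen && strncmp(s, tab[i].romaji, len) == 0){
				best = i;
				bestlen = len;
			}
		}
		if(best >= 0){
			fputs(tab[best].kana, stdout);
			s += bestlen;
		}else
			putchar(*s++);
	}
	putchar('\n');
}

int
main(void)
{
	romaji2kana("sushi");	/* すし */
	romaji2kana("sakana");	/* さかな */
	return 0;
}

The real work, of course, begins after this step, when strings of kana have to 
be matched against a dictionary and ranked as kanji candidates.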


For Chinese, input from a standardized romanization is required, Pinyin 
being the most widely used (cellphones, computers, people who learn Chinese 
as a second language and would have an immensely hard time if they were to 
write in ideographs, even many Chinese people). Kana to kanji conversion is 
not viable there simply because kana is not the syllabary system used to 
express Chinese. Chinese syllables do not correspond to kana, plus Chinese 
is tonal while Japanese is not. Phonetically, and therefore input-wise 
since practical CJK input is based on sounds rather than meanings, the two 
languages are universes apart even though they share Han characters in the 
semantic sphere. Actually, any practical input system should rely on sound 
representation rather than meaning--there are only so many sounds while there 
are infinitely many meanings.


Roots/stems you refer to are elements in the ideographs used to classify 
Han characters. They are more properly called radicals and are ordered by 
stroke count, i.e. the number of times you put down the pen to compose one 
from the basic strokes. Most IMEs, _besides_ automatic conversion, offer 
the option to choose a kanji/hanzi/hanja by any one of various lookup 
methods. Radical lookup is one such method. There are other classifications 
of Han characters such as Hadamitzky-Spahn (applicable to kanji) which 
aren't present in many IMEs.
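
To make the "lookup by radical plus residual stroke count" idea concrete, here 
is a minimal sketch of the kind of table such a chooser consults. The three 
water-radical entries are examples I picked for illustration; a real IME would 
generate its table from a proper character database and sort it properly:

#include <stdio.h>
#include <string.h>

/* One row per character: its radical, the strokes beyond the radical,
 * and the character itself (all UTF-8). */
static struct {
	char *radical;
	int extra;	/* residual stroke count */
	char *ch;
} dict[] = {
	{"氵", 3, "江"},
	{"氵", 3, "池"},
	{"氵", 5, "河"},
};

/* List every character filed under a given radical. */
static void
byradical(char *radical)
{
	int i;

	for(i = 0; i < (int)(sizeof dict / sizeof dict[0]); i++)
		if(strcmp(dict[i].radical, radical) == 0)
			printf("%s + %d strokes: %s\n",
				radical, dict[i].extra, dict[i].ch);
}

int
main(void)
{
	byradical("氵");	/* the water radical */
	return 0;
}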


This is a great example of a full-blown Japanese word processor (it's 
Windows freeware):




Features nearly everything expected from a CJK input system and works 
independently of MS IME, although it can also be used in conjunction with it.


At present, Windows and MS Office do an unrivalled job of enabling 
multi-lingual input and display. I can't help but feel this is sort of a 
lock-in situation for people who need/fancy that sort of capability. This 
isn't really something I would revel in but it's at least reassuring that 
there is _some_ convenient, stable, uniform way to get these things done.




--On Friday, September 11, 2009 12:54 -0400 Anthony Sorace 
 wrote:



i know very little about existing chinese input methods, so this is more a
question for my own understanding than a suggestion, but:

there is ktrans for Plan 9; the latest version i'm aware of is described
here:   http://basalt.cias.osakafu-u.ac.jp/plan9/s39.html
although that page is a bit hard to read since line breaks are not
preserved. the contents are just the README from the tar file; maybe
easier to just download that and read there.

anyway, the general idea is that it can compose kanji from strings of
hiragana. it's also been used for other languages (although my memory of
that says it was mostly for the transliteration function, rather than the
compositing function). is it possible to do something similar for the
hanzi, composing them up from roots/stems? i've seen reference to the
idea in chinese dictionaries, but have no idea if its use is widespread.

i've had ktrans working on 4th edition in the past, although i just tried
again (after a long gap), and it blows an assert, which i've not looked
into yet.





Re: [9fans] Simplified Chinese plan 9

2009-09-11 Thread Eris Discordia

Maybe it makes sense to make something like this in Plan9 (an analog
kbmap) for typing complex symbols like a hieroglyph?


Your method is in essence what Microsoft's IME on Windows and various IMEs 
on UNIX-likes (such as SCIM) use. However, an IME for inputting from a list 
of over twenty thousand characters takes quite an effort to devise before 
it can be practical and useful. Right now even display of CJK is not quite 
fully supported on any existing FOSS platform (Ruby character display was 
added to Firefox only somewhere after version 3). Non-integrated pieces of 
FOSS with great capabilities do exist.


In case of (Simplified and Traditional) Chinese there apparently exist only 
two successful IMEs out there: one is Microsoft's, the other belongs to a 
Chinese company that has put lots of money and effort into developing the 
software. I believe both support input by Pinyin romanization, although I 
may be wrong. There's also Google's Pinyin IME which was involved in a 
lawsuit with said Chinese company.


In case of Japanese an IME needs to support three writing systems at once, 
firstly the two kana, and then transforming from kana to kanji. Abundance 
of homonyms in Japanese as well as a certain writing strategy called ateji 
(using kanji for phonetic value rather than semantic value) makes embedding 
of a dictionary into the IME unavoidable. Good dictionaries for this 
purpose don't come free--they must either be bought from professional 
companies or compiled by people who intimately know the language, 
preferably native speakers. This latter, I believe, is how IMEs on 
UNIX-likes came to be. Anyhow, Japanese IMEs, too, rely on input based on a 
romanization of the language. The actual number of distinct kanji required 
for input of text at a high school literate level is around two 
thousand--JLPT Level One roughly corresponds to that--but people, of 
course, expect a much larger dictionary. Microsoft IME also provides 
semantic aid by offering short descriptions of kanji so that people can 
decide which corresponds to the meaning they want to convey. Although 
unnecessary, it is a most welcome addition.


I don't know anything about Korean writing system or IMEs but since CJK 
ideographs (most importantly Han characters) are involved similar 
statements may apply.


Overall, there's no easy way that is light on financial and/or human 
resources--the two types of resources are interchangeable, i.e. if you have 
an active user base you may be able to avoid expenditure--to put CJK input 
support into a UI, which is probably why Plan 9 doesn't have that at the 
moment. It isn't a computer thing--it's a human thing. I might add porting 
IMEs from some UNIX-like system is probably the best option (for those with 
the technical prowess).


**

DISTRACTION

While googling around for the existence of IMEs on Plan 9 I came across 
this document from 1996 titled "Unicode: Writing in the Global Village:"



Despite these hurdles, Unicode may soon become the most common
multilingual character-coding system. Support for multiple-language use
is quickly growing. New operating systems—AT&T's Plan 9, Windows NT,
Novell's Netware 4.01 Directory Services, Sybase's Gain Momentum, and
Apple's Newton already support Unicode.


--


It's funny how the author assumes display and input are the same thing 
while they so greatly differ, input being many times harder to implement.




--On Friday, September 11, 2009 15:29 +0400 Alexander Sychev 
 wrote:



Hello!

Some time ago I wrote for inferno an analog of kbmap with an extension -
a  possibility to print complex symbols via sequences of more basic
symbols.
I use it for typing by the russian translit.
Here is a piece of file for my kbmap:

1   45  0
1   46  'Ц
1   47  'В
1   48  'Б
1   49  'Н
1   50  'М
C   цх  'ч
C   Цх  'Ч
C   сх  'ш
C   Сх  'Ш
C   сцх 'щ
C   Сцх 'Щ


The latin symbols are mapped to russian when it is possible. Other
russian symbols are presented via sequences of mapped symbols, e.g.
russian symbol 'Ч' [ch] is presented as a sequence of 'ц' [c] and 'х' [h].
A sequence can be broken by pressing any non-symbol key.
There is at least one big disadvantage of this method - the input focus
can be changed, e.g. by mouse. In inferno I didn't resolve this problem,
because /dev/pointer can be opened only once.

Maybe it makes sense to make something like this in Plan9 (an analog
kbmap) for typing complex symbols like a hieroglyph?

On Fri, 11 Sep 2009 14:23:02 +0400, erik quanstrom
 wrote:


HI..everyone:
   Is there some ways to input Simplified Chinese in plan 9 ? I
know plan 9 supports Unicode, so it is no questions for plan 9 to
display Simplified Chinese... and i have seen some pictures on
Internet

Re: [9fans] nice quote

2009-09-10 Thread Eris Discordia

There is a plan 9 OST?


The leech target contains mostly video game OSTs. For exactitude's sake I 
did look for P9fOS. Not there, but if you're really into it (and heed 
"piracy" not) that bit of auditory magic, and indeed the visual magic it 
accompanied, is a couple clicks away from the Google home page.


And this place claims to be (legally?) selling it:


P.S. Above is nothing you didn't know.

--On Thursday, September 10, 2009 17:58 +0200 hiro <23h...@googlemail.com> 
wrote:



And none of this applies to or concerns Plan 9, which may be a cause for
regret--or not.


There is a plan 9 OST?





Re: [9fans] nice quote

2009-09-10 Thread Eris Discordia

anyone written any software recently?
at this point it probably doesn't matter whether it was for plan 9 or not.


Me did moan. Me did code, too, the retarded way. Wrote a couple score lines 
of Perl to extract bits of JavaScript out of pages at a certain site, 
slightly modify them, run them, extract the links produced, and harvest the 
results using wget. This to get automated access to a repository of OSTs 
rather than clicking a jillion times for getting only one album. Also, 
another couple score lines of Perl (for IRSSI) to auto-fetch packs from 
XDCC bots.


And none of this applies to or concerns Plan 9, which may be a cause for 
regret--or not.


--On Wednesday, September 09, 2009 16:48 +0100 Charles Forsyth 
 wrote:



if people would leave off moaning about moaning,
we'd clear the space for more moaning about lisp
although the former did have the advantage that the
messages were shorter and didn't quote the bulk of
all previous messages.

anyone written any software recently?
at this point it probably doesn't matter whether it was for plan 9 or not.





Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-08 Thread Eris Discordia

Thanks.

Erik Quanstrom, too, posted a link to that page, although it wasn't in HTML.

--On Monday, September 07, 2009 22:02 +0200 Uriel  wrote:


On Fri, Sep 4, 2009 at 3:56 PM, Eris Discordia
wrote:

if you have quanstro/sd installed, sdorion(3) discusses how it
controls the backplane lights.


Um, I don't have that because I don't have any running Plan 9 instances,
but I'll try finding it on the web (if it's been through man2html at
some time).


Here you go: http://man.cat-v.org/plan_9_contrib/3/sdorion





Re: [9fans] nice quote

2009-09-07 Thread Eris Discordia
This thread has grown into a particularly educational one, for me at least, 
thanks to everyone who posted.


Vinu Rajashekhar's two posts were strictly to the point. There _is_ a 
mental model of the small computer to teach along with Scheme and there are 
ways to get close to the machine from within Haskell. The suggestion for 
using tail recursion seemed to serve to cue the compiler into transforming the 
recursion into iteration (in C the programmer would _probably_ have used 
iteration in the first place).
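
Something like the following is presumably what was meant: the first version is 
written as a tail call, which an optimizing C compiler is typically able to 
turn into the second, purely iterative form. This is my own toy illustration, 
not anyone's actual example from the thread:

#include <stdio.h>

/* Tail-recursive sum of 1..n: the recursive call is the last thing the
 * function does, so no work is pending when it returns.  With
 * optimization enabled most C compilers compile this as a loop. */
static long
sum_tail(long n, long acc)
{
	if(n == 0)
		return acc;
	return sum_tail(n - 1, acc + n);
}

/* What the C programmer (or the compiler) ends up with anyway. */
static long
sum_iter(long n)
{
	long acc = 0;

	while(n > 0)
		acc += n--;
	return acc;
}

int
main(void)
{
	printf("%ld %ld\n", sum_tail(100, 0), sum_iter(100));
	return 0;
}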


Daniel Lyons' argument goes well with Paul Donnelly's:


I'm only saying that there are a lot of weird ideas about Lisp floating
around which a person can  hardly be blamed for picking up on, and these
are the reasons it sounds to me like you have.


Regarding where I may have gone wrong about Lisp (besides orthography): at 
the moment, I am very much intrigued to try learning a functional language 
even if only to fail again. The hobbyist can experiment at leisure, after 
all. And the links posted provide quite enough material.


As for this direct question:


I must say that the Lisp version is much simpler and clearer to me, while
the C version is mildly baffling. Does that make me a wizard who can
hardly read simple C code, or is it just a matter of what you and I are
respectively more comfortable with?


I'd like to concur with the implication that it's all a matter of what 
mindset one carries but I'm also tempted to exclaim since I find the C 
version straightforward--it closely follows how one would do a bubble sort 
on paper. Or perhaps even this assertion is shaped by my personal 
impressions.




Re: [9fans] nice quote

2009-09-06 Thread Eris Discordia

Thanks for the first-hand account :-)


Don't be Whiggish in your understanding of history.  Its participants
did not know their way.


Given your original narrative I really can't argue. Maybe, as you note, I'm 
wrongly assuming everyone knew a significant part of that which had come 
before them without accounting for natural propagation delays and barriers 
between thought pools. Nonetheless, it can't be denied a lot of ideas, and 
words used to denote them, in computation were conceived at earlier times 
than one might expect, sometimes even more comprehensively than today. For 
instance, von Foerster was consistently using "computing" in an 
astonishingly wide sense, e.g. bio-computing, by the 1950s. Even today most 
people don't immediately generalize that notion the way he did while such 
generalization is more than warranted.



--On Sunday, September 06, 2009 11:03 -0700 Rob Pike  
wrote:



Are you implying Doug McIlroy hadn't been taught about (and inevitably
occupied by) the Church-Turing Thesis or, even before that, the Ackermann function
and had to wait to be inspired by a comment in passing about FORTRAN to
realize the importance of recursion?! This was a rhetorical question, of
course.


Doug loves that story. In the version he told me, he was a (math) grad
student at MIT in 1956 (before FORTRAN) and the discussion in the lab
was about computer subroutines - in assembly or machine language of
course.  Someone mused about what might happen if a subroutine called
itself.  Everyone looked bemused.  The next day they all returned and
declared that they knew how to implement a subroutine that could call
itself although they had no idea what use it would be.  "Recursion"
was not a word in computing.  Hell, "computing" wasn't even much of a
word in math.

Don't be Whiggish in your understanding of history.  Its participants
did not know their way.

-rob





Re: [9fans] nice quote

2009-09-06 Thread Eris Discordia

There's a talk Doug McIlroy gave where he joked about how he
basically invented (or rather, discovered) recursion because someone
said ``Hey, what would happen if we made a FORTRAN routine call
itself?'' IIRC he had to tinker with the compiler to get it to accept
the idea, and at first, no one realized what it would be good for.


Are you implying Doug McIlroy hadn't been taught about (and inevitably 
occupied by) the Church-Turing Thesis or, even before that, the Ackermann function 
and had to wait to be inspired by a comment in passing about FORTRAN to 
realize the importance of recursion?! This was a rhetorical question, of 
course.




--On Sunday, September 06, 2009 00:23 -0400 "J.R. Mauro" 
 wrote:



On Sat, Sep 5, 2009 at 2:26 PM, erik quanstrom 
wrote:

i'm not a lisp fan.  but it's discouraging to see
such lack of substance as the following (collected
from a few posts):


Oh, yay, a Xah Lee quote, he's surely a trusted source on all things
Lisp. Didja read his page about hiring a prostitute in Las Vegas? Or
the one about how he lives in a car in the Bay Area because he's too
crazy to get hired?


surely an ad hominem attack like this neither furthers an
argument nor informs anyone.


I forgot this: Graham basically accuses programmers who don't find LISP
as attractive (or powerful, as he puts it) as he does of living on lower
planes of existence from which the "heavens above" of functional (or
only LISP) programming seem incomprehensible. He writes/speaks
persuasively, he's a successful businessman, but is he also an honest
debater?


and here i don't see an argument at all.


I just read in Wikipedia that, "Lisp's original conditional operator,
cond, is the precursor to later if-then-else structures," without any
citations. Assuming that to be true conditional branching is a
fundamental element of control flow and it has existed in machine
languages ever since early days. There's really very little to brag
about it.


i'd love to argue this factually, but my knowledge isn't
that extensive.  i think you'll find in the wiki entry for
Computer that much of what we take for granted today
was not obvious at the time.  stored program computers
with branching didn't come along until about 1948
(eniac).  i hope someone will fill in the gaps here.
i think it's worth appreciating how great these early
discoveries were.


There's a talk Doug McIlroy gave where he joked about how he
basically invented (or rather, discovered) recursion because someone
said ``Hey, what would happen if we made a FORTRAN routine call
itself?'' IIRC he had to tinker with the compiler to get it to accept
the idea, and at first, no one realized what it would be good for.



in the same vein, i don't know anything much about file
systems that i didn't steal from ken thompson.

- erik












Re: [9fans] nice quote

2009-09-06 Thread Eris Discordia

In this respect rating the "expressive power of C versus LISP" depends
very much on the problem domain under discussion.


Of course. I pointed out in my first post on the thread that "[...] for a 
person of my (low) caliber, LISP is neither suited to the family of 
problems I encounter nor suited to the machines I solve them on." I cannot 
exclude other machines and other problems but can talk from what little I 
have personally experienced.



I would like to see Haskell fill C's niche [...]


Is it as readily comprehensible to newcomers as C? Are there texts out 
there that can welcome a real beginner in programming and help him become 
productive, on a personal level at least, as rapidly as good C 
textbooks--you know the classic example--do? Is there a coherent mental 
model of small computers--not necessarily what you or I deem to be a small 
computer--that Haskell fits well and can be taught to learners? I imagine 
those will be indispensable for any language to replace existing languages, 
much more so in case of C.



--On Saturday, September 05, 2009 20:58 -0500 Jason Catena 
 wrote:



Hailed Eris:

I was alluding to the expressive power of C versus LISP considered with
respect to the primitives available on one's computing platform and
primitives in which solutions to one's problems are best expressed.


I think one of the reasons there exist little languages, and cliches
such as "the right tool for the job", is that the "primitives
available on one's computing platform" are very often not the
"primitives in which solutions to one's problems are best expressed."
In this respect rating the "expressive power of C versus LISP" depends
very much on the problem domain under discussion.

For "systems programming", C has the advantage in both practice (use
in running systems) and theory: processes which take a lot of system
resources to execute also tend to take a lot of C code, whereas in
most higher-order languages, things which represent high-level
runtime-consuming abstractions tend to look little different than simple
bit-level operations.  The difference is one of approach, I guess:
whether you want to write optimal code yourself, and see what the
machine is doing, or trust the compiler to find a good way to
translate to machine language and run (in real-time) your
efficient-to-code higher-order functions.  The better the translation
from the higher-level language, the more this difference becomes a
matter of taste, programming style, availability of programmers, and
the body of domain knowledge already represented in language
libraries.

I would like to see Haskell fill C's niche: it's close to C's
execution speed now, and pure functions and a terse style gives real
advantages in coding speed (higher-order functions abstract common
"patterns" without tedious framework implementations), maintainability
(typeclasses of parameters in utility functions means you don't write
different implementations of the same function for different types,
yet preserve type compatibility and checking), and reliability (pure
functions don't depend on state, so have fewer moving parts to go
wrong).

Jason Catena





Re: [9fans] nice quote

2009-09-06 Thread Eris Discordia

in fact, none of the things we take for granted --- e.g., binary,
digital, stack-based, etc. --- were immediately obvious.  and it
might be that we've got these thing that we "know" wrong yet.


I don't think we are actually in disagreement here. I have no objections to 
your assertion. However, the particular case at hand indicates a different 
thing than historians (of computer technology) "backporting" today's 
trivial matters. I believe that a concept's existing in the language 
(Plankalkuel) but not in the machine it was supposed to control (Z3) by all 
means indicates that the designer of the machine and the language was aware of 
the concept but faced technical limitations of his time. Stored-program 
computers weren't only consequences of a person's (von Neumann's) 
genius--they also were consequences of the culmination, and return point, 
of delay line technology (EDSAC's memory components).


A parallel can be drawn with the emergence of quantum mechanics. Many 
students of physics who aren't taught or don't teach themselves history of 
physics tend to think quantum mechanics emerged at a particular time because 
physical thinkers shortly before that time just weren't up to the 
mental challenge and it would take visionaries/revolutionaries to institute 
the new understanding. Historians of physics, however, can tell you with 
quite some confidence that the improvements of experimental instrumentation 
and the newfound technical feasibility, around the end of the 19th century, of 
certain experiments that weren't feasible before were very probably a more 
influential agent.



--On Saturday, September 05, 2009 20:56 -0400 erik quanstrom 
 wrote:



> The instruction most conspicuously absent from the instruction set of
> the Z3 is conditional branching. [...] but there is no straightforward
> way to implement conditional sequences of instructions. However, we
> will show later that conditional branching can be simulated on this
> machine.


i think your reasoning is going backwards in time.  the fact that
a historian later can note that they *could* have had conditional
branching, if they'd thought of it further bolsters my position
that it is not immediately obvious that conditional branching
is what you want.

in fact, none of the things we take for granted --- e.g., binary,
digital, stack-based, etc. --- were immediately obvious.  and it
might be that we've got these thing that we "know" wrong yet.

i would imagine that in 30 years there will be several "obvious"
things about quantum computers that nobody's thought of
yet.

- erik





Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia

so you're saying that the table in this section is wrong?

http://en.wikipedia.org/wiki/Computer#History_of_computing

if it is and you can back it up, i suggest you fix wikipedia.


It isn't wrong.

The exact wording from "The First Computers: History and Architectures" 
goes:



The instruction most conspicuously absent from the instruction set of the
Z3 is conditional branching. [...] but there is no straightforward way to
implement conditional sequences of instructions. However, we will show
later that conditional branching can be simulated on this machine.


On the other hand, Wikipedia's article on Plankalkuel says:


Plankalkül drew comparisons to APL and relational algebra. It includes
assignment statements, subroutines, conditional statements, iteration,
floating point arithmetic, arrays, hierarchical record structures,
assertions, exception handling, and other advanced features such as
goal-directed execution.


-- 

In other words, both statements are correct. Z3 did not have conditional 
branching (given the type of store it used it would be too hard), however 
Plankalkuel did provision conditionals, invocation, and subroutines--all 
that is necessary to implement conditional branching.
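
As an aside, the book's remark that conditional branching "can be simulated" is 
less mysterious than it sounds. The sketch below is only a modern arithmetic 
illustration of the idea, selecting one of two results without a conditional 
jump; it is emphatically not a description of how the Z3 actually did it:

#include <stdio.h>

/* Select between a and b without a conditional jump: the comparison
 * yields 0 or 1, both candidate results are computed, and the two are
 * blended arithmetically. */
static int
select_nobranch(int cond, int a, int b)
{
	return cond*a + (1 - cond)*b;
}

int
main(void)
{
	int x = 7, y = 3;

	/* max(x, y) with the "condition" folded into arithmetic */
	printf("%d\n", select_nobranch(x > y, x, y));
	return 0;
}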




--On Saturday, September 05, 2009 20:17 -0400 erik quanstrom 
 wrote:



I wasn't, in this case at least, implying something not backed by firm
evidence. Conditional branching embodied in actual computers goes back
to  Plankalkuel on Z3. The idea is as early as Babbage. It comes as
natural  even to first-timers, following much more difficult conception
of a notion  of control flow, that there must be a manner of
conditionally passing it  around.


so you're saying that the table in this section is wrong?

http://en.wikipedia.org/wiki/Computer#History_of_computing

if it is and you can back it up, i suggest you fix wikipedia.

- erik









Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia

I forgot this: Graham basically accuses programmers who don't find LISP
as  attractive (or powerful, as he puts it) as he does of living on
lower  planes of existence from which the "heavens above" of functional
(or only  LISP) programming seem incomprehensible. He writes/speaks
persuasively,  he's a successful businessman, but is he also an honest
debater?


and here i don't see an argument at all.


I was trying to say the same thing about Paul Graham's view of people who 
don't like, or "grok," LISP. That he doesn't argue the point--he presents 
it as a fact.



i'd love to argue this factually, but my knowledge isn't
that extensive.  i think you'll find in the wiki entry for
Computer that much of what we take for granted today
was not obvious at the time.  stored program computers
with branching didn't come along until about 1948
(eniac).  i hope someone will fill in the gaps here.
i think it's worth appreciating how great these early
discoveries were.


I agree with your point about non-triviality of much about computers that's 
taken for trivial today. However, I happened to have consulted this book 
a couple of years ago:




(This is a Google Books search inside the book with the term "conditional 
branching.")


I wasn't, in this case at least, implying something not backed by firm 
evidence. Conditional branching embodied in actual computers goes back to 
Plankalkuel on Z3. The idea is as early as Babbage. It comes as natural 
even to first-timers, following the much more difficult conception of a notion 
of control flow, that there must be a manner of conditionally passing it 
around.




--On Saturday, September 05, 2009 14:26 -0400 erik quanstrom 
 wrote:



i'm not a lisp fan.  but it's discouraging to see
such lack of substance as the following (collected
from a few posts):


Oh, yay, a Xah Lee quote, he's surely a trusted source on all things
Lisp. Didja read his page about hiring a prostitute in Las Vegas? Or
the one about how he lives in a car in the Bay Area because he's too
crazy to get hired?


surely an ad hominem attack like this neither furthers an
argument nor informs anyone.


I forgot this: Graham basically accuses programmers who don't find LISP
as  attractive (or powerful, as he puts it) as he does of living on
lower  planes of existence from which the "heavens above" of functional
(or only  LISP) programming seem incomprehensible. He writes/speaks
persuasively,  he's a successful businessman, but is he also an honest
debater?


and here i don't see an argument at all.


I just read in Wikipedia that, "Lisp's original conditional operator,
cond,  is the precursor to later if-then-else structures," without any
citations.  Assuming that to be true conditional branching is a
fundamental element of  control flow and it has existed in machine
languages ever since early days.  There's really very little to brag
about it.


i'd love to argue this factually, but my knowledge isn't
that extensive.  i think you'll find in the wiki entry for
Computer that much of what we take for granted today
was not obvious at the time.  stored program computers
with branching didn't come along until about 1948
(eniac).  i hope someone will fill in the gaps here.
i think it's worth appreciating how great these early
discoveries were.

in the same vein, i don't know anything much about file
systems that i didn't steal from ken thompson.

- erik









Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia

Using your theories, please explain why Lisp and Plan 9 both hover around
the same level of popularity (i.e., not very, but not dead either).


I don't think I can say anything in that respect that cannot either be 
easily refuted or greatly improved upon by someone already reading this 
list and just too busy with their own stuff to post. Some of them 
explicitly avoid feeding the troll (that I be, supposedly).


Anyway, here's what I think: Plan 9 and LISP are different, evolutionarily. 
LISP seems to me like a downsized reptile that has survived and been forced 
to exist in the shadow of mammals after the Mesozoic while Plan 9 looks 
more like a lemur. A rather recently developed mammal driven into a small 
area by its close kin from a common ancestor.


And one primary note: I have come to understand, in part thanks to this 
very list, that popularity isn't really a good measure of merit for 
computer stuff but you asked about popularity so I'll try to focus on that. 
(Case in point, there's a lot I read about on this list that I don't think 
I'd hear about in a lifetime, and this isn't a popular list.)


**

LISP evolved in a parallel path to the line of languages that descended 
from ALGOL. It represented/represents a programming paradigm--whose 
significance is beyond me but visible to CS people--and it used to also 
embody an application area. That application area, at the time, overlapped 
with the ambitions of some of the best experts in computation. LISP gained 
momentum, became an academic staple, was the pride and joy of the world's best 
CS/CE departments. The application area got hit but the programming 
paradigm remained as strong as before.


The paradigm has scientific value--which is again beyond me but I trust CS 
people on that--so it continues to be taught at the world's best CS/CE 
departments and to up-and-coming programmers and future computer 
scientists. SICP is witness to that. In the academy, LISP will live on as 
long as the paradigm it's attached to lives on and is deemed significant. 
Those same people who are educated in some dialect of LISP, as well as 
other languages, found businesses and apply their knowledge; occasionally, 
by way of their training in LISP. For whatever reason they see merit in it 
that many self-educated programmers or those trained at lesser institutions 
don't. Obviously, there aren't that many top CS/CE departments and those 
with founder status or strongly influenced by founder institutions are 
still fewer. Hence, LISP's living dead state: "popularity" among the elite. 
Mind you, the natural divide between the two groups can sometimes be a 
cause of resentment and get non-LISP people badmouthing it.


**

Plan 9, on the other hand, was supposed to be a drop-in successor to 
UNIX--a natural step forward. It was supposed to satisfy long-time UNIX 
users by deceiving them with a similar-looking toolset while implementing a 
large change of philosophy whose impact would only become clear after 
(previous) UNIX users had already settled in. The factors that kept it from 
actually replacing UNIX everywhere are many.


One factor was timing. It reached various tiers of "ignorant masses" when 
not one but multiple possible continuations of UNIX, all of them FOSS, had 
already gained a foothold (GNU/Linux and *BSD).


The other factor was its overly complex arrangement compared to the mundane 
purposes of lowly creatures more or less like me. I have tried arguing why 
Plan 9 as it is is a hassle on desktop systems and have been met with 
criticism that mostly targeted my lack of computer aptitude in general 
rather than my argument. I stressed what I termed "conceptual complexity" 
of Plan 9's model of how things should be and the lack of _any_ user 
friendly, albeit sane, abstraction on top of that complexity.


A third, more important, factor is that it was advocated to people who 
probably couldn't understand how Plan 9 would serve them better than things 
they heard of more regularly, or where this new thing's edge was that 
justified the cost of its adoption. I for one am still at a loss on that 
matter. As a hobbyist, I lurk, and occasionally--they say--troll, around 
here but I'm not keeping my huge media collection on a Plan 9 installation 
or using Acme for entering multi-lingual (up to three languages until a 
while ago, four recently) text. Either task would be extremely cumbersome 
to do on Plan 9 (and this really has little to do with the OS itself). In 
short, I won't be doing Plan 9 because it's Plan 9. I, and most of the 
lowly ones, need further justification that either hasn't been presented or 
is way above my, or our, head.


The fourth factor I can think of is Plan 9's owners' attitude towards it. I 
once dared go as far as saying it was actually "jettisoned." For reasons 
that are beyond me Plan 9 isn't seeing much attention from Bell Labs or its 
creators. It currently seems to lack the Benevolent Dictator for Life 
figure many FOSS pro

Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia

Oh, yay, a Xah Lee quote, he's surely a trusted source on all things
Lisp. Didja read his page about hiring a prostitute in Las Vegas? Or
the one about how he lives in a car in the Bay Area because he's too
crazy to get hired?


Patience, brother. Search "Paul Graham" on that page and let your mind do 
the free association. And I did say it was about wondering, didn't I?


--On Saturday, September 05, 2009 07:36 -0700 John Floren 
 wrote:



On Sat, Sep 5, 2009 at 7:27 AM, Eris Discordia
wrote:

One serious question today would be: what's LISP _really_ good for?


http://www.paulgraham.com/avg.html


I could do a similar thing:

<http://www.schnada.de/quotes/contempt.html#struetics>

... and leave you wondering (or not). I won't.



Oh, yay, a Xah Lee quote, he's surely a trusted source on all things
Lisp. Didja read his page about hiring a prostitute in Las Vegas? Or
the one about how he lives in a car in the Bay Area because he's too
crazy to get hired?


John
--
"Object-oriented design is the roman numerals of computing" -- Rob Pike









Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia

general-purpose language good for "system programming"--you seem to call
that "being a good OS language"--


I take this part back. I mixed your post with Jason Catena's for a moment.

--On Saturday, September 05, 2009 15:14 +0100 Eris Discordia 
 wrote:



Let me be a little pedantic.


The 9fans know given the haphazard nature of a hobbyist's knowledge I am
extremely bad at this, but then let me give it a try.


FYI, it's been Lisp for a while.


As long as Britannica and Merriam-Webster call it LISP I don't think
calling it LISP would be strictly wrong. Has LISt Processing become
stigmatic in Lisp/LISP community?


Like what? The if statement, which was invented by Lisp? The loop
statement, for expressing loops? It sounds like you got a dose of Scheme
rather than Lisp to me.


I just read in Wikipedia that, "Lisp's original conditional operator,
cond, is the precursor to later if-then-else structures," without any
citations. Assuming that to be true conditional branching is a
fundamental element of control flow and it has existed in machine
languages ever since early days. There's really very little to brag about
it.

Regardless, I offer the following comparison:


19.2. How to Use Defstruct

<http://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node170.html>


Struct (C programming language)

<http://en.wikipedia.org/wiki/Struct_(C_programming_language)>

In the (small mind?) mental model of the small computer there's the row of
pigeonholes and the stencil you may slide along the row for "structured"
access to its contents. I leave it to you to decide which of the above
better corresponds to that. My opinion you already know.
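
For what the "stencil" is supposed to mean in C terms, a throwaway sketch: the 
struct declaration only describes a layout, and sizeof/offsetof (both standard 
C) show where its windows fall over the row of pigeonholes. The names and 
fields here are made up purely for illustration.

#include <stdio.h>
#include <stddef.h>

/* The "stencil": a layout description, nothing more. */
struct rec {
	short id;
	short val;
	char  tag[4];
};

int
main(void)
{
	/* The row of pigeonholes: plain storage for a few records. */
	struct rec row[3] = {{1, 10, "abc"}, {2, 20, "def"}, {3, 30, "ghi"}};
	struct rec *p;

	printf("stencil is %zu bytes; windows at %zu, %zu and %zu\n",
		sizeof(struct rec), offsetof(struct rec, id),
		offsetof(struct rec, val), offsetof(struct rec, tag));

	/* Sliding the stencil along the row is just pointer arithmetic. */
	for(p = row; p < row + 3; p++)
		printf("%d %d %s\n", p->id, p->val, p->tag);
	return 0;
}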

Indeed, my only encounter with LISP has been Scheme and through a failed
attempt to read SICP.


This hasn't been true for a while. Common Lisp is a general purpose
language like any other. The only thing I have ever found obnoxious about
CL was the filesystem API. Most CL implementations are compilers these
days and they produce surprisingly efficient machine code. The Scheme
situation is more diverse but you can definitely find performance if
that's what you're alluding to.


I was alluding to the expressive power of C versus LISP considered with
respect to the primitives available on one's computing platform and
primitives in which solutions to one's problems are best expressed. It
isn't a matter of whether the language you use is supplemented by good
libraries or how fast the binary image you produce can run as I have
little doubt out there exist lightning fast implementations of complex
algorithms in LISP. I was trying to give my personal example for why I
managed to learn C and failed to learn LISP.

If you have a scrawny x86 on your desktop and are trying to implement,
say, a bubble sort--yes, the notorious bubble sort, it's still the first
thing that comes to a learner's mind--it seems C is quite apt for
expressing your (embarrassing) solution in terms of what is available on
your platform. Loops, arrays, swapping, with _minimal_ syntactic
distraction. Simple, naive algorithms should end up in simple,
immediately readable (and refutable) code. Compare two implementations
and decide for yourself:

<http://en.literateprograms.org/Bubble_sort_(Lisp)>
<http://en.literateprograms.org/Bubble_sort_(C)>
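
For reference, the C side of that comparison needs little more than this; a 
from-scratch sketch of my own, not the literateprograms version linked above:

#include <stdio.h>

/* Plain bubble sort: repeatedly sweep the array, swapping adjacent
 * out-of-order elements, until a sweep makes no swaps. */
static void
bubblesort(int *a, int n)
{
	int i, t, swapped;

	do{
		swapped = 0;
		for(i = 0; i < n-1; i++)
			if(a[i] > a[i+1]){
				t = a[i];
				a[i] = a[i+1];
				a[i+1] = t;
				swapped = 1;
			}
	}while(swapped);
}

int
main(void)
{
	int i, a[] = {5, 1, 4, 2, 8};

	bubblesort(a, sizeof a / sizeof a[0]);
	for(i = 0; i < (int)(sizeof a / sizeof a[0]); i++)
		printf("%d ", a[i]);
	printf("\n");
	return 0;
}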


Its claim to fame as the language for "wizards" remains.


I think this has more to do with Lisp users being assholes than anything
intrinsic about Lisp. This is one of the nice things about Clojure. It's
a break from tradition in this regard, as well as many others.


I really did mean "wizards" by "wizards." I intended no insult--merely
sort of an awed jealousy.


It's as though you have the up-to-date negative propaganda, but not the
up-to-date facts.


Of course. Propaganda has a wider outreach than facts, particularly when
for every textbook on a subject there are, I don't know, ten (or more?)
on the competing subject.


The main benefits it had in AI were features that came from garbage
collection and interactive development.


More importantly, LISt Processing which used to be an element of the
expert systems approach to AI and which is now defunct (as a way of
making machines intelligent, whatever that means). While "expert systems"
continue to exist the word causes enough reverb of failure to be replaced
by other buzzwords: knowledge-based systems, automated knowledge bases,
and whatnot.

I think, and may be dead wrong, LISP's ominous appearance came from
adhering to an AI paradigm. Now that the paradigm's no longer viable, why 
should the appearance persist?


An advantage it has these days is that it produces code that performs
better than, say, Python or Perl.


I cannot comment on this. Have no knowledge of Python and beg to disagree
about Perl. The entry barrier for learning Per

Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia
I forgot this: Graham basically accuses programmers who don't find LISP as 
attractive (or powerful, as he puts it) as he does of living on lower 
planes of existence from which the "heavens above" of functional (or only 
LISP) programming seem incomprehensible. He writes/speaks persuasively, 
he's a successful businessman, but is he also an honest debater?


--On Saturday, September 05, 2009 12:02 +0100 Richard Miller 
<9f...@hamnavoe.com> wrote:



One serious question today would be: what's LISP _really_ good for?


http://www.paulgraham.com/avg.html










Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia

One serious question today would be: what's LISP _really_ good for?


http://www.paulgraham.com/avg.html


I could do a similar thing:



... and leave you wondering (or not). I won't.

Paul Graham's essay/article consists of a success story, _his_ success 
story (which, in minor part, depends on continued sales of his two LISP 
books), and a variety of claims I am unqualified to verify or refute. What 
is there for me to learn? That there exists/existed one successful LISP 
application? Is that really what I had tried to negate?


Besides, if quoting ESR were a measure of credibility I'd be given some 
when I appeared to 9fans out of the blue and quoted him saying something to 
the effect that Plan 9 is dead and buried because it wasn't up to replacing 
UNIX (at the moment, that is _not_ my opinion).




--On Saturday, September 05, 2009 12:02 +0100 Richard Miller 
<9f...@hamnavoe.com> wrote:



One serious question today would be: what's LISP _really_ good for?


http://www.paulgraham.com/avg.html






Re: [9fans] nice quote

2009-09-05 Thread Eris Discordia
For one thing, I believe you have misread me. I said C was a 
general-purpose language good for "system programming"--you seem to call 
that "being a good OS language"-- and low-level application programming. I 
probably should have taken more care and written the precise term: systems 
programming.



This is like saying agglutinative languages are worse for conquering the
world with than isolating languages because the Ottoman empire fell
before the English empire.


Correlation doesn't imply causation--that's true. But there _are_ ways to 
ascertain a correlation is due to a causal relationship. One such way is to 
identify known causes of success or failure. _If_ one claims a language 
costs more to learn and rewards similarly or even less than another 
language one already has identified a known cause of failure. If failure 
does occur, causation by the language itself, rather than its surrounding 
elements (marketers, users, designers, climate, serendipity), cannot be 
ruled out.



I think it's mostly happenstance. Lots of languages succeed despite
having a killer app or app area. Python's a good example.


Despite _not_ having those, you mean, right? I think it's too early to talk 
about Python's success. It has barely lived half as long as C and one-third 
as long as LISP. If you're really going to call Python successful I don't 
know how you're going to describe Java.



Please don't interpret this as "Lisp kicks C's ass."


I don't, and I certainly wasn't implying "C kicks LISP's ass." I don't 
qualify for that sort of assertion.



There are simply too many variables to lay the blame at Lisp's alleged
functional basis.


That's a very good point. I did say "LISP represents a programming 
paradigm" but I don't think its (perceived?) failure has to do with the 
paradigm itself, rather with whether mere mortals can find application 
areas where the cost of assimilating that paradigm (and therefore learning 
the language) is justified by measurable gains.





--On Friday, September 04, 2009 15:36 -0600 Daniel Lyons 
 wrote:



Let me be a little pedantic.

On Sep 4, 2009, at 2:18 PM, Eris Discordia wrote:

Above says precisely why I did. LISP is twofold hurtful for me as a
naive, below average hobbyist.


FYI, it's been Lisp for a while.


For one thing the language constructs do not reflect the small
computer primitives I was taught somewhere around the beginning of
my education.


Like what? The if statement, which was invented by Lisp? The loop
statement, for expressing loops? It sounds like you got a dose of Scheme
rather than Lisp to me.


For another, most (simple) problems I have had to deal with are far
better expressible in terms of those very primitives. In other
words, for a person of my (low) caliber, LISP is neither suited to
the family of problems I encounter nor suited to the machines I
solve them on.


This hasn't been true for a while. Common Lisp is a general purpose
language like any other. The only thing I have ever found obnoxious about
CL was the filesystem API. Most CL implementations are compilers these
days and they produce surprisingly efficient machine code. The Scheme
situation is more diverse but you can definitely find performance if
that's what you're alluding to.


Its claim to fame as the language for "wizards" remains.


I think this has more to do with Lisp users being assholes than anything
intrinsic about Lisp. This is one of the nice things about Clojure. It's
a break from tradition in this regard, as well as many others.


Although, mind you, the AI paradigm LISP used to represent is long
deprecated (Rodney Brooks gives a good overview of this deprecation,
although not specifically targeting LISP, in "Cambrian Intelligence:
The Early History of the New AI"). One serious question today would
be: what's LISP _really_ good for? That it represents a specific
programming paradigm is not enough justification. Ideally, a
language should represent a specific application area, as does C,
i.e. general-purpose system and (low-level) application programming.



It's as though you have the up-to-date negative propaganda, but not the
up-to-date facts. Lisp is "really good for" the same kinds of things
other general purpose languages are good for. The main benefits it had in
AI were features that came from garbage collection and interactive
development. You get those benefits today with lots of systems, but that
doesn't mean they aren't still there in Lisp. An advantage it has these
days is that it produces code that performs better than, say, Python or
Perl. I definitely would not call being a "general purpose system" and
suitability for "application programming" a "specific application area."
This is like saying agglutinative language

Re: [9fans] nice quote

2009-09-04 Thread Eris Discordia

Caveat: please add IMH(UI)O in front of any assertion that comes below.

Since education was brought up: I remember I found it seriously twisted 
when I was told mathematics freshmen in a top-notch university not 
(geographically) far from me are taught not one but two courses in computer 
programming... in Java.


Being the hobbyist (as contrasted to the professional) here, and the one 
who's got the smaller cut out of the intelligence cake, I think I am sure C 
was a lot easier to learn and comprehend than either Pascal--all the kids 
were into "Pascal or C? That's the problem" back then--or C++ or even the 
mess of a language called GW-BASIC (which I learnt as a kid and before I 
knew C, too, could be learnt by kids). Even if Pascal got all the buzz 
about being a "teaching language."


What seems to distinguish--pedagogically, at least--C is, as I noted on 
that other thread, its closeness to how the small computer, not the actual 
small computer but the mental model of a small computer, works. Pointers? 
They're just references to "pigeonholes" in a row of such holes. Scope? 
It's just how long your variables are remembered. Invocation? Just a way to 
regurgitate your own cooking. If one has to solve a problem, implement an 
algorithm, on a small computer one needs to be able to explain it in terms 
of the primitives available on that computer. That's where C shines. 
There's a close connection between language primitives and the primitives 
of the underlying computer. I'm not saying this is something magically 
featuring in C--it's a property that _had_ to feature in some language some 
time, C became that. In a different time and place, on different machines, 
another language would/will be that (and it shall be called C ;-))
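
In code, the whole mental model above fits in a dozen lines. A throwaway sketch 
of my own: an array as the row of pigeonholes, a pointer as a reference into 
it, a local as something remembered only for the duration, and a call as 
regurgitating one's own cooking.

#include <stdio.h>

/* Invocation: hand the row (by reference) to a helper and get a value back. */
static int
sum(int *hole, int n)
{
	int total = 0;	/* scope: remembered only while sum() runs */

	while(n-- > 0)
		total += *hole++;	/* pointer: walk along the pigeonholes */
	return total;
}

int
main(void)
{
	int row[5] = {1, 2, 3, 4, 5};	/* the row of pigeonholes */
	int *p = &row[2];		/* a reference to the third hole */

	printf("third hole holds %d, the row sums to %d\n", *p, sum(row, 5));
	return 0;
}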


I whined about LISP on yet another thread. Above says precisely why I did. 
LISP is twofold hurtful for me as a naive, below average hobbyist. For one 
thing the language constructs do not reflect the small computer primitives 
I was taught somewhere around the beginning of my education. For another, 
most (simple) problems I have had to deal with are far better expressible 
in terms of those very primitives. In other words, for a person of my (low) 
caliber, LISP is neither suited to the family of problems I encounter nor 
suited to the machines I solve them on. Its claim to fame as the language 
for "wizards" remains. Although, mind you, the AI paradigm LISP used to 
represent is long deprecated (Rodney Brooks gives a good overview of this 
deprecation, although not specifically targeting LISP, in "Cambrian 
Intelligence: The Early History of the New AI"). One serious question today 
would be: what's LISP _really_ good for? That it represents a specific 
programming paradigm is not enough justification. Ideally, a language 
should represent a specific application area, as does C, i.e. 
general-purpose system and (low-level) application programming.


A favorite quote out of my favorite physics textbook:


Further progress lies in the direction of making our equations invariant
under wider and still wider transformations. This state of affairs is
very satisfactory from a philosophical point of view, as implying an
increasing recognition of the part played by the observer in himself
introducing the regularities that appear in his observations, and a lack
of arbitrariness in the ways of nature, but it makes things less easy for
the learner of physics.


-- P. A. M. Dirac, The Principles of Quantum Mechanics

Unlike physical phenomena, languages (natural or artificial) are subject to 
constraints that act (in comparison) very slowly and very leniently. 
There's a great deal of arbitrariness in how a computer language might 
look. It is epistemologically, aesthetically, and pragmatically 
advantageous to "remove arbitrariness" by fitting a language to either its 
target platform or its target problem, preferably both. C did and continues 
to do so; LISP doesn't (not anymore, to say the least).



P.S. UI stands for "uninformed."

--On Friday, September 04, 2009 10:47 -0700 "Brian L. Stuart" 
 wrote:



>> > K&R is beautiful in this respect. In contrast, I
>> > never managed to bite in Stroustrup's description.
>>
>> Ok, now I'll get provocative:
>> Then why do so many people have a problem understanding C?
>
> Are you saying that there is a significant number of
> people who understand C++ but not C?  The reason

I wasn't saying anything, I was asking a question. :)


Ah, I misunderstood.  The question about why people don't
understand C on the heels of a reference to Stroustrup
led me to think that was a suggestion C++ was easier to
understand than C.  Of course, I may be a little too
sensitive to such a claim, because of what I've been
hearing in the academic community for a while.  Some
keep saying that we should use more complex languages
in the introductory course because they're in some way
easier.  But I've yet to understand their definition
of easier.*

BLS

*Well, ac

Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-04 Thread Eris Discordia

there's a standard for this
red fail
orange  locate
green   activity

maybe your enclosure's not standard.


That may be the case as it's really sort of a cheap hack: Chieftec 
SNT-2131. A 3-in-2 "solution" for use in 5.25" bays of desktop computer 
cases. I hear ICY DOCK has better offers but didn't see those available 
around here.



since it's a single led and follows the drive, i think this is a voltage
problem. it just has to do with the fact that the voltage / pullup
standard changed.


Good enough explanation for me. One thing that gave me worries was the 
negative reviews of some early 7200.12's (compared to 7200.11) circulating 
around on the web. Apparently, earlier firmware versions on the series had 
serious problems--serious enough to kill a drive, some reviews claimed.



http://sources.coraid.com/sources/contrib/quanstro/root/sys/man/3


Upon reading the man page the line that relieved me was this:


The LED state has no effect on drive function.


And thanks again for the kind counsel.



--On Friday, September 04, 2009 10:10 -0400 erik quanstrom 
 wrote:



There's one multi-color (3-prong) LED responsible for this. Nominally,
green should mean drive running and okay, alternating red should mean
transfer, and orange (red + green) a disk failure. In case of 7200.11's


there's a standard for this
red fail
orange  locate
green   activity

maybe your enclosure's not standard.


I tried changing the bay in which the disk sits and the anomaly follows
the  disk so I guess the backplane's okay.


since it's a single led and follows the drive, i think this is a voltage
problem. it just has to do with the fact that the voltage / pullup
standard changed.


Um, I don't have that because I don't have any running Plan 9 instances,
but I'll try finding it on the web (if it's been through man2html at
some  time).


http://sources.coraid.com/sources/contrib/quanstro/root/sys/man/3

- erik





Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-04 Thread Eris Discordia

Many thanks for the info :-)


if there's a single dual-duty led maybe this is the problem.  how
many separate led packages do you have?


There's one multi-color (3-prong) LED responsible for this. Nominally, 
green should mean drive running and okay, alternating red should mean 
transfer, and orange (red + green) a disk failure. In case of 7200.11's 
this works as it should. In case of 7200.12 the light goes orange when the 
disk spins up and remains so. At times of transfer it goes red as it should 
but returns back to orange instead of green when there's no transfer. I 
feared the (new) disk was unhealthy and stressed it for some time but all 
seems to be fine except that light.


I tried changing the bay in which the disk sits and the anomaly follows the 
disk so I guess the backplane's okay. The tech specifically mentioned 
Seagate ES2 as a similar case and told me the disk was fine and it just 
lacked support for interacting with the light (directly or through the 
backplane, I don't know).



if you have quanstro/sd installed, sdorion(3) discusses how it
controls the backplane lights.


Um, I don't have that because I don't have any running Plan 9 instances, 
but I'll try finding it on the web (if it's been through man2html at some 
time).


--On Friday, September 04, 2009 08:41 -0400 erik quanstrom 
 wrote:



This caught my attention and you are the storage expert here. Is there
an  equivalent technology on SATA disks for controlling enclosure
facilities?  (Other than SMART, I mean, which seems to be only for
monitoring and not  for control.)


SES-2/SGPIO typically interact with the backplane, not the drive itself.
you can use either one with any type of disk you'd like.


I have this SATA backplane-inside-enclosure with 3x Barracuda 7200 series
1  TB disks attached. The enclosure lights for the two 7200.11's respond
the  right way but the one that ought to represent the 7200.12 freaks
out  (goes multi-color). Have you experienced anything similar? The tech
at the  enclosure vendor tells me some Seagate disks don't support
control of  enclosure lights.


not really.  the green (activity) light is drive driven and sometimes
doesn't work due to different voltage / pull up resistor conventions.
if there's a single dual-duty led maybe this is the problem.  how
many separate led packages do you have?

the backplane chip could simply be misprogrammed.  do the lights
follow the drive?  have you tried resetting the lights.

if you have quanstro/sd installed, sdorion(3) discusses how it
controls the backplane lights.

- erik





Re: [9fans] Petabytes on a budget: JBODs + Linux + JFS

2009-09-04 Thread Eris Discordia

- a hot swap case with ses-2 lights so the tech doesn't
grab the wrong drive,


This caught my attention and you are the storage expert here. Is there an 
equivalent technology on SATA disks for controlling enclosure facilities? 
(Other than SMART, I mean, which seems to be only for monitoring and not 
for control.)


I have this SATA backplane-inside-enclosure with 3x Barracuda 7200 series 1 
TB disks attached. The enclosure lights for the two 7200.11's respond the 
right way but the one that ought to represent the 7200.12 freaks out 
(goes multi-color). Have you experienced anything similar? The tech at the 
enclosure vendor tells me some Seagate disks don't support control of 
enclosure lights.


--On Thursday, September 03, 2009 21:20 -0400 erik quanstrom 
 wrote:



On Thu Sep  3 20:53:13 EDT 2009, r...@sun.com wrote:

"None of those technologies [NFS, iSCSI, FC] scales as cheaply,
reliably, goes as big, nor can be managed as easily as stand-alone pods
with their own IP address waiting for requests on HTTPS."
   http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-bui
   ld-cheap-cloud-storage/

Apart from the obvious comment that I swear I used a quote like that
to justify 9P more than once, I'm very curious to know how Plan9
would perform on such a box.

Erik, do you have any comments?


i'm speaking for myself, and not for anybody else here.
i do work for coraid, and i do do what i believe.  so
caveat emptor.

i think coraid's cost/petabyte is pretty competitive.
they sell a 48TB 3u unit for about 20% more.  though
one could not build 1 of these machines since the
case is not commercially available.

i see some warning signs about this setup.  it stands
out to me that they use desktop-class drives and the
drives appear hard to swap out.  the bandwith out
of the box is 125MB/s max.

aside from that, here's what i see as what you get for
that extra 20%:
- fully-supported firmware,
- full bandwidth to the disk  (no port multipliers)
- double the network bandwidth
- ecc memory,
- a hot swap case with ses-2 lights so the tech doesn't
grab the wrong drive,

oh, and the coraid unit works with plan 9.  :-)

- erik





Re: [9fans] scheme plan 9

2009-09-04 Thread Eris Discordia
The performance was enjoyable indeed, and interesting. Thanks :-) Although, 
I bet you can get better results in an easier way with VST. No tool is 
universal.


Just in case, the video wouldn't load even with the latest version of Adobe 
Flash Player on Opera (some problem with Vimeo, presumably). I downloaded 
the excerpt in MOV and watched it.


--On Thursday, September 03, 2009 08:57 -0700 Bakul Shah 
 wrote:



On Thu, 03 Sep 2009 07:29:53 BST Eris Discordia
  wrote:


I mean, I never got past SICP Chapter 1 because that first chapter got
me  asking, "why this much hassle?"


May be you had an impedance mismatch with SICP?


P.S. I'm leaving. You may now remove your
arts-and-letters-cootie-protection suits and go back to normal
tech-savvy  attire ;-)


This may not be your cup of tea or be artsy enough for you
but check out what happens when tech meets arts:

http://impromptu.moso.com.au/gallery.html

Start the first video; may be skip the first 3 minutes or
so but after that stay with it for a few minutes.  The author
is creating music by *coding* in real time (and doing a great
job!).  He uses Impromptu, a Scheme programming environment,
that supports realtime scheduling and low level sound
synthesis. Given Scheme one can then build arbitrarily
complex signal processing graphs.

For some subset of people this sort of thing just might be a
better introduction to programming than SICP. Basically
anything that allows them to do fun things with programming
and leaves them wanting more.

BTW, you too can download impromptu on OS X and synthesise
your own noize!





Re: [9fans] scheme plan 9

2009-09-02 Thread Eris Discordia

Killing parens won't make you an adult :-)


Killing the paren(t)s is the hobbyist(eenager)'s radical response to 
existential why's that arise as the world of exper(adul)ts opens up before 
them and regularities in there are found to be essentially conventional 
rather than rational or natural.


I mean, I never got past SICP Chapter 1 because that first chapter got me 
asking, "why this much hassle?"


P.S. I'm leaving. You may now remove your 
arts-and-letters-cootie-protection suits and go back to normal tech-savvy 
attire ;-)


--On Wednesday, September 02, 2009 09:35 -0700 Bakul Shah 
 wrote:



On Wed, 02 Sep 2009 12:32:53 BST Eris Discordia
  wrote:

Although, you may be better off reading SICP "as intended," and use MIT
Scheme on either Windows or a *NIX. The book (and the freaking language)
is  already hard/unusual enough for one to not want to get confused by
implementation quirks. (Kill the paren!)


The second edition of SICP uses IEEE Scheme (basically R4RS
Scheme) and pretty much every Scheme implementation supports
R4RS -- s9fes from http://www.t3x.org/s9fes/ certainly
supports it.  It doesn't support rational or complex numbers
but as I recall no example in SICP relies on those.

Killing parens won't make you an adult :-)





Re: [9fans] "Blocks" in C

2009-09-02 Thread Eris Discordia
Perl people love closures. It's one of their common programming techniques. 
Closures in C? Way to screw its clarity and closeness to the real (or 
virtual) machine. And in the end, closure or no closure doesn't change how 
the binary looks; it just lets programmers pepper the source with 
brain-teasers (now, what does _that_ evaluate to?). Not good at all.
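
For reference, a block as described in the linked article looks roughly 
like the following--a sketch using clang's extension, not tested here; on 
non-Apple systems it needs -fblocks and a blocks runtime:

	#include <stdio.h>

	int
	main(void)
	{
		int base = 10;
		/* the block literal captures 'base' from the enclosing scope -- that's the closure part */
		int (^addbase)(int) = ^(int x){ return base + x; };

		printf("%d\n", addbase(5));	/* prints 15 */
		return 0;
	}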


--On Wednesday, September 02, 2009 10:04 +0200 Anant Narayanan 
 wrote:



Mac OS 10.6 introduced a new C compiler frontend (clang), which added
support for "blocks" in C [1]. Blocks basically add closures and
anonymous functions to C (and it's derivatives). Full details with
examples are in the linked article. I think the feature is quite elegant
and might be useful in cases where you want map/reduce like functionality
in C.

How much effort would it be to support a feature similar to blocks in 8c
(and family)? What are your thoughts on the idea in general?

--
Anant

[1] http://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/10









Re: [9fans] scheme plan 9

2009-09-02 Thread Eris Discordia
Although, you may be better off reading SICP "as intended," and use MIT 
Scheme on either Windows or a *NIX. The book (and the freaking language) is 
already hard/unusual enough for one to not want to get confused by 
implementation quirks. (Kill the paren!)


--On Wednesday, September 02, 2009 10:21 +0100 matt 
 wrote:



number of schemes > 4

http://www.plan9.bell-labs.com/wiki/plan9/Contrib_index/

maybe one is what you are looking for

there is also a gsoc project, search 9fans for more details
http://9fans.net/archive/



xiangyu wrote:


Hi, everyone:
  Has anyone ported Scheme to Plan 9? Or is there some Scheme
implementation in existence on Plan 9? I want to learn SICP, but I
can't find a Scheme in Plan 9, so I ask...
Looking forward to an answer as soon as possible.
Thanks first.










Re: [9fans] critique of sockets API

2009-06-12 Thread Eris Discordia

just to correct a basic fact, the size of the instruction set doesn't
define RISC instruction set (http://en.wikipedia.org/wiki/RISC
"instruction set size").


Heh. I did refer to exactly that article and consequently inserted "adding 
to the complexity of each primitive" as well as the modifier "strictly."



can you cite any references that claim that the size of intel's
instruction set has contributed to its success?


Try the Wikipedia article on CISC, which really isn't a source, I admit. 
The general impression is that the total number and diversity of the 
instructions, and how much is done by a single instruction from the 
viewpoint of a programmer/compiler, both have a role there. How a CISC 
instruction set is implemented is beside the point. Some translation may 
take place in the process but it doesn't matter as long as it is 
transparent to the programmer/compiler.



could it be that since transistors are very cheep, adding instructions is
simply the cheapest go-faster trick?


Could be exactly that and further confirm my stance that added complexity, 
when it is carefully hidden and is exposed to the middle user only as a 
variety of orthogonal options, does not result in a bad system. It can 
actually result in a better system. I'm, of course, not claiming that the 
x86 instruction set represents an epitome of orthogonality or good design.



what is a middle user?  and why would such a person be
discouraged by having to learn fewer things?  does the myriad
of different ways to write an if statement in perl make it more
useable?  readable?


A middle user is someone who uses your product to create another product 
for an end user--an end user who very often doesn't create products of 
their own, although that may not always be the case. A programmer who uses 
the language and compiler you designed, or the OS you created, or the API 
you developed. Having to learn fewer things is not discouraging; lack of 
expressiveness due to lack of options can be. In the case of Perl, its 
popularity establishes its merit for the area in which it is popular. 
Besides, we aren't talking about whether redundancy makes a system more 
expressive (it could); we are talking about the addition of options that 
are orthogonal to existing ones: options that have no equivalent.


A callback infrastructure can be created in user land using threading, but 
that would have two disadvantages: the very point of lowering initial cost 
by using a callback strategy is made moot, and people will tend to disagree 
on how that extra infrastructure should look, so a variety of incompatible, 
highly redundant implementations will pop up.



i don't see how progress == complicated.  could you explain
how they are one and the same?

as a counter example, unix uses untyped files.  not only does
lack of typing make the system simpler, it makes it more expressive
as well.


I don't see how complex == complicated. You could have a very sophisticated 
system at hand that is rather easy to program. Increasing adaptivity 
through increased complexity is one well-known evolutionary path. The other 
well-known path is increasing resilience through decreased complexity. 
There is a point where resilience and adaptivity are just at the balance 
you desire for your audience. Making clear who your audience is should 
clear this problem as well.


Regarding your example I believe you are mixing redundancy with choice. 
While I could even argue for some redundancy I'm not trying to do so. File 
typing, as seen on Apple systems, is not orthogonal to other features 
already present in those systems; it is redundant. Windows does not have 
any file typing similar to Apple's. UNIX-like systems do make some 
distinctions between files which have become rather blurry with time. I 
brought up the the subject in my awkward manner some time ago when caching 
9P was being discussed on the list. It seemed to me that a form of typing 
that expressed at a reasonable level of detail the amenability of files to 
various caching strategies could have improved the situation of 9P caching.



i don't see how a wage-earning programmer can't be a researcher as well.
and being a wage-earning programmer, i appreciate simplicity and use
it to advantage on a daily basis.


The vast majority of them aren't. That's a fact of life. You appreciate 
simplicity because you happen to work on a specific application for which 
your target system's API is exceptionally well-rounded. Try designing a 
complex UI for some CAD software and tell me how amenable your simple 
system is to that purpose. The platforms targeted for, say, creating the 
frontend to a CAD system are not chosen by luck really. They have been made 
suitable by designers for that and other purposes. Adding orthogonal 
options, when done wisely, should not take away or negatively affect the 
core primitives you use and are content with.



several times, i've needed to get a particular bit of

Re: [9fans] critique of sockets API

2009-06-12 Thread Eris Discordia

s/could be/is/

From real world product experience across multiple operating systems
and architectures.


So there is at least one example to support the case for callbacks. I am 
pretty convinced there are many more examples.


--On Thursday, June 11, 2009 19:34 -0400 "Devon H. O'Dell" 
 wrote:



2009/6/11 Eris Discordia :

but given that plan 9 is about having a system that's easy
to understand and modify, i would think that it would be
tough to demonstrate that asynchronous i/o or callbacks
could make the system (or even applications) simpler.
i doubt that they would make the system more efficient,
either.

do you have examples that demonstrate either?


I can't claim I have anything in code which would be necessary for an
actual demonstration or for going beyond the "talk talk talk" stage. I
can, however, present one simple case: in some applications asynchronous
name resolving is a must and it can be realized by either of threads or
callbacks. Crawlers and scanners come to mind. Spawning threads for DNS
requests could be more costly than registering a set of callbacks within
one thread and then harvesting the results within that same thread.


s/could be/is/

From real world product experience across multiple operating systems
and architectures.









Re: [9fans] critique of sockets API

2009-06-11 Thread Eris Discordia

but given that plan 9 is about having a system that's easy
to understand and modify, i would think that it would be
tough to demonstrate that asynchronous i/o or callbacks
could make the system (or even applications) simpler.
i doubt that they would make the system more efficient,
either.

do you have examples that demonstrate either?


I can't claim I have anything in code which would be necessary for an 
actual demonstration or for going beyond the "talk talk talk" stage. I can, 
however, present one simple case: in some applications asynchronous name 
resolving is a must and it can be realized by either threads or 
callbacks. Crawlers and scanners come to mind. Spawning threads for DNS 
requests could be more costly than registering a set of callbacks within 
one thread and then harvesting the results within that same thread.
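
To give the flavor of that register-then-harvest structure, here is a toy 
sketch in C. The names (resolver_submit, resolver_poll) are made up for the 
example, and the lookup is done synchronously with getaddrinfo just to keep 
the sketch self-contained; a real crawler would hide a non-blocking 
resolver behind resolver_poll:

	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <arpa/inet.h>
	#include <netdb.h>

	typedef void (*dnscb)(const char *name, const char *addr, void *arg);

	struct pending {
		const char *name;
		dnscb cb;
		void *arg;
		int used;
	};

	static struct pending table[64];

	static void
	resolver_submit(const char *name, dnscb cb, void *arg)	/* returns at once */
	{
		int i;

		for (i = 0; i < 64; i++)
			if (!table[i].used) {
				table[i].name = name;
				table[i].cb = cb;
				table[i].arg = arg;
				table[i].used = 1;
				return;
			}
	}

	static void
	resolver_poll(void)	/* harvest finished lookups; callbacks fire here */
	{
		int i;
		char buf[INET_ADDRSTRLEN];
		struct addrinfo hints, *res;

		for (i = 0; i < 64; i++) {
			if (!table[i].used)
				continue;
			memset(&hints, 0, sizeof hints);
			hints.ai_family = AF_INET;
			if (getaddrinfo(table[i].name, NULL, &hints, &res) == 0) {
				inet_ntop(AF_INET,
				    &((struct sockaddr_in *)res->ai_addr)->sin_addr,
				    buf, sizeof buf);
				table[i].cb(table[i].name, buf, table[i].arg);
				freeaddrinfo(res);
			} else
				table[i].cb(table[i].name, NULL, table[i].arg);
			table[i].used = 0;
		}
	}

	static void
	report(const char *name, const char *addr, void *arg)
	{
		(void)arg;
		printf("%s -> %s\n", name, addr ? addr : "lookup failed");
	}

	int
	main(void)
	{
		resolver_submit("example.com", report, NULL);
		resolver_submit("example.org", report, NULL);
		resolver_poll();
		return 0;
	}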



one can build what's needed.  compare just r?fork and exec
with all the spawn variants windows has.


The actual number of calls in the CreateThread and CreateProcess families is 
rather small. Most calls are wrappers for more basic calls but we don't 
want to go into that.



i think you're trying to argue that — a priori — choice is good?


I believe it is. How many of us are using strictly RISC machines on our 
desks today? Extending the set of available primitives and adding to the 
complexity of each primitive both are natural steps in the development of 
computer systems as they see more use. Limiting options doesn't seem to me 
to be an effective way of encouraging good programming practice. It can, 
however, successfully discourage potential middle users.



that's not the position i subscribe to.  and since plan 9
is simple and easy to change, it makes an ideal system
for someone who wants to try new things.


And after the new things are tried out and, one may hope, proven 
advantageous? I think the next step would be incorporating them into the 
system--a process that in due time will make the system less simple and 
easy to change, but more immediately useful.


I see this more as a difference of intended audience than a difference of 
taste or philosophy. The real question is whom you imagine as your system's 
middle user: a wage-earning programmer or a researcher. Were I a programmer 
who worked for pay I'd very much appreciate being given every possible 
option that would let me do my job easier, faster, and, of course, more 
properly.


--On Thursday, June 11, 2009 14:24 -0400 erik quanstrom 
 wrote:



I might as well repeat myself: choice of strategy depends on the
application. Given choice programmers can decide on which strategy or
combination of strategies works best. Without choice, well, they will
just  live with what's available.


this is a very deep philosophical divide between windows and
systems like plan 9, and research unix.  the approach the labs
took was to provide a minimal set of primatives from which
one can build what's needed.  compare just r?fork and exec
with all the spawn variants windows has.

i think you're trying to argue that — a priori — choice is good?

but given that plan 9 is about having a system that's easy
to understand and modify, i would think that it would be
tough to demonstrate that asynchronous i/o or callbacks
could make the system (or even applications) simpler.
i doubt that they would make the system more efficient,
either.

do you have examples that demonstrate either?


One Right Way" always leaves open the question of whether a different
choice of strategy on the same platform, were a different choice
available,  would have yielded better results.


clearly if that position is accepted, computer science is a
solved problem; we should all put our heads down and just
code up the accepted wisdom.

that's not the position i subscribe to.  and since plan 9
is simple and easy to change, it makes an ideal system
for someone who wants to try new things.

- erik





Re: [9fans] critique of sockets API

2009-06-11 Thread Eris Discordia

the signal context is the original calling thread.  unless
ms' diagram is incorrect, this is a single threaded operation;
only the i/o originator can process the event.  so the plan 9


Of course it is single-threaded operation. That's the very idea behind 
using callbacks. Originally they were used to allow applications to do 
asynchronous I/O on earlier Windows incarnations that had no concept of 
threading. When threading became available it was correctly considered 
hazardous, as with any other form of concurrency, so it was and is avoided 
unless absolutely necessary. Callbacks survived the introduction of 
threading.


There are actually three thinkable options here: read-and-wait 
(synchronous), subscribe-and-continue (asynchronous using callback), and 
continue-until-notified (asynchronous using message queue). My point was 
that as of some time ago, more than a decade I believe, Windows began 
offering all three options which may be used on their own or in tandem. The 
three options can be sorted in order of increasing initial cost: callback, 
message queue, thread (and/or "lightweight process"). Average cost reverses 
that ordering. The application determines which strategy is better.
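
As a minimal sketch of the middle option--overlapped I/O with a completion 
routine, the Win32 flavor of subscribe-and-continue--something like the 
following; error handling is stripped down, and the callback only fires 
once the requesting thread enters an alertable wait:

	#include <windows.h>
	#include <stdio.h>

	static char buf[4096];

	static VOID CALLBACK
	done(DWORD err, DWORD nread, LPOVERLAPPED ov)
	{
		(void)ov;
		printf("read completed: error=%lu, bytes=%lu\n", err, nread);
	}

	int
	main(void)
	{
		OVERLAPPED ov;
		HANDLE h = CreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ,
			NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);

		if (h == INVALID_HANDLE_VALUE)
			return 1;
		ZeroMemory(&ov, sizeof ov);
		if (!ReadFileEx(h, buf, sizeof buf, &ov, done))
			return 1;
		/* do other work here, then let the completion routine run */
		SleepEx(INFINITE, TRUE);
		CloseHandle(h);
		return 0;
	}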


On the subject of who gets to process a certain event on Windows it should 
be noted that any process/thread that subscribes for an event will be 
notified, if an opportunity for subscription is provided in the first 
place. In case of a read other processes/threads have no notion that a 
specific read has been requested and they will never want to process the 
results of a read they haven't requested. Interface and other events that 
may concern more than one process/thread, on the other hand, can be 
processed by any worker that opts to process them. A chain of handlers with 
varying priorities is very easy to construct within this framework.



there's plenty of overlapped i/o in plan 9 — it's in the
disk and network device drivers.  even stripping away
the register fiddling, it's not an i/o model that's attractive;
that's the reason all the details are hidden in the kernel.


I might as well repeat myself: choice of strategy depends on the 
application. Given choice programmers can decide on which strategy or 
combination of strategies works best. Without choice, well, they will just 
live with what's available. It is common with Windows programmers to use a 
combination of threading, callbacks, and synchronous I/O that best 
represents the application's workings. GUI is often run in its own thread 
which is made to wait only on I/O that crucially affects future 
interactions, e.g. a user "Open File" request. Processing of sporadic 
network events, e.g. reports from a network resource, is done in auxiliary 
message queues. Light network operations are delegated to a single thread 
that uses callbacks. Only heavy network operations are aggressively 
threaded.


Most notable here is the fact that allowing for various compatible options 
to grow within the same system can only enrich it while insisting on "The 
One Right Way" always leaves open the question of whether a different 
choice of strategy on the same platform, were a different choice available, 
would have yielded better results.



are you saying the author(s) had their windows blinders on
and might not have considered other options?


I believe at this point it is clear that all thinkable options are 
available on the platform you refer to as (horse) "blinders" but that 
really isn't important. The author(s) had criticized a specific I/O model 
without mentioning, or probably even knowing, that alternatives existed and 
could be profiled for tangible, definitive results. I merely pointed that 
out.


--On Thursday, June 11, 2009 08:16 -0400 erik quanstrom 
 wrote:



On Thu Jun 11 04:12:13 EDT 2009, eris.discor...@gmail.com wrote:

> i don't think i understand what you're getting at.
> it could be that the blog was getting at the fact that select
> funnels a bunch of independent i/o down to one process.
> it's an effective technique when (a) threads are not available
> and (b) processing is very fast.

This might help: what he is getting at is probably the question of why
not  make possible network applications that consist of a bunch of
callbacks or  a mix of callbacks and listener/worker threads. Windows
implements both  synchronous and asynchronous I/O. Threads are
available. Callbacks, too, as  well as message queues.


are you saying the author(s) had their windows blinders on
and might not have considered other options?

my windows-fu is very low.  but according to microsoft
http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx
windows "asynchronous" (overlapped) i/o signals the calling thread, so
the signal context is the original calling thread.  unless
ms' diagram is incorrect, this is a single threaded operation;
only the i/o originator can process the event.  so the plan 9
model would seem to me to be better threaded and i think
th

Re: [9fans] critique of sockets API

2009-06-11 Thread Eris Discordia

i don't think i understand what you're getting at.
it could be that the blog was getting at the fact that select
funnels a bunch of independent i/o down to one process.
it's an effective technique when (a) threads are not available
and (b) processing is very fast.


This might help: what he is getting at is probably the question of why not 
make possible network applications that consist of a bunch of callbacks or 
a mix of callbacks and listener/worker threads. Windows implements both 
synchronous and asynchronous I/O. Threads are available. Callbacks, too, as 
well as message queues. Ideally, it is the programmer's informed choice 
based on their understanding of their application's priorities whether to 
use callbacks, listener/worker threads, message queues, or a combination. 
Someone may find it worth the effort to compare these approaches on a 
platform that provides both. (COM is notorious for implementing things 
through callback that get some wrapping of one's head around them before 
making sense.)


--On Tuesday, June 09, 2009 20:07 -0400 erik quanstrom 
 wrote:



On Tue Jun  9 19:22:39 EDT 2009, bpisu...@cs.indiana.edu wrote:

Well, select() or alt might or might not be required depending on
whether  you want your thread to wait till the read operation waiting
for data from the network completes.


your thread will always wait until any system call completes;
they're all synchronous all the time.  if you want your application
to do something else at the same time, you're going to need two
threads and a synch device (like a lock + shared memory or a channel).


read()->process()->read()... alternating sequence of operations that is
required, wherein the application has to explicitly go fetch data from
the network  using the read operation. To borrow text from the paper:

The API does not provide the programmer a way in which to say, "Whenever
there is data for me, call me to process it directly."



i don't think i understand what you're getting at.
it could be that the blog was getting at the fact that select
funnels a bunch of independent i/o down to one process.
it's an effective technique when (a) threads are not available
and (b) processing is very fast.

perhaps you think this is dodging the question, but the canonical
plan 9 approach to this is to note that it's easy (trivial) to have n
reader threads and m worker threads s.t. i/o threads don't block
unless (a) there's nothing to i/o, or (b) the workers aren't draining
the queue fast enough; and s.t. worker threads don't block
unless (a) the i/o threads can't keep up.  in this case, there is no
work to do anyway.  consider these two types of threads;
let mb be a pointer to a message buffer.

thread type 1
	for(;;)
		mb <- freechan
		read(fd, mb->wp, BUFSIZE);
		mb -> fullchan

thread type 2
	for(;;)
		mb <- fullchan
		do stuff
		mb -> freechan

if your server issues responses, it's easy to add thread type 3.

as you can see, this is a simple generalization of your case.  (if we
have a queue depth of 0 and one thread 1 and one thread 2,
we will get your loop.)  yet there should be no waiting.
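
spelled out with the thread(2) library, those two thread types might look 
roughly like this--a sketch only, untested; the Msg struct, the counts and 
the stack sizes are placeholders, not anything from the message above:

	#include <u.h>
	#include <libc.h>
	#include <thread.h>

	enum { Bufsize = 8192, Nbuf = 16, Stack = 8192 };

	typedef struct Msg Msg;
	struct Msg {
		uchar	buf[Bufsize];
		long	n;
	};

	Channel *freechan;	/* of Msg* */
	Channel *fullchan;	/* of Msg* */
	int fd;

	void
	reader(void *v)		/* thread type 1 */
	{
		Msg *mb;

		USED(v);
		for(;;){
			mb = recvp(freechan);
			mb->n = read(fd, mb->buf, Bufsize);
			sendp(fullchan, mb);
		}
	}

	void
	worker(void *v)		/* thread type 2 */
	{
		Msg *mb;

		USED(v);
		for(;;){
			mb = recvp(fullchan);
			/* do stuff with mb->buf[0..mb->n] */
			sendp(freechan, mb);
		}
	}

	void
	threadmain(int argc, char **argv)
	{
		int i;

		USED(argc);
		USED(argv);
		fd = 0;		/* read standard input, just for the sketch */
		freechan = chancreate(sizeof(Msg*), Nbuf);
		fullchan = chancreate(sizeof(Msg*), Nbuf);
		for(i = 0; i < Nbuf; i++)
			sendp(freechan, malloc(sizeof(Msg)));
		for(i = 0; i < 2; i++)
			proccreate(reader, nil, Stack);	/* readers block in read, so they get procs */
		threadcreate(worker, nil, Stack);
		worker(nil);	/* threadmain doubles as a second worker */
	}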


The question was meant to ask as to how easy it is to programmatically
use the filesystem interface in a multi home network. But I agree that
support for  multiple network interfaces in Plan9 is way superior.


i think the answer to your question is that programs don't care.
programs that take a canonical network address are just as
happy to accept /net/tcp!www.example.com!http as they are to
accept /net.alt/tcp!www.example.com!http.  for a short while
i ran a 10gbit network on /net.10g.

- erik









Re: [9fans] Adventures of a home user

2009-04-23 Thread Eris Discordia
To whom it may concern: had the right patches for Plan 9 to work on Virtual 
PC been incorporated and a new ISO released, half the complaints from 
Windows users who want to give Plan 9 a try would disappear. Some potential 
enterprise users might also get interested in running many Plan 9 instances 
on Microsoft Virtual Server platform after seeing it run on Virtual PC (Due 
to its light weight Plan 9 may be a good choice for some virtual hosting 
services).


P.S. No need to remind me the originator of this thread is trying Plan 9 on 
a VM in Linux. He has a working Windows installation anyway and configuring 
Virtual PC for networking (or any task) is way easier than QEMU. 
Performance is comparable. Plus, VPC's graphics and guest-host integration 
work perfectly.


--On Thursday, April 23, 2009 5:12 PM +0800 Jim Habegger 
 wrote:



My Plan 9 training is temporarily suspended while I learn to use QEMU.

That's funny because I suspended my Slackware training to learn to use
Plan 9.

Now I might suspend my QEMU training to try out some other virtualizers.
Also, I got a FreeDOS image to use for my QEMU training, so I may wander
off into FreeDOS for a while.








Re: [9fans] VirtualBox and Plan9

2009-04-21 Thread Eris Discordia
I have tried to boot a number of Plan 9 4e ISOs on a number of VirtualBox 
releases (even from before Sun acquired it). Just won't work. It will take 
someone who knows Plan 9 very well to debug it and find out exactly why it 
doesn't.


A problem with Virtual PC was recently solved, I remember, by patching the 
kernel. Nobody said whether the patch worked or not, probably because applying 
it and recompiling the kernel was too complicated a task for a novice. And 
that was with the ISO properly booting and a complete installation on the 
(virtual) disk. If the ISO doesn't boot and the installation system never 
begins I assume one will have to create a new ISO with the patched kernel 
as well--not something I can dream of doing.


--On Tuesday, April 21, 2009 11:50 PM -0600 Ramon de Vera 
 wrote:



Has anyone else tried grabbing the latest Plan9 ISO (from the website)
and installing it via VirtualBox v2.2 ? I got only as far as:

kfs...

And the whole thing just stalled and hung at that point, so I am
hoping that someone can point me to the wrong vbox  settings that I am
using.

I am trying this from a Vista box. Please don't say go to a different
OS and try again. :-)

Regards,
Mon









Re: [9fans] Help for home user discovering Plan 9

2009-04-18 Thread Eris Discordia

Actually, I used Windows for years before discovering something
better. I explicitly disabled updates in XP, and it would insist on
looking for them and bothering me about them, anyway.


I put it here for I don't know what to call it--shall we say... historical 
record?--how to turn off your Windows XP installation's automatic update 
service: get into Control Panel, run the System applet, turn to Automatic 
Updates page tab, set the radio button to your desired option. If you want 
Windows to never download anything of its own accord, even when instructed 
by applications (such as InstallShield) that use Windows Update 
infrastructure for their purposes, go to Control Panel, go to 
Administrative Tools, run the Services MMC snap-in, find Background 
Intelligent Transfer Service, stop the service, set the service's startup 
mode to 'Disabled.'


Very easy, very logical, very intuitive, clearly documented, and even 
self-documented. Windows has lots of disadvantages but UI, configuration, 
and representation of the local system are where there's the smallest 
concentration of them. If you want to blame it, get under the hood, find 
actual OS design flaws, and then laugh to your heart's content.


In conclusion, I apologize to 9fans for polluting their list with Windows 
nonsense. This will end right here even if J. R. Mauro goes on to say 
her/his Windows system won't boot after a clean successful installation.


--On Saturday, April 18, 2009 3:43 PM -0400 "J.R. Mauro" 
 wrote:



On Sat, Apr 18, 2009 at 2:29 PM, Eris Discordia
 wrote:

That is a lie. There are updates which (at least on XP) you could
never refuse. Nevermind the fact that Windows would have to restart
more than once on a typical series of updates.


Windows isn't really the subject on this thread or this list. Except when
someone goes out of their way to nonsensically blame it. I don't think
that's really meaningful or productive in any imaginable way. As it
happens, no one here is really a Windows user (or some are and they're
laughing in the hiding bush). You are no better. Please do substantiate
what you claim or stop trolling. There are absolutely no mandatory
Windows updates; you can run a Windows system intact, with zero
modification, for as long as you want or as long as it holds up given
its shortcomings. So, my educated guess goes: you have zero acquaintance
with that OS. Not even as much acquaintance as a normal user should have.


Actually, I used Windows for years before discovering something
better. I explicitly disabled updates in XP, and it would insist on
looking for them and bothering me about them, anyway.

Now maybe I missed some other option or the option I chose was
misleadingly labeled, or something was biffed in my registry. I just
googled for "can't turn off Automatic update" and found a bunch of
similar stories, though. In any event, it was so long ago I can't
remember what the circumstances exactly were.



--On Saturday, April 18, 2009 12:19 PM -0400 "J.R. Mauro"
 wrote:


On Sat, Apr 18, 2009 at 2:08 AM, Eris Discordia
 wrote:


This thing about Windows updates, I think it's a non-issue. It's not
like updates are mandatory and, as a matter of fact, there's rather
fine-grained classification of them on Microsoft's knowledge base which
can be used by any more or less experienced user to identify exactly
what they need for addressing a specific glitch and to download and
install that and only that. Periodic updates of Windows are really
unnecessary and can be easily turned off. Cumulative updates (like the
service packs), on the other hand, are often the best way to go.


That is a lie. There are updates which (at least on XP) you could
never refuse. Nevermind the fact that Windows would have to restart
more than once on a typical series of updates.



What seems to actually be the problem for you is that you don't like
being told there's a closed modification to your existing closed
software. Well, that's the nature of binary-only proprietary for-profit
software. The only way to get you to pay out of anything other than
good will, which is a rare bird.


No, I think he's saying that Windows Update is a piece of fetid garbage.



P.S. On open/free software mailing lists and forums justice is often
not done to Windows, et al. Particularly, no meaningful alternative is
presented for carrying out the important duties Windows currently
performs for general computing, i.e. non-technical home and office
applications which combined together were and continue to be the killer
application of microcomputers.


Mac's updater is miles ahead of Windows Update, but both are still
crappy. I've given Linux to several "computer illiterates" and they
were immediately relieved that they could open up a single application
and search for any kind of software they needed, and up

Re: [9fans] Help for home user discovering Plan 9

2009-04-18 Thread Eris Discordia

That is a lie. There are updates which (at least on XP) you could
never refuse. Nevermind the fact that Windows would have to restart
more than once on a typical series of updates.


Windows isn't really the subject on this thread or this list. Except when 
someone goes out of their way to nonsensically blame it. I don't think 
that's really meaningful or productive in any imaginable way. As it 
happens, no one here is really a Windows user (or some are and they're 
laughing in the hiding bush). You are no better. Please do substantiate 
what you claim or stop trolling. There are absolutely no mandatory Windows 
updates; you can run a Windows system intact, with zero modification, for 
as long as you want or as long as it holds up given its shortcomings. So, 
my educated guess goes: you have zero acquaintance with that OS. Not even 
as much acquaintance as a normal user should have.


--On Saturday, April 18, 2009 12:19 PM -0400 "J.R. Mauro" 
 wrote:



On Sat, Apr 18, 2009 at 2:08 AM, Eris Discordia
 wrote:

This thing about Windows updates, I think it's a non-issue. It's not like
updates are mandatory and, as a matter of fact, there's rather
fine-grained classification of them on Microsoft's knowledge base which
can be used by any more or less experienced user to identify exactly
what they need for addressing a specific glitch and to download and
install that and only that. Periodic updates of Windows are really
unnecessary and can be easily turned off. Cumulative updates (like the
service packs), on the other hand, are often the best way to go.


That is a lie. There are updates which (at least on XP) you could
never refuse. Nevermind the fact that Windows would have to restart
more than once on a typical series of updates.



What seems to actually be the problem for you is that you don't like
being told there's a closed modification to your existing closed
software. Well, that's the nature of binary-only proprietary for-profit
software. The only way to get you to pay out of anything other than good
will, which is a rare bird.


No, I think he's saying that Windows Update is a piece of fetid garbage.



P.S. On open/free software mailing lists and forums justice is often not
done to Windows, et al. Particularly, no meaningful alternative is
presented for carrying out the important duties Windows currently
performs for general computing, i.e. non-technical home and office
applications which combined together were and continue to be the killer
application of microcomputers.


Mac's updater is miles ahead of Windows Update, but both are still
crappy. I've given Linux to several "computer illiterates" and they
were immediately relieved that they could open up a single application
and search for any kind of software they needed, and updating it all
was done by that simple application. How simple is that!

The rate of failure of updates (compared to Windows update, which
would leave you with a completely unusable system every once in a
while) was also much lower.



--On Saturday, April 18, 2009 8:11 AM +0200 lu...@proxima.alt.za wrote:


The update/installation process in Ubuntu sucks. If you try something
using BSD ports or Gentoo portage, you can fine tune things and have
explicit control over the update process.


I was specifically omitting BSD ports, as they are in a different
league.  The point I _was_ making is that one readily sacrifices
control for convenience and that Linux and Windows users and those who
assist them have to accept second-rate management and pay for it (I
should know, I can see it when XP decides to use the GPRS link for its
updating :-(

Enough reason for me to prefer Plan 9 (and NetBSD, but I can only get
my teeth into so many apples), if there weren't many more reasons.

++L











Re: [9fans] Help for home user discovering Plan 9

2009-04-18 Thread Eris Discordia
This thing about Windows updates, I think it's a non-issue. It's not like 
updates are mandatory and, as a matter of fact, there's rather fine-grained 
classification of them on Microsoft's knowledge base which can be used by 
any more or less experienced user to identify exactly what they need for 
addressing a specific glitch and to download and install that and only 
that. Periodic updates of Windows are really unnecessary and can be easily 
turned off. Cumulative updates (like the service packs), on the other hand, 
are often the best way to go.


What seems to actually be the problem for you is that you don't like being 
told there's a closed modification to your existing closed software. Well, 
that's the nature of binary-only proprietary for-profit software. The only 
way to get you to pay out of anything other than good will, which is a rare 
bird.


P.S. On open/free software mailing lists and forums justice is often not 
done to Windows, et al. Particularly, no meaningful alternative is 
presented for carrying out the important duties Windows currently performs 
for general computing, i.e. non-technical home and office applications 
which combined together were and continue to be the killer application of 
microcomputers.


--On Saturday, April 18, 2009 8:11 AM +0200 lu...@proxima.alt.za wrote:


The update/installation process in Ubuntu sucks. If you try something
using BSD ports or Gentoo portage, you can fine tune things and have
explicit control over the update process.


I was specifically omitting BSD ports, as they are in a different
league.  The point I _was_ making is that one readily sacrifices
control for convenience and that Linux and Windows users and those who
assist them have to accept second-rate management and pay for it (I
should know, I can see it when XP decides to use the GPRS link for its
updating :-(

Enough reason for me to prefer Plan 9 (and NetBSD, but I can only get
my teeth into so many apples), if there weren't many more reasons.

++L






Re: [9fans] VMs, etc. (was: Re: security questions)

2009-04-17 Thread Eris Discordia

even today on an average computer one has this articulation: a CPU (with
a FPU perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
graphical capacities (terminal) : GPU.


It so happens that a reversal of specialization has really taken place, as 
Brian Stuart suggests. These "terminals" you speak of, GPUs, contain such 
vast untapped general processing capabilities that new uses and a new 
framework for using them are being defined: GPGPU and OpenCL.





Right now, the GPU on my low-end video card takes a huge burden off of the 
CPU when leveraged by the right H.264 decoder. Two high definition AVC 
streams would significantly slow down my computer before I began using a 
CUDA-enabled decoder. Now I can easily play four in parallel.


Similarly, the GPUs in PS3 boxes are being integrated into one of the 
largest loosely-coupled clusters on the planet.




Today, even a mere cellphone may contain enough processing power to run a 
low-traffic web server or a 3D video game. This processing power comes 
cheap so it is mostly wasted.


I'd like to add to Brian Stuart's comments the point that previous 
specialization of various "boxes" is mostly disappearing. At some point in 
the near future all boxes may contain identical or very similar powerful 
hardware--even probably all integrated into one "black box." So cheap that 
it doesn't matter if one or another hardware resource is wasted. To put to 
good use such a computational environment system software should stop 
incorporating a role-based model of various installations. All boxes, 
except the costliest most special ones, shall be peers.


--On Friday, April 17, 2009 7:11 PM +0200 tlaro...@polynum.com wrote:


On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstu...@bellsouth.net wrote:

- First, the gap between the computational power at the
terminal and the computational power in the machine room
has shrunk to the point where it might no longer be significant.
It may be worth rethinking the separation of CPU and terminal.
For example, I'm typing this in acme running in a 9vx terminal
booted using using a combined fs/cpu/auth server for the
file system.  But I rarely use the cpu server capability of
that machine.


I'm afraid I don't quite agree with you.

The definition of a terminal has changed. In Unix, the graphical
interface (X11) was a graphical variant of the text terminal interface,
i.e. the articulation (link, network) was put on the wrong place,
the graphical terminal (X11 server) being a kind of dumb terminal (a
little above a frame buffer), leaving all the processing, including the
handling of the graphical interface (generating the image,
administrating the UI, the menus) on the CPU (Xlib and toolkits run on
the CPU, not the Xserver).

A terminal is not a no-processing capabilities (a dumb terminal):
it can be a full terminal, that is able to handle the interface,
the representation of data and commands (wandering in a menu shall
be terminal stuff; other users have not to be impacted by an user's
wandering through the UI).

More and more, for administration, using light terminals, without
software installations is a way to go (less ressources in TCO). "Green"
technology. Data less terminals for security (one looses a terminal, not
the data), and data less for safety (data is centralized and protected).


Secondly, one is accustomed to a physical user being several distinct
logical users (accounts), for managing different tasks, or accessing
different kind of data.

But (to my surprise), the converse is true: a collection of individuals
can be a single logical user, having to handle concurrently the very
same rw data. Terminals are then just distinct views of the same data
(imagine in a CAD program having different windows, different views of a
file ; this is the same, except that the windows are on different
terminals, with different "instances" of the logical user in front of
them).

The processing is then better kept on a single CPU, handling the
concurrency (and not the fileserver trying to accomodate). The views are
multiplexed, but not the handling of the data.

Thirdly, you can have a slow/loose link between a CPU and a terminal
since the commands are only a small fraction of the processing done.
You must have a fast or tight link between the CPU and the fileserver.

In some sense, logically (but not efficiently: read the caveats in the
Plan9 papers; a processor is nothing without tightly coupled memory, so
memory is not a remote pool sharable---Mach!), even today on an
average computer one has this articulation: a CPU (with a FPU
perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
graphical capacities (terminal) : GPU.

--
Thierry Laronde (Alceste) 
 http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C









Re: [9fans] security questions

2009-04-17 Thread Eris Discordia
Very nice of you to go to such lengths to describe Inferno to a non-techie. 
Thank you. Just got the Fourth Edition ISO and will try it. Maybe even 
learn some Limbo in the long term.


--On Friday, April 17, 2009 1:55 PM +0200 lu...@proxima.alt.za wrote:


what it is that Inferno does for a user or what a user can do
with it; what distinguishes it from other (operating?) systems. I've
decided to try it because documentation says it will readily run on
Windows.


Let's start with the fact that Inferno is a small-footprint, hosted
operating environment with its own, complete development tool set.  As
such it is strictly portable across many architectures with all the
advantages of such portability as well as all the useful features
Inferno inherited from Plan 9.  Not least of these is Limbo, a
programming language based on the mourned Alef and, conveniently,
interpreted by the Limbo virtual machine, not dissimilar from, but
much better thought out than the JAVA virtual machine.

You can pile on any number of additional great attributes of Inferno
and Limbo that make them highly useful.  There is also the option to
run Inferno natively on some architectures (I've never dug any deeper
than the PC for this, so off the top of my head I can provide no
exciting examples) with all the drawbacks of needing device drivers
for all sorts of inconsiderate platforms.

In a way, I guess Inferno is a slightly different Plan 9 with built-in
virtualisation for a wide range of platforms.  But the differences are
notable even if the philosophy is the same between the two
environments.

++L










Re: [9fans] Help for home user discovering Plan 9

2009-04-17 Thread Eris Discordia
It's like I'm seeing an apparition of myself back more than a year ago. No 
wonder 9fans got to dislike me so much. Do 9fans get nuisances like me at 
regular intervals?


--On Friday, April 17, 2009 1:14 PM + Balwinder S Dheeman 
 wrote:



On 04/15/2009 05:22 PM, Pietro Gagliardi wrote:

On Apr 15, 2009, at 4:26 AM, Eris Discordia wrote:


Plan 9 is not intended for home or home office.


True, but that doesn't mean it can't be used in such an environment. I
type all my reports up in Plan 9.


Please set aside rare cases and let us know who except for the students,
teachers and, or researchers uses Plan9 and, or Inferno in the offices,
homes and, or cafes and for what?

The Plan9 project started in 1980, took around 9 years to be solid
enough to be usable and that too by the internal and, or lab people
[http://plan9.bell-labs.com/sys/doc/9.html] only. Whereas, the FreeBSD
and, or Linux (though not an OS or Unix variant in a sense) came into
existence later in 1993 and 1991 respectively are more popular among any
other variants of Unix.

IMHO, the Plan9 and, or Inferno are just failed attempts and have no
real and, or viable commercial and, or industrial use in absence of
hardware drivers and, or not the killer but some useful applications.

Moreover, the user interface and, or window manager i.e. rio is too
technical for an average user to put in to a good use. It lacks usual
buttons for minimizing (hiding), maximizing, controlling windows. You
can't even send a window to background and even if Inferno's wm has some
of these including title bars, but the meanings and, or behavior of the
same is quite different from other popular GUI systems.

--
Balwinder S "bdheeman" DheemanRegistered Linux User: #229709
Anu'z li...@home (Unix Shoppe)Machines: #168573, 170593, 259192
Chandigarh, UT, 160062, India Plan9, T2, Arch/Debian/FreeBSD/XP
Home: http://cto.homelinux.net/~bsd/  Visit: http://counter.li.org/





Re: [9fans] security questions

2009-04-17 Thread Eris Discordia
I see. Thanks for the edification :-) I found--still find--it hard to 
understand what Inferno is/does. Actually read 
 but it isn't very 
direct about what it is that Inferno does for a user or what a user can do 
with it; what distinguishes it from other (operating?) systems. I've 
decided to try it because documentation says it will readily run on Windows.


As a side note, I found a short passage in the Inferno paper that confirmed 
something I had pointed out previously on this list in almost identical 
wording (and been ridiculed for):



The Styx protocol lies above and is independent of the communications
transport layer; it is readily carried over TCP/IP, PPP, ATM or various
modem transport protocols.


--On Friday, April 17, 2009 11:47 AM +0200 lu...@proxima.alt.za wrote:


I don't know what Inferno is but the phrase 'virtual machine' appears
somewhere in the product description. Isn't Inferno the 'it' you're
searching for?


No, Inferno resembles - very superficially, as you will discover if
you study the literature - a JAVA interpreter surrounded by its own
operating system.  There are so many clever things about Inferno, it
is hard to do it justice.  But it is not a virtualiser.  More's the
pity, of course.  A virtualiser with Inferno's good features would be
a very useful device.

Actually, I have long had a feeling that there is a convergence of
VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
Lguest, etc.), but it's one of these intuition things that I cannot
turn into anything concrete.

++L






Re: [9fans] security questions

2009-04-16 Thread Eris Discordia

The other thought that comes to mind is to consider something
like class based queuing (from the networking world).  That
is, allow choice of different allocation/scheduling/resource
use policies and allow further subdivision.


As with jail, this is also present in FreeBSD, I believe. It's called 
'login classes,' although it's probably not as flexible as you'd want it to 
be.


--On Thursday, April 16, 2009 7:07 PM -0700 Bakul Shah 
 wrote:



On Thu, 16 Apr 2009 21:25:06 EDT "Devon H. O'Dell"
  wrote:

That said, I don't disagree. Perhaps Plan 9's environment hasn't been
assumed to contain malicious users. Which brings up the question: Can
Plan 9 be safely run in a potentially malicious environment?  Based on
this argument, no, it cannot. Since I want to run Plan 9 in this sort
of environment (and thus move away from that assumption), I want to
address these problems, and I kind of feel like it's weird to be
essentially told, ``Don't do that.''


Why not give each user a virtual plan9? Not like vmware/qemu
but more like FreeBSD's jail(8), "done more elegantly"[TM]!
To deal with potentially malicious users you can virtualize
resources, backed by limited/configurable real resources.

The other thought that comes to mind is to consider something
like class based queuing (from the networking world).  That
is, allow choice of different allocation/scheduling/resource
use policies and allow further subdivision. Then you can give
preferential treatment to known good guys.  Other users can
still experiment to their heart's content within the
resources allowed them.

My point being think of a consistent high level model that
you like and then worry about implementation details.









Re: [9fans] security questions

2009-04-16 Thread Eris Discordia

Plan 9 itself makes a great platform on which to construct
virtualisation.


I don't know what Inferno is but the phrase 'virtual machine' appears 
somewhere in the product description. Isn't Inferno the 'it' you're 
searching for?


--On Friday, April 17, 2009 6:48 AM +0200 lu...@proxima.alt.za wrote:


One can indirectly (and more consistently) limit the number of
allocated resources in this fashion (indeed, the number of open file
descriptors) by determining the amount of memory consumed by that
resource as proportional to the size of the resource. If I as a user
have 64,000 allocations of type Foo, and struct Foo is 64 bytes, then
I hold 1,000 Foos.


And by this, I clearly mean 64,000 bytes of allocated Foos.


From purely a spectator's perspective, I believe that if one needs to
add considerable complexity to Plan 9 in the form of user-based kernel
resource management, one may as well look carefully at the option of
adding self-virtualisation to the Plan 9 kernel and manage resources
in the virtualisation layer.

Plan 9 has provided a wide range of sophisticated, yet simple
techniques to solve a wide range of computer/system problems, but I'm
of the opinion that it missed virtualisation as one of these
techniques.  I may be dreaming, but I've long been of the opinion that
Plan 9 itself makes a great platform on which to construct
virtualisation.

++L










Re: [9fans] vgadb woes

2009-04-15 Thread Eris Discordia
Try generating a working modeline for your X.Org and then just put the 
numbers from modeline into vgadb. xvidtune should help with generating the 
modeline.


A modeline looks like:

Modeline "mode_name_here" 106.47 1440 1520 1672 1904 900 901 904 932 -HSync 
+Vsync


Numbers are from left to right, :

01. Clock (= pixel clock frequency, 'include' section 'clock')
02. HDisplay (= horizontal resolution)
03. HSyncStart (= 'include' section 'shb')
04. HSyncEnd (= 'include' section 'ehb')
05. HTotal (= 'include' section 'ht')
06. VDisplay (= vertical resolution)
07. VSyncStart (= 'include' section 'vrs')
08. VSyncEnd (= 'include' section 'vre')
09. VTotal (= 'include' section 'vt')
10. HSync (= 'include' section 'hsync', either '+' or '-')
11. VSync (= 'include' section 'vsync', either '+' or '-')

There's almost one-to-one correspondence between these numbers and the 
cryptic numbers referred to in vgadb(6). Translations in parentheses :-)
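
Plugging the modeline above into that mapping, the guts of the vgadb entry 
would look something like this -- treat it as a sketch and copy the exact 
layout (and the enclosing monitor section) from an existing entry in 
/lib/vgadb:

	clock=106.47
	shb=1520 ehb=1672 ht=1904
	vrs=901 vre=904 vt=932
	hsync=- vsync=+

The entry then gets selected by the monitor= and vgasize= (1440x900 here) 
settings in plan9.ini, as far as I understand it.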


--On Wednesday, April 15, 2009 3:11 PM -0400 "Devon H. O'Dell" 
 wrote:



I've got a laptop that I (for shits and giggles) decided to put Plan 9
on. Lo and behold, it worked fine (Compal EL80, Core 2 Duo, 2GB RAM,
nVidia video).

So, I'm running at 1280x1024x32 right now in VESA, which is
reasonable, but I'd like to run at my maximum native resolution, which
is 1680x1050 (I believe). After tooling around with Xorg configs, I've
found a horiz/vert refresh rate that should work for me...

...except that I have no idea how to convert that into vgadb lingo.
I've read all the comments in vgadb, and the manpage, which helpfully
suggests that I purchase a rather dated book. I suppose it's at least
available, but in the interest of ``I want it now,'' are there any
hints on translating eg.

Option  "DPMS"
HorizSync   28-84
VertRefresh 43-60

into vgadb(6) lingo?

Only other bit of potentially relevant information I have is that Xorg
reports the monitor as having a ``330.0 MHz pixel clock''.

Thanks,

--dho









Re: [9fans] some measurements in plan 9

2009-04-15 Thread Eris Discordia

OK, they just run for a few seconds, so, any suggestions are welcome
(in fact needed)


/dev/bintime


Wouldn't many rounds of running with different data sets (to minimize cache 
hits) and timing with time also serve the same purpose? Does time introduce 
some unavoidable minimum margin of error for short runs that will get 
multiplied by the number of times and drown the actual measurement? If time 
only introduces fluctuations they will cancel each other out in many runs 
but if there's an unavoidable minimum error then multiple runs won't help.
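
For timing inside the program itself, something along these lines should do 
on Plan 9 -- nsec() is, as far as I know, backed by /dev/bintime, and 
runit() is just a stand-in for whatever is being measured:

#include <u.h>
#include <libc.h>

/* stand-in for the code under test */
static void
runit(void)
{
	sleep(0);
}

void
main(void)
{
	vlong t0, t1;
	int i, n;

	n = 1000;
	t0 = nsec();		/* nanoseconds, via /dev/bintime */
	for(i = 0; i < n; i++)
		runit();
	t1 = nsec();
	print("%lld ns per run\n", (t1 - t0) / n);
	exits(nil);
}

Running the loop many times and dividing at least averages away the cost of 
the two nsec() calls themselves, which speaks to the fluctuation point above.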


--On Wednesday, April 15, 2009 8:22 AM -0700 ron minnich 
 wrote:



On Wed, Apr 15, 2009 at 8:18 AM, hugo rivera  wrote:

seems reasonable to me, I assume you are looking at data consumption
only?


well, I am not really sure what you mean. Data consumption? ;-)


sorry. Memory data footprint. Not code + memory. This all depends on
lots of factors, but for code remember that text is shared.



OK, they just run for a few seconds, so, any suggestions are welcome
(in fact needed)


/dev/bintime

ron





Re: [9fans] Help for home user discovering Plan 9

2009-04-15 Thread Eris Discordia

If you phrased this slightly more gently, people may in fact agree
with you.


They'd be agreeing with the wrong formulation, then.


But Plan 9 is a great environment to experiment in.


Sure. So is every nascent or vestigial system.

Anyhow, the thread's originator says he's interested in computer systems in 
a very autotelic way. So, applications don't matter a lot; he's going to 
dine on the contents of the Petri dish no matter what :-D


--On Wednesday, April 15, 2009 3:52 PM +0200 lu...@proxima.alt.za wrote:


but I
don't think you can get much from it by way of productivity, unless you
intend to get productive in software engineering and/or computer science.


If you phrased this slightly more gently, people may in fact agree
with you.  Although I find my workstation quite a useful mail agent,
perhaps for all the wrong reasons (the best way to describe it:
acme/Mail is *fast*!).

But Plan 9 is a great environment to experiment in.  Perhaps you ought
to look upon it as the Petri dish for information technology: concepts
grow a great deal faster in Plan 9 than they do elsewhere, for all the
_right_ reasons.

++L






Re: [9fans] Help for home user discovering Plan 9

2009-04-15 Thread Eris Discordia

Now I need to decide whether to install qemu or kvm, and whether to
install it in Ubuntu or in Debian, and then reorganize my partitions
accordingly.


QEMU would be the way to go. It seems most people here who run Plan 9 in a 
VM do it on QEMU on Linux; you'll have a better chance of getting answers 
if something goes wrong. I believe there won't be any need for changing 
your partition table as long as you don't want QEMU read/write from/to a 
"raw" partition.


--On Wednesday, April 15, 2009 9:05 PM +0800 Jim Habegger 
 wrote:



On Wed, Apr 15, 2009 at 4:26 PM, Eris Discordia
 wrote:

Plan 9 is not intended for home or home office.


Yes, I understood that from the responses to my questions. As soon as
I read them, I gave up the idea of trying to switch to Plan 9. Now
it's more about enriching my knowledge and experience. It might be
good experience for me to see how far I can stretch Plan 9 for home
computing.


learning about
computers is for me only a pleasant aside to actual use of computers


It's more the other way around with me. Using them is only a pleasant
aside to learning about them!

Now I need to decide whether to install qemu or kvm, and whether to
install it in Ubuntu or in Debian, and then reorganize my partitions
accordingly.









Re: [9fans] Help for home user discovering Plan 9

2009-04-15 Thread Eris Discordia
I don't know if it's because of bashfulness or what that people aren't 
telling it to your face: Plan 9 is not intended for home or home office. It 
hasn't matured to that point, and the age when it had a chance to mature is 
already past. From what I've read on this list it probably serves as 
the back-end to some useful SOHO (and embedded?) applications, in addition 
to research and probably industrial use, but I don't think it's the 
front-end to any. The people who use it--I don't--are all either very 
much interested in computer systems or simply students, professors, 
researchers, and/or employees in the field.


You can try using Plan 9--I did and was dejected because learning about 
computers is for me only a pleasant aside to actual use of computers--but I 
don't think you can get much from it by way of productivity, unless you 
intend to get productive in software engineering and/or computer science.


--On Tuesday, April 14, 2009 2:05 PM +0800 Jim Habegger 
 wrote:



We have three Windows laptops in our family. I've been using free
software systems off and on for years. Last week I learned about Plan
9 from Bell Labs, from someone in a Linux Questions forum. Now I have
it installed on a partition on my laptop, along with XP,
Ubuntu-on-NTFS, Debian, and Slackware. I've learned to access a fat
partition, change the font size, and use Acme. Now I need to learn how
to set up a wireless connection to the family router network, access
my files on my wife's Vista laptop, and browse the Internet.

My wireless card is not listed in Plan9.ini. Does that mean there's no
way for me to connect with that card?

I'd like to learn how much I can use Plan 9 for home office,
multimedia and Internet socializing, then I'd like to experiment with
distributing the system between computers. I've learned about as much
as I can for now from the documentation on the Plan 9 site, except for
how to connect to the network. I'm waiting to find out if it's even
possible.

Now I'm listing /bin, reading man pages, and practicing commands.
After that I might have some questions. Meanwhile, does anyone have
any suggestions about learning to use Plan 9 for home office,
multimedia and Internet socializing, and then to learn more about
networking and distributed systems?





Re: [9fans] a bit OT, programming style question

2009-04-11 Thread Eris Discordia

No, bash's completion system is what's responsible for line numbers in
the thousands.


How? Is bash's completion on your system different than on my system? I'd 
like you to substantiate that statement and will thank you for a proper 
response.


--On Friday, April 10, 2009 3:33 PM -0400 "J.R. Mauro"  
wrote:



On Thu, Apr 9, 2009 at 6:09 PM, Eris Discordia 
wrote:

It only starts to balloon once you begin customizing bash.


Have you customized your bash by aliases as long as tens or hundreds of
lines? Now is it bash's fault you have defined an alias for something
that ought to be a script/program in its own right?


No, bash's completion system is what's responsible for line numbers in
the thousands.



--On Thursday, April 09, 2009 3:34 PM -0400 "J.R. Mauro"
 wrote:


No, it's very likely bigger. wc -l is lines of course, and I'm
guessing each line is more than 1 character. However,

$ set | wc -l
64

I don't quite get that locally.


It only starts to balloon once you begin customizing bash. I'm not
sure how rc handles functions, but the nice thing about zsh is that it
compiles them to bytecode instead of this insanity that bash employs.


















Re: [9fans] a bit OT, programming style question

2009-04-09 Thread Eris Discordia

this is the "space-shuttle dichotomy."  it's a false one.  it's a
continuum. its ends are dangerous.


So somewhere in the middle is the golden mean? I have no objections to 
that. *BSD systems very well represent a silver, if not a golden, 
mean--just my idea, of course.



it is interesting to me that some software manages to run off both
ends of this continuum at the same time.  in linux your termcap
from 1981 will still work, but software written to access /sys last
year is likely out-of-date.


While I won't vouch for Linux as a good OS (user-land and kernel combined) 
I understand what you see as its eccentricity is merely a side-effect of 
openness. Tighten the development up and you get a BSD-style system 
(committer/contributor/maintainer/grunt/user highest-to-lowest ranking, 
with a demiurge position for Theo de Raadt). Tighten it even further up 
with in-ken shared among a core group of old-timers and thoroughbreds 
transmitted only to serious researchers and you get Plan 9.


You are right, after all. It all lies on a continuum. Actually, more 
tightly regulated Linux distros such as Slackware readily demonstrate that; 
they easily beat all-out all-open distros like Fedora (whose existence is 
probably perceived at Red Hat as a big brainstorming project).



your insinuation that *bsd is a real serious system and plan 9 is
a research system doesn't make any historical sense to me.  they
both started as research systems.  i am not aware of any law that
prevents a system that started as a research project from becoming
a serious production system.


What I am insinuating is more like this: any serious system will sooner or 
later have to grow warts and/or contract herpes. That's an unavoidable 
consequence of social life. If you do insist that Plan 9 has no warts, or 
far fewer warts than the average, or that it has never seen a cold sore on 
its upper lip then I'll happily conclude it has never lived socially. And I 
haven't really ever used Plan 9 or "been into it." The no-herpes indicator 
is that strong.



i know of many thousands of plan 9 systems in production right
now.


Good for you. Honestly.

--On Thursday, April 09, 2009 11:06 AM -0400 erik quanstrom 
 wrote:



On Thu Apr  9 10:48:08 EDT 2009, eris.discor...@gmail.com wrote:

Most of it in the 19 lines for one TERMCAP variable. Strictly a relic of
the past kept with all good intentions: backward compatibility, and
heeding


[...]


Quite a considerable portion of UNIX-like systems, FreeBSD in this case,
is  the way it is not because the developers are stupid, rather because
they  have a "constituency" to tend to. They aren't carefree researchers
with  high ambitions.


this is the "space-shuttle dichotomy."  it's a false one.  it's a
continuum. its ends are dangerous.

on the one hand, if you change things, the new things are likely
to be buggy.  on the space shuttle, this is bad.  people die.

on the other hand, systems are not perfect.  and if the problems
are not addressed, eventually the system will need to much fixing
and will be abandoned.

yet bringing a new system on line is an even bigger risk.  everything
is new simultaneously.

it is interesting to me that some software manages to run off both
ends of this continuum at the same time.  in linux your termcap
from 1981 will still work, but software written to access /sys last
year is likely out-of-date.

your insinuation that *bsd is a real serious system and plan 9 is
a research system doesn't make any historical sense to me.  they
both started as research systems.  i am not aware of any law that
prevents a system that started as a research project from becoming
a serious production system.

i know of many thousands of plan 9 systems in production right
now.

- erik





Re: [9fans] a bit OT, programming style question

2009-04-09 Thread Eris Discordia

It only starts to balloon once you begin customizing bash.


Have you customized your bash by aliases as long as tens or hundreds of 
lines? Now is it bash's fault you have defined an alias for something that 
ought to be a script/program in its own right?


--On Thursday, April 09, 2009 3:34 PM -0400 "J.R. Mauro" 
 wrote:



No, it's very likely bigger. wc -l is lines of course, and I'm
guessing each line is more than 1 character. However,

$ set | wc -l
64

I don't quite get that locally.


It only starts to balloon once you begin customizing bash. I'm not
sure how rc handles functions, but the nice thing about zsh is that it
compiles them to bytecode instead of this insanity that bash employs.









Re: [9fans] a bit OT, programming style question

2009-04-09 Thread Eris Discordia

Seems Charles Forsyth's bash (or wc -l) works very differently.


[r...@host ~/]# set | wc -l
  49
[r...@host ~/]#


37 out of 49 are just environment variables (as contrasted to shell 
variables). So the shell is using 12 variables in addition to the 
environment. A 'set | wc -c' gives 2133, over half of which are from the 
environment, 972 of them in TERMCAP.


--On Thursday, April 09, 2009 3:28 PM -0400 "Devon H. O'Dell" 
 wrote:



2009/4/9 Richard Miller <9f...@hamnavoe.com>:

set | wc -l
  8047
well.


This is nearly as big as the shell itself in the (ahem) good old days.

term% tar tzvf interdata_v6.tar.gz bin/sh
--rwxr-xr-x     8316 Nov 13 15:48 1978 bin/sh


No, it's very likely bigger. wc -l is lines of course, and I'm
guessing each line is more than 1 character. However,

$ set | wc -l
64

I don't quite get that locally.

--dho









Re: [9fans] a bit OT, programming style question

2009-04-09 Thread Eris Discordia

Try env | wc -l in bash. Now tell me why that value is so big.



[r...@host ~]# env | wc -l
37
[r...@host ~]#


Is that very high? I don't even know if it is or how it would mean anything 
bad (or good for that matter) assuming it were high. Not to mention, it's a 
very bad metric. Because:



[r...@host ~]# env | wc -c
1404
[r...@host ~]#


Most of it in the 19 lines for one TERMCAP variable. Strictly a relic of 
the past kept with all good intentions: backward compatibility, and heeding 
the diversity of hardware and configuration that still exists out there. 5 
of the other 18 lines are completely specific to my installation. That 
leaves us with 13 short lines.


Quite a considerable portion of UNIX-like systems, FreeBSD in this case, is 
the way it is not because the developers are stupid, rather because they 
have a "constituency" to tend to. They aren't carefree researchers with 
high ambitions.


--On Tuesday, April 07, 2009 11:04 PM -0400 "J.R. Mauro" 
 wrote:



On Tue, Apr 7, 2009 at 9:48 PM, Eris Discordia 
wrote:

The man page *does* say it's too big and slow. So does the bash
manpage. And getting readline to do anything sane is about as fun as
screwing around with a terminfo file.


A bad implementation is not a bad design. And, in fact, the badness of
the implementation is even questionable in the light of bash's normal
behavior or the working .inputrc files I've been using for some time.


Behavior is not indicative of good design. It just means that the
bandaids heaped upon bash (and X11, and...) make it work acceptably.

Try env | wc -l in bash. Now tell me why that value is so big.



Anyway, thanks for the info.

--On Tuesday, April 07, 2009 3:57 PM -0400 "J.R. Mauro"
 wrote:


On Tue, Apr 7, 2009 at 2:21 PM, Eris Discordia
 wrote:


I see. But seriously, readline does handle bindings and line editing
for bash. Except it's a function instead of a program and you think
it's a bad idea.


The man page *does* say it's too big and slow. So does the bash
manpage. And getting readline to do anything sane is about as fun as
screwing around with a terminfo file.



--On Tuesday, April 07, 2009 10:31 PM +0800 sqweek 
wrote:


2009/4/7 Eris Discordia :


Keyboard
bindings for example; why couldn't they be handled by a program
that just does keyboard bindings + line editing, and writes
finalized lines to the shell.


Like... readline(3)?


 No.
-sqweek





--On Tuesday, April 07, 2009 8:09 AM -0700 ron minnich
 wrote:


On Tue, Apr 7, 2009 at 12:28 AM, Eris Discordia
 wrote:



Like... readline(3)?


one hopes not.

ron























Re: [9fans] a bit OT, programming style question

2009-04-07 Thread Eris Discordia

The man page *does* say it's too big and slow. So does the bash
manpage. And getting readline to do anything sane is about as fun as
screwing around with a terminfo file.


A bad implementation is not a bad design. And, in fact, the badness of the 
implementation is even questionable in the light of bash's normal behavior 
or the working .inputrc files I've been using for some time.


Anyway, thanks for the info.

--On Tuesday, April 07, 2009 3:57 PM -0400 "J.R. Mauro"  
wrote:



On Tue, Apr 7, 2009 at 2:21 PM, Eris Discordia 
wrote:

I see. But seriously, readline does handle bindings and line editing for
bash. Except it's a function instead of a program and you think it's a
bad idea.


The man page *does* say it's too big and slow. So does the bash
manpage. And getting readline to do anything sane is about as fun as
screwing around with a terminfo file.



--On Tuesday, April 07, 2009 10:31 PM +0800 sqweek 
wrote:


2009/4/7 Eris Discordia :


Keyboard
bindings for example; why couldn't they be handled by a program that
just does keyboard bindings + line editing, and writes finalized
lines to the shell.


Like... readline(3)?


 No.
-sqweek





--On Tuesday, April 07, 2009 8:09 AM -0700 ron minnich
 wrote:


On Tue, Apr 7, 2009 at 12:28 AM, Eris Discordia
 wrote:



Like... readline(3)?


one hopes not.

ron


















Re: [9fans] a bit OT, programming style question

2009-04-07 Thread Eris Discordia
I see. But seriously, readline does handle bindings and line editing for 
bash. Except it's a function instead of a program and you think it's a bad 
idea.


--On Tuesday, April 07, 2009 10:31 PM +0800 sqweek  wrote:


2009/4/7 Eris Discordia :

Keyboard
bindings for example; why couldn't they be handled by a program that
just does keyboard bindings + line editing, and writes finalized lines
to the shell.


Like... readline(3)?


 No.
-sqweek





--On Tuesday, April 07, 2009 8:09 AM -0700 ron minnich  
wrote:



On Tue, Apr 7, 2009 at 12:28 AM, Eris Discordia
 wrote:



Like... readline(3)?


one hopes not.

ron









Re: [9fans] a bit OT, programming style question

2009-04-07 Thread Eris Discordia

Keyboard
bindings for example; why couldn't they be handled by a program that
just does keyboard bindings + line editing, and writes finalized lines
to the shell.


Like... readline(3)?


SEE ALSO
   The Gnu Readline Library, Brian Fox and Chet Ramey
   The Gnu History Library, Brian Fox and Chet Ramey
   bash(1)


-- man readline
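
And if one really wanted the separate "line editor in front" program 
described above, GNU readline makes it a few lines of C -- this is only an 
illustration of the idea, not anything bash actually does:

#include <stdio.h>
#include <stdlib.h>
#include <readline/readline.h>
#include <readline/history.h>

int
main(void)
{
	char *line;

	rl_outstream = stderr;	/* keep the editing echo off the pipe */

	/* read edited lines on the terminal, write finalized lines to
	   stdout, which could be piped into a dumb shell */
	while((line = readline("")) != NULL){
		add_history(line);
		printf("%s\n", line);
		fflush(stdout);
		free(line);
	}
	return 0;
}

Link with -lreadline. Whether that counts as sane is, of course, the point 
under dispute.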

--On Tuesday, April 07, 2009 3:53 PM +0800 sqweek  wrote:


2009/4/7 Corey :

Keyboard
bindings for example; why couldn't they be handled by a program that
just does keyboard bindings + line editing, and writes finalized lines
to the shell.


 Congratulations, you've perceived the difference between shell and
terminal. A lot of people stuck in modern unix fail to notice this
one... which is not that surprising considering the state of modern
unix terminals (9term excepted - quiet Anothy :P).
-sqweek









Re: [9fans] way OT but shocking none the less

2009-04-07 Thread Eris Discordia
Not really. Rackable Systems has long been one of those things that 
were there but you never saw. And SGI's end was coming. When did they 
decommission IRIX? Wasn't it some years ago?


I think Rackable Systems is going to focus on SGI's supercomputing 
background and boost their line with parallel supercomputers. Just wild 
guessing for fun.


--On Monday, April 06, 2009 7:37 AM -0700 Benjamin Huntsman 
 wrote:



...SGI... was purchased for just $25M by Rackable Systems


I saw that.  It's a sad day when such an icon of the computer industry
gets bought by some company I've never heard of for a (relatively) piddly
little sum...





Re: [9fans] Plan 9 on Routers?

2009-03-25 Thread Eris Discordia

as long as you restrict your network to plan 9 machines, it is possible
to import /net from a gateway machine and avoid sticky things like packet
filtering.


Back to the future yet? May I suggest that "sticky" packet filtering, and 
more generally packet manipulation, has crucial applications in any 
packet-switched network (like... "the Net"), and that a certain OS's current 
lack of out-of-the-box facilities to deal with the problem does not 
automatically mean the problem should be thrown out. Of course, in an 
essentially sheltered world not having an IDS is as good as having one, but, 
you see, that's the world of a certain OS. Other OSes have to live in the 
wild.


P.S. This is a get-back from the NAT thread.

--On Tuesday, March 24, 2009 7:20 PM -0400 erik quanstrom 
 wrote:



It seems that /net/iproute is where I can start. It has a complete
interface for editing routes. What we need is a user space script that
implements routing, like http://www.openbgp.org/ does on OpenBSD.
Except that, it will only have to send add, delete and flush control
messages to the iproute file.


see  ipconfig(8).


About Packet Classification. I read that iptables is not needed on
Plan 9 because its "mount /net over the network" concept achieved
anonymity or transparency -- something along those lines. "There are
no logs about who is sending what, and that is a good thing".


that's not strictly true.  as long as you restrict your network to
plan 9 machines, it is possible to import /net from a gateway
machine and avoid sticky things like packet filtering.  there is
also ipmux (discussed in ip(3)).  i don't think ipmux has enough
rewriting (or state) to implement something like nat.

- erik









Re: [9fans] Venti by another name

2009-02-16 Thread Eris Discordia

I'll know soon enough as I'm in the process of building a Venti store for
our video files. I just wondered if anyone had done it already.


That kindles in me the bystander's interest. It'd be nice of you to report 
back on results.


--On Monday, February 16, 2009 12:13 PM + matt 
 wrote:






Can any lossless compression scheme improve on {J,M}PEG? If multiple
streams are stored in venti then there may be some common blocks but
even that should not significantly surpass RAR algorithm's "solid"
archiving of the same streams. Not that I understand these things.


Only if there are shared blocks. Macro blocks in MPEG are 8x8 pixels
which makes them < 32k in size. If you're lucky you might just align them
when written to Venti!

I'll know soon enough as I'm in the process of building a Venti store for
our video files. I just wondered if anyone had done it already.

I doubt there's much in it.

Matt









Re: [9fans] Venti by another name

2009-02-15 Thread Eris Discordia
BitTorrent protocol chops the upload into same-sized blocks and hashes each 
block. Then the blocks are traded. It predates other P2P protocols that do 
so. I don't know about anonymity/privacy guards (FreeNet was mentioned). In 
BT these hashes are contained in the torrent file and used for integrity 
checking so it can't be correlated with venti.
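
The per-piece hashing is simple enough to sketch with Plan 9's libsec -- 
256K is just a typical piece size, and a real .torrent concatenates the raw 
20-byte digests into its "pieces" string instead of printing them:

#include <u.h>
#include <libc.h>
#include <mp.h>
#include <libsec.h>

enum { Piecelen = 256*1024 };	/* piece size is fixed per torrent */

/* hash each fixed-size piece independently, so a peer can verify
   pieces as they arrive, in any order */
void
hashpieces(int fd)
{
	uchar *buf, digest[SHA1dlen];
	long n;
	int i;

	buf = malloc(Piecelen);
	if(buf == nil)
		sysfatal("malloc: %r");
	while((n = readn(fd, buf, Piecelen)) > 0){
		sha1(buf, n, digest, nil);
		for(i = 0; i < SHA1dlen; i++)
			print("%2.2ux", digest[i]);
		print("\n");
	}
	free(buf);
}

void
main(void)
{
	hashpieces(0);		/* hash standard input */
	exits(nil);
}

Venti's content addressing is built on per-block SHA-1 hashes in much the 
same way, which is what invites the comparison in the first place.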


OFF's idea seems okay but it's rather trite. You can always represent 
copyrighted content in different ways, pass it on, and claim you haven't 
exchanged the "copyrighted bytes." That's why you are barred by the DMCA 
from transcoding even for personal use the content for which you have an 
"only personal home use" license (of course, you abide the law, don't you 
;-). The person who did the transform/transcoding is still legally liable.


Also, you can always create a lookup table of all possible bit sequences of 
a certain length and represent your data (padded to a multiple of that 
length) as a chain of such sequences. That's no wonderwork.



I've not pushed any mpeg data into venti, though I have idly wondered if
there are any disk saving.


Can any lossless compression scheme improve on {J,M}PEG? If multiple 
streams are stored in venti then there may be some common blocks but even 
that should not significantly surpass RAR algorithm's "solid" archiving of 
the same streams. Not that I understand these things.


--On Saturday, February 14, 2009 3:34 PM + matt 
 wrote:



I came across this today

http://offsystem.sourceforge.net/

It's a P2P system where data blocks are traded not files.

A file becomes a set of blocks and if requested, anyone who has the block
can supply the data, even if they don't possess the same file.

In that way no-one is sharing copyrighted material in the large, just
coincedent blocks.

I've not pushed any mpeg data into venti, though I have idly wondered if
there are any disk saving.












Re: [9fans] Flash Video

2009-02-03 Thread Eris Discordia

Very interesting. Thank you.

By the way, Gnash seems to be quite useful.

--On Tuesday, February 03, 2009 12:22 PM +0100 Christian Walther 
 wrote:



2009/2/3 Eris Discordia :

I don't know of any open source implementations of Flash Player. The
software on each platform and for each browser seems to be (c) Adobe and
closed source. Does an open source implementation, however incomplete,
exist?


Well, there is"gnash", which aims to be an open source flash movie
player. It relies on ffmpeg or gstreamer to decode any video.
More info can be found on http://www.gnashdev.org/





--On Tuesday, February 03, 2009 6:30 AM -0500 Pietro Gagliardi 
 wrote:



-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Feb 3, 2009, at 5:11 AM, Eris Discordia wrote:


I don't know of any open source implementations of Flash Player. The
software on each platform and for each browser seems to be (c) Adobe
and closed source. Does an open source implementation, however
incomplete, exist?


The two major ones are swfdec and Gnash, the latter part of the GNU
project. They're both at version 0.8.4, but swfdec 0.9.2 is available as
development version.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.8 (Darwin)

iEYEARECAAYFAkmIKrkACgkQuv7AVNQDs+wJ+wCghd0KZynItmM56GgowKv7MxZq
XI0An03lZmRkM5PrQ1PjCY+EBlOwyRM5
=yeuI
-END PGP SIGNATURE-









Re: [9fans] Flash Video

2009-02-03 Thread Eris Discordia
I don't know of any open source implementations of Flash Player. The 
software on each platform and for each browser seems to be (c) Adobe and 
closed source. Does an open source implementation, however incomplete, 
exist?


Videos embedded in SWF files are encoded with Sorenson Spark (an H.263 
variant) or, more recently, On2 VP6 and H.264.



--On Monday, February 02, 2009 8:14 AM -0500 "Devon H. O'Dell" 
 wrote:



2009/2/2 Akshat Kumar :

2009/2/2 Skip Tavakkolian <9...@9netics.com>:

it might require a c-section.
might want to start with VLC or ffmpeg.



My aim was just to get 9fans talking about it.
Hence, the pushing.

But yes, what information can you provide
about either of those, with regards to porting
or creating natively?


The Flash file format is an open standard
(http://www.adobe.com/devnet/swf/). To be useful for encoded video,
you'd need a VP6 codec (which seems lolno) and x264. It would probably
be possible to do at least the x264 stuff via ffmpeg, which is
probably not too difficult to port -- it's pretty simple code and the
codecs are easily portable. To be useful for anything else, you'd also
need a bytecode interpreter that understood the compiled actionscript
-- it's just a bytecode-compiled ECMAScript, and I believe its details
are also found in that PDF. The rest is being able to display JPG/PNG
raster images and antialiased TTF and vectors. (Flash allows you to
embed fonts into the generated SWF output as well).

--dho


ak








Re: [9fans] cheap, low-resolution terminal

2009-01-27 Thread Eris Discordia

yes, it is *also* hard to mistype on it.


Yeah, you have a point there. It's unimaginable to me how someone can type 
on that keyboard, but it seems some people do and do well.


My question's target was whether it's because of a motor disability that 
matt uses a Maltron or out of personal choice. Apparently, people do it out 
of personal choice so I assume matt's case is the same (not that all 
exceptions have been excluded, but anyway). Ergonomics is a successful 
sales pitch, after all.


--On Wednesday, January 28, 2009 3:31 AM +0100 Gorka Guardiola 
 wrote:



On Wed, Jan 28, 2009 at 12:00 AM, Jonas Amoson 
wrote:

it is quite hard to mistype on it...



yes, it is *also* hard to mistype on it.


--
- curiosity sKilled the cat




