Re: [9fans] Balancing Progress and Accessibility in the Plan 9 Community. (Was: [9fans] Interoperating between 9legacy and 9front)

2024-05-12 Thread Dan Cross
On Sun, May 12, 2024 at 9:33 PM  wrote:
> I don't think this approach has ever worked in
> the open source world -- it always starts with
> someone building something useful. The vision
> and goal is defined by the work being done.
>
> After something useful is built, people start
> to join in and contribute.
>
> After enough people join in, it makes sense to
> have more organization.

I remain mystified by the desired end state here.  For all intents and
purposes, as far as the wider world is concerned, 9front is plan 9.
I'm not sure I'd want that burden, to be honest, but that's just me.
That aside, realistically, 9front is the only thing in the plan 9
world that has energy behind it.

On the other hand, there's 9legacy, which pulls together some useful
patches and attempts to carry on in a manner imagined to be closer to
what Bell Labs did. That's fine; it's low activity, but people are
busy, have lives to live, all that stuff. Regardless, some people seem
to be genuinely offended by its existence, and I can't really
understand why.

Meanwhile, the people actually doing any work are in communication
with one another, regardless of what label is applied to the software
running on their individual computers, which is as it should be.

So what is it, exactly, that people want?

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tcf128fa955b8aafc-M8382853c36f5465da5c3743a
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] one weird trick to break p9sk1 ?

2024-05-12 Thread Dan Cross
On Sun, May 12, 2024 at 12:44 PM Richard Miller <9f...@hamnavoe.com> wrote:
> 23h...@gmail.com:
> > sorry for ignoring your ideas about a p9sk3, but is your mentioning of
> > ocam's razor implying that dp9ik is too complicated?
> > is there any other reason to stick with DES instead of AES in
> > particular? i'm not a cryptographer by any means, but just curious.
>
> My comments are about p9sk1; I'm not implying anything about other
> algorithms.  When working with other people's software, whether
> professionally or for my own purposes, I try to take a
> minimum-intervention approach: because it's respectful, because of
> Occam's Razor, because of Tony Hoare's observation that software can
> be either so simple that it obviously has no bugs, or so complicated
> that it has no obvious bugs.

Forgive my saying it, Richard, but I think this is a somewhat overly
staid view of things.

Software, as a constructed object, is maybe unique in that it is
almost infinitely malleable, and the "minimum intervention" approach
is often not terribly useful. As for being respectful of other
people's software, who are these other people? The original authors of
plan 9 are no longer involved, and indeed, the intellectual property
has been transferred to the foundation, and by any reasonable standard
the "community" has been given responsibility for the evolution of the
code.

As for the proposed strawman `p9sk3`, I fail to see what advantage
that would have over dp9ik, except perhaps a less silly name. The
person who wrote the paper on plan 9 security sees it being superior
to what's there now, after all, and frankly he'd know better than
either Occam or Tony Hoare.

- Dan C.

> I thought of 3DES in the first instance because of this desire to be
> minimally disruptive.  Support for DES is already there and tested.
> 3DES only needs extra keys in /mnt/keys, and because 3DES encryption
> with all three keys the same becomes single DES, there's a graceful
> fallback when users have access only via an older client with
> unmodified p9sk1. Obviously the server ticket would always be protected
> by 3DES.
> 
> This is only the first scratching of an idea, not implemented yet.
> 
> I've got nothing against AES. I'm not a cryptographer either, but I did once
> have to build a javacard implementation for a proprietary smartcard which
> involved a lot of crypto infrastructure, and had to pass EMV certification.
> Naturally that needed AES, elliptic curves, and plenty of other esoterica
> to fit in with the existing environment and specifications.
> 
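
A note on the degenerate-key property mentioned above: 3DES in EDE mode
computes E_k3(D_k2(E_k1(x))), so with k1 = k2 = k3 the middle decryption
undoes the first encryption and the whole thing collapses to single DES.
A quick, untested check of that identity, using OpenSSL's legacy DES
interface on a Unix box (my own illustration, not part of Richard's
proposal; these functions are deprecated in OpenSSL 3.0 but still present):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/des.h>

    int
    main(void)
    {
        DES_cblock key = {0x13, 0x34, 0x57, 0x79, 0x9b, 0xbc, 0xdf, 0xf1};
        DES_cblock in = "plan9!!";      /* one 8-byte block */
        DES_cblock out1, out3;
        DES_key_schedule ks;

        DES_set_key_unchecked(&key, &ks);
        /* single DES */
        DES_ecb_encrypt(&in, &out1, &ks, DES_ENCRYPT);
        /* 3DES (EDE) with all three key schedules identical */
        DES_ecb3_encrypt(&in, &out3, &ks, &ks, &ks, DES_ENCRYPT);
        printf("%s\n", memcmp(out1, out3, 8) == 0 ? "identical" : "different");
        return 0;
    }

It prints "identical", which is exactly the graceful fallback described above.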

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T56397eff6269af27-M76fe847d3ed83b053ad32e0f
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Interoperating between 9legacy and 9front

2024-05-11 Thread Dan Cross
On Sat, May 11, 2024 at 4:17 PM Jacob Moody  wrote:
> On 5/11/24 14:59, Dan Cross wrote:
> > On Sat, May 11, 2024 at 3:36 PM hiro <23h...@gmail.com> wrote:
> >>> explanation of dp9ik, which while useful, only
> >>> addresses what (I believe) Richard was referring to in passing, simply
> >>> noting the small key size of DES and how the shared secret is
> >>> vulnerable to dictionary attacks.
> >>
> >> i don't remember what richard was mentioning, but the small key size
> >> wasn't the only issue, the second issue is that this can be done
> >> completely offline. why do you say "only", what do you think is
> >> missing that should have been documented in addition to that?
> >
> > Probably how a random teenager could break it in an afternoon. :-)
>
> If we agree that:
>
> 1) p9sk1 allows the shared secret to be brute-forced offline.
> 2) The average consumer machine is fast enough to make a large amount of 
> attempts in a short time,
> in other words triple DES is not computationally hard to brute force these
> days.
>
> I don't know how you don't see how this is trivial to do.
> A teenager can learn to download hashcat, all that is missing from this
> right now is some python script to get the encrypted shared secret from a
> running p9sk1 server. All the code for doing this is already written in C
> as part of the distribution, you just have to only do half the negotiation
> and break out. I think you vastly underestimate the resourcefulness of
> teenagers.
>
> I had previously stated I would publish the PoC that friends of mine in
> university built as part of their class, I have been asked to not do that
> so I will not.

To be clear: _I'm_ not saying it can't be done. I don't know that it
can be done in an _afternoon_; maybe a day or two, but I honestly
don't know. I was just trying to clarify what (I think) Richard was
asking for.
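
For a rough sense of scale (my own back-of-the-envelope, order of magnitude
only): DES has a 56-bit key, so there are 2^56 ≈ 7.2e16 candidates. At an
assumed 1e10 keys/second -- roughly what an optimized GPU cracker manages --
exhausting the whole space takes about 7.2e6 seconds, call it 83 days worst
case and half that on average; dedicated FPGA rigs have done the full
keyspace in about a day. More to the point, the p9sk1 key is derived from a
password, so a dictionary attack over plausible passwords is vastly cheaper
than exhaustive search, which is what makes "an afternoon" believable for
weak passwords.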

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tde2ca2adda383a3a-Me442d3920e7aeed16791c3f8
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Interoperating between 9legacy and 9front

2024-05-11 Thread Dan Cross
On Sat, May 11, 2024 at 4:05 PM hiro <23h...@gmail.com> wrote:
> are you discontinuing 9legacy?

I'm not doing anything, just explaining why it hasn't happened.

Hey! It's a nice day out. A bit chilly with some wind, but sunny. I
don't know about you, but I'm going fishing.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tde2ca2adda383a3a-M0a433758862ca1b4a69e2e90
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Interoperating between 9legacy and 9front

2024-05-11 Thread Dan Cross
On Sat, May 11, 2024 at 3:52 PM hiro <23h...@gmail.com> wrote:
> it's YOUR fork, why aren't you doing it?

For a simple reason: time.

The work to integrate it in isn't technically that difficult, but
requires time, which is always in short supply.

- Dan C.

> On Sat, May 11, 2024 at 11:47 AM David du Colombier <0in...@gmail.com> wrote:
> >
> > I'd be very pleased if someone could port the
> > dp9ik authentication protocol to 9legacy.
> >
> > --
> > David du Colombier

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tde2ca2adda383a3a-M909e62763fe790d21eb88c72
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Interoperating between 9legacy and 9front

2024-05-11 Thread Dan Cross
On Sat, May 11, 2024 at 3:36 PM hiro <23h...@gmail.com> wrote:
> > explanation of dp9ik, which while useful, only
> > addresses what (I believe) Richard was referring to in passing, simply
> > noting the small key size of DES and how the shared secret is
> > vulnerable to dictionary attacks.
>
> i don't remember what richard was mentioning, but the small key size
> wasn't the only issue, the second issue is that this can be done
> completely offline. why do you say "only", what do you think is
> missing that should have been documented in addition to that?

Probably how a random teenager could break it in an afternoon. :-)

> significant effort has been spent not only to come up with dp9ik and
> verify it but also to document it openly and suggest it's use
> repeatedly to the whole plan9 community (even non-9front-users).
>
> it's beyond me why more 9fans people are not taking this contribution
> at face value.

I wonder if you read the rest of my email.

> > I should note that a couple of years ago I talked to Eric Grosse about
> > dp9ik and p9sk1.
>
> Who is Eric Grosse?

https://n2vi.com/bio.html

> > I do
> > wish the name were different: c'mon guys, not _everything_ needs to be
> > snarky. ;-)
>
> I do wish there wasn't ever any reasons to ever be snarky to anybody
> in the whole plan9 community.
>
> But sometimes it's easier to make some jokes than to solve all
> perceived interpersonal issues of all involved people in the
> community.

Huh.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tde2ca2adda383a3a-M12607f08d1ba7baaf4dc46ec
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Interoperating between 9legacy and 9front

2024-05-11 Thread Dan Cross
On Sat, May 11, 2024 at 2:26 PM hiro <23h...@gmail.com> wrote:
> On Fri, May 10, 2024 at 12:59 PM Richard Miller <9f...@hamnavoe.com> wrote:
> > > From: o...@eigenstate.org
> > > ...
> > > keep in mind that it can literally be brute forced in an
> > > afternoon by a teenager[1][2]; even a gpu isn't needed to do
> > > this in a reasonable amount of time.[1]
> >
> > [citation needed][1]
> >
>
> there you are[1].
> [1] http://felloff.net/usr/cinap_lenrek/newticket.txt

I believe the citation that Richard was asking for was one
demonstrating that p9sk1 could be broken by a teenager in an afternoon
(which, to be fair to Ori, is likely just a bit of fun hyperbole meant
to provide some flourish to an otherwise dry subject). The citation
you provided is to an explanation of dp9ik, which while useful, only
addresses what (I believe) Richard was referring to in passing, simply
noting the small key size of DES and how the shared secret is
vulnerable to dictionary attacks.

I should note that a couple of years ago I talked to Eric Grosse about
dp9ik and p9sk1. I'm sure he won't mind if I share that his (early)
impression was that dp9ik is a strict improvement over p9sk1, and that
p9sk1 should be phased out in favor of dp9ik. As a small quip, I do
wish the name were different: c'mon guys, not _everything_ needs to be
snarky. ;-)

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tde2ca2adda383a3a-Mcb61bde6ee99250df4da09fb
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] VCS on Plan9

2024-04-20 Thread Dan Cross
On Sat, Apr 20, 2024 at 4:31 AM Giacomo Tesio  wrote:

> Hi 9fans
>
> Il 18 Aprile 2024 22:41:50 CEST, Dan Cross  ha scritto:
> >
> > Git and Jujutsu are, frankly, superior
>
> out of curiosity, to your knowedge, did anyone ever tried to port fossil
> scm
> to Plan9 or 9front (even through ape)?
>
> <https://fossil-scm.org/home/doc/trunk/www/index.wiki>


Not to my knowledge, no.

> Also (tangential) did anybody try to port Tiny-CC?


No idea, sorry.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tab2715b0e6f3e0a5-M6dcd6b8b7083a7a4380867b1
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] VCS on Plan9

2024-04-19 Thread Dan Cross
On Fri, Apr 19, 2024 at 12:10 AM  wrote:
> Quoth Dan Cross :
> > Correct me if I'm wrong, but this is just an `echo` into fossilcons,
> > isn't it? `fsys main snap -a` or something like it?
>
> it would be if being able to write to fossilcons
> didn't imply being able to do a lot more than
> creating a new snapshot.

Fair point. But it would be pretty simple to write a small proxy
service that wrapped that, and only exposed something that let an
authorized user trigger writing the `snap` command into `fossilcons`
on one's behalf. The point is, the raw tools to do what Steve proposed
mostly already exist; wiring them up shouldn't require any changes to
fossil itself.
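
A minimal sketch of that proxy (untested; it assumes fossil's console is
posted at /srv/fscons, as in the conventional configuration, and the name
`snaptrig` is mine): the client gets nothing but the ability to trigger it,
and the only thing it ever writes to the console is the archival-snapshot
command.

    #include <u.h>
    #include <libc.h>

    /*
     * snaptrig: write one fixed command to fossil's console.
     * Run it behind whatever authentication you like (e.g. only for
     * members of a 'dump' group); clients never see the raw console.
     */
    void
    main(void)
    {
        int fd;
        char *cmd = "fsys main snap -a\n";  /* archival snapshot to venti */

        fd = open("/srv/fscons", OWRITE);
        if(fd < 0)
            sysfatal("open /srv/fscons: %r");
        if(write(fd, cmd, strlen(cmd)) != strlen(cmd))
            sysfatal("write: %r");
        exits(nil);
    }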

This is all rather far afield from the issue of revision control,
though. Getting back to that, a true VCS and the sort of backup
facility offered by fossil+venti (and other similar things, such as
the work you've done on filesystems) are mostly orthogonal.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tab2715b0e6f3e0a5-M4869765d772e9c6a4d5a5d32
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] VCS on Plan9

2024-04-18 Thread Dan Cross
On Thu, Apr 18, 2024, 7:18 PM Bakul Shah via 9fans <9fans@9fans.net> wrote:

> On Apr 18, 2024, at 2:48 PM, Shawn Rutledge  wrote:
>
>
> Just another reason to eventually have Rust on Plan 9…
>
>
> Yeah. Compiles are too damn fast; no time to make masala chai :-)
>

Arrey yaar, miri vo ki bahoat problem hain.

- Dan C.



--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tab2715b0e6f3e0a5-M307432eefe257c8a6efc881c
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] VCS on Plan9

2024-04-18 Thread Dan Cross
On Thu, Apr 18, 2024 at 5:01 PM Steve simon  wrote:
> re: VCS -vs-dump
>
> I always planned to add code to fossil to allow members of (say) the 'dump' 
> group to trigger a fossil to venti dump at arbitrary times.
> If with this it would be trivial to have a 'release' rc script which could 
> save a log message and trigger a dump.

Correct me if I'm wrong, but this is just an `echo` into fossilcons,
isn't it? `fsys main snap -a` or something like it?

> I know this is not really a VCS but the ability to get back to an atomic set 
> of files representing a release would help a lot IMHO.

I could see the utility in that, but at this point, a repo in some
modern VCS that contained the source combined with a snapshot of the
release contents (containing binaries and so on) would be better,
IMHO.

Ironically, this came full circle for me a decade or so ago at Google.
I was explaining Venti to someone and they said, "why did they write
that? Why not just write to git?" I had to explain that Venti predated
`git` by several years. Kids these days!

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tab2715b0e6f3e0a5-M13b4db40706f74184c419de7
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] VCS on Plan9

2024-04-18 Thread Dan Cross
On Thu, Apr 18, 2024 at 4:27 PM Bakul Shah via 9fans <9fans@9fans.net> wrote:
> Did anyone try to port sccs to plan9?

Interesting question; I suspect not.  The only reason to have done so
would have been to inspect source repositories created outside of plan
9, in which case it likely would have been more natural to do so via
Unix (at least for the repositories I can think of that would have
been adjacent).

Culturally, there was a feeling that source revision control a la RCS, SCCS,
etc., was unnecessary because the dump filesystem gave you snapshots
already. Moreover, those were automatic and covered more than one file
at a time; RCS/SCCS required some discipline in that one had to
remember to check in a new revision. And as Paul said, the idea of an
atomic, multi-file changeset was revolutionary at the time.

The downside of the filesystem approach for maintaining history is
two-fold: 1) granularity. Typically the dump is only generated once a
day, but often one would rather commit more frequently (or perhaps
less so...) than that. 2) context. As it turns out, the ability to
associate a changeset with a well-written commit message is very
valuable. I have lost count of the number of times I've asked, "what
was going on when _this_ code was written?" Having that directly
available from the source repository is incredibly powerful.

Thankfully, I doubt anyone is using the old patch mechanism anymore.
Git and Jujutsu are, frankly, superior.

- Dan C.

> On Apr 18, 2024, at 9:11 AM, Paul Lalonde  wrote:
>
> The Bell Labs approach to source control was, I'm, weak.  It relied on 
> snapshots of the tree and out-of-band communication.  Don't forget how small 
> and tight-knit that development team was, and how valuable perfect historic 
> snapshots were.
>
> Add that 40 years ago source code revision control systems were incredibly 
> primitive.  The idea of an atomic change set (in Unix land at least) was 
> revolutionary in the early 90s.
>
> This is one place where 35 years of evolution in software practices has very 
> much improved.
>
> Paul
>
> On Thu, Apr 18, 2024, 8:55 a.m. certanan via 9fans <9fans@9fans.net> wrote:
>> 
>> Hi,
>> 
>> is there any more "organic/natural" way to do source control on today's 
>> Plan9 (9front specifically), other than Ori's Git?
>> 
>> In other words, how (if at all) did people at Bell Labs and the community 
>> alike originally manage their contributions in a way that would allow them 
>> to create patches without much hassle?
>> 
>> Was it as simple as backing a source tree up, making some changes, and then 
>> comparing the two? Venti? Replica?
>> 
>> tom
>
>

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tab2715b0e6f3e0a5-Mec07348f737f68bed8fea253
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: Charting the Future: Envisioning Plan 9 Release 5 for the 9fans Community. [Was:Re: [9fans] Supported Notebooks]

2024-01-25 Thread Dan Cross
On Thu, Jan 25, 2024 at 10:04 PM  wrote:
> Since the Plan 9 Foundation doesn't focus on technical aspects, Would the 
> formation of a Plan 9 Core Team be the next logical step? My understanding is 
> the core team would decide technical direction and implementation. What are 
> your thoughts?

If there's a plan 9 core team, that'd be news to me. I don't believe
that such a thing exists.

- Dan C.

> On Fri, Jan 26, 2024, at 11:31, Don A. Bailey wrote:
> > I don’t think you realize that you have your entire emotional
> > perception of this situation flipped.
> >
> > This was a simple comment on why I strongly disagreed with VT’s request
> > for a 5th Release. I explained myself. I did not get emotional, nor am
> > I emotional now. What I did receive is a lot of strange emotional
> > responses for which I have neither time nor interest. And frankly,
> > neither should anyone here.
> >
> > Who cares if I like 9front? I’m not against it, nor the developers. I’m
> > simply against *joining* 9front with 9legacy/etc as a formal release. I
> > personally believe that’s a bad move.
> >
> > Don’t agree? Ok, so what? I’m one dude. And yet the gaggle of you
> > people have tried to drag me down some psychoanalytical rabbit hole,
> > and waste my entire day. And because I won’t let you drag me into it,
> > and because I respond with short unemotional statements, you somehow
> > think *I’m* the bad guy because I won’t devolve into your world.
> >
> > Geez guys seriously… go touch grass and have a life. Know what I did
> > today instead of engaging with your bullshit? I did my job. I played
> > with my son. I cooked us an amazing dinner. We built a fort. We looked
> > at deer outside. We listened to music.
> >
> > All that because I didn’t waste my time with long bullshit responses
> > that wouldn’t satisfy you, anyway, because I disagree with 9front being
> > merged. Who cares?
> >
> > Live your life, man.
> >
> >
> >
> >> On Jan 25, 2024, at 9:18 PM, Michael Misch  
> >> wrote:
> >>
> >> How you react to being told that you are behaving poorly, and it’s 
> >> neither appreciated or respected, speaks volumes. It’s telling, as you 
> >> say, that your take is to get defensive and, honestly, shitty. Emotional 
> >> maturity may be lacking in general on the list but please do not posture 
> >> from some imagined moral high ground. It’s so tiring, just do better.
> >>
> >>> On Jan 25, 2024, at 15:38, Don A. Bailey  wrote:
> >>>
> >>> It’s telling that you see a difference of opinion as a temper tantrum. A 
> >>> major problem with people’s perspective of 9front and the current plan 9 
> >>> community, honestly.
> >>>
> >>>
> >>>
> > On Jan 25, 2024, at 6:35 PM, Jacob Moody  wrote:
> 
>  On 1/25/24 16:03, Don A. Bailey wrote:
> > I’m aware you’re a member of the foundation.
> >
> > What I want I think I’ve made clear. I do not want to see a formal 
> > release of Plan 9 that includes anything from the 9front project. I do 
> > not want 9front merged with what I tongue-in-cheek term “mainline” 
> > (9legacy / 9pio updated patch sets). I’d rather 9front stay its own 
> > thing. I’m certain there are a lot of relevant contributions within 
> > 9front but I think its place is as its own niche system.
> >
> 
>  Who is going to do the work? Do you want to do the work? Do you think 
>  this temper tantrum you've been throwing on
>  this list all day is somehow going to convince anyone else to work 
>  with/for you?
>  It's rich that you feel like you can dictate rules (no 9front code) but 
>  have no interest in making any effort
>  yourself to make that a reality.
> 
>  I await your "better" plan 9.
> 
>  - moody
> 

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T42f11e0265bcfa18-M17af06951e14de8a0010f108
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Plan9 multi-core support

2023-08-27 Thread Dan Cross
On Sat, Aug 26, 2023 at 9:28 PM Don Bailey  wrote:

> Rob - would you be willing to tell us what the novel work is (and more
> about it) that still has relevance today? I'm sure I'm not the only one on
> the list that would love to learn more about that history.
>

I wouldn’t try to speak for Rob, but I imagine that this would include at
least the rendezvous work (cf the “Multiprocessor Sleep and Wakeup” paper)
and possibly channel-based CSP-style concurrency inside the kernel.
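
The kernel's sleep/wakeup machinery isn't directly visible from user space,
but its closest user-level relative is the rendezvous(2) system call. A
minimal, untested sketch of two procs meeting at a shared tag (my example,
not something from the paper):

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        void *tag = (void*)0x9fa45;     /* any value both sides agree on */

        switch(rfork(RFPROC)){
        case -1:
            sysfatal("rfork: %r");
        case 0:         /* child blocks here until the parent arrives */
            print("child got %p\n", rendezvous(tag, (void*)1));
            exits(nil);
        default:        /* parent: the two exchange values and both continue */
            print("parent got %p\n", rendezvous(tag, (void*)2));
            waitpid();
        }
        exits(nil);
    }

Each side passes a value in and gets the other side's value back.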

Hopefully Rob will share this with his thoughts here; I’m interested as
well.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T912e4838cb1a371f-M21d6677b78aef7094de3024d
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans]

2023-05-10 Thread Dan Cross
On Wed, May 10, 2023 at 6:34 PM Romano  wrote:
> Subject: RUDP and/or others
>
> I know this is from a thread almost 8 years old on 9fans.
>
> I'm ignorant of why RUDP wasn't used in lieu of TCP for 9P
> connections.  Anyone know the whys and wherefores (either technical,
> historical, or political)?  I had read earlier that TCP was used in
> lieu of IL (another transport protocol developed for Plan 9) due to
> performance over long-distance connections.  Did RUDP just not cut it
> in some other way for the needs of sending/receiving 9P messages?
> From the description in ip(3), it seems to have the nice behavior of
> resuming communication when a machine reboots.  Is it due to the
> middle boxes/firewalls that are present in present-day networks?

Probably because it didn't support delivery ordering guarantees. Talk
about a blast from the past, though.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T6ae3228112b5c3b4-Md522f5fdecdde562fa81cfd4
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] [PATCH] fossil: fix a deadlock in the caching logic

2023-04-08 Thread Dan Cross
On Sat, Apr 8, 2023 at 10:37 AM Charles Forsyth
 wrote:
> It was the different characteristics of hard drives, even decent SATA, 
> compared to SSD and nvme that I had in mind.

Since details have been requested about this: I wouldn't presume to
speak for Charles, but some of those differences _may_ include:

1. Optimizing for the rotational latency of spinning media, and its effects vis-à-vis:
  a. the layout of storage structures on the disk,
  b. placement of _data_ on the device.
2. Effects with respect to things that aren't considerations for rotating disks
  a. Wear-leveling may be the canonical example here
3. Effects at the controller level.
  a. Caching, and the effect that has on how operations are ordered to
     ensure consistency
  b. Queuing for related objects written asynchronously and
     assumptions about latency

In short, when you change storage technologies, assumptions that were
made with, say, a filesystem was initially written may be invalidated.
Consider the BSD FFS for example: UFS was written in an era of VAXen
and slow, 3600 RPM spinning disks like RA81s attached to relatively
unintelligent controllers; it made a number of fundamental design
decisions based on that, trying to optimize placement of data and
metadata near each other (to minimize head travel--this is the whole
cylinder group thing), implementation that explicitly accounted for
platter rotation with respect to scheduling operations for the
underlying storage device, putting multiple copies of the superblock
in multiple locations in the disk to maximize the chances of recovery
in the event of the (all-too-common) head crashes of the era, etc.
They also did very careful ordering of operations for soft-updates in
UFS2 to ensure filesystem consistency when updating metadata in the
face of a system crash (or power failure, or whatever). It turns out
that many of those optimizations become pessimizations (or at least
irrelevant) when you're all of a sudden writing to a solid-state
device, nevermind battery-backed DRAM on a much more advanced
controller.
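
To put rough numbers on that (mine, not Charles's): at 3600 RPM a platter
turns once every 60/3600 s ≈ 16.7 ms, so average rotational latency alone is
about 8.3 ms and a missed rotation costs the full 16.7 ms -- which is why
layout tricks that saved a seek or a rotation were worth real engineering
effort. A current NVMe device completes a random read in on the order of
100 microseconds or less, a couple of orders of magnitude faster, so the
same tricks buy essentially nothing and can actively fight the device's own
scheduling and wear-leveling.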

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T354fe702e1e9d5e9-M28a486accd8e735904418630
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] plan 9 and lisp

2023-01-19 Thread Dan Cross
On Thu, Jan 19, 2023 at 10:48 AM Bakul Shah  wrote:
>[snip]
> Nils M Holm, the author of s9fes, did the original
> port with some help from me. He didn't want to
> maintain plan9 related changes which is why I am
> maintaining it. Nils also has a book on it but
> AFAIK it doesn't cover anything specific to plan9.

I thought that Ozan Yigit had done a small scheme that ran on plan9 at
one point, but I can't find a pointer to it on his page at York at the
moment. Maybe I'm misremembering, but I definitely remember running a
scheme repl under rio, which was actually quite pleasant. Someone
(Russ?) had also ported mosml, which is also interesting to play
around with.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T7b0afbefb53189b6-M8a312f570b9303bd42f95aa2
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Re: _threadmalloc() size>100000000; shouldn't be totalmalloc?

2022-06-28 Thread Dan Cross
On Tue, Jun 28, 2022 at 11:38 AM adr  wrote:
> On Tue, 28 Jun 2022, Dan Cross wrote:
> > You mean by `newthread` and `chancreate`? Those are part of the
> > thread library. Notice that there are no callers outside of 
> > /sys/src/libthread.
>
> What I mean is that "size" in _threadmalloc() will be set by those
> functions with values directly given by the programmer, with this
> limit not documented.

Like I said earlier, plan9 had the luxury of being a research
system. It has brilliant ideas, but mostly trapped inside of
research-quality code. If you look just below the surface,
there are arbitrary limits and edge cases all over the system.
It's honestly surprising that it works as well as it does.

That this particular limit is not documented isn't terribly
surprising. Most of these limits are undocumented. I doubt
anyone ever thought to create a 100MB stack or channel
when that code was written.

> I wouldn't call a function wich is part of an api internal. An
> internal function, for me, is a function inaccesible for the
> programmer, like _threadmalloc itself.
>
> By the way, you mean threadcreate, don't you?

No, I meant the direct calls to `_threadmalloc`.  But sure,
we can say `threadcreate` since that just expands to a call
to `newthread`, and `newthread` is static.

We may as well throw `proccreate` into the mix too as it
also indirectly calls `_threadmalloc` via `_newproc`.  For
that matter, libthread's `main` also calls `_threadmalloc`.

I'm not sure if that changes the point, though.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Te1be8dc72738258d-M71941c1e8258c2d786a38352
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Re: _threadmalloc() size>100000000; shouldn't be totalmalloc?

2022-06-28 Thread Dan Cross
On Tue, Jun 28, 2022 at 10:22 AM adr  wrote:
> On Tue, 28 Jun 2022, Dan Cross wrote:
> > [snip]
> > Given the name of the function (`_threadmalloc`), I'd guess that this isn't
> > intended for general use, but rather, for the internal consumption of the
> > thread library, where indeed such a large allocation would likely be an
> > error (bear in mind this code was developed on 32-bit machines with RAMs
> > measured in Megabytes, not Gigabytes).
>
> No, it's used also when creating a channel and setting a thread's
> stack size.

You mean by `newthread` and `chancreate`? Those are part of the
thread library. Notice that there are no callers outside of /sys/src/libthread.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Te1be8dc72738258d-M5755b12d8fc8f5fc46f51c97
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Re: _threadmalloc() size>100000000; shouldn't be totalmalloc?

2022-06-28 Thread Dan Cross
On Tue, Jun 28, 2022 at 9:01 AM adr  wrote:
> On Sun, 26 Jun 2022, adr wrote:
> > [snip]
> > /sys/src/libthread/lib.c
> >
> > [...]
> > void*
> > _threadmalloc(long size, int z)
> > {
> >   void *m;
> >
> >   m = malloc(size);
> >   if (m == nil)
> >   sysfatal("Malloc of size %ld failed: %r", size);
> >   setmalloctag(m, getcallerpc(&size));
> >   totalmalloc += size;
> >   if (size > 100000000) {
> >   fprint(2, "Malloc of size %ld, total %ld\n", size, totalmalloc);
> >   abort();
> >   }
> >   if (z)
> >   memset(m, 0, size);
> >   return m;
> > }
> > [...]
> >
> > Shouldn't the if statement test the size of totalmalloc before the
> > abort? That size, 100M for internal allocations? It has to be
> > totalmalloc, doesn't it? If not this if statement should be after
> > testing the success of the malloc. Am I missing something?

Note that the `if` statement doesn't test the size of *totalmalloc*, but
just `size`.  `totalmalloc` is only incidentally printed in the output if `size`
exceeds 100 MB: that is, if a single allocation is larger than 100MB.

> I mean, I think using libthread more like using small channels and thread's 
> stack sizes, but:
>
> Why put a limit here to size when totalmalloc could continue to grow until 
> malloc fails?
>
> If we want a limit, why don't we take into account the system resources?
>
> If we want a constant limit, why don't we put it on threadimpl.h, explain why 
> this value in a comment
> and document it in thread(2)?
>
> Why bother to call malloc before testing if size exeeds that limit???

Given the name of the function (`_threadmalloc`), I'd guess that this isn't
intended for general use, but rather, for the internal consumption of the
thread library, where indeed such a large allocation would likely be an
error (bear in mind this code was developed on 32-bit machines with RAMs
measured in Megabytes, not Gigabytes).

As for why the abort is after the allocation? The sad answer, as with so many
things in plan 9, is that there probably isn't a particularly good
reason. A possible
answer may be that this leaves the process in a state where the allocations can
be inspected in a debugger after the abort, complete with a proper
allocation tag.
An equally plausible answer is that it's just the way the original author typed
it in.

Please understand that Plan 9 spent most of its life as a research system,
and the code and its quality reflects that.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Te1be8dc72738258d-M6c48d66d6cc15f19fd28066c
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Aarch64 on labs|9legacy?

2022-05-22 Thread Dan Cross
On Sun, May 22, 2022, 4:16 PM adr  wrote:

> Has someone done something with aarch64 on labs|9legacy?
> It's called arm64
>
> Great.
>
> Because with this 4 magical words I was supposed to find... what?
> Where? What are you talking about? Where is this work on Bell Labs'
> plan9 that I could find using the string "arm64" (which of course
> I knew already from 9front, the only distribution with aarch64
> support)? Even in Inferno there is no aarch64 support. Where are
> this people publishing aarch64 work in the labs distribution that
> I could fing using "arm64"? What are you talking about?
>
> But now after all the useful interesting contribution to the list,
> as usual, you have time to even express sarcasm.
>
> You haven't help me to find anything, you don't have to do it, of
> course, but then don't talk like you have done it.
>

I've known and observed Charles for a very long time, indeed. I can imagine
how you interpreted what he wrote as you have, but perhaps consider that he
didn't mean what he wrote in the way you have taken it.

> A simple question, a fucking simple question and here we go with
> the trolling and the bullshit, I'm done with this fucking list.
>

Probably for the best.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T000c7f7d66260ba3-M29e6e0dfdc06aad0e373cbbd
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Aarch64 on labs|9legacy?

2022-05-22 Thread Dan Cross
On Sun, May 22, 2022, 10:37 AM hiro <23h...@gmail.com> wrote:

> why not use millers Rpi kernel instead? Isn't it also including all the
> important changes?


Correct me if I'm wrong, but Richard's kernel is 32-bit (Aarch32) only,
though with the PAE stuff enabled to access more than 4GiB of _physical_
memory. The virtual address space is still limited to 4GiB.

- Dan C.

On Sunday, May 22, 2022, adr  wrote:
>> On Sat, 21 May 2022, Dan Cross wrote:
>>
>>> To answer your original question, no: there is no aarch64 support in
>>> either 9legacy or the
>>> Bell Labs distribution.
>>>
>>> - Dan C.
>>>
>> 
>> I just ported 7c, 7l and 7a from 9front. I'm adjusting libmach,
>> mkfiles, etc. Porting 9front's aarch64 raspberry pi kernel can be
>> an oportunity to learn about the kernel design.
>> 
>> adr.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T000c7f7d66260ba3-Mb98e0d4d7d4cef3c166f2224
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Aarch64 on labs|9legacy?

2022-05-21 Thread Dan Cross
To answer your original question, no: there is no aarch64 support in either
9legacy or the Bell Labs distribution.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T000c7f7d66260ba3-M8dc571703e5960d0f3c631f5
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] void*

2022-05-15 Thread Dan Cross
On Sun, May 15, 2022 at 9:16 AM adr  wrote:

> On Sun, 15 May 2022, adr wrote:
> > What I mean is if we are going to follow C99 in the use of void*,
> > we should allow arithmetic on them.
>
> Let me be clear, I know that C99 requires the pointer to be a
> complete object type to do arithmetic, and I like that, is consistent.
> But then I don't see the point to use void* as a generic pointer.


I confess that I am confused about what, precisely, you are asking for.

You are correct that standard C only allows arithmetic on pointers to
complete object types. But `void *` is not a pointer to a complete object
type, and therefore pointer arithmetic on pointers of type `void *` is
illegal. So in that sense, Plan 9 C is already following C99.
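
A short illustration (my own example, not from the standard or from Plan 9
source):

    #include <stdio.h>

    int
    main(void)
    {
        char buf[8];
        void *p = buf;

        /*
         * p = p + 1;  -- a constraint violation in ISO C: no arithmetic
         * on void* (gcc and clang accept it as an extension, treating
         * sizeof(void) as 1).
         */
        char *q = (char *)p + 1;    /* convert to a complete object type first */
        printf("%td\n", q - buf);   /* prints 1 */
        return 0;
    }

Casting to `char *` (or `uchar *`) is the portable way to do byte arithmetic
on a generic pointer.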

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tecaea3b9ec8e7066-M697dfcf01429a681db4155b2
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] licence question

2022-02-01 Thread Dan Cross
On Tue, Feb 1, 2022 at 10:08 AM ibrahim via 9fans <9fans@9fans.net> wrote:

> On Tuesday, 1 February 2022, at 3:12 PM, Dan Cross wrote:
>
> This isn't really on-topic for 9fans, but I find this hard to believe.
> Linux used the exact same compiler suite, and became wildly successful
> while the BSD distributions mostly stagnated; certainly, the BSDs never
> grew at the rate or reached the levels of popularity that Linux has
> attained: it wasn't the license on the toolchain.
>
> Berkeley stopped their distribution of BSD systems right after they were
> forced to remove the toolchain. The last releases were 4.3 and 4.4 lite.
>

Hmm, no, that's not quite right. 4.3BSD was in 1986; 4.4-Lite2 was in 1995:
nineish years later, with lots of intervening activity (Tahoe, Reno, Net/1,
Net/2, 4.4, 4.4-Lite, etc). Perhaps you meant to write 4.4 and 4.4-Lite?

Anyway, the toolchain switch had little to do with CSRG shutting down. The
reality there was that Unix stopped being interesting for an academic
department to continue supporting like EECS at UCB was with CSRG and BSD:
this was all spelled out in the 4.4BSD announcement. As Bostic put it, BSD
was always a community effort, but after 4.4, it just wasn't going to grow
within UCB:
https://groups.google.com/g/comp.unix.bsd/c/hZYO7xTDqQ8/m/NE-S-HWH9-wJ

They also understood that GCC was a better compiler than PCC was at the
time: with PCC on the VAX, they had post-processing steps implementing
peephole optimizations with shell scripts, and GCC just generated better
code. It also implemented most of the ANSI standard, which PCC did not. In
other words, there were technical reasons for switching to GCC beyond just
the license. Indeed, the setup document says, "Most 4.3BSD binaries may be
used with 4.4BSD in the course of the conversion. It is desirable to
recompile local sources after the conversion, as the new compiler (GCC)
provides superior code optimization." (
https://docs.freebsd.org/44doc/smm/01.setup/paper.pdf)

Then the project got forked. It led to a stop in development. I personally
> believe that this was the main reason behind the BSD's to lose their charm.
>

By "forked" do you mean the Jolitz's porting it to the 386? That was
independent of what happened with CSRG shutting down, and neither of those
had much to do with the GPL or adopting GCC as the system compiler.

This also ignores a lot of other developments: the VAX stopped being _the_
dominant platform of the Internet, there was a lot more choice for
reasonable Unix distributions from e.g. workstation vendors, many
organizations stopped looking at Unix as primarily a research system which
led to the rise of standards bodies like IEEE with POSIX, which
subsequently lessened the need for something like BSD as a quasi-standard
for the likes of DARPA. Meanwhile, the PC platform became huge, and
people who just wanted Unix on cheap machines suddenly had Linux as a free
working alternative that was "good enough". BSDi cost a lot of money for an
individual (around $1000 USD, if I recall correctly), was embroiled in a
lawsuit with AT&T (the outcome of which was not at all certain) and with
Linux you had a reasonable expectation that someone would help you boot it
on your toaster -- or at least accept the patches if you did it yourself --
and you didn't have to put up with some of the "big" personalities in the
traditional Unix world.

If you read about the reasoning why as an example Minix or even plan9 got
> their own toolchains I think you can read between the lines that the lack
> or the existence of a toolchain with the right license is far more
> important than many believe.
>

Plan 9 got its own toolchain because that was part of the project: how
would one evolve a language like C and a compiler/assembler/linker suite to
make it more pleasant for building the sort of holistic system that plan9
became, and how to accommodate cross-compilation and development in a
heterogenous hardware environment: indeed, many of the early plan9 design
choices were motivated by similar concerns. The license had little, if
anything, to do with it, and being an "open source" system was not a
primary goal of plan9 when the compilers were written. I can't really
imagine that GCC was given any serious consideration as a compiler for
plan9 simply on its merits, if indeed it was given any consideration at
all: at the time, it fulfilled a completely different purpose.

You may have a point with respect to the license mattering for the original
Minix, but bear in mind that the ACK predates GCC, and Tanenbaum was
building a pedagogical system, not a production system: he actually needed
to be able to distribute the toolchain with the Minix educational
materials. He probably could have used GCC just as well, but it's clear he
started Minix a few years ahead of 

Re: [9fans] licence question

2022-02-01 Thread Dan Cross
On Tue, Feb 1, 2022 at 8:10 AM David Leimbach via 9fans <9fans@9fans.net>
wrote:

> > On Jan 29, 2022, at 8:03 AM, ibrahim via 9fans <9fans@9fans.net> wrote:
> >
> > And I believe that the reason why NetBSD, OpenBSD, FreeBSD are not as
> wide spread as Linux was the lack of a compiler suite conforming to the BSD
> license
>
> For some people it’s because they didn’t have a math coprocessor and Linux
> didn’t need one. For others it was the AT&T lawsuit.
>
> I haven’t ever heard the compiler tool chain was a big reason, but I’d be
> interested to hear your perspective here. GCC can produce code of any
> license.


This isn't really on-topic for 9fans, but I find this hard to believe.
Linux used the exact same compiler suite, and became wildly successful
while the BSD distributions mostly stagnated; certainly, the BSDs never
grew at the rate or reached the levels of popularity that Linux has
attained: it wasn't the license on the toolchain.

I believe that David is right that it was a combination of running on
really low-end hardware (in the early days, Torvalds accepted patches for
just about anything), and a similarly low barrier to entry (others
elsewhere have quipped about having to appease, "the Gods of BSD" to get
anything into those systems) and the AT&T lawsuit, which was at best
misguided but scared people off of BSD.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T3e07bfdf263a83c8-M16b72f2b6c8835a3ea4ea4a7
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] v9fs vs mmap (not quite SOLVED)

2021-10-27 Thread Dan Cross
On Wed, Oct 27, 2021 at 10:11 AM Richard Miller <9f...@hamnavoe.com> wrote:

> > But is it not possible that the FPGA tools don't
> > have the same issues with mmap that e.g. Go does?
>
> 1. Some of the fpga tools are closed-source so I can't check with
> confidence that they will never try to use mmap.
>

Or even more confusing, it may use mmap() in a way that has nothing to do
with what filesystem it or the data lives on.

> 2. The go compiler is open-source so it was a simple matter to make
> an experimental variant on linux which uses read/write instead of
> mmap (as it does on plan 9). I still get the bus error code=0x2
> when running this over v9fs. Hence the remaining v9fs problem is
> not mmap related.


Well, good luck getting it all going!

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tb065f4df67a8bab9-M1947a44962dcadac34b03bac
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] v9fs vs mmap (not quite SOLVED)

2021-10-27 Thread Dan Cross
On Wed, Oct 27, 2021 at 9:16 AM Richard Miller <9f...@hamnavoe.com> wrote:

> > What, precisely, is your use case?
>
> As I said, the go cross-compile was just an example task to
> test the viability of v9fs. I don't *need* to cross-compile
> on linux: the 9pi image, for example, comes with native
> go binaries which I can use for bootstrapping.
>

Right.

> The real use case is to have some linux-only tools -- fpga
> circuit compilation toolchains for example -- keeping their
> data on the plan 9 server, with the benefit of fossil snapshots
> and much more space than is available on a little thinkpad SSD.
>

Oh I see. Very well, then. But is it not possible that the FPGA tools don't
have the same issues with mmap that e.g. Go does? I'm afraid the rest of us
have gotten wrapped around that axle but it's a bit of a red-herring.

- Dan C.



--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tb065f4df67a8bab9-M961ffbff61c57d294c30ed08
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] v9fs vs mmap (not quite SOLVED)

2021-10-27 Thread Dan Cross
On Wed, Oct 27, 2021 at 6:56 AM Richard Miller <9f...@hamnavoe.com> wrote:

> > Skip, did you specify -o cache=mmap when mounting diod service
> > for the go build experiment?
>
> I tried it myself using local diod and cache=mmap. I get a similar
> SIGBUS on instruction fetch again. Conclusion: as Bakul says, now
> I'm debugging linux. Not going there, thanks.
>
> Going further off-topic for 9fans, sorry:
>
> I thought it would be clever to update the linux client to a newer
> kernel (4.19 was the latest I could find for debian 9). That
> didn't go well: booting the new kernel fails with the message
>   Failed to find cpu0 device node
>
> Does anyone know if it's feasible to do an out-of-tree build
> of v9fs kernel modules (9p, 9pnet?) from current source [where
> is it?] and use them with my old 4.9 kernel?
>

You can certainly try to do a _build_, and you may even get a shared object
of some kind. Perhaps the question could be rephrased as, "will such a
built artifact work in an older kernel?" and for that, I'm afraid all bets
are off.

What, precisely, is your use case? I understood from your earlier note that
you'd rather not keep data on Linux if you don't have to. But if you're
only building, and the "data" is just a cloned git repository and object
files and binaries, I'd reiterate my suggestion of plan 9 mounting a
user-level 9P server from Linux instead of Linux trying to mount a 9P
server from plan9: the data on Linux might be thought of as a cache that's
easily reconstructable if necessary. Perhaps another question is, why not
build directly on plan9? Bootstrapping a toolchain, perhaps?

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tb065f4df67a8bab9-Maf2662944c3fcc3c671ade36
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] v9fs vs mmap (not quite SOLVED)

2021-10-26 Thread Dan Cross
On Tue, Oct 26, 2021 at 6:52 AM Richard Miller <9f...@hamnavoe.com> wrote:

> > Can anyone suggest other mount options I should tweak?
>
> I have tried cache=fscache and cache=loose. In both cases I see startling
> cases of incoherency: ie reading file X returns contents of file Y (neither
> of which has been modified for months).
>
> Maybe my linux kernel is too old? (4.9.0-5-amd64)


Quite possibly, though it occurs to me: if you just want to make forward
progress in the short term, perhaps consider using the local Linux
filesystem and exporting that to plan9 using a user-space 9p server?

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tb065f4df67a8bab9-M46fe1a757214cbd53fb7b9ea
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] How to setup wifi on raspberry pi 4

2021-08-11 Thread Dan Cross
On Wed, Aug 11, 2021 at 4:38 PM Stuart Morrow 
wrote:

> On 11/08/2021, Richard Miller <9f...@hamnavoe.com> wrote:
> > /sys/doc/net/net.pdf
>
> Heads up: spends alot of time on STREAMS, which are not a part of Plan 9.
>
> The FQA also links to that paper with no such forewarning.
>

The mentions of IL are perhaps a little more galling, but this criticism is
spot on: the networking paper as written describes the 2nd edition system.
3rd edition removed streams; 4th ed removed IL.

Still, the bones of the system with respect to how the different pieces fit
together are very similar to what the paper describes.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T3464a8c7bad3062a-M229c2a7cffd6a848c7322a22
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Four numbers in /lib/sky/here

2021-07-27 Thread Dan Cross
On Tue, Jul 27, 2021 at 1:53 PM Anthony Sorace  wrote:

> There are a few other things which also use that file (e.g. latcmp, the 2e
> road(1), gmap (who did that?) my darksky program). I’m wondering if I’m
> missing something that uses that fourth number. I certainly can’t rule out
> user error.


Could it be an azimuth that's consumed by something?

- Dan C.

> On Jul 27, 2021, at 10:20, David du Colombier <0in...@gmail.com> wrote:
> >
> > The fourth number looks like a mistake.
> > astro(1) only parses the first three numbers.
> >
> > --
> > David du Colombier

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tc2ea4ba95db1a01f-Mb45d69f331059f87022cfc73
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: Posix implementation of Plan 9 cpu(1) (Was: [9fans] Command to set samterm label)

2021-07-21 Thread Dan Cross
On Wed, Jul 21, 2021 at 1:32 PM Xiao-Yong Jin  wrote:

> > On Jul 21, 2021, at 12:16 PM, Dan Cross  wrote:
> >
> > Nothing prevents you from invoking u9fs over an SSH connection; one
> needn't run it from inetd, and I doubt anyone has in 20 years.
>
> You are right.  In that case, the only difference is just that,
> citing hiro,
>
> yes it's a lot of back and forth, but ssh only is needed for
> running
> the process, the data afterwards can use 9p directly.


It's unclear what that's supposed to mean. There's obviously still a
transport involved; in one case, that's over (I presume) TLS over a TCP
connection owned by drawterm, in the other, it's a bitstream running over
the SSH protocol over TCP. In the former case, if the drawterm process on
the Linux side dies for whatever reason, your imported resources disappear.
In the latter, if the sshd or u9fs die, same.

Overall, this seems like abusing drawterm to do what u9fs (or other,
similar userspace 9P servers) was (were) intended to do.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T09fcdec9c87bfde4-Mbefc0794ea49365431f10fc4
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: Posix implementation of Plan 9 cpu(1) (Was: [9fans] Command to set samterm label)

2021-07-21 Thread Dan Cross
On Wed, Jul 21, 2021 at 1:09 PM Xiao-Yong Jin  wrote:

> > On Jul 21, 2021, at 11:42 AM, Dan Cross  wrote:
> >
> > On Wed, Jul 21, 2021 at 12:17 PM Xiao-Yong Jin 
> wrote:
> > > On Jul 21, 2021, at 11:08 AM, Dan Cross  wrote:
> > > > ssh linuxpc drawterm -c srvdev.rc
> > > >
> > > > yes it's a lot of back and forth, but ssh only is needed for running
> > > > the process, the data afterwards can use 9p directly.
> > >
> > > What's the difference between that and using something like u9fs?
> >
> > auth?
> >
> > This is using ssh to attach to the Linux machine to import its
> filesystem into the plan9 namespace? Wouldn't authenticating from plan9 to
> Linux over SSH be independent of drawterm vs u9fs?
>
> Drawterm does the proper auth and connect to the plan9 system.
> U9fs needs plan9 srv to auth and connect to it.
>
> If you are running a plan9 system, you probably have your auth
> setup.  If you have access to a posix system, you probably have ssh
> setup and you have access as a normal user, and that enables you
> to do ssh and drawterm back.  No additional setup required.
>
> U9fs, on the contrary, states
>
>   It is typically invoked on a Unix machine by
>   inetd with its standard input and output connected to a net-
>   work connection, typically TCP on an Ethernet.  It typically
>   runs as user root and multiplexes access to multiple Plan 9
>   clients over the single wire.  It assumes Plan 9 uids match
>   Unix login names, and changes to the corresponding Unix
>   effective uid when processing requests.
>
> I'm not going to run this and listen on a public interface even if
> it does not run as root.
>
> There are issues with the auth method that u9fs uses, which I'm not
> going to discuss here.


Nothing prevents you from invoking u9fs over an SSH connection; one needn't
run it from inetd, and I doubt anyone has in 20 years.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T09fcdec9c87bfde4-M19bbcae898047761677de240
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: Posix implementation of Plan 9 cpu(1) (Was: [9fans] Command to set samterm label)

2021-07-21 Thread Dan Cross
On Wed, Jul 21, 2021 at 12:17 PM Xiao-Yong Jin  wrote:

> > On Jul 21, 2021, at 11:08 AM, Dan Cross  wrote:
> > > ssh linuxpc drawterm -c srvdev.rc
> > >
> > > yes it's a lot of back and forth, but ssh only is needed for running
> > > the process, the data afterwards can use 9p directly.
> >
> > What's the difference between that and using something like u9fs?
>
> auth?
>

This is using ssh to attach to the Linux machine to import its filesystem
into the plan9 namespace? Wouldn't authenticating from plan9 to Linux over
SSH be independent of drawterm vs u9fs?

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T09fcdec9c87bfde4-M2452ff590f92aee3d7e33dfb
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: Posix implementation of Plan 9 cpu(1) (Was: [9fans] Command to set samterm label)

2021-07-21 Thread Dan Cross
On Wed, Jul 21, 2021 at 11:12 AM hiro <23h...@gmail.com> wrote:

> if i want to serve files from a linux, i sometimes run drawterm on the
> linux, export stuff to an /srv and then access that from the other
> side.
>
> theoretically you can automate that also from the other side, make
> some script for the /srv stuff, and run it from 9front via ssh via
> drawterm:
>
> ssh linuxpc drawterm -c srvdev.rc
>
> yes it's a lot of back and forth, but ssh only is needed for running
> the process, the data afterwards can use 9p directly.
>

What's the difference between that and using something like u9fs?

- Dan C.

On 7/21/21, Xiao-Yong Jin  wrote:
> >> On Jul 20, 2021, at 10:52 PM, Lucio De Re  wrote:
> >>
> >> what would
> >> it take to serve 9P on Posix (in P9P, in other words) over the
> >> network? Fontsrv and gitsrv would be immediate beneficiaries.
> >
> > Just run it like,
> >
> > fontsrv -s 'tcp!192.168.9.2!1500'
> >
> > and I've no idea what gitsrv is.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T09fcdec9c87bfde4-M0fca6f2faa67aa5a68dafebd
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Transfer of Plan 9 to the Plan 9 Foundation

2021-04-15 Thread Dan Cross
On Thu, Apr 15, 2021 at 2:44 PM Anonymous AWK fan via 9fans <9fans@9fans.net>
wrote:

> > This text was generated by the GPT3 text generator using all licensing
> related threads in 9fans as input.
>
> No it wasn't.
>
> I'm concerned because only one contributor (Nokia) transferred copyright
> of their contributions to the P9F which were then re-licensed,
> but everyone seems to think this applies to all of Plan 9.
>
> But the LPL allows re-licensing to new versions published by Nokia,
> so all contributions could be re-licensed if Nokia updated the LPL.
>

Your concerns are duly noted, but I don't think this is really the right
forum for them. Perhaps try to get in touch with the foundation directly?
9fans doesn't really have much control over this side of things.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tf20bce89ef96d4b6-M754f316b7fd5c202cfaeecf1
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


[9fans] Sad news.

2020-09-28 Thread Dan Cross
I just got word that Andrey has passed away. :-(

I'm sorry, I don't have any further details right now, but wanted to let
folks know.

- Dan C.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T89f7af873f4109c5-M18047666342802d775c0f841
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Jim McKie

2020-06-24 Thread Dan Cross
Oh no! I'm so sorry to hear this. How awful! I always thought jmk was a
really friendly and approachable scientist and I enjoyed interacting with
him.

- Dan C.


On Wed, Jun 24, 2020, 8:36 PM Charles Forsyth wrote:

> I am sorry to say that Jim McKie (jmk) died suddenly on 16 June.
> https://www.ippolitofuneralhomes.com/obituaries/James-B-McKie?obId=15111702&fbclid=IwAR3d7aHZXEOhYz-ciOrQPh-W1eMw-_8MHiCUdeKOxzLBEI6VGHsSn4aTjdk

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Td73b359f9dc68c15-Md60cac4127629861fcb1f97c
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] ARM hardware and SATA

2019-12-12 Thread Dan Cross
Google invests heavily in basic computer science research, and Google
researchers are well represented at respected conferences and in high
impact journals in their subfields. So yes, one can do 'actual "research"
projects' at Google. We have an entire research organization doing just
that, often in collaboration with academia: https://research.google/

However, not everyone working on experimental projects at Google is doing
what one might call "research". For some, such as myself, the line can be
blurry, but I'm firmly in a development camp, as are most people I know. To
put it succinctly for me, as for many others, the job isn't to publish
papers or present results, it's to write software of value to Google.
However, we have considerable latitude to investigate new and innovative
ways of writing that software.

Of course, that also entails taking lessons learned from systems outside of
the mainstream.

It's true that Barret still works on Akaros: it was his PhD thesis topic.
However, Google is no longer investing in it directly.

- Dan C.

On Thu, Dec 12, 2019, 8:54 PM hiro <23h...@gmail.com> wrote:

> Dan, does that mean you are allowed to have actual "research" projects
> at google? i just never thought something like this would be possible,
> and never realized akaros happened at google itself. i imagined the
> involvement of universities instead, but i clearly didn't check
> closely enough.
> I hear only bad news from google lately, but if they give enough
> freedom to also do basic research (or let's call it OS development
> cause IT research is an oxymoron) that's great news to me indeed :)
> And as you said all i know of google is their web site. or knew, cause
> many useful services like google code search are no more. at least
> google mail still works :P

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tfa3a09b0e78ea56b-M51f65170addd39400729b2c2
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] ARM hardware and SATA

2019-12-12 Thread Dan Cross
Our use of plan9 was really incidental, in support of our work on Akaros. It
was a tool underpinning our development environment, not itself a focus of
development nor something we worked on directly. We did contribute a few
things back to 9legacy; some bug fixes
for the i218 driver where the NIC would lock up come to mind; we found a
few bugs in the 9pi USB stack that Richard fixed. I suppose that counts as
"improving" plan9.

Work on Akaros has stopped, however, at least at Google.

Those that I know who use acme at Google are not, generally, writing web
services. Rather, they are working on the Go compiler and runtime. I
suppose it's possible that someone uses acme to write web services, but the
number of people doing that kind of thing is actually pretty small, even
though a lot of people think of Google as a "web" company. I dunno; I work
on kernels.

- Dan C.


On Thu, Dec 12, 2019, 5:47 PM Juan Cuzmar  wrote:

> Wow I'm surprised that people are still working on plan9 to
> develop things especially in google... If I could aso: what kind
> of things you develop with plan9?
>
> Dan Cross  wrote:
> > We had 9legacy running on Intel NUCs at Google for our internal
> > development. It worked well enough, though of course wasn't an
> > ARM based machine. Getting it going was a little hacky, but not
> > too bad. We were using raspberry pi's as terminals.
> >
> > I haven't looked in depth, but I suspect there's relatively
> > little support for SATA interfaces in Richard's BCM code.
> > Targeting something like the BananaPi W2 as a small server
> > would probably be doable and the delta from Richard's code
> > would be smaller than an ersatz port.
> >
> > - Dan C.
> >
> >
> > On Thu, Dec 12, 2019, 12:02 PM Lucio De Re
> >  wrote:
> >
> > > I'd like suggestions for some hardware on which to run Plan 9, almost
> > > certainly expandable SSD capacity will be a must (Venti service).
> > > Price and quality will be the biggest factors, as always.
> > >
> > > Ideally, storage is where the value will reside, the actual processor
> > > could be expendable.
> > >
> > > ARM would allow me to start with Richard Miller's release, which I
> > > believe to be a very sound foundation.
> > >
> > > Thanks for any and all comments.
> > >
> > > Lucio.
> >
> > --
> > 9fans: 9fans
> > Permalink:
> >
> https://9fans.topicbox.com/groups/9fans/Tfa3a09b0e78ea56b-Mb7916a939d1b3ea5c7cf7b1f
> > Delivery options:
> > https://9fans.topicbox.com/groups/9fans/subscription
> --
> 9fans: 9fans
> Permalink:
> https://9fans.topicbox.com/groups/9fans/Tfa3a09b0e78ea56b-M4e6f7e9ded09cec99479a158
> Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
>

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tfa3a09b0e78ea56b-M58e0974725293d89ca3556d3
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] ARM hardware and SATA

2019-12-12 Thread Dan Cross
We had 9legacy running on Intel NUCs at Google for our internal
development. It worked well enough, though of course wasn't an ARM based
machine. Getting it going was a little hacky, but not too bad. We were
using raspberry pi's as terminals.

I haven't looked in depth, but I suspect there's relatively little support
for SATA interfaces in Richard's BCM code. Targeting something like the
BananaPi W2 as a small server would probably be doable and the delta from
Richard's code would be smaller than an ersatz port.

- Dan C.


On Thu, Dec 12, 2019, 12:02 PM Lucio De Re  wrote:

> I'd like suggestions for some hardware on which to run Plan 9, almost
> certainly expandable SSD capacity will be a must (Venti service).
> Price and quality will be the biggest factors, as always.
> 
> Ideally, storage is where the value will reside, the actual processor
> could be expendable.
> 
> ARM would allow me to start with Richard Miller's release, which I
> believe to be a very sound foundation.
> 
> Thanks for any and all comments.
> 
> Lucio.

--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tfa3a09b0e78ea56b-Mb7916a939d1b3ea5c7cf7b1f
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] Someone made a Wayland compositor based on Rio, Wio

2019-05-02 Thread Dan Cross
On Thu, May 2, 2019 at 9:24 PM Skip Tavakkolian wrote:

> One person's epiphany is another's afterthought.
>
> (I've been conditioned by Twitter to not read anything longer than 240
> chars. Ditto for responses. I can't speak to points you made later)
>

It wasn't clear to me that there was a question, let alone a point, in that
word salad.

On Thu, May 2, 2019, 1:21 PM hiro <23h...@gmail.com> wrote:
>
>> > Broadly speaking, that's [rio running inside rio] the essence of Rio.
>>
>> they makes no sense to me, that rio nesting feature seems like an
>> afterthought and involves a lot of back and forth which isn't just
>> inefficient, but extremely inflexible!
>>
>>


Re: [9fans] Don't Plan 9 C compiler initialize the rest of member of a struct?

2019-04-01 Thread Dan Cross
On Mon, Apr 1, 2019 at 8:36 PM Kurt H Maier  wrote:

> On Mon, Apr 01, 2019 at 08:26:30PM -0400, Jeremy O'Brien wrote:
> > On Mon, Apr 1, 2019, at 11:33, Kyohei Kadota wrote:
> > > Hi, 9fans. I use 9legacy.
> > >
> > > About below program, I expected that flags field will initialize to
> > > zero but the value of flags was a garbage, ex, "f8f7".
> > > Is this expected?
> > >
> > > ```
> > > #include <stdio.h>
> > >
> > > struct option {
> > > 	int n;
> > > 	char *s;
> > > 	int flags;
> > > };
> > >
> > > int
> > > main(void)
> > > {
> > > 	struct option opt = {1, "test"};
> > > 	printf("%d %s %x\n", opt.n, opt.s, opt.flags);
> > > 	return 0;
> > > }
> > > ```
> > >
> > >
> >
> > According to C99: "If an object that has automatic storage duration is
> not initialized explicitly, its value is indeterminate."
> >
> > Stack variable == automatic storage duration. This appears to be correct
> behavior to me.
> >
>
> Can anyone provide the patches 9legacy uses to implement C99 compliance?


There were actually quite a few of them, mostly done by Geoff Collyer.  The
compiler sources contain a list of desiderata in a file called `c99`;
of course, the plan9 compilers aren't completely compliant (they weren't
trying to be). Incidentally this file has been carried forward into, for
example, /sys/src/cmd/cc/c99 in the 9front distribution (and other plan9
derivatives).

In the present case, this appears to be a compiler bug. The aforementioned
reference to n1548 sec 6.7.9 para 10 is incorrect in that there _is_ an
explicit initializer here. The relevant text in the standard is sec 6.7.9
pp 16-21, which specifies that in the event that an explicit initializer
does not completely cover (in a topological sense) the thing it is
initializing, then the elements not covered shall be initialized as if they
had _static_ storage duration; that is, they should be zeroed.

Now as I said, the Plan 9 C compilers aren't explicitly C99 compliant. But
given that the `c99` file describes things related to initializer lists as
being unneeded because they were already implemented, one may assume it was
believed that this was covered by c99 behavior. It isn't.
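
If you're bitten by this in the meantime, a minimal workaround is just to
spell the zero out yourself rather than rely on partial-initializer zeroing;
a sketch using the same struct as above:

#include <stdio.h>

struct option {
	int n;
	char *s;
	int flags;
};

int
main(void)
{
	struct option opt = {1, "test", 0};	/* explicit zero for flags */

	printf("%d %s %x\n", opt.n, opt.s, opt.flags);
	return 0;
}

Given the bug described above, '= {0}' would rely on the same
partial-initializer rule, so spelling out each member (or using memset) is
the safer bet here.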

- Dan C.


Re: [9fans] acme under plan9port : made to work?

2018-11-29 Thread Dan Cross
On Thu, Nov 29, 2018 at 11:43 AM Mayuresh Kathe  wrote:

> i apologise up-front for asking this on 9fans, but, how is acme and
> plumber and all it's utilities (including upas) made to work under
> non-plan9 systems via plan9port; on say something like linux or even mac
> os x?
>
> do they have some kind of user-level library which emulates 9p?
>

That's exactly right.

- Dan C.


Re: [9fans] upas : without acme : possible?

2018-11-29 Thread Dan Cross
On Thu, Nov 29, 2018 at 8:45 AM Mayuresh Kathe  wrote:

> hello,
>
> is it possible to use "upas" without relying on acme?
> it might be uncomfortable (relatively speaking), but is it possible?
>

Yes. This is quite reasonable. To a first-order approximation, `upas` is a
mail transfer agent, moving mail across a network (or just into a mailbox
on the local system), while Acme provides a mail client (a "mail user
agent") based on a filesystem.

There used to be, and probably still is, another mail client just called
'mail' that could be used to read and send mail, but that is also
independent of upas.

Hope that helps.

- Dan C.


Re: [9fans] zero copy & 9p (was Re: PDP11 (Was: Re: what heavy negativity!)

2018-10-10 Thread Dan Cross
On Wed, Oct 10, 2018 at 7:58 PM  wrote:

> > Fundamentally zero-copy requires that the kernel and user process
> > share the same virtual address space mapped for the given operation.
>
> and it is. this doesnt make your point clear. the kernel is always mapped.
>

Meltdown has shown this to be a bad idea.

> (you ment 1:1 identity mapping *PHYSICAL* pages to make the lookup cheap?)
>

plan9 doesn't use an identity mapping; it uses an offset mapping for most
of the address space and on 64-bit systems a separate mapping for the
kernel. An identity mapping from P to V is a function f such that f(a) = a.
But on 32-bit plan9, VADDR(p) = p + KZERO and PADDR(v) = v - KZERO. On
64-bit plan9 systems it's a little more complex because of the two
mappings, which vary between sub-projects: 9front appears to map the kernel
into the top 2 gigs of the address space, which means that, on large
machines, the entire physical address space can't be mapped into the
kernel's range.  Of
course in such situations one maps the top part of the canonical address
space for the exclusive use of supervisor code, so in that way it's a
distinction without a difference.
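
In code, the 32-bit mapping is just an offset, not an identity; a tiny
sketch (the KZERO value here is purely illustrative, not any particular
port's):

#include <stdint.h>

#define KZERO	0xF0000000UL	/* illustrative only; the real value varies by port */

/* An offset mapping, not an identity: f(a) != a. */
uintptr_t vaddr(uintptr_t pa) { return pa + KZERO; }
uintptr_t paddr(uintptr_t va) { return va - KZERO; }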

Of course, there are tricks to make lookups of arbitrary addresses
relatively cheap by using the MMU hardware and dedicating part of the
address space to a recursive self-map. That is, if you don't want to walk
page tables yourself, or keep a more elaborate data structure to describe
the address space.
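
To make the self-map idea concrete, here is roughly its shape on 4-level
x86-64 paging; this is a sketch only, and the slot number R is an arbitrary
illustrative choice, not anything a particular kernel uses. If the top-level
table's slot R points back at the top-level table itself, the PTE for any
canonical address is itself visible at a virtual address you can compute
with a couple of shifts:

#include <stdint.h>

#define R	0x1edULL	/* illustrative self-map slot in the top-level table */

/* Virtual address of the page-table entry mapping va, assuming PML4[R] == PML4. */
uint64_t *
pte_vaddr(uint64_t va)
{
	uint64_t e = 0xFFFF000000000000ULL		/* canonical-form sign extension */
	    | (R << 39)					/* top-level index = self-map slot */
	    | (((va >> 12) << 3) & 0x0000007FFFFFFFF8ULL);
	return (uint64_t *)e;
}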

> the difference is that *USER* pages are (unless you use special segments)
> scattered randomly in physical memory or not even realized and you need
> to lookup the pages in the virtual page table to get to the physical
> addresses needed to hand them to the hardware for DMA.
>

So...walking page tables is hard? Ok

> now the *INTERESTING* thing is what happens to the original virtual
> address space that covered the I/O when someone touches into it while
> the I/O is in flight. so do we cut it out of the TLB's of ALL processes
> *SHARING* the segment? and then have the pagefault handler wait until
> the I/O is finished?


You seem to be mixing multiple things here. The physical page has to be
pinned while the DMA operation is active (unless it can be reliably
canceled). This can be done any number of ways; but so what? It's not new
and it's not black magic. Who cares about the virtual address space? If
some other processor (nb, not process -- processes don't have TLB entries,
processors do) might have a TLB entry for the mapping you just changed, you
need to shoot it down anyway: what does that have to do with making things
wait on page faults?

The simplicity of the current scheme comes from the fact that the kernel
portion of the address *space* is effectively immutable once the kernel
gets going. That's easy, but it's not particularly flexible and other
systems do things differently (not just Linux and its ilk). I'm not saying
you *should* do it in plan9, but it's not like it hasn't been done
elegantly before.


> fuck your go routines... he wants the D.
>
> > This can't always be done and the kernel will be forced to perform a
> > copy anyway.
>
> explain *WHEN*, that would be an insight in what you'r trying to
> explain.
>
> > To wit, one of the things I added to the exynos kernel
> > early on was a 1:1 mapping of the virtual kernel address space such
> > that something like zero-copy could be possible in the future (it was
> > also very convenient to limit MMU swaps on the Cortex-A15). That said,
> > the problem gets harder when you're working on something more general
> > that can handle the entire address space. In the end, you trade the
> > complexity/performance hit of MMU management versus making a copy.
>
> don't forget the code complexity with dealing with these scattered
> pages in the *DRIVERS*.
>

It's really not that hard. The way Linux does it is pretty bad, but it's
not like that's the only way to do it.

Or don't.

- Dan C.


Re: [9fans] PDP11 (Was: Re: what heavy negativity!)

2018-10-09 Thread Dan Cross
On Tue, Oct 9, 2018 at 7:24 PM hiro <23h...@gmail.com> wrote:

> from what i see in linux people have been more than just exploring it,
> they've gone absolutely nuts. it makes everything complex, not just
> the fast path.
>

To whom are you responding? Your email is devoid of context, so it is not
clear.

However, your statement appears to be based on an unstated assumption that
there is a plan9 school of thought, and a Linux school of thought, and no
other school of thought. If so, that is incorrect.

- Dan C.


Re: [9fans] PDP11 (Was: Re: what heavy negativity!)

2018-10-09 Thread Dan Cross
On Tue, Oct 9, 2018 at 5:28 PM hiro <23h...@gmail.com> wrote:

> > E.g. right now Plan 9 suffers from a *lot* of data copying between
> > the kernel and processes, and between processes themselves.
>
> Huh? What exactly do you mean?


The current plan9 architecture relies heavily on copying data between
userspace and the kernel within a process for, e.g., IO. This should be well
known to anyone who's rummaged around in the kernel, as it's pretty
evident. Because of the simple system call and VM interfaces, things like
scatter/gather IO or memory-mapped direct hardware access aren't really
options. `iowritev`, for example, coalesces its arguments into a single
buffer that it then pwrite()'s to its destination.
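
The shape of that copy, sketched in plain POSIX terms rather than the
kernel's own code (names here are illustrative, not the actual iowritev):

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Coalesce a scatter list into one buffer, then do a single pwrite:
 * every byte passes through the intermediate buffer on the way. */
ssize_t
coalesced_writev(int fd, const struct iovec *iov, int n, off_t off)
{
	size_t tot = 0, o = 0;
	ssize_t r;
	char *buf;
	int i;

	for (i = 0; i < n; i++)
		tot += iov[i].iov_len;
	buf = malloc(tot);
	if (buf == NULL)
		return -1;
	for (i = 0; i < n; i++) {
		memcpy(buf + o, iov[i].iov_base, iov[i].iov_len);	/* the extra copy */
		o += iov[i].iov_len;
	}
	r = pwrite(fd, buf, tot, off);		/* one write of the whole thing */
	free(buf);
	return r;
}

The point of zero copy is to skip the memcpy loop and the intermediate
buffer entirely.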

> Can you describe the scenario and the
> measurements you made?
>

This is a different issue. I don't know if copying is as significant an
overhead as Lyndon suggested, but there are plenty of slow code paths in
plan9. For example, when we ported the plan9 network stack to Akaros, we
made a number of enhancements that combined sped things up by 50% or
greater. Most of these were pretty simple: optimizing checksum
calculations, aligning IP and TCP headers on natural word boundaries so
that an IP address could be read with a single 32-bit load (I think that
one netted a gigabit increase in throughput), using optimized memcpy
instead of memmove in performance-critical code paths, etc. We went from
about 7Gbps on a 10Gbps interface to saturating the NIC. Those measurements
were made between dedicated test machines on a dedicated network using
netperf. Drew Gallatin, now at Netflix working on FreeBSD's network stack,
did most of the optimization work.
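
The alignment point is literally the difference between these two
(illustration only; byte-order handling elided):

#include <stdint.h>

uint32_t
load_aligned(const uint32_t *p)		/* field on a natural word boundary */
{
	return *p;			/* a single 32-bit load */
}

uint32_t
load_unaligned(const unsigned char *p)	/* arbitrary alignment */
{
	return (uint32_t)p[0] | (uint32_t)p[1]<<8 |
	       (uint32_t)p[2]<<16 | (uint32_t)p[3]<<24;
}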

If that experience in that one section of the kernel is any indicator,
plan9 undoubtedly has lots of room for optimization in other parts of the
system. Lots of aspects of the system were optimized for much smaller
machines than are common now and many of those optimizations no longer make
much sense on modern machines; the allocator is slow, for example, though
very good at not wasting RAM. Compare to a vmem-style allocator, which can
allocate any requested size in constant time, but with up to a factor-of-two
waste of memory.
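
The constant-time/waste trade is easy to see with size classes; rounding to
powers of two is just an illustration, not vmem itself, but it shows where
the factor of two comes from (a 1025-byte request occupies a 2048-byte
chunk):

#include <stddef.h>

/* Round a request up to its power-of-two size class: the allocation itself
 * becomes a constant-time pop from that class's free list, at the cost of
 * up to roughly half of each chunk going unused. */
size_t
sizeclass(size_t n)
{
	size_t s = 1;

	while (s < n)
		s <<= 1;
	return s;
}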

Lots of plan9 code is also buggy, or at least racy: consider the seemingly
random valued timeouts to "give other threads 5 seconds to get out" in
ipselffree() and iplinkfree() before "deallocating" an Iplink/Ipself.
Something like RCU, even a naive RCU, would be more robust here,
particularly under heavy load. Device drivers are atrophied and often
buggy, or at least susceptible to hardware bugs that are fixed by the
vendor-provided drivers. When I put in the plan9 networks to support Akaros
development, we ran into a bug in the i218 ethernet controller that caused
the NIC to wedge. We got Geoff Collyer to fix the i82563 driver and we sent
a patch to 9legacy, but it's symptomatic of an aging code base with a
shrinking developer population.

> If we could eliminate most of that copying, things would get a lot faster.
>
> Which things would get faster?
>

Presumably bulk data transfer between devices and the user portion of an
address space. If copying were eliminated (or just reduced) these would
certainly get fast*er*. Whether they would be sufficiently faster to make
a perceptible performance difference to a real workload is another matter.

> Dealing with the security issues isn't trivial
>
> what security issues?
>

Presumably the bread-and-butter security issues that arise whenever the
user portion of an address space is being concurrently accessed by
hardware. As a trivial example, imagine scheduling a DMA transfer from some
device into a buffer in the user portion of an address space and then
exit()'ing the process. What do you do with the pages the device was
writing into? They had better be pinned in some way until the IO operation
completes, lest they be reallocated to something else that isn't expecting
to be clobbered.

I wouldn't be surprised if the raft of currently popular speculative
execution bugs could be exacerbated by the kernel playing around with data
in the user address space in a naive way. It doesn't look like plan9 has
any serious mitigations for those.

- Dan C.


Re: [9fans] PDP11 (Was: Re: what heavy negativity!)

2018-10-08 Thread Dan Cross
On Mon, Oct 8, 2018 at 6:25 PM Digby R.S. Tarvin  wrote:

> Does anyone know what platform Plan9 was initially implemented on?
>

My understanding is that the earliest experiments involved a VAX, but
development quickly shifted to MIPS and 68020-based machines (the "gnot"
was, IIRC, a 68020-based computer).

> My guess is that there is no reason in principle that it could not fit
> comfortably into the constraints of a PDP11/70, but if the initial
> implementation was done targeting a machine with significantly more
> resources, it would be easy to make design decisions that would be entirely
> incompatible.
>

I find this unlikely.

The PDP-11, while a respectable machine for its day, required too many
tradeoffs to make it attractive as a development platform for a
next-generation research operating system in the late 1980s: be it
electrical power consumption vs computational oomph or dollar cost vs
available memory, the -11 had fallen from the attractive position it held a
decade prior. Perhaps slimming a plan9 kernel down sufficiently so that it
COULD run on a PDP-11 was possible in the early days, but I can't see any
reason one would have WANTED to do so: particularly as part of the impetus
behind plan9 was to exploit advances in contemporary hardware: lower-cost,
higher-performance, RISC-based multiprocessors; ubiquitous networking;
common high-resolution bitmapped graphical displays; even magneto-optical
storage (one bet that didn't pan out); etc.

> Certainly Richard Millar's comment suggests that might be the case. If it
> is heavily dependent on VM, then the necessary rewrite is likely to be
> substantial.
>

As a demonstration project, getting a slimmed-down plan9 kernel to boot on
a PDP-11/70-class machine would be a nifty hack, but it would be quite a
tour de force and most likely the result would not be generally useful. I
think that, as has been suggested, the conceptual simplicity of plan9
paradoxically means that resource utilization is higher than it might
otherwise be on either a more elaborate OR more constrained system (such as
one targeting e.g. the PDP-11). When you can afford not to care about a few
bytes here or a couple of cycles there and you're not obsessed with
scraping out the very last drop of performance, you can employ a simpler
(some might say 'naive') algorithm or data structure.

> I'm not sure how the kernel design has changed since the first release. The
> earliest version I have is the release I bought through Harcourt Brace back
> in 1995. But I won't be home till December so it will be a while before I
> can look at it, and probably won't have time to experiment before then in
> any case.
>

The kernel evolved substantially over its life; something like doubling in
size. I remember vaguely having a discussion with Sape where he said he
felt it had grown bloated. That was probably close to 20 years ago now.

> For what it is worth, I don't think the embarrassment of riches presented
> to programmers by current hardware has tended to produce more elegant
> designs. If more resources resulted in elegance, Windows would be a thing
> of beauty.  Perhaps Plan9 is an exception. It certainly excels in elegance
> and design simplicity, even if it does turn out to be more resource hungry
> than I imagined. I will admit that the evils of excessively constrained
> environments are generally worse in terms of coding elegance - especially
> when it leads to overlays and self modifying code.
>

plan9 is breathtakingly elegant, but this is in no small part because as a
research system it had the luxury of simply ignoring many thorny problems
that would have marred that beauty but that the developers chose not to
tackle. Some of these problems have non-trivial domain complexity and,
while "modern" systems are far too complex by far, that doesn't mean that
all solutions can be recast as elegantly simple pearls in the plan9 style.
Whether we like those problems or not, they exist and real-world solutions
have to at least attempt to deal with them (I'm looking at you, web x.0 for
x >= 2...but curse you you aren't alone).

> PDP11's don't support virtual memory, so there doesn't seem any elegant way
> to overcome that fundamental limitation on size of a singe executable.
>

No, they do: there is paging hardware on the PDP-11 that's used for address
translation and memory protection (recall that PDP-11 kept the kernel at
the top of the address space, the per-process "user" structure is at a
fixed virtual address, and the system could trap a bus error and kill a
misbehaving user-space process). What they may not support is the sort of
trap handling that would let them recover from a page fault (though I
haven't looked) and in any case, the address space is too small to make
demand-paging with reclamation cost-effective.


> So I don't think it i would be worth a substantial rewrite to get it
> going. It is a shame that there don't seem to have been any more powerful
> machines with a comparably eleg

Re: [9fans] Why Plan 9 uses $ifs instead of $IFS?

2017-10-17 Thread Dan Cross
On Tue, Oct 17, 2017 at 12:04 PM, Giacomo Tesio  wrote:
> Also, why NPROC has been left uppercase? :-)

I once had a mathematics professor who advised me not to look for
rationality or logic in nomenclature. I've found that, since taking
this advice to heart, my life is much less stressful.

- Dan C.



Re: [9fans] Why Plan 9 uses $ifs instead of $IFS?

2017-10-17 Thread Dan Cross
On Tue, Oct 17, 2017 at 10:38 AM, Giacomo Tesio  wrote:
> Out of curiosity, do anybody know why Plan9 designers chose lowercase
> variables over uppercase ones?
>
> At first, given the different conventions between rc and sh (eg $path is an
> array, while $PATH is a string), I supposed Plan 9 designers wanted to
> prevent conflict with unix tools relying to the older conventions.
>
> However, I'm not sure this was the main reason, as this also open to subtle
> issues: if a unix shell modifies $IFS and then invoke an rc script, such
> script will ignore the change and keep using the previous $ifs.
>
>
> As far as I can see, APE does not attempt any translation between the two
> conventions, so maybe I'm just missing something obvious...
>
>
> Do anyone know what considerations led to such design decision?

Aesthetics.



Re: [9fans] gcc not an option for Plan9

2013-03-24 Thread Dan Cross
On Sun, Mar 24, 2013 at 9:54 PM, Kurt H Maier  wrote:

> On Sun, Mar 24, 2013 at 09:42:09PM -0400, Dan Cross wrote:
> > Yeah.  Or someone who is arguably the biggest problem on the list adding
> > absolutely nothing other than some uninformed, dogmatically driven, rigid
> > meta-commentary.  Maybe that's all that person can do.  He should keep
> > feeling smug while turning the crank, though: he obviously knows more
> than
> > the guy who designed and wrote most of it.
>
> ...wait.  Are we talking about you?  Because I was talking about Rob
> Pike.


When you have produced a fraction of a percent of what Rob Pike has
produced over his career, I might take you seriously. Until then:
http://en.wikipedia.org/wiki/Dunning–Kruger_effect

And I'm out.

- Dan C.


Re: [9fans] gcc not an option for Plan9

2013-03-24 Thread Dan Cross
Yeah.  Or someone who is arguably the biggest problem on the list adding
absolutely nothing other than some uninformed, dogmatically driven, rigid
meta-commentary.  Maybe that's all that person can do.  He should keep
feeling smug while turning the crank, though: he obviously knows more than
the guy who designed and wrote most of it.


On Sun, Mar 24, 2013 at 9:36 PM, Kurt H Maier  wrote:

> On Sun, Mar 24, 2013 at 09:20:04PM -0400, Dan Cross wrote:
> > Eh, not so much anymore.  The morlocks have taken over, which is a shame:
> > 9fans used to be one of the best technical mailing lists on the Internet.
> >  Those days are long gone.  The ankle biters are too numerous now.
> >
> > (Cue requisite flames.)
> >
> > - Dan C.
> >
>
> I agree.  It's horrible that you can't seem to have any sort of
> technical discussion these days without some guy butting in and telling
> everyone to shut up because computers are so crazy fast that nothing
> even matters, or that nobody else cares about the technology involved,
> etc.  It's a shame.
>
> khm
>
>


Re: [9fans] gcc not an option for Plan9

2013-03-24 Thread Dan Cross
Eh, not so much anymore.  The morlocks have taken over, which is a shame:
9fans used to be one of the best technical mailing lists on the Internet.
 Those days are long gone.  The ankle biters are too numerous now.

(Cue requisite flames.)

- Dan C.


On Sun, Mar 24, 2013 at 9:00 PM, Winston Kodogo  wrote:

> "To go back to the original subject"
>
> Surely this is the first time that has ever been done on 9fans?
>
> This is 9fans, not 'Nam. There are rules.
>
>


Re: [9fans] doing a native awk port (was Re: Bug in print(2) g verb)

2013-03-03 Thread Dan Cross
On Sun, Mar 3, 2013 at 2:31 PM, erik quanstrom wrote:

> should i say "the current ot awk source"?  it's certainly not
>  designed for plan 9.
>

Regardless, you are right that it is clearly not worth porting to 'native'
Plan 9 libraries or APIs; what, if anything, would be the benefit of such
an effort?

- Dan C.


Re: [9fans] c++

2012-11-22 Thread Dan Cross
On Nov 22, 2012 12:43 PM,  wrote:
> > Ha!  Ever programmed in APL?
>
> Don't knock it, to learn APL I had to "shift paradigm" and it was a
> very important lesson in my programming education.

No doubt. As a learning exercise, such things are great. But I don't know
that the brand of brevity engendered by APL really leads to fewer defects.
:-)

- Dan C.


Re: [9fans] c++

2012-11-22 Thread Dan Cross
Thanks for making my point for me.
On Nov 22, 2012 12:13 PM, "Kurt H Maier"  wrote:

> On Thu, Nov 22, 2012 at 09:38:06AM -0500, Dan Cross wrote:
> > Personally, I think that all of this language posturing is
> > "geekier-than-thou" nonsense.
>
> And the rest of this email is "wiser-than-thou" bullshit.  Programming
> languages ARE tools.  If you enjoy using shitty tools to earn your
> living, when superior tools are available, you ARE mentally deficient.
> If someone came to me and asked me to rebuild an engine with a hammer
> and a screwdriver, I would change jobs.  With sufficient effort, I'm
> sure it's possible, but my work would not be enjoyable.
>
> It's not all about blogging, Dan.
>
>


Re: [9fans] c++

2012-11-22 Thread Dan Cross
Nor should you. What she eats is my problem not yours, and it's an
incredibly minor problem. Like, only a little more important than worrying
about C++ and Java.
On Nov 22, 2012 12:33 PM, "hiro" <23h...@gmail.com> wrote:

> dan, I don't care about your children.
>
>


Re: [9fans] c++

2012-11-22 Thread Dan Cross
On Nov 22, 2012 9:56 AM, "dexen deVries"  wrote:
>
> On Thursday 22 of November 2012 09:38:06 Dan Cross wrote:
> > In the big scheme of things, absolutely none of this matters.  Whether one
> > programs in Java, C, Go, COBOL or 370 assembler doesn't really make any
> > difference; one could die tomorrow, and would anyone care what language
> > s/he programmed in?  really?  This world has bigger problems than that.
> >
> > Programming languages are tools; nothing more.  (...)
>
> that assumes any programming language is (at best) a constant or linear factor
> in problem solving time and complexity. some circles hold opinion that more
> powerfull programming languages provide polynominal or exponential factor.

I'm not sure what that has to do with programming languages being tools: I
can drive a nail by banging on it with a screwdriver or my fist, but it's
much more convenient to use a hammer.  Which tool I choose really depends
on the problem I'm trying to solve.

In other words, what it assumes is that different languages are better
suited to different tasks.

> aside of that, in various publications number of bugs is found to correlate
> with line counts or similar metrics, making a more concise language a net win.

Ha!  Ever programmed in APL?

- Dan C.


Re: [9fans] c++

2012-11-22 Thread Dan Cross
On Nov 22, 2012 9:50 AM, "erik quanstrom"  wrote:
>
> i agree with your point.  but i think that you the statments you point
> out are hyperbole.

That is fair to an extent.

> > In the big scheme of things, absolutely none of this matters.  Whether one
> > programs in Java, C, Go, COBOL or 370 assembler doesn't really make any
> > difference; one could die tomorrow, and would anyone care what language
> > s/he programmed in?  really?  This world has bigger problems than that.
>
> this argument isn't a good one.  this is a variation of the
> "finish your plate there are starving kids in africa" argument.  the fact
> that there are starving kids in africa has no bearing on if the kid in
> the quote has had enough to eat.
>
> the fact that there are bigger problems in the world does not imply
> that we ourselves are in a position to do anything about them.  heck,
> i see problems very close to home that i can't do much about.  i can
> try to make arguments, but very often there is no direct influence that
> can be made.  and being right is no comfort.

Well, my point was *not* "there are kids starving in X, so instead of
complaining about language Y, go there and dig a well..." but rather to try
and put these things in perspective.  The point was really aimed at those
who seem emotionally consumed by trivial things like programming languages
and command shells: there are probably more important things in their own
lives that they could devote that same energy towards to better effect.

To put it another way, I consider emotional arguments about programming
languages so unimportant that they pale in comparison to encouraging my
daughter to eat a healthy breakfast; starving kids in other countries
didn't even enter my mind.

- Dan C.


Re: [9fans] c++

2012-11-22 Thread Dan Cross
Personally, I think that all of this language posturing is
"geekier-than-thou" nonsense.

Calling C++ or Java a disease?  Really?
Suggesting that if you use one of those languages you're somehow mentally
deficient?  Really?
Suggesting someone change jobs because they're asked to program in C++?
 Really?

In the big scheme of things, absolutely none of this matters.  Whether one
programs in Java, C, Go, COBOL or 370 assembler doesn't really make any
difference; one could die tomorrow, and would anyone care what language
s/he programmed in?  really?  This world has bigger problems than that.

Programming languages are tools; nothing more.  Use whichever one fits the
problem at hand.  If you're the kind of person who geeks out on and enjoys
playing around with new tools, the kind that appreciates the relative
aesthetic quality of one versus the other, more power to you: but
understand that trying to reformulate problems so that one can apply one's
whizz-bang new shiny SuperHammer, when the thing that comes out of one's
parents' toolbox will do, is just wasting time.

I came across this recently, and it really resonated:
http://www.lindsredding.com/2012/03/11/a-overdue-lesson-in-perspective/

- Dan C.


Re: [9fans] c++

2012-11-22 Thread Dan Cross
VisitorFactoryBuilderFactorySingletonDecoratorFactory.


On Thu, Nov 22, 2012 at 6:57 AM, Charles Forsyth wrote:

> I'm writing Java now, after a long gap, and it's ok.
> It has its share of annoying aspects, but it's not too bad.
> Java is a bit like a high-level assembler for the JVM,
> and there are too many packages, many with intricate interfaces and
> conventions.
> C# fixes every one of my complaints about the Java language,
> and generally seems more thoughtful.
>
> I simply ignore the philosophy as much as I can,
> although it's hard to escape the terminology (all those "factories").
>
> On 22 November 2012 11:34, hiro <23h...@gmail.com> wrote:
> > java feel highly inconsistent and are full of stupid busywork
> > and strange programming philosophies that you have to "learn" about,
> > but teach you nothing.
>
>


Re: [9fans] go forth and ulong no more!

2012-11-21 Thread Dan Cross
I agree with brucee here about the Go type names: I'd rather see uint64,
int64, uint32, int32, etc.

usize doesn't bother me much.  New C programmers are often confused by
size_t being unsigned (even experienced ones at times); this makes it clear.


On Wed, Nov 21, 2012 at 8:35 PM, Bruce Ellis  wrote:

> i think that go's scalar types would work better. also usize is  a bit
> dicky.
>
> brucee
> On Nov 22, 2012 12:23 PM, "erik quanstrom"  wrote:
>
>> On Wed Nov 21 19:19:21 EST 2012, benave...@gmail.com wrote:
>> > hola,
>> >
>> > usize, really?
>> >
>> > any reason not use this opportunity to join the world and use
>> inttypes.h or stdint.h format?
>>
>> have you read the opengroup pubs?
>>
>>
>> http://pubs.opengroup.org/onlinepubs/007904975/basedefs/stdint.h.html
>>
>> http://pubs.opengroup.org/onlinepubs/009604599/basedefs/inttypes.h.html
>>
>> i don't see any advantage to using whatever types these guys are using.
>> when porting things from plan 9, it's good to have different type names.
>> the assumptions of various systems differ.  when porting things to plan 9,
>> you're likely going to be using ape anyway.
>>
>> these headers are missing a type representing physical memory, and Rune.
>> no, i'm never going to consider using wchar_t instead.
>>
>> yet they have types we do not want such as int_{least,fast} and int_max_t.
>> they seem to be a trap set by greybeards for unsuspecting young
>> programmers.
>> one could hold this kind of thing up as a reason that c is an old and
>> broken language.
>>
>> and then there's the printf macros.  oh, joy.
>>
>> i'm sure that others could back this up with more inteligent reasoning.
>>  i'm just
>> prone to rant (had you noticed) when i see some of this stuff.
>>
>> - erik
>>
>>


Re: [9fans] 8c - is this leagal?

2012-11-21 Thread Dan Cross
On Wed, Nov 21, 2012 at 8:30 AM, Bence Fábián  wrote:

> 8c is for Plan9's C dialect.
> Look into /sys/doc/ape.ps
>

This was not a useful answer to Steve's question.

- Dan C.


Re: [9fans] Kernel panic when allocating a huge memory

2012-11-03 Thread Dan Cross
On Sat, Nov 3, 2012 at 1:13 PM, Kurt H Maier  wrote:
> On Sat, Nov 03, 2012 at 01:04:15PM -0400, erik quanstrom wrote:
>> in modern systems, i believe they mean the same thing.
>>
>> http://en.wikipedia.org/wiki/Paging#Terminology
>
> Sorry, I didn't know you were talking about Windows NT.

I didn't know you were talking about VAX Unix.

>> > memory deduplication? is that true?
>>
>> http://lwn.net/Articles/454795/
>
> hiro was asking if plan 9 deduplicates memory.

That's odd, because Erik was pretty obviously talking about the host
virtual machine.

But hey; whatever.  It's cool.

- Dan C.



Re: [9fans] Acme: the way the future actually was

2012-10-25 Thread Dan Cross
On Thu, Oct 25, 2012 at 10:28 AM, Ethan Grammatikidis wrote:
> On Fri, 14 Sep 2012 08:48:40 -0800 Jack Johnson  wrote:
> > Even with it's "faults" (age?), I still miss Oberon. It was *fun* and 
> > elegant.
>
> It's still around in AOS form where it can run native or as a
> user-space program under other OSs. I used it to try out someone else's
> work and didn't really find the UI very elegant. In particular I
> couldn't copy text from the compiler error window, which I thought was
> desperately bad. Anyway, apart from that it worked; middle-clicking to
> compile and to launch the program was ok, and the OpenGL program I was
> trying out ran very smoothly.
>
> The only link I seem to have kept is http://www.ocp.inf.ethz.ch/

Hindsight is always 20/20.

When I first used Oberon 20 years ago, it had this amazing liberating
feeling to it; a graphical demonstration of sorting algorithms?
Brilliant!  (I was in high school.  Our "Computer Science" class was
taught using Turbo Pascal on IBM PCs; the textbook had a picture of an
IBM 4381 on the cover.  Luckily, I managed to persuade the system
administrators at the local university into giving me accounts on most
of the major systems so I could use C, Unix and VMS.)

My point is that it's so easy to forget that these sorts of statements
about the power, simplicity and elegance of things past carry with
them a context that is usually not explicitly articulated.  If you
came to Oberon from some primitive computing environment (like, say,
PCs or something) then it was indeed fun and amazingly elegant.  That
said, I wouldn't want to go back to running it on a SPARCstation 1
with 16 megs of RAM, a 200MB disk, and a 17 inch black and white CRT.
It's easy to look back and say to oneself, "wow, that wasn't as cool
as I remembered it being..." but that doesn't change that, at the
time, it *was* that cool because of the context.

- Dan C.



Re: [9fans] off-topic: why linux lost the desktop

2012-10-19 Thread Dan Cross
On Thu, Oct 18, 2012 at 1:17 PM, John Floren  wrote:

> I too find Linux too mainstream: http://i.imgur.com/Wtm16.png
>

Bravo.

- Dan C.


Re: [9fans] rc's shortcomings (new subject line)

2012-08-30 Thread Dan Cross
On Thu, Aug 30, 2012 at 7:56 PM, erik quanstrom  wrote:
>> The thing is that mk doesn't really do anything to set up connections
>> between the commands it runs.
>
> it does.  the connections are through the file system.

No.  The order in which commands are run (or if they are run at all)
is based on file timestamps, so in that sense it uses the filesystem
for coordination, but mk itself doesn't do anything to facilitate
interprocess communications between the commands it runs (for example
setting up pipes between commands).

- Dan C.



Re: [9fans] rc's shortcomings (new subject line)

2012-08-30 Thread Dan Cross
On Thu, Aug 30, 2012 at 7:03 PM, erik quanstrom  wrote:
>> rejected such system-imposing structure on files in Unix-y type
>> environments since 1969.
> [...]
>> other threads of execution.  Could we do something similar with pipes?
>>  I don't know that anyone wants typed file descriptors; that would
>> open a whole new can of worms.
>
> i don't see that the os can really help here.  lib9p has no problem
> turning an undelimited byte stream → 9p messages.  there's no reason
> any other format couldn't get the same treatment.

Yeah, I don't see much here unless one breaks the untyped stream model
(from the perspective of the system).

> said another way, we already have typed streams, but they're not
> enforced by the operating system.

Yes, but then every program that participates in one of these
computation networks has to have that type knowledge baked in.  The
Plan 9/Unix model seems to preclude a general mechanism.

> one can also use the thread library technique, using shared memory.

Sure, but that doesn't do much for designing a new shell.  :-)

>> Consider a simple reduction in Lisp; say, summing up a list of numbers
>> or something like that.  In Common Lisp, we may write this as:
>>
>> (reduce #'+ '(1 2 3 4 5))
>>
>> In clojure, the same thing would be written as:
>>
>> (reduce + [1 2 3 4 5])
>>
>
> this reminds me of a bit of /bin/man.  it seemed that the case statement
> to generate a pipeline of formatting commands was awkward—verbose
> and yet limited.
>
> fn pipeline{
> 	if(~ $#* 0)
> 		troff $Nflag $Lflag -$MAN | $postproc
> 	if not{
> 		p = $1; shift
> 		$p | pipeline $*
> 	}
> }
>
> fn roff {
> 	...
> 	fontdoc $2 | pipeline $preproc
> }

Ha!  That's something.  I'm not sure what, but definitely something (I
actually kind of like it).

>> http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html
>>
>> Your example of running multiple 'grep's in parallel sort of reminded
>> me of this, though it occurs to me that this can probably be done with
>> a command: a sort of 'parallel apply' thing that can run a command
>> multiple times concurrently, each invocation on a range of the
>> arguments.  But making it simple and elegant is likely to be tricky.
>
> actually, unless i misread (i need more coffee), the blog sounds just like
> xargs.

Hmm, not exactly.  xargs would be like reducers if xargs somehow asked
stdin to apply a program to itself.

A parallel apply sort of thing could be used with xargs, of course;
'whatever | xargs papply foo' could keep some $n$ of foo's running at
the same time.  The magic behind 'papply foo `{whatever}' is that it
knows how to interpret its arguments in blocks.  xargs will invoke a
command after reading $n$ arguments, but that's mainly to keep from
overflowing the argument buffer, and (to my knowledge) it won't try to
keep multiple instances running in parallel.

Hmm, I'm afraid I'm off in the realm of thinking out loud at this
point.  Sorry if that's noisy for folks.

- Dan C.



Re: [9fans] rc's shortcomings (new subject line)

2012-08-30 Thread Dan Cross
On Thu, Aug 30, 2012 at 7:51 PM, Dan Cross  wrote:
> A parallel apply sort of thing could be used with xargs, of course;
> 'whatever | xargs papply foo' could keep some $n$ of foo's running at
> the same time.  The magic behind 'papply foo `{whatever}' is that it
> knows how to interpret its arguments in blocks.  xargs will invoke a
> command after reading $n$ arguments, but that's mainly to keep from
> overflowing the argument buffer, and (to my knowledge) it won't try to
> keep multiple instances running them in parallel.

Oops, I should have checked the man page before I wrote.  It seems
that at least some versions of xargs have a '-P' flag for 'parallel' mode.



Re: [9fans] rc's shortcomings (new subject line)

2012-08-30 Thread Dan Cross
On Thu, Aug 30, 2012 at 7:11 PM, dexen deVries  wrote:
> On Thursday 30 of August 2012 15:35:47 Dan Cross wrote:
>> (...)
>> Your example of running multiple 'grep's in parallel sort of reminded
>> me of this, though it occurs to me that this can probably be done with
>> a command: a sort of 'parallel apply' thing that can run a command
>> multiple times concurrently, each invocation on a range of the
>> arguments.  But making it simple and elegant is likely to be tricky.
>
> now that i think of it...
>
> mk creates DAG of dependences and then reduces it by calling commands, going
> in parallel where applicable.
>
> erik's example with grep x *.[ch] boils down to two cases:
>  - for single use, do it simple & slow way -- just run single grep process for
> all files
>  - but when you expect to traverse those files often, prepare a mkfile
> (preferably in a semi-automatic way) which will perform search in parallel.
>
> caveat: output of one grep instance could end up in the midst of a /line/ of
> output of another grep instance.

The thing is that mk doesn't really do anything to set up connections
between the commands it runs.



Re: [9fans] rc vs sh

2012-08-30 Thread Dan Cross
[Special to Lucio: Email to proxima.alt.za from Google's SMTP servers
is failing; it looks like they're listed in rbl.proxima.alt.za.]

On Thu, Aug 30, 2012 at 3:56 PM, Lucio De Re  wrote:
>> But as I said, this is not to argument about Go developers' choices:
>> they do as they see fit
>
> I think their philosophy is sound, not just an arbitrary choice.  The
> alternative is a commitment that can only be fulfilled by applying
> resources best utilised on the focal issue.
>
> For example, the kerTeX installation relies on an ftp client that
> accepts a URL on the command line.  My UBUNTU installation has no such
> ftp command.  That leaves you with the choice between driving the more
> conventional ftp program with a small script (not nice, but it can be
> done) or require (as you do for LEX and YACC) that wget be installed
> everywhere, not just where ftp isn't of the neat BSD variety.
>
> It's a choice you make on behalf of the user and you can be sure that
> a significant portion of your target market would prefer the opposite.
> A very small portion will also stand up and criticise you if you go
> the wget rule, whereas it is much harder to challenge the use of ftp
> with a script.  However, of the two, wget is more robust.
>
> That's the way it is.  Sometimes one has the luxury of doing things
> properly, sometimes it is more critical to arrive at a result first.
> A healthy ethos would encourage tidying up behind one, but the costs
> are seldom justified in the present development climate.  Future
> conditions may be different and perhaps we can then all feel justified
> in chipping in to tidy up behind our less tidy pioneers.

Very well put.

- Dan C.



Re: [9fans] rc's shortcomings (new subject line)

2012-08-30 Thread Dan Cross
On Wed, Aug 29, 2012 at 7:27 PM, erik quanstrom  wrote:
>> > rc already has non-linear pipelines.  but they're not very convienient.
>>
>> And somewhat limited.  There's no real concept of 'fanout' of output,
>> for instance (though that's a fairly trivial command, so probably
>> doesn't count), or multiplexing input from various sources that would
>> be needed to implement something like a shell-level data flow network.
>>
>> Muxing input from multiple sources is hard when the data isn't somehow
>> self-delimited.
>>[...]
>> There may be other ways to achieve the same thing; I remember that the
>> boundaries of individual writes used to be preserved on read, but I
>> think that behavior changed somewhere along the way; maybe with the
>> move away from streams?  Or perhaps I'm misremembering?
>
> pipes still preserve write boundaries, as does il.  (even the 0-byte write) 
> but tcp of course by
> definition does not.  but either way, the protocol would need to be
> self-framed to be transported on tcp.  and even then, there are protocols
> that are essentially serial, like tls.

Right.  I think this is the reason for Bakul's question about
s-expressions or JSON or a similar format; those formats are
inherently self-delimiting.  The problem with that is that, for
passing those things around to work without some kind of reverse
'tee'-like intermediary, the system has to understand that the things
being transferred are s-expressions or JSON records or whatever, not just
streams of uninterpreted bytes.  We've steadfastly rejected such
system-imposed structure on files in Unix-y type environments since 1969.

But conceptually, these IPC mechanisms are sort of similar to channels
in CSP-style languages.  A natural question then becomes, how do
CSP-style languages handle the issue?  Channels work around the muxing
thing by being typed; elements placed onto a channel are indivisible
objects of that type, so one doesn't need to worry about interference
from other objects simultaneously placed onto the same channel in
other threads of execution.  Could we do something similar with pipes?
 I don't know that anyone wants typed file descriptors; that would
open a whole new can of worms.

Maybe the building blocks are all there; one could imagine some kind
of 'splitter' program that could take input and rebroadcast it across
multiple output descriptors.  Coupled with some kind of 'merge'
program that could take multiple input streams and mux them onto a
single output, one could build nearly arbitrarily complicated networks
of computations connected by pipes.  Maybe for simplicity constrain
these to be DAGs.  With a notation to describe these computation
graphs, one could just do a topological sort of the graph, create
pipes in all the appropriate places and go from there.  Is the shell
an appropriate place for such a thing?
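
The splitter half really is almost trivial; something like the sketch below
(essentially tee, of course). The merge side is the genuinely hard part, for
the framing reasons above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* splitter: copy standard input to every file named on the command line. */
int
main(int argc, char *argv[])
{
	char buf[8192];
	int fd[argc];
	ssize_t n;
	int i;

	for (i = 1; i < argc; i++)
		if ((fd[i] = open(argv[i], O_WRONLY|O_CREAT|O_TRUNC, 0666)) < 0) {
			perror(argv[i]);
			return 1;
		}
	while ((n = read(0, buf, sizeof buf)) > 0)
		for (i = 1; i < argc; i++)
			write(fd[i], buf, n);
	return 0;
}

Run as 'producer | splitter a b c' and each of a, b, and c gets the full
stream.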

Forsyth's link looks interesting; I haven't read through the paper in
detail yet, but it sort of reminded me of LabView in a way (where
non-programmers wire together data flows using boxes and arrows and
stuff).

>> > i suppose i'm stepping close to sawzall now.
>>
>> Actually, I think you're stepping closer to the reducers stuff Rich
>> Hickey has done recently in Clojure, though there's admittedly a lot
>> of overlap with the sawzall way of looking at things.
>
> my knowledge of both is weak.  :-)

The Clojure reducers stuff is kind of slick.

Consider a simple reduction in Lisp; say, summing up a list of numbers
or something like that.  In Common Lisp, we may write this as:

(reduce #'+ '(1 2 3 4 5))

In clojure, the same thing would be written as:

(reduce + [1 2 3 4 5])

The problem is how the computation is performed.  To illustrate,
here's a simple definition of 'reduce' written in Scheme (R5RS doesn't
have a standard 'reduce' function, but it is most commonly written to
take an initial element, so I do that here).

(define (reduce binop a bs)
  (if (null? bs)
      a
      (reduce binop (binop a (car bs)) (cdr bs))))

Notice how the recursive depth of the function is linear in the length
of the list.  But, if one thinks about what I'm doing here (just
addition of simple numbers) there's no reason this can't be done in
parallel.  In particular, if I can split the list into evenly sized
parts and recurse, I can limit the recursive depth of the computation
to O(lg n).  Something more like:

(define (reduce binop a bs)
  (if (null? bs)
      a
      (let ((halves (split-into-halves bs)))
        (binop (reduce binop a (car halves))
               (reduce binop a (cadr halves))))))

If I can exploit parallelism to execute functions in the recursion
tree simultaneously, I can really cut down on execution time.  The
requirement is that binop over a and bs's is a monoid; that is, binop
is associative over the set from which 'a' and 'bs' are drawn, and 'a'
is an identity element.

This sounds wonderful, of course, but in Lisp and Scheme, lists are
built from cons cells, and e

Re: [9fans] rc's shortcomings (new subject line)

2012-08-29 Thread Dan Cross
On Wed, Aug 29, 2012 at 2:04 AM, erik quanstrom  wrote:
>> > the haahr/rakitzis es' if makes more sense, even if it's wierder.)
>>
>> Agreed; es would be an interesting starting point for a new shell.
>
> es is great input.  there are really cool ideas there, but it does
> seem like a lesson learned to me, rather than a starting point.

Starting point conceptually, if not in implementation.

>> I think in order to really answer that question, one would have to
>> step back for a moment and really think about what one wants out of a
>> shell.  There seems to be a natural conflict a programming language
>> and a command interpreter (e.g., the 'if' vs. 'if not' thing).  On
>> which side does one err?
>
> since the raison d'être of a shell is to be a command interpter, i'd
> go with that.

Fair enough, but that will color the flavor of the shell when used as
a programming language.  Then again, Inferno's shell was able to
successfully navigate both in a comfortable manner by using clever
facilities available in that environment (module loading and the
like).  It's not clear how well that works in an environment like
Unix, let alone Plan 9.

>> I tend to agree.  As a command interpreter, rc is more or less fine as
>> is.  I'd really only feel motivated to change whatever people felt
>> were common nits, and there are fairly few of those.
>
> there are nits of omission, and those can be fixable.  ($x(n-m) was added)

Right.

>> > perhaps (let's hope) someone else has better ideas.
>>
>> Well, something off the top of my head: Unix pipelines are sort of
>> like chains of coroutines.  And they work great for defining linear
>> combinations of filters.  But something that may be interesting would
>> be the ability to allow the stream of computations to branch; instead
>> of pipelines being just a list, make them a tree, or even some kind of
>> dag (if one allows for the possibility of recombining streams).  That
>> would be kind of an interesting thing to play with in a shell
>> language; I don't know how practically useful it would be, though.
>
> rc already has non-linear pipelines.  but they're not very convienient.

And somewhat limited.  There's no real concept of 'fanout' of output,
for instance (though that's a fairly trivial command, so probably
doesn't count), or multiplexing input from various sources that would
be needed to implement something like a shell-level data flow network.

Muxing input from multiple sources is hard when the data isn't somehow
self-delimited.  For specific applications this is solvable by the
various pieces of the computation just agreeing on how to represent
data and having a program that takes that into account do the muxing,
but for a general mechanism it's much more difficult, and the whole
self-delimiting thing breaks the Unix 'data as text' abstraction by
imposing a more rigid structure.

There may be other ways to achieve the same thing; I remember that the
boundaries of individual writes used to be preserved on read, but I
think that behavior changed somewhere along the way; maybe with the
move away from streams?  Or perhaps I'm misremembering?  I do remember
that it led to all sorts of hilarious arguments about what something
like 'write(fd, "", 0)' should induce on the reading side, but this
was a long time ago.

Anyway, maybe something along the lines of, 'read a message of length
<=SOME_MAX_SIZE from a file descriptor; the message boundaries are
determined by the sending end and preserved by read/write' could be
leveraged here without too much disruption to the current model.
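
To make the idea concrete, here is a rough sketch in plain C; the
4-byte little-endian length prefix is just one possible convention (it
happens to be what 9P uses), and writemsg/readmsg are made-up names
rather than an existing interface.  A real version would also loop
over short writes:

#include <stdint.h>
#include <unistd.h>

enum { MAXMSG = 8192 };	/* arbitrary cap on message size */

/* read exactly n bytes, looping over short reads */
static int
readn(int fd, void *buf, uint32_t n)
{
	uint32_t got;
	ssize_t r;

	for(got = 0; got < n; got += r){
		r = read(fd, (char*)buf + got, n - got);
		if(r <= 0)
			return -1;
	}
	return 0;
}

/* send one delimited message: length prefix, then payload */
int
writemsg(int fd, const void *buf, uint32_t n)
{
	unsigned char hdr[4];

	if(n > MAXMSG)
		return -1;
	hdr[0] = n; hdr[1] = n>>8; hdr[2] = n>>16; hdr[3] = n>>24;
	if(write(fd, hdr, 4) != 4 || write(fd, buf, n) != (ssize_t)n)
		return -1;
	return 0;
}

/* receive one message as delimited by the sender; returns its length or -1 */
int
readmsg(int fd, void *buf, uint32_t max)
{
	unsigned char hdr[4];
	uint32_t n;

	if(readn(fd, hdr, 4) < 0)
		return -1;
	n = hdr[0] | hdr[1]<<8 | hdr[2]<<16 | (uint32_t)hdr[3]<<24;
	if(n > max || n > MAXMSG || readn(fd, buf, n) < 0)
		return -1;
	return n;
}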

> i think part of the problem is answering the question, what problem
> would we like to solve.  because "a better shell" just isn't well-defined
> enough.

Agreed.

> my knee-jerk reaction to my own question is that making it easier
> and more natural to parallelize dataflow.  a pipeline is just a really
> low-level way to talk about it.  the standard
> grep x *.[ch]
> forces all the *.[ch] to be generated before 1 instance of grep runs on
> whatever *.[ch] evaluates to be.
>
> but it would be okay for almost every use of this if *.[ch] were generated
> in parallel with any number of grep's being run.
>
> i suppose i'm stepping close to sawzall now.

Actually, I think you're stepping closer to the reducers stuff Rich
Hickey has done recently in Clojure, though there's admittedly a lot
of overlap with the sawzall way of looking at things.

- Dan C.



Re: [9fans] rc vs sh

2012-08-28 Thread Dan Cross
On Aug 29, 2012 2:14 AM, "Jeremy Jackins"  wrote:
> > Well, if you could explain a) how it's currently broken, and b) how a
> > 'corrected' version would be useful, others might be more sympathetic
> > to your concerns.  From most perspectives, it doesn't appear broken at
> > all; it works fine, it's just not what you would have done.
>
> Speak for yourself, please.

Which part?  One was a request to provide a substantive argument, the other
an objective fact.

- Dan C.


Re: [9fans] rc's shortcomings (new subject line)

2012-08-28 Thread Dan Cross
On Tue, Aug 28, 2012 at 8:56 PM, erik quanstrom  wrote:
>> And rc is not perfect.  I've always felt like the 'if not' stuff was a 
>> kludge.
>
> no, it's certainly not.  (i wouldn't call if not a kludge—just ugly.

Kludge perhaps in the sense that it seems to exist to work around an
issue with the grammar and the expectation that it's mostly going to
be used interactively, as opposed to programmatically.  See below.

> the haahr/rakitzis es' if makes more sense, even if it's wierder.)

Agreed; es would be an interesting starting point for a new shell.

> but the real question with rc is, what would you fix?

I think in order to really answer that question, one would have to
step back for a moment and really think about what one wants out of a
shell.  There seems to be a natural conflict between a programming language
and a command interpreter (e.g., the 'if' vs. 'if not' thing).  On
which side does one err?

> i can only think of a few things around the edges.  `{} and $ are
> obvious and is some way to use standard regular expressions.  but
> those really aren't that motivating.  rc does enough.

I tend to agree.  As a command interpreter, rc is more or less fine as
is.  I'd really only feel motivated to change whatever people felt
were common nits, and there are fairly few of those.

> perhaps (let's hope) someone else has better ideas.

Well, something off the top of my head: Unix pipelines are sort of
like chains of coroutines.  And they work great for defining linear
combinations of filters.  But something that may be interesting would
be the ability to allow the stream of computations to branch; instead
of pipelines being just a list, make them a tree, or even some kind of
dag (if one allows for the possibility of recombining streams).  That
would be kind of an interesting thing to play with in a shell
language; I don't know how practically useful it would be, though.

>> switch/case would make helluva difference over nested if/if not, if
>> defaulted to fall-through.
>
> maybe you have an example?  because i don't see that.  if not works
> fine, and can be nested.  case without fallthrough is also generally
> what i want.  if not, i can make the common stuff a function.
>
>> variable scoping (better than subshel) would help writing larger
>> scripts, but that's not necessarily an improvement ;-) something
>> similar to LISP's `let' special form, for dynamic binding.

(A nit: 'let' actually introduces lexical scoping in most Lisp
variants; yes, doing (let ((a 1)) ...) has non-lexical effect if 'a'
is a dynamic variable in Common Lisp, but (let) doesn't itself
introduce dynamic variables.  Emacs Lisp is a notable exception in
this regard.)

> there is variable scoping.  you can write
>
> x=() y=() cmd
>
> cmd can be a function body or whatever.  x and y are then private
> to cmd.  you can nest redefinitions.
>
> x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo 
> third $x $y}; echo ret second $x $y}; echo ret first $x $y}
> first 1 2
> second a b
> third α β
> ret second a b
> ret first 1 2

This syntax feels clunky and unfamiliar to me; rc resembles block
scoped languages like C; I'd rather have a 'local' or similar keyword
to introduce a variable in the scope of each '{ }' block.

> you should try the es shell.  es had let and some other scheme-y
> features.  let allows one to do all kinds of tricky stuff, like build
> a shell debugger in the shell, but my opinion is that es was more
> powerful and fun, but it didn't buy enough because it didn't really
> expand on the essential nature of a shell.  what can one do to
> manipulate processes and file descriptors.

es was a weird merger between rc's syntax and functional programming
concepts.  It's neat-ish, but unless we're really ready to go to the
pipe monad (not that weird, in my opinion) you're right.  Still, if it
allowed one to lexically bind a file descriptor to a variable, I could
see that being neat; could I have a closure over a file descriptor?  I
don't think the underlying process model is really set up for it, but
it would be kind of cool: one could have different commands consuming
part of a stream in a very flexible way.

- Dan C.



Re: [9fans] rc vs sh

2012-08-28 Thread Dan Cross
One last thing:

On Tue, Aug 28, 2012 at 11:56 PM, Kurt H Maier  wrote:
> On Tue, Aug 28, 2012 at 11:50:27PM +0530, Dan Cross wrote:
>> You are conflating bootstrapping the language with the language's
>> build system.  The go command is actually quite nice.
>
> Also, the go command is useless unless the bootstrap build system can
> construct it.  I'm not conflating anything, I'm just not talking about
> the build system you like.

I don't *like* it, I just don't *hate* it.  Two very different concepts.

>> The use of bash in Go is tiny.  Why fixate on it when you could go
>> build something useful, instead?
>
> Because a corrected build system would be useful to me.

Well, if you could explain a) how it's currently broken, and b) how a
'corrected' version would be useful, others might be more sympathetic
to your concerns.  From most perspectives, it doesn't appear broken at
all; it works fine, it's just not what you would have done.

> Is this a complicated concept?

No.  But it's basic tact and consideration to fully explain oneself if
one expects a useful response.

>> Evidence suggests otherwise.
>
> I have yet to see such.

*shrug*  Don't know what to tell you, then.

>> Anyway, I have neither the time nor the inclination to get into a
>> pissing match with some random person on the Internet about Go's use
>> of bash.  If it's such a serious problem for you, well, I hope you
>> figure out a way to work around it.  If not, then I don't know what to
>> tell you.  In either case, good luck!
>
> I wish you would have ascertained you had nothing to tell me earlier in
> the thread.  Thank you for your support.

I somehow get the feeling that few people have anything to tell you
that you're willing to listen to.

- Dan C.



Re: [9fans] rc vs sh

2012-08-28 Thread Dan Cross
On Tue, Aug 28, 2012 at 11:13 PM, Kurt H Maier  wrote:
> On Tue, Aug 28, 2012 at 09:15:35PM +0530, Dan Cross wrote:
>> Oh no, I can't.  Please, by all means, point me to whatever it is that
>> you have produced that demonstrates your prowess in this area so that
>> I can learn more.
>
> you sound upset

Not at all.

> [...]
>> And yet the produced the language, and people use it.  But you clearly
>> know better, so please, by all means, show me what you've produced
>> that's useful that I can learn something from.
>
> I have no urges to prove myself to you.

I see no reason to take your opinion any more seriously than
anyone else's.  You're entitled to it; that doesn't mean you are
right.

> They have produced a language, yes.
> They have not produced a worthwhile build system

You are conflating bootstrapping the language with the language's
build system.  The go command is actually quite nice.

The use of bash in Go is tiny.  Why fixate on it when you could go
build something useful, instead?

> or development community.

Evidence suggests otherwise.

Anyway, I have neither the time nor the inclination to get into a
pissing match with some random person on the Internet about Go's use
of bash.  If it's such a serious problem for you, well, I hope you
figure out a way to work around it.  If not, then I don't know what to
tell you.  In either case, good luck!

- Dan C.



Re: [9fans] rc vs sh

2012-08-28 Thread Dan Cross
On Tue, Aug 28, 2012 at 9:00 PM, Kurt H Maier  wrote:
> On Tue, Aug 28, 2012 at 08:48:39PM +0530, Dan Cross wrote:
>> Wonderful!  Please point me to your new programming language so I can
>> have a look?
>
> I don't think it would do you any good, since you are apparently unable
> to differentiate between programming languages and build systems.

Oh no, I can't.  Please, by all means, point me to whatever it is that
you have produced that demonstrates your prowess in this area so that
I can learn more.

>> So are you saying that because they use bash to build the system, the
>> language is shitty?  Or just the build system is shitty?
>
> I have other issues with Go as a language, but the build system is
> unmitigated shit.

Irrelevant.

>> Writing a shell script is easy.  Writing a shell script to build a
>> non-trivial piece of software across $n$ different platforms is hard.
>
> And yet people do it all the time.

Irrelevant.

>> To put it another way, why not cut the cord?  Because it takes time
>> away from doing something they consider more important.
>
> Incorrect.  There's a whole world of people out there; some of them
> would be willing to build and maintain an elegant, portable shell
> script.  That's the point of having an open development process, I
> thought.  I understand the need for the core devs to focus on the task
> at hand: language building.  It is idiotic not to delegate the build
> system to someone willing and able to devote the time to it.

Not the way that community is currently set up, so irrelevant.

>> More generally, if your impression of Go as a language ("Typical go
>> shit...") is based on what shell they chose for the build script, then
>> I'm not sure you have your priorities straight.
>
> Fortunately, your assessment of my priorities is meaningless.  "Typical
> Go shit" referred to the ceaseless lack of focus on quality endemic to a
> schizophrenic community that was organized around a language without a
> mission.  Go is still evolving in two separate directions; one camp sees
> it as yet another language for web shit, and one camp sees it as a real
> programming language for actual programs.  I long ago lost interest in
> seeing who will eventually win, but in the meantime every bad decision
> seems to have some chorus of supporters who take every piece of
> criticism personally.  *Those* are the people who need to examine their
> priorities.

And yet they produced the language, and people use it.  But you clearly
know better, so please, by all means, show me what you've produced
that's useful that I can learn something from.

- Dan C.



Re: [9fans] rc vs sh

2012-08-28 Thread Dan Cross
On Tue, Aug 28, 2012 at 8:35 PM, Kurt H Maier  wrote:
> On Tue, Aug 28, 2012 at 04:52:34PM +0200, Lucio De Re wrote:
>> Or are you oriented towards kiloLOCs of test code to see which
>> features are implemented and provide compatability a la autoconf?
>
> Excellent example of a false dilemma.  I'm oriented towards exerting the
> effort to make something that isn't shitty.

Wonderful!  Please point me to your new programming language so I can
have a look?

> I'm at peace with the go
> developers decision to avoid that effort.  Are you?

So are you saying that because they use bash to build the system, the
language is shitty?  Or just the build system is shitty?

> Anyway, bash uses autoconf as well.  So all you've done is push the mess
> one step farther away from your code.  Why not just cut the cord?  I'm
> hearing "shell scripting is easy" and I'm hearing "acceptance testing is
> too hard."  Which is it?  I can write portable shell scripts, but the
> idiots on golang-nuts have explicitly said they don't WANT portable
> shell scripts.  They want to rely on bash, and all the GNU bullshit that
> brings with it.

Writing a shell script is easy.  Writing a shell script to build a
non-trivial piece of software across $n$ different platforms is hard.

I can't speak for the Go team, but I suspect their decision is a
pragmatic compromise: should they spend their (limited) time creating
and maintaining a portable version of 'rc' that can be built (how,
exactly?  With a script that's just a straight run of commands or
something?) on a bunch of different platforms so that they can write
some beautiful script to build Go, or should they produce some lowest
common denominator shell script in the most common shell out there
that builds the system and then spend the time they save concentrating
on building a cool programming language?

I don't think the gain from the former approach is really worth the
cost to the latter.

To put it another way, why not cut the cord?  Because it takes time
away from doing something they consider more important.

More generally, if your impression of Go as a language ("Typical go
shit...") is based on what shell they chose for the build script, then
I'm not sure you have your priorities straight.

- Dan C.



Re: [9fans] rc vs sh

2012-08-28 Thread Dan Cross
On Tue, Aug 28, 2012 at 2:27 PM, Rudolf Sykora  wrote:
> Hello,

Howdy.

> I am just curious...
> Here
> http://9fans.net/archive/2007/11/120
>
> Russ Cox writes he uses bash as his default shell. Does anybody know
> the reason? Is this for practicality within the linux environment? Or
> has he found rc too limiting?

So rc is a nice shell, but it's most useful in a particular
environment that has evolved with it in a very pleasant way.  If one
is constrained to work outside of that environment, then rc isn't so
much better than any other shell.

Note that I'm not referring to the implementation (rc is certainly
nicer than bash in that sense), but rather to the tangible function from
a user's perspective.  If one is in an environment where the majority of
one's coworkers are stuck using bash and one needs to retain
shell-level compatibility with them for some reason or another, then
it makes sense to use bash, as aesthetically unpleasing as that may
be.

One has to ask oneself, is rc worth it?  If the level of productivity
increase that came from using rc instead of bash was greater than the
cost of maintaining a custom environment built around rc, then one
might make an argument for using it.  But how many of us can
honestly say the benefits are so great?  The basic command,
pipe and stdout redirection syntax is the same.  It's the same if I
want to run a process or pipeline in the background.  I can set the
prompts to be the same and configure things so that copy/paste works
in an identical fashion across the two.  And those are the VAST
majority of things I do with a shell; to be honest, 99% of the time, I
don't even think about what shell I'm running, regardless of what it
is.

And rc is not perfect.  I've always felt like the 'if not' stuff was a kludge.

- Dan C.



[9fans] Stupid troff questions.

2012-06-07 Thread Dan Cross
Sorry for this, but it seems that 9fans is the best place to ask for
troff advice

I want to use .2C in ms documents to enter two-column mode.  However,
this seems to force insertion of a paragraph break space, which I do
not want; I can't find any documentation on how to turn that off, and
reading troff macros isn't that fun.  Does anyone know how to avoid the extra
shot of vertical whitespace?  (In the meanwhile, I used me on Unix to
do what I needed to do.)

- Dan C.



Re: [9fans] Governance question???

2012-05-16 Thread Dan Cross
On Wed, May 16, 2012 at 10:03 AM, hiro <23h...@googlemail.com> wrote:
> There are towns without restaurants and pubs in America?

Yes.  Americans tend to have bars, rather than pubs.

Forsyth's characterization of Atlanta is largely correct, but his
conclusion (to avoid) is incorrect.  Atlanta has Coca Cola and two
streets named "Peach"; the latter intersect.

- Dan C.



Re: [9fans] Regarding 9p based "protocols" message framing

2012-03-20 Thread Dan Cross
On Tue, Mar 20, 2012 at 4:30 PM, Ciprian Dorin Craciun
 wrote:
> On Tue, Mar 20, 2012 at 14:32, Dan Cross  wrote:
>> 9P itself is not a stream-oriented
>> protocol, nor is it what one would generally call, 'transport
>> technology.'
>
>    I would beg to differ on this subject... Because a lot of tools in
> the Plan9 environment expose their facilities as 9p file systems, but
> expose other semantics than that of "generic" files -- i.e. a
> contiguous stream of bytes from start to EOF -- like for example RPC
> semantic in case of factotum; thus I would say that 9p is used as a
> "session" layer in the OSI terminology. (But as in TCP/IP stack we
> don't have other layers between "transport" and "application" I would
> call it a "transport" layer in such a context.)

That's one way of looking at it.  However, the "file as a stream of
bytes" abstraction is mapped onto 9P at a higher layer; 9P itself is
really about discrete messages.  The canonical "transport" layer in
TCP/IP is TCP.
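
To illustrate what "discrete messages" means here: per intro(5), every
9P message begins with its own framing, size[4] type[1] tag[2]
followed by type-specific fields, where size counts the entire message
including the size field itself.  A rough C picture of that fixed
header (illustrative only; the real declarations live in fcall.h and
are marshalled byte by byte rather than relying on struct layout):

#include <stdint.h>

/* fixed leading fields of every 9P message, per intro(5) */
typedef struct Ninephdr Ninephdr;
struct Ninephdr {
	uint32_t size;	/* length of the whole message, this field included */
	uint8_t  type;	/* Tversion, Rversion, Twalk, Rwalk, ... */
	uint16_t tag;	/* pairs a T-message with its R-message */
	/* type-specific fields follow */
};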

But we're arguing semantics at that point; regardless, I think you'd
find you hold something of a minority view.

- Dan C.



Re: [9fans] Regarding 9p based "protocols" message framing

2012-03-20 Thread Dan Cross
On Tue, Mar 20, 2012 at 7:42 AM, Yaroslav  wrote:
>>  Why was I puzzled: because as a non Plan9 user / developer, I
>> usually think of the underlaying transport technology (be it sockets
>> or 9p) as a stream of bytes without explicit framing.
>
> As I understand, 9P itself is designed to operate on top of a
> message-oriented transport; however, it has everything required to run
> over a stream, esp. message length at beginning of every message.
> Framing is done by the library: the read9pmsg routine performs as many
> reads as necessary to return a complete 9P message to the caller.

Perhaps initially: over an IP network, 9P used to run over IL.  With
9P2000, IL was deprecated and 9P was most typically run over TCP.
There was a very old message to 9fans (like, early 90's kind of old)
that implied that IL was much more efficient than TCP on the wire, but
it probably doesn't matter now.  9P itself is not a stream-oriented
protocol, nor is it what one would generally call, 'transport
technology.'

- Dan C.



Re: [9fans] 9vx instability

2011-11-27 Thread Dan Cross
On Sun, Nov 27, 2011 at 2:09 PM, Lyndon Nerenberg  wrote:
> On Sun, 27 Nov 2011, Dan Cross wrote:
>> On Sat, Nov 26, 2011 at 11:32 PM,   wrote:
>>> /lib/mainkampf is part of an ongoing project to make
>>> venti sha-1 hashes easy to remember by translating
>>> them into hitler-speeches.
>>
>> It's also, frankly, offensive.
>
> I think 'disgusting' describes it better.

That's certainly true of the work (mein kampf) itself.

- Dan C.



Re: [9fans] 9vx instability

2011-11-27 Thread Dan Cross
On Sat, Nov 26, 2011 at 11:32 PM,   wrote:
> /lib/mainkampf is part of an ongoing project to make
> venti sha-1 hashes easy to remember by translating
> them into hitler-speeches.

It's also, frankly, offensive.



Re: [9fans] NIX 64-bit kernel is available

2011-09-14 Thread Dan Cross
On Wed, Sep 14, 2011 at 5:48 PM, John Floren  wrote:
> I don't want to drag out the discussion, but:
>
> 1. Nixie is pretty much generic these days. Wikipedia even calls it a
> genericised trademark.
> 2. Nixies are also Germanic water spirits.
>
> I think we're good :)

(I don't want to contribute to the noise, but "PIX" would be a pretty
cool name, all things considered.)

- Dan C.



Re: [9fans] Plan9 development

2010-11-15 Thread Dan Cross
On Sun, Nov 14, 2010 at 11:29 PM, Gary V. Vaughan  wrote:
> You can either try to remember what all of those are, or use something
> like autoconf to probe for known bugs, and gnulib to plug them, or
> you can link against a shim library like GNU libposix which will
> do all of that automatically when it is built and installed, allowing
> you to write to the POSIX APIs with impunity.

I've read this discussion with interest.  Something that strikes me is
that there are certain underlying beliefs and assumptions in the Plan
9 community that are taken for granted and rarely articulated, but
that frame many of the comments from 9fans.  Further, those are, in
many ways, contrary to the assumptions and requirements Gary is
constrained by when working on libtool.

I believe that one of the most powerful decisions that the original
plan 9 developers made was consciously resisting backwards
compatibility with Unix.  That's not to say that they completely
ignored it, or that it was actively worked against, but that it was
not a primary consideration and did not (overly) constrain either
design or implementation.  This freed them to revisit assumptions,
rethink many of the decisions in the base system, and clean up a lot
of rough edges.

For instance, and relevant to this discussion, take a look at how
cross-compilation and platform independence on Plan 9 "just works" in
a simple, consistent way across multiple architectures.  I was
surprised how an earlier message in this discussion when Gary said,

> If you have never tried to build and link shared libraries from the same
> code-base on 30 (or even 3) separate incompatible architectures, then
> you are probably missing the point, and needn't read any further.

Granted, I think the key thing here is that Gary's talking about
shared libraries (which, as Russ said, the Plan 9 developers simply
punted on), instead of just building, but I can't help but feel that
this overlooks part of the plan 9 way of doing things.

The plan 9 developers made a decision to break with the convention of
naming object files with a ".o" extension, assigned a letter to each
architecture, established the convention that object files and libraries
would use that letter in their filenames, and renamed the compiler,
assembler and linker accordingly.  Then they modified the filesystem
hierarchy to have architecture-specific directory trees for
architecture-specific things (which is easy to do when you've got
mutable namespaces).  Mk was smart enough that these conventions could
be used in the build system pretty easily.  None of these name changes
is particularly deep; in many ways, they are simply cosmetic.
However, they led to a simplification that makes building for
different architectures out of the same tree nearly trivial.  Just by
setting an environment variable, I can build the entire system for a
different architecture, in the same tree, with a single command, with
no conflicts.  Since the compiler for each architecture is just
another program, cross-compilation isn't special or complicated.
Compare this to setting up gcc for cross compilation.

And that's sort of the point.  9fans tend not to ask, "how can I make
this work across multiple systems that are immutable as far as I'm
concerned as a developer" but rather they ask, "how can the system
support me doing this in a simpler, orthogonal, more consistent way?"
If that means shedding some convention and replacing it with something
else that makes life easier, there's less hesitation to do that.

To that end, libtool, autoconf and automake, etc, all seem to answer
the wrong question.  From the 9fans perspective (and take this with a
grain of salt; I can't claim to speak for all of us), libtool seems
"crazy" because it puts a bandaid on a wart instead of trying to solve
the underlying problem of complex, inconsistent interfaces across
systems.  In this way, it is reactionary.  Autoconf et al are
analogous to a bunch of nested #ifdef's, and most 9fans would chose to
program to some sort of shim library that provided a consistent
interface as a matter of course.  Better yet, they'd go fix the
underlying systems so that they correctly implemented the relevant
standard or interface.  I'm not sure that's possible with Unix, as
Gary rightly points out, because of the weight of the installed base,
fragmentation between variants and the requirements of backwards
compatibility.  Though unrealistic, it's certainly idealistic.

One of the enduring benefits of Plan 9 is that it is (still) a good
example of how well-reasoned engineering tradeoffs and a modicum of
good taste can produce a powerful and elegant system with a simple
implementation.  Rob Pike is (in?)famously quoted as saying, "not only
is Unix dead, it's starting to smell really bad" (note to Rob: is this
apocryphal?  I've never found an original source).  I think that's
often taken out of context; Unix may be dead as an exciting venue for
the exploration of fundamentally new ways o

Re: [9fans] Is this the same Russ Cox we know here?

2009-11-10 Thread Dan Cross
Not yet.  There's an 8g, so it should (in principle) be portable.  But
it requires user-mode support for setting LDTs.  But I'm really not
any sort of expert at all, just an interested observer.  Maybe Rob or
Russ will poke in and say something.

On Tue, Nov 10, 2009 at 8:00 PM, andrey mirtchovski
 wrote:
> but will it run on Plan 9?
>
>



Re: [9fans] Is this the same Russ Cox we know here?

2009-11-10 Thread Dan Cross
Yes.

On Tue, Nov 10, 2009 at 7:54 PM, Joseph Stewart
 wrote:
> Hmmm... is this Limbo/Newsqueak/Alef inspired?
> http://golang.org
> -joe



Re: [9fans] nvram

2009-07-29 Thread Dan Cross
On Tue, Jul 28, 2009 at 2:37 PM, erik quanstrom wrote:
> this change as worked well on my personal system and at coraid
> for the past 6 months.  it just works.  even on hitherto unknown
> controllers like the orion.

Hmm.  A few years ago, I ran into a similar problem and added a
variable that could be set in plan9.ini to specify where the nvram
actually is.  It works reasonably well



Re: [9fans] dcp - a deep copy script, better than dircp

2009-07-20 Thread Dan Cross
On Mon, Jul 20, 2009 at 6:56 PM, erik quanstrom wrote:
> i know you can do it with du, but it seems a bit "cat -n"ish to me.
> for comparison:

This was why I wrote 'walk' a few years ago; du is the disk usage
tool, not a general file walker.

- Dan C.



Re: [9fans] new usb stack and implicit timeouts

2009-07-20 Thread Dan Cross
char *msg = "timeout LONG_ENOUGH";	/* LONG_ENOUGH is a placeholder */
fd = open("/some/ctl", OWRITE);
write(fd, msg, strlen(msg));
close(fd);

On Mon, Jul 20, 2009 at 2:10 PM, Lyndon Nerenberg wrote:
>> I am unsure I would remove timeouts even from bulk endpoints.
>> It is true that some devices (the usb/serial for example) need to
>> read for an undefined time waiting for data, but I don't think that is
>> an issue as long
>> as the timeouts are long enough,
>
> Please show us the algorithm that *correctly* determines 'long enough' for
> every application talking to the devices.
>
>



Re: [9fans] dcp - a deep copy script, better than dircp

2009-07-20 Thread Dan Cross
On Mon, Jul 20, 2009 at 11:02 AM, Ethan
Grammatikidis wrote:
> Sorry if that was a bit harsh, but I've had far too much 'advice' to 'just do 
> this easy little thing'... Computers are supposed to supplement the brain, to 
> help, not require more (and in some cases quite impossible) work. To file 
> anything so you can find it again requires experience in filing that 
> particular information type. I'm constantly dealing with new data...

Not to criticize, but I think the suggestion that Steve is making is
that, in order to better use the computer to supplement the brain and
help, it's best to use the tools of this particular computer system in
the most natural way, versus trying to use it as merely an improved
Linux or Unix or what have you.  Now, I'm not suggesting that you
don't understand Plan 9, or that the underlying reasons you are moving
directories around aren't valid, of course!  If you're data is new,
then good to go.  If not, then I think the advise is for some things
that you may find simplify your life, not make it harder.



Re: [9fans] new usb stack and implicit timeouts

2009-07-20 Thread Dan Cross
Pardon me if this is totally ignorant, but can't we just have a ctl
message to control a timeout, which applications may then set on their
own?

On Mon, Jul 20, 2009 at 3:12 AM, Gorka Guardiola wrote:
> On Sun, Jul 19, 2009 at 5:58 PM, Francisco J Ballesteros wrote:
>> On Sun, Jul 19, 2009 at 5:46 PM,  wrote:
>>> http://www.beyondlogic.org/usbnutshell/usb6.htm#SetupPacket
>>>
>>
>> IIRC, I think the host controller is responsible for timing out
>> requests sent to the device (I refer to setup packets), but my uchi
>> does not. In any case, I don't think anyone wants to remove timeouts
>> from ctl requests.
>>
>>
>
> I am unsure I would remove timeouts even from bulk endpoints.
> It is true that some devices (the usb/serial for example) need to
> read for an undefined time waiting for data, but I don't think that is
> an issue as long
> as the timeouts are long enough, doing polling is quite easy. There is
> polling in the
> lower levels anyway.
>
> On the other hand, I think smart card readers go for
> lunch on a read and may never come
> back if there is no timeout. Of course alarm() can be used, but
> a timeout makes it simpler. I prefer having to poll on some
> cases than having to use signals on others.
>
> --
> - curiosity sKilled the cat
>
>



Re: [9fans] intel gma 950 graphics

2009-07-16 Thread Dan Cross
On Thu, Jul 16, 2009 at 1:54 PM, erik quanstrom wrote:
> to be precise, this is not 950 graphics:
>
> 0.2.0:  vid  03.00.00 8086/2772  10 0:fe98 524288 1:cc01 16 
> 2:e008 268435456 3:fe94 262144
>        Intel Corporation 82945G/GZ Integrated Graphics Device

Hmm, perhaps I am confused (it wouldn't be the first time).  My
understanding was that the 82945G implements the GMA950 chipset.  Come
to think of it, I'm not entirely sure why I thought that?

> the intel atom machine i've been working with keeps
> doing things i'm very happy with!  i got a chance to check
> out the vesa graphics and they seem to be very well.
> 1600x1200x16 works just fine.  the vga timings are perfect.
> and even pictures don't look too bad.  since i'm using pat
> rather than mtrr, adding the newly-added vesa->flush()
> fixed the slight niggle i was having with a jumpy pointer.

I really wish I could do 1680x1024xwhatever.

> now if i could just get someone to build the same thing with
> intel 82574 nics

That reminds me, I've been meaning to ask you a question.  I'll send
it off-list.

- Dan C.



Re: [9fans] Intel GMA950 video

2009-07-16 Thread Dan Cross
On Wed, Jul 15, 2009 at 3:28 PM, Lyndon Nerenberg wrote:
> Anybody running a terminal with a GMA950 chipset? I need to verify it works
> before I plunk down money on some new terminal hardware. VESA support is
> fine, just as long as rio us usable on it.
>
> The Wiki shows i950 VESA support. I'm not sure of i960 == GMA950. The way
> vendors are these days you can't even tell if GMA950 == GMA950 half the time
> :-P

I am.

In particular, I'm using an 82945G-based terminal with the VESA
driver.  It works well, but is not optimal (my monitor supports an 8:5
aspect ratio, incompatible with the offered VESA modes).  I'm slowly
putting together a native driver (really, updating the i810 driver)
for it, but I haven't touched it in a couple of weeks now.

I do not believe that the i960 uses the GMA950, however; I think it
uses a successor but they are probably largely compatible.

- Dan C.



Re: [9fans] Plan9 as an everyday OS

2009-07-14 Thread Dan Cross
On Tue, Jul 14, 2009 at 9:01 AM, erik quanstrom wrote:
> there are other sound models, it would be nice to design ac97's
> interface in such a way that it can work with other sound models.

Years ago, I suggested building a generic audio layer into the kernel
and plugging specific devices into that, much like how the Ethernet
device drivers are structured.  At the time, it wasn't seen as
worthwhile as there was essentially no code sharing between the
various drivers for audio devices (SB16, ESS-whatever, the bitsy).
However, I still think this is worthwhile just to provide (a) a
standard interface for audio devices (e.g., /dev/audioctl always
accepts the same messages to set volume, input levels, etc), and (b)
to have a single kernel support more than one type of audio device
(imagine a network where you actually have an SB16 plus a bunch of
AC97 devices and some of these HCI things that Devon mentioned; one
9pc should be able to support them all).
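
A sketch of what (a) might look like from user space; the file name
/dev/audioctl and the "volume" message here are hypothetical, just to
show the shape of a uniform ctl interface:

#include <u.h>
#include <libc.h>

/* set the output volume through a (hypothetical) uniform audio ctl file */
void
setvolume(int percent)
{
	int fd;

	fd = open("/dev/audioctl", OWRITE);
	if(fd < 0)
		sysfatal("open /dev/audioctl: %r");
	if(fprint(fd, "volume %d", percent) < 0)
		sysfatal("audioctl write: %r");
	close(fd);
}

The point is that the string written would be the same no matter which
driver sits underneath.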

- Dan C.


