[9fans] Aquarela patches

2009-02-12 Thread Andreas Zell
Hello,

I fixed some bugs in aquarela.
Is this the right place to post the patches?

AZ.



Re: [9fans] Aquarela patches

2009-02-12 Thread Federico G. Benavento
patch(1) (different from the lunix patch...)
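
that is, the plan 9 patch(1) tool.  a rough sketch of a submission
(the patch name, address, and file list are placeholders, and the
synopsis is from memory, so check the man page first):

; 9fs sources
; cd /sys/src/cmd/aquarela
; patch/create aquarela-fixes you@example.com smb.c

patch/create reads a description from standard input and leaves the
patch under /n/sources/patch for review.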

On Thu, Feb 12, 2009 at 6:37 AM, Andreas Zell z...@imageaccess.de wrote:
 Hello,

 I fixed some bugs in aquarela.
 Is this the right place to post the patches?

 AZ.





-- 
Federico G. Benavento



Re: [9fans] source browsing via http is back

2009-02-12 Thread Bruce Ellis
Stop being sensible! There is no room on this list for such behaviour.

brucee

On Thu, Feb 12, 2009 at 11:49 PM, erik quanstrom quans...@quanstro.net wrote:
 On Wed, Feb 11, 2009 at 7:28 AM, erik quanstrom quans...@quanstro.net wrote:
   what extra work would that be,
 
  that it pulls files in their entirety (IOW: cp /n/sources/... /)
 
  i would be very surprised if a single copy of sources had many shared
  blocks.

 He doesn't want a single copy, though; he is hoping to mirror
 sourcesdump. That's 2255 almost-copies of sources. Yes, there will be
 changes in each snapshot, but how often does something like
 /extra/python.iso.bz2 change?

 exactly.  the point i was trying to make, and evidently
 was being too coy about, is that 330 odd gb wouldn't
 be as useful a number as the sum of the sizes of all the
 new/changed files from all the dump days.  this would
 be a useful comparison because this would give a
 measure of how much space is saved with venti over
 the straightforward algorithm of copying the changed
 blocks, as ken's fileserver does.

 - erik





Re: [9fans] Plan 9 source history (was: Re: source browsing via http is back)

2009-02-12 Thread Venkatesh Srinivas

On Wed, Feb 11, 2009 at 07:07:48PM +0100, Uriel wrote:

Oh, glad that somebody found my partial git port useful, I might give
it another push some time.

Having a git/hg repo of the plan9 history is something I have been
thinking about for a while, really cool that you got something going
already.

Will you provide a standard git web interface (and a 'native' git
interface for more efficient cloning)?


http://acm.jhu.edu/git/plan9 is a git web interface;
git://acm.jhu.edu/git/plan9 is a native git interface.
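
For example, on a host with git installed, cloning from the native
interface should just be:

  git clone git://acm.jhu.edu/git/plan9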

-- vs



Re: [9fans] source browsing via http is back

2009-02-12 Thread Roman V. Shaposhnik
On Thu, 2009-02-12 at 07:49 -0500, erik quanstrom wrote:
 exactly.  the point i was trying to make, and evidently
 was being too coy about, is that 330 odd gb wouldn't
 be as useful a number as the sum of the sizes of all the
 new/changed files from all the dump days.  this would
 be a useful comparison because this would give a
 measure of how much space is saved with venti over
 the straightforward algorithm of copying the changed
 blocks, as ken's fileserver does.

I'm confused. Since when did kenfs enter this conversation?
I thought we were talking about how sources are managed today
and how replica might make you waste some bandwidth (albeit,
not all that much given how infrequently sources themselves
change).

Thanks,
Roman.




Re: [9fans] source browsing via http is back

2009-02-12 Thread erik quanstrom
 On Thu, 2009-02-12 at 07:49 -0500, erik quanstrom wrote:
  exactly.  the point i was trying to make, and evidently
  was being too coy about, is that 330 odd gb wouldn't
  be as useful a number as the sum of the sizes of all the
  new/changed files from all the dump days.  this would
  be a useful comparison because this would give a
  measure of how much space is saved with venti over
  the straightforward algorithm of copying the changed
  blocks, as ken's fileserver does.
 
 I'm confused. Since when did kenfs enter this conversation?
 I thought we were talking about how sources are managed today
 and how replica might make you waste some bandwidth (albeit,
 not all that much given how infrequently sources themselves
 change).

exactly.  i don't believe that the amount of data replica would
transfer is anywhere near 330gb.  that's over 1000x the size of the
distribution, and i'm pretty sure sources doesn't see that much churn.

replica will move about the same amount of data as kenfs would
store, since they use similar algorithms.  old unix dump would use
a similar amount of space.  mkfs would use more, as it is
file-based, not block-based.
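
a minimal rc sketch of the measurement proposed earlier in the
thread: sum the bytes of files that are new or changed between two
consecutive dump days.  the dump paths and the cmp/hoc plumbing are
my guesses, not anything replica itself does.

#!/bin/rc
# compare one sourcesdump day against the previous one
old=/n/sourcesdump/2008/0831
new=/n/sourcesdump/2008/0901
total=0
for(f in `{du -a $new | awk '{print $2}'}){
	if(test -f $f){
		# the previous day's copy of the same file
		b=$old^`{echo $f | sed 's;^'$new';;'}
		# count files absent from $old or whose contents differ
		if(! cmp -s $b $f >[2]/dev/null)
			total=`{echo $total + `{ls -l $f | awk '{print $6}'} | hoc}
	}
}
echo new/changed bytes: $total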

- erik



Re: [9fans] source browsing via http is back

2009-02-12 Thread Nathaniel W Filardo
On Thu, Feb 12, 2009 at 07:49:58AM -0500, erik quanstrom wrote:
 exactly.  the point i was trying to make, and evidently
 was being too coy about, is that 330 odd gb wouldn't
 be as useful a number as the sum of the sizes of all the
 new/changed files from all the dump days.  this would
 be a useful comparison because this would give a
 measure of how much space is saved with venti over
 the straightforward algorithm of copying the changed
 blocks, as ken's fileserver does.

Unless I misunderstand how replica works, the 330 odd GB number [1] is
useful as the amount of data that would have to be transferred over the
wire to initialize a mirror.  (Since, as I understand it, a replica log
of sourcesdump would have nothing but add commands for each
$year/$dump/$file entry, and would therefore necessitate transferring
each file separately.)

On the other hand, it's entirely possible that I'm missing some feature
of replica, or that some set of wrapper scripts around it would
suffice.  If so, please excuse, and correct, my ignorance.

On the first hand again, given the occasional reports of "replica hosed
me" I'm not terribly keen on trusting it, and I seem to recall that some
of the fixes have involved hand-editing the replica logs on sources.
This makes me suspicious that some of the replica logs frozen in
sourcesdump would be incorrect and would lead to incorrect data on
mirrors if used as part of the scheme.

With a venti & vac (auth/none vac, naturally, so as to not violate
filesystem permissions) based mirror, there's a single score published
daily that covers the entirety of sourcesdump so far, and a venti/copy
-f suffices to bring any mirror up to date using at most 550 odd MB if
the initial mirror is empty. [2]
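
A sketch of what the daily update might look like (the host names and
score here are placeholders; venti/copy takes a source host, a
destination host, and a score, and -f skips subtrees whose root block
the destination already has):

  ; venti/copy -f tcp!sources.example.net!venti tcp!mymirror!venti vac:0123456789abcdef0123456789abcdef01234567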

--nwf;

[1] The discrepancy between 550 MB and 330 GB increases as time goes on
and as the slice of sources being mirrored goes from just some source
files that some schmo thought would be nice to mirror to all of it.

[2] Further, 9fs access to sources is grand, but it does take me 10 to
15 minutes to pull down a day's worth of just some source files, even
if nothing has changed and I use vac -f, due to all the network latency
for Tstat/Rstat requests.  This could be improved in a number of ways,
but it strikes me as simpler to use venti/copy to copy only the
incremental deltas.  Some brief experiments, transferring blocks from
Baltimore back to a machine in the same neighborhood as sources,
indicate that venti/copy -f takes 15 minutes for the first copy
(2002/1212) and that subsequently copying even a dump with many changes
(2008/0901) took only four minutes.  (Git may do even better.)




Re: [9fans] source browsing via http is back

2009-02-12 Thread andrey mirtchovski
 On the first hand again, given the occasional reports of "replica
 hosed me" I'm not terribly keen on trusting

given the occasional reports of "software X hosed me" (for any and all
X), i don't think we should be terribly keen on using computers at
all.



Re: [9fans] source browsing via http is back

2009-02-12 Thread Nathaniel W Filardo
On Thu, Feb 12, 2009 at 09:50:50AM -0700, andrey mirtchovski wrote:
  On the first hand again, given the occasional reports of "replica
  hosed me" I'm not terribly keen on trusting
 
 given the occasional reports of "software X hosed me" (for any and all
 X), i don't think we should be terribly keen on using computers at
 all.

Touché. :)

--nwf; 




Re: [9fans] source browsing via http is back

2009-02-12 Thread erik quanstrom
  On the first hand again, given the occasional reports of "replica
  hosed me" I'm not terribly keen on trusting
 
 given the occasional reports of "software X hosed me" (for any and all
 X), i don't think we should be terribly keen on using computers at
 all.

or, being a programmer, one could endeavor to fix these problems.
i haven't yet found any significant software that worked under all conditions
for all people from the time it was first compiled.

- erik



Re: [9fans] source browsing via http is back

2009-02-12 Thread erik quanstrom
 On Thu, Feb 12, 2009 at 07:49:58AM -0500, erik quanstrom wrote:
  exactly.  the point i was trying to make, and evidently
  was being too coy about, is that 330 odd gb wouldn't
  be as useful a number as the sum of the sizes of all the
  new/changed files from all the dump days.  this would
  be a useful comparison because this would give a
  measure of how much space is saved with venti over
  the straightforward algorithm of copying the changed
  blocks, as ken's fileserver does.
 
 Unless I misunderstand how replica works, the 330 odd GB number [1] is
 useful as the amount of data that would have to be transferred over the
 wire to initialize a mirror.  (Since, as I understand it, a replica log
 of sourcesdump would have nothing but add commands for each
 $year/$dump/$file entry, and would therefore necessitate transferring
 each file separately.)

actually not; see http://www.quanstro.net/plan9/history.pdf
i have replicated the history onto 4 different
fileservers.  only the *differences* need to be transferred.

 On the first hand again, given the occasional reports of "replica hosed
 me" I'm not terribly keen on trusting it, and I seem to recall that some
 of the fixes have involved hand-editing the replica logs on sources.
 This makes me suspicious that some of the replica logs frozen in
 sourcesdump would be incorrect and would lead to incorrect data on
 mirrors if used as part of the scheme.

i've posted a number of fixes for specific failure modes reported.
i've done about 10k replicas with my changes without any failures.
try it.  the only way to fix things is to keep working on them.
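
for reference, the usual incantation is something like the following
(config path and -v flag from memory of the wiki; see replica(1)):

	; 9fs sources
	; replica/pull -v /dist/replica/network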

- erik



Re: [9fans] source browsing via http is back

2009-02-12 Thread Bruce Ellis
bellissimo!

On Fri, Feb 13, 2009 at 3:50 AM, andrey mirtchovski
mirtchov...@gmail.com wrote:
 On the first hand again, given the occasional reports of "replica
 hosed me" I'm not terribly keen on trusting

 given the occasional reports of "software X hosed me" (for any and all
 X), i don't think we should be terribly keen on using computers at
 all.





[9fans] ata vs. ide

2009-02-12 Thread erik quanstrom
there's a little confusion with ata vs. ide.  the short
2008 story is that ata is a command set; ide is a
programming interface.  the history is much more
complicated and riddled with backronyms like
“pata.”

so i think it would be a good idea to at least
make this change in sd(3):

; diffy -c sd
/n/dump/2009/0212/sys/man/3/sd:21,36 - sd:21,39
  .IR C  ,
  and by its unit number
  .IR u .
- The controller naming convention for ATA(PI) units starts
- with the first controller being named
+ The controller naming convention for legacy IDE units
+ names the first controller
[etc.]

i realize that it might be excessively annoying to
change sdata.c to sdide.c, but even if that is too
much i still think it would be a good idea to clarify
things in the man page. after all, the ahci and mv50xx
drivers also use the ata command set.

objections?

- erik



[9fans] SDnosense

2009-02-12 Thread erik quanstrom
SDnosense is used by devsd for access to the
raw device; it keeps a device from returning
sense data.  for many drives this means
squirreling away the sense data, because it
comes along with the status.  is there any
particular reason why devsd can't do this?

maybe i'm missing something.

- erik



Re: [9fans] reminder: internships in sunny california (but you gotta be

2009-02-12 Thread john
 I'm still looking for undergrad/grad students who'd like to spend some
 time here in wine country.
 
 http://www.eagleridgeproperties.com/admin/housepictures/355707CA54.jpg
 
 I can pay you if you are a US citizen. If not, you can come here as a
 visitor (yeah, I know, no money, sux, but
 there are some fun machines we have access to and you could get a
 topic out of it).
 
 This is plan 9 work. Let me know if interested.
 
 thanks
 
 ron

The more 9terns the better (I'll be working there again this summer);
working with Ron is great and Livermore is a really nice place.  I
strongly recommend applying if you can.

John




[9fans] why sources and the plan 9 web server have been up and down lately

2009-02-12 Thread geoff
We're suffering a machine-room renovation, major electrical
work, and high winds.  As a result of some combination of these,
we've had two major power outages and three minor ones today.

Because we've been moving machines around recently, we didn't have
all of our networking equipment and our computers on UPSes today.
That's now been fixed, and I believe our systems are all back up
and should stay that way, barring a prolonged power outage.



Re: [9fans] why sources and the plan 9 web server have been up and down lately

2009-02-12 Thread andrey mirtchovski
you're taking all the fun out of complaining :(



Re: [9fans] why sources and the plan 9 web server have been up and down lately

2009-02-12 Thread Anthony Sorace
and thank god for that.