Re: portage tree (Was: Re: reiser4 status (correction))

2006-07-23 Thread Hans Reiser
Thanks Christian.  You can go ahead and add something to our wiki
pointing to it if you would like.  This might help tide people over
until the repacker ships.

Hans


Re: reiser4 status (correction)

2006-07-23 Thread Hans Reiser
Mike Benoit wrote:

On Sat, 2006-07-22 at 07:34 -0500, David Masover wrote:

  

The compression will probably mostly be about speed.  Remember, if we're
talking about people who want to see tangible, visceral results, we're
probably also talking about end-users.  And trust me, the vast majority
of most of my data (as an end-user) is not very compressible.




Sure, mine too. Between the many gigs of MP3s and movies I have stored
on my HD, only about 10-20GB is the OS, email, and documents/source
code. Even just compressing that small portion, though, I could probably
save between 5-10GB. The difference is that I can do a df before and
a df after, and instantly see I got my money's worth. Same with
encryption. 
  

I am looking forward to the first user email complaining that he
compressed a file stored on reiser4 and it didn't save space (and then
someday maybe even an email saying that he compressed a file and it took
up more space, but the user-space compressor is sure that it saved space
and he does not understand). :)

With the repacker it is much more difficult (for average users) to time
how long a program takes to load some file (or launch), before the
repacker and after.

I think you are confusing repacking and compression?  Repacking, by
removing seeks, will make it more predictable not less.

 Especially since caching comes into play. 

Also, according to this little poll on one of the compressed FUSE sites
you linked to, more people are looking to compression for space saving
than for speed:
http://parallel.vub.ac.be/~johan/compFUSEd/index.php?option=pollstask=resultspollid=31


  

No, mostly we're talking about things like office documents, the 
majority of which fit in less than a gigabyte, and multimedia (music, 
movies, games) which will gain very little from compression.  If 
anything, the benefit would be mostly in compressing software.



less tangible, like fragmentation percentages and minor I/O throughput
improvements. I used to work at a large, worldwide web hosting company,
and I could see that making a case to management for purchasing Reiser4
compression would be pretty easy for our shared servers. Instantly
freeing up large amounts of disk space (where .html/.php files were the
vast majority) would save huge amounts of money on disk drives,
especially since most of the servers used RAID1 and adding new drives
was a huge pain in the neck. Making a case to purchase a repacker would
be much, much more difficult.
  

Hmm, the problem is, if storage space is really the big deal, it's been 
done before, and some of these efforts are still usable and free:

http://parallel.vub.ac.be/~johan/compFUSEd/
http://www.miio.net/fusecompress/
http://north.one.pl/~kazik/pub/LZOlayer/
http://apfs.humorgraficojr.com/apfs_ingles.html

And while we're on the topic, here's an FS that does unpacking of 
archives, probably about the same way we imagined it in Reiser4 
pseudofiles/magic:

http://www.nongnu.org/unpackfs/

But regardless, as far as I can tell, the only real, tangible benefit of 
using Reiser4 compression instead of one of those four FUSE filesystems 
is speed.  Reiser4 would compress/decompress when actually hitting the 
disk, not just the FS, and it would also probably use in-kernel 
compression, rather than calling out to userspace on every FS operation.


I think that compressing only on flush is a big issue.  It was a lot
harder to code it, but it really makes a difference.  You don't want a
machine with a working set that fits into RAM to be compressing, that
would be lethal to performance, and it is a very important case.

Hans


Re: reiser4 status (correction)

2006-07-22 Thread Hans Reiser
David Masover wrote:



 And it's not just databases.  Consider BitTorrent.  The usual
 BitTorrent way of doing things is to create a sparse file, then fill
 it in randomly as you receive data.  Only if you decide to allocate
 the whole file right away, instead of making it sparse, you gain
 nothing on Reiser4, since writes will be just as fragmented as if it
 was sparse.

If you don't flush it before you fill it, then in reiser4 it does not
matter if it starts out sparse.
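
A quick aside for anyone unfamiliar with sparse files -- a minimal shell
sketch, with an arbitrary file name:

  # Create a file with 1GB apparent size but no allocated data blocks:
  dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G
  ls -lh sparse.img   # shows the apparent size, ~1.0G
  du -h sparse.img    # shows blocks actually allocated, ~0

The FS only allocates blocks as data is actually written into the hole.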


 Personally, I'd rather leave it as sparse, but repack everything later.

We have 3 levels of optimization: 1) at each modification, 2) at each
flush, and 3) at each repack.  Each of these operates on a different
time scale, and all 3 are worthy of doing as right as we can.

Now, the issue of where the air holes should be?  Why, that is the grand
experiment that will start to happen in a few months.  Nobody knows yet
what defaults we should have, and whatever we choose, there will be some
users who gain from explicit control of it.

   it must not be as trivial as I think it is.

The problem is that there was a list of must-dos, and this was just one
of them.  If reiser4 goes in, then fsync is the only thing in front of
the repacker.  The list has shrunk a bunch.


 A much better approach in my opinion would be to have Reiser4 perform
 well in the majority of cases without the repacker, and sell the
 repacker to people who need that extra bit of performance. If I'm not
 mistaken this is actually Hans's intent.


 Hans?

Yes, that's the idea.  Only sysadmins of large corps are likely to buy. 
We throw in service and support as well for those who purchase it.

If I was making money, I would not do this, but I am not.  I am not
really willing to work a day job for the rest of my life supporting guys
in Russia, it is only ok to do as a temporary measure.  I am getting
tired


 If Reiser4 does turn out to
 perform much worse over time, I would expect Hans would consider it a
 bug or design flaw and try to correct the problem however possible. 


I would want Reiser4 without a repacker to outperform all other
filesystems.  The problem with this fragmentation over time issue is
that it is hard to tweak allocation, measure the effect, tweak again,
etc.  Not sure how to address it without a lot of work.  Maybe we need
to create some sort of condensed portage benchmark

Hans



Re: reiser4 status (correction)

2006-07-22 Thread Mike Benoit
On Fri, 2006-07-21 at 23:53 -0600, Hans Reiser wrote:
 We have 3 levels of optimization: 1) at each modification, 2) at each
 flush, and 3) at each repack.  Each of these operates on a different
 time scale, and all 3 are worthy of doing as right as we can.
 
 Now, the issue of where the air holes should be?  Why, that is the grand
 experiment that will start to happen in a few months.  Nobody knows yet
 what defaults we should have, and whatever we choose, there will be some
 users who gain from explicit control of it.
 

Wouldn't the most recently modified files give a good hint to the
repacker? The files larger than a few kilobytes that were most recently
modified would be packed with air holes, and the files least recently
modified would be packed more tightly? Perhaps even the currently most
fragmented files would be repacked with air holes, while the least
fragmented files would be packed more tightly.

Could you not also write a small app that gathers all kinds of
stats about a file system and sends them to a Namesys server in hopes of
finding better statistical data? I'm sure there are thousands of users
who would be willing to run this app for the greater good, regardless
of whether they used ReiserFS in the first place. Things like the number
of files on the disk, what percentage of those files have been modified
in the last week, which files are the most/least fragmented, and when
they were last modified, etc.?
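
A minimal sketch of such a gatherer -- everything here is hypothetical,
and a real tool would also have to anonymize and upload the results:

  #!/bin/sh
  # Count files and recent modifications on one filesystem (sketch).
  fs=${1:-/}
  total=$(find "$fs" -xdev -type f | wc -l)
  recent=$(find "$fs" -xdev -type f -mtime -7 | wc -l)
  echo "files on $fs: $total"
  echo "modified in the last week: $recent"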

 
  A much better approach in my opinion would be to have Reiser4 perform
  well in the majority of cases without the repacker, and sell the
  repacker to people who need that extra bit of performance. If I'm not
  mistaken this is actually Hans's intent.
 
 
  Hans?
 
 Yes, that's the idea.  Only sysadmins of large corps are likely to buy. 
 We throw in service and support as well for those who purchase it.
 
 If I was making money, I would not do this, but I am not.  I am not
 really willing to work a day job for the rest of my life supporting guys
 in Russia, it is only ok to do as a temporary measure.  I am getting
 tired
 

Personally, as much as I would like it all to be free, I think I would
be much more willing to pay for compression/encryption (on both servers
and desktops) than I would be for a repacker. Hard disks cost money, and
if I can compress the vast majority of my data and save on purchasing a
new hard disk, that is well worth it. I also have some important data
that I would really like to encrypt, which is also worth spending money
on. But gaining ~10% in performance probably isn't worth spending
money on, as I most likely wouldn't notice a difference in my day-to-day
life, unless my server was incredibly busy. 

There is no doubt there is a market for a repacker, but I think people
are much more likely to spend money on something that is immediately
tangible, like disk space instantly being freed up by compression, or
data instantly being encrypted, as compared to something that is much
less tangible, like fragmentation percentages and minor I/O throughput
improvements. I used to work at a large, worldwide web hosting company,
and I could see that making a case to management for purchasing Reiser4
compression would be pretty easy for our shared servers. Instantly
freeing up large amounts of disk space (where .html/.php files were the
vast majority) would save huge amounts of money on disk drives,
especially since most of the servers used RAID1 and adding new drives
was a huge pain in the neck. Making a case to purchase a repacker would
be much, much more difficult.

See, customers who used lots of CPU were easy to up-sell to a dedicated
server because page load times were tangible, and if they didn't move we
would be forced to shut them off. However, customers who used gobs of
disk space were much more difficult to up-sell to dedicated servers
because it didn't affect them or other customers in a tangible
way. They wouldn't notice any difference by moving to a much more
expensive dedicated server.

I would like to see Namesys succeed and become incredibly profitable for
Hans, if nothing else for the fact that he has given a huge amount to
the open source community already. A profitable Namesys only means we'll
have a greater chance of seeing even more interesting stuff from them in
the future. 

-- 
Mike Benoit [EMAIL PROTECTED]




Re: reiser4 status (correction)

2006-07-22 Thread David Masover

Mike Benoit wrote:



Could you not also write a small little app that gathers all kinds of
stats about a file system and sends it to a Namesys server in hopes of
finding better statistical data? I'm sure there are thousands of users


Assuming the results are all made available, essentially public domain. 
 If it becomes "improve Reiser" instead of "improve filesystems", then 
only fans of Reiser will do it.



A much better approach in my opinion would be to have Reiser4 perform
well in the majority of cases without the repacker, and sell the
repacker to people who need that extra bit of performance. If I'm not
mistaken this is actually Hans's intent.


Hans?
Yes, that's the idea.  Only sysadmins of large corps are likely to buy. 
We throw in service and support as well for those who purchase it.



Personally, as much as I would like it all to be free, I think I would
be much more willing to pay for compression/encryption (on both servers
and desktops) than I would be for a repacker. Hard disks cost money, and
if I can compress the vast majority of my data and save on purchasing a
new hard disk, that is well worth it. I also have some important data
that I would really like to encrypt, which is also worth spending money
on. But gaining ~10% in performance probably isn't worth spending
money on, as I most likely wouldn't notice a difference in my day-to-day
life, unless my server was incredibly busy. 


Assuming the performance gain is only 10%, you may be right.  Still, 
faster disks, controllers, buses, and CPUs also cost money.


I would be willing to pay for both, even, if the price was reasonable (I 
think there was a scheme based on amount of space placed under the FS?), 
and if I could be guaranteed updates (bug fixes, new kernels, etc) for 
at least the functionality that I've paid for, with no additional cost.


Reiser4 lazy writes make a huge difference on a laptop, and my current 
laptop is a Mac.  That means that around when the next Mac OS rolls out, 
it will be perfectly reasonable to spend some $50 or $100 to make my 
Linux faster instead of my Mac OS, especially since I'm trying to 
migrate off of Mac OS as much as possible.  (I miss package management.)


One more suggestion:  Maybe make it free for non-commercial use only? 
And by that I mean, based on how the FS is actually being used.  I don't 
know if piracy is or has been a problem for Namesys, but at least the 
economics of it makes sense:  A fifteen year old hacker won't want to 
pay for an FS, but might have a lot to contribute.  But if your main 
market is large servers, have those people pay -- they are running their 
business off your FS.



There is no doubt there is a market for a repacker, but I think people
are much more likely to spend money on something that is immediately
tangible, like disk space instantly being freed up by compression, or
data instantly being encrypted. As compared to something that is much


The compression will probably mostly be about speed.  Remember, if we're 
talking about people who want to see tangible, visceral results, we're 
probably also talking about end-users.  And trust me, the vast majority 
of most of my data (as an end-user) is not very compressible.


Ok, I lied:  I love games, and you can make a very modern-looking game 
in 96K:


http://produkkt.abraxas-medien.de/kkrieger

But while that could be seen as a kind of compression, it's done by 
hand, and is fairly irrelevant to filesystems.


No, mostly we're talking about things like office documents, the 
majority of which fit in less than a gigabyte, and multimedia (music, 
movies, games) which will gain very little from compression.  If 
anything, the benefit would be mostly in compressing software.



less tangible, like fragmentation percentages and minor I/O throughput
improvements. I used to work at a large, worldwide web hosting company,
and I could see that making a case to management for purchasing Reiser4
compression would be pretty easy for our shared servers. Instantly
freeing up large amounts of disk space (where .html/.php files were the
vast majority) would save huge amounts of money on disk drives,
especially since most of the servers used RAID1 and adding new drives
was a huge pain in the neck. Making a case to purchase a repacker would
be much, much more difficult.


Hmm, the problem is, if storage space is really the big deal, it's been 
done before, and some of these efforts are still usable and free:


http://parallel.vub.ac.be/~johan/compFUSEd/
http://www.miio.net/fusecompress/
http://north.one.pl/~kazik/pub/LZOlayer/
http://apfs.humorgraficojr.com/apfs_ingles.html

And while we're on the topic, here's an FS that does unpacking of 
archives, probably about the same way we imagined it in Reiser4 
pseudofiles/magic:


http://www.nongnu.org/unpackfs/

But regardless, as far as I can tell, the only real, tangible benefit of 
using Reiser4 compression instead of one of those four FUSE filesystems 
is speed.

portage tree (Was: Re: reiser4 status (correction))

2006-07-22 Thread Christian Trefzer
Hi,

The portage tree is such a fine testing object since it should be sort
of a best case scenario for reiser filesystems, and needs no real
backup in case of a screwup during tests.

I've been on Gentoo for years now, used reiser3 since the days when you
had to patch it into a 2.2 kernel, and tried out reiser4 on my portage
tree first of all. During my comparison, reiser3 + notail resulted in
huge amounts of wasted disk space; obviously it is not very smart to use
up a whole 4k block for a file of only a few bytes. reiser3 with packed
tails was a lot better, but still there was a remarkable difference from
there to reiser4, and I am only talking storage efficiency here, which
directly translates to the amount of I/O necessary to get the same data
all the way to RAM and CPU. The reiser4 dev team did a great job there!

With reiser3, I made another observation wrt. fragmentation, most of
which should be alleviated by reiser4 and its delayed allocation. Over
time, doing package database updates and querying for pending updates to
the local machine became slower. Packing up everything in a tarball,
mkfs, mount with "-o noatime", unpack usually fixed the problem. Since
the performance reduction in reiser4 is a lot less (I can hardly tell if
it makes a difference before vs. after the backup/restore repacking) I
hardly do it any more, but once in a while I still use my script,
initially written for the comparison test.

Attached is the shell script I used for comparing various mount options
for ReiserFS v3 and later v4. It is capable of converting between
ext2|3, reiser3|4 and xfs, provided the target fs stores the data
efficiently enough that everything fits inside. (I had interesting
results when moving from reiser3 to ext3 for testing, esp. wrt. the
portage tree ;)

The script falls short at handling arbitrary mount options, but may be
trivially edited to use any desired options for a single use. Reading
and understanding it is recommended before feeding your valuable data to it.
I hope somebody may find it useful, or at least inspiring. Any comments
appreciated!

Kind regards,
Chris
#!/bin/sh

# ! WARNING !
# As always, BE CAREFUL AND KNOW WHAT YOU DO WHEN RUNNING THIS PROGRAM!
# At best, read the code, understand what it does, and take appropriate
# precautions. Backups of valuable data are a requirement anyway, so go 
# for it BEFORE you try this out!
#
#
# SYNOPSIS:
# fsconvert.sh <fstype> <compression method> <mountpoint>
# 
# where fstype may be any of ext[2|3], reiser[fs|4] or xfs
# and compression method is a choice of compress, gzip or bzip2.
#
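#
# EXAMPLE (an illustrative invocation, not part of the original synopsis;
# it assumes /usr/portage is mounted and listed in /etc/fstab, as the
# script requires):
#   ./fsconvert.sh reiser4 gzip /usr/portage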
#
# USE:
# This script was initially intended to re-create old reiserfs 3 
# filesystems for performance reasons, and can reasonably be considered
# useful when run once every year or two for any filesystem under heavy
# read/write load. Simply give the existing filesystem type as the fstype
# argument.
#
# Though it was written for that single purpose at first, it was recently
# adapted for filesystem type migration kind of operations, as it did
# never care about the fs type in the beginning and now takes the
# destination fs type as an argument.
#
# The script REQUIRES the concerned fs to be mounted, to have an entry in
# /etc/fstab, to have no further active mountpoints in a subdirectory,
# enough free disk space in $archdir to back up data and of course the
# required tools (mkfs.fstype, tar, compression tool of choice) to do
# all the magic.
#
# Please send any comment, modification, failure report etc. to
# ctrefzer AT gmx DOT de
#
# Have fun!
# Chris

# Changelog:
#
# 2005-10-03
# Enhanced commandline interface to support multiple target file system types
# and compression methods.
# Filesystems supported so far:
#  - ext2 / 3
#  - reiser 3 / 4
#  - xfs
# 
# Compression methods which seemed appropriate:
#  - none (uncompressed archive for almost no CPU load)
#  - compress (quite fast even on old machines)
#  - gzip (realtime on new machines, yet impressive I/O reduction)
#  - bzip2 (in case of tight disk space, but takes ages wrt. the others)

### Configure here:

# where to store the archives:
archdir=${PWD}/.fsconvert


# Sane defaults for compression levels (and guidelines): use fast gzip on 
# modern CPUs where GZIP -3 gives better compression than LZW, yet still at
# realtime, whereas BZIP2 should only be used when disk space is _really_
# critical - it will take forever. LZW (compress) is advised on not-so-recent
# machines to save some I/O without slowing things down. A nifty compromise
# may be GZIP -1 on P2/P3 class machines.

export GZIP=-1
export BZIP2=-9


### No modifications required below here!


### Some helper functions:

function e_report {
echo -en "${1}..."
}

function e_done {
# Paranoia is good! It's all about the data...
sync
echo " Done."
}

function e_fail {
echo " FAILED!"
exit ${1}
}

function e_usage {
echo "USAGE: ${0} <fstype> <compressiontype> <mountpoint>"
exit 1
}

if [ -z "${1}" ]
then

Re: reiser4 status (correction)

2006-07-22 Thread Mike Benoit
On Sat, 2006-07-22 at 07:34 -0500, David Masover wrote:

 The compression will probably mostly be about speed.  Remember, if we're
 talking about people who want to see tangible, visceral results, we're
 probably also talking about end-users.  And trust me, the vast majority
 of most of my data (as an end-user) is not very compressible.
 

Sure, mine too. Between the many gigs of MP3s and movies I have stored
on my HD, only about 10-20GB is the OS, email, and documents/source
code. Even just compressing that small portion, though, I could probably
save between 5-10GB. The difference is that I can do a df before and
a df after, and instantly see I got my money's worth. Same with
encryption. 
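
The check Mike describes, spelled out -- a sketch with arbitrary paths:

  df -h /home     # note "Used" before
  # ... compress the data, or enable FS-level compression ...
  df -h /home     # the drop in "Used" is the tangible saving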

With the repacker it is much more difficult (for average users) to time
how long a program takes to load some file (or launch) before the
repacker and after. Especially since caching comes into play. 

Also, according to this little poll on one of the compressed FUSE sites
you linked to, more people are looking to compression for space saving
than for speed:
http://parallel.vub.ac.be/~johan/compFUSEd/index.php?option=pollstask=resultspollid=31


 
 No, mostly we're talking about things like office documents, the 
 majority of which fit in less than a gigabyte, and multimedia (music, 
 movies, games) which will gain very little from compression.  If 
 anything, the benefit would be mostly in compressing software.
 
  less tangible, like fragmentation percentages and minor I/O throughput
  improvements. I used to work at a large, worldwide web hosting company,
  and I could see that making a case to management for purchasing Reiser4
  compression would be pretty easy for our shared servers. Instantly
  freeing up large amounts of disk space (where .html/.php files were the
  vast majority) would save huge amounts of money on disk drives,
  especially since most of the servers used RAID1 and adding new drives
  was a huge pain in the neck. Making a case to purchase a repacker would
  be much, much more difficult.
 
 Hmm, the problem is, if storage space is really the big deal, it's been 
 done before, and some of these efforts are still usable and free:
 
 http://parallel.vub.ac.be/~johan/compFUSEd/
 http://www.miio.net/fusecompress/
 http://north.one.pl/~kazik/pub/LZOlayer/
 http://apfs.humorgraficojr.com/apfs_ingles.html
 
 And while we're on the topic, here's an FS that does unpacking of 
 archives, probably about the same way we imagined it in Reiser4 
 pseudofiles/magic:
 
 http://www.nongnu.org/unpackfs/
 
 But regardless, as far as I can tell, the only real, tangible benefit of 
 using Reiser4 compression instead of one of those four FUSE filesystems 
 is speed.  Reiser4 would compress/decompress when actually hitting the 
 disk, not just the FS, and it would also probably use in-kernel 
 compression, rather than calling out to userspace on every FS operation.
 
 But you see, if you're talking about speed, 10% is a respectably big 
 improvement, so I could see selling them on a repacker at the same time.
 

Most of those FUSE file systems you linked to scare me. This is from the
APFS web page:

Permissions do not work!
Do not worry, it is not your fault, permissions are not implemented yet.
It is like all files had rwxrwxrwx mode.

I have lost all my data! How do I get it back?
From the backup, obviously.

FUSE is great, but can it even come close to matching the performance of
in-kernel file systems? Not only that, but if you want to compress a
directory you have to go through about a 12-step process of moving the
files, setting up a mount point, and moving the files back. 
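
Roughly the dance Mike means -- a sketch; treat the fusecompress line as
illustrative, since the exact command line varies between these FUSE
projects:

  mkdir /var/www.store                  # backing store for compressed data
  fusecompress /var/www.store /mnt/c    # mount the compressed view
  mv /var/www/* /mnt/c/                 # push the files through the mount
  # ... then point the web server, or a bind mount, at /mnt/c ...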

Will Reiser4 not allow us to mount with compression enabled and then
enable/disable compression on a per-file/per-directory basis? 


 Maybe bundles are a good idea...
 Maybe there should be a Reiser4 Whitepaper Value Pack, once everything 
 on the whitepaper is done?
 
  See, customers who used lots of CPU were easy to up-sell to a dedicated
  server because page load times were tangible, and if they didn't move we
  would be forced to shut them off. However, customers who used gobs of
  disk space were much more difficult to up-sell to dedicated servers
  because it didn't affect them or other customers in a tangible
  way. They wouldn't notice any difference by moving to a much more
  expensive dedicated server.
 
 Sounds like more a marketing problem than a technical one.  Couldn't you 
 just charge more on the virtual server?  Or start charging by the megabyte?

You could, and we did charge by the megabyte, but only after they exceeded
the limit of their package. However, web hosting is a fiercely
competitive market, so we were constantly adjusting our packages to
include more disk space, more bandwidth, more features for the same
amount, or lower prices, just to compete. The way many shared hosting
companies work is that each server is waaay oversold, at least in the disk
space department. We would have anywhere from 500-1500 accounts 

Re: reiser4 status (correction)

2006-07-22 Thread Maciej Sołtysiak
Hello Andreas,

Saturday, July 22, 2006, 1:06:54 AM, you wrote:

 On 17:45 Fri 21 Jul , David Masover wrote:
 Question, then:  Can the ext2 defrag work on a raw ext3 partition, without 
 having to convert it first?

 Dunno, but I don't think so
I tried that once, back in 2002 I think. defrag seemed to be doing things
fine, but trashed the filesystem very badly. The good thing is that I was
able to fsck it and recover most of my files. However, it is not something
you should try.

-- 
Best regards,
Maciej




Re: reiser4 status (correction)

2006-07-22 Thread David Masover

Mike Benoit wrote:


code. Even just compressing that small portion, though, I could probably
save between 5-10GB. The difference is that I can do a df before and
a df after, and instantly see I got my money's worth. Same with
encryption. 


In the case of encryption, it's also got competition.  There are two 
FUSE filesystems that do crypto, and there's cryptoloop/dm-crypt, which 
you need anyway for encrypted swap.


True, it's nowhere near as nice, but it is functional.  You would think 
one of these would be easier to develop to where it's useful.  And if 
you're encrypting it, you're already accepting a performance hit.


So again, the Reiser4 advantage here is all speed.


http://parallel.vub.ac.be/~johan/compFUSEd/index.php?option=pollstask=resultspollid=31


Of course they are, or they wouldn't use FUSE.

But if that's really true, why wouldn't a FUSE driver help you?


Most of those FUSE file systems you linked to scare me. This is from the
APFS web page:



I have lost all my data! How do I get it back?
From the backup, obviously.


To be fair, that's what you get from any FS.

But how hard would it be to make a FUSE filesystem work properly?  How 
hard would it be to get a Reiser4 plugin to work properly?



FUSE is great, but can it even come close to matching the performance of
in-kernel file systems?


Performance again!


Not only that, but if you want to compress a
directory you have to go through about a 12 step process of moving the
files, setting up a mount point, and moving the files back. 


You listed 3 steps.  Besides, I don't see anything stopping you from 
modifying one of these to selectively compress things, the way Reiser4 
would.  So, put your entire FS under a FUSE system, then configure which 
directories you want compressed.


Again, the drawback is huge gobs of performance.  It just seems natural 
to use the repacker if you're going to use any Reiser4-based replacement 
for these.



You could, and we did charge by the megabyte, but only after they exceeded
the limit of their package. However, web hosting is a fiercely
competitive market, so we were constantly adjusting our packages to
include more disk space, more bandwidth, more features for the same
amount, or lower prices, just to compete. The way many shared hosting
companies work is that each server is waaay oversold, at least in the disk
space department.


Ah, so I guess the advantage of a more expensive, dedicated box is that 
you know no one's overselling it?


Personally, I get a little sick of the insane amounts of overselling 
that happen -- you just know it's going to come back and bite you in the 
ass.  This is happening right now with bandwidth, and overselling is 
pretty much solely responsible for the whole Net Neutrality mess -- it 
would be a complete non-issue if "5 megabits to your house!" was 
backed up by an actual 5 megabits reserved for me, or if they sold them 
as "2-5 megabits", where 2 is guaranteed, and 5 is what you get when no 
one else is using it.


I'm a bit skeptical of my local ISP's Fiber To The Home initiative, 
because I can't imagine they have enough spare upstream bandwidth just 
lying around.


I mean, if you actually have enough bandwidth, you don't want or need 
your ISP to do QoS or prioritizing for you -- you can just do it 
yourself.  Personally, ssh packets would be of a much higher priority to 
me than anything else...



That's pretty much the only way you make money with $10-20/month
packages.


Yeah, I know, hard for an honest guy to compete...


I don't doubt the benefits of the repacker, but from a business
perspective the repacker is something that runs transparently in the
background; once you install it, things magically speed up, and then you
never hear from it again as it does its job. Out of sight, out of mind.
Whereas the compression/encryption plugins are always in your face: every
time you run df, or enter a passphrase to gain access to your files, you
know they're there and working. That's something you'll tell your friends
about. After you first run it, the repacker just fades away into the
background and you forget about it.


Ah.  This might explain the success of things like Ruby On Rails.  Once 
it's done, the insane amount of CPU required to run a high-level 
interpreted language (versus even something semi-interpreted like perl) 
is out of sight, out of mind.  But when you first set it up, the savings 
in development time are immediately obvious, in your face.


Although I would want to find a way to avoid typing a passphrase every 
time -- maybe keep the key on a USB keychain.  Passphrases would just 
annoy users.  Unless, of course, you could tie it to logon -- assuming 
you can actually change the passphrase later...



Charging only for commercial use of the repacker/compression/encryption
plugin would be a great middle ground. 


Good, I'm glad it wasn't a completely insane idea.


Re: reiser4 status (correction)

2006-07-21 Thread Hans Reiser
David Masover wrote:

 Hans Reiser wrote:

 On a more positive note, Reiser4.1 is getting closer to release


 Good news!  But it's been awhile since I've followed development, and
 the homepage seems out of date (as usual).  Where can I find a list of
 changes since v4?

 By "out of date", I mean things like this:

 "Reiser4.1 will modify the repacker to insert controlled air holes,
 as it is well known that insertion efficiency is harmed by overly
 tight packing."

Sigh, no, the repacker will probably be after 4.1

The list of tasks for zam looks something like:

fix bugs that arise

debug read optimization code (CPU reduction only, has no effect on IO),
1 week est.  (would be nice if it was less)

review compression code 1 day per week until it ships.

fix fsync performance (est. 1 week of time to make post-commit writes
asynchronous, maybe 3 weeks to create fixed-reserve for write twice
blocks, and make all fsync blocks write twice)

write repacker (12 weeks).

I am not sure that putting the repacker after fsync is the right choice

The task list for vs looks like:

* fix bugs as they arise.

* fix whatever lkml complains about that either seems reasonable, or
that akpm agrees with.

* Help edward get the compression plugins out the door.

* Improve fsck's time performance.

* Fix any V3 bugs that Chris and Jeff don't fix for us.  Which reminds
me, I need to check on whether the 90% full bug got fixed



Re: reiser4 status (correction)

2006-07-21 Thread Sarath Menon

 Sigh, no, the repacker will probably be after 4.1
 The list of tasks for zam looks something like:
 fix bugs that arise
 debug read optimization code (CPU reduction only, has no effect on IO),
 1 week est.  (would be nice if it was less)
 review compression code 1 day per week until it ships.
 fix fsync performance (est. 1 week of time to make post-commit writes
 asynchronous, maybe 3 weeks to create fixed-reserve for write twice
 blocks, and make all fsync blocks write twice)
 write repacker (12 weeks).

Well, this is free software, and is not backed up with a million dollars of 
funding. Trust me, these guys are doing a great job with the options they 
have. 

This is not flamebait, but I personally feel that this is already very good 
considering the resources that the entire team has. I am more than willing to 
wait, even though I have had occasional bad experiences with reiser{3,4}. 

The bottom line is that all versions of reiserfs have had great performance, 
even if each has had its own hiccups (which fs doesn't have??) and, simply put, 
it is a fantastic piece of code.



-- 
My mother loved children -- she would have given anything if I had been one.
-- Groucho Marx


Re: reiser4 status (correction)

2006-07-21 Thread David Masover

Hans Reiser wrote:


I am not sure that putting the repacker after fsync is the right choice


Does the repacker use fsync?  I wouldn't expect it to.

Does fsync benefit from a properly packed FS?  Probably.

Also, while I don't expect anyone else to be so bold, there is a way 
around fsync performance:  Disable it.  Patch the kernel so that any 
fsync call from userspace gets ignored, but lie and tell them it worked. 
 Basically:


 asmlinkage long sys_fsync(unsigned int fd)
 {
-   return do_fsync(fd, 0);
+   return 0;   // do_fsync(fd, 0);
 }

In order to make this sane, you should have backups and an Uninterruptible 
Power Supply.  In the case of a loss of power, the box should notice and 
immediately sync, then either shut down or software suspend.  Any UPS 
battery should be able to handle the amount of time it takes to shut the 
system off.
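
A sketch of such a power-loss hook -- UPS daemons like apcupsd or nut can
run a script on power events, though the exact hook mechanism differs per
daemon:

  #!/bin/sh
  # Run when the UPS reports loss of mains power.
  sync                 # flush everything fsync was told was already safe
  shutdown -h now      # halt before the battery runs out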


Since anything mission critical should have backups and a UPS anyway, 
the only problem left is what happens if the system crashes.  But system 
crashes are something you have to plan for anyway.  Disks fail -- stuff 
happens.  RAID won't save you -- the RAID controller itself will 
eventually fail.


So suppose you're running some very critical server -- for now, chances 
are it's running some sort of database.  In this case, what you really 
want is database replication.  Have at least two servers up and running, 
and consider the transaction complete not when it hits the disk, but 
when all running servers acknowledge the transaction.  The RAM of two 
boxes should be safer than the disk of one.


What about a repacker?  The best I can do to hack around that is to 
restore the whole box from backup every now and then, but this requires 
the box to be down for a while -- it's a repacker, but not an online one. 
 In this case, the solution would be to have the same two servers 
(replicating databases), and bring first one down, and then the other.


That would make me much more nervous than disabling fsync, though, 
because now you only have the one server running, and if it goes down... 
 And depending on the size of the data in question, this may not be 
feasible.  It seems entirely possible that in some setups like this, the 
only backup you'd be able to afford would be in the form of replication.


In my own personal case, I'd prefer the repacker to tuning fsync.  But 
arguments could be made for both.


Re: reiser4 status (correction)

2006-07-21 Thread Mike Benoit
On Fri, 2006-07-21 at 02:44 -0600, Hans Reiser wrote:
 fix fsync performance (est. 1 week of time to make post-commit writes
 asynchronous, maybe 3 weeks to create fixed-reserve for write twice
 blocks, and make all fsync blocks write twice)
 
 write repacker (12 weeks).
 
 I am not sure that putting the repacker after fsync is the right choice
 

Tuning fsync will fix the last wart on Reiser4 as far as benchmarks are
concerned, won't it? Right now Reiser4 looks excellent on the benchmarks
that don't use fsync often (mongo?), but last I recall the fsync
performance was so poor it overshadowed the rest of the performance. It
would also probably be more useful to a much wider audience, especially
if Namesys decides to charge for the repacker.

ReiserV3 is used on a lot of mail and squid proxy servers that deal with
many small files, and these workloads usually call fsync often. My
guess is that ReiserV3 users are the most likely to migrate to Reiser4,
because they already know the benefits of using a Reiser file system.
But neglecting fsync performance will just put a sour taste in their
mouths. 

On top of that, I don't see how a repacker would help these workloads
much, as the files usually have a high churn rate. Packing them would
probably be a net loss, as the files would just be deleted within 24 hours
and replaced by new ones.

Very few people will (or should) disable fsync as David suggests, I
don't see that as a solution at all, even if it is temporary.

-- 
Mike Benoit [EMAIL PROTECTED]




Re: reiser4 status (correction)

2006-07-21 Thread David Masover

Mike Benoit wrote:


Tuning fsync will fix the last wart on Reiser4 as far as benchmarks are
concerned, won't it? Right now Reiser4 looks excellent on the benchmarks
that don't use fsync often (mongo?), but last I recall the fsync
performance was so poor it overshadowed the rest of the performance. It
would also probably be more useful to a much wider audience, especially
if Namesys decides to charge for the repacker.


If Namesys does decide to charge for the repacker, I'll have to consider 
whether it's worth it to pay for it or to use XFS instead.  Reiser4 
tends to become much more fragmented than most other Linux FSes -- 
purely subjective, but probably true.



ReiserV3 is used on a lot of mail and squid proxy servers that deal with
many small files, and these workloads usually call fsync often.

[...]

But neglecting fsync performance will just put a sour taste in their
mouths. 


So will neglecting fragmentation, only worse.  At least fsync is slow up 
front.  Fragmentation will be slow much farther in, when the mailserver 
has already been through one painful upgrade.  Charging for the repacker 
just makes it worse.



On top of that, I don't see how a repacker would help these workloads
much, as the files usually have a high churn rate. Packing them would
probably be a net loss, as the files would just be deleted within 24 hours
and replaced by new ones.


Depends.  Some will, some won't.  My IMAP server does have a lot of 
churning, but there's also the logs (which stay for at least a month or 
two before they rotate out), and since it's IMAP, I do leave quite a lot 
of files alone.


v3 is also used on a lot of web servers, at least where I used to work 
-- some areas will be changing quite a lot, and some areas not at all. 
Changing a lot means fragmentation will happen, not changing at all 
means repacking will help.


These issues may be helped by partitioning, if you know how you're going 
to split things up.  But then, how do you partition in the middle of a 
squid server?  A lot of people visit the same sites every day, checking 
for news, but that means plenty of logos, scripts, and other things 
won't change -- but plenty of news articles will change every couple hours.



Very few people will (or should) disable fsync as David suggests, I
don't see that as a solution at all, even if it is temporary.


I guess the temporary solution is to incur a pretty big performance hit. 
 But it comes back to, which is more of a performance problem, fsync or 
fragmentation?


And I really would like to hear a good counter-argument to the one I've 
given for disabling fsync.  But even if we assume fsync must stay, do we 
have any benchmarks on fragmentation versus fsync?


But maybe it's best to stop debating, since both will be done 
eventually, right?


Re: reiser4 status (correction)

2006-07-21 Thread Mike Benoit
On Fri, 2006-07-21 at 16:06 -0500, David Masover wrote:
 Mike Benoit wrote:
 
  Tuning fsync will fix the last wart on Reiser4 as far as benchmarks are
  concerned, won't it? Right now Reiser4 looks excellent on the benchmarks
  that don't use fsync often (mongo?), but last I recall the fsync
  performance was so poor it overshadowed the rest of the performance. It
  would also probably be more useful to a much wider audience, especially
  if Namesys decides to charge for the repacker.
 
 If Namesys does decide to charge for the repacker, I'll have to consider 
 whether it's worth it to pay for it or to use XFS instead.  Reiser4 
 tends to become much more fragmented than most other Linux FSes -- 
 purely subjective, but probably true.
 

I would like to see some actual data on this. I haven't used Reiser4 for
over a year, and when I did it was only to benchmark it. But Reiser4
allocates on flush, so in theory this should decrease fragmentation, not
increase it. Due to this I question what you are _really_ seeing -- or is
it perhaps a bug in the allocator? Why would XFS or any other
multi-purpose file system resist fragmentation noticeably more than
Reiser4 does?

I don't think the repacker is designed to be a must-have for every
Reiser4 installation. If it were, I would consider Reiser4 to be
seriously flawed. Instead I think it is simply designed to improve
certain workloads that may cause high fragmentation, in hopes of keeping
I/O speeds at their peak. 

Am I correct in this assumption Hans?
 
No Linux file system that I'm aware of has a defragmenter, but they DO
become fragmented, just not nearly as bad as FAT32 used to when MS created
their defragmenter. The highest non-contiguous percentage I've seen with
EXT3 is about 12%; FAT32 I have seen over 50%, and NTFS over 30%. In
fact I'm running into a fragmentation issue with ReiserV3 right now
that Jeff is working on, but it is more of a worst-case-scenario issue,
not a regular occurrence.

For normal workloads I doubt you would notice much difference at all
by using a repacker -- 10% maybe? Which is one of the reasons you probably
haven't seen a repacker for EXT2/3, even though I'm sure it would
improve performance for some people.

-- 
Mike Benoit [EMAIL PROTECTED]




Re: reiser4 status (correction)

2006-07-21 Thread Andreas Schäfer
On 14:37 Fri 21 Jul , Mike Benoit wrote:
 No Linux file system that I'm aware of has a defragmenter, but they DO
 become fragmented, just not nearly as bad as FAT32 used to when MS created
 their defragmenter.

Forgotten ext2? ;-) Funny thing: if your ext3 got too fragmented, you could
convert it back to ext2, defrag, and convert it to ext3 again. All of
this can be done in place, i.e. without moving the data to other
partitions etc.
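
Spelled out, the round trip looks something like this -- a sketch that
assumes the old ext2 defragmenter, e2defrag, is installed; it must be run
on an unmounted device, and you should back up first:

  umount /dev/hda3
  tune2fs -O ^has_journal /dev/hda3   # ext3 -> ext2: drop the journal
  e2fsck -f /dev/hda3
  e2defrag /dev/hda3                  # the old ext2 defrag tool
  tune2fs -j /dev/hda3                # ext2 -> ext3: re-add the journal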

-Andreas




Re: reiser4 status (correction)

2006-07-21 Thread David Masover

Mike Benoit wrote:

On Fri, 2006-07-21 at 16:06 -0500, David Masover wrote:

Mike Benoit wrote:


Tuning fsync will fix the last wart on Reiser4 as far as benchmarks are
concerned, won't it? Right now Reiser4 looks excellent on the benchmarks
that don't use fsync often (mongo?), but last I recall the fsync
performance was so poor it overshadowed the rest of the performance. It
would also probably be more useful to a much wider audience, especially
if Namesys decides to charge for the repacker.
If Namesys does decide to charge for the repacker, I'll have to consider 
whether it's worth it to pay for it or to use XFS instead.  Reiser4 
tends to become much more fragmented than most other Linux FSes -- 
purely subjective, but probably true.




I would like to see some actual data on this. I haven't used Reiser4 for
over a year, and when I did it was only to benchmark it. But Reiser4
allocates on flush, so in theory this should decrease fragmentation, not
increase it. Due to this I question what you are _really_ seeing -- or is
it perhaps a bug in the allocator? Why would XFS or any other
multi-purpose file system resist fragmentation noticeably more than
Reiser4 does?


Maybe not XFS, but in any case, Reiser4 fragments more because of how 
its journaling works.  It's the wandering logs.


Basically, when most Linux filesystems allocate space, they do try to 
allocate it contiguously, and it generally stays in the same place. 
With ext3, if you write to the middle of a file, or overwrite the entire 
file, you're generally going to see your writes be written once to the 
journal, and then again to the same place the file originally was.


Similarly, if you delete and then create a bunch of small files, you're 
generally going to see the new files created in the same place the old 
files were.


With Reiser4, wandering logs means that rather than write to the 
journal, if you write to the middle of the file, it writes that chunk to 
somewhere else on the disk, and somehow gets it down to one atomic 
operation where it simply changes the file to point to the new location 
on disk.  Which means if you have a filesystem that is physically laid 
out on disk like this (for simplicity, assume it only has a single file):


# is data
* is also data
- is free space

#####*****------

When you try to write in the middle (the '*' chars) -- let's say we're 
changing them to '%' chars, this happens:


#####*****%%%%%-

Once that's done, the file is updated so that the middle of it points to 
the fragment in the new location, and the old location is freed:


#####-----%%%%%-



Keep in mind, because of lazy writes, it's much more likely for the 
whole change to happen at once.  Here's another example:


#####-----------

Let's say we just want to overwrite the file with another one of the 
same length:


#####%%%%%------

then, commit the transaction:

-----%%%%%------

You see the problem?  You've now split the free space in half. 
Realistically, of course, it wouldn't be by halves, but you're basically 
inserting random air holes all over the place, and your FS is becoming 
more like foam, taking up more of the free space, until you can no 
longer use the free space.  In the above example, if we then have to 
come write some huge file, it looks like this:


*****%%%%%******

Split right in half.  Now imagine this effect multiplied by hundreds or 
thousands of files, over time...


This is why Reiser4 needs a repacker.  While it's fine for larger files 
-- I believe after a certain point, it will write twice, so looking at 
our first example:



#####*****------

Write to a new, temporary place:

#####*****%%%%%-

Write back to the original place:

#####%%%%%%%%%%-

Complete the transaction and free the temporary space:

#####%%%%%------


This technique is what other journaling filesystems use, and it also 
means that writing is literally twice as slow as on a non-journaling 
filesystem, or on one with wandering logs like Reiser4.  But, it's a 
practical necessity when you're dealing with some 300 gig MySQL database 
of which only small 10k chunks are changing.  Taking twice as long on a 
10k chunk won't kill anyone, but fragmenting your 300 gig database (on a 
320 gig partition) will kill your performance, and will be very 
difficult to defragment.


But on smaller files, it would be very beneficial if we could allow the 
FS to slowly fragment (to foam-ify, if you will) and defrag once a week. 
 The amount of speed gained in each write -- and read, if it's not 
getting too awful during that week -- definitely makes up for having to 
spend an hour or so defragmenting, especially if the FS can be online at 
the time.
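
If the repacker does ship as an online tool, the natural deployment would
be a weekly cron job; "repack-reiser4" below is a placeholder name, not an
existing binary:

  # /etc/cron.weekly/repack  -- hypothetical
  #!/bin/sh
  repack-reiser4 /dev/hda3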


And you can probably figure out an optimal time to wait before 
defragmenting, since your biggest fragmentation problems happen when the 
chunk of contiguous space at the end of the 

Re: reiser4 status (correction)

2006-07-21 Thread David Masover

Andreas Schäfer wrote:

On 14:37 Fri 21 Jul , Mike Benoit wrote:

No Linux file system that I'm aware of has a defragmenter, but they DO
become fragmented, just not nearly as bad as FAT32 used to when MS created
their defragmenter.


Forgotten ext2? ;-) Funny thing: if your ext3 got too fragmented, you could
convert it back to ext2, defrag, and convert it to ext3 again. All of
this can be done in place, i.e. without moving the data to other
partitions etc.


Well, I know an ext3 partition can be mounted, unchanged, as ext2.  Of 
course, you had to cleanly unmount as ext3 first, and make sure you 
cleanly mount/unmount as ext2 before you try to mount as ext3 again...


Question, then:  Can the ext2 defrag work on a raw ext3 partition, 
without having to convert it first?


Re: reiser4 status (correction)

2006-07-21 Thread Andreas Schäfer
On 17:45 Fri 21 Jul , David Masover wrote:
 Question, then:  Can the ext2 defrag work on a raw ext3 partition, without 
 having to convert it first?

Dunno, but I don't think so




Re: reiser4 status (correction)

2006-07-21 Thread Mike Benoit
Your detailed explanation is appreciated, David, and while I'm far from a
file system expert, I believe you've overstated the negative effects
somewhat.

It sounds to me like you've gotten Reiser4's allocation process with
regard to wandering logs correct, from what I've read anyway, but I
think you've overstated its fragmentation disadvantage when compared
against other file systems.

I think the thing we need to keep in mind here is that fragmentation
isn't always a net loss. Depending on the workload, fragmentation (or at
least not tightly packing data) could actually be a gain. In cases where
you have files (like log files or database files) that constantly grow
over a long period of time, packing them tightly at regularly scheduled
intervals (or at all?) could cause more harm than good. 

Consider this scenario of two MySQL tables having rows inserted into each
one simultaneously, and let's also assume that the two tables were
tightly packed before we started the insert process.

1 = Data for Table1
2 = Data for Table2 

Tightly packed:

1111111122222222

Simultaneous inserts start:

1122112211221122

I believe this is actually what is happening to me with ReiserV3 on my
MythTV box. I have two recordings running at the same time, each writing
data at about 500kb/s, and once the drive has less than 10% free the
whole machine grinds to a screeching halt while it attempts to find free
space. The entire 280GB drive is full of files fragmented like this.
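
One way to put a number on this -- filefrag ships with e2fsprogs and uses
the FIBMAP ioctl, so it should work on reiserfs too; paths are examples:

  filefrag /myth/recordings/*.nuv

Output is of the form "file: N extents found"; thousands of extents per
recording would confirm the 1/2 interleaving sketched above.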
 
Allocate on flush alone would probably help this scenario immensely. 

The other thing you need to keep in mind is that database files are like
their own little mini-file system. They have their own fragmentation
issues to deal with (especially PostgreSQL). So in cases like you
described, where you are overwriting data in the middle of a file,
Reiser4 may be poor at doing this specific operation compared to other
file systems, but just because you overwrite a row that appears to be in
the middle of a table doesn't mean that the data itself is actually in
the middle of the table. If your original row is 1K, and you try to
overwrite it with 4K of data, it will most likely be put at the end of
the file anyway, and the original 1K of data will be marked for
overwriting later on. Isn't this what myisampack is for?
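
For what it's worth: myisampack compresses MyISAM tables into a read-only
packed format, while reclaiming the overwrite holes described here is
closer to what OPTIMIZE TABLE does. Illustrative invocations, with
made-up db/table names:

  myisampack /var/lib/mysql/mydb/mytable.MYI   # pack; table becomes read-only
  mysql -e 'OPTIMIZE TABLE mydb.mytable'       # defragment, reclaim deleted rows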

So while I think what you described is ultimately correct, I believe
the extreme negative effects from it to be a corner case, and probably not
representative of the norm. I also believe that other Reiser4
improvements would outweigh this drawback to wandering logs, again in
average workloads. 

So the original point that I was trying to make comes back to the fact
that I don't believe Reiser4 _needs_ a repacker to maintain decent
performance. The fact that it will have a repacker just makes it that
much better for people who might need it. If Hans didn't think he could
make money off it, it probably wouldn't be so high on his priority list?
We can't fault him for that though.

Like you mentioned, if Reiser4 performance gets so poor without the
repacker, and Hans decides to charge for it, I think that will turn away
a lot of potential users, as they could feel that this is a type of
extortion: get them hooked on something that only performs well for a
certain amount of time, then charge them money to keep it up. I also
think the community would write their own repacker pretty quickly in
response. 

A much better approach in my opinion would be to have Reiser4 perform
well in the majority of cases without the repacker, and sell the
repacker to people who need that extra bit of performance. If I'm not
mistaken this is actually Hans's intent. If Reiser4 does turn out to
perform much worse over time, I would expect Hans would consider it a
bug or design flaw and try to correct the problem however possible. 

But I guess only time will tell if this is true or not. ;)

On Fri, 2006-07-21 at 17:40 -0500, David Masover wrote:
 Maybe not XFS, but in any case, Reiser4 fragments more because of how 
 its journaling works.  It's the wandering logs.
 
 Basically, when most Linux filesystems allocate space, they do try to 
 allocate it contiguously, and it generally stays in the same place. 
 With ext3, if you write to the middle of a file, or overwrite the entire 
 file, you're generally going to see your writes be written once to the 
 journal, and then again to the same place the file originally was.
 
 Similarly, if you delete and then create a bunch of small files, you're 
 generally going to see the new files created in the same place the old 
 files were.
 
 With Reiser4, wandering logs means that rather than write to the 
 journal, if you write to the middle of the file, it writes that chunk to 
 somewhere else on the disk, and somehow gets it down to one atomic 
 operation where it simply changes the file to point to the new location 
 on disk.  Which means if you have a filesystem that is 

Re: reiser4 status (correction)

2006-07-21 Thread Hans Reiser
Mike Benoit wrote:


On top of that, I don't see how a repacker would help these workloads
much, as the files usually have a high churn rate. 

I think Reiserfs is used on a lot more than squid servers.  For them,
"80% of files don't move for long periods of time" is the usual industry
statistic.


Re: reiser4 status (correction)

2006-07-21 Thread David Masover

Mike Benoit wrote:

Your detailed explanation is appreciated, David, and while I'm far from a
file system expert, I believe you've overstated the negative effects
somewhat.

It sounds to me like you've gotten Reiser4's allocation process with
regard to wandering logs correct, from what I've read anyway, but I
think you've overstated its fragmentation disadvantage when compared
against other file systems.

I think the thing we need to keep in mind here is that fragmentation
isn't always a net loss. Depending on the workload, fragmentation (or at
least not tightly packing data) could actually be a gain. In cases where


defragmented != tightly packed.


you have files (like log files or database files) that constantly grow
over a long period of time, packing them tightly at regularly scheduled
intervals (or at all?) could cause more harm than good. 


This is true...


Consider this scenario of two MySQL tables having rows inserted into each
one simultaneously, and let's also assume that the two tables were
tightly packed before we started the insert process.

1 = Data for Table1
2 = Data for Table2 


Tightly packed:

1111111122222222

Simultaneous inserts start:

1122112211221122



Allocate on flush alone would probably help this scenario immensely. 


Yes, it would.  You'd end up with

1111111122222222

assuming they both fit into RAM.  And of course they could later be 
repacked.


By the way, this is the NTFS approach to avoiding fragmentation -- try 
to avoid fragmenting anything below a certain block size.  I, for one, 
would be perfectly happy if my large files were split up every 50 or 100 
megs or so.


The problem is when you get tons of tiny files and metadata stored so 
horribly inefficiently that things like Native Command Queuing are 
actually a huge performance boost.



The other thing you need to keep in mind is that database files are like
their own little mini-file system. They have their own fragmentation
issues to deal with (especially PostgreSQL).


I'd rather not add to that.  This is one reason to hate virtualization, 
by the way -- it's bad enough to have a fragmented NTFS on your Windows 
installation, but worse if the disk itself is a fragmented sparse file 
on Linux.



So in cases like you
described, where you are overwriting data in the middle of a file,
Reiser4 may be poor at doing this specific operation compared to other
file systems, but just because you overwrite a row that appears to be in
the middle of a table doesn't mean that the data itself is actually in
the middle of the table. If your original row is 1K, and you try to
overwrite it with 4K of data, it will most likely be put at the end of
the file anyway, and the original 1K of data will be marked for
overwriting later on. Isn't this what myisampack is for?


If what you say is true, isn't myisampack also an issue here?  Surely it 
doesn't write out an entirely separate copy of the file?


Anyway, the most common usage I can see for mysql would be overwriting a 
1K row with another 1K row, or dropping a row, or adding a wholly new 
row.  I may be a bit naive here...


But then, isn't there also some metadata somewhere which says things 
like how many rows you have in a given table?


And it's not just databases.  Consider BitTorrent.  The usual BitTorrent 
way of doing things is to create a sparse file, then fill it in randomly 
as you receive data.  Only if you decide to allocate the whole file 
right away, instead of making it sparse, you gain nothing on Reiser4, 
since writes will be just as fragmented as if it was sparse.


Personally, I'd rather leave it as sparse, but repack everything later.


So while I think what you described is ultimately correct, I believe
the extreme negative effects from it to be a corner case, and probably not
representative of the norm. I also believe that other Reiser4
improvements would outweigh this drawback to wandering logs, again in
average workloads. 


Depends on your definition of average.  I'm also speaking from 
experience.  On Gentoo, /usr/portage started out being insanely fast on 
Reiser4, because it barely had to seek at all -- despite being about 
145,000 small files.  I think it was maybe half that when I first put it 
on r4, but it's more than twice as slow now, and you can hear it thrashing.


Now, the wandering logs did make the rsync process pretty fast -- the 
entire thing gets rsync'd against one of the Gentoo mirrors.  For anyone 
using Debian, this is the equivalent of apt-get update.


Only now, this rsync process is not only entirely disk-bound, it's 
something like 10x as slow.  I have a gig of RAM, so at least it's fast 
once it's cached, but it's obviously horrendously fragmented.  I am not 
sure if it's individual files or directories, but it could REALLY use a 
repack.
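
For anyone wanting to quantify this, cold-cache timings before and after
a tar-out/mkfs/tar-in repack would do -- a sketch; /proc/sys/vm/drop_caches
needs kernel 2.6.16 or later:

  sync; echo 3 > /proc/sys/vm/drop_caches
  time find /usr/portage -type f > /dev/null   # cold metadata walk
  sync; echo 3 > /proc/sys/vm/drop_caches
  time emerge --sync                           # the rsync workload itself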


From what I remember of v3, it was never quite this bad, but it never 
started 

Re: reiser4 status (correction)

2006-07-20 Thread David Masover

Hans Reiser wrote:


On a more positive note, Reiser4.1 is getting closer to release


Good news!  But it's been awhile since I've followed development, and 
the homepage seems out of date (as usual).  Where can I find a list of 
changes since v4?


By "out of date", I mean things like this:

"Reiser4.1 will modify the repacker to insert controlled air holes, as 
it is well known that insertion efficiency is harmed by overly tight 
packing."


Am I to understand that this text hasn't changed since the repacker was 
essentially canceled/removed/commercialized?  Or is there actually a 
repacker somewhere usable, today, that is still scheduled to insert air 
holes by v4.1?


Removed LKML from CC for now, for no particular reason.  Add them back 
in, if you like.