Re: [patch] Re: assertion failed: can_hit_entd(ctx, s)

2006-08-30 Thread Alexander Zarochentsev
Hello,

On 30 August 2006 01:10, Andrew James Wade wrote:
 Hello Alexander,

 In addition to your patch, I've also applied the patch below. With
 these two patches the fs is much more stable for me.

That code was removed from reiser4 recently; the patch will be in the
next -mm kernel.

I knew there was a bug somewhere :) 


 However, something is holding a d_ref across the calls to
 reiser4_writepage. It's not clear to me that this is allowed so my
 patch may not be a full fix.

 Andrew Wade

 signed-off-by: [EMAIL PROTECTED]

 diff -rupN a/fs/reiser4/plugin/item/extent_file_ops.c b/fs/reiser4/plugin/item/extent_file_ops.c
 --- a/fs/reiser4/plugin/item/extent_file_ops.c        2006-08-28 11:30:33.000000000 -0400
 +++ b/fs/reiser4/plugin/item/extent_file_ops.c        2006-08-29 13:06:20.000000000 -0400
 @@ -1320,20 +1320,22 @@ static int extent_readpage_filler(void *
                                TWIG_LEVEL, CBK_UNIQUE, NULL);
                 if (result != CBK_COORD_FOUND) {
                         reiser4_unset_hint(hint);
 -                        return result;
 +                        goto out;
                 }
                 ext_coord->valid = 0;
         }
 
         if (zload(ext_coord->coord.node)) {
                 reiser4_unset_hint(hint);
 -                return RETERR(-EIO);
 +                result = RETERR(-EIO);
 +                goto out;
         }
         if (!item_is_extent(&ext_coord->coord)) {
                 /* tail conversion is running in parallel */
                 zrelse(ext_coord->coord.node);
                 reiser4_unset_hint(hint);
 -                return RETERR(-EIO);
 +                result = RETERR(-EIO);
 +                goto out;
         }
 
         if (ext_coord->valid == 0)
 @@ -1358,6 +1360,10 @@ static int extent_readpage_filler(void *
         } else
                 reiser4_unset_hint(hint);
         zrelse(ext_coord->coord.node);
 +
 +out:
 +        /* Calls to this function may be intermingled with VM writeback. */
 +        reiser4_txn_restart_current();
         return result;
  }



Thanks,
Alex.



Re: assertion failed: JF_ISSET(jprivate(page), JNODE_DIRTY)

2006-08-30 Thread Alexander Zarochentsev
On 30 August 2006 01:38, Andrew James Wade wrote:
 I now have a stack trace for this assertion:

There is a race between znode_make_dirty and flushing a dirty node to
disk.  I guess (though not 100% sure) it has no bad effect, so the
assertion is wrong.

 reiser4 panicked cowardly: reiser4[tar(5412)]:

[ ... ]

-- 
Alex.



Re: Reiser4 und LZO compression

2006-08-30 Thread David Masover

PFC wrote:



Maybe, but Reiser4 is supposed to be a general-purpose filesystem;
talking about its advantages/disadvantages wrt. gaming makes sense,


I don't see a lot of gamers using Linux ;)


There have to be some.  Transgaming seems to still be making a 
successful business out of making games work out-of-the-box under Wine. 
 While I don't imagine there are as many who attempt gaming on Linux, 
I'd guess a significant portion of Linux users, if not the majority, are 
at least casual gamers.


Some will have given up on the PC as a gaming platform long ago, tired 
of its upgrade cycle, crashes, game patches, and install times.  These 
people will have a console for games, probably a PS2 so they can watch 
DVDs, and use their computer for real work, with as much free software 
as they can manage.


Others will compromise somewhat.  I compromise by running the binary 
nVidia drivers, keeping a Windows partition around sometimes, and 
enjoying many old games which have released their source recently, and 
now run under Linux -- as well as a few native Linux games, some Cedega 
games, and some under straight Wine.


Basically, I'll play it on Linux if it works well, otherwise I boot 
Windows.  I'm migrating away from that Windows dependency by making sure 
all my new game purchases work on Linux.


Others will use some or all of the above -- stick to old games, use 
exclusively stuff that works on Linux (one way or the other), or give up 
on Linux gaming entirely and use a Windows partition.


Anything Linux can do to become more game-friendly is one less reason 
for gamers to have to compromise.  Not all gamers are willing to do 
that.  I know at least two who ultimately decided that, with dual boot, 
they end up spending most of their time on Windows anyway.  These are 
the people who would use Linux if they didn't have a good reason to use 
something else, but right now, they do.  This is not the fault of the 
filesystem, but taking the attitude of "there aren't many Linux gamers 
anyway" is a self-fulfilling prophecy; gamers WILL leave because 
of it.


Also, as you said, gamers (like many others) reinvent filesystems 
and generally use the Big Zip File paradigm, which is not that stupid 
for a read only FS (if you cache all file offsets, reading can be pretty 
fast). However when you start storing ogg-compressed sound and JPEG 
images inside a zip file, it starts to stink.


I don't like it as a read-only FS, either.  Take an MMO -- while most 
commercial ones load the entire game to disk from install DVDs, there 
are some smaller ones which only cache the data as you explore the 
world.  Also, even with the bigger ones, the world is always changing 
with patches, and I've seen patches take several hours to install -- not 
download, install -- on a 2.4 GHz amd64 with 2 gigs of RAM, on a striped 
RAID.  You can trust me when I say this was mostly disk-bound, which is 
retarded, because it took less than half an hour to install in the first 
place.


Even simple multiplayer games -- hell, even single-player games can get 
fairly massive updates relatively often.  Half-Life 2 is one example -- 
they've now added HDR to the engine.


In these cases, you still need as fast access as possible to the data 
(to cut down on load time), and it would be nice to save on space as 
well, but a zipfile starts to make less sense.  And yet, I still see 
people using _cabinet_ files.


Compression at the FS layer, plus efficient storing of small files, 
makes this much simpler.  While you can make the zipfile-fs transparent 
to a game, even your mapping tools, it's still not efficient, and it's 
not transparent to your modeling package, Photoshop-alike, audio 
software, or gcc.


But everything understands a filesystem.


It depends, you have to consider several distinct scenarios.
For instance, on a big Postgres database server, the rule is to have 
as many spindles as you can.
- If you are doing a lot of full table scans (like data mining etc), 
more spindles means reads can be parallelized ; of course this will mean 
more data will have to be decompressed.


I don't see why more spindles means more data decompressed.  If 
anything, I'd imagine it would mean fewer reads in total, if there's any 
kind of data locality.  But I'll leave this to the database experts, for now.


- If you are doing a lot of little transactions (web sites), it 
means seeks can be distributed around the various disks. In this case 
compression would be a big win because there is free CPU to use ; 


Dangerous assumption.  Three words:  Ruby on Rails.  There goes your 
free CPU.  Suddenly, compression makes no sense at all.


But then, Ruby makes no sense at all for any serious load, unless you 
really have that much money to spend, or until the Ruby.NET compiler is 
finished -- that should speed things up.



besides, it would virtually double the RAM cache size.


No it wouldn't, not the way Reiser4 does it.  

Alarm Sistemi 119YTL

2006-08-30 Thread Alarmatik Maillist
Wireless Alarm System for 119 YTL, VAT included...

www.alarmatik.com



Re: [patch] Re: assertion failed: can_hit_entd(ctx, s)

2006-08-30 Thread Hans Reiser
Is it already sent in?  If not, can it go out today?

Hans

Alexander Zarochentsev wrote:
 Hello,

 On 30 August 2006 01:10, Andrew James Wade wrote:
   
 Hello Alexander,

 In addition to your patch, I've also applied the patch below. With
 these two patches the fs is much more stable for me.
 

 That code was removed from reiser4 recently; the patch will be in the 
 next -mm kernel.

 I knew there was a bug somewhere :) 

   
 However, something is holding a d_ref across the calls to
 reiser4_writepage. It's not clear to me that this is allowed so my
 patch may not be a full fix.

 Andrew Wade

 signed-off-by: [EMAIL PROTECTED]

 diff -rupN a/fs/reiser4/plugin/item/extent_file_ops.c b/fs/reiser4/plugin/item/extent_file_ops.c
 --- a/fs/reiser4/plugin/item/extent_file_ops.c        2006-08-28 11:30:33.000000000 -0400
 +++ b/fs/reiser4/plugin/item/extent_file_ops.c        2006-08-29 13:06:20.000000000 -0400
 @@ -1320,20 +1320,22 @@ static int extent_readpage_filler(void *
                                TWIG_LEVEL, CBK_UNIQUE, NULL);
                 if (result != CBK_COORD_FOUND) {
                         reiser4_unset_hint(hint);
 -                        return result;
 +                        goto out;
                 }
                 ext_coord->valid = 0;
         }
 
         if (zload(ext_coord->coord.node)) {
                 reiser4_unset_hint(hint);
 -                return RETERR(-EIO);
 +                result = RETERR(-EIO);
 +                goto out;
         }
         if (!item_is_extent(&ext_coord->coord)) {
                 /* tail conversion is running in parallel */
                 zrelse(ext_coord->coord.node);
                 reiser4_unset_hint(hint);
 -                return RETERR(-EIO);
 +                result = RETERR(-EIO);
 +                goto out;
         }
 
         if (ext_coord->valid == 0)
 @@ -1358,6 +1360,10 @@ static int extent_readpage_filler(void *
         } else
                 reiser4_unset_hint(hint);
         zrelse(ext_coord->coord.node);
 +
 +out:
 +        /* Calls to this function may be intermingled with VM writeback. */
 +        reiser4_txn_restart_current();
         return result;
  }


 

 Thanks,
 Alex.



   



Re: [patch] Re: assertion failed: can_hit_entd(ctx, s)

2006-08-30 Thread Alexander Zarochentsev
On 30 August 2006 19:43, Hans Reiser wrote:
 Is it already sent in?  If not, can it go out today?

already sent.


 Hans

 Alexander Zarochentsev wrote:
  Hello,
 
  On 30 August 2006 01:10, Andrew James Wade wrote:
  Hello Alexander,
 
  In addition to your patch, I've also applied the patch below. With
  these two patches the fs is much more stable for me.
 
  That code was removed from reiser4 recently; the patch will be in
  the next -mm kernel.
 
  I knew there was a bug somewhere :)
 

[...]

-- 
Alex.



Re: Reiser4 und LZO compression

2006-08-30 Thread Edward Shishkin

PFC wrote:



Maybe, but Reiser4 is supposed to be a general-purpose filesystem;
talking about its advantages/disadvantages wrt. gaming makes sense,



I don't see a lot of gamers using Linux ;)
But yes, gaming is what pushes hardware development these days, at 
least  on the desktop.


Also, as you said, gamers (like many others) reinvent filesystems 
and  generally use the Big Zip File paradigm, which is not that stupid 
for a  read only FS (if you cache all file offsets, reading can be 
pretty fast).  However when you start storing ogg-compressed sound and 
JPEG images inside  a zip file, it starts to stink.


***

Does the CPU power necessary to do the compression cost more or less  
than another drive?



***

It depends, you have to consider several distinct scenarios.
For instance, on a big Postgres database server, the rule is to have 
as  many spindles as you can.
- If you are doing a lot of full table scans (like data mining etc), 
more  spindles means reads can be parallelized ; of course this will 
mean more  data will have to be decompressed.
- If you are doing a lot of little transactions (web sites), it 
means  seeks can be distributed around the various disks. In this case  
compression would be a big win because there is free CPU to use ; 
besides,  it would virtually double the RAM cache size.


You have to ponder cost (in CPU $) of compression versus the cost 
in  virtual RAM saved for caching and the cost in disks not bought.


***


Do the two processors have separate caches, and thus does being overly
fine-grained make you memory-transfer bound, or?



It depends on which dual core system you use ; future systems (like 
Core)  will definitely share cache as this is the best option.


***

If we analyze the results of my little compression benchmarks, we 
find that:

- gzip is way too slow.
- lzo and lzf are pretty close.

LZF is faster than LZO (especially on decompression) but compresses 
worse.

So, when we are disk-bound, LZF will be slower.
When we are CPU-bound, LZF will be faster.

The differences are not that huge, though, so it might be worthwhile 
to weigh this against the respective code cleanliness, of which I have 
no idea.


However my compression benchmarks mean nothing because I'm 
compressing whole files whereas reiser4 will be compressing little 
blocks of files. We must therefore evaluate the performance of 
compressors on little blocks, which is very different from 
300-megabyte files.
For instance, the setup time of the compressor will be important 
(whether some Huffman table needs to be constructed etc.), and the 
compression ratios will be worse.


Let's redo a benchmark then.
For that I need to know if a compression block in reiser4 will be 
either:
- an FS block containing several files (i.e. a block will contain 
several small files)

- a part of a file (i.e. a small file will be 1 block)

I think it's the second option, right?


A (plain) file is treated as a set of logical clusters (64K by
default). The minimal unit a (plain) file occupies in memory is one
page. A compressed logical cluster is stored on disk in a so-called
disk cluster. A disk cluster is a set of special items (aka ctails,
or compressed bodies), so one block can contain (compressed) data of
many files and everything is packed tightly on disk.
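
To make the small-block benchmark concrete, here is a hypothetical
sketch in C that compresses a file in 64 KB chunks (the default
logical-cluster size described above) with the miniLZO LZO1X-1 API.
The chunk size, file handling, and absence of timing code are
illustrative assumptions, not a prescribed methodology; an LZF variant
would follow the same shape.

/* Sketch: per-64KB-chunk compression ratio using miniLZO (LZO1X-1). */
#include <stdio.h>
#include "minilzo.h"

#define CLUSTER (64 * 1024)

int main(int argc, char **argv)
{
    static unsigned char in[CLUSTER];
    /* LZO may expand incompressible input slightly; reserve headroom. */
    static unsigned char out[CLUSTER + CLUSTER / 16 + 64 + 3];
    static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
    unsigned long total_in = 0, total_out = 0;
    size_t n;
    FILE *f;

    if (argc != 2 || lzo_init() != LZO_E_OK)
        return 1;
    f = fopen(argv[1], "rb");
    if (!f)
        return 1;
    while ((n = fread(in, 1, CLUSTER, f)) > 0) {
        lzo_uint out_len = sizeof(out);
        /* Compress each 64K logical cluster independently, as the
         * filesystem would, instead of the whole file as one stream. */
        if (lzo1x_1_compress(in, n, out, &out_len, wrkmem) != LZO_E_OK)
            return 1;
        total_in += n;
        total_out += out_len;
    }
    fclose(f);
    printf("in=%lu out=%lu ratio=%.2f\n",
           total_in, total_out, (double)total_out / total_in);
    return 0;
}

A real benchmark would also time the compression loop and repeat the
exercise with LZF for comparison.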



Re: Reiser4 und LZO compression

2006-08-30 Thread Hans Reiser
Edward Shishkin wrote:

 A (plain) file is treated as a set of logical clusters (64K by
 default). The minimal unit a (plain) file occupies in memory is one
 page. A compressed logical cluster is stored on disk in a so-called
 disk cluster. A disk cluster is a set of special items (aka ctails,
 or compressed bodies), so one block can contain (compressed) data of
 many files and everything is packed tightly on disk.



So the compression unit is 64k for purposes of your benchmarks.


Re: assertion failed: JF_ISSET(jprivate(page), JNODE_DIRTY)

2006-08-30 Thread Andrew James Wade
On Wednesday 30 August 2006 06:26, Alexander Zarochentsev wrote:
 On 30 August 2006 01:38, Andrew James Wade wrote:
  I now have a stack trace for this assertion:
 
 There is a race between znode_make_dirty and flushing a dirty node to 
 disk.  I guess (though not 100% sure) it has no bad effect, so the 
 assertion is wrong.
 
Okay, I'll change that to a WARN_ON in my tree and see what falls out.

Thanks,
Andrew Wade
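
A hypothetical sketch of such a change in the list's usual patch
style; the assertion label and exact site are placeholders, not the
real reiser4 source:

-        assert("...", JF_ISSET(jprivate(page), JNODE_DIRTY));
+        WARN_ON(!JF_ISSET(jprivate(page), JNODE_DIRTY));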


bug: Unable to mount reiserfs from DVD+R

2006-08-30 Thread Xuân Baldauf
Hello,

I created backup DVDs formatted using reiserfs. However, mounting them
is not possible. If I try to mount such a DVD, I get the following results:

Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: using ordered data mode
Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: warning: sh-458:
journal_init_dev: cannot init journal device 'unknown-block(22,0)': -30
Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: warning: sh-462: unable
to initialize jornal device
Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: warning: sh-2022:
reiserfs_fill_super: unable to initialize journal space

I get this error even when mounting using -o ro,nolog.

I tracked this down to the function journal_init_dev, which opens the
journal device with blkdev_mode = FMODE_READ | FMODE_WRITE, except
when the journal device is marked read-only. However, it seems that my
DVD burner is able to write DVD media (so it is not a read-only device
in general), but this particular DVD is only readable, not writable.
That's why (I think) the journal device itself is not marked read-only,
and yet open_by_devnum(jdev, blkdev_mode) fails.

The function journal_init_dev should honor the mount options "ro" or
"nolog" or both, which it currently does not (as of Linux 2.6.18-rc4).
Alternatively, journal_init_dev might try to open the journal device
again (read-only this time) in case the first open attempt results in
-EROFS.
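
A minimal sketch of that second alternative, assuming the 2.6.18-era
journal_init_dev() in fs/reiserfs/journal.c and that kernel's
open_by_devnum()/bdev_read_only() interfaces; this is an illustration
of the suggested fallback (a fragment, not the actual reiserfs code):

        int blkdev_mode = FMODE_READ | FMODE_WRITE;

        /* Existing behaviour: open read-only only if the device itself
         * is marked read-only. */
        if (bdev_read_only(super->s_bdev))
                blkdev_mode = FMODE_READ;

        journal->j_dev_bd = open_by_devnum(jdev, blkdev_mode);
        if (IS_ERR(journal->j_dev_bd) &&
            PTR_ERR(journal->j_dev_bd) == -EROFS &&
            blkdev_mode != FMODE_READ) {
                /* Writable drive but read-only medium (e.g. a finalized
                 * DVD+R): retry the open read-only so "-o ro"/"nolog"
                 * mounts can proceed. */
                blkdev_mode = FMODE_READ;
                journal->j_dev_bd = open_by_devnum(jdev, blkdev_mode);
        }

A real fix would presumably also verify that the mount really is
read-only (or nolog) before accepting a read-only journal device.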

Xuân Baldauf.