Re: Reiser4 und LZO compression

2006-08-31 Thread Clemens Eisserer

But speaking of single threadedness, more and more desktops are shipping
with ridiculously more power than people need.  Even a gamer really

Will the LZO compression code in reiser4 be able to use multi-processor systems?
E.g., if I have a Turion X2 in my laptop, will it use 2 threads for
compression/decompression, making CPU throughput much better than
what the disk could do?

lg Clemens


2006/8/30, Hans Reiser [EMAIL PROTECTED]:

Edward Shishkin wrote:

 A (plain) file is treated as a set of logical clusters (64K by
 default). The minimal unit a (plain) file occupies in memory is one
 page. A compressed logical cluster is stored on disk in a so-called
 disk cluster. A disk cluster is a set of special items (aka ctails,
 or compressed bodies), so one block can contain (compressed)
 data of many files and everything is packed tightly on disk.



So the compression unit is 64k for purposes of your benchmarks.



Re: Reiser4 und LZO compression

2006-08-31 Thread Edward Shishkin

Clemens Eisserer wrote:

But speaking of single threadedness, more and more desktops are shipping
with ridiculously more power than people need.  Even a gamer really


Will the LZO compression code in reiser4 be able to use multi-processor
systems?

E.g., if I have a Turion X2 in my laptop, will it use 2 threads for
compression/decompression, making CPU throughput much better than
what the disk could do?



Compression happens at flush time, and there can be more than
one flush thread processing the same transaction atom.
Decompression happens in the context of readpage/readpages.
So if you mean per file, then yes for compression and no for
decompression.

Edward.





Re: Reiser4 und LZO compression

2006-08-31 Thread Clemens Eisserer

Hi Edward,

Thanks a lot for answering.


Compression happens at flush time, and there can be more than
one flush thread processing the same transaction atom.
Decompression happens in the context of readpage/readpages.
So if you mean per file, then yes for compression and no for
decompression.

So the parallelism is not really explicit; it is more or less accidental.
Are threads possible in the kernel, and if so, how large is the
typical workload of stuff that can be decompressed? I guess for
several hundred kB, using more than one thread could speed things up
quite a bit?

lg Clemens


Re: Reiser4 und LZO compression

2006-08-31 Thread Hans Reiser
Edward Shishkin wrote:
 Clemens Eisserer wrote:
 But speaking of single threadedness, more and more desktops are
 shipping
 with ridiculously more power than people need.  Even a gamer really

 Will the LZO compression code in reiser4 be able to use
 multi-processor systems?
 E.g., if I have a Turion X2 in my laptop, will it use 2 threads for
 compression/decompression, making CPU throughput much better than
 what the disk could do?


 Compression happens at flush time, and there can be more than
 one flush thread processing the same transaction atom.
 Decompression happens in the context of readpage/readpages.
 So if you mean per file, then yes for compression and no for
 decompression.
I don't think your explanation above is a good one.

If there is more than one process reading a file, then you can have
multiple decompressions at one time of the same file, yes?

Just because there can be more than one flush thread per file does not
mean it is likely there will be.

CPU scheduling of compression/decompression is an area that could use
work in the future. For now, just understand that what we do is
better than doing nothing. ;-/




Re: Reiser4 und LZO compression

2006-08-31 Thread Edward Shishkin

Hans Reiser wrote:

Edward Shishkin wrote:


Clemens Eisserer wrote:


But speaking of single threadedness, more and more desktops are
shipping
with ridiculously more power than people need.  Even a gamer really


Will the LZO compression code in reiser4 be able to use
multi-processor systems?
E.g., if I have a Turion X2 in my laptop, will it use 2 threads for
compression/decompression, making CPU throughput much better than
what the disk could do?



Compression happens at flush time, and there can be more than
one flush thread processing the same transaction atom.
Decompression happens in the context of readpage/readpages.
So if you mean per file, then yes for compression and no for
decompression.


I don't think your explanation above is a good one.

If there is more than one process reading a file, then you can have
multiple decompressions at one time of the same file, yes?



You are almost right. Unless they read the same logical cluster.

Edward.


Re: Reiser4 und LZO compression

2006-08-31 Thread David Masover

Clemens Eisserer wrote:

But speaking of single threadedness, more and more desktops are shipping
with ridiculously more power than people need.  Even a gamer really
Will the LZO compression code in reiser4 be able to use multi-processor systems?


Good point, but it wasn't what I was talking about. I was talking about
the compression happening on one CPU: even if it takes most of
one CPU to saturate disk throughput, your other CPU is still 100%
available, so the typical desktop user won't notice their apps
running slower; they'll just notice disk access being faster.




Re: bug: Unable to mount reiserfs from DVD+R

2006-08-31 Thread Hans Reiser
Xuân Baldauf wrote:
 Hello,

 I created backup DVDs formatted using reiserfs. However, mounting them
 is not possible. If I try to mount such a DVD, I get following results:

 Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: using ordered data mode
 Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: warning: sh-458:
 journal_init_dev: cannot init journal device 'unknown-block(22,0)': -30
 Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: warning: sh-462: unable
 to initialize jornal device
 Aug 31 01:08:18 notebook2 kernel: ReiserFS: hdc: warning: sh-2022:
 reiserfs_fill_super: unable to initialize journal space

 I get this error even when mounting using -o ro,nolog.

 I tracked this down to the method journal_init_dev, which opens the
 journal device with mode blkdev_mode = FMODE_READ | FMODE_WRITE,
 except when the journal device is marked read-only. However, it
 seems my DVD burner drive is able to write DVD media (so it is not a
 read-only device in general), but this particular DVD is only readable,
 not writable. That is why (I think) the journal device itself is not
 marked read-only, and yet open_by_devnum(jdev, blkdev_mode); fails.

 The method journal_init_dev should honor the mount options ro or
 nolog or both, which it currently does not (as of Linux 2.6.18-rc4).
 Alternatively, journal_init_dev might try to open the journal
 device again (read-only this time) in case the first attempt
 fails with -EROFS.

 Xuân Baldauf.



Sounds reasonable to me, Chris?