Re: cdrecord: failure in auto-formatting DVD+RW Verbatim media on USB-TSSTcorp

2009-01-14 Thread Joerg Schilling
Bill Davidsen david...@tmr.com wrote:

  It may be that some of the workarounds should be removed today.


 Of course they may still be beneficial in some cases. Perhaps based on 
 the things you have learned in the past few years it would be practical 
 to make them a little more selective, so that they would only work 
 around the original problem. cdrecord is very good about running on old 
 hardware, so these workarounds seem to have value on those machines.

Cdrecord has a lot of comments, but in order to remember everything about all 
the workarounds, it would need even more comments ;-)

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


-- 
To UNSUBSCRIBE, email to cdwrite-requ...@other.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@other.debian.org



Re: Thoughts on writing CD from stdin

2009-01-14 Thread Joerg Schilling
Paul Serice p...@serice.net wrote:

 The obvious thing to try is to put the root directory (more or less)
 immediately after the PVD.  With read-only media, this can be a
 problem because there is no way to go back and fill in missing
 information -- like the size of files.

 So when the size of stdin is not known in advance, there isn't much
 choice: the PVD must be written very early, and the PVD must specify
 the location of the root directory.

This is why mkisofs has implemented -stream-media-size for six years ;-)
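Paul's layout constraint can be made concrete with a small sketch. The field offsets below follow ECMA-119 (ISO 9660); the synthetic image in the usage example is an illustration only, not a real mkisofs product:

```python
import struct

# Sketch of the constraint discussed above: the Primary Volume
# Descriptor sits at sector 16 of an ISO 9660 image and embeds the
# root directory record, whose extent field gives the root directory's
# LBA.  Offsets follow ECMA-119; this is an illustration, not mkisofs
# code.

SECTOR = 2048
PVD_LBA = 16           # the PVD always lives at logical sector 16
ROOT_REC_OFF = 156     # root directory record inside the PVD
EXTENT_OFF = 2         # extent LBA (little-endian half) within the record

def root_dir_lba(image):
    """Return the root directory's logical block address from the PVD."""
    pvd = image[PVD_LBA * SECTOR:(PVD_LBA + 1) * SECTOR]
    if pvd[1:6] != b"CD001":
        raise ValueError("not an ISO 9660 volume")
    return struct.unpack_from("<I", pvd, ROOT_REC_OFF + EXTENT_OFF)[0]
```

If the root directory's LBA is decided before writing begins, the PVD can be emitted early and never needs back-patching — which is exactly what writing to read-only media from a pipe requires.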

 Incidentally, this is why burning iso9660 images to DVDs was broken on
 linux for so long.  Software put the root file system at the end of
 the media.  For a DVD, the end of media is greater than 4GB which
 could not be seen because linux was using a 32-bit, byte-oriented
 inode scheme.

This is wrong.

Jörg




Re: Thoughts on writing CD from stdin

2009-01-14 Thread Thomas Schmitt
Hi,

Paul Serice wrote:
  Software put the root file system at the end of
  the media. For a DVD, the end of media is greater than 4GB which
  could not be seen because linux was using a 32-bit, byte-oriented
  inode scheme.

Joerg Schilling wrote:
 This is wrong

Multi-session software puts the directory entries
behind the end of the written area on media. With
mkisofs and libisofs this is near the start of the
new session.

Since DVDs and BDs offer multi-session capabilities
and allow writing more than 4 GB of data, it is
possible that a tree of directory entries is indeed
written beyond the 32-bit byte count limit.


Have a nice day :)

Thomas






Re: cdrecord: failure in auto-formatting DVD+RW Verbatim media on USB-TSSTcorp

2009-01-14 Thread Joerg Schilling
Giulio Orsero giul...@gmail.com wrote:

 On Tue, 13 Jan 2009 12:31:35 +0100, joerg.schill...@fokus.fraunhofer.de
 (Joerg Schilling) wrote:

 This is why cdrecord does not auto-format the medium.
 Your drive returns an incorrect disk type identifier.
 see drv_dvdplus.c:

     if (profile == 0x001A) {
         dsp->ds_flags |= DSF_DVD_PLUS_RW;   /* This is a DVD+RW */
         if (dip->disk_status == DS_EMPTY && /* Unformatted      */
             dip->disk_type == SES_UNDEF) {  /* Not a CD         */

 This way:

     if (profile == 0x001A) {
         dsp->ds_flags |= DSF_DVD_PLUS_RW;   /* This is a DVD+RW */
         if (dip->disk_status == DS_EMPTY) { /* Unformatted      */
 

 I can confirm the above change fixes the issue with the USB TSSTcorp burner.

Thank you for the feedback.

I changed the code so that it now only emits a warning in case the drive 
returns illegal data.

Jörg




Re: cdrecord: compiling failure on RHEL3 due to MKLINKS

2009-01-14 Thread Joerg Schilling
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:

 Greg Wooledge wool...@eeg.ccf.org wrote:

  On Tue, Jan 13, 2009 at 01:58:15PM +0100, Giulio Orsero wrote:
    System data
   OS: Linux 2.4.33 (RHEL3)
   cdrecord:  2.01.01a55
 
    Problem
   compiling with
 $ make
   fails due to, I think, the way/order in which make (3.79.1) processes
   missing files.

 The problem is a result of a bug in gmake that makes gmake ignore certain 
 dependencies.

It may really be that gmake only processes things in the wrong order.

It turns out that adding just one filename to the rule in any of the failing 
makefiles in the complete Schily Source Consolidation

ftp://ftp.berlios.de/pub/schily/

resulted in a complete compile using gmake-3.80.

Jörg




Re: Thoughts on writing CD from stdin

2009-01-14 Thread Bill Davidsen

Dave Platt wrote:

Bill Davidsen wrote:

This is not ISO9660 data. I want the burner program to take my bits 
and put them on the media, nothing else. The data is in 64k chunks, 
so all writes are sector complete. I haven't had any issues with 
reading the data off DVD, or off CD in the case where I write it to 
something which allows me to know the size, but in general I don't. 
Nor can I run the data generation repeatedly to get the size; it's 
one of those "it depends" things.


Unfortunately, determining the end-of-data (end-of-track) location on a
data CD is one of those things which is difficult-to-impossible to do
reliably.

This seems to be due to the way in which the 2048-byte-block data format
is layered on top of the original Red Book audio CD format.  On a
data CD written in the usual way (one data track), the transition between
the data-layered blocks and the leadout area is difficult for CD players
to handle reliably, and different players tend to behave differently.

If you're lucky, your CD-ROM drive will read the last data block reliably,
and the attempt to read one block beyond that will result in an immediate
I/O error of some sort, allowing you to detect end-of-data reliably and
quickly.

This rarely seems to be the case, unfortunately.  Other scenarios I have
seen include:

-  The last data block reads back reliably.  Attempting to read the
   block following it does return an error, but only after a substantial
   delay.

-  The last data block (or even the last couple of data blocks) are
   unreadable.  Attempting to read them results in an I/O error.

I remember that with old Linux kernels readahead needed to be disabled; 
I haven't seen this problem in a while, so it seems that the kernel fixes 
are working.

I believe that I remember some discussion on the list, which turned up
a spec requirement that when transitioning between tracks having different
modes (and the leadout is a different mode than a data track) you're
actually required to pad the data... or, if you don't, the transition
blocks between tracks are formally unreadable.  I don't remember the
exact details.

In practice, in order to be able to read your last real sector(s) of
data reliably, it's necessary to pad the burn with a few sectors of
unwanted filler data.  I believe that cdrecord and/or mkisofs were
changed, a few years ago, to apply this sort of padding automatically
to ensure that the final portion of an ISO9660 image would always
be readable.

Since you aren't using ISO9660, and since you have prior knowledge
of your data's fundamental block size (64kB), I think there's a
reasonable solution for you.

-  Use cdrecord (or one of the plug-compatible substitutes) in TAO
   burning mode.

Rather than one of the raw modes? I found several posts suggesting 
that the magic was 'raw96r' or similar. I believe I tried that, as well 
as -sao, -tao, and -dao, but I can repeat the test easily.

-  Use the -pad option, with the padsize=16 option (16 sectors or
   32kB of padding).

-  Read your CD-ROM disk back 64k bytes at a time.

-  You'll get an I/O error when you try to read the 64kB byte
   chunk which extends past the end of what you actually burned.
   Ignore any fragmentary data (partial chunk).

Recent kernels seem to return a valid partial data count for the last 
read, and then an error on the next read. Reading 6400 bytes at a time 
seems to work, although this may only mean the media and firmware are 
friends.

You can probably use the track size in the TOC as an indication
of the amount of data actually written - just round it down to
a multiple of 32 sectors.
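The read-back recipe above can be sketched as a short loop. This is an illustrative assumption of how a reader might behave, not cdrecord code; `read_fn` stands in for a read from the open optical device:

```python
# Sketch of the read-back recipe above: read the burned track in fixed
# 64 kB chunks, stop at the first I/O error or fragmentary (short)
# chunk, and keep only the complete chunks.

CHUNK = 64 * 1024  # matches the 64k write granularity mentioned above

def read_back(read_fn, chunk=CHUNK):
    """Collect complete chunks from read_fn(size) until an error or a
    short read; fragmentary trailing data is discarded."""
    chunks = []
    while True:
        try:
            buf = read_fn(chunk)
        except OSError:        # read past the end of the burned area
            break
        if len(buf) < chunk:   # fragmentary data: ignore it
            break
        chunks.append(buf)
    return b"".join(chunks)
```

On a real disc one would pass something like the read method of an opened block device (the device path is system-specific); the loop works the same whether the drive signals end-of-data by an I/O error or by a short read.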


The last 64k block has the end of data flag set, so it's unambiguous.

--
Bill Davidsen david...@tmr.com
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismark 







Re: Thoughts on writing CD from stdin

2009-01-14 Thread Rob Bogus

Thomas Schmitt wrote:

Hi,

Dave Platt wrote:

Unfortunately, determining the end-of-data (end-of-track) location on a
data CD is one of those things which is difficult-to-impossible to do
reliably.



This is actually a matter of the device driver
and not so much of the drive and media. The size of
a logical track as obtained by MMC commands is
reliable. One just has to be aware that not all
types of sectors can be read by the SCSI command
for reading data blocks.

In the special case of CD TAO data tracks there are
two such non-data blocks at the end of each track.
(Not related to Darth Bane's Rule Of Two.)

With CD SAO data tracks there are no such non-data
blocks in the range of the official track size.
All DVD types seem to be clean in that aspect, too.
(No non-data sectors are defined for non-CD in MMC.)

-  The last data block reads back reliably.  Attempting to read the
  block following it does return an error, but only after a substantial
  delay.
-  The last data block (or even the last couple of data blocks) are
   unreadable.  Attempting to read them results in an I/O error.

... when transitioning between tracks having different modes ...



The behavior of the Linux block device driver
has created several urban legends.

My theory is that it reads in larger chunks up to
the track end. When the last chunk of a TAO track
is read, the two unreadable sectors are encountered
and make the whole chunk fail.
If the driver retried with a chunk size that
is two sectors smaller, then it would be ok.
But the driver does not. It just declares an i/o error.
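That failure mode, and the proposed retry, can be modelled with a toy simulation. This is a sketch under the stated assumptions (fixed-size chunk reads, two unreadable trailer sectors), not actual Linux driver code:

```python
# Toy model of the theory above: a CD TAO track whose last two sectors
# are non-data, read in fixed-size chunks.  Without a retry, the chunk
# that touches the trailer fails as a whole; retrying with a shortened
# chunk recovers every data sector.

TAO_TRAILER = 2  # unreadable non-data sectors at the end of a TAO track

def recovered_sectors(track_sectors, chunk_sectors, retry=False):
    """Return how many data sectors a chunked reader gets back."""
    data_end = track_sectors - TAO_TRAILER
    pos = 0
    while pos < data_end:
        n = min(chunk_sectors, track_sectors - pos)
        if pos + n > data_end:      # this chunk would hit the trailer
            if not retry:
                return pos          # whole chunk declared an i/o error
            n = data_end - pos      # retry, shortened to skip the trailer
        pos += n
    return pos
```

For a 15017-sector track (15015 data sectors plus the trailer) read in 64-sector chunks, the no-retry reader loses the final 39 data sectors, while the retrying reader recovers all 15015.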

  
The readahead size can be set by the 'blockdev' command, but smaller 
sizes hurt performance.
I believe the error was fixed a few kernel versions ago, so that a clean 
partial read count is returned. I haven't validated that for all write 
modes, and perhaps I should for purposes of discussion.



The chunk size is smaller than 300 kB. That's why that
much padding is a traditional remedy for track
content which can recognize its proper end.

Anyway, if you read the CD TAO track with your own SCSI
read commands it is easy to retrieve the exact amount
of data which has been written to that track.
The libburn test program telltoc can demonstrate that.

The readable amount includes any padding by the formatter
and burn program, of course.
So padding, which helps with the block device driver,
is rather counterproductive if you have a reader
which works flawlessly.
With a correct reader one has to remember the amount
of padding and ignore that many bytes at the end of
the data.

Padding at write time and ignoring pad bytes at read
time is just a workaround for the CD TAO bug in the
Linux driver.


Bill Davidsen wrote:

The data is in 64k chunks, so all writes are sector
complete. I haven't had any issues with reading the
data off DVD, or off CD



Having data aligned to a size larger than 2 blocks
(= 4 kB) can be another remedy for the driver
problem. It depends on the assumption that the driver
will not attempt to read ahead of the data amount
which is demanded by the user space program.
A large alignment size will probably help to fulfill
that assumption.


Have a nice day :)

Thomas


  



--
E. Robert Bogusta
 It seemed like a good idea at the time



Re: Thoughts on writing CD from stdin

2009-01-14 Thread Thomas Schmitt
Hi,

Dave Platt wrote:
   Unfortunately, determining the end-of-data (end-of-track) location on a
   data CD is one of those things which is difficult-to-impossible to do
   reliably.

I wrote:
  This is actually a matter of the device driver
  and not so much of drive and media.

Rob Bogus wrote:
 The readahead size can be set by the 'blockdev' command, but smaller
 sizes hurt performance.

I would expect a smaller chunk size to increase
the probability of accidental success.
Nevertheless the actual problem seems to be that
no retry is made after a read chunk turns out to
be partially unreadable.
The driver has little chance to predict that unreadability,
as the blocks are located within the track's size range.
But it could either retry in single-block steps after
a failure, or be careful not to read the last two blocks
of a CD track in a single SCSI command together with
other blocks.


Rob Bogus wrote:
 I believe the error was fixed a few kernel versions ago, so that a
 clean partial read count is returned. I haven't validated that for
 all write modes, and perhaps I should for purposes of discussion.

The decisive test is with CD TAO tracks.
A similar bug with a CD SAO track or a DVD track
would be a surprise to me.

One should try to write a track with a number of
data blocks that is a product of odd numbers, like
 3*5*7*11*13 = 15015
and avoid any padding.
That way it is unlikely that a chunk read ends
exactly before the two non-data sectors.
(That is the rare case when all data bytes are
 readable despite the problem.)
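A quick arithmetic check of that suggestion (the candidate chunk sizes are assumptions about typical readahead granularity, counted in 2 kB blocks):

```python
# Verify the reasoning above: 15015 = 3*5*7*11*13 is a product of odd
# factors, so no power-of-two chunk size divides it.  A fixed-size
# chunk read therefore never ends exactly at the last data block; the
# final chunk always straddles the two non-data sectors and exposes
# the bug.  The chunk sizes tried are assumed typical values.

blocks = 3 * 5 * 7 * 11 * 13
assert blocks == 15015

for chunk in (2, 4, 8, 16, 32, 64, 128, 256):
    # a nonzero remainder means the last read crosses into the trailer
    assert blocks % chunk != 0
```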


Bill Davidsen wrote:
 Rather than one of the raw modes? I found several posts suggesting that
 the magic was 'raw96r' or similar.

The last time I tried to read a raw mode CD,
the block device driver of Linux 2.4 failed.
The same with audio sectors.
Those sectors do not have a payload of 2048
bytes: about the same problem as with the
two sectors at the end of a TAO track.


 Recent kernels seem to return a valid partial data count for the last read,
 and then an error on the next read.

It would be a great relief if the annoyance
with reading CD TAO tracks was finally gone.


Have a nice day :)

Thomas





Re: Growisofs input cache -- Patch

2009-01-14 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen david...@tmr.com wrote:

In Run 1 there were several buffer underruns which slowed the DVD recorders 
down. In Run 2 the buffer was always at 100% (except for the end of 
course) :-).
This seems reasonable, what were the performance numbers for the other 
system activity? I'm surprised at the underruns, cdrecord has internal 
fifo, and I thought you did, too. With a hacked cdrecord (around a50) 
the burn ran almost eight seconds slower, regardless of burn size, and 
never dropped below 92% full at the drive, and 70% or so in the fifo.



Hacked how?


Late reply... hacked to use O_DIRECT and some simple monitoring of data 
counts, etc. Clearly, writing data with O_DIRECT can be 3-5% slower than 
using system buffers, but the performance of the other applications was 
better: no video pauses, all response felt good, generally better 
overall behavior.

