Re: cdrecord: compiling failure on RHEL3 due to MKLINKS

2009-01-13 Thread Giulio Orsero
On Tue, 13 Jan 2009 11:05:42 -0500, Greg Wooledge wool...@eeg.ccf.org
wrote:

  Problem
 compiling with
  $ make
 fails due to, I think, the way/order in which make (3.79.1) processes
 missing files.

 If you're using GNU make (which you appear to be, based on that version
 string), here's the easiest way to do it:

Yes, I'm using gnu make.

I tried running 
./Gmake
but the problem persists.

As I said, I have a solution; I posted about the issue so that Joerg knows
about it and he can decide whether to address it or not.

Thanks
-- 
giul...@pobox.com


--
To UNSUBSCRIBE, email to cdwrite-requ...@other.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@other.debian.org



Re: cdrecord: compiling failure on RHEL3 due to MKLINKS

2009-01-13 Thread Joerg Schilling
Greg Wooledge wool...@eeg.ccf.org wrote:

 On Tue, Jan 13, 2009 at 01:58:15PM +0100, Giulio Orsero wrote:
   System data
  OS: Linux 2.4.33 (RHEL3)
  cdrecord:  2.01.01a55
  
   Problem
  compiling with
  $ make
  fails due to, I think, the way/order in which make (3.79.1) processes
  missing files.

The problem is a result of a bug in gmake that makes gmake ignore certain 
dependencies.


 If you're using GNU make (which you appear to be, based on that version
 string), here's the easiest way to do it:

 $ sudo ln -s /usr/bin/make /usr/local/bin/gmake
 $ gmake

 Joerg also shipped a wrapper script called Gmake in the top level
 directory of the cdrtools source tree, so I suppose you could try
 running

 $ ./Gmake

 instead, but having the gmake symlink in /usr/local/bin will help you
 in the long run with other programs that expect gmake to exist.

This does not help. All the Gmake script does is tell the makefile system 
that gmake, rather than make, was used.

Recent versions of the makefile system are able to recognize gmake from the
existence of $(MAKE_COMMAND).

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily





Re: Thoughts on writing CD from stdin

2009-01-13 Thread Dave Platt

Bill Davidsen wrote:

This is not ISO9660 data. I want the burner program to take my bits and 
put them on the media, nothing else. The data is in 64k chunks, so all 
writes are sector complete. I haven't had any issues with reading the 
data off DVD, or off CD in the case where I write it to something which 
allows me to know the size, but in general I don't. Nor can I run the 
data generation repeatedly to get the size; it's one of those "it 
depends" things.


Unfortunately, determining the end-of-data (end-of-track) location on a
data CD is one of those things which is difficult-to-impossible to do
reliably.

This seems to be due to the way in which the 2048-byte-block data format
is layered on top of the original Red Book audio CD format.  On a
data CD written in the usual way (one data track), the transition between
the data-layered blocks and the leadout area is difficult for CD players
to handle reliably, and different players tend to behave differently.

If you're lucky, your CD-ROM drive will read the last data block reliably,
and the attempt to read one block beyond that will result in an immediate
I/O error of some sort, allowing you to detect end-of-data reliably and
quickly.

This rarely seems to be the case, unfortunately.  Other scenarios I have
seen include:

-  The last data block reads back reliably.  Attempting to read the
   block following it does return an error, but only after a substantial
   delay.

-  The last data block (or even the last couple of data blocks) are
   unreadable.  Attempting to read them results in an I/O error.

I believe that I remember some discussion on the list, which turned up
a spec requirement that when transitioning between tracks having different
modes (and the leadout is a different mode than a data track) you're
actually required to pad the data... or, if you don't, the transition
blocks between tracks are formally unreadable.  I don't remember the
exact details.

In practice, in order to be able to read your last real sector(s) of
data reliably, it's necessary to pad the burn with a few sectors of
unwanted filler data.  I believe that cdrecord and/or mkisofs were
changed, a few years ago, to apply this sort of padding automatically
to ensure that the final portion of an ISO9660 image would always
be readable.

Since you aren't using ISO9660, and since you have prior knowledge
of your data's fundamental block size (64kB), I think there's a
reasonable solution for you.

-  Use cdrecord (or one of the plug-compatible substitutes) in TAO
   burning mode.

-  Use the -pad option, with padsize=16 option (16 sectors or
   32kB of padding).

-  Read your CD-ROM disk back 64k bytes at a time.

-  You'll get an I/O error when you try to read the 64 kB
   chunk which extends past the end of what you actually burned.
   Ignore any fragmentary data (partial chunk).

You can probably use the track size in the TOC as an indication
of the amount of data actually written - just round it down to
a multiple of 32 sectors.
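
The recipe above can be sketched in Python. This is a minimal sketch, not
a definitive implementation: the function names are made up for
illustration, and the device path and the exact error behaviour of the
drive are assumptions.

```python
import os

SECTOR = 2048        # data sector size on a CD-ROM
CHUNK = 64 * 1024    # the poster's 64 kB record size (32 sectors)

def usable_bytes(track_sectors):
    """Round a TOC track length down to whole 64 kB chunks.

    With -pad padsize=16 (32 kB) the last chunk of real data is
    always followed by readable padding, so anything beyond the
    last full chunk is filler.
    """
    return (track_sectors * SECTOR // CHUNK) * CHUNK

def read_until_error(device_path):
    """Read 64 kB at a time until the drive returns an I/O error.

    On a TAO-written disc the I/O error marks the end of the
    readable area; a partial final chunk is padding and is
    discarded, per the recipe above.
    """
    chunks = []
    fd = os.open(device_path, os.O_RDONLY)
    try:
        while True:
            try:
                data = os.read(fd, CHUNK)
            except OSError:          # ran past the end of the burn
                break
            if len(data) < CHUNK:    # partial chunk: filler, ignore
                break
            chunks.append(data)
    finally:
        os.close(fd)
    return b"".join(chunks)
```

The rounding in usable_bytes is exactly the "round it down to a
multiple of 32 sectors" step, expressed in bytes.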





Re: Thoughts on writing CD from stdin

2009-01-13 Thread Thomas Schmitt
Hi,

Dave Platt wrote:
 Unfortunately, determining the end-of-data (end-of-track) location on a
 data CD is one of those things which is difficult-to-impossible to do
 reliably.

This is actually a matter of the device driver,
and not so much of the drive and media. The size
of a logical track as obtained by MMC commands
is reliable. One just has to be aware that not
all types of sectors can be read by the SCSI
command for reading data blocks.

In the special case of CD TAO data tracks there are
two such non-data blocks at the end of each track.
(Not related to Darth Bane's Rule Of Two.)

With CD SAO data tracks there are no such non-data
blocks in the range of the official track size.
All DVD types seem to be clean in that aspect, too.
(No non-data sectors are defined for non-CD in MMC.)
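
The rule above is small enough to write down as a helper. A minimal
sketch; the function name is hypothetical, and it only encodes what the
paragraphs above state (two run-out blocks at the end of a CD TAO data
track, none for CD SAO or DVD):

```python
def readable_sectors(track_sectors, cd_tao=False):
    """Sectors of a data track that the SCSI data-read command
    can actually return.

    CD TAO data tracks end in two non-data (run-out) blocks
    inside the official track size; CD SAO tracks and all DVD
    types have no such blocks.
    """
    return track_sectors - 2 if cd_tao else track_sectors
```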


 -  The last data block reads back reliably.  Attempting to read the
   block following it does return an error, but only after a substantial
   delay.
 -  The last data block (or even the last couple of data blocks) are
    unreadable.  Attempting to read them results in an I/O error.
 
 ... when transitioning between tracks having different modes ...

The behavior of the Linux block device driver
created several urban legends.

My theory is that it reads in larger chunks up
to the track end. When the last chunk of a TAO
track is read, the two unreadable sectors are
encountered and make the whole chunk fail.
If the driver retried with a chunk size that is
two sectors smaller, it would succeed.
But the driver does not. It just declares an
i/o error.

The chunk size is smaller than 300 kB. That is
why that much padding is a traditional remedy
for track content which can recognize its own
proper end.

In any case, if you read the CD TAO track with
your own SCSI read commands, it is easy to
retrieve the exact amount of data which has been
written to that track.
The libburn test program telltoc can demonstrate
that.

The readable amount includes any padding by formatter
and burn program, of course.
So padding, which helps with the block device
driver, is rather counterproductive if you have
a reader which works flawlessly.
With a correct reader one has to remember the
amount of padding and ignore that many bytes at
the end of the data.
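
The bookkeeping for a correct reader is trivial; a minimal sketch with a
hypothetical helper name, assuming the writer remembered its padsize:

```python
SECTOR = 2048  # data sector size on a CD-ROM

def strip_padding(track_bytes, pad_sectors=16):
    """Drop the padding from a fully read-back track.

    With a flawless reader the whole track, padding included,
    reads back, so the writer's padsize must be remembered and
    that many sectors dropped from the end of the data.
    """
    if pad_sectors == 0:
        return track_bytes
    return track_bytes[:-pad_sectors * SECTOR]
```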

Padding at write time and ignoring pad bytes at
read time is just a guess at how to work around
the CD TAO bug in the Linux driver.


Bill Davidsen wrote:
 The data is in 64k chunks, so all writes are sector
 complete. I haven't had any issues with reading the
 data off DVD, or off CD

Having data aligned to a size larger than 2
blocks (= 4 kB) can be another remedy for the
driver problem. It relies on the assumption that
the driver will not attempt to read ahead of the
amount of data demanded by the user space
program. A large alignment size will probably
help to fulfill that assumption.


Have a nice day :)

Thomas

