Re: Using dd to verify a dvd and avoid the readahead bug.
Hi,

> Have you tried setting both to zero and asking to read just the number
> of blocks in the ISO filesystem?

Interesting proposal. The image of 3041 blocks is not available any more.
The new test candidate looks like this:

  track:   1 lba:     0 (    0) 00:02:00 adr: 1 control: 4 mode: 1
  track:lout lba:  3193 (12772) 00:44:43 adr: 1 control: 4 mode:-1

It is supposed to have 3191 readable payload blocks.

First let's see whether the bug occurs:

  # blockdev --getra /dev/sr2
  32
  $ dd bs=2048 if=/dev/sr2 >/dev/null
  3158+0 records in

I.e. 66 kB missing. With ra still set to 32:

  $ dd bs=2048 if=/dev/sr2 count=3191 >/dev/null
  3158+0 records in

Now I change the drive setting:

  # blockdev --setra 0 /dev/sr2
  # blockdev --getra /dev/sr2
  0
  $ dd bs=2048 if=/dev/sr2 count=3191 >/dev/null
  3158+0 records in

  # hdparm -a /dev/sr2
  /dev/sr2 not supported by hdparm

It is a USB burner. So I hop to an IDE DVD-ROM:

  # blockdev --getra /dev/hdg
  8
  $ dd bs=2048 if=/dev/hdg count=3191 >/dev/null

It gnaws more than 30 seconds on these 6 MB:

  3142+0 records in

98 kB missing! That's about a world record.

  # blockdev --setra 0 /dev/hdg
  # blockdev --getra /dev/hdg
  0
  $ dd bs=2048 if=/dev/hdg count=3191 >/dev/null

It tries hard and long, but:

  3142+0 records out

About hdparm there is not much to try:

  # hdparm -a /dev/hdg
  /dev/hdg:
   readahead = 0 (off)

Shrug.

And again, to prove that it is not impossible to read all blocks except
the two final ones:

  $ test/telltoc --drive /dev/sg2 \
       --read_and_print -1 -1 raw:/dvdbuffer/image.iso
  ...
  NOTE : Last two frames of CD track unreadable. This is normal if TAO track.
  End Of Data : start=0s , count=3193s , read=3191s
  $ ls -l /dvdbuffer/image.iso
  -rw-r--r--  1 * *  6535168 Oct  5 18:06 /dvdbuffer/image.iso
  $ expr 6535168 - 3191 '*' 2048
  0

Reading skills are indispensable.

Have a nice day :)

Thomas

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
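The shortfall Thomas reports can be recomputed from dd's record count
alone. A minimal sketch of that arithmetic (the helper name is mine,
not from the post):

```shell
# Hypothetical helper: given the payload block count the TOC promises
# and the "records in" figure dd printed, report the loss in blocks
# and in kB (2048 bytes per block).
loss_report() {
  expected=$1
  got=$2
  missing=$((expected - got))
  echo "$missing blocks missing ($((missing * 2048 / 1024)) kB)"
}

loss_report 3191 3158   # the USB burner run above -> 66 kB
loss_report 3191 3142   # the IDE DVD-ROM run above -> 98 kB
```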
Re: Using dd to verify a dvd and avoid the readahead bug.
Thomas Schmitt wrote:
> Hi,
>
> Bill Davidsen:
> > blockdev --setra 0 /dev/hdc
>
> This does not match the behavior on my oldish system either.
> First suspicious thing:
>   # blockdev --getra /dev/hdg
>   8
> That would be 8 x 512 = 2 x 2048 bytes. So 4 kB should be an upper
> limit for the loss on this drive. But my losses of blocks with dd are
> often more than 32 kB.

There is a possibility that the loss would be up to a full read size, I
think. That's because if you ask for N and get an error, the loss will
be N. I don't think the read tells how much it read without error, nor
would most programs use that information if they got it.

> Experiments:
>
> I write a TAO track to CD-RW
>   $ cdrskin -v dev=/dev/sg2 blank=fast padsize=0 -tao /dvdbuffer/x
> which has 3041 blocks of payload and appears with -toc as
>   track:   1 lba:     0 (    0) 00:02:00 adr: 1 control: 4 mode: 1
>   track:lout lba:  3043 (12172) 00:42:43 adr: 1 control: 4 mode:-1
>
> Let me show what can be done with the same SCSI commands as used by
> the block device driver. This is telltoc, a demo application of
> libburn-0.3.9, using SCSI command 28h "READ 10".
>   $ test/telltoc --drive /dev/sg2 \
>        --read_and_print 0 -1 raw:/dvdbuffer/image
>   ...
>   Media content: session 1  track 1 data  lba:    0  00:02:00
>   Media content: session 1  leadout       lba: 3043  00:42:43
>   Data : start=0s , count=3043s , read=0s , encoding=1:'/dvdbuffer/image'
>   NOTE : Last two frames of CD track unreadable. This is normal if TAO track.
>   End Of Data : start=0s , count=3043s , read=3041s
>   $ ls -l /dvdbuffer/image
>   -rw-r--r--  1 * *  6227968 Oct  3 21:08 /dvdbuffer/image
>   $ expr 6227968 - 3041 '*' 2048
>   0
> Exactly 3041 blocks of 2048 bytes each. None more, none less.
>
> Now this is _my_ way to retrieve data from old CDs out of times when
> 32 kB of padding were surely enough of a sacrifice. My 2.4 kernel
> already seems to need 64 kB, maybe even 128. No hdparm helps, no
> blockdev helps.

Have you tried setting both to zero and asking to read just the number
of blocks in the ISO filesystem?

> Only skillful reading helps.
> (I am so proud of my reading skills 8-))

I wonder if this lack of problems here is because I'm using DVD burners,
not CD-only drives. I have upgraded every production system to DVD, just
to avoid "can't read that here" delays.

> Have a nice day :)

You seem to have raised legitimate doubts about the behavior of DVD vs.
CD units, as well as known issues on kernel versions. And I think code
was being added to 2.6.23, or is queued for 2.6.24, to return a short
read and no error in just this situation. I'll have to see if I can
find the reference.

--
bill davidsen <[EMAIL PROTECTED]>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979
Re: Using dd to verify a dvd and avoid the readahead bug.
Hi,

Bill Davidsen:
> blockdev --setra 0 /dev/hdc

This does not match the behavior on my oldish system either.
First suspicious thing:

  # blockdev --getra /dev/hdg
  8

That would be 8 x 512 = 2 x 2048 bytes. So 4 kB should be an upper
limit for the loss on this drive. But my losses of blocks with dd are
often more than 32 kB.

--
Experiments:

I write a TAO track to CD-RW

  $ cdrskin -v dev=/dev/sg2 blank=fast padsize=0 -tao /dvdbuffer/x

which has 3041 blocks of payload and appears with -toc as

  track:   1 lba:     0 (    0) 00:02:00 adr: 1 control: 4 mode: 1
  track:lout lba:  3043 (12172) 00:42:43 adr: 1 control: 4 mode:-1

Let us ask dd with ra=8:

  $ dd bs=2048 if=/dev/hdg >/dev/null
  dd: reading `/dev/hdg': Input/output error
  3014+0 records in
  3014+0 records out

So 27 payload blocks = 54 kB are missing. That is too much, even if
blockdev mistakes the unit as 2048 rather than 512.

Next I try Bill's proposal:

  # blockdev --setra 0 /dev/hdg
  # blockdev --getra /dev/hdg
  0

I reload the tray. Just to be sure:

  # blockdev --getra /dev/hdg
  0

Banzai

  $ dd bs=2048 if=/dev/hdg >/dev/null
  dd: reading `/dev/hdg': Input/output error
  3014+0 records in
  3014+0 records out

Another drive, this time under usb-scsi:

  # blockdev --getra /dev/sr2
  32
  $ dd bs=2048 if=/dev/sr2 >/dev/null
  dd: reading `/dev/sr2': Input/output error
  3032+0 records in
  3032+0 records out

Cough. Larger readahead, less loss. Only 18 kB.

  # blockdev --setra 0 /dev/sr2
  # blockdev --getra /dev/sr2
  0

I reload the tray. Just to be sure:

  # blockdev --getra /dev/sr2
  0
  $ dd bs=2048 if=/dev/sr2 >/dev/null
  dd: reading `/dev/sr2': Input/output error
  3032+0 records in
  3032+0 records out

Let me show what can be done with the same SCSI commands as used by the
block device driver. This is telltoc, a demo application of
libburn-0.3.9, using SCSI command 28h "READ 10".

  $ test/telltoc --drive /dev/sg2 \
       --read_and_print 0 -1 raw:/dvdbuffer/image
  ...
  Media content: session 1  track 1 data  lba:    0  00:02:00
  Media content: session 1  leadout       lba: 3043  00:42:43
  Data : start=0s , count=3043s , read=0s , encoding=1:'/dvdbuffer/image'
  NOTE : Last two frames of CD track unreadable. This is normal if TAO track.
  End Of Data : start=0s , count=3043s , read=3041s
  $ ls -l /dvdbuffer/image
  -rw-r--r--  1 * *  6227968 Oct  3 21:08 /dvdbuffer/image
  $ expr 6227968 - 3041 '*' 2048
  0

Exactly 3041 blocks of 2048 bytes each. None more, none less.

Now this is _my_ way to retrieve data from old CDs out of times when
32 kB of padding were surely enough of a sacrifice. My 2.4 kernel
already seems to need 64 kB, maybe even 128. No hdparm helps, no
blockdev helps. Only skillful reading helps.
(I am so proud of my reading skills 8-))

Have a nice day :)

Thomas
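The `expr` check above generalizes to any recovered image. A small
sketch of the same arithmetic (function name and output wording are
mine; assumes a GNU userland where `wc -c <` prints a bare number):

```shell
# check_size IMAGE BLOCKS
# Verifies that IMAGE is exactly BLOCKS * 2048 bytes long, as telltoc's
# recovered image should be.
check_size() {
  img=$1
  blocks=$2
  actual=$(wc -c < "$img")
  if [ "$actual" -eq $((blocks * 2048)) ]; then
    echo "size OK: $actual bytes = $blocks x 2048"
  else
    echo "size MISMATCH: $actual bytes, expected $((blocks * 2048))"
  fi
}
```

For the image in this message, `check_size /dvdbuffer/image 3041` would
report 6227968 bytes.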
Re: Using dd to verify a dvd and avoid the readahead bug.
Hi,

> I don't doubt your observations, I know that the run-out blocks are not
> readable. I think that's because at the level below blocks, data on the
> CD is spread across a wider range to mitigate the effect of scratches:

The run-out blocks seem to belong to packet writing mode, of which TAO
is a special case.

MMC-5, 4.2.3.11 Recording Data:
"Since it is necessary to locate exact boundaries of user blocks,
additional padding is inserted around the linkage frame. The collection
of the link block, the pad blocks, and the user blocks is called a
Packet. The format of the packet is shown in Figure 23."

Figure 23 shows a chain of boxes with the texts:

  Link Block | Run-in Block 1 | Run-in Block 2 | Run-in Block 3 |
  Run-in Block 4 | User Data Blocks | Run-out Block 1 | Run-out Block 2

What bites us are Run-out Block 1 and Run-out Block 2.

> there is the guaranteed post-gap after them.

That's what I believe is the mistaken remedy. Surely the 300 kB of data
are a generous sacrifice to the flaw in the OS drivers. But I believe
that this works because 300 kB is larger than the largest buffer chunk
used by the driver for SCSI reading, not because it coincides with the
size of the post-gap as prescribed for track type changes.

Have a nice day :)

Thomas
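The packet layout above translates into simple arithmetic: a single TAO
data track loses exactly the two run-out blocks, so the readable
payload is the lead-out LBA minus the track start minus 2. A sketch,
using the LBAs from this thread:

```shell
# Readable payload of a single TAO data track: lead-out LBA minus track
# start LBA minus the 2 run-out blocks (Run-out Block 1 and 2 above).
readable_payload() {
  leadout=$1
  start=$2
  echo $((leadout - start - 2))
}

readable_payload 3043 0   # the CD-RW track: 3041 readable blocks
readable_payload 3193 0   # the later test track: 3191 readable blocks
```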
Re: Using dd to verify a dvd and avoid the readahead bug.
On Wed, 2007-10-03 at 18:11 +0200, Thomas Schmitt wrote:
> Patrick Ohly:
> > Writing 150 empty blocks after the last block with user data is required
> > by the data CD standard - they are called "post-gap".
>
> Yep. But as the publicly available standard MMC-5 clarifies:
>
> "6.33.3.19 Post-gap
>  If a Data track is followed by another kind of track
>  (such as an audio track), this Data track ends with a post-gap.
>  A post-gap is placed at the end of a Data track, and is part of
>  the Data Track. A post-gap does not contain actual user data.
>  The minimum length of post-gap is 2 seconds. The Drive does not
>  perform any action for a Post-gap.
> "
>
> This is not the situation we have with a single TAO track
> on closed media, unless you call the end of media "another
> kind of track".

The disc layouts that I have seen in vendor documentation seemed to
consider the lead-out as "another kind of track", and I tend to follow
that interpretation.

> Also: i do not have this "readahead" problem with SAO sessions.
> And all my drives can read TAO tracks up to the last 2 non-data
> sectors if i apply SCSI command READ 10.

I don't doubt your observations; I know that the run-out blocks are not
readable. I think that's because at the level below blocks, data on the
CD is spread across a wider range to mitigate the effect of scratches:
the two unreadable run-out blocks contain parts of the data of the
preceding readable blocks, but are not complete by themselves. In SAO
the write does not have to stop after the track and therefore no
run-out blocks are needed.

> All payload and all padding is retrievable up to the last byte !
> It is not an issue of the drives but of the operating system.

I agree that an OS could be improved to handle this better; my point
was that in normal operation (= mounting an ISO file system on a valid
CD) the read-ahead should not fail, because only real data blocks are
read and there is the guaranteed post-gap after them.
--
Bye, Patrick Ohly -- [EMAIL PROTECTED]
http://www.estamos.de/
Re: Using dd to verify a dvd and avoid the readahead bug.
j t wrote:
> Hi,
>
> I have an iso file (which contains an iso9660/udf filesystem) that
> I've written to a dvd-r using growisofs, thus:
>
> # growisofs -dvd-compat -speed=1 -Z /dev/hdc=myDVD.iso
>
> In the past, I have been able to check (verify) the burn by finding
> the iso size (using "isoinfo -d -i ") and then by comparing the
> output from:
> # dd if=myDVD.iso bs=2048 count= | md5sum
> with
> # dd if=/dev/hdc bs=2048 count= | md5sum
> (and checking /var/log/syslog for any read errors)
>
> Now I have started getting read errors close to the lead-out, so I
> append 150 2k blocks to the end of the iso file using:
> # dd if=/dev/zero bs=2048 count=150 >> myDVD.iso
>
> and I even disable readahead using hdparm:
> # hdparm -a 0 /dev/hdc

Unfortunately that probably wasn't the problem in the first place. You
need to tell the o/s to stop doing readahead for performance, and the
command to do that is blockdev:

  blockdev --setra 0 /dev/hdc

does what you want, although see below; I don't think that's your
problem.

> But I still get read errors near the end of the dvd:
> # dd if=/dev/hdc bs=2048 count=2002922 | md5sum
> dd: reading `/dev/hdc': Input/output error
> 1938744+0 records in
> 1938744+0 records out
>
> Could someone please tell me:
> 1) Is this the dreaded readahead bug again?

No.

> 2) Can I use dd to verify my burns and avoid the readahead bug?

Yes.

> 3) If not, how can I verify my dvd burn?

You did; the burn is bad.

> Thank you for your help.

--
bill davidsen <[EMAIL PROTECTED]>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979
Re: Using dd to verify a dvd and avoid the readahead bug.
Hi,

Patrick Ohly:
> Writing 150 empty blocks after the last block with user data is required
> by the data CD standard - they are called "post-gap".

Yep. But as the publicly available standard MMC-5 clarifies:

"6.33.3.19 Post-gap
 If a Data track is followed by another kind of track
 (such as an audio track), this Data track ends with a post-gap.
 A post-gap is placed at the end of a Data track, and is part of
 the Data Track. A post-gap does not contain actual user data.
 The minimum length of post-gap is 2 seconds. The Drive does not
 perform any action for a Post-gap.
"

This is not the situation we have with a single TAO track on closed
media, unless you call the end of media "another kind of track".

Also: I do not have this "readahead" problem with SAO sessions. And all
my drives can read TAO tracks up to the last 2 non-data sectors if I
apply SCSI command READ 10. All payload and all padding is retrievable
up to the last byte! It is not an issue of the drives but of the
operating system.

To me it appears that the block device driver of the kernel is overly
trusting towards the TOC of the media. The TOC says payload+2 sectors,
the driver tries to read that many sectors, the last buffer chunk
encounters an error, and the driver does not re-try to read the last
payload sectors in 2 kB steps but pretends they are ill. This explains
nicely why the number of missing bytes varies on my system with the CD
media I insert. It also explains why SAO data sessions do not show the
problem for me.

The standard and the tradition of this whole issue are so messy,
nevertheless, that the man page of my burn backend cdrskin advises
padsize=300k in all its examples with data tracks.

> Later (Feb 15 2003) Joerg changed mkisofs so that it always adds 150
> blocks.

Seems to be a wise decision, although the problem is a media+drive
issue and actually belongs in the realm of the burn software.
But the whole mess is opaque enough that 150 extra sectors are a low
price for universal readability.

Have a nice day :)

Thomas
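The re-try behavior Thomas wishes the driver had can be imitated in
user space. A hedged sketch (this is not the kernel's actual code path,
and the function name is mine): read in large chunks, and when a chunk
fails, salvage it in single 2 kB blocks, so only the truly unreadable
sectors are lost.

```shell
# salvage_read SOURCE DEST COUNT [CHUNK]
# Reads COUNT 2048-byte blocks from SOURCE into DEST. Tries large
# chunks first; on an I/O error the chunk is re-read block by block,
# each block written at its own offset so the good sectors are kept.
salvage_read() {
  src=$1; dst=$2; count=$3; chunk=${4:-32}
  blk=0
  while [ "$blk" -lt "$count" ]; do
    n=$((count - blk))
    [ "$n" -gt "$chunk" ] && n=$chunk
    if ! dd bs=2048 if="$src" skip="$blk" of="$dst" seek="$blk" \
            count="$n" conv=notrunc 2>/dev/null; then
      # Chunk failed: retry in 2 kB steps, ignoring single bad blocks.
      i=0
      while [ "$i" -lt "$n" ]; do
        dd bs=2048 if="$src" skip=$((blk + i)) of="$dst" \
           seek=$((blk + i)) count=1 conv=notrunc 2>/dev/null || true
        i=$((i + 1))
      done
    fi
    blk=$((blk + n))
  done
}
```

On a TAO track one would pass the TOC's payload count (payload+2 would
still leave the two run-out blocks as zeros in the destination file).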
Re: Using dd to verify a dvd and avoid the readahead bug.
Dear Thomas,

you are doing an excellent job explaining CD writing problems; my
thanks for that. However, in this case I feel compelled to comment ;-)

On Tue, 2007-10-02 at 22:20 +0200, Thomas Schmitt wrote:
> Our dear kernels seem to get confused by the inability
> to read these announced sectors.
> The answer was to attach some blocks as sacrifice to
> this bug-or-feature.

Writing 150 empty blocks after the last block with user data is
required by the data CD standard - they are called "post-gap". For a
long time the combination of "mkisofs" + "cdrecord" did not add any
padding unless explicitly asked to (which users most likely didn't
know and thus didn't do), and even then it rounded up instead of adding
exactly 150 blocks. I consider such discs in violation of the standard,
which may lead to non-deterministic behavior like read errors.

> The safe size for padding once used to be 32 kB.
> At some time it grew and an urban legend emerged that
> 150 sectors (= 2 seconds of music) might be the new limit.
> Well, 300 kB seems to be safe, indeed ... up to now.
> Cargo cult.

Not a myth at all ;-) For more information, see my postings in this old
thread here:
http://groups.google.ca/group/mailing.comp.cdwrite/browse_frm/thread/ab41402747ff62af/568f34f142e706bd?tvc=1&q=%22reading+out+data+cds%22&hl=en#568f34f142e706bd

Later (Feb 15 2003) Joerg changed mkisofs so that it always adds 150
blocks.

--
Bye, Patrick Ohly -- [EMAIL PROTECTED]
http://www.estamos.de/
Re: Using dd to verify a dvd and avoid the readahead bug.
"j t" <[EMAIL PROTECTED]> wrote:
> Now I have started getting read errors close to the lead-out, so I
> append 150 2k blocks to the end of the iso file using:
>
> # dd if=/dev/zero bs=2048 count=150 >> myDVD.iso
>
> and I even disable readahead using hdparm:
> # hdparm -a 0 /dev/hdc
>
> But I still get read errors near the end of the dvd:
> # dd if=/dev/hdc bs=2048 count=2002922 | md5sum
> dd: reading `/dev/hdc': Input/output error
> 1938744+0 records in
> 1938744+0 records out

If you would like to understand the reason for this error, you need to
use "readcd". This is the only way to get qualified error messages for
the block.

Jörg

--
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
       [EMAIL PROTECTED] (uni)
       [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
Re: Using dd to verify a dvd and avoid the readahead bug.
On 10/2/07, Thomas Schmitt <[EMAIL PROTECTED]> wrote:
> There is no confirmed sighting of readahead bugs on DVD.

Perhaps that explains why there is no "pad" option in the growisofs
program? (Apart from the option which is passed to mkisofs, of course.)

Thanks,

Jaime
Re: Using dd to verify a dvd and avoid the readahead bug.
Hi,

> # growisofs -dvd-compat -speed=1 -Z /dev/hdc=myDVD.iso
> # dd if=/dev/hdc bs=2048 count=2002922 | md5sum
> dd: reading `/dev/hdc': Input/output error
> 1938744+0 records in
>
> 1) Is this the dreaded readahead bug again?

I doubt it strongly.

  2002922 - 1938744 = 64178 blocks missing

That's 125 MB. No readahead bug loses 125 MB.

The hard "read-ahead" bug is with CDs which have been burned in TAO
mode. This mode appends 2 sectors which cannot be read via the SCSI SPC
command READ but only via SCSI MMC command READ CD (if ever; I did not
try yet). These sectors are nevertheless counted in the TOC info of CD
media as part of the track. One cannot easily determine whether they
are readable or not - unless one tries.

Our dear kernels seem to get confused by the inability to read these
announced sectors. The answer was to attach some blocks as sacrifice to
this bug-or-feature. The safe size for padding once used to be 32 kB.
At some time it grew, and an urban legend emerged that 150 sectors
(= 2 seconds of music) might be the new limit. Well, 300 kB seems to be
safe, indeed ... up to now. Cargo cult.

There is no confirmed sighting of readahead bugs on DVD. We got lots of
other bugs. Maybe they interbreed.

> 2) Can I use dd to verify my burns and avoid the readahead bug?

You can try to skip over the bad spot in order to recover data blocks
with higher addresses. After all, it's a block device with random
access reading. Nevertheless, this will make any decent verifier raise
the red flag.

> 3) If not, how can I verify my dvd burn?

The question is rather how you can burn a thoroughly readable DVD. To
me it looks like your verifier did a good job. Congrats.

Have a nice day :)

Thomas
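The "skip over the bad spot" idea can be put into numbers from the
quoted run. A sketch that only builds the resume command instead of
running it (the 16-block safety margin is my guess, not a measured
value):

```shell
# STOP is where dd stopped, TOTAL the intended block count, MARGIN a
# guessed gap to jump over the damaged area. The printed command would
# resume reading behind the bad spot, at the cost of a broken checksum.
STOP=1938744; TOTAL=2002922; MARGIN=16
SKIP=$((STOP + MARGIN))
CNT=$((TOTAL - SKIP))
echo "dd if=/dev/hdc bs=2048 skip=$SKIP count=$CNT | md5sum"
```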
Re: Using dd to verify a dvd and avoid the readahead bug.
On 10/2/07, j t <[EMAIL PROTECTED]> wrote:
> But I still get read errors near the end of the dvd:
> # dd if=/dev/hdc bs=2048 count=2002922 | md5sum
> dd: reading `/dev/hdc': Input/output error
> 1938744+0 records in
> 1938744+0 records out
>
> Could someone please tell me:
> 1) Is this the dreaded readahead bug again?
> 2) Can I use dd to verify my burns and avoid the readahead bug?
> 3) If not, how can I verify my dvd burn?

I think it's a media problem - I've tried again with more discs (both
the same type and also from a different spindle/manufacturer) and the
problems have gone away - I can now verify the burnt disc successfully
using both "dd" and "readom".

Jaime
Using dd to verify a dvd and avoid the readahead bug.
Hi,

I have an iso file (which contains an iso9660/udf filesystem) that I've
written to a dvd-r using growisofs, thus:

  # growisofs -dvd-compat -speed=1 -Z /dev/hdc=myDVD.iso

In the past, I have been able to check (verify) the burn by finding the
iso size (using "isoinfo -d -i ") and then by comparing the output
from:

  # dd if=myDVD.iso bs=2048 count= | md5sum

with

  # dd if=/dev/hdc bs=2048 count= | md5sum

(and checking /var/log/syslog for any read errors)

Now I have started getting read errors close to the lead-out, so I
append 150 2k blocks to the end of the iso file using:

  # dd if=/dev/zero bs=2048 count=150 >> myDVD.iso

and I even disable readahead using hdparm:

  # hdparm -a 0 /dev/hdc

But I still get read errors near the end of the dvd:

  # dd if=/dev/hdc bs=2048 count=2002922 | md5sum
  dd: reading `/dev/hdc': Input/output error
  1938744+0 records in
  1938744+0 records out

Could someone please tell me:
1) Is this the dreaded readahead bug again?
2) Can I use dd to verify my burns and avoid the readahead bug?
3) If not, how can I verify my dvd burn?

Thank you for your help.
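The comparison described above can be wrapped up as follows. A sketch,
not a tested tool: the block count would normally come from
"isoinfo -d" ("Volume size is: N"); here it is passed in as a
parameter, and the function name is mine.

```shell
# verify_burn ISO DEVICE BLOCKS
# md5sums the first BLOCKS 2048-byte blocks of both the image file and
# the device, and compares the two checksums.
verify_burn() {
  iso=$1; dev=$2; blocks=$3
  sum_file=$(dd if="$iso" bs=2048 count="$blocks" 2>/dev/null | md5sum | cut -d' ' -f1)
  sum_disc=$(dd if="$dev" bs=2048 count="$blocks" 2>/dev/null | md5sum | cut -d' ' -f1)
  if [ "$sum_file" = "$sum_disc" ]; then
    echo "verify OK"
  else
    echo "verify FAILED"
  fi
}
```

Usage for the case in this mail would be something like
`verify_burn myDVD.iso /dev/hdc 2002922` (read errors would still have
to be watched for in syslog, since dd truncates the stream silently).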