Re: Issues burning BD disks from the command line - write failures
Volker Kuhlmann wrote:
> On Thu 09 May 2013 19:01:12 NZST +1200, Thomas Schmitt wrote:
>> This happens only with CDs which were written in write type TAO.
> Ehh, I'm very sure I've seen it with DVDs too, and the read-ahead size
> there was larger.
>> Nevertheless, that is a _read_ problem. Dale has a problem with write
>> errors.
> Sure, but you asked him to test afterwards by reading back.
>> The read-ahead bug has never been observed with DVD or BD, anyway.
> I have to disagree for DVD, and can't speak for BD, not having tried it.
>> To my experience, 128 KB is enough. Tradition is 300 KB, out of a wrong
>> perception of Linux bug and MMC specs. Actually it depends on the size
>> of reading ahead. So it might vary.
> I got so sick of it, I set the value in my script to 2MB to be done with
> it. I know it's too big, but I don't care.
>>> And what are the options for UDF (which is becoming increasingly
>>> necessary)?
>> mkudffs and cp. But for what, particularly ?
> Random-file-access backups. TBH I stopped burning because 4.2GB isn't of
> much use these days, but wouldn't mind burning some larger disks. I used
> ext2 in the past, useless for reading from, but good enough for dd'ing
> back to disk before reading. With larger sizes that becomes a bit
> annoying.

Why useless for reading from? What problems do you have when mounting read-only for use? Haven't done it in a while, but I don't recall doing anything magic, and I certainly have used such media to preserve odd filesystem things like hard links, ACLs, etc. I create an empty file, make a filesystem on it, mount it, copy what I need, umount it, and burn. I had to do a bunch of those two years ago.

--
E. Robert Bogusta
  It seemed like a good idea at the time

--
To UNSUBSCRIBE, email to cdwrite-requ...@other.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@other.debian.org
Archive: http://lists.debian.org/518e815c.9050...@tmr.com
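The zero-padding workaround discussed above (Volker's script appends 2 MB) can be sketched in shell. The file name and the 1 MiB starting size below are invented for the demo; the padding itself is the technique described in the thread:

```shell
# Demo of padding an image with 2 MB of zeros before burning, so that
# the kernel's read-ahead past the end of the recording hits padding
# instead of producing a spurious I/O error. Names and sizes are made up.
image=/tmp/pad-demo.iso
truncate -s 1M "$image"                        # stand-in for a real image
dd if=/dev/zero bs=1M count=2 >> "$image" 2>/dev/null
wc -c < "$image"                               # 1 MiB + 2 MiB = 3145728
```

The padded bytes are ignored by the filesystem inside the image; they only exist so that reads past the filesystem's end still succeed.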
Re: Issues burning BD disks from the command line - write failures
Hi,

> Or you could be dealing with a stupid Linux kernel that hasn't got
> fixed for the last 15 years. If the recording ends at the end of the
> filesystem (common, for obvious reasons) and the size of the filesystem
> is not a multiple of some internal Linux buffer size, the last buffer
> Linux tries to read is incomplete and gets treated with an I/O error,
> although the only error is that of the programmer trying to read past
> the end of the legitimate recording.

This happens only with CDs which were written in write type TAO. The stupidity is equally distributed over Linux and MMC specs. The MMC specs prescribe that a TAO track ends by two non-data sectors. These sectors are counted as part of the track size. Linux blindly believes the announced track size from the CD table-of-content and tries to read the two non-data sectors, too. So far so good. One cannot distinguish TAO from SAO CDs easily and thus has to try. The Linux stupidity is to drop the whole cache tile (I believe it is 128 KB) that would contain those two blocks. This means to also drop up to 124 KB of perfectly readable data.

Nevertheless, that is a _read_ problem. Dale has a problem with write errors. The read-ahead bug has never been observed with DVD or BD, anyway.

> The other good thing to do with Linux is to append 2MByte of zeros at
> the end of the filesystem

To my experience, 128 KB is enough. Tradition is 300 KB, out of a wrong perception of Linux bug and MMC specs. Actually it depends on the size of reading ahead. So it might vary.

>> I do not deem mkisofs to be the best ISO 9660 program. :))
> What are the options, though? (Not counting jokes like woedumb.)

xorriso. If you want ISO 9660 then I dare to say it is better than mkisofs.

> And what are the options for UDF (which is becoming increasingly
> necessary)?

mkudffs and cp. But for what, particularly ? It is a misperception that the specs for DVD and BD would demand UDF. It is the specs for commercial videos which demand UDF.
And the specs for BD demand a UDF variant which mkisofs does not produce either. I have looked into the specs (ECMA-167 and UDF-2.60, not nice to read) and found nothing that would convince me of technical benefits. If some user would approach me with the wish for UDF for video DVD or video BD, then I would start developing ISO 9660 / UDF hybrids. But that would need effort by the user, too. One would need hardware that insists on specs-compliant DVDs. One would need the full specs for video DVD resp. BD. One would need time for testing. No such user showed up yet.

Have a nice day :)

Thomas

Archive: http://lists.debian.org/16264623796868281...@scdbackup.webframe.org
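The mkudffs-and-cp approach named above can be sketched as follows. The image size, file names, and comments are placeholders (mkudffs ships in the udftools package, and the loop mount needs root, so the formatting step is guarded):

```shell
# Build a UDF filesystem in a plain file, fill it via a loop mount,
# then burn the file with growisofs/cdrskin/xorriso. All names and
# sizes here are demo values, not recommendations.
image=/tmp/udf-demo.img
truncate -s 64M "$image"               # pick the size of the target medium
if command -v mkudffs >/dev/null 2>&1; then
    mkudffs "$image"                   # format the file as UDF
    # Filling it needs a root-only loop mount, e.g.:
    #   mount -o loop "$image" /mnt && cp -a /data/. /mnt/ && umount /mnt
fi
wc -c < "$image"                       # 64 MiB = 67108864 bytes
```

This is the same create-format-mount-copy-burn cycle that Dale describes later in the thread for his BD-R backups.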
Re: Issues burning BD disks from the command line - write failures
On Thu 09 May 2013 19:01:12 NZST +1200, Thomas Schmitt wrote:
> This happens only with CDs which were written in write type TAO.

Ehh, I'm very sure I've seen it with DVDs too, and the read-ahead size there was larger.

> Nevertheless, that is a _read_ problem. Dale has a problem with write
> errors.

Sure, but you asked him to test afterwards by reading back.

> The read-ahead bug has never been observed with DVD or BD, anyway.

I have to disagree for DVD, and can't speak for BD, not having tried it.

> To my experience, 128 KB is enough. Tradition is 300 KB, out of a wrong
> perception of Linux bug and MMC specs. Actually it depends on the size
> of reading ahead. So it might vary.

I got so sick of it, I set the value in my script to 2MB to be done with it. I know it's too big, but I don't care.

>> And what are the options for UDF (which is becoming increasingly
>> necessary)?
> mkudffs and cp. But for what, particularly ?

Random-file-access backups. TBH I stopped burning because 4.2GB isn't of much use these days, but wouldn't mind burning some larger disks. I used ext2 in the past, useless for reading from, but good enough for dd'ing back to disk before reading. With larger sizes that becomes a bit annoying.

Thanks for your suggestions,

Volker

--
Volker Kuhlmann     http://volker.dnsalias.net/
Please do not CC list postings to me.

Archive: http://lists.debian.org/20130509081559.gg25...@paradise.net.nz
Re: Issues burning BD disks from the command line - write failures
Hi,

> Ehh, I'm very sure I've seen it with DVDs too, and the read-ahead size
> there was larger.

In that case we should try to reproduce the problem. At least the Linux kernel would need another reason why to misperceive the size of the medium in the first place. In case of CD it is obviously the MMC compliant inclusion of two non-data blocks at the end of TAO tracks. The block device driver does know (at least roughly) the size of a CD.

I believe to see the size determination in my old /usr/src/linux/drivers/scsi/sr.c in function

  static void get_sectorsize(struct scsi_cd *cd)

by

  cmd[0] = READ_CAPACITY;
  ...
  the_result = scsi_execute_req(cd->device, cmd, DMA_FROM_DEVICE,
                                buffer, 8, NULL, SR_TIMEOUT, MAX_RETRIES);
  ...
  cd->capacity = 1 + ((buffer[0] << 24) | (buffer[1] << 16) |
                      (buffer[2] << 8) | buffer[3]);

This code matches the MMC description of the result of SCSI command 25h READ CAPACITY, which is supposed to tell the capacity "[...] with respect to reading operations". In the context of MMC, reading is not only reading of data, but also reading of non-data sectors. Thus, READ CAPACITY counts the two non-data sectors of TAO as readable. (Just not by command 2Bh READ(10), but by BEh READ CD.)

As said, the fault of Linux is not to handle the last two blocks of CD tracks specially, resp. not to retry by reading single blocks after reading the last cache tile has failed. It has to be aware that those two blocks may or may not be part of the track's payload data. Some trial-and-error is unavoidable here. But the error should not be forwarded to the user and it should not eat up more than the two questionable blocks.

>> Nevertheless, that is a _read_ problem. Dale has a problem with write
>> errors.
> Sure, but you asked him to test afterwards by reading back.

I see. Well, if there is a read-ahead bug with DVD then the checkreading by dd could indeed produce false I/O errors at the very end of the track.
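The capacity computation quoted from sr.c is a big-endian 32-bit decode of the last readable LBA, plus one. A toy illustration in Python; the 8-byte READ CAPACITY response buffer here is invented for the demo:

```python
# Mirror of the sr.c computation quoted above: bytes 0-3 of the
# READ CAPACITY (25h) response hold the last readable LBA, big-endian,
# and bytes 4-7 hold the block length. Buffer contents are made up.
buf = bytes([0x00, 0x19, 0xF3, 0x16,    # last LBA = 0x0019F316
             0x00, 0x00, 0x08, 0x00])   # block length = 0x800 = 2048
last_lba = (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3]
capacity = 1 + last_lba                 # count of readable blocks
block_len = (buf[4] << 24) | (buf[5] << 16) | (buf[6] << 8) | buf[7]
print(capacity, block_len)              # 1700631 2048
```

Because READ CAPACITY counts the two trailing non-data sectors of a TAO track as readable, this capacity can overstate the data payload by two blocks, which is exactly the trap described above.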
A safer proposal would then be

  xorriso -outdev /dev/sr0 -check_media use=outdev --

If the medium is DVD+RW or BD-RE then there will be trailing stuff anyway. One will have again to compute the size of the valid payload like with my dd proposal, and then use -check_media option max_lba= :

  xorriso -outdev /dev/sr0 -check_media max_lba=1700758 use=outdev --

(-outdev has to be used if the medium content is not an ISO 9660 filesystem. No writing will happen, because no xorriso command for creating or changing an ISO image is used here. Moreover, xorriso will not append data to a non-blank medium which it did not acquire as input drive. So this is safe.)

>> mkudffs and cp. But for what, particularly ?
> Random-file-access backups.

That's the reason why I began to develop xorriso. It can record ACLs and xattr, can register MD5 checksums of medium and of each single data file, does incremental backups based on either MD5 or on inode properties, and can checkread its own backups without the need for seeing the original files.

Example from man xorriso: This changes the directory trees /projects and /personal_mail in the ISO image so that they become exact copies of their disk counterparts. ISO file objects get created, deleted or get their attributes adjusted accordingly. ACL, xattr, hard links and MD5 checksums will be recorded. Accelerated comparison is enabled at the expense of potentially larger backup size. Only media with the expected volume ID or blank media are accepted. Files with names matching *.o or *.swp get excluded explicitly. When done with writing the new session gets checked by its recorded MD5.
  $ xorriso \
      -abort_on FATAL \
      -for_backup -disk_dev_ino on \
      -assert_volid 'PROJECTS_MAIL_*' FATAL \
      -dev /dev/sr0 \
      -volid PROJECTS_MAIL_$(date '+%Y_%m_%d_%H%M%S') \
      -not_leaf '*.o' -not_leaf '*.swp' \
      -update_r /home/thomas/projects /projects \
      -update_r /home/thomas/personal_mail /personal_mail \
      -commit -toc -check_md5 FAILURE -- -eject all

To be used several times on the same medium, whenever an update of the two disk trees to the medium is desired. Begin with a blank medium and update it until the run fails gracefully due to lack of remaining space on the old one. [...] To apply zisofs compression to those data files which get newly copied from the local filesystem, insert these commands immediately before -commit :

  -hardlinks perform_update \
  -find / -type f -pending_data -exec set_filter --zisofs -- \

zisofs needs zlib and its development headers at compile time of xorriso. Linux kernels [...]
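The "compute the size of the valid payload" step mentioned above is plain arithmetic: the byte size of the image divided by the 2048-byte block size. A shell sketch; the image size below is an invented demo value (chosen so the result reproduces the 1700758 block count used in this thread's examples):

```shell
# Turn the byte size of an ISO/UDF image into its 2048-byte block
# count, as used for xorriso -check_media max_lba= or a dd count.
# The byte size is a demo value, not a real measurement.
iso_bytes=3483152384
blocks=$((iso_bytes / 2048))
echo "$blocks"    # 1700758
```

In practice the byte size would come from `wc -c < image.iso` or `ls -l`.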
Re: Issues burning BD disks from the command line - write failures
Hi,

> dvdrecord turned into a dead project in 2001 - 6 months after it
> started.

But no new forks have arisen since I began to compete with you.

> cdrkit turned into a dead project in May 2007 which is also 6 months
> after the start.

cdrkit is the fork by Debian. Last release was in 2010. Now look at contemporary Debian ISO images:

  dd if=debian-7.0.0-amd64-netinst.iso count=100 | \
    strings | fgrep XORRISO | sed -e 's/ //g'

says

  XORRISO-1.2.6 2013.01.08.103001, LIBISOBURN-1.2.6, LIBISOFS-1.2.6,
  LIBBURN-1.2.6 201305041100

>>   export MKISOFS=xorrisofs
>>   growisofs ...
> Why would you like to do that as long as there is the free maintained
> mkisofs?

If not for technical reasons then for the fact that mkisofs has been thrown out of major Linux distros. (Expelled for social incompatibility of distro maintainers and you, to be clear.) Elsewise: MD5s ? ACLs ? zisofs ? File filters for encryption or compression ? Support without accusations towards users ?

But of course one does not need growisofs, because xorriso can burn to all the media which growisofs can burn. I learned a lot from growisofs. Chapeau towards Andy Polyakov.

>> So is libburn, libisofs, libisoburn, cdrskin, and xorriso.
> But these are based on a conceptual mistake: they assume that you need
> no special privileges to write optical media, which is wrong.

My stuff burns CD, DVD, and BD on Linux, FreeBSD, Solaris. On Linux and FreeBSD it needs rw-permission for the device file. On Solaris it needs pfexec privileges basic,sys_devices and r-permission for the device file in /dev/rdsk. On Linux, you need superuser privileges only for SCSI commands which are not listed by the kernel as legitimate commands. All MMC commands are allowed. Also all commands from SPC and SBC, which are needed for MMC drives. Not allowed for normal users are the manufacturer-proprietary commands which are issued by the quality checker QPxTool.

> Cdrtools work on: [...]

Impressive. Chapeau again. (But where shall I get an Apollo MC68000 machine ? They were really nice, back in the early 1990s.)

Most user feedback about libburn comes from Linux. The other stuff runs on any X/Open compliant system.

> I have seen similar claims from a lot of people regarding their
> software.

Having had a Unix life before Linux, I know how to write portable C. xorriso has been ported to the systems to which GRUB2 is ported.

> But it is not needed, as you could use mkisofs...

Not with GRUB2 script grub-mkrescue. I added several features on request of Vladimir Serbinenko (and killed the GRUB2 fork of mkisofs before it could get released). Further, Vladimir contributed code for HFS+ hybrid filesystems. It is about production of MBR, GPT, and APM, for the purpose of booting from CD and USB stick by BIOS, EFI, and Apple firmware. As you see, it makes sense to check software features from time to time.

> The users have the choice ...

... if their distro maintainers aren't refusing to package cdrtools, that is.

Have a nice day :)

Thomas

Archive: http://lists.debian.org/1082623832021464...@scdbackup.webframe.org
Re: Issues burning BD disks from the command line - write failures
Hi,

> or a bad enclosure maybe?

Rather not. The growisofs error message indicates media problems.

> Current: DVD-R sequential recording

That's a different game ...

>   Sense Key: 0x3 Medium Error, deferred error, Segment 0
>   Sense Code: 0x0C Qual 0x00 (write error) Fru 0x0

... but quite the same error as with growisofs.

> like there is some sort of issue with DMA access to the drive?

No transport problems to see. The drive dislikes the medium. (Any medium ?)

> Do I have a coincidence not only getting a brand new dead burner but
> some sort of bus damage to this system?

If the burner is new then complain to the seller and demand replacement.

Have a nice day :)

Thomas

Archive: http://lists.debian.org/22081623689829268...@scdbackup.webframe.org
Re: Issues burning BD disks from the command line - write failures
The second set of error messages were from cdrtools from the brandonsnider ppa. The error messages are much more verbose -- not that I'm smart enough to be able to really understand them that well. Even with cdrtools, I can't get BD media to burn. I can get DVD and CD media to burn in the internally attached burner - sometimes.

In order to remove the BD media as a culprit, I have purchased another batch of BD-R media from a different manufacturer. Hope to get that test done today.

On 05/08/2013 03:53 AM, Joerg Schilling wrote:
> Dale dale.joll...@yahoo.com wrote:
>> [...]
>> I am trying to backup large .wav files (1.3 - 2+GB in size) to BD-R
>> media.
> Your problem is that growisofs does not print useful error messages for
> failed commands. Why don't you use cdrtools?
> ftp://ftp.berlios.de/pub/cdrecord/alpha/
> Jörg

Archive: http://lists.debian.org/518a4f92.90...@yahoo.com
Re: Issues burning BD disks from the command line - write failures
Hi,

> The second set of error messages were from cdrtools from the
> brandonsnider ppa.

Obviously cdrecord resp. one of its forks, indeed.

> The error messages are much more verbose -- not that I'm smart enough
> to be able to really understand them that well.

Actually the beef is the sense code triple 3,0C,00. It is listed in MMC-6 as

  3 0C 00 WRITE ERROR

More cannot be told from the drive's reply, which was

  Sense Bytes: 71 00 03 00 00 00 00 0A 00 00 00 00 0C 00 00 00

> Even with cdrtools, I can't get BD media to burn.

I would not use cdrecord or its forks for DVD or BD. growisofs is fully ok for DVD. It has that error at the end of BD-R burning, though. I use my own backends which are based on my libburn:

  cdrskin ... accepts many cdrecord options
  xorriso ... integrated ISO 9660 filesystem generator and burn program

libburn is tested daily by backups on CD, DVD, and BD media. Both programs should be available in Linux distros. cdrskin might be offered as part of package libburn. xorriso might be offered as part of package libisoburn.

> I can get DVD and CD media to burn in the internally attached burner -
> sometimes.

They become unreliable when they get old. But usually that needs 3 years or longer. I have a little collection of half-dead burners reaching back ten years. Traditionally mine fail first on DVD-RW and DVD+R DL. But a new drive has to burn newly purchased media which it has in its compatibility list. (The manufacturers often publish a list of tested media products. That is quite futile because the seller brands often change their manufacturer.)

In case you experience a successful burn, you should in any case try whether the medium is fully readable. A coarse test is: Divide the size of the ISO image by 2048, e.g. yielding 1700758. Let dd copy the computed number of blocks from medium to /dev/null:

  dd if=/dev/sr1 bs=2048 count=1700758 of=/dev/null

If this ends with an I/O error (often near the end of the medium) then the burn was not successful.
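The sense bytes quoted in this message decode mechanically: in fixed-format sense data, byte 0 carries the response code (0x71 means deferred error), the sense key sits in the low nibble of byte 2, and the ASC/ASCQ pair in bytes 12 and 13. A small Python illustration of that decode:

```python
# Decode the fixed-format SCSI sense data quoted above into the
# key/ASC/ASCQ triple that maps to "3 0C 00 WRITE ERROR" in MMC-6.
sense = bytes.fromhex("71 00 03 00 00 00 00 0A 00 00 00 00 0C 00 00 00")
response_code = sense[0]          # 0x71: deferred error, fixed format
key = sense[2] & 0x0F             # sense key
asc, ascq = sense[12], sense[13]  # additional sense code / qualifier
print(f"{key:X},{asc:02X},{ascq:02X}")   # 3,0C,00
```

Byte 7 (0x0A) is the "additional sense length", which is why the reply is 8 + 10 = 16 bytes long.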
> In order to remove the BD media as a culprit,

You already tested with DVD-R. Those must work, or else the drive is ill.

Have a nice day :)

Thomas

Archive: http://lists.debian.org/12603623869371846...@scdbackup.webframe.org
Re: Issues burning BD disks from the command line - write failures
Hi,

Joerg Schilling wrote:
> There are only dead forks from cdrtools - don't use them.

Probably I am the one who is to blame for killing them. Programmers who consider to fork can as well try libburn and contact me for feature requests. That's what happened to Debian's cdrkit. Meanwhile xorriso produces all their ISOs except for arch powerpc, which needs a HFS filesystem. (I hoped for that arch to die out. But alas, the virtual machines seem to give it eternal life.)

> Growisofs refers to mkisofs.

One can use growisofs with xorriso's emulation mode xorrisofs too:

  export MKISOFS=xorrisofs
  growisofs ...

It will not do -udf, though. Thus you cannot create an official Video DVD by xorriso. On the other hand, one can use xorriso's unique features of filesystem manipulation, checksumming, compression, encryption, etc.

> Cdrtools is actively maintained

So is libburn, libisofs, libisoburn, cdrskin, and xorriso.

> and runs on virtually any platform.

libburn works with optical media on Linux, FreeBSD, and Solaris. The other stuff runs on any X/Open compliant system. xorriso has been ported to the systems to which GRUB2 is ported.

> Other writing software does not include mkisofs or similar software.

I do not deem mkisofs to be the best ISO 9660 program. :)) But cdrecord is the best CD writing program, I confess. Chapeau. We both cannot serve for official Video Blu-ray, btw. (Even learning all specs would cost 3000 USD, to my knowledge.) You often told me that mkisofs is not a backup program. Well, xorriso is.

Have a nice day :)

Thomas

Archive: http://lists.debian.org/4228623856108890...@scdbackup.webframe.org
Re: Issues burning BD disks from the command line - write failures
On Thu 09 May 2013 02:36:00 NZST +1200, Thomas Schmitt wrote:
> I do not deem mkisofs to be the best ISO 9660 program. :))

What are the options, though? (Not counting jokes like woedumb.)

And what are the options for UDF (which is becoming increasingly necessary)?

Thanks,

Volker

--
Volker Kuhlmann is list0570 with the domain in header.
http://volker.dnsalias.net/     Please do not CC list postings to me.

Archive: http://lists.debian.org/20130509002149.ge25...@paradise.net.nz
Issues burning BD disks from the command line - write failures
First off I want to say thank you to all of you who work on this software that makes it possible to do this stuff. I'm more impressed than you can imagine. Now that I have hopefully 'buttered you up' a little bit, I'm in need of assistance, and I have exhausted my google skills in an attempt to find an answer to the issue. At this point I'm trying trial and error, and at $1.25 a disc, it's getting expensive. A (large) part of my issue is me; I had this working at one point, didn't write down _exactly_ all the requirements and process I followed, and there have been quite a few kernel and program updates since it worked.

I am trying to back up large .wav files (1.3 - 2+GB in size) to BD-R media. When I had it working, I would create a 25GB file with truncate, use mkudffs to format the file as a filesystem, loopback mount it, copy data over to the file system, sync a couple of times, dismount, and then use cdrecord or growisofs (I have tried both) to burn the image to the media. I have two burners, on two different systems, so I'm pretty sure I'm not having a hardware issue.
One is attached via a SAS controller via an eSATA breakout cable to an external eSATA enclosure:

  description: SCSI CD-ROM
  product: BD-RE WH14NS40
  vendor: HL-DT-ST
  physical id: 0.0.0
  bus info: scsi@4:0.0.0
  logical name: /dev/cdrom1
  logical name: /dev/cdrw1
  logical name: /dev/dvd1
  logical name: /dev/dvdrw1
  logical name: /dev/sr1
  version: 1.00
  capabilities: removable audio
  configuration: status=ready
  *-medium
      physical id: 0
      logical name: /dev/cdrom1

The other burner is connected via internal onboard SATA controller:

  description: DVD-RAM writer
  product: BD-RE BH08LS20
  vendor: HL-DT-ST
  physical id: 0.1.0
  bus info: scsi@2:0.1.0
  logical name: /dev/cdrom1
  logical name: /dev/cdrw1
  logical name: /dev/dvd1
  logical name: /dev/dvdrw1
  logical name: /dev/sr0
  version: 1.00
  capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
  configuration: ansiversion=5 status=nodisc

I'm now _way_ behind on moving this stuff off to removable media and I'm starting to get a bit desperate. This last attempt, on the external burner using growisofs, new server and fresh OS load, has finally prompted me to annoy you with a plea for help. This is as far as it got that time:

  growisofs -speed=1 -Z /dev/sr1=/mnt/md9/bdburn/BDBurn.udf
  Executing 'builtin_dd if=/mnt/md9/bdburn/BDBurn.udf of=/dev/sr1 obs=32k seek=0'
  /dev/sr1: pre-formatting blank BD-R for 24.8GB...
  /dev/sr1: Current Write Speed is 2.0x4390KBps.
    949059584/250 ( 3.8%) @0.3x, remaining 216:40 RBU 100.0% UBU 29.2%
    951975936/250 ( 3.8%) @0.2x, remaining 217:40 RBU 100.0% UBU 20.8%
    954499072/250 ( 3.8%) @0.2x, remaining 218:19 RBU 100.0% UBU 12.5%
  :-[ WRITE@LBA=720a0h failed with SK=3h/WRITE ERROR]: Input/output error
  write failed: Input/output error
  /dev/sr1: flushing cache
  /dev/sr1: closing track
  /dev/sr1: closing session
  :-[ CLOSE SESSION failed with SK=5h/INVALID FIELD IN CDB]: Input/output error
  /dev/sr1: reloading tray

Am I creating too large of a file to burn to the media - I note that growisofs is saying preformatting to 24.8 GB?
This is the size I have used in the past that worked. This is my actual UDF file size:

  ls -al /mnt/md9/bdburn/BDBurn.udf
  -rw-r--r-- 1 root root 250 May  6 17:39 /mnt/md9/bdburn/BDBurn.udf

_any_ help in getting this sorted will be greatly appreciated.

Archive: http://lists.debian.org/51891f73.1000...@yahoo.com
Re: Issues burning BD disks from the command line - write failures
Hi,

>   954499072/250 ( 3.8%) @0.2x, remaining 218:19 RBU 100.0% UBU 12.5%
> WRITE@LBA=720a0h failed with SK=3h/WRITE ERROR]: Input/output error

This is a failure of the drive to write to the medium. It has nothing to do with your preparations of BDBurn.udf, but rather with the relation of drive and medium. I.e. they do not like each other (any more). The speed is very low. Possibly the drive relocated many blocks to the Spare Area. Possibly that Spare Area is now full and further bad blocks could not be replaced by spares.

> CLOSE SESSION failed with SK=5h/INVALID FIELD IN CDB]: Input/output error

This is not the reason for the failure. But it could be a known growisofs bug that causes an error message at the end of a burn run. Andy Polyakov, the author of growisofs, stated that it is harmless. (Some doubts have arisen meanwhile, though.)

> Am I creating too large of a file to burn to the media

growisofs is supposed to refuse in that case. But it started writing and failed early (after only 1 GB).

> I note that growisofs is saying preformatting to 24.8 GB?

This stems from growisofs' habit to format BD-R by default. To my own experience (with libburn, not with growisofs) BD-R are more likely to fail if they are formatted. This is contrary to the theoretical advantages of formatting (Defect Management).

> _any_ help in getting this sorted will be greatly appreciated.

It might help to try writing to unformatted BD-R. growisofs option -use-the-force-luke=spare:none will prevent formatting of a blank BD-R before writing starts:

  growisofs -speed=1 -use-the-force-luke=spare:none \
            -Z /dev/sr1=/mnt/md9/bdburn/BDBurn.udf

My own programs cdrskin and xorriso do not format BD-R by default:

  cdrskin -v speed=1 dev=/dev/sr1 /mnt/md9/bdburn/BDBurn.udf
  xorriso -as cdrecord -v speed=1 dev=/dev/sr1 /mnt/md9/bdburn/BDBurn.udf

Since growisofs announced

  /dev/sr1: Current Write Speed is 2.0x4390KBps.

I assume that the drive does not offer speed 1.
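The failing address in the log can be cross-checked against the progress counter: LBA 720a0h times 2048 bytes per logical block lands just under 1 GB, consistent with the burn having failed early. In Python:

```python
# Convert the failed write address from the growisofs log
# ("WRITE@LBA=720a0h") into a byte offset; BD media use 2048-byte
# logical blocks.
lba = 0x720a0
offset_bytes = lba * 2048
print(offset_bytes)   # 956628992, roughly 0.96 GB into the burn
```

This matches the ~954 MB shown in the last progress line before the WRITE error.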
You may inquire the list of speeds by

  dvd+rw-mediainfo /dev/sr1

Look for output lines like

  Write Speed #0:       4.0x4495=17980KB/s
  Speed Descriptor#0:   00/11826175 R@8.0x4495=35960KB/s W@4.0x4495=17980KB/s

(this is RITEK/BR2 media on an Optiarc BD RW BD-5300S)

Or perform

  cdrskin dev=/dev/sr1 --list_speeds

or

  xorriso -outdev /dev/sr1 -list_speeds

and look at the end of their text output.

Have a nice day :)

Thomas

Archive: http://lists.debian.org/12401623645803380...@scdbackup.webframe.org
Re: Issues burning BD disks from the command line - write failures
I put in a fresh new, blank disc and I got some interesting errors -- I'll
try this on the other system with the other burner here in a bit.

  ls -al /dev/sr*
  brw-rw---- 1 root cdrom 11, 0 May  7 13:02 /dev/sr0
  brw-rw---- 1 root cdrom 11, 1 May  7 13:02 /dev/sr1

  # growisofs -speed=1 -use-the-force-luke=spare:none -Z /dev/sr1=/mnt/md9/bdburn/BDBurn.udf
  :-[ READ FORMAT CAPACITIES failed with SK=3h/ASC=19h/ACQ=00h]: Input/output error

  # dvd+rw-mediainfo /dev/sr1
  INQUIRY: [HL-DT-ST][BD-RE WH14NS40 ][1.00]
  GET [CURRENT] CONFIGURATION:
   Mounted Media:         41h, BD-R SRM
   Media ID:              OTCBDR/001
   Current Write Speed:   4.0x4495=17984KB/s
   Write Speed #0:        4.0x4495=17984KB/s
   Write Speed #1:        2.0x4495=8992KB/s
   Speed Descriptor#0:    00/12219391 R@6.0x4495=26976KB/s W@4.0x4495=17984KB/s
   Speed Descriptor#1:    00/12219391 R@6.0x4495=26976KB/s W@2.0x4495=8992KB/s
  :-[ READ BD SPARE INFORMATION failed with SK=5h/INVALID FIELD IN CDB]: Input/output error
  READ DISC INFORMATION:
   Disc status:           blank
   Number of Sessions:    1
   State of Last Session: empty
   Next Track:            1
   Number of Tracks:      1
  READ FORMAT CAPACITIES:
  :-[ READ FORMAT CAPACITIES failed with SK=3h/ASC=19h/ACQ=00h]: Input/output error
  :-[ READ TRACK INFORMATION failed with SK=3h/ASC=19h/ACQ=00h]: Input/output error

On 05/07/2013 12:52 PM, Thomas Schmitt wrote:
> This is a failure of the drive to write to the medium. [...]
> > CLOSE SESSION failed with SK=5h/INVALID FIELD IN CDB]: Input/output error
> This is not the reason of failure.
--
Archive: http://lists.debian.org/51894a3e.1030...@yahoo.com
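Before retrying a burn on a possibly spoiled disc, it can help to confirm that dvd+rw-mediainfo still reports the medium as blank, as it does in the output above. The sketch below is not from the thread; `disc_is_blank` is a hypothetical helper that checks for the "Disc status: blank" line.

```shell
# Hypothetical pre-burn check: succeed only if the mediainfo text on stdin
# reports a blank disc.
disc_is_blank() {
    grep -q '^ *Disc status: *blank'
}

# Sample lines copied from the thread, standing in for real drive output:
sample=' READ DISC INFORMATION:
 Disc status: blank
 Number of Sessions: 1'

if printf '%s\n' "$sample" | disc_is_blank; then
    echo "medium is blank, safe to burn"
fi
```

In practice the input would come from `dvd+rw-mediainfo /dev/sr1`, and a script would abort the growisofs run when the check fails.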
Re: Issues burning BD disks from the command line - write failures
Hi,

> :-[ READ FORMAT CAPACITIES failed with SK=3h/ASC=19h/ACQ=00h]:

3,19,00 is not listed among the official SCSI error codes. 3,x,y means
"Medium error"; the other two components would tell what kind of medium
error. E.g. 3,0C,00 would be "Write error".

> :-[ READ BD SPARE INFORMATION failed with SK=5h/INVALID FIELD IN CDB]:

It looks like the drive refuses to tell anything about BD peculiarities.

> Media ID: OTCBDR/001

First time I see Optodisc Technology Corporation as a manufacturer. My
list says that they are 1x-4x HTL (high-to-low) media. (The other type,
LTH, is quite problematic. But HTL should be ok.)

Well, if more than one drive fails with these, then you will have to try
a different medium type, e.g. with a different nominal speed (Optodisc has
1x-6x OTCBDR/002) or from a brand that sells media from a different
manufacturer (brand Verbatim is VERBAT or MBI). BD-RE are more expensive
than BD-R, but usually less error prone. At least you would surely buy a
different medium type. (And you most surely can revive spoiled BD-RE as
soon as you find a drive which can handle them.)

Have a nice day :)

Thomas

--
Archive: http://lists.debian.org/32514623672553567...@scdbackup.webframe.org
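Thomas's media-ID advice lends itself to a small lookup. The sketch below is illustrative only: `media_id` is a hypothetical helper that extracts the "Media ID" field from dvd+rw-mediainfo text, and the ID list in the `case` is a tiny sample based on the codes mentioned in this thread, not an authoritative database.

```shell
# Hypothetical helper: print the Media ID field from mediainfo text on stdin.
media_id() {
    sed -n 's/^ *Media ID: *//p'
}

# Sample lines copied from the thread:
sample=' Mounted Media: 41h, BD-R SRM
 Media ID: OTCBDR/001'

id=$(printf '%s\n' "$sample" | media_id)
case "$id" in
    OTCBDR/001) echo "$id: Optodisc 1x-4x HTL; consider other media if burns keep failing" ;;
    VERBAT*|MBI*) echo "$id: Verbatim-branded media" ;;
    *) echo "$id: media ID not in sample list" ;;
esac
```

A real script would feed it `dvd+rw-mediainfo /dev/srX` output and maintain its own list of known-good and known-bad IDs per drive.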