Re: New tool for packing files into an ISO image

2012-01-09 Thread Bill Davidsen

Tom Horsley wrote:

On Thu, 8 Dec 2011 18:50:46 -0800
BitBucket wrote:

   

Tom:

I'm a bit behind the curve (as a matter of policy, mostly).  What would be
an 'old' tool for packing files into an ISO image that you were improving
upon?  Looking for Windows GUI-based tools here, to pack offloads onto DVDs.
 

As far as I know, there is no old tool for doing this. This one is
new because I just wrote it, not because it replicates the function
of some old tool (at least I couldn't find any existing tool
or I probably wouldn't have written this one :-).


   
There is a useful tool for determining what will fit on a media size, 
and it has been around since the mid 1990s, called breaker. It takes as 
input a list of items and sizes, such as are created by

  ls -s | sort -n
or
  du -S | sort -n
and you provide the size of the media, optional rounding up in size of 
each item, optional fixed overhead per item (tar/cpio headers), and have 
options to generate a file list in one-file-per-media format and 
optionally use a format suitable for the "graft-points" option in mkisofs.
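The core idea can be illustrated in a few lines of awk. This is only a sketch of a greedy first-fit split, not breaker's actual algorithm, option set, or output format, and the capacity value is an assumption:

```shell
#!/bin/sh
# Sketch only: greedy first-fit split of a "size item" list into
# media-sized groups, in the spirit of breaker (NOT its real code).
# Input format matches `du -S | sort -n`: size in KB, then a path.
media_kb=700000   # roughly a 700MB CD; adjust for DVD media

result=$(printf '%s\n' \
  '100000 ./photos' \
  '300000 ./music' \
  '350000 ./video' \
  '200000 ./docs' |
awk -v media="$media_kb" '
BEGIN { disc = 1 }
{
    # if this item would overflow the current disc, start a new one
    if (used + $1 > media) { disc++; used = 0 }
    used += $1
    print "disc" disc ": " $2
}')
echo "$result"
```

A real tool would also apply the per-item rounding and tar/cpio header overhead that breaker's options describe.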


It's still on public.tmr.com as source. I started it before CD burners 
were common, when I was backing up a massive 600MB drive to 60MB tapes. 
The problem hasn't changed, the numbers have just gotten bigger. :-(


--
Bill Davidsen
  We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination.  -me, 2010




--
To UNSUBSCRIBE, email to cdwrite-requ...@other.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@other.debian.org
Archive: http://lists.debian.org/4f0b43c4.9050...@tmr.com



Re: Linux, ISOFS, multi-extent files: what's the status?

2010-01-18 Thread Bill Davidsen

Volker Kuhlmann wrote:

On Sun 10 Jan 2010 10:15:10 NZDT +1300, Bill Davidsen wrote:

  
There was another error having to do with reading data at the end of an 
image. Due to read ahead settings a read past end of data occurred and the 
(valid) partial data was not returned to the user program. Might that be 
what you are remembering?



Thanks for mentioning it. Since the mid-90s the kernel produces I/O
errors reading the last blocks of an ISO image from actual disk because
of a read-ahead function. Turning off read-ahead is not sufficient to
prevent this error, turning off DMA is also necessary (presumably that
has its own read-ahead too). This problem suddenly became worse again a
few years ago when DVDs arrived (larger block size, larger read-ahead). 
I can't say whether it's ever been fixed completely, I always use a

workaround for my own disks, but haven't seen it much on other disks any
more.
  


I have just turned down readahead with blockdev and my issues have gone 
away, but like you I haven't seen it much in a while.
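For reference, the readahead adjustment can be done with blockdev. The device name and values here are illustrative, and the commands need root:

```shell
# Illustrative only -- device name and values are assumptions.
# Check the current readahead (in 512-byte sectors):
blockdev --getra /dev/sr0
# Turn readahead down (or off entirely with 0) for the optical drive:
blockdev --setra 0 /dev/sr0
```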



--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein





Re: Linux, ISOFS, multi-extent files: what's the status?

2010-01-09 Thread Bill Davidsen

Joerg Schilling wrote:

Giulio  wrote:

  

I'd like to understand what the pitfalls are, if any, in using multi-extent
files, as enabled by the mkisofs "--iso-level 3" option for files larger than
4GiB-2, on Linux.

I'm using 
	- mkisofs 2.01.01a69

- kernel 2.6.18-164.9.1.el5 (RHEL5)





When I started to implement support for multi-extent files in Summer 2006
in mkisofs, Linux had a problem with reading multi-extent files that are not 
a multiple of 2048 bytes. IIRC, this problem disappeared in 2007 already. 
BTW: IIRC, I received an I/O error for the last read before the bug was fixed.
  


There was another error having to do with reading data at the end of an 
image. Due to read ahead settings a read past end of data occurred and 
the (valid) partial data was not returned to the user program. Might 
that be what you are remembering?


--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein



Re: NO SEEK COMPLETE: Input/output error when burning DVD+R DL

2010-01-04 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

  

growisofs -dvd-compat -speed=2 -Z /dev/sr0=/setup/myFile.iso
/dev/sr0: splitting layers at 1644752 blocks
:-[ SEND DVD+R DOUBLE LAYER RECORDING INFORMATION failed with SK=3h/NO
SEEK COMPLETE]: Input/output error
Any ideas what is the problem and how to overcome it ?



The drive does not like the command that shall
tell it at which address to switch to the
second layer.

The command is issued by  plus_r_dl_split()  in
dvd+rw-tools-7.1/growisofs_mmc.cpp.
I read in the same file:

  if (profile==0x2B && next_track==1 && dvd_compat && leadout)
  plus_r_dl_split (cmd,leadout);

and in growisofs.c

  intdvd_compat=0, ...
  ...
  else if (!strcmp(opt,"-dvd-compat"))
  {   if (poor_man<0) poor_man = 1;
  dvd_compat++;
  ...
  else if (!strncmp(opt,"-use-the-force-luke",19))
  ...
  if ((o=strstr(s,"dao")))
  {   dvd_compat  += 256;
  ...
  else if (!strcmp(opt,"-dvd-video"))
  {   if (poor_man<0) poor_man = 1;
  dvd_compat++,   growisofs_argc++;
 


So it might be worth trying a run without
options
  -dvd-compat
  -use-the-force-luke=dao
  -dvd-video
  


I should definitely leave the force unused.

--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein





Re: how to rescue this backup data

2010-01-04 Thread Bill Davidsen

Zhang Weiwu wrote:
Hello. We have a website backup that has to be recovered thanks to a 
webserver hard disk crash. However the person who burned the backup 
seems to have done it wrong. This is what we believe most 
likely happened:


a. he had a DVD+R that had a session on it.
b. he created a backup ISO image by merging that session:
 $ genisoimage -C ... -M /dev/sr0 -o backup.iso website_backup/
c. he burnt the backup to a /different/ DVD+R that also had a session:
 $ wodim dev=/dev/sr1 backup.iso
  (or, it might also have been "$ growisofs -M /dev/sr1=backup.iso")
d. he happily labeled the DVD+R in /dev/sr1 as "website backup". No 
double check. Done.



Now, we have the DVD+R that was in /dev/sr1 when he did step c, 
because it is correctly labeled as website backup. I can dump 
backup.iso by using this:

  $ dd if=/dev/sr1 of=backup.iso bs=2048 count=155440 skip=1256096
  where the values of count and skip were found here:
$ dvd+rw-mediainfo /dev/sr1 | grep -A 4 '.\[#2\]'
READ TRACK INFORMATION[#2]:
 Track State:   partial/complete
 Track Start Address:   1256096*2KB
 Free Blocks:   0*2KB
 Track Size:155440*2KB
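Extracting the skip/count values from dvd+rw-mediainfo can be scripted. This sketch parses a captured copy of the output quoted above; the field layout is an assumption about the tool's formatting, and on a live system you would pipe `dvd+rw-mediainfo /dev/sr1` in instead:

```shell
#!/bin/sh
# Sketch: pull the dd skip/count values for track #2 out of
# dvd+rw-mediainfo output (sample text copied from the message above).
info='READ TRACK INFORMATION[#2]:
 Track State:   partial/complete
 Track Start Address:   1256096*2KB
 Free Blocks:   0*2KB
 Track Size:155440*2KB'

skip=$(printf '%s\n' "$info" |
  sed -n 's/.*Track Start Address:[[:space:]]*\([0-9]*\)\*2KB.*/\1/p')
count=$(printf '%s\n' "$info" |
  sed -n 's/.*Track Size:[[:space:]]*\([0-9]*\)\*2KB.*/\1/p')
echo "dd if=/dev/sr1 of=backup.iso bs=2048 count=$count skip=$skip"
```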

We do not know which DVD+R was in /dev/sr0 when he did step b. There 
are a few hundred in stock. We can locate a dozen candidates if we 
really have to. The dumped backup.iso has the right size, which makes 
me believe nothing merged from the previous session in step b is 
important to me. In other words, backup.iso should contain all the data I 
need to recover the website.


However, as one might expect, I could not mount backup.iso:
# mount -o loop backup.iso /mnt/
mount: /dev/loop0: can't read superblock

My question is: is there a way to recover the hierarchical file backup 
from backup.iso, excluding all files and directories that refer to the 
previous session, which we could not easily find and should not need?


What actually happens when you try to mount the session? That is, using 
the session= option to the mount command. What error message do you get?
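For reference, a sketch of what that looks like; the device, mount point, and session number here are assumptions:

```shell
# Illustrative only -- device and mount point are assumptions.
# The iso9660 driver can be told which session on the disc to use:
mount -t iso9660 -o ro,session=2 /dev/sr1 /mnt
# The error detail, if any, usually lands in the kernel log:
dmesg | tail
```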


--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein





Re: Wodim: How to read from stdin?

2010-01-03 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  
This is a limitation of cdrecord and programs derived from it. The 
growisofs program knows how to do this for DVD but cannot do it for CD 
(one of my problems). In your case you can know the size of the image in 
advance because you have it. In fact, I wonder why you don't just burn 
it directly.



Growisofs does not write in SAO mode; this is the reason why it can do things 
that do not work the easy way in SAO mode.


  
In my case I have a data stream of 400-500MB which I wish to burn to CD, 
but because I can't know the size in advance and don't wish to save it 
for reasons I won't detail, I am forced to use growisofs and DVD for 
these little data sets.



You are of course not forced to use growisofs. Cdrecord will work for you too
if you use it the right way... there is e.g. the -stream-media-size size
option for mkisofs and there are plenty of other methods.
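For reference, a sketch of that approach. The sector count and device name are assumptions; check the mkisofs and cdrecord man pages before relying on it:

```shell
# Illustrative only -- sector count and device are assumptions.
# mkisofs caps the image at a known media size and writes it to stdout;
# cdrecord burns from stdin with the track size given via tsize=.
mkisofs -stream-media-size 333000 some_dir/ |
  cdrecord dev=/dev/sr0 -dao tsize=333000s -
```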

  
This is not an ISO filesystem, before or after burning. What is needed 
is a mode to read stdin until end of data, pad as needed and close. 
There are many interesting formats beyond ISO-9660.
Mr Schilling noted that there seems to be a way to do this in some modes, but 
he provides no information on it. He's probably right that it could be 
done, but doesn't want to say how, for whatever reason.



Sorry, but this is nonsense. The way it can be done is obvious to anyone
with some basic skills in CD/DVD/BD writing. The reason why it is not yet 
implemented in cdrecord is just that there are many more important things to do
and that it would not work on all 30+ platforms that cdrecord supports.

  
I don't need it on 30 platforms, I need it on one. Fortunately growisofs 
provides that, since cdrecord seems not to be able to do so.


--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein





Re: Wodim: How to read from stdin?

2010-01-02 Thread Bill Davidsen

Til Schubbe wrote:

* On 02.01. Thomas Schmitt muttered:

| So it seems you need a different burn program.

Seems so.

  

Joerg Schilling wrote:


Please note that software that allows writing media without knowing the
size in advance is not writing in SAO mode.
  

In principle: yes.

But cdrskin can combine the options -sao and -isosize and thus can burn 
ISO images which it receives from stdin to CD as SAO, or to DVD as 
DAO.



Ok, I'll give it a try. Since Debian Lenny provides a quite old
version of cdrskin, I will install it from the source.

My goal is to burn a DVD with an existing image over the network.
Therefore I asked for reading from stdin.
  


This is a limitation of cdrecord and programs derived from it. The 
growisofs program knows how to do this for DVD but cannot do it for CD 
(one of my problems). In your case you can know the size of the image in 
advance because you have it. In fact, I wonder why you don't just burn 
it directly.


In my case I have a data stream of 400-500MB which I wish to burn to CD, 
but because I can't know the size in advance and don't wish to save it 
for reasons I won't detail, I am forced to use growisofs and DVD for 
these little data sets.


Mr Schilling noted that there seems to be a way to do this in some modes, but 
he provides no information on it. He's probably right that it could be 
done, but doesn't want to say how, for whatever reason.


--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein





Re: CLOSE SESSION failed with SK=5h/INVALID FIELD IN CDB: not harmless?

2009-12-09 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

me:
  

Is there any way how after umounting of the
filesystem the content is still not up to date
for subsequent reading of the file ?
The image file got opened by growisofs via 
open64(O_DIRECT|O_RDONLY).
  

Jens Jorgensen:
  

Well there's a scary thought. I guess I would hope that opening with
O_DIRECT would maybe cause a flush of dirty pages for this file?



It is only a shot in the dark.
One would have to test whether e.g. command sync
or umounting and remounting the hosting
filesystem would prepare the image file for
flawless copying to media.

With other image types there was never such
an effect. But a mounted UDF random access
filesystem might have its own i/o peculiarities.

O_DIRECT itself is still quite obscure to me.
The opinion on LKML is mainly against using it.
We have people here on this list who oppose
that opinion.

  
The reason to use O_DIRECT is to avoid impact on the performance of 
other processes in the system, rather than to improve speed. By doing I/O 
directly to user buffers, the data in system buffers used by most programs is 
not overwritten. On an empty system O_DIRECT is slightly slower than 
buffered I/O, unless multiple asynchronous buffers are used. That's too 
complex for most people, and the slowdown is in the <10% range, so 
burner buffers stay full.



I had to explore the i/o behavior of growisofs
because on some hampered busses on Linux it was
faster with writing than libburn.
Using O_DIRECT on reading had only a slightly
accelerating effect on writing.
But it turned out that the main advantage of
growisofs is in buffer allocation via mmap()
which seems necessary with using O_DIRECT.

  
You can manually allocate a large buffer and then use bit operators to 
force the address to be page aligned. Again, most users would rather use 
mmap(), which does that for you.



Such side effects are ill, of course. The CPU
is mainly idle. So something in the Linux i/o
is stumbling over its own feet. This happens
quite often with USB busses but there are also
SATA and IDE connections which do not transmit
full 16x or 20x DVD speed.

  
Again, there's a reason for that: the max USB packet size is set to 64 
by default; performance can be improved by setting it to the max allowed by 
the kernel, which was 256 the last time I looked. This makes a big 
difference with some disks and tapes as well.


 /sys/bus/usb/devices/usbX/bMaxPacketSize0



The best trick with such busses is to write
64 KB chunks rather than the usual 32 KB.
This normally beats O_DIRECT reading
significantly.

So I decided to use an mmap() buffer, to offer
64 KB chunks optionally at run time and
O_DIRECT optionally at compile time.
  



--
Bill Davidsen 
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein



Re: Is dvd-r not supported in cdrecord?

2009-10-26 Thread Bill Davidsen

Joerg Schilling wrote:

"Thomas Schmitt"  wrote:

  

what should i do with hald and O_EXCL ?
  

I assume your successful burns were via
growisofs. Afaik it opens the device file with
option O_EXCL. See man 2 open for usage.
The meaning of O_EXCL on Linux storage device
files is sparsely documented:
A further open(O_EXCL|O_RDWR) will fail as
long as there is an open file descriptor with
O_EXCL|O_RDWR.



I did already mention that using O_EXCL is not an
option for libscg and cdrtools, as using O_EXCL would
cause other problems that are even less tolerable.
  


I'm curious, I'll accept that you have tried this and found some really 
bad behavior, but what is less tolerable than burning a series of 
coasters? I think growisofs uses O_EXCL and I don't recall any behavior 
after burning which made me unhappy.


In your experience, what breaks?

--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.



Re: Is dvd-r not supported in cdrecord?

2009-10-26 Thread Bill Davidsen

Joerg Schilling wrote:

chi kwan  wrote:

  
I kill mime and gosh, it is burning at 8x as dummy! Thanks very much for the 
insight and sorry for not trying it early enough to avoid all the exchanges.

life is good again.



Thank you for proving my assumption ;-)

BTW: I made this proposal as I had already seen exactly the same error message 
as a result of the actions of hald. In the other case, killing hald did fix 
the problem. I have no idea how hald causes this kind of error.
  


My assumption, as I posted earlier, is that it is trying to mount media 
of any type it understands. To guess the media type the first sector is 
read. This requires a seek to sector zero and a read, leaving the burner 
at a location which has already been written.


Actually, I just had a thought on how to patch hal not to do this, I'll 
try it sometime this week as I get time.


--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.





Re: Is dvd-r not supported in cdrecord?

2009-10-24 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Bill Davidsen:
  

the OS provides
tools by which applications can prevent this; if the applications
fail to use them then the fix lies in the application (and the user
who chose the application).



I would really like to implement a hald
cooperation module in libburn.
Do you have any specs how to contact hald from
a vanilla C program, to tell it to stay away
from a certain drive, and to later give back
the drive to hald ?
Maybe even an example program somewhere ?
  


I believe you would have to modify hald to produce suitable behavior, 
and it hardly seems worth doing until the future of hald is clearer. 
Hopefully it will eventually vanish; others find it a problem for other 
reasons.


--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.





Re: Is dvd-r not supported in cdrecord?

2009-10-23 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

chi kwan:
  

Cannot write dvd-r(w), but dvd+rw is great.
dvd-r(w) support of the drive/system is alright using the CD/DVD Creator that comes
with Gnome.
  


Possibly this uses growisofs as burn engine.
(To verify this: have a look with
ps -ef | grep growisofs
 while a DVD burn is going on.)


  

CDB:  2A 00 00 00 00 00 00 00 10 00
Sense Key: 0x5 Illegal Request, Segment 0
Sense Code: 0x30 Qual 0x05 (cannot write medium - incompatible format) Fru
  

Joerg Schilling:
  

It looks like your drive does not like the low quality media you are using.



I have sincere doubt that poor media quality
can cause a 5,30,05 error. It clearly belongs
to a family of complaints about media types
and media states:
5 30 00 INCOMPATIBLE MEDIUM INSTALLED
5 30 01 CANNOT READ MEDIUM  UNKNOWN FORMAT
5 30 02 CANNOT READ MEDIUM  INCOMPATIBLE FORMAT
5 30 03 CLEANING CARTRIDGE INSTALLED
5 30 04 CANNOT WRITE MEDIUM  UNKNOWN FORMAT
5 30 05 CANNOT WRITE MEDIUM  INCOMPATIBLE FORMAT
5 30 06 CANNOT FORMAT MEDIUM  INCOMPATIBLE MEDIUM
5 30 07 CLEANING FAILURE
5 30 08 CANNOT WRITE  APPLICATION CODE MISMATCH
5 30 09 CURRENT SESSION NOT FIXATED FOR APPEND
5 30 10 MEDIUM NOT FORMATTED

With poor media i would rather expect
3 73 03 POWER CALIBRATION AREA ERROR
or
3 0C 00 WRITE ERROR

I interpret "INCOMPATIBLE FORMAT" rather as
unsuitable state of the track that begins at
block address 0.

My theory would be strengthened if indeed
growisofs was the burn engine under the Gnome
program which can reliably burn the media.
  


I interpret this as the burner and media not getting along well, 
sometimes showing up with cdrecord because it may in some cases use 
different commands depending on the vendor quirks. I've seen similar 
messages from certain burners when changing media brand.


It is *not* some general problem with CD-R or DVD-R; I use cdrecord with 
them all the time, because I burn for several old devices which really 
dislike +R media.


Joerg Schilling:

It may be that this is a problem caused by "hald" or its recent replacement (I 
believe it is called "device-kit" or similar) disturbing the write process. You 
may like to kill all related processes before trying to write again.



I agree as long as you say "may be," because I've seen this on older 
distributions which predate hal, which are on old software for one 
reason or another (technical or political ;-) ).


This may also be caused by Gnome options about media: when an option is 
selected to auto-mount certain types of media, the mounter needs to 
check the device every so often to see if there is a media and what 
kind it is. You can get a "seek zero, read" in the middle of a series of 
writes, resulting in an attempt to write over an already-written sector.


Before anyone suggests that this is an OS error: the OS provides tools 
by which applications can prevent this; if the applications fail to use 
them then the fix lies in the application (and the user who chose the 
application). One program is hal, as Joerg noted, but there are others.


--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.



Re: My Pioneer DVR-218L i think is behaving like the "LG GH20NS10 might hang at the end of DVD-R[W] DAO recording..."

2009-10-21 Thread Bill Davidsen

Lucas Levrel wrote:

Le 19 octobre 2009, Joerg Schilling a écrit :

  

Due to the well known bugs in "cdrkit" aka. "wodim", k3b prefers the original
software over the broken fork. 



But beware that some distributions (e.g. openSUSE 10.3) have a symlink 
/usr/bin/cdrecord -> wodim. So depending on PATH & installation details of 
genuine cdrecord, the latter may go unnoticed.


  
Calling wodim "cdrecord" is not unlike ordering Coke® and getting Pepsi® 
instead. It isn't about quality, it's about the intent to deceive the 
user. If people wanted wodim they would ask for it by name. It's as fake 
as those "Pitsberg Stealers" (that's the way they spell it) sweatshirts 
you see at roadside stands near football games.



If available, one should have a look to k3b logs I guess...

  



--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.






Re: Announcing cdrskin-0.7.2

2009-10-18 Thread Bill Davidsen

D. Hugh Redelmeier wrote:

| From: Joerg Schilling 

| I am not shure whether you know that you are responding to a social problem.

I certainly see a nasty tone on this message.

I'm not going to judge who is right and who is wrong.  This is a long
conflict, as far as I know.

  
I suggested a few years ago that Joerg and his principal detractors go 
into a telephone booth and have a pissing contest, but I haven't seen it 
on youtube yet. In spite of the personalities, users do get helped here, 
and seeing people make fools of themselves does have some amusement value.



My humble suggestion: everyone should tone down the rhetoric.  Surely all
the points have already been made.  Continued sniping reflects badly
on everyone.

As a user, I'm thankful for Joerg *and* everyone else who has helped
to develop this code.


  



--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.





Re: Announcing cdrskin-0.7.2

2009-10-18 Thread Bill Davidsen

Mario Đanić wrote:

The Problem is however,



The problem is, however, that you are highly annoying. With the plans
for the libburnia website revamp, there is a plan to provide a replacement
list for this list, where we would help any users, regardless of the
software they use. Even though we have it now, it's a bit under-promoted
and has some spam. It'll change. If nothing else, it would at least be
a little more friendly.
  


This list is actually intended to be just that, and modulo Joerg's 
comments it seems to serve well. It would really serve users better (if 
that's your intent) to continue on this list, and not force people to 
read yet another list to get information on all the software. We have 
regular coverage of most of the popular burning programs, and some 
obscure software and questions in addition.


Whatever you think of Joerg, he has contributed a number of useful 
solutions, and has maintained and improved his software for a decade. 
Anyone reading this list in historical context will know I'm not his 
fanboy (;-)), but I do use his software for a number of things.


--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.






Re: solaris cdrecord BH08LS20 drive BD-R problems

2009-10-14 Thread Bill Davidsen

Rob W wrote:

Thank you Thomas for suggesting things to try.

> ... spoil your joy of success
Let us not digress too much - remember my definition of success:
# cdrecord dev=3,0,0 speed=2 driveropts=burnfree image.iso

>>> try to burn it via growisofs
>> I have no idea how long it took
> Probably rather 4 to 5 hours
I agree, but should only take 30 minutes, with no errors.

> Does it mount?
Yes, I can see all of my files on the Windows machine too.

> Can you read all 7634768 blocks?
Wow, dd is really slow. I am still waiting for results.

Did you specify a block size? Depending on several things, the addition of 
"bs=2M" (2MB) will make a big difference.
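The effect of dd's block size is easy to see even on an ordinary file. The sizes below are just for illustration; on a real device the payoff is that bs=2M issues far fewer, larger reads than dd's 512-byte default:

```shell
#!/bin/sh
# Demonstrate dd block-size handling on a plain file (sizes illustrative).
dd if=/dev/zero of=sample.bin bs=1M count=8 2>/dev/null  # 8 MiB test file
dd if=sample.bin of=copy.bin bs=2M 2>/dev/null           # copy in 2 MiB records
bytes=$(wc -c < copy.bin)
cmp -s sample.bin copy.bin && status=identical || status=different
echo "$bytes bytes, $status"
rm -f sample.bin copy.bin
```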


--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.





Re: Announcing cdrskin-0.7.2

2009-10-13 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

  

Then please inform your users that you cannot guarantee correctly written
CDs on Lite-ON drives and



libburn works fine with my olde LITE-ON
LTR-48125S CD-RW burner.


  

that you will not be able to write SVCDs on Pioneer drives.



(Harrumph)

Users ! You might not be able to write SVCDs on Pioneer drives !

(Good so ?)


Users, please also note:

I have no idea how to write SVCD on any drive.

  
The usual programs to generate the info create multiple files and a 
cuefile (see the cdrecord man page). Real cdrecord uses this cuefile to 
write the SVCD media; the last time (years ago) I tried to do that with 
wodim it seemed to work, but the media wouldn't play. This format is 
still (marginally) useful for small video where the quality of VCD is 
unacceptable and the cost difference between CD and DVD blanks matters.



libburn does Mode 1 (-data) and Audio tracks.
To mix them on the same media will possibly not
work in your old CD player in the living room.
If the player is actually a little computer then
this ban on mixing will not be an issue.

There are rumors that one would need Mode 2
for multi-session CDs. The only real hint for
this are statements in ECMA-168 which have no
influence on CD or on ISO 9660 (ECMA-119).
The rumored prescription cannot apply to Audio
as this is mutually exclusive with Mode 2.
The rumored prescription cannot be essential
with computer-attached CD-ROM drives, as I have
a lot of experience with multi-session Mode 1
CDs meanwhile.


Have a nice day :)

Thomas


  



--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.





Re: cdrtools-2.01.01a66 ready

2009-10-13 Thread Bill Davidsen

Joerg Schilling wrote:

NEW features of cdrtools-2.01.01a66:

***
NOTE: cdrtools is currently in a state just before a new major release.


  
You have been saying this for several years; it has less meaning than 
"have a nice day," so either release a new beta or release candidate, or 
stop promising things you can't deliver. You nit-pick others' posts, so 
don't complain if you get called on this.



Libsiconv:

-   Fixed a problem in libsiconv in case the locale is specified as
"iconv:name".

  



Cdrecord:

-   Better man page with respect to dev=

-   The cdrecord man page has been restructured.

-   Fixed a bug in the workaround code for a firmware bug for DVD+R
media in HL-DT-ST drives.

Cdda2wav (Maintained/enhanced by Jörg Schilling, originated by Heiko Eißfeldt 
he...@hexco.de):

-   Better man page with respect to dev=

-   The cdda2wav man page has been restructured.

Readcd:

-   readcd now only sends the Plextor-specific SCSI commands for the -cxscan
option in case the drive identifies as Plextor.

-   Better man page with respect to dev=

  
I will read the new man pages before commenting, but I'm happy to see 
work on this.




TODO:
-   Support correct inode numbers for UDF hardlinks

-   Support sockets, pipes, char/blk-dev specials with UDF

  

Another thing I will try before I comment.


--
Bill Davidsen 
 Unintended results are the well-earned reward for incompetence.






BD format info

2009-07-23 Thread Bill Davidsen
I'm looking for information on what format is used for Blu-Ray disks. I 
have an extensive collection of clips which are either 720p or a similar 
format with compatible aspect ratio. I have conversion tools which can 
generate virtually any common container, video and audio codec, etc. 
What I need is a Linux tool to take those clips in original or 
transformed form, and create a disk which will play in a standard 
Blu-Ray player.


Running Fedora if anyone cares.

--
Bill Davidsen 
 Obscure bug of 2004: BASH BUFFER OVERFLOW - if bash is being run by a
normal user and is setuid root, with the "vi" line edit mode selected,
and the character set is "big5," an off-by-one error occurs during
wildcard (glob) expansion.





Re: rename functions as they conflict with glibc

2009-06-15 Thread Bill Davidsen

Joerg Schilling wrote:

Roman Rakus  wrote:

  
I'm not sure if this problem is also fixed, but this patch renames 
functions as they conflict with glibc.



Your mail addresses two problems:

The problem is that POSIX.1-2008 is in conflict with general rules in POSIX.
POSIX as well as glibc illegally uses these names: POSIX grants that it will 
never break published older code. The POSIX standard for this reason was not
allowed to use these names. The Standards Committee has even been informed about
the problem, to no avail.

Note that the names in question have been in widespread public use since 1982 with 
the interfaces used by libschily. So glibc is introducing a non-compliance 
because the functions using the same names in glibc implement different 
interfaces. 

  
I'm sad that you are still beating this dead horse. You waste your time 
and energy debating an issue long since settled. It no longer matters whether 
you are right; neither POSIX nor the millions of applications using 
POSIX definitions are going to change. You are taking the policy on 
breaking existing programs using older POSIX-compliant libraries, and 
trying to convince them that it applies to applications written against 
other libraries using the same names.


Einstein is credited with saying that the definition of stupidity (some 
claim he said futility) is doing the same thing over and over hoping for 
a different result.


Your problem is a result of using _extremely_ outdated and even illegal (*) 
sources from a questionable fork. 

*) illegal because in 2006, the initiators of the fork introduced modifications 
that are in conflict with the Copyright law. The code you sent cannot be legally 
distributed.


  
You should get a lawyer or get a life. Why do you fritter away your time 
revisiting this issue?
I recommend that you just upgrade to the recent original code. The original code 
is legally distributable, and it implements a workaround for the problem, since 
the non-POSIX-compliant ;-) POSIX.1-2008 was approved in the summer of 2008.



Here is the recent original code:

ftp://ftp.berlios.de/pub/cdrecord/alpha/

Please upgrade all related code at RedHat's site as soon as possible and stop 
publishing the code you are currently distributing.
  


Red Hat and others gave up on your version because you were too difficult to 
work with. Since your willingness to accept other views as having merit 
hasn't changed, neither will the decision to use a hack on your old code. 
The problem is of your making.


--
Bill Davidsen 
 Obscure bug of 2004: BASH BUFFER OVERFLOW - if bash is being run by a
normal user and is setuid root, with the "vi" line edit mode selected,
and the character set is "big5," an off-by-one error occurs during
wildcard (glob) expansion.





Re: cdrtools 2.01.01a59: Compiling problems

2009-04-20 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  
The build appeared to be doing just that, even though I unpacked into a 
totally clean subdirectory which did not exist before the unpack. So to 
be absolutely sure I was starting from clean, I did a "make clean" which 
also appears to be broken; instead of deleting the dependent files it 
goes off into an orgy of "checking" things and "BUILDING" things, and 
"RULESETS" which appeared to be looping.



Your reply is unrelated to the problem of the OP
and it does not contain a valid statement.
  


And looking at this issue, of course my comment is related to the OP's 
problem; I was trying to replicate his problem on a clean source tree. 
You mentioned leaving old information around, so I attempted to produce 
a valid data point by cleaning the tree to a pristine state and then doing 
the build.


Repeating a test to determine if the first results have additional 
influencing conditions is considered good practice.


--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.






Re: cdrtools 2.01.01a59: Compiling problems

2009-04-20 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  
The build appeared to be doing just that, even though I unpacked into a 
totally clean subdirectory which did not exist before the unpack. So to 
be absolutely sure I was starting from clean, I did a "make clean" which 
also appears to be broken; instead of deleting the dependent files it 
goes off into an orgy of "checking" things and "BUILDING" things, and 
"RULESETS" which appeared to be looping.



Your reply is unrelated to the problem of the OP
and it does not contain a valid statement.
  


I guess that means that you will not only not fix the problem, you will 
pretend it doesn't exist...


--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.






Re: cdrtools 2.01.01a59: Compiling problems

2009-04-20 Thread Bill Davidsen

Joerg Schilling wrote:

Marcus Roeckrath  wrote:

  
While a58 compiles fine on eisfair, compilation of a59 wasn't successful; I 
got lots of these errors:


 ../../include/schily/schily.h:173: error: conflicting types for 'fexecve'
/usr/include/unistd.h:462: error: previous declaration of 'fexecve' was 
here

../../include/schily/schily.h:173: error: conflicting types for 'fexecve'
/usr/include/unistd.h:462: error: previous declaration of 'fexecve' was 
here



A similar "problem" has already been reported yesterday and is a result of 
tricking autoconf on your side.


POSIX.1-2008 violates POSIX as it defines interfaces that are in conflict 
with public interfaces already in use in 1982. fexec* is such an illegal 
interface, and libschily implements the official interface as published in 1982.


This however is not a problem, as the makefile system provides an autoconf test 
that implements a workaround. You just should not confuse autoconf by reusing 
autoconf results from previous compile runs.
  


The build appeared to be doing just that, even though I unpacked into a 
totally clean subdirectory which did not exist before the unpack. So to 
be absolutely sure I was starting from clean, I did a "make clean" which 
also appears to be broken; instead of deleting the dependent files it 
goes off into an orgy of "checking" things and "BUILDING" things, and 
"RULESETS" which appeared to be looping.


I don't care what standard you quote, POSIX or written on stone tablets 
by God, "make clean" should not create all the files just so it can get 
rid of them! After about ten minutes of thrashing I decided it was confused 
and killed it. I looked at the files, and it was really creating files 
instead of deleting them.


I will try this again when "make clean" works, I'm not going to guess 
which files need to be removed. As you said, "this is a job for the 
makefile."


I'm told it doesn't build on AIX either, but I don't have a system to 
use for test anymore.


--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.






Re: Odd messages from cdrecord

2009-04-03 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  

Let me know if more information is useful.

  
Looks as if the attachment got stripped by the incoming mailer, let me 
try another type...



The message you probably refer to would be correct for the original DVD-R
from 1997 that I wrote the cdrecord DVD driver for. All current DVD-R media is 
prerecorded. I need to find a way to properly detect that the message 
does not apply, without preventing cdrecord from working correctly with the
DVD-R media that allows writing CSS data.

  


Thanks for the info. Is that in any way related to the long delay before 
the burn starts? Almost half the total time is spent before the actual 
write begins. Or is that an unrelated issue?


--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.






Re: Odd messages from cdrecord

2009-04-03 Thread Bill Davidsen

Bill Davidsen wrote:
I have been getting some odd warnings (see WARNING) in the 
attachments) about media size, and the time to start burning is 
sometimes several minutes. I see this on four machines, built 
recently, some DVD some DVD-D/L capable, some running very recent 
versions of cdrtools, no difference in behavior so I picked the 
example most convenient. Brand of DVD and burner doesn't make much 
difference, I buy cheap store brand DVDs, but tried a few brand name 
with no change. All produce clean images reliably.


See the attachment for details; the results are perfect, so this is only for 
information. Of course, taking a minute or two off the start time would 
be really great if there's a trick to do so.


Let me know if more information is useful.

Looks as if the attachment got stripped by the incoming mailer, let me 
try another type...


--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.


pixels:davidsen> CDrecord dev=/dev/dvd -v -eject Fedora-11-Beta-i386-DVD.iso
CDrecord: No write mode specified.
CDrecord: Assuming -sao mode.
CDrecord: If your drive does not accept -sao, try -tao.
CDrecord: Future versions of cdrecord may have different drive dependent 
defaults.
Cdrecord-ProDVD-ProBD-Clone 2.01.01a50 (i686-pc-linux-gnu) Copyright (C) 
1995-2008 Jörg Schilling
TOC Type: 1 = CD-ROM
scsidev: '/dev/dvd'
devname: '/dev/dvd'
scsibus: -2 target: -2 lun: -2
Warning: Open by 'devname' is unintentional and not supported.
Linux sg driver version: 3.5.27
Using libscg version 'schily-0.9'.
SCSI buffer size: 64512
atapi: 1
Device type: Removable CD-ROM
Version: 5
Response Format: 2
Capabilities   : 
Vendor_info: 'ATAPI   '
Identifikation : 'DVD A  DH20A4P  '
Revision   : '9P59'
Device seems to be: Generic mmc2 DVD-R/DVD-RW/DVD-RAM.
Current: DVD-R sequential recording
Profile: DVD+R/DL 
Profile: DVD+R 
Profile: DVD+RW 
Profile: DVD-R/DL layer jump recording 
Profile: DVD-R/DL sequential recording 
Profile: DVD-RW sequential recording 
Profile: DVD-RW restricted overwrite 
Profile: DVD-RAM 
Profile: DVD-R sequential recording (current)
Profile: DVD-ROM 
Profile: CD-RW 
Profile: CD-R 
Profile: CD-ROM 
Profile: Removable Disk 
Using generic SCSI-3/mmc-2 DVD-R/DVD-RW/DVD-RAM driver (mmc_dvd).
Driver flags   : NO-CD DVD MMC-3 SWABAUDIO BURNFREE FORCESPEED 
Supported modes: PACKET SAO LAYER_JUMP
Drive buf size : 1182464 = 1154 KB
FIFO size  : 4194304 = 4096 KB
Track 01: data  3635 MB
Total size: 3635 MB = 1861396 sectors
Current Secsize: 2048
WARNING: Phys disk size 2298496 differs from rzone size 2297888! Prerecorded 
disk?
WARNING: Phys start: 196608 Phys end 2495103
Blocks total: 2297888 Blocks current: 2297888 Blocks remaining: 436492
Forcespeed is OFF.
Starting to write CD/DVD/BD at speed 16 in real SAO mode for single session.
Last chance to quit, starting real write in 0 seconds. Operation starts.
Waiting for reader process to fill input buffer ... input buffer ready.
BURN-Free is OFF.
Starting new track at sector: 0
Track 01: 3635 of 3635 MB written (fifo 100%) [buf  99%]  15.2x.
Track 01: Total bytes read/written: 3812139008/3812139008 (1861396 sectors).
Writing  time:  303.413s
Average write speed   9.1x.
Min drive buffer fill was 75%
Fixating...
Fixating time:   10.063s
CDrecord: fifo had 60046 puts and 60046 gets.
CDrecord: fifo was 0 times empty and 12633 times full, min fill was 82%.
pixels:davidsen> 


Odd messages from cdrecord

2009-04-02 Thread Bill Davidsen
I have been getting some odd warnings (see WARNING in the attachments) 
about media size, and the time to start burning is sometimes several 
minutes. I see this on four machines, built recently, some DVD, some 
DVD D/L capable, some running very recent versions of cdrtools, with no 
difference in behavior, so I picked the example most convenient. The brand of 
DVD and burner doesn't make much difference; I buy cheap store-brand 
DVDs, but tried a few brand-name discs with no change. All produce clean 
images reliably.


See the attachment for details; the results are perfect, so this is only for 
information. Of course, taking a minute or two off the start time would 
be really great if there's a trick to do so.


Let me know if more information is useful.

--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.






Re: Announcing xorriso-0.3.6

2009-03-21 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

  
If you like to see hard links on Linux, I recommend just to implement 
support for inode numbers on Linux



My trick is that i implement such things
in xorriso. My influence on kernels is
limited. But xorriso can retrieve any
extra information that it encoded in the
image.
It is also able to restore multi extent
files on operating systems which fail to
implement a proper reader for ISO level 3
(quite new Linux kernels among them).


  

If Linux users are interested in the current ACL standard, I recommend
adding ZFS to the basic filesystems supported by Linux.



Seems that those who could are not interested
and those who would are not capable.
  


As long as you can back up and restore what is there in Linux now, and 
use features which other applications and OSes are required to ignore if 
they do not understand them, then you have provided a useful backup 
capability which is "portable enough" to be useful. The xattrs rarely need 
to be moved to another OS, because their exact functionality may not be 
precisely the same there. If the file nuances are preserved 
within the same OS, that's sufficient to be useful.


If IBM actually does buy Sun, I suspect that the best features of AIX 
and Solaris will be made available to Linux, and marketing of the legacy 
OSes will be minimal. For the same reasons that Tru64, Multics and OS/2 
are no longer competing, it's no longer a question of which OS is "best" 
(by whatever measure); it's a question of which OS generates more revenue 
than it costs to maintain.


--
bill davidsen 
 CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
   - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.




Re: is my drive defect - request for comments

2009-03-02 Thread Bill Davidsen

Helmut Jarausch wrote:

On  1 Mar, Thomas Schmitt wrote:
  

Hi,

Joerg Schilling wrote:


.
  

people in most cases solve their problems by writing in RAW mode
or by killing hald.
  

If raw mode prevents hald from disturbing a
burn run then this is an undocumented feature.



Just one more comment (Attn: I don't know anything
about the techniques of recording on a CD/DVD)

As Joerg suggested in private mail, I tried
using cdrecord after I had stopped hald.
So on my system even without hald running
cdrecord refused to write even the first sector.
Switching to -raw96r mode (on a CD) did succeed
but readcd -c2scan failed.
Joerg said that my drive (LG GH22NS30 SATA)
does not support c2 scanning. But then I wonder
why it didn't get errors for all sectors (just many).

I tried PLATINUM and VERBATIM media.
This seems to be connected to this burner (and its
predecessor LG GH20NS15)

I have two PC (both AMD 64 running a recent Linux
system with an 2.6.26 or 2.6.28 kernel)
Both show this problem (with cdrecord only).

I don't think it has to do with this recent Linux kernels
since on two 32bit machines, running 2.6.26/8, as well, I have no
problems though with an older (LG) burner.

I managed to burn the CD with cdrskin, I mounted
that CD and compared to the source directory.
And the data was OK.
I tried to "verify" that same CD with readcd (c2scan)
and it failed with many sectors.

cdrecord even failed to burn a DVD on those machines,
while growisofs succeeded.

NOTE, this is just the short history, no rating !

Thomas suggested to use CDCK which validated
both the CD and the DVD. But unfortunately
that checks timings only and therefore is only an
indirect measure of quality.

I wish there were a tool which could show
"near failures" on a CD and on a DVD,
since I use my burner for backup purposes only.
And I want to be able to access my data even several
years in the future (e.g. digital photographs)

  
I've said this before, but it bears repeating: get dvdisaster so you can 
add a layer of software ECC to your media. You have the choice of 
creating a "small" ISO image of 70-80% of the media capacity and then 
adding recovery information to the end of the image before you burn, or 
keeping the ECC information in another image and using the full size of 
the media for backup. A minimum of 20% redundancy is quite good; for 
critical stuff 30% is better. Burn a DVD with dvdisaster info, take 
something sharp and scratch the media. After verifying that you can't 
read everything, recover the original data with dvdisaster. Nothing is 
perfect, but this is seriously improved correction.



Thanks for all your help and I explicitly include
Joerg here, as well.
I have been using cdrtools for many years now and it
had worked and is working flawlessly except on this new hardware.

Helmut.

  



--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: is my drive defect - request for comments

2009-02-28 Thread Bill Davidsen

Joerg Schilling wrote:

"Thomas Schmitt"  wrote:

  

I wrote


-raw96r does not yield sectors which can be
read by the READ command.


Joerg Schilling wrote:


This claim is of course wrong.
  

MMC-5 says:
"6.15.3 Command Processing
 The block size for the READ (10) command shall be 2048 bytes.
 If the block size of a requested sector is not 2048, the Drive shall:
 1. Terminate the command with CHECK CONDITION status,
 2. Set sense bytes SK/ASC/ASCQ to ILLEGAL REQUEST/ILLEGAL MODE FOR THIS TRACK,
"

man cdrecord says:
   -raw96r
  Select Set RAW writing mode with 2352 byte sectors plus 96 bytes

No 2048 bytes. No READ (10).
This matches my own observations: write
capacity (it is higher) and block-device read(2)
(it fails with an I/O error).



Looks like you need to learn about CD writing basics.

cdrecord knows how to write any possible sector type in -raw96r mode.

  
So are you saying that your documentation is wrong and you don't write 
2352+96 byte sectors? Or that the standard says drives should not read 
them with the READ command, and if so, what does that mean?


If you claim that Thomas is wrong, please explain why.

BTW: writing in -raw96r is a trick to circumvent problems from the 
non-cooperative Linux hald (*), and -raw96 should be used with all Lite-ON
drives to circumvent firmware bugs found in those drives.

If you do not know how -raw96r works, then I would guess that your software 
does not support to write in this mode.


*) hald on Linux must be called harmful software, as it interrupts the write 
process and makes media unusable. This is because it starts reading from an 
incompletely written CD.
  


Again you show that you do not understand how this works: hald may be 
configured to do things which are not appropriate. To blame the software 
for its configured options is like blaming *your* software because someone 
chose to use the wrong mode on the command line.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 





Re: dvd+rw-tools 7.1: Compiling problems on new eisfair2

2009-02-03 Thread Bill Davidsen

Andy Polyakov wrote:
While I never had problems compiling on the eisfair1 server project, 
I'm unable to compile dvd+rw-tools on the upcoming eisfair2.


I got:

eisfair2 # make
make[1]: Entering directory `/usr/src/dvd+rw-tools-7.1'
g++  -O2 -fno-exceptions -D_REENTRANT   -c -o growisofs_mmc.o 
growisofs_mmc.cpp

In file included from growisofs_mmc.cpp:17:
transport.hxx: In member function ‘int 
Scsi_Command::is_reload_needed(int)’:

transport.hxx:366: error: ‘INT_MAX’ was not declared in this scope
make[1]: *** [growisofs_mmc.o] Error 1
make[1]: Leaving directory `/usr/src/dvd+rw-tools-7.1'
make: *** [all] Error 2

Eisfair2 depends on Ubuntu 8 having:

g++ (GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7)

gcc (GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7)

GNU Make 3.81

glibc 2.7

Is there something missing on my new eisfair2?


Note that INT_MAX is not referred in dvd+rw-tools source, CDSL_CURRENT 
from linux/cdrom.h is. The latter is the problem. Generally header 
files are expected to be self-contained, in other words it should 
suffice to include linux/cdrom.h alone to use interfaces described in 
it. linux/cdrom.h used to be self-contained and one found in 
glibc-kernheaders package still is, it's pristine kernel header that 
is not self-contained anymore. It probably should be classified as 
kernel header bug (as I have no formal way of knowing that a 
documented macro would require any other particular header). But one 
way or another, if you ought to compile this very moment, you should 
be able to 'make WARN=-DINT_MAX=0x7fff'.


Good point, although Linux documentation has many places where it warns not to 
use kernel headers because they may change between versions. For portability, 
perhaps an #ifndef guard would be the way to pull in limits.h if there's a 
reason not to include it unconditionally.







Re: cdrecord floating point exception

2009-02-02 Thread Bill Davidsen

Joerg Schilling wrote:

Rob Bogus  wrote:

  

Joerg Schilling wrote:


>From zoubi...@hotmail.com Fri Jan 30 21:07:19 2009

  
You did not install cdrecord correctly, as you can see from these messages.
Cdrecord needs to be installed suid root in order to be able to open all needed 
devices and in order to send all needed SCSI commands.


  
  
When are you going to fix that? Other software can burn without being 
root, clearly it can be done. If there are better commands to use with a 



This claim is definitively wrong. There is no way to correctly write without
root privileges. 

  

Try to kill hald and retry cdrecord after correctly installing it suid root.

  
  
Time to learn to use a scalpel instead of a chain saw... You don't just 
"kill hald" on most modern distributions; things stop working. And the 


Try to learn that hald on Linux is broken and acts on wrong status changes.
  


Nothing is ever your fault. Instead of learning from the applications 
which burn CDs and DVDs without being root, your software has problems 
with hald, yet you refuse to accept that changing the hald config fixes 
the problems and that others can work with hald as-is, and you insist hald 
is at fault.



--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Is ulimit -l unlimited or limit memorylocked unlimited still needed?

2009-01-26 Thread Bill Davidsen

shirish wrote:

Hi all,
   I went to the dvd+rw-tools homepage.

http://fy.chalmers.se/~appro/linux/DVD+RW/tools/


It gave me the following warning

"IMPORTANT NOTE for 6.0 users! Newer Linux kernels have ridiculously
low default memorylocked resource limit, which prevents privileged
users from starting growisofs 6.0 with "unable to anonymously mmap
33554432: Resource temporarily unavailable" error message. Next
version will naturally have workaround coded in, but meanwhile you
have to issue following command at command prompt prior starting
growisofs:

* if you run C-like shell, issue 'limit memorylocked unlimited';
* if you run Bourne-like shell, issue 'ulimit -l unlimited';
  


I don't know how it got to trying for 32MB of locked memory, but none of 
my systems will allow normal users to lock all of memory... I've never 
seen an error at runtime, so I'm not sure the warning means much. 
Note that distributions may have very different defaults in both the 
boot-time setup and the kernel, so unless you are building your own 
kernel from kernel.org source and running on your own distribution, "your 
mileage may vary."


I run growisofs and have never had a warning at runtime, nor a problem 
with function. Nor would any of my systems let a normal user lock all of 
memory; that's a problem waiting to happen.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Burned or not burned? dvd+rw-mediainfo can't decide

2009-01-17 Thread Bill Davidsen

Parker Jones wrote:
I burned an ISO to DVD+RW with growisofs then checked its status with 
dvd+rw-mediainfo.  It seems to have burned ok.  Then I eject and 
re-insert the DVD and do another dvd+rw-mediainfo.  Now the media is 
blank.


So which is it - burned or not?

Any suggestions much appreciated!


The obvious problem would have been lack of dvd-compat, but you did use 
the option. My second thought is that you may have gotten a fast format 
on your media, and your player, for whatever reason, may require a full 
format. Since it's RW, it might be interesting to do a full format on 
that same media, then burn it in the same way, and see if that makes a 
difference.


I have not had a problem with any of my players; in fact, I had several 
commercial DVDs which wouldn't play in one until I burned a copy. But 
DVD players are not yet a completely deterministic technology.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 





Re: Thoughts on writing CD from stdin

2009-01-16 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  

The best solution for that problem is to kill hald ;-)

kill -STOP ` pgrep hald ` 
  
  
Noted. Since the only mode which seems to have a hope of working is TAO, 
from what people have said, raw96r seems to be a side track. And I would 
certainly edit the configuration rather than just kill hal and do all 
the associated work by hand.



I am not sure what you understand by editing the configuration
  


By default many Linux distributions have hal polling the CD/DVD drives, 
and that's fine if you aren't doing burning, or are using certain 
burning methods. But there are configuration files for hal which can 
change the way things are done, just one more configuration in /etc 
which the user can adjust.


If you want a real solution, we would need to find a way to make the people who 
write hald for Linux become interested in fixing their bugs.
  


I don't regard it as a bug that a program does what the configuration 
file or command line options request. In general a bug is an 
*unintended* behavior, but this appears not to be the case.


While I think of configuration, could cdrtools have an option to NOT try 
to install setuid? If run as a normal user it lacks permissions to do 
the install at all, and if run as root it does something I don't want. 
Obviously I can get around it, but it is just one more thing to 
remember, since it installs by default in a tree where I definitely 
don't want setuid programs.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 





Re: Thoughts on writing CD from stdin

2009-01-15 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  

-  Use cdrecord (or one of the plug-compatible substitutes) in TAO
   burning mode.

  
Rather than one of the "raw" modes? I found several posts suggesting 
that the magic was 'raw96r' or similar. I believe I tried that, as well 
as -sao, -tao, and -dao, but I can repeat the test easily.



People who recommend -raw96r instead of -sao usually suffer from the hald 
bug on Linux that causes hald to disturb CD writing.


The best solution for that problem is to kill hald ;-)

kill -STOP ` pgrep hald ` 
  


Noted. Since the only mode which seems to have a hope of working is TAO, 
from what people have said, raw96r seems to be a side track. And I would 
certainly edit the configuration rather than just kill hal and do all 
the associated work by hand.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Growisofs input cache --> Patch

2009-01-14 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen  wrote:

  
In Run 1 there were several buffer underruns which slowed the DVD recorders 
down. In Run 2 the buffer was always at 100% (except for the end of 
course) :-).
  
  
This seems reasonable; what were the performance numbers for the other 
system activity? I'm surprised at the underruns, cdrecord has an internal 
fifo, and I thought you did, too. With a hacked cdrecord (around a50) 
the burn ran almost eight seconds slower, regardless of burn size, and 
never dropped below 92% full at the drive, and 70% or so in the fifo.



Hacked how?
  


Late reply... hacked to use O_DIRECT and some simple monitoring of data 
count, etc. Clearly writing data with O_DIRECT can be 3-5% slower than 
using system buffers, but the performance of the other applications was 
better: no video pauses, all response felt good, generally a better 
overall behavior.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Thoughts on writing CD from stdin

2009-01-14 Thread Bill Davidsen

Dave Platt wrote:

Bill Davidsen wrote:

This is not ISO9660 data. I want the burner program to take my bits 
and put them on the media, nothing else. The data is in 64k chunks, 
so all writes are sector complete. I haven't had any issues with 
reading the data off DVD, or off CD in the case where I write it to 
something which allows me to know the size, but in general I don't. 
Nor can I run the data generation repeatedly to get the size, it's 
one of those "it depends" things.


Unfortunately, determining the end-of-data (end-of-track) location on a
data CD is one of those things which is difficult-to-impossible to do
reliably.

This seems to be due to the way in which the 2048-byte-block data format
is "layered on top of" the original Red Book audio CD format.  On a
data CD written in the usual way (one data track), the transition between
the data-layered blocks and the leadout area is difficult for CD players
to handle reliably, and different players tend to behave differently.

If you're lucky, your CD-ROM drive will read the last data block 
reliably,

and the attempt to read one block beyond that will result in an immediate
I/O error of some sort, allowing you to detect end-of-data reliably and
quickly.

This rarely seems to be the case, unfortunately.  Other scenarios I have
seen include:

-  The last data block reads back reliably.  Attempting to read the
   block following it does return an error, but only after a substantial
   delay.

-  The last data block (or even the last couple of data blocks) are
   unreadable.  Attempting to read them results in an I/O error.

I remember that with old Linux kernels readahead needed to be disabled; 
I haven't seen this problem in a while, so it seems that the kernel fixes 
are working.

I believe that I remember some discussion on the list, which turned up
a spec requirement that when transitioning between tracks having 
different

modes (and the leadout is a different mode than a data track) you're
actually required to pad the data... or, if you don't, the transition
blocks between tracks are formally unreadable.  I don't remember the
exact details.

In practice, in order to be able to read your last real sector(s) of
data reliably, it's necessary to pad the burn with a few sectors of
unwanted filler data.  I believe that cdrecord and/or mkisofs were
changed, a few years ago, to apply this sort of padding automatically
to ensure that the final portion of an ISO9660 image would always
be readable.

Since you aren't using ISO9660, and since you have prior knowledge
of your data's fundamental block size (64kB), I think there's a
reasonable solution for you.

-  Use cdrecord (or one of the plug-compatible substitutes) in TAO
   burning mode.

Rather than one of the "raw" modes? I found several posts suggesting 
that the magic was 'raw96r' or similar. I believe I tried that, as well 
as -sao, -tao, and -dao, but I can repeat the test easily.

-  Use the "-pad" option, with "padsize=16" option (16 sectors or
   32kB of padding).

-  Read your CD-ROM disk back 64k bytes at a time.

-  You'll get an I/O error when you try to read the 64kB byte
   chunk which extends past the end of what you actually burned.
   Ignore any fragmentary data (partial chunk).

Recent kernels seem to return a valid partial data count for the last 
read, and then an error on the next read. Reading 6400 bytes at a time 
seems to work, although this may only mean the media and firmware are 
friends.

You can probably use the track size in the TOC as an indication
of the amount of data actually written - just round it down to
a multiple of 32 sectors.


The last 64k block has the "end of data" flag set, so it's unambiguous.
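The read-back scheme described above can be sketched in C. This is an illustration under the thread's assumptions, not code from cdrecord: read in 64 KiB chunks until a short read or an I/O error, and discard any fragmentary tail, which is what makes a padded TAO burn readable without knowing the size in advance.

```c
#include <unistd.h>

#define CHUNK (64 * 1024)

/* Read fd in 64 KiB chunks until a short read or an I/O error,
 * mirroring the read-back strategy described above.  Returns the
 * number of full-chunk bytes recovered; a partial final chunk
 * (fragmentary data from the padding) is ignored. */
static long read_full_chunks(int fd)
{
    char buf[CHUNK];
    long total = 0;

    for (;;) {
        ssize_t n = read(fd, buf, CHUNK);
        if (n < CHUNK)          /* error, EOF, or fragmentary data */
            break;
        total += n;
    }
    return total;
}
```

In real use the fd would be an open CD device such as /dev/sr0; the loop itself behaves the same on any file.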

--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: cdrecord: failure in auto-formatting DVD+RW Verbatim media on USB-TSSTcorp

2009-01-13 Thread Bill Davidsen

Joerg Schilling wrote:

I remember that there was a problem with the old Ricoh drives...

At least 7 years ago, media quality and DVD+RW media compatibility between 
different
drives were very bad, and I needed to add many workarounds in order to make 
cdrecord behave nicely.


It may be that some of the workarounds should be removed today.
  


Of course they may still be beneficial in some cases. Perhaps, based on 
the things you have learned in the past few years, it would be practical 
to make them a little more selective, so that they would only work 
around the original problem. cdrecord is very good about running on old 
hardware, so these workarounds seem to have value on those machines.


Hard to write perfect software to run on imperfect (and inconsistent) 
hardware. :-(


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Thoughts on writing CD from stdin

2009-01-13 Thread Bill Davidsen

Paul Serice wrote:

I don't see an obvious reason why CD burning requires the size in advance,
but cdrecord and similar seem to want it.



The meta-data for the iso9660 is in the "primary volume descriptor"
(PVD) which is (more or less) written first.  One of its fields is the
location of the root directory which varies from one instance of the
iso9660 file system to the next.
  
This is not ISO9660 data. I want the burner program to take my bits and 
put them on the media, nothing else. The data is in 64k chunks, so all 
writes are sector complete. I haven't had any issues with reading the 
data off DVD, or off CD in the case where I write it to something which 
allows me to know the size, but in general I don't. Nor can I run the 
data generation repeatedly to get the size, it's one of those "it 
depends" things.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Thoughts on writing CD from stdin

2009-01-12 Thread Bill Davidsen
Is there a way to burn data from a program to a CD without knowing in 
advance how much data will be written? I have solutions for DVD but don't 
have a good one for CD. I don't see an obvious reason why CD burning 
requires the size in advance, but cdrecord and similar seem to want it.


What would be useful is [application]->[burning_software]->media without 
buffering.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: please decrypt cdrecord's error message

2009-01-08 Thread Bill Davidsen

Helmut Jarausch wrote:

Hi,

having used (earlier) versions of cdrecord on the very same drive and
media brand, I got the following errors today (on two medias)

(This is with Linux kernel 2.6.26)

  
Is this by any chance related to cdrecord being updated and not 
reinstalled setuid root? I know that can produce an error, but I can't 
remember the exact details.



Cdrecord-ProDVD-ProBD-Clone 2.01.01a55 (i686-pc-linux-gnu) Copyright (C) 
1995-2008 Jörg Schilling
TOC Type: 1 = CD-ROM
scsidev: '/dev/hdb'
devname: '/dev/hdb'
scsibus: -2 target: -2 lun: -2
Warning: Open by 'devname' is unintentional and not supported.
Linux sg driver version: 3.5.27
Using libscg version 'schily-0.9'.
Driveropts: 'burnfree'
SCSI buffer size: 64512
atapi: 1
Device type: Removable CD-ROM
Version: 0
Response Format: 2
Capabilities   : 
Vendor_info: 'HL-DT-ST'

Identifikation : 'DVDRAM GSA-4167B'
Revision   : 'DL11'
Device seems to be: Generic mmc2 DVD-R/DVD-RW/DVD-RAM.
Current: DVD-R sequential recording
Profile: DVD-RAM 
Profile: DVD-R sequential recording (current)
Profile: DVD-R/DL sequential recording 
Profile: DVD-R/DL layer jump recording 
Profile: DVD-RW sequential recording 
Profile: DVD-RW restricted overwrite 
Profile: DVD+RW 
Profile: DVD+R 
Profile: DVD+R/DL 
Profile: DVD-ROM 
Profile: CD-R 
Profile: CD-RW 
Profile: CD-ROM 
Profile: Removable Disk 
Using generic SCSI-3/mmc-2 DVD-R/DVD-RW/DVD-RAM driver (mmc_dvd).
Driver flags   : NO-CD DVD MMC-3 SWABAUDIO BURNFREE 
Supported modes: PACKET SAO

Drive buf size : 1114112 = 1088 KB
Drive pbuf size: 1966080 = 1920 KB
Drive DMA Speed: 11772 kB/s 66x CD 8x DVD 2x BD
FIFO size  : 67108864 = 65536 KB
Track 01: data 0 MB padsize:   30 KB
Total size:0 MB = 300 sectors
Current Secsize: 2048
Total power on  hours: 6656
Blocks total: 2298496 Blocks current: 2298496 Blocks remaining: 2298196
Reducing transfer size from 64512 to 32768 bytes.
Starting to write CD/DVD/BD at speed 4 in real SAO mode for single session.
Last chance to quit, starting real write in 0 seconds. Operation starts.
Waiting for reader process to fill input buffer ... input buffer ready.
BURN-Free is ON.
Starting new track at sector: 0
Track 01:0 MB written.
Track 01: writing 600 KB of pad data.
/usr/local/bin/cdrecord: Success. write_g1: scsi sendcmd: no error
CDB:  2A 00 00 00 00 00 00 00 10 00
status: 0x2 (CHECK CONDITION)
Sense Bytes: F0 00 05 00 00 00 00 10 2A 00 00 0C 21 00 00 00
Sense Key: 0x5 Illegal Request, Segment 0
Sense Code: 0x21 Qual 0x00 (logical block address out of range) Fru 0x0
Sense flags: Blk 0 (valid) 
resid: 32768

cmd finished after 0.001s timeout 200s
write track pad data: error after 0 bytes
BFree: 1088 K BSize: 1088 K
Track 01: Total bytes read/written: 32768/0 (0 sectors).
Writing  time:8.876s
Average write speed   0.1x.
Fixating...
Fixating time:   20.623s
/usr/local/bin/cdrecord: fifo had 1 puts and 1 gets.
/usr/local/bin/cdrecord: fifo was 0 times empty and 0 times full, min fill was 
100%.


Many thanks for a hint,
Helmut.

  



--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 








Re: growisofs won't write double layer DVDs

2009-01-08 Thread Bill Davidsen

Joerg Schilling wrote:

Andy Polyakov  wrote:

  
I recommend you to use cdrecord instead of growisofs. 
Cdrecord will tell the drive to do laser power calibration
  

cdrecord does not do that to DVD+ media. A.



Wrong: cdrecord of course does a power calibration for DVD+
  


I thought it bypassed that for DVD+, I saw it in the code and can't find 
it again. It looked as if a test which fails on DVD+ went around that logic.
Does cdr_opc get set somewhere I'm missing? That could explain why some 
media I have don't work well with one burner and cdrecord.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 





Re: growisofs wont write double layer DVDs

2009-01-03 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

  

I'm trying to write double layer DVDs on my TS-L632C DVD burner.
..
/dev/sr0: splitting layers at 1437552 blocks
:-[ SEND DVD+R DOUBLE LAYER RECORDING INFORMATION failed with SK=3h/POWER
CALIBRATION AREA ERROR]: Input/output error



Looks like drive and media do not like each other.

You can try to avoid that special command which
applies only to DVD+R DL. But a power calibration
error is normally a problem between drive and media
which will bite again at the next possible occasion.


  

One time I got the burner to do something different and give me
:-? L0 Data Zone Capacity is set already
/dev/sr0: "Current Write Speed" is 2.5x1352KBps.
:-[ wr...@lba=0h failed with SK=0h/ASC=00h/ACQ=02h]: Input/output error
:-( write failed: Input/output error



Here the splitting of layers seems to have succeeded
in a previous run. But already the first write attempt
fails with an error that is not in the list of MMC-5.

  

It's a bit annoying to have a burner that's supposed to do double layer
but doesn't do it... :(



Maybe a different brand of media will work.
Maybe the drive simply does not do DVD+R DL
despite marketing promises and its willingness
to try.
It is a confirmed fact that some drives do
write successfully to some media. One just
needs a matching combination.
  


And media price seems not to be a predictor of performance.  I've had 
good luck with no-name DL media from a computer show ($23/100) using 
several burners, and bad luck with some name brands. Some burners, and 
importantly some firmware versions, like or don't like the media.


--
Bill Davidsen 
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Growisofs input cache --> Patch

2008-12-09 Thread Bill Davidsen

David Weisgerber wrote:

On Saturday 06 December 2008 12:35:14 Thomas Schmitt wrote:
  

Hi,

David Weisgerber wrote:


So I will create a patch which adds such a switch to growisofs
  

Andy Polyakov wrote earlier:


There is a way to open image without O_DIRECT and without modifying
source code: growisofs -Z /dev/cdrom=/dev/fd/0 < image.iso. Ta-dah...
  

In my role as a frontend programmer for
growisofs i would first try whether "Ta-dah"
is a viable workaround.



Otherwise I would need to
distribute a patched version of growisofs with turbojet...
  

If you rely on a growisofs novelty then
you have to urge the users to install
newest dvd+rw-tools.
"Ta-dah" - if possible - would allow a
much better decoupling of your software
and the system installed tools.



As my software is not so widely spread at the moment, I'd like to go the clean 
way. In addition, I think that using the pipe will create another buffer 
which I want to avoid. This problem is all about performance.


In the mean time I created a patch for growisofs which I hope will be included 
in the future (see attachment). I have run some tests with it which showed me 
*very* good results:

---
Tested: 4x2742 MB 


Run 1 (w/o -no-directio): 5:37 min (for the slowest DVD recorder)
Mem:   2060284k total,   593336k used,  1466948k free,15972k buffers

Run 2 (w -no-directio): 4:38 min (for the slowest DVD recorder)
Mem:   2060284k total,  1877136k used,   183148k free,  224k buffers
---

In Run 1 there were several buffer underruns which slowed the DVD recorders 
down. In Run 2 the buffer was always at 100% (except for the end of 
course) :-).
  


This seems reasonable; what were the performance numbers for the other 
system activity? I'm surprised at the underruns, cdrecord has an internal 
fifo, and I thought you did, too. With a hacked cdrecord (around a50) 
the burn ran almost eight seconds slower, regardless of burn size, and 
never dropped below 92% full at the drive, and 70% or so in the fifo.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Growisofs input cache (get away from dd?)

2008-12-05 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Bill Davidsen wrote:
  

not passing
multiple GB of once-used data through the system buffers and blowing
everything else out.



I see some effects which could make the O_DIRECT
approach less superior to normal reading than your
scenario would let expect:

- If the (ISO) image is freshly generated then the
  formatter program just blew out everything else
  anyway. And for the ISO formatter it makes little
  sense to read O_DIRECT as it has a random access
  pattern on possibly small files.

  
You are still thinking about the benefit to the image generation 
application rather than the overall system. The reads are small and 
scattered, but they are single-use i/o, meaning that it is highly unlikely 
that the data will be used again before it passes out of the cache. 
Therefore there is no gain to the ISO creation application from using a 
buffered read, but there is the certainty that at least 4GB of data used 
by other applications or the system itself will be evicted from the cache. 
And the same thing holds true for writing the ISO image: unless you have a 
machine with 4GB or more to use for write cache, the data will be out of 
cache before the media is burned.


And even if it isn't, there's no performance gain to mention, since all 
of the burning applications worth consideration use readahead buffering 
and therefore will have the data to burn before it's needed. So the 
possible gain ranges from "very little" to "none" for the burning 
application, while the possible slowdown to other applications on the 
machine can be up to 20% on a machine doing a lot of transactions. 
That's measured in the real world: I used to be project leader for the 
NNTP server group at a national ISP, and compressing the log files would 
drop the throughput that much under load. Putting dd in front of the 
file read, using O_DIRECT, eliminated the problem, and only added 3-4% 
to the real time for the process.



- The cache area in RAM is nowadays quite large
  and i understand that the least recently used
  cache chunks get thrown out. While a growisofs
  read chunk is ageing in the cache for several
  seconds, any busy process can refresh its own
  chunks' read time.
  


I don't know how large your machines are, but I certainly don't want 
4.7GB, or 8.5GB, or 25GB to be used for caching something used by one 
application which benefits marginally if at all from the caching. And if 
you have large memory, run mkisofs to generate a "most of memory" size 
image, type "sync" in one window and try to do something requiring 
reading disk in another. All that data gets queued to the disks, 
commands dropped into NCQ if you have good disks, and reads virtually 
stop (Linux) or slow to a crawl (Solaris).


A good reason why caching data is bad unless it will be read again soon.


  So i expect that the cache of growisofs' input
  stream can only blow out chunks which are as
  rarely used as its own chunks.
  
This would explain why i never saw the need to do
O_DIRECT reading: my ISO images are fresh, if
they are big. Typically they get piped directly
into the burner process.

  
Which doesn't mean that using O_DIRECT for reading the input files would 
not be nicer for your cache.



I have one use case, though, where i burn two identical
copies of the same DVD image for redundancy reasons.
One-by-one via a single burner drive from an
image file on disk onto 16x DVD+R.
Even then i experience no undue impact on other
processes.
I also never noticed a difference when i switched
from growisofs to my own programs for that.

  
Burning is a slow process relative to creating the image, so that is a 
lesser impact. But by pulling an image through the cache you will have 
an impact on other things using the cache, although even 16x is pretty 
slow i/o compared to current disk speed (and I bet you have good ones).

Is there a realistic scenario where O_DIRECT is
reproducibly superior to normally buffered reading ?
I ponder whether i shall offer a O_DIRECT option
with the source objects of libburn.
  


That depends on how large your machine is. ;-) If you have 64GB a 
Blu-Ray will fit. But on a typical machine with 2-8GB, I would expect 
that flushing the cache when the dirty timer runs out would make the system 
notably slower doing things needing read (this depends on tuning somewhat).


You could try:
1 - drop cache (echo 1 >/proc/sys/vm/drop_caches)
2 - start a kernel compile
3 - start building a 4GB or larger ISO image.

Then repeat with O_DIRECT. Use of iostat may give you additional data.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Growisofs input cache (get away from dd?)

2008-12-04 Thread Bill Davidsen

David Weisgerber wrote:

The problem with growisofs is that it does not seem to use the file
system buffers. That means already burning 3 DVDs at once is too much for
my system.
  

I would assume that growisofs cannot easily turn off the filesystem
buffering, but is there a buffer inside growisofs?



I found out that growisofs uses the option O_DIRECT when opening files. Does 
somebody know why this option is used? It seems to me totally useless when 
reading ISO files...


http://www.ukuug.org/events/linux2001/papers/html/AArcangeli-o_direct.html
  


You have to understand what O_DIRECT does to know why it is desirable to 
use it. Using O_DIRECT bypasses the system i/o buffers, not to benefit 
the burner program but to benefit the rest of the system's users, by not 
passing multiple GB of once-used data through the system buffers and 
blowing everything else out. The majority use of media burning programs 
is not some form of do-it-yourself media duplicator, but users burning a 
single disc with a single burner, and probably wanting to use the system 
with decent response while the burn takes place.
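A minimal sketch of what opening the input this way looks like, assuming Linux; this is illustrative, not growisofs source. O_DIRECT also imposes buffer alignment (typically 512 bytes or the logical block size), hence the aligned allocator, and some filesystems (tmpfs, for one) refuse O_DIRECT outright, so a fallback to buffered i/o keeps the program working there.

```c
#define _GNU_SOURCE         /* exposes O_DIRECT on glibc */
#include <stdlib.h>
#include <fcntl.h>

/* Open a file for O_DIRECT reading, falling back to buffered I/O if
 * the filesystem refuses.  O_DIRECT transfers bypass the page cache,
 * so a multi-GB image read once for burning does not evict other
 * processes' cached data. */
static int open_for_streaming(const char *path)
{
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0)
        fd = open(path, O_RDONLY);  /* correctness over cache hygiene */
    return fd;
}

/* O_DIRECT requires aligned buffers; 4096 covers common block sizes. */
static void *alloc_aligned(size_t size)
{
    void *buf = NULL;
    return posix_memalign(&buf, 4096, size) == 0 ? buf : NULL;
}
```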


If you want to burn multiple copies of an image, you have YOUR program 
read the image, once, and send the data to the burning programs as stdin 
via a pipe. Don't depend on having the o/s providing the buffering, or 
special modes being used or not. You can do this in about 40 lines of 
perl, it's not rocket science. You need a burning program which will 
burn from its stdin, preferably without knowing how long the image is in 
advance, since an ISO image is not being created by the burning program.


There are systems so big that blowing out the cache for a CD, or DVD may 
not hurt performance, but I think a Blu-Ray will be noticed on most 
systems. Using O_DIRECT only looks totally useless when you look through 
the wrong end of the problem at "how does it help the application" 
instead of "how does it avoid hurting the rest of the applications?"


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Problems mounting DVD-RWs written with growisofs

2008-11-22 Thread Bill Davidsen

Marcus Roeckrath wrote:

Hi Thomas,

Am Freitag, 21. November 2008 20:56 schrieb Thomas Schmitt:

  

growisofs -use-the-force-luke=dao -Z /dev/sr2
-use-the-force-luke=notry,tty Is such a construct - multiple use of
-use-the-force-luke parameters - allowed?
  

Yes it is. As often as you want.



Fine, because of my test I assumed it can be done that way.

  

Strange: "notry,tty" is not among them.



Little typo missing "a": notray

  

I cannot spot any comma interpretation.



This comma separated option is added by the frontend webcdwriter/CDWserver 
(Pro-Version).


I'm not a C programmer, having greater experience e.g. in Pascal, but the 
corresponding part of the code

{   s++;
if (strstr(s,"tty"))no_tty_check = 1;
if (strstr(s,"dummy"))  test_write   = 1;

and so on
 
compares if the constant string f. e. tty is part of s. I think all if 
lines are in action even a previous if statement return true.


If this explanation is right all comma seperated values should be found.

  
That really looks like an unintended capability, but I agree that it 
should work.
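A self-contained model of that code path shows why the comma-separated value works: every strstr() test scans the whole remaining argument string, so a value like "notray,tty" matches two flags even though commas are never parsed explicitly. The flag names mirror the snippet quoted above; the struct is illustrative.

```c
#include <string.h>

/* Minimal model of the growisofs-style option scan quoted above. */
struct luke_flags { int no_tty_check; int test_write; int no_tray; };

static struct luke_flags scan_force_options(const char *s)
{
    struct luke_flags f = {0, 0, 0};

    /* Each strstr() searches the entire string, so comma-separated
     * tokens all register; this also means any substring match
     * counts, which is the "unintended capability" aspect. */
    if (strstr(s, "tty"))    f.no_tty_check = 1;
    if (strstr(s, "dummy"))  f.test_write = 1;
    if (strstr(s, "notray")) f.no_tray = 1;
    return f;
}
```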


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: cdrecod and profile 0x40 (BD-ROM): -atip exits with 255

2008-11-22 Thread Bill Davidsen

Giulio Orsero wrote:

On Wed, 12 Nov 2008 21:45:14 +0100, [EMAIL PROTECTED]
(Joerg Schilling) wrote:

  

Giulio Orsero <[EMAIL PROTECTED]> wrote:




  

I checked drv_mmc.c and I see it handles >=0x40, then I grepped for "Found
unsupported" and added customized messages to distinguish between drv_mmc.c
and drv_bd.c and I see the error comes from drv_bd.c, not drv_mmc.

== it seems dvr_bd.c does not handle 0x40/0x0040
   if (profile == 0x0043) {
dp = &cdr_bdre;
} else if ((profile == 0x0041) || (profile == 0x0042)) {
dp = &cdr_bdr;
} else {
errmsgno(EX_BAD, "Found unsupported 0x%X profile.\n",
profile);
return ((cdr_t *)0);
}
===
  


Could you confirm that all that is needed is the following?
It seems to work for me, but not being a C programmer I cannot really
understand what the code does and see whether there could be side-effects.

Thanks


--- drv_bd.c.orig   2008-11-21 14:36:04.0 +0100
+++ drv_bd.c2008-11-21 14:36:51.0 +0100
@@ -308,6 +308,8 @@
dp = &cdr_bdre;
} else if ((profile == 0x0041) || (profile == 0x0042)) {
dp = &cdr_bdr;
+   } else if (profile == 0x0040) {
+   dp = &cdr_bd;
} else {
errmsgno(EX_BAD, "Found unsupported 0x%X profile.\n",
profile);
return ((cdr_t *)0);

  
This does what you think it does, but if a few more cases come in it 
would probably be easier to read and maintain by using a switch.
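A sketch of that switch, with strings standing in for the real cdr_t driver pointers so the example is self-contained; the profile codes follow the MMC BD profiles handled in the drv_bd.c excerpt above (0x40 BD-ROM from the patch, 0x41/0x42 BD-R, 0x43 BD-RE).

```c
#include <stddef.h>

/* Switch-based dispatch as suggested above.  In drv_bd.c the return
 * values would be &cdr_bd, &cdr_bdr, &cdr_bdre; strings are used here
 * so the sketch compiles on its own. */
static const char *bd_driver_for_profile(int profile)
{
    switch (profile) {
    case 0x0040:
        return "cdr_bd";        /* BD-ROM, added by the patch */
    case 0x0041:
    case 0x0042:
        return "cdr_bdr";       /* BD-R sequential / random */
    case 0x0043:
        return "cdr_bdre";      /* BD-RE */
    default:
        return NULL;            /* unsupported profile */
    }
}
```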


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Announcing xorriso-0.2.6

2008-09-20 Thread Bill Davidsen
 writing a new DVD for a 3-4kB file change is
undesirable.



The waste of space on multi-session media is worst on CD-R[W]
and DVD-R[W], where session gaps can be up to 20 MB. On DVD+R
it is only 4 MB. It is best with overwriteables: DVD-RAM, DVD+RW,
formatted DVD-RW. There the waste is between 0 and 62 KiB.

Another overhead is caused by the fact that with each session
a complete new directory structure is written.
The size depends on the number of file objects in the image
and usually is in the range of a single percent of the 
combined image size.


So the wish to write in granularity of few KiB will hardly
be fulfillable by xorriso.
My backups produce one base session of 200 MB and 20 small
update sessions on CD media, one base session of 800 MB and
40 update sessions on DVD-R, 50 on DVD+R and 60 on DVD+RW.
About 4500 blocks of each session hold the directory tree.
(One can measure by making two runs with only a small data
 file changed in between.)

Example of a DVD-R:
 TOC layout   : Idx ,  sbsector ,   Size , Volume Id
 ISO session  :   1 , 0 ,870113s , UPDATE_HOME_2008_04_18_220030
 ISO session  :   2 ,908176 , 22632s , UPDATE_HOME_2008_04_20_133303
 ISO session  :   3 ,938656 , 19169s , UPDATE_HOME_2008_04_21_153459
 ...
 ISO session  :  41 ,   2233968 , 60302s , UPDATE_HOME_2008_06_06_114440

DVD+-R multi-session info might be ignored or misunderstood
by DVD-ROM drives. In that case one has to help mount
to find the youngest session.
This problem does not apply to overwriteable media.


Have a nice day :)

Thomas


  



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 





Re: Issues with cdrsin and USB devices on RHEL5

2008-07-16 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

  

I'm going to cautiously say that this file, intended for humans but can be
parsed, seems to handle anything up to a mix of IDE, "real" SCSI, and USB,



Yep. For me it yields plausible results with
4 drives at USB, IDE, and SATA.
  drive name: sr1 hdc hda sr0


  

and tells you what the kernel thinks of them. That may be more important
than the truth, if a device says it can support feature X, but the kernel
doesn't agree, I would not expect that feature to work reliably.



If the CDROM driver believes it then it does not
necessarily have to be true.
Example:
  Can read multisession:  1   1   1   1
is true with hda only for CD but not for DVD media.
This DVD-ROM drive is too stupid to recognize multiple
sessions on DVD-R or DVD+R. (One can scan for ISO 9660
headers, though, and successfully mount the sessions
via option sbsector= .) 

  

Wow, is that actually useful for anything? Amazing!

I guess the point is that the drive has support for multisession and 
advertises same. If this value were zero, I doubt that any option would 
help. Firmware bugs are not known to the kernel, unfortunately. I would 
treat this as an AND of capabilities: if this file says the capability 
is missing it probably really is. If present, the device supports the 
feature or lies convincingly. ;-)

So the feature detection is a direct competitor
of libburn's own view of things. To take it into
account would require establishing conflict handling
in case of contradictions.

Nevertheless, i will consider to take the "drive name:"
line as a hint of how the drive list should look like.
In case of deviations, one could issue a warning.

  

Yes, that certainly should be useful in eliminating the non-optical media.
  

Thought it might be useful to tell you what the kernel
would let you see and use.



A totally overhauled drive detection strategy will have
to collect info from such sources and try to verify
it by own drive inquiries.

So any hint about CD drive info sources is welcome.

For now i seem to have put a patch on the problems.
After the upcoming release i will reconsider that
topic and make some larger changes in the Linux adapter
of libburn.
  


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Issues with cdrsin and USB devices on RHEL5

2008-07-16 Thread Bill Davidsen

Greg Wooledge wrote:

On Mon, Jul 14, 2008 at 10:45:25PM +0200, Giulio Orsero wrote:
  

 /lib/udev/check-cdrom.sh
#!/bin/bash


...
  

pos=$[$pos+1]



Dear gods.  Didn't anyone tell them that $[ is deprecated?

pos=$(($pos+1))

... is the preferred syntax, and is POSIX/ksh compatible.

  
Who cares? The $(( notation is slower to type, easier to get wrong, 
and gives confusing error messages if you miss the "$" and start with 
parens, etc.  I know about it, but I would never use it; the $[ form 
is visually easier to read correctly.


If you need max portability and/or readability you use "let pos=pos+1" 
anyway.


This is a cd burning list, not the alt.shell.pedantic newsgroup. Let's 
keep to the main topic.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Issues with cdrsin and USB devices on RHEL5

2008-07-16 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

i uploaded the current release candidate of cdrskin
  http://scdbackup.sourceforge.net/cdrskin-0.4.9.tar.gz
  http://scdbackup.webframe.org/cdrskin-0.4.9.tar.gz

It should be able to work on /dev/scd0 if no /dev/sr0
is existing.
It should also stay silent about busy /dev/hdX which
are identified as "disk" by /proc/ide/hdX/media.
And no buffer overflow with drive_scsi_dev_family=scd
any more.
  


may I say very nice work, and prompt as well.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 







Re: Issues with cdrsin and USB devices on RHEL5

2008-07-14 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Bill Davidsen:
  

Dumb question, why do you not look at the nice list of CD devices provided
...
/proc/sys/dev/cdrom/info



I'm going to cautiously say that this file, intended for humans but can 
be parsed, seems to handle anything up to a mix of IDE, "real" SCSI, and 
USB, and tells you what the kernel thinks of them. That may be more 
important than the truth, if a device says it can support feature X, but 
the kernel doesn't agree, I would not expect that feature to work reliably.


Actually i was using the nice list of CD device files
which is provided according to the SCSI Howto but has
been deprecated by Documentation/devices.txt
whereas Documentation/scsi/scsi.txt still refers to the
SCSI Howto.

/proc is an interesting thing. But where is it documented ?
On what feature in /proc can i rely ?
/proc/scsi/sg/devices once gave me hints about the activities
of my drives. Not any more. :(

I will begin to collect /proc addresses about CD drives.
Maybe one can develop a kind of expert system from that list.
  


This has been around forever, I see it on an old box running RH8 
(2.4.18) as well as a modern dual-core running a recent -rc kernel. 
Thought it might be useful to tell you what the kernel would let you see 
and use.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Issues with cdrskin and USB devices on RHEL5

2008-07-14 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

  

# cdrskin --devices
cdrskin 0.4.8 : limited cdrecord compatibility wrapper for libburn
0  dev='/dev/hdc'  rwrw-- :  'HL-DT-ST'  'DVDRAM GSA-4163B'
...
# ls /dev/sr*
ls: /dev/sr*: No such file or directory
# ls /dev/sc*
/dev/scd0



By default cdrskin will accept /dev/scdN or /dev/sgM but
finally insists on seeing a /dev/srN.

You may push it towards /dev/scdN by adding option
  drive_scsi_dev_family=scd

The problem will be solved in the upcoming version 0.5.0.
(It also affects xorriso which can only be helped
by creating a /dev/sr0 softlink.)
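For the record, the softlink workaround mentioned above is a one-liner. The sketch below demonstrates it in a scratch directory so it can run unprivileged; on a real system one would, as root, run `ln -s /dev/scd0 /dev/sr0` (the device names here are only examples).

```python
# Sketch of the /dev/sr0 softlink workaround, demonstrated in a scratch
# directory instead of /dev so it does not need root.  On a real system
# the equivalent (run as root) would be:  ln -s /dev/scd0 /dev/sr0
import os
import tempfile

devdir = tempfile.mkdtemp()
scd0 = os.path.join(devdir, "scd0")
sr0 = os.path.join(devdir, "sr0")
open(scd0, "w").close()        # stand-in for the real device node
os.symlink(scd0, sr0)          # programs insisting on an sr0 now find one
print(os.path.realpath(sr0) == os.path.realpath(scd0))
```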


  

Note that hda is my hard disk; I don't know why cdrskin tries to use it.



libburn would know that it is not a CD drive
after opening it. But it may not open a CD drive
in the progress of burning.



Dumb question, why do you not look at the nice list of CD devices 
provided by the Linux kernel, and assume that anything not on that list 
is not going to work for CD uses, no matter what it is, or thinks it is.


/proc/sys/dev/cdrom/info

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Weird problem with RHEL5 isofs driver when using mkisofs -x --exclude options

2008-07-10 Thread Bill Davidsen

Joerg Schilling wrote:

Giulio Orsero <[EMAIL PROTECTED]> wrote:

  
The incorrect perms are a result of the bugs in the mkisofs version that comes 
with Redhat.
  

Actually I was always talking about the permissions/timestamp on the test
directory "dir1", these were incorrect even when the iso was created with
mkisofs a42.



The timestamp for dir1 was OK in all cases. The timestamp and/or permissions
for "." and/or ".." have been wrong.

  
Please check again with the following patch, it should then work even without 
-find:
  

I can confirm that with the patch the permissions of "dir1" are ok even
without "-find".



Do you still see the message 


Unknown file type (unallocated) isotest/.. - ignoring and continuing.
  


No

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Weird problem with RHEL5 isofs driver when using mkisofs -x --exclude options

2008-07-09 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
Please check again with the following patch, it should then work even without 
-find:


  
  

Thank you for the patch.



Well, if a problem was described decently, I am usually able to explain how to 
correctly use mkisofs or to create a fix soon.


This is an important difference to the fork used by some Linux distros ;-)
  


For once we are in total agreement. ;-)

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Weird problem with RHEL5 isofs driver when using mkisofs -x --exclude options

2008-07-09 Thread Bill Davidsen

Joerg Schilling wrote:

Giulio Orsero <[EMAIL PROTECTED]> wrote:

  

isoinfo output was always right (at least as for permissions and timestamp
which I was interested in).



isoinfo displayed wrong timestamps for "." and ".." with the old mkisofs clone 
that comes with RedHat. 


isoinfo still displays wrong timestamps for "." even with a recent mkisofs.


  

The incorrect permissions/timestamp were from "ls -l" on the iso image as
mounted on linux using Linux isofs driver.

That very same iso image, when mounted on Linux, would:
- show incorrect perms/timestamp if mounted on RHEL5
- show correct perms/timestamp if mounted on RHEL3



The incorrect perms are a result of the bugs in the mkisofs version that comes 
with Redhat.


The fact that you see different timestamps is a result of the fact that ISO-9660
stores different meta data for dir/ and dir/. 
If mkisofs does not make sure that the related meta data is identical, you see 
the known issues.


  

If I produce the iso image using "-find"; then Linux isofs RHEL5 driver will
show correct perms/timestamp too.



see above

  

My problem is that I have a backup script that when run on RHEL5 will
produce iso image which RHEL5 will read in a wrong way, unless it seems I
change the backup script to use "-find".

What is weird for me is:
1) Why would RHEL5 isofs driver be confused by an iso image produced by
mkisofs with the "-x" flag, even when the -x flag should have no effect
since, as per my example, it would exclude something that is not there?
As soon as I take the -x option out, RHEL5 will read the iso image
correctly.
I'd think mkisofs would produce the very same output if I tell it to exclude
something that is not there, basically a noop.



They did change the filesystem implementation in the kernel.

  

2) Switching to "mkisofs -find" seems to fix the issue, but I don't
understand why.



because mkisofs then stores the same meta data for dir/ and dir/. and you see 
the same values regardless of which values are taken by the FS in the kernel.


In general, your problem suffers from two reasons:

1)  The filesystem does not return "." and ".." first with readdir()

  
I believe that this may be a result of the loop mount. However, (a) I 
don't see the part of SuS which promises that these directory entries 
will be returned first, and (b) it appears that "." need not be the 
inode of the current directory.


The behavior of "dir" "dir/" and "dir/." is not the same in cases of 
symbolic links and (it seems) special mounts such as bind and loop. I'm 
not ready to explain in depth, I just observe...

2)  The deprecated -x option incorrectly excludes "." and ".."

Please check again with the following patch, it should then work even without 
-find:


  


Thank you for the patch.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: cdrecord -scanbus problem

2008-06-23 Thread Bill Davidsen

Bill Davidsen wrote:

Joerg Schilling wrote:

[EMAIL PROTECTED] wrote:

  

Hallo,

When I run cdrecord -scanbus as root I get:[EMAIL PROTECTED]:~# cdrecord 
-scanbus
Cdrecord-ProDVD-ProBD-Clone 2.01.01a41 (i686-pc-linux-gnu) Copyright (C)
1995-2008 Jâ? rg Schilling
cdrecord: Permission denied. Cannot open '/dev/hda'. Cannot open or use SCSI
driver.
cdrecord: For possible targets try 'cdrecord -scanbus'. Make sure you are 
root.

cdrecord: For possible transport specifiers try 'cdrecord dev=help'.

What's happening here? My system is built from SlackWare 12.1 and
/dev/hda is a normal IDE hard disk.



You need to run cdrecord with root privileges!
  


Did you read what he wrote? He clearly says "as root" in the very 
paragraph you quoted.
This can be done by reading ALL of what he wrote and understanding 
what it says.

You did not do that...

This can be done by calling it as root or by making it suid root.

You did not do that...
  


And it appears the original poster did not report the fact about running 
as root, but rather forced the binary to run as "bin" by making it 
setuid to that id, regardless of the login user. Since there was no way 
to know that, I think any assumption that the program was NOT running as 
root was correct only as a stroke of luck, rather than a feat of 
paranormal powers. There is a saying I can't quite remember, something 
like "if you misread the map you may still reach your destination if the 
map is wrong." It sounded better in the original German, which I haven't 
heard since I was a small child.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 








Re: cdrecord -scanbus problem

2008-06-23 Thread Bill Davidsen

Joerg Schilling wrote:

[EMAIL PROTECTED] wrote:

  

Hallo,

When I run cdrecord -scanbus as root I get:[EMAIL PROTECTED]:~# cdrecord 
-scanbus
Cdrecord-ProDVD-ProBD-Clone 2.01.01a41 (i686-pc-linux-gnu) Copyright (C)
1995-2008 Jâ? rg Schilling
cdrecord: Permission denied. Cannot open '/dev/hda'. Cannot open or use SCSI
driver.
cdrecord: For possible targets try 'cdrecord -scanbus'. Make sure you are 
root.

cdrecord: For possible transport specifiers try 'cdrecord dev=help'.

What's happening here? My system is built from SlackWare 12.1 and
/dev/hda is a normal IDE hard disk.



You need to run cdrecord with root privileges!
  


Did you read what he wrote? He clearly says "as root" in the very 
paragraph you quoted.
This can be done by reading ALL of what he wrote and understanding what 
it says.

You did not do that...

This can be done by calling it as root or by making it suid root.

You did not do that...
  



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Why are my DVD+R DL only readable by root ?

2008-06-14 Thread Bill Davidsen

Gregoire Favre wrote:

Hello,

I use the attached script to burn my media, and unfortunately, only root
has access to DL discs...

Any idea on how to solve this ?
  


Your subject is misleading, or at least confusing... It implies that 
after burn only root can read DL. If that's the case it's a new bug, and 
I have no idea at all why, since I don't see it.


If you mean you must be root to use cdrecord (not woe-dumb), then yes, 
it is a sad fact of life. The solution is to build a recent cdrtools 
from source, then install setuid root, or to change your script to use 
growisofs. The advantage of growisofs is no need to be root, the 
advantage of cdrtools is use of vendor commands which make some burners 
work better.


Install comment: I put everything setuid (or group, root or other user) 
in /usr/local/bin. In the case of cdrecord I also renamed it "CDrecord" 
so that I would be sure I was not getting some other program calling 
itself cdrecord.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Burning a Windows / DVD player compatible DVD

2008-06-01 Thread Bill Davidsen

Bob wrote:

Hi,

I'm really struggling with this whole compatible DVD thing.

For my amateur dramatics group, I have produced a DVD of our show,
including menus and all sorts of fun like that.  The DVD plays
perfectly with "xine dvd://video/dvd".

I have burned the DVD in many different ways, such as "growisofs
-dvd-compat -dvd-video -Z /dev/scd1 ." and most of the ways I burn it
will work perfectly on my panasonic DVD player.

However, when I distributed these DVD's to other people, I have
received reports that they don't work on Windows machines (they
apparently show up as a blank DVD), Mac's (similar I guess - but no
detail on that) or some DVD drives (which just refuse to read them).

I believe the DVD that was written that works on Linux and my DVD
player is using the UDF file system, but several real DVD's I have use
iso9660.  I tried burning it with iso9660 (actually using gnomebaker)
and that fails to play on my DVD player, but it is recognised and can
be played by a windows computer.

I'm currently using DVD-R disks, which I understood to be more likely
compatible with DVD players (although I may have that wrong because
http://fy.chalmers.se/~appro/linux/DVD+RW/ suggests that DVD+R disks
are more compatible).

In case it is useful, I have pasted the output of dvd+rw-mediainfo for
one of the burned DVD's at http://pastebin.com/m7479a9c

Could anyone suggest how I should be burning these DVD's to ensure
they are compatible with both windows and more DVD players?
  
I have been using DVD-R, creating the ISO images to burn with dvdstyler. 
I usually burn with growisofs, but I have used (original) cdrecord as 
well, and that works too. I burn these on several machines, and often 
create the mpeg files using ffmpeg, starting with some other format. I 
create all my mpg files in 720x480 DVD format if they aren't already; I 
don't trust conversions to do it by magic.


No magic trick.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: mkisofs -M makes no attempt to reconstruct multi-extent files

2008-05-22 Thread Bill Davidsen

Joerg Schilling wrote:

Andy Polyakov <[EMAIL PROTECTED]> wrote:

  

You used mkisofs incorrectly
  
Command line sequence was *tailored* to allow to produce usable input 
for *hex editor* in less than minute.


Why did you use -C16,xxx?

This is definitely wrong.
  
The reason is in the beginning of merge_isofs() in multi.c, in 
particular the for (i=0;i<100;i++) loop. As the area prior to sector 16 
is allowed to be, and customarily is, used for other purposes (such as 
hybrid disks), there is no guarantee that data there does not resemble 
an iso9660 volume descriptor. I don't want mkisofs to look at sectors 
prior to the 16th at all, but to start directly with the volume 
descriptor. One can argue that mkisofs should have seeked to the 16th 
sector all by itself, but the code has been around for so long that it 
should be considered a feature, not a bug.



In theory, I could change mkisofs to start looking at sector #16, as the oldest
available implementation (20 years old) at 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/hsfs/hsfs_vnops.c
starts reading at sector 16.

But if you put something that looks like a PVD between sector #0 and 
sector #15, then your software is wrong anyway. Are you really doing this?
  


I would think that since you don't want to use anything resembling a PVD 
found in that range, any application would be more robust not to look at 
all. Who knows what a hybrid disk might contain?


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: mkisofs -M makes no attempt to reconstruct multi-extent files

2008-05-20 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  

Only a setting up correct multi-extent file directory entry will work correctly.
  
  
I'm curious how you handle the case where the file shrinks and no longer 
needs multi-extent. I hope that's clear, I don't have a better 
description at hand.



The best solution for problems is to handle unrelated things independently.

The first step was to implement multi-extent files correctly.

The second step will be to correctly retain multi-extent files in the 
next session.


If there is a remaining problem, lets see.

  
I have no problem with following correct steps in order. I think there's 
a problem with large files being smaller, but I may not have explained 
it clearly.



Doing things correctly may look as if it takes more time than doing things
quickly, but it finally wins. I will continue to do things right with mkisofs
rather than trying to find quick ways to hide problems at first sight (as done
in "genisoimage"). Users of my software can rely on me taking things seriously.
In the long term, I will be able to keep the software maintainable; that is
what finally counts and what is important to me.




Jörg

  



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: mkisofs -M makes no attempt to reconstruct multi-extent files

2008-05-20 Thread Bill Davidsen

Joerg Schilling wrote:

Andy Polyakov <[EMAIL PROTECTED]> wrote:

  

You used mkisofs incorrectly
  
Command line sequence was *tailored* to allow to produce usable input 
for *hex editor* in less than minute.


Why did you use -C16,xxx?

This is definitely wrong.
  

Why I even bothered to report this? To be told that I can't use
multi-sessioning options to dump second session data to separate file in
order to examine its directory table in hex editor?



Mmm, I see no relation to the problem: you used -C16 instead of -C0.
I thought this is something you should know.


  

I used hex editor, yet I can assure that despite this I did not
misinterpret the results.


I explained to you that the problem is the incorrect allocation of _new_ 
space for the old file.
  

Well, why don't you back up your explanation with some evidence? I've
provided directory records' layout, even XP dir output for actual
multi-session recording, while you only said what you *would* use to
examine single-session recording...



I don't understand you.

Your claim that the file is non-contiguous is just wrong.
I explained the real problem and I am trying to fix the problem since yesterday 
evening.


  

BUT NEVER MIND!!! I'm going to throw in some more information supporting
my claim and then I'm going to *stop* following this thread, because I
simply don't have time or energy arguing.



You look frustrated, why?

  

Exhibit #5. Attached mkisofs/multi.c patch. Note that I make no claims
that it's complete solution for the problem. On the contrary, I can
confirm that the patched mkisofs for example fails to handle situation
when 5G.0 shrinks to less than 4GB in second session. The sole purpose
of the patch is to provide support for original claim [summarized in
subject]. All the patch does is reconstruct mxpart member of
directory_entry structure for extents in previous session.



Your patch is not correct at all although you started changing at the right 
location ;-)


Only a setting up correct multi-extent file directory entry will work correctly.
  


I'm curious how you handle the case where the file shrinks and no longer 
needs multi-extent. I hope that's clear, I don't have a better 
description at hand.


I started working on a correct solution today, but setting up the correct 
data structures takes a lot more than just 5 lines of code. After my solution 
is ready, we will still need some testing


Jörg

  



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: WRITE@LBA=3e0c30h failed with SK=3h/WRITE ERROR]: Input/output error (DL)

2008-05-20 Thread Bill Davidsen

Joerg Schilling wrote:

Dave Platt <[EMAIL PROTECTED]> wrote:

  
The problem is that they have called wodim "cdrecord" and provided (or 
in some cases not) different functionality. Obviously distributions 
thought people wouldn't use it if people knew it was another program. I 
feel the same way as I do when I order coke and get pepsi, it's a scam.
  

I can't speak for the people who put together the distributions, but
I'm not at all convinced about the "Obviously".

I suspect that the larger motivation was "Don't break existing
scripts, don't break existing GUI front-end applications".



They definitely broke existing GUI as the original cdrecord works while 
"wodim" does not work in many cases.


  
I can't say "many cases," but there are some burners which work better 
(ie produce useful burns) with cdrecord from Joerg. That said, in most 
cases wodim will work with good media and hardware.

The authors of k3b even started to look for the original cdrecord to be able
to use it instead of the plagiarized version.
  

Fewer complaints to them, I would think.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Announcing xorriso-0.1.6

2008-05-20 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
This has been on my "someday list" for a while, does it have the 
capability of taking a bootable image, letting me change the non-boot 
files, and then giving me another burnable image? I'm thinking Linux 
install disks with the extras and upgrades added, to simplify creation. 
While I know how to make bootable media, from scratch if need be, I 
don't much enjoy the steps.  :-(



This will not work without non-artificial intelligence, as you need tricks to 
make a CD boot on every BIOS.


  
My assumption is that if you start with a working bootable media, all 
you have to do is find out what tricks were used in the first place. As 
you note below you can get that information from isoinfo, and since you 
are changing the content rather than the boot logic or image you just 
need to keep what you have. I was more aiming this at xorriso,  since 
that already does most of what's needed, and Thomas may be more inclined 
to prove it can be done than explain why it can't.


Clearly making a bootable CD is very easy, since I have done it just by 
following the explanation you made in an early man page, and knowing 
that the the "boot block" in eltorito is just a floppy image, which 
happens to be the largest size you can fit on a standard floppy using 
four 32k sectors per track, or maybe eight 16k with a short IRG.


I appreciate that you are mentioning all the odd case which are 
difficult, but if eltorito boot can be done correctly, and other boot 
methods are rejected, that would still be useful.
The best way to re-create a bootable CD is to manually find the boot image 
first. This can be done by calling "isodebug":


isodebug -i /tmp/sol-nv-b87-x86-dvd.iso 
ISO-9660 image created at Mon Apr  7 12:32:02 2008


Cmdline: 'mkisofs 2.01 -b boot/grub/stage2_eltorito -c .catalog -no-emul-boot 
-boot-load-size 4 -boot-info-table -relaxed-filenames -l -ldots -r -N -d -D -V 
SOL_11_X86 -o .../solarisdvd.iso .../solarisdvd.product'

As you see, the boot image is "boot/grub/stage2_eltorito"; however, only the 
first 2048 bytes of this file are announced in the ElTorito header:


isoinfo  -d -i /tmp/sol-nv-b87-x86-dvd.iso 
CD-ROM is in ISO 9660 format

System id: Solaris
Volume id: SOL_11_X86
Volume set id: 
Publisher id: 
Data preparer id: 
Application id: MKISOFS ISO 9660/HFS FILESYSTEM BUILDER & CDRECORD CD-R/DVD CREATOR (C) 1993 E.YOUNGDALE (C) 1997 J.PEARSON/J.SCHILLING
Copyright File id: 
Abstract File id: 
Bibliographic File id: 
Volume set size is: 1

Volume set sequence number is: 1
Logical block size is: 2048
Volume size is: 1984486
El Torito VD version 1 found, boot catalog is in sector 10559
NO Joliet present
Rock Ridge signatures version 1 found
Eltorito validation header:
Hid 1
Arch 0 (x86)
ID ''
Key 55 AA
Eltorito defaultboot header:
Bootid 88 (bootable)
Boot media 0 (No Emulation Boot)
Load segment 0
Sys type 0
Nsect 4
Bootoff 2940 10560

What you see with this output is that the file /boot/grub/stage2_eltorito
starts at Sector # 10560.

As ElTorito only announces 2048 bytes from:

-r--r--r--   100  133008 Apr  2 2008 [  10560 00]  stage2_eltorito 


(this output if from isoinfo  -i /tmp/sol-nv-b87-x86-dvd.iso -lR)

it is obvious that any automated attempt to re-create a bootable DVD from the 
Solaris install DVD will not work. BTW: Linux CDs/DVDs look very similar.
  


I appreciate the warning, but doing the common case right and avoiding 
doing the other cases wrong is still useful.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: The future of mkisofs

2008-05-20 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
I happily offer the suggestion that since the program has not been 
limited to ISO9660 images for years, the name implies limitations which 
don't apply, and since you can create images for most common optical 
media, that a name like "mkoptimage" would be more correct as well as 
preventing confusion.



mkisofs is a well known program name and I cannot see that your proposal for a 
name change could help to reduce confusion for users.


  
Wow, think about that! You have to tell people many times a month that 
the program called cdrecord in many distributions is really "wodin" and 
works differently than your original cdrecord. Do you really want to be 
bothered by having questions about *three* versions of mkisofs, the 
current one in a38, the old one shipped with some distributions, and 
your new cleaned up version? It sounds like a waste of your time to me! 
And as for user confusion, I always install current cdrecord as 
"CDrecord" so I won't get the one which comes with a distribution. It 
would help the users if you did that with the enhanced mkisofs, because 
they wouldn't use an old version and complain, or try to use features in 
the new version and find they "don't work."


You know that no matter how confused the user is, it always gets 
reported as bug in your software.


Best of both worlds: you don't get questions caused by user confusion or 
waste time chasing bugs that aren't, and users know what they are getting.

Probably the biggest help for _new_ users was to list only the "most important"
options in case of a usage error:

Most important Options:
-posix-H                Follow symlinks encountered on command line
-posix-L                Follow all symlinks
-posix-P                Do not follow symlinks (default)
-o FILE, -output FILE   Set output file name
-R, -rock   Generate Rock Ridge directory information
-r, -rational-rock  Generate rationalized Rock Ridge directory info
-J, -joliet Generate Joliet directory information
-print-size Print estimated filesystem size and exit
-UDF                    Generate UDF file system
-dvd-video  Generate DVD-Video compliant UDF file system
-iso-level LEVELSet ISO9660 level (1..3) or 4 for ISO9660 v 2
-V ID, -volid IDSet Volume ID
-graft-points   Allow to use graft points for filenames
-M FILE, -prev-session FILE Set path to previous session to merge

to lead people to the right options.

Jörg

  



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 








Re: Announcing xorriso-0.1.6

2008-05-19 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

be invited to try the new release 0.1.6 of my program
xorriso, a ISO 9660 Rock Ridge filesystem manipulator.

It creates, loads, manipulates and writes ISO 9660
filesystem images with Rock Ridge extensions. It can
load the management information of existing ISO images
and it writes the session results to optical media or
to filesystem objects.

A special property of xorriso is that it needs neither
an external ISO 9660 formatter program nor an external
burn program for CD or DVD but rather incorporates the
libraries of libburnia-project.org .
  


This has been on my "someday list" for a while, does it have the 
capability of taking a bootable image, letting me change the non-boot 
files, and then giving me another burnable image? I'm thinking Linux 
install disks with the extras and upgrades added, to simplify creation. 
While I know how to make bootable media, from scratch if need be, I 
don't much enjoy the steps.  :-(


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: The future of mkisofs

2008-05-19 Thread Bill Davidsen

Joerg Schilling wrote:

mkisofs is a program that was originally written in 1993 by Eric Youngdale
and that has been extended by many people. 

In 1997, after Eric Youngdale mostly stopped working on mkisofs, I added 
mkisofs to the cdrecord source tree and started working on bugs and important
extensions. 

After 2 years (in 1999), Eric Youngdale transferred the complete code repository 
to me.


At that time, several people helped to enhance mkisofs. The most active one was 
James Pearson - he is unfortunately no longer reachable since he became a father 
years ago. Since that time, more than 5 years ago, I am the main mkisofs 
maintainer. 



Having been able to make the decisions in mkisofs myself since 1999, I spent 
half a year only on bug fixes and code restructuring to make the code prepared 
for the future. Now, more than 8 years later, mkisofs has a lot more features 
than in 1999 and needs another code lifting.


As mkisofs is very powerful and supports many OS and filesystem-hybrids, it has 
become a de-facto standard for creating ISO-9660 based filesystem images.


We are currently a short time before the next "stable" release of cdrtools and
I am planning to start a bigger code clean up after that time. The current plan 
is to do it the following way:


-	cdrtools-xxx.zzz-final (the next "stable" release) will be the last 
	release that includes all features that are currently in mkisofs.

People who need these features (e.g. because they own old hardware)
need to keep the version of mkisofs that is included in the next stable
version of cdrtools.

-   The ability to create "Apple-Hybrid" filesystem images causes problems
since many years already because the "Apple HFS" filesystem type does
not support files > 2 GB and includes other limitations.

	-	Support for this old filesystem is only needed for owners of 
		Mac OS 9 systems and for people who like to boot Apple PPC
		based systems. These Apple PPC based systems are out of 
		production since 3 years already.


-   Recent Apple systems boot using El-Torito extensions and
understand UDF + Apple extensions. Both have been supported in mkisofs
		for a while. 


It seems that support for "Apple HFS" is no longer needed in mkisofs
and removing the support could help to clean up the code.

I am planning to remove the "Apple-Hybrid" support with the developer
versions for cdrtools that follow the next stable release of cdrtools.

People who believe that this change would cause problems are called to explain 
their arguments. Please comment.
  


I would offer the thought that to avoid confusion and people getting the 
wrong functionality, this would be a great time to pick a new name for 
the program and use that to indicate both deleted and added 
functionality. It would be easier than explaining why the same name has 
different features. Having two programs with different behavior 
called cdrecord has been confusing and has wasted your (and our) time 
repeatedly, so the cleanup could start by cleaning up confusion.


I happily offer the suggestion that since the program has not been 
limited to ISO9660 images for years, the name implies limitations which 
don't apply, and since you can create images for most common optical 
media, that a name like "mkoptimage" would be more correct as well as 
preventing confusion.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 




--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: WRITE@LBA=3e0c30h failed with SK=3h/WRITE ERROR]: Input/output error (DL)

2008-05-19 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Andy Polyakov wrote:
  

  

Could you explain why writing a new program should
help to "fix" the hostile habit of some Linux kernel
developers?



I see this as a more complex system of animosity.
To exchange one component might change that
system completely.
But for now, the official Linux world rather settled
with wodim. So the quarrel ended anyway.
  


The problem is that they have called wodim "cdrecord" and provided (or 
in some cases not provided) the same functionality. Obviously distributions 
thought people wouldn't use it if they knew it was another program. I 
feel the same way as when I order Coke and get Pepsi: it's a scam.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Blu-Ray, Java, free software

2008-05-08 Thread Bill Davidsen
I found this note on InfoWorld today, linking Blu-Ray, Java, and a free 
software repository. On the off chance that the story and the links in 
it are useful to someone, I am posting the link 
<http://cwflyris.computerworld.com/t/3217299/121109532/112375/0/>.


On the other hand, the suspicious paranoid in me wonders: if there 
really is a JVM in a Blu-Ray player, would it be possible for someone to 
hide a virus on a Blu-Ray disc, such that it does something unpleasant 
when the disc is mounted or played? While most of these run on dumb 
devices like TVs, have you ever read the list of software licenses 
shown in a typical HDTV boot? Maybe my HDTV is running Linux inside; 
I wouldn't fall off my chair to learn that it does!


Enjoy.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: More Dual-Layer burning oddities

2008-05-01 Thread Bill Davidsen

Joerg Schilling wrote:

CJ Kucera <[EMAIL PROTECTED]> wrote:

  

Please explain your problem!
Cdrecord does the right thing with DVD+R without the need for a manual
layer break. A manual layer break is only needed for DVD-Video.
  

I thought that I had.  DVD+R/DL discs burned without "layerbreak=value"
on my box result in discs which are unreadable past the layer break (at
least, I assume that's where it fails, since it's at just about half the
size of the disc).  Once I threw in a manual layerbreak, the discs that
I burnt worked properly.  I didn't mention this in the original email,
but I haven't yet run any tests on a different batch of discs, to see if
it was just the media requiring that flag for me.



So you either have a problem with the medium or with the drives firmware.
Cdrecord computes and uses the right value automatically.

What value do you use for the layerbreak?

It is a well known problem that some media is unreadable on the second 
layer.
  


Since a manual layerbreak works, that seems unlikely. If certain media 
don't work in a burner, I'm never sure whether to call it a media error 
or a firmware error: if a firmware change fixes the problem, the media 
vendor will say it's a firmware fix, and the burner vendor will call it 
a work-around for media problems.


In either case I'd try different media just for the data point; if 
another major brand fails, I'd blame the burner.
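The thread says cdrecord computes the layer break automatically. As a rough sketch of the kind of arithmetic involved (my assumption about how such a default could be derived, not cdrecord's actual code): the break must land on an ECC-block boundary of 16 sectors, at or beyond the midpoint of the image, so layer 0 holds at least half the data.

```python
# Toy illustration (not cdrecord's algorithm): a DVD layer break must sit
# on an ECC-block boundary (16 sectors of 2048 bytes), and layer 0 must
# hold at least half of the image.

ECC_BLOCK = 16  # sectors per ECC block on DVD media

def default_layer_break(total_sectors):
    """Smallest ECC-aligned sector count covering at least half the image."""
    half = (total_sectors + 1) // 2
    # round up to the next multiple of 16 sectors
    return ((half + ECC_BLOCK - 1) // ECC_BLOCK) * ECC_BLOCK

# e.g. a 4,000,000-sector image (~8.2 GB of data)
print(default_layer_break(4_000_000))  # -> 2000000
```

A manually supplied layerbreak= value would simply override whatever the drive or program derives this way.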


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: mkisofs "-graft-points" not working?

2008-04-23 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  

What you do here never worked as you believe. You are using incorrect
syntax.
  
  
What I have been doing has been working for years, and worked with the 
mkisofs from wodim, and with "mkisofs 2.01a12 (i686-pc-linux-gnu)" which 
I had lying around.



It may be that it worked in your specific case. Before it was fixed, it
was non-deterministic and did not work as documented in the man page.

As I had to deal with a broken implementation and documentation with no
self-contradictions, I decided to implement what is documented in the man 
page 


However, looking at the code, it seems that you found a bug introduced
with a code cleanup. The code to auto-append a '/' in case the target is a dir
was disabled.

This will be fixed in the next release.


Thank you.

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: mkisofs "-graft-points" not working?

2008-04-18 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
After installing the recent cdrtools (a35), I started a backup script 
which writes all the backup information in a file and then creates an 
ISO image thus:

  mkisofs -o $DATE.iso -RU -graft-points -path-list $DATE.filelist

The run aborted saying that files had the same Rockridge name. After 
some investigation, it appears that the conflict was caused by ignoring 
the graft points and putting every file in the root of the image instead 
of using the subdirectory information. The man page for mkisofs 
indicates that graft-points still is intended to function as I have been 
using it for years.


The symptom is that a graft point like "USRLCL=/usr/local" no longer 
creates a subdirectory called USRLCL, but rather puts the tree starting 
with /usr/local directly in the root of the ISO image. Needless to say, 
this change doesn't qualify as an enhancement with me.



What you do here never worked as you believe. You are using incorrect
syntax.
  


What I have been doing has been working for years, and worked with the 
mkisofs from wodim, and with "mkisofs 2.01a12 (i686-pc-linux-gnu)" which 
I had lying around.
If you want to create a grafted _directory_ in the ISO image, you need to 
add a slash to the path name to the left of the equal sign.
  


The script for monthly backup has a graft point HOME=/home which has 
been working for five years on one of the machines I tested, which reports:

gaimboi:davidsen> mkisofs -version
mkisofs 2.0 (i686-pc-linux-gnu)
gaimboi:davidsen> l /usr/local/bin/mkisofs
-rwxr-xr-x1 root  1413386 Jun 14  2003 /usr/local/bin/mkisofs

This was built from *your* source and installed. If it doesn't work that 
way, then you changed it. And the man page still seems to say what I 
expect: if I say DIR=foo/bar/zot, then no matter what the path to the right 
of the equal sign is, it gets renamed to whatever is left of the equal sign.


Try this with your own old code to realize it has always worked this way:
 mkdir AA BB
 touch AA/test
 touch BB/test
 mkisofs -o test1.iso AAA=AA BBB=BB
 mkisofs -o test2.iso AAA=AA BBB=AA/test

In the first case BBB is a directory, in the second it is a file. And the 
current code prints no error message; the graft point is silently 
ignored. You changed it, you broke existing scripts, it fails 
without warning, and it doesn't conform to the documentation.


For once, could you fix it and admit there's a problem, instead of trying 
to claim it always worked that way?


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







mkisofs "-graft-points" not working?

2008-04-17 Thread Bill Davidsen
After installing the recent cdrtools (a35), I started a backup script 
which writes all the backup information in a file and then creates an 
ISO image thus:

 mkisofs -o $DATE.iso -RU -graft-points -path-list $DATE.filelist

The run aborted saying that files had the same Rockridge name. After 
some investigation, it appears that the conflict was caused by ignoring 
the graft points and putting every file in the root of the image instead 
of using the subdirectory information. The man page for mkisofs 
indicates that graft-points still is intended to function as I have been 
using it for years.


The symptom is that a graft point like "USRLCL=/usr/local" no longer 
creates a subdirectory called USRLCL, but rather puts the tree starting 
with /usr/local directly in the root of the ISO image. Needless to say, 
this change doesn't qualify as an enhancement with me.


Just a warning: this started quite a while ago, perhaps as long ago as a26. 
I'm sure there's some obscure option to make graft-points work as 
they used to, and I'll be told to read the manual (as usual), that it's 
all a kernel problem (as usual), or that it's my hardware (as I was told last 
week), but at the moment mkisofs doesn't have working graft-points.




Simple test:

posidon:davidsen> cd /tmp
posidon:davidsen> mkdir AA BB
posidon:davidsen> touch AA/temp
posidon:davidsen> touch BB/temp
posidon:davidsen> mkisofs -o x.iso -RU -graft-points DirAA=AA DirBB=BB
Warning: creating filesystem that does not conform to ISO-9660.
Setting input-charset to 'UTF-8' from locale.
Unknown file type (unallocated) AA/.. - ignoring and continuing.
Using temp000 for  /temp (temp)
mkisofs: Error: 'BB/temp' and 'AA/temp' have the same Rock Ridge name 
'temp'.

mkisofs: Unable to sort directory
posidon:davidsen>



Trying without graft-points it seems to work:

posidon:davidsen> tree -d
.
|-- AA
`-- BB

2 directories
posidon:davidsen> mkisofs -o y.iso -RU .
Warning: creating filesystem that does not conform to ISO-9660.
Setting input-charset to 'UTF-8' from locale.
Unknown file type (unallocated) ./.. - ignoring and continuing.
Total translation table size: 0
Total rockridge attributes bytes: 817
Total directory bytes: 4586
Path table size(bytes): 30
Max brk space used 0
177 extents written (0 MB)
posidon:davidsen> isoinfo -l -i y.iso

Directory listing of /
d-   000   2048 Apr 17 2008 [ 23 02] .
d-   000   2048 Apr 17 2008 [ 23 02] ..
d-   000   2048 Apr 17 2008 [ 24 02] AA
d-   000   2048 Apr 17 2008 [ 25 02] BB

Directory listing of /AA/
d-   000   2048 Apr 17 2008 [ 24 02] .
d-   000   2048 Apr 17 2008 [ 23 02] ..
--   000  0 Apr 17 2008 [-16 00] temp

Directory listing of /BB/
d-   000   2048 Apr 17 2008 [ 25 02] .
d-   000   2048 Apr 17 2008 [ 23 02] ..
--   000  0 Apr 17 2008 [-17 00] temp
posidon:davidsen>


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Announcing cdrskin-0.4.4

2008-04-16 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

i wrote:
  

Like: 1 hour 40 minutes is too long for a backup window.
  

Rob Bogus wrote:
  

That I can accept, I was going to post something similar.



This all would be no issue if there was an alternative
to BD-RE media like there are alternatives to DVD-RAM.

I tested DVD-RAM, said "blargh", and decided to use
DVD+RW.

With re-writeable BD one has to craft "BD+RW" from BD-RE.
Even then it lasts nearly an hour to fill a BD disc.
(Reminds me of my first 2x Yamaha CD-RW burner.)


  

Or: The media are perfect and never ever show any bad spot.
  

That sounds like a fairy tale.



If the media were as good as DVD+RW is on my various
drives, then it would be OK for backup purposes.

A severe problem is the larger size of BD media.
Assuming a similar probability of write failure per MB,
you have to expect at least 5 times more misburns
than with DVD+RW.
That would be indeed unbearable.


  

Ideally a copy of the backup would be held for byte-by-byte verify, and bad
spots (only bad spots) would be rewritten. That assumes that they CAN be
written successfully eventually. All solutions are ugly.



I recommend multi-copy backups for long term archiving,
i.e. identical images on several media, all covered by
the same list of block checksums (64 kB blocks).
But that needs buffer storage on hard disk in order to
truly get identical copies.

Buffer storage for 25 GB is not appealing. Especially
since i recently got rid of buffer storage for
oversized files.

  
Having a large buffer for large backups may be the only practical 
solution. I use dvdisaster for critical backups; it 
occasionally really saves the day, although it is slow when calculating 
the ECC codes to burn. Combined with dual-layer DVD to reduce the number 
of media changes, I can do practical backups.
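The block-checksum scheme Thomas describes can be sketched as follows (an illustrative Python sketch, not any particular tool; the 64 kB block size is the one he mentions). Several burned copies are verified against one checksum list, and only blocks that fail are re-read from an alternate copy.

```python
import hashlib

BLOCK = 64 * 1024  # 64 kB blocks, as suggested above

def block_checksums(data):
    """Checksum an image in 64 kB blocks so that several identical burned
    copies can be verified against the same list, and only damaged blocks
    need to be recovered from an alternate copy. A sketch of the idea."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

image = b"\x00" * (3 * BLOCK)   # stand-in for an ISO image on disk
sums = block_checksums(image)
print(len(sums))                # prints 3: one checksum per block
```

This is also why Thomas notes the copies must be bit-identical: the same checksum list only covers media that were burned from the same buffered image.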

My usage scenario for a full speed BD-RE run would
be short term backups which are allowed to fail
from time to time. Not too often.


If BD-RE with defect management is as ill as DVD-RAM
on my Philips drive, then it is no real solution either.

Imagine the backup operator sitting in front of a
gnawing drive for hours, pondering whether to finally
abort the backup or to hope for the normal
lame speed to come back.

DVD-RAM is nearly unusable for me. If BD-RE is as
bad, then i'll need to buy a tape drive for the next
generation of backup media.

My disk is 500 GB and my backup media are less than 5 GB.
One can do a lot with multi-volume and incremental.
But finally the backup data need to get onto media in
a reasonable time.

  
I don't know what external drives cost in Europe; they are about $130 
for 500GB in the US. That makes verified backups pretty cheap for 500GB. 
The cost-effective solution for too much cheap disk may be more cheap 
disk. The cost of BD media is coming down slowly, but it still costs far 
more per GB than the disk drive.


I burn "most critical" data on DVD for off-site backup, but daily still 
goes on those USB drives.

So if it is possible to run BD-RE at 9.5 MB/s without
too many misburns, then it is better than nothing.
Still not good, i confess.

I hope Giulio will tell us about his experiences.
  



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Odd message from cdrecord

2008-04-14 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  

I cannot help it if your DVD drive is slow or has other deficits that make
the DMA speed test report numbers that are too low. This is not a cdrecord 
problem but a problem of your hardware or your OS.
  
  
Since the hardware and the OS work at higher speeds with outer software, 




Could you explain outer software compared to inner software?
  


s/outer/other/ - changing keyboards 3-4 times a day does not improve my 
already dubious typing. But growisofs, for instance, gets 9.8 or 9.9x 
overall and 12.1x on the outer tracks. I did check using the size and 
elapsed time to be sure the calculations are correct, so that's a real 
result from the same hardware and OS.


Sorry for the typo, I thought it was clear from context.
  
it's clear that cdrecord is making an incorrect estimate of the hardware 
capabilities. Since you seem uninterested in investigating why that 
happens I'll continue using software which can make better estimates of 
the hardware.



cdrecord does not make incorrect estimations, it _meters_ the behavior.
For drives that behave incorrectly in this area, just follow the 
instructions.
  
But it wants to limit the speed to half of what the hardware will do 
with other programs. Clearly there is some issue here, it just isn't 
obvious what it is.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Odd message from cdrecord

2008-04-11 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  

Joerg Schilling wrote:


Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
  
cdrecord 2.01.01a35 (built from source), USB attached "litescribe" 
drive, TDK 16x DVD-R media. Recording is refused with a message

   DMA speed too slow (OK for 6x). Cannot write at speed 16x.
Since growisofs writes at 12x just fine (9.8x overall, 12x outer tracks) 
clearly there is an issue of some kind with deciding which speed to use, 
and speeds of >6x as working.



Just follow the instructions in the error message.
  
  
Since other software is able to write at higher speed than 6X I'll just 
use the fastest program available. I thought you might be interested in 



star is the fastest program... you are misinterpreting messages
  


I didn't know star could burn DVDs, it doesn't seem to be documented.
  
why it underestimates the DMA capabilities by at least 2x, I wasn't 
looking for hints on how to make burning take twice as long by running 
at a lower speed. With a disk plugged into that port I can get sustained 
write of 43MB/s, so it would appear that the hardware is capable of 16x 
operation and something is making cdrecord believe otherwise.



I cannot help it if your DVD drive is slow or has other deficits that make
the DMA speed test report numbers that are too low. This is not a cdrecord 
problem but a problem of your hardware or your OS.
  


Since the hardware and the OS work at higher speeds with outer software, 
it's clear that cdrecord is making an incorrect estimate of the hardware 
capabilities. Since you seem uninterested in investigating why that 
happens I'll continue using software which can make better estimates of 
the hardware.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Odd message from cdrecord

2008-04-11 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
cdrecord 2.01.01a35 (built from source), USB attached "litescribe" 
drive, TDK 16x DVD-R media. Recording is refused with a message

   DMA speed too slow (OK for 6x). Cannot write at speed 16x.
Since growisofs writes at 12x just fine (9.8x overall, 12x outer tracks) 
clearly there is an issue of some kind with deciding which speed to use, 
and speeds of >6x as working.



Just follow the instructions in the error message.
  


Since other software is able to write at higher speed than 6X I'll just 
use the fastest program available. I thought you might be interested in 
why it underestimates the DMA capabilities by at least 2x, I wasn't 
looking for hints on how to make burning take twice as long by running 
at a lower speed. With a disk plugged into that port I can get sustained 
write of 43MB/s, so it would appear that the hardware is capable of 16x 
operation and something is making cdrecord believe otherwise.
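The "43MB/s sustained" figure can be sanity-checked against DVD speed factors. Assuming the commonly quoted nominal rate of 1385 kB/s for 1x DVD (an assumption on my part; this is not cdrecord's actual DMA test), the port's throughput works out to roughly 31x, well above the 16x the drive was refused:

```python
DVD_1X = 1_385_000  # bytes/s nominal rate for 1x DVD (assumed figure)

def dvd_speed(bytes_per_sec):
    """Convert a measured transfer rate into a DVD 'x' factor, so a figure
    like '43 MB/s sustained' can be compared with a 16x rating. A
    back-of-the-envelope check, not cdrecord's DMA measurement."""
    return bytes_per_sec / DVD_1X

print(round(dvd_speed(43_000_000), 1))  # -> 31.0, ample headroom for 16x
```

Of course the DMA test measures the path to the burner itself, not to a hard disk on the same port, so the two numbers need not agree; the gap is what makes the 6x estimate look suspicious.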


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Odd message from cdrecord

2008-04-11 Thread Bill Davidsen
cdrecord 2.01.01a35 (built from source), USB attached "litescribe" 
drive, TDK 16x DVD-R media. Recording is refused with a message

  DMA speed too slow (OK for 6x). Cannot write at speed 16x.
Since growisofs writes at 12x just fine (9.8x overall, 12x outer tracks), 
clearly there is an issue of some kind with deciding which speed to use 
and with recognizing speeds of >6x as working.


This is just FYI, I only use this setup to burn complete DVD images from 
precreated files (mkisofs+dvdisaster) and this isn't keeping me from 
backing up old data.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Blanking/Formating a BD-RE

2008-03-06 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  

Joerg Schilling wrote:


Arnold Maderthaner <[EMAIL PROTECTED]> wrote:

  
  
If someone can provide me some tool or instructions how to do it I  
could try it on another PC with BD-RE drive.

Jörg did you have any progress with the BD-RE integration in cdrecord ?



Last weekend, I have been at Linuxtage in Chemnitz and next week I am at CeBIT.

I hope to be able to continue on March 15th.

  
  
Anything interesting at Linuxtage? Related to the list topic, I mean? 
And off-topic, I see the lighttpd folks were talking about Solaris, 
another of your interests.



There was "only" OpenOffice with a booth...

Chemnitz is more community oriented than commercial, which is why it is 
always interesting to talk to people.


One interesting piece of news for Linux might be that it should be possible 
to run cdrecord root-less on newer Linux kernels in the near future.
  
By using capabilities or improved filtering of commands? Or something 
else I can't think of at the moment? Good in any case. I haven't found 
that SELinux causes any additional problems, but BD has been out of my 
price range still.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Blanking/Formating a BD-RE

2008-03-04 Thread Bill Davidsen

Joerg Schilling wrote:

Arnold Maderthaner <[EMAIL PROTECTED]> wrote:

  
If someone can provide me some tool or instructions how to do it I  
could try it on another PC with BD-RE drive.

Jörg did you have any progress with the BD-RE integration in cdrecord ?



Last weekend, I have been at Linuxtage in Chemnitz and next week I am at CeBIT.

I hope to be able to continue on March 15th.

  


Anything interesting at Linuxtage? Related to the list topic, I mean? 
And off-topic, I see the lighttpd folks were talking about Solaris, 
another of your interests.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Mkisofs - seek error on old image

2008-03-04 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> wrote:

  
Without knowing the parameters to lseek64 here, I would guess this is 
a defective previous session or a Linux kernel bug.


Mmm. That's why I added the DEBUG output

scsi.c:111

lseek(f, 2607382528, SEEK_SET)

How can I test the previous session? It has been written successfully 
by growisofs.
  


Interesting, I did not receive this mail.

The number is divisible by 2048; please send the "cdrecord -minfo" output
for this disk.
  


You were on the cc list for my reply, if you were pulling mail while 
away you may have missed it. And "divisible by 2048" is what I meant by 
"2k" in my post.


If he had an improperly closed session might he have seen this rejected 
as a seek beyond the end of written data?




--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Mkisofs - seek error on old image

2008-02-29 Thread Bill Davidsen

Giuseppe Corbelli wrote:

Joerg Schilling wrote:

"Giuseppe \"Cowo\" Corbelli" <[EMAIL PROTECTED]> wrote:


Last step fails:

mkisofs -C 1273136,1284368 -M /dev/sr0 -R -J 
/amps/backups/flexbackup/ -v -o test.iso


Setting input-charset to 'ISO-8859-15' from locale.
DEBUG: startsec 1273136 startbyte 2607382528
mkisofs: Invalid argument. Seek error on old image


Without knowing the parameters to lseek64 here, I would guess this is 
a defective previous session or a Linux kernel bug.


Mmm. That's why I added the DEBUG output

scsi.c:111

lseek(f, 2607382528, SEEK_SET)

How can I test the previous session? It has been written successfully 
by growisofs.


First, as I'm sure you checked, your seek address is a multiple of 2k, 
so it's on a sector boundary.


Are you sure that your system doesn't try to mount or check a DVD when 
you put it in the drive? Some of this is in the window manager and some 
in udev, although once I told the WM never to look at optical 
media I stopped getting warnings. udev hasn't caused problems that way; 
Fedora [678] in use here.



--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Blanking/Formating a BD-RE

2008-02-29 Thread Bill Davidsen

Arnold Maderthaner wrote:

Hi !

How should I test it with a Windows application as I run Linux ?

When I said "it would be good to know" I was hoping someone with a 
Windows system and some expertise would jump in. I run Linux also.

yours

Arnold

Am 24.02.2008 um 04:43 schrieb Bill Davidsen:


Joerg Schilling wrote:

Arnold Maderthaner <[EMAIL PROTECTED]> wrote:

  
Yes, I'm running it as root on RHEL5.1 with the newest cdrecord with the  
patch that Joerg sent.
Btw. after that "crash" the BD-RE drive cannot be used anymore. I had  
to reboot the system.


If you need to reboot, you found a kernel bug.
  


In the sense that the kernel could detect that the drive was in a 
problem state and do the type of initialization which occurs at boot 
and device probe time. There are other things possibly involved.


The kernel just passes commands, so the application might be sending 
some command (not "wrong," just different than what the Windows 
application uses) which locks up the firmware. To test you could 
leave the system up and power cycle the drive (plug and unplug power 
cable). Unlikely, but not impossible. You could call this an 
application bug or a firmware bug, but if power cycle of the drive 
clears it, it is likely to be firmware response to the command sent. 
If the kernel passes the application command to the device, it's 
reasonably hard to see this as a kernel bug in the usual sense.


It would be good to know what command the Windows application sends 
to do the same function, it would help clarify the nature of the 
problem, and obviously the solution.

--
Bill Davidsen <[EMAIL PROTECTED]>
  "Woe unto the statesman who makes war without a reason that will still
  be valid when the war is over..." Otto von Bismark 







--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 







Re: Blanking/Formating a BD-RE

2008-02-23 Thread Bill Davidsen

Joerg Schilling wrote:

Arnold Maderthaner <[EMAIL PROTECTED]> wrote:

  
Yes, I'm running it as root on RHEL5.1 with the newest cdrecord with the  
patch that Joerg sent.
Btw. after that "crash" the BD-RE drive cannot be used anymore. I had  
to reboot the system.



If you need to reboot, you found a kernel bug.
  


In the sense that the kernel could detect that the drive was in a 
problem state and do the type of initialization which occurs at boot and 
device probe time. There are other things possibly involved.


The kernel just passes commands, so the application might be sending 
some command (not "wrong," just different from what the Windows 
application uses) which locks up the firmware. To test, you could leave 
the system up and power cycle the drive (unplug and replug the power cable). 
Unlikely, but not impossible. You could call this an application bug or 
a firmware bug; if a power cycle of the drive clears it, it is likely 
a firmware response to the command sent. If the kernel merely passes the 
application command to the device, it's reasonably hard to see this as a 
kernel bug in the usual sense.


It would be good to know what command the Windows application sends to 
do the same function, it would help clarify the nature of the problem, 
and obviously the solution.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Help: Burning multisession DVD+R with cdrecord 2.01.01a37

2008-02-15 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Eric Wanchic:
  

Looks Great ! TAO was executed.


Rob Bogus:
  

Thomas, perhaps you can clarify the SAO vs. TAO options and actual



Actually it is a packet write type.
"TAO" and "SAO" are rather aliases for a certain
behavior as it is known from good old CD media.
 
With DVD+R there is only this one write type.

cdrskin calls "TAO" a run without pre-announced size
and "SAO" a run with such an announcement.

MMC-5 4.3.6.2.2
"The Host views a DVD+R fragment as a fixed packet track
where the packet size is 16."
The following text describes command RESERVE TRACK
as optional for reserving a fixed track size.

  
Thanks, I saved that bit so I can quote it (with attribution, of course) 
to the next person who asks me. I was under the impression that there 
could be multiple tracks in a session, but I never saw an example of 
that. Clearly some drives offer one or the other, some both, and some 
odd things like:


   Identifikation : 'DVD A  DH20A4P  '
   Supported modes: TAO PACKET SAO SAO/R96P SAO/R96R RAW/R16 RAW/R96P RAW/R96R 
LAYER_JUMP
 

I really wish I fully understood all of the modes with "/" but I have 
never needed them so it's just curiosity for now.
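Given the MMC-5 wording quoted above (a DVD+R fragment is a fixed packet track with a packet size of 16 sectors), a track reservation presumably has to cover whole packets. A purely illustrative sketch of that rounding (my reading of the quote, not actual drive firmware behavior):

```python
PACKET = 16  # DVD+R fixed packet size in sectors, per the MMC-5 text above

def reserved_sectors(data_sectors):
    """Round a track reservation up to whole 16-sector packets, as a
    RESERVE TRACK on DVD+R would presumably require. Illustrative only."""
    return ((data_sectors + PACKET - 1) // PACKET) * PACKET

print(reserved_sectors(1000))  # -> 1008: 63 packets of 16 sectors
```

This also matches cdrskin's convention above: "SAO" announces such a size up front, while "TAO" leaves it open.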


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Help: Burning multisession DVD+R with cdrecord 2.01.01a37

2008-02-13 Thread Bill Davidsen

Joerg Schilling wrote:

Bill Davidsen <[EMAIL PROTECTED]> whined again and trolled:

  
I think [1] is a hint, I bet you used a vendor hack of growisofs instead 
of downloading and building the real program from source. That doesn't 
mean I promise it will work for you, just that you know what you have. 
Oh, and it works for me, one session, multi-session, DVD-R, DVD+R, etc, etc.



growisofs is a dead project as the whole cdrkit is dead.
 
  
As you have already noted, you were thinking of some other software 
which has nothing to do with anything I said.
You didn't apologize for bringing up something utterly unrelated to what 
I said; you called it a typo.
  
I would say the same thing about cdrecord, while wodim has fixed many 



wodim did not fix a single problem from cdrecord, but it introduced 
_many_ bugs that were never in the original.
  
[  many incorrect claims removed ]


  
Since you removed them, I conclude that you could not refute a single one 
of them, since every problem can be easily replicated. Why don't you 
just *fix* the issues I mentioned, so everyone would be happy? These 
are the reasons why wodim even exists: because you not only refuse to 
fix these problems or accept the fixes others have made, you refuse to 
admit the problems exist, when anyone with a computer can verify them.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Help: Burning multisession DVD+R with cdrecord 2.01.01a37

2008-02-13 Thread Bill Davidsen

Eric Wanchic wrote:

Thanks for replying.(^_^)

Thomas Schmitt wrote:

Hi,

 

I've moved from growisofs [...] to wodim,
finally to cdrecord 2.01.01a37.



What was the reason to give up growisofs ?

It is supposed to do multi-session on DVD+R if
you do not use options -dvd-compat or -dvd-video.
  


Reason 1: I read the man page and I did exactly what it stated, and it 
wasn't working for me. Please see my current posting (5 days old) on 
ubuntuforums - http://ubuntuforums.org/showthread.php?p=4288486


Nobody could really help me with this issue. But as you will see I 
didn't stop there. I also began to analyze K3B's debugging output. I 
had the most difficult time trying to find any more documentation that 
wasn't already included in the man pages, or through google searches 
to see what it was I could be doing wrong.


Reason 2: I finally went to |#cdrkit| on |irc.oftc.net |and waited 
four days before I finally got a response. One of the chat members 
stated:
1. I see you have found one of our bugs. I think we are long overdue 
for an upgrade.

2. Sorry, I can't help you. I've never worked with multisession DVD +Rs.
3. Why don't you try 
http://www.mail-archive.com/cdwrite@other.debian.org/info.html. They 
might be able to help you.


I think [1] is a hint, I bet you used a vendor hack of growisofs instead 
of downloading and building the real program from source. That doesn't 
mean I promise it will work for you, just that you know what you have. 
Oh, and it works for me, one session, multi-session, DVD-R, DVD+R, etc, etc.


I would say the same thing about cdrecord, while wodim has fixed many 
annoying things about cdrecord[1], various vendor versions seem to 
provide enough "learning experiences" to justify using real cdrecord, or 
at least not calling wodim cdrecord, because it isn't... quite. Still, 
it is the only tool for burning fancy CDs, unless the very latest 
cdrskin has added more capabilities.


[1] the following things make cdrecord unpleasant to use:
- must run as root.
- whines about using real device names instead of meaningless numbers;
  support is not "unintentional," code was added to make it work.
- whines about losing data if not run as root; a simple "didn't set RT"
  message would tell the user, and data loss doesn't happen with burnfree
  in any case. cdrecord complains even for informational operations like
  -prcap, -msinfo or -atip, indicating it is setting RT when not needed.
- program advises to drop back to the last millennium and run 2.4 Linux
  kernels.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Help: Burning multisession DVD+R with cdrecord 2.01.01a37

2008-02-13 Thread Bill Davidsen

Eric Wanchic wrote:
Summarizing a Long story: I'm using Ubuntu 7.10 gutsy, and I've been 
working/researching for almost one week on how to create a 
multisession DVD+R for backing up. I've moved from growisofs to 
genisoimage to wodim, finally to cdrecord 2.01.01a37. I 
manually installed cdrecord 2.01.01a37 from smake. I seem to be able 
to create the first track with mkisofs and burn with cdrecord. It is 
mountable and readable. But when I get ready to do the second track 
with cdrecord -msinfo dev=3,0,0; it states:


cdrecord: Cannot read first writable address



Here is what I've done. For the first track:
mkisofs -v -o track_01.iso -R /media/sdc1/backups

Next I burn:
cdrecord -v speed=2 dev=2,0,0 -eject -multi -tao -data track_01.iso

I noticed that during the burn:
Starting to write CD/DVD/BD at speed 4 in real SAO mode for multi 
session.


I thought it was a bad DVD drive, so I switched it with another, and I 
still received SAO, even though I stated -multi and -tao.




See comment below


Finally I do my:
cdrecord -msinfo dev=3,0,0

And get:
cdrecord: Cannot read first writable address

Doing a cdrecord: cdrecord -minfo dev=3,0,0:

Cdrecord-ProDVD-ProBD-Clone 2.01.01a37 (x86_64-unknown-linux-gnu) 
Copyright (C) 1995-2008 Jörg Schilling

scsidev: '3,0,0'
scsibus: 3 target: 0 lun: 0
Linux sg driver version: 3.5.34
Using libscg version 'schily-0.9'.
Device type: Removable CD-ROM
Version: 5
Response Format: 2
Capabilities   :
Vendor_info: 'TSSTcorp'
Identifikation : 'CDDVDW SH-S203N '
Revision   : 'SB01'
Device seems to be: Generic mmc2 DVD-R/DVD-RW/DVD-RAM.
Using generic SCSI-3/mmc-3 DVD+R driver (mmc_dvdplusr).
Driver flags   : NO-CD DVD MMC-3 SWABAUDIO BURNFREE
Supported modes: PACKET SAO LAYER_JUMP


Note the lack of TAO here, so SAO is used instead. I assume that's 
acceptable; Joerg understands these modes well enough to get it "usefully 
right" here. So I don't think there's a problem with SAO.


My drives return TAO, SAO, or both, depending on model. Example:

   Vendor_info: 'ATAPI   '
   Identifikation : 'DVD A  DH20A4P  '
   Revision   : 'NP53'
   Device seems to be: Generic mmc2 DVD-R/DVD-RW/DVD-RAM.
   Using generic SCSI-3/mmc   CD-R/CD-RW driver (mmc_cdr).
   Driver flags   : MMC-3 SWABAUDIO BURNFREE FORCESPEED
   Supported modes: TAO PACKET SAO SAO/R96P SAO/R96R RAW/R16 RAW/R96P RAW/R96R 
LAYER_JUMP
 


Mounted media class:  DVD
Mounted media type:   DVD+R
Disk Is not erasable
data type:standard
disk status:  complete
session status:   complete
BG format status: none
first track:  1
number of sessions:   1
first track in last sess: 1
last track in last sess:  1
Disk Is unrestricted
Disk type: DVD, HD-DVD or BD

Track  Sess Type   Start Addr End Addr   Size
==============================================
  1 1 Data   0  9183   9184

Last session start address: 0
Last session leadout start address: 9184
[EMAIL PROTECTED]:/media# mount /dev/scd0 cdrom0
mount: block device /dev/scd0 is write-protected, mounting read-only

I can mount this. I'm new to all of this, but I'm assuming it is 
closing the disc. I am open to suggestions. Thanks



I think this is a clue, "block device /dev/scd0 is write-protected" may 
indicate that you are running as non-root, after being root to write the 
first session. In any case, it is probably related.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Doesn't wodim close disk ?

2008-01-08 Thread Bill Davidsen

Gregoire Favre wrote:

On Sat, Jan 05, 2008 at 01:20:48PM +0100, Thomas Schmitt wrote:

Hello :-)

  

I have used cdrecord-proDVD on DVD-RW and
on DVD+RW without unexpected problems.
Since about 1.5 years, cdrecord from Joerg's
sources offers the proDVD capabilities.



Same for me on my old burner, but I got tired of changing the key for
cdrecord-proDVD and switched to growisofs.

  
I just find it more convenient to use growisofs with multi-session. Has 
nothing to do with how well it works, just an easier to use user interface.

I would rather bet that cdrecord -dao is
able to burn a DVD+R which is equivalent
to one burned by growisofs -dvd-compat.


And growisofs works on drives which lack DAO capabilities. cdrecord 
tells me, correctly, that the device won't do -dao, but it doesn't burn 
the DVD. I don't recall having problems with using -sao, but I usually 
use DVD-R as well, so my experience is limited, although I do sometimes 
use DVD+R for various reasons.

But i have to confess that i never tried
it.



  
I did, with old sources at the beginning of cdrecord's DVD support; at
that time one still needed to specify the size of the session
before burning, which I thought was really nonsense as the program
could easily calculate it for us.

I read the new manual from cdrecord and it seems it hasn't changed in
that aspect.

  

but I would take a RW media ;-)
  

DVD+RW are a completely different game.
Other command sequences, other pitfalls. 



I did write a medium using cdrecord now and the result was just perfect.
I attach the script I used to do the burning.
The DVD was perfectly readable under OSX.

  

Good to know.

Final note: one burner which doesn't support DAO is

   Vendor_info: 'PIONEER '
   Identifikation : 'DVD-RW  DVR-104 '
   Supported modes: TAO PACKET SAO SAO/R96P SAO/R96R

--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 





Re: Doesn't wodim close disk ?

2008-01-01 Thread Bill Davidsen

Joerg Schilling wrote:

Gregoire Favre <[EMAIL PROTECTED]> wrote:

  

On Tue, Jan 01, 2008 at 02:18:31PM +0100, Joerg Schilling wrote:



The information you provided prove that cdrecord was able to close the disk.

If you did send the output from cdrecord -minfo from the disk past
the -fix run, this could be verified..
  

I said to you that cdrecord didn't change anything because
dvd+rw-mediainfo still showed the same info :



I don't care about the output from dvd+rw-mediainfo

  
But the rest of the list is not pretending all software not written by 
you is useless, so it provides useful information at times.

And after the try of cdrecord -fix I did a full md5sum of all
files and compared to the local files on my HD : the same, I
can mount and read the file without any problem (well only under
linux...).



Please do not continue to repeat that this is a Linux vs. non-Linux problem

  
The lack of close is everywhere, but it is only visible on some 
operating systems.
  

Maybe the cdrecord dev=/dev/sr0 -minfo would give you reason in a
way ?



Are you unwilling to read the documentation?

Why do you use an unsupported dev= parameter?

  
Because the world has largely given up using obscure numbers instead of 
meaningful names. People use node names instead of IP numbers or MAC 
addresses. Your pseudo-SCSI scheme is even less meaningful since the 
original SCSI-I hardware used four, not three, numbers (slot, bus, 
device, LUN). And your "unintentional and not supported" error message 
is B.S. and you know it, you added code to make it work, and that's 
hardly unintentional. And since modern pluggable devices are now common 
and order of plug defines the numbers, only names make any sense to use 
in scripts.

Cdrecord-ProDVD-ProBD-Clone 2.01.01a36 (x86_64-unknown-linux-gnu) Copyright (C) 
1995-2007 Jörg Schilling
scsidev: '/dev/sr0'
devname: '/dev/sr0'
scsibus: -2 target: -2 lun: -2
Warning: Open by 'devname' is unintentional and not supported.
Linux sg driver version: 3.5.27
Using libscg version 'schily-0.9'.
Device type: Removable CD-ROM
Version: 5
Response Format: 2
Capabilities   : 
Vendor_info: 'LITE-ON '

Identifikation : 'DVDRW SH-16A7S  '
Revision   : 'WS04'
Device seems to be: Generic mmc2 DVD-R/DVD-RW/DVD-RAM.
Using generic SCSI-3/mmc-3 DVD+R driver (mmc_dvdplusr).
Driver flags   : NO-CD DVD MMC-3 SWABAUDIO BURNFREE FORCESPEED 
Supported modes: PACKET SAO LAYER_JUMP

WARNING: Phys disk size 2295104 differs from rzone size 2146272! Prerecorded 
disk?
WARNING: Phys start: 196608 Phys end 2491711
Mounted media class:  DVD
Mounted media type:   DVD+R
Disk Is not erasable
data type:standard
disk status:  incomplete/appendable
session status:   empty
BG format status: none
first track:  1
number of sessions:   2
first track in last sess: 2
last track in last sess:  2
Disk Is unrestricted
Disk type: DVD, HD-DVD or BD

Track  Sess Type   Start Addr End Addr   Size
==============================================
1 1 Data   0  21462712146272
2 2 Blank  21483202295103146784

Last session start address: 0
Last session leadout start address: 2146272
Next writable address:  2148320
Remaining writable size:146784



wunderful!

The disk is OK now and should be readable on every non-defective 
drive that supports to read DVD+R media.


  

Thank for taking time to see the problem, but for the record
it should be mentioned that I never got any error with wodim
and all files seem good under linux.



A buggy program like wodim that does not print error messages
when it creates defective media is a real problem. Why didn't you listen
to the warnings I have published for more than a year? Wodim is software published 
by people that don't care about their users. The only intention these people

had was to harm the free software project cdrtools.

  
And removes all the B.S. warnings and limitations you put in your code 
purely to make people think Linux had a problem. People choose to use 
lesser software because it's easier than dealing with you. You could be 
widely respected in free software, and instead people only use your 
software when they can't find anything else.
Now your problem that was caused by defective "DVD support" in wodim 
has been fixed by cdrecord!


The previously unreadable track #1 is now correctly closed.

What is your problem?


Note that you did not yet report any real problem. My answers are all
based on the known bugs in wodim and on your claim that the medium
is not readable. 
  


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 






Re: problem with BD sessions after 4GB

2007-11-30 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Arnold Maderthaner:
  

 Mounted Media: 43h, BD-RE


growisofs -M /dev/dvdrw -R -J -joliet-long -use-the-force-luke=4gms
:-( next session would cross 4GB boundary, aborting...



This is contrary to what i read from the source code.

You will have to obtain a source tarball of dvd+rw-tools
and take a look into growisofs.c .
Search for the error message text which i get to see
like this:

else if (next_session > (0x200000-0x5000)) /* 4GB/2K-40MB/2K */
if ((mmc_profile&0xFFFF)<0x20 ||
((mmc_profile&0xFFFF)<0x40 && !no_4gb_check))
fprintf (stderr,":-( next session would cross 4GB "
"boundary, aborting...\n"),
exit (FATAL_START(ENOSPC));

Disable it by spoiling the size test:

else if (0 && next_session > (0x200000-0x5000)) /* 4GB/2K-40MB/2K */
...

compile and check whether it will work then.

If you are curious then leave the test active and
print the media type as perceived by growisofs:

fprintf (stderr,":-( next session would cross 4GB "
"boundary (0x%X), aborting...\n",
(unsigned) (mmc_profile&0xFFFF) ),

The hex number given in brackets would be supposed to be
0x43 ... but that does not match source and program behavior.
I would be curious :))


Particularly since the media reported after the write was 0x43. But I 
think it's more likely that the media type is correct, and this test 
just should not be done. The logic to skip this test may be failing. Of 
course the media check could be put into the test instead of your debug 
"0 &&" to be sure this media is allowed to be large.
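As a sanity check on the constants in that size test, the /* 4GB/2K-40MB/2K */ comment works out as follows:

```shell
# 4 GB and 40 MB expressed in 2048-byte blocks, matching the
# /* 4GB/2K-40MB/2K */ comment in growisofs.c.
four_gb_blocks=$(( 4 * 1024 * 1024 * 1024 / 2048 ))   # 2097152
forty_mb_blocks=$(( 40 * 1024 * 1024 / 2048 ))        # 20480
printf '0x%X 0x%X\n' "$four_gb_blocks" "$forty_mb_blocks"   # 0x200000 0x5000
```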


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 




--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: DVD-R as backup medium: growisofs + udftools

2007-11-29 Thread Bill Davidsen

Joerg Schilling wrote:

"Thomas Schmitt" <[EMAIL PROTECTED]> wrote:

  

Hi,

Joerg Schilling:


It is _impossible_ to do correct backups without additional disk space.


Giuseppe Corbelli:


You say it is necessary to store star archives on a filesystem prior
to backup?
  

It is possible to write archives as sessions
directly to multi-session media: DVD-R, DVD+R,
DVD-RW, CD-R, CD-RW.
As long as one knows the start addresses it
is easy to read the archives by help of dd.
This works for afio as well as for star.

Try for example:
  find . | \
  afio -oZ - | \
  cdrskin -v dev=/dev/sr0 -multi -
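Thomas's read-back claim can be demonstrated without a drive: the sketch below fakes a "disc" in a plain file standing in for /dev/sr0 (the session start address and sizes are invented for the demo; on real media they would come from -msinfo or a TOC listing).

```shell
# Demo of reading one "session" back with dd, using a plain file in
# place of /dev/sr0. All addresses here are made up for the demo.
tmp=$(mktemp -d)

# Fake a disc: 4 blocks of earlier data, then one block holding our archive.
dd if=/dev/zero of="$tmp/disc" bs=2048 count=4 2>/dev/null
printf 'archive payload' > "$tmp/session2"
dd if=/dev/zero bs=1 count=2033 >> "$tmp/session2" 2>/dev/null  # pad to 2048
cat "$tmp/session2" >> "$tmp/disc"

# Knowing the start block (4) and size (1 block), dd recovers the session.
dd if="$tmp/disc" of="$tmp/readback" bs=2048 skip=4 count=1 2>/dev/null
cmp -s "$tmp/session2" "$tmp/readback" && echo "session recovered"
```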




It seems that you misunderstand reliability.

find | archiver

is a guarantee of inconsistency.
  


You left off "if any of the files are being written." What you said is 
true for filesystems which are changing, not correct for filesystems 
with all static content. As long as the user understands the limitations 
of the method, it presents no reliability issues.
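The find-piped-to-archiver pattern under discussion looks like the sketch below, with tar standing in for afio/star (an assumption; the originals consume the same kind of name list). As noted above, this is only safe while the tree is static.

```shell
# find | archiver, demoed with tar as a stand-in for afio/star.
# Safe only for a static tree: files changing underneath may be
# captured half-written.
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
echo "one" > "$tmp/src/a.txt"
echo "two" > "$tmp/src/b.txt"

# -print0 / --null keeps odd filenames (spaces, newlines) intact.
( cd "$tmp" && find src -type f -print0 | tar --null -T - -cf archive.tar )

tar -tf "$tmp/archive.tar" | sort    # lists src/a.txt and src/b.txt
```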


You understand the issues, and could have clarified the limitations.

Using tar is gradually better but still not OK.
Do you like software for a toy environment or real reliable solutions?

It makes no sense to discuss religious aspects like you try to do.
Rather, think about constraints:

-   Big backups and incrementals do not fit on a single DVD.
If you really like to use DVDs as media, use star -multivol
  


Or other tools like 'breakup' which can split a large list of backup 
tasks to subtasks of limited size.

-   Small backups that fit on a DVD easily fit on intermediate disk space.

Note that you first need to think about snapshots anyway and snapshots
need disk space.


Snapshots only capture what's on the filesystem, which does not provide 
consistent data in all cases. If an application modifies multiple files, 
or multiple records in a single file, such that it buffers data in the 
application, partial data may be written at any instant in time. The 
application must be stopped or be able to bring the data to a known 
valid state on demand to provide perfect reliability. Fortunately most 
users don't have applications they can't stop, or the applications can 
ensure data consistency internally, so this is just another exception case.


I can't think of anything other than a clean shutdown and backup from 
live CD boot which would handle every possible case.


Note: I'm not disagreeing with you just making some clarifications of 
which you are aware but others may not be.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 




--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: DVD-R as backup medium: growisofs + udftools

2007-11-29 Thread Bill Davidsen

Giuseppe "Cowo" Corbelli wrote:

Joerg Schilling wrote:
I'm trying to backup a couple of GBs upon a DVD-R using Plextor 
PX-810SA, 


What drive is this? Is this a Lite-ON?


Plextor PX-810SA

To start I thought about using packet writing / udftools. By reading 
documentation, it states that I have to


Wny?


I wanted to mount the media as rw and give its mountpoint to 
flexbackup as a 'standard' directory.


If it's not rw media you will not have any joy at all mounting it rw. 
You can't modify sectors you need to modify, so there is no magic 
technique to make it otherwise unless you want to create your own 
filesystem.


You are better off using growisofs and appending any new or changed 
files in a new session. This is pretty automated. You can also do this 
with cdrecord, but you need to manually get the ISO filesize from the 
DVD and copy it into the command line of the burn command, and run as 
root for your operations, etc. In other words, growisofs is adequate to 
the job and much easier to use; either tool, or cdrskin, will do the job 
in terms of capability.
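A minimal sketch of that append workflow, assuming growisofs's -Z (initial session) and -M (append session) with mkisofs options passed through. The device path and directories are placeholders, and the commands are only printed here; run them directly to burn.

```shell
# Sketch: initial burn with -Z, later appends with -M (placeholder
# device and paths; commands printed, not run).
dev=/dev/sr0

first="growisofs -Z $dev -R -J /backup/full"       # session 1
append="growisofs -M $dev -R -J /backup/changed"   # any later session
printf '%s\n' "$first" "$append"
```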



dvd+rw-format -force /dev/sr0

for RW media only, so I guess this is not applicable to -R, right?


Why do you like to do this at all?


Just quoting the documentation :-) I'm a newbie in DVD recording world.


Then I have to:

Write an empty session spanning the whole medium. It seems that
without this step, any attempt to create the UDF filesystem will fail.

DVD-RW: growisofs -Z /dev/sr0=/dev/zero



With DVD-R as well as with DVD+R, this is expected to create a nice 
coaster.


Yep, just as I thought.


But this does not sound good to me. Wouldn't this just waste my media?


Of course, see above.

Why don't you just write using mkisofs and cdrecord?


Definitely next step but follow this:

mount -o rw /dev/sr0 /mnt/x
flexbackup writes to /mnt/x directly using udf/packet


flexbackup writes to a directory
mkisofs /directory |cdrecord

In the real world I need to make room for temporary backup files, which I 
thought I could avoid.


There's a little program I wrote years ago for adding things to a DVD using 
cdrecord; see http://www.tmr.com/~public/source/, program addir. It may or 
may not be useful; it does automate piping a list of names in, as from 
find, and adding them to the image.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismark 




--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Using dd to verify a dvd and avoid the readahead bug.

2007-10-05 Thread Bill Davidsen

Thomas Schmitt wrote:

Hi,

Bill Davidsen:
  

  blockdev --setra 0 /dev/hdc



This does not match the behavior on my oldish system
either.

First suspicious thing:
  # blockdev --getra /dev/hdg
  8
That would be 8 x 512 = 2 x 2048 bytes.
So 4 kB should be an upper limit for the loss on this drive.

But my losses of blocks with dd are often more than 32 kB.
  


There is a possibility that the loss would be up to a full read size, I 
think. That's because if you ask for N and get an error, the loss will 
be N; I don't think the read reports how much was read without error, nor 
would most programs use that information if they got it.

--
Experiments:

I write a TAO track to CD-RW
  $ cdrskin -v dev=/dev/sg2 blank=fast padsize=0 -tao /dvdbuffer/x
which has 3041 blocks of payload and appears with -toc as
  track:   1 lba: 0 (0) 00:02:00 adr: 1 control: 4 mode: 1
  track:lout lba:  3043 (12172) 00:42:43 adr: 1 control: 4 mode: -1
  




Let me show what can be done with the same SCSI commands
as used by the block device driver.
This is telltoc, a demo application of libburn-0.3.9, using
SCSI command 28h "READ 10".

  $ test/telltoc --drive /dev/sg2 \
 --read_and_print 0 -1 raw:/dvdbuffer/image
  ...
  Media content: session  1  track 1 data   lba: 000:02:00
  Media content: session  1  leadoutlba:  304300:42:43
  Data : start=0s , count=3043s , read=0s , 
encoding=1:'/dvdbuffer/image'
  NOTE : Last two frames of CD track unreadable. This is normal if TAO track.
  End Of Data  : start=0s , count=3043s , read=3041s 


  $ ls -l /dvdbuffer/image
  -rw-r--r--1 *  *6227968 Oct  3 21:08 /dvdbuffer/image
  $ expr 6227968 - 3041 '*' 2048
  0

Exactly 3041 blocks of 2048 bytes each. None more, none less.

Now this is _my_ way to retrieve data from old CDs
out of times when 32 kB of padding were surely enough
of a sacrifice.
My 2.4 kernel already seems to need 64 kB, maybe even 128.


No hdparm helps, no blockdev helps.
  


Have you tried setting both to zero and asking to read just the number 
of blocks in the ISO filesystem?

Only skillful reading helps.
(I am so proud of my reading skills 8-))

  
I wonder if this lack of problems here is because I'm using DVD burners, 
not CD-only units. I have upgraded every production system to DVD, just to 
avoid "can't read that here" delays.

Have a nice day :)
  


You seem to have raised legitimate doubts about the behavior of DVD vs. 
CD units, as well as known issues on kernel versions. And I think code 
was being added to 2.6.23, or is queued for 2.6.24 to return a short 
read and no error in just this situation. I'll have to see if I can find 
the reference.


--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Using dd to verify a dvd and avoid the readahead bug.

2007-10-03 Thread Bill Davidsen

j t wrote:

Hi,

I have an iso file (which contains an iso9660/udf filesystem) that
I've written to a dvd-r using growisofs, thus:

# growisofs -dvd-compat -speed=1 -Z /dev/hdc=myDVD.iso

In the past, I have been able to check (verify) the burn finding the
iso size (using "isoinfo -d -i ") and then by comparing the
output from:

# dd if=myDVD.iso bs=2048 count= | md5sum
with
# dd if=/dev/hdc bs=2048 count= | md5sum
(and checking /var/log/syslog for any read errors)

Now I have started getting read errors close to the lead-out, so I
append 150 2k blocks to the end of the iso file using:

# dd if=/dev/zero bs=2048 count=150 >> myDVD.iso

and I even disable readahead using hdparm:
# hdparm -a 0 /dev/hdc
  


Unfortunately that probably wasn't the problem in the first place; you 
need to tell the OS to stop doing readahead for performance, and the 
command to do that is blockdev:

 blockdev --setra 0 /dev/hdc
does what you want, although see below, I don't think that's your problem.
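The verify idea can be demoed on plain files (a padded copy stands in for /dev/hdc, and the blockdev step is omitted since it needs real hardware): checksum exactly the image's block count on both sides, so trailing padding or an unreadable run-out area never enters the comparison.

```shell
# File-based demo of verify-by-exact-block-count; a padded copy of the
# image plays the role of the burned disc.
tmp=$(mktemp -d)
blocks=16
dd if=/dev/urandom of="$tmp/image.iso" bs=2048 count=$blocks 2>/dev/null
cp "$tmp/image.iso" "$tmp/disc"                               # the "burn"
dd if=/dev/zero bs=2048 count=150 >> "$tmp/disc" 2>/dev/null  # run-out padding

# Compare only the first $blocks blocks on each side.
a=$(dd if="$tmp/image.iso" bs=2048 count=$blocks 2>/dev/null | md5sum | cut -d' ' -f1)
b=$(dd if="$tmp/disc"      bs=2048 count=$blocks 2>/dev/null | md5sum | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "verify OK"
```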

But I still get read errors around near the end of the dvd:
# dd if=/dev/hdc bs=2048 count=2002922 | md5sum
dd: reading `/dev/hdc': Input/output error
1938744+0 records in
1938744+0 records out

Could someone please tell me:
1) Is this the dreaded readahead bug again?
  


No.


2) Can I use dd to verify my burns and avoid the readahead bug?
  


yes.

3) If not, how can I verify my dvd burn?
  


You did, the burn is bad.

Thank you for your help.

  

--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979


--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Re: Errormessage(s) index?

2007-09-26 Thread Bill Davidsen

Bernhard Frühmesser wrote:

> Hi,

Hello,

>> :-[ PERFORM OPC failed with SK=3h/ASC=73h/ACQ=03h]: Input/output error
>> Is there any index/list where i can take a look what exactly went 
wrong?

>
> Those numbers are the SCSI error codes as returned by
> the drives. They are listed e.g. in MMC-5, Annex F.
> dvd+rw-tools provides a list at
>   http://fy.chalmers.se/~appro/linux/DVD+RW/keys.txt

Thanks for the URL.

> Your error code is
>  3  73  03   POWER CALIBRATION AREA ERROR
> which means that drive and media do not like each other.
>
> If this is often re-used media then it is worn out.
> If it is a new disc, then drive and media are not
> compatible enough. (Not unusual, regrettably.)

Strange, it is a new disc, but I have always been using this media and 
never had any problems with it (Verbatim DVD-R). It's been 3 days 
since I burned stuff on a DVD-R without any problems, but today I got 
this message. I tested 3 new discs from Verbatim and one from Sony 
(all DVD-R) but I always get this error.


It is likely to be a hardware issue. I presume that you have tried 
rebooting the machine, in case your burner got an error which left it in 
some odd state. That's a real "power off" boot, 15 sec or more of power off.


If this is newly installed hardware, this may just be "infant mortality" 
on the hardware, or reseating the cables may provide information. That's 
unlikely, but inexpensive.


If this is newly upgraded firmware, try going back (if you can) or check 
the vendor site for a newer version.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot




Re: No supported write modes with LG GSA-H62N SATA DVD+RW

2007-09-14 Thread Bill Davidsen

Joe MacDonald wrote:

Hi guys,

I'll follow up the other replies in a bit when I get more information, 
but I thought I'd try to get the answer to the growisofs requests now.


On 9/12/07, * Bill Davidsen* <[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>> wrote:


Joe MacDonald wrote:
> Either way, it doesn't work in linux so doing backups to DVDs (which
> is generally how I like doing my backups) becomes a pain, to say the
> least, if I have to backup my ext3 partition data from Windows.  So
> I'm pretty motivated to see this one through.  And buying an add-on
> SATA controller is my last resort since that benefits no one but
me.  :-)

You might as well try growisofs, it's easy to learn, will create
an ISO
image with mkisofs if asked, or burn a prepared image, or burn from a
pipe. Only drawback is burning DVD only, no CD. For backup that's
unlikely to be an issue.


So I tried a couple of different options, first with an ISO I had 
created a while back and second with just a pile of files on my hard 
disk that I wanted to back up.  Neither one resulted in any level of 
success, but I'm not completely convinced this wasn't a case of pilot 
error.  So here's my results:


[EMAIL PROTECTED]:/tmp# file backup.iso
backup.iso: ISO 9660 CD-ROM filesystem data UDF filesystem data 
(unknown version, id 'NSR0

[EMAIL PROTECTED]:/tmp# ls -lh backup.iso
-rw-r--r-- 2 root root 3.7G 2006-06-06 00:14 backup.iso
[EMAIL PROTECTED]:/tmp# ls -l /dev/dvd*
lrwxrwxrwx 1 root root 4 2007-09-13 12:57 /dev/dvd -> scd0
lrwxrwxrwx 1 root root 4 2007-09-13 12:57 /dev/dvdrw -> scd0
[EMAIL PROTECTED]:/tmp# ls -l /dev/scd0
brw-rw 1 root cdrom 11, 0 2007-09-13 12:57 /dev/scd0
[EMAIL PROTECTED]:/tmp# growisofs -Z /dev/dvd -J -R -input-charset=iso8859-1 
./backup.iso
Executing 'genisoimage -J -R -input-charset=iso8859-1 ./backup.iso | 
builtin_dd of=/dev/dvd obs=32k seek=0'

:-[ MODE SELECT failed with SK=5h/ASC=1Ah/ACQ=00h]: Input/output error

Which is when I started to think I must be doing something wrong. 


Since it was an ISO image, you didn't want to do anything but burn it...
  growisofs -Z /dev/dvd=backup.iso


Anyway, my second test, was with the collection of .flac files:

[EMAIL PROTECTED] :/tmp# growisofs -Z /dev/dv -speed=1 -J -R 
-input-charset=iso8859 "/home/joe/cds/Matthew Good" 
"/home/joe/cds/Nine Inch Nails/" "/home/joe/cd

s/The Tragically Hip/"
Using IN_A_000 for  /In a Coma (disc 2) (In a Coma (disc 1))
:-[ MODE SELECT failed with SK=5h/ASC=1Ah/ACQ=00h]: Input/output error

And I'm not seeing anything interesting in dmesg:

[EMAIL PROTECTED]:/tmp# dmesg
[___snip___]
[   31.043343] libata version 2.20 loaded.
[   31.044366] sata_via :00:0f.0: version 2.1
[   31.044665] ACPI: PCI Interrupt Link [ALKA] enabled at IRQ 20
[   31.044738] ACPI: PCI Interrupt :00:0f.0[B] -> Link [ALKA] -> 
GSI 20 (level, low) -> IRQ 16

[   31.044924] sata_via :00:0f.0: routed to hard irq line 11
[   31.045040] ata1: SATA max UDMA/133 cmd 0x0001e200 ctl 0x0001e302 
bmdma 0x0001e600 irq 16
[   31.045140] ata2: SATA max UDMA/133 cmd 0x0001e400 ctl 0x0001e502 
bmdma 0x0001e608 irq 16

[   31.045229] scsi0 : sata_via
[   31.062598] usbcore: registered new interface driver usbfs
[   31.062686] usbcore: registered new interface driver hub
[   31.062768] usbcore: registered new device driver usb
[   31.092812] USB Universal Host Controller Interface driver v3.0
[   31.114790] via-rhine.c:v1.10-LK1.4.2 Sept-11-2006 Written by 
Donald Becker

[   31.125811] ieee1394: Initialized config rom entry `ip1394'
[   31.245121] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[   31.247832] Floppy drive(s): fd0 is 1.44M
[   31.278089] FDC 0 is a post-1991 82077
[   31.411458] ATA: abnormal status 0x7F on port 0x0001e207
[   31.422159] ATA: abnormal status 0x7F on port 0x0001e207
[   31.584631] ata1.00: ATAPI, max UDMA/100
[   31.748318] ata1.00: configured for UDMA/100
[   31.748388] scsi1 : sata_via
[   31.951772] ata2: SATA link down 1.5 Gbps (SStatus 0 SControl 300)
[   31.962463] ATA: abnormal status 0x7F on port 0x0001e407
[   31.965375] scsi 0:0:0:0: CD-ROMHL-DT-ST DVDRAM 
GSA-H62N  CL00 PQ: 0 ANSI: 5

[   31.967496] VP_IDE: IDE controller at PCI slot :00: 0f.1
[   31.967591] ACPI: PCI Interrupt :00:0f.1[A] -> Link [ALKA] -> 
GSI 20 (level, low) -> IRQ 16

[   31.967774] VP_IDE: chipset revision 6
[   31.967835] VP_IDE: not 100% native mode: will probe irqs later
[   31.967907] VP_IDE: VIA vt8237 (rev 00) IDE UDMA133 controller on 
pci:00:0f.1
[   31.967987] ide0: BM-DMA at 0xe800-0xe807, BIOS settings: 
hda:DMA, hdb:pio
[   31.968159] ide1: BM-DMA at 0xe808-0xe80f, BIOS settings: 
hdc:pio, hdd:pio

[   31.968328] Probing IDE interface ide0...
[   31.994222] sr0: scsi3-mmc drive: 125x/125x wri

Re: No supported write modes with LG GSA-H62N SATA DVD+RW

2007-09-12 Thread Bill Davidsen

Joe MacDonald wrote:

Hi Thomas,

On 9/11/07, *Thomas Schmitt* <[EMAIL PROTECTED]> wrote:


Hi,

Joe MacDonald:
> So I'm guessing this is an indication of a problem
> in the Linux device driver, maybe at the SATA level,
> maybe in SG?

I would assume that SG is not to blame but rather
the SATA specific code. Possibly even specific to
the VIA VT6240 SATA controller.

Burning seems to work with a VIA VT6421 and with a
nVidia CK804 controller (i asked a friend about
his well working hardware with contemporary SuSE
Linux).

Do you plan to go on with exploring the problem ?


I absolutely do.  I realized that in a couple of my later replies to 
Jörg yesterday I inadvertently cut the list from the recipient list. 
I'll probably send out an update on what he was able to learn from my 
further testing with Windows, for the sake of anyone coming by later 
and looking at the list archives.


In short, though, burning with cdrecord works (I think as expected) in 
Windows, which I think everyone can already guess from the exchanges 
on the list.


Either way, it doesn't work in Linux, so doing backups to DVDs (which 
is generally how I like doing my backups) becomes a pain, to say the 
least, if I have to back up my ext3 partition data from Windows.  So 
I'm pretty motivated to see this one through.  And buying an add-on 
SATA controller is my last resort, since that benefits no one but me.  :-)


You might as well try growisofs: it's easy to learn, will create an ISO 
image with mkisofs if asked, or burn a prepared image, or burn from a 
pipe. Its only drawback is that it burns DVDs only, no CDs; for backup 
that's unlikely to be an issue.
I understand your wish to stay with the familiar, but at the moment 
growisofs would be a data point as to whether burning works or not, and 
a way for you to move forward. And if the bug can be shown in other 
software, kernel developers might look at it more readily.
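For anyone reading this in the archive, here is a sketch of the growisofs invocations being suggested. The device node /dev/dvd and the backup paths are placeholders for your own system, and the commands are only printed, not executed (drop the echo in run() to burn for real):

```shell
#!/bin/sh
# Dry-run sketch of common growisofs invocations.
# run() just prints the command; remove the echo to execute.
run() { echo "+ $*"; }

DEV=/dev/dvd      # placeholder: your burner's device node

# Create an ISO9660 image on the fly (Rock Ridge + Joliet) and burn it:
run growisofs -Z "$DEV" -R -J /home/joe/backup

# Or burn a prepared image:
run growisofs -dvd-compat -Z "$DEV"=backup.iso

# Later, append another session to the same disc with -M instead of -Z:
run growisofs -M "$DEV" -R -J /home/joe/more-files
```

Whether -Z (start a disc) or -M (merge a session onto an existing one) is right depends on whether the disc already carries data.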


Else we could repeat the old experiment which i made
with a user on the debburn-devel list.
It resulted in this error
   5 1A 00 PARAMETER LIST LENGTH ERROR
when any mode page was sent.
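For readers skimming the archive, the "5 1A 00" triple is SCSI sense data: sense key, ASC, ASCQ. A minimal decoder sketch, with tables heavily abbreviated to the entries relevant here (the full lists are in the SPC standard):

```shell
#!/bin/sh
# Decode a SCSI sense triple: decode KEY ASC ASCQ (hex digits, no 0x).
# Tables abbreviated; see SPC for the complete sense key and ASC/ASCQ lists.
decode() {
  case "$1" in
    2) key="NOT READY" ;;
    3) key="MEDIUM ERROR" ;;
    5) key="ILLEGAL REQUEST" ;;
    *) key="(unknown key $1)" ;;
  esac
  case "$2/$3" in
    1A/00) extra="PARAMETER LIST LENGTH ERROR" ;;
    24/00) extra="INVALID FIELD IN CDB" ;;
    *) extra="(unknown ASC/ASCQ $2/$3)" ;;
  esac
  echo "$key: $extra"
}

# The error quoted above:
decode 5 1A 00   # prints: ILLEGAL REQUEST: PARAMETER LIST LENGTH ERROR
```

So the drive is rejecting the MODE SELECT because the parameter list length in the CDB doesn't match what it expects, which fits the theory that the data is being mangled somewhere below the application.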


That was the list of steps you sent yesterday?  I was focused on 
Jörg's suggestions since my preference is to continue using what I 
know (that would be cdrecord) but I'm all for gathering as much data 
as I can before I try to approach SCSI and/or SATA developers.


If you post a bug report then please point me
to the place where the discussion happens. 



Will do.  My preference at this point would be to do it somewhere like 
the Ubuntu bug tracker, but that will complicate things slightly since 
Ubuntu ships wodim and most of my testing has been post-wodim-removal. 
But we'll see how that goes.  I had remarkable success there before 
when I brought up alsa problems.
Does Ubuntu still use wodim? I thought there was a move to go back 
to cdrtools.


--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



