Re: Problems with dvd+rw-tools 6 and Pioneer drive
On Friday 27 January 2006 06:03, Bill Davidsen wrote:

> I just did a DVD-R this morning with a Pioneer drive, as a test of the
> 6.0 version. The report is:
>
>  INQUIRY:                [PIONEER ][DVD-RW DVR-104 ][1.40]
>  GET [CURRENT] CONFIGURATION:
>   Mounted Media:         11h, DVD-R Sequential
>   Media ID:              CMC MAG. AF1
>   Current Write Speed:   2.0x1385=2770KB/s
>   Write Speed #0:        2.0x1385=2770KB/s
>   Write Speed #1:        1.0x1385=1385KB/s
>   Speed Descriptor#0:    00/2298495 [EMAIL PROTECTED]/s [EMAIL PROTECTED]/s
>   Speed Descriptor#1:    00/2298495 [EMAIL PROTECTED]/s [EMAIL PROTECTED]/s
>  READ DVD STRUCTURE[#0h]:
>   Media Book Type:       25h, DVD-R book [revision 5]
>   Legacy lead-out at:    0*2KB=0
>  READ DISC INFORMATION:
>   Disc status:           blank
>   Number of Sessions:    1
>   State of Last Session: empty
>   Number of Tracks:      1
>  READ TRACK INFORMATION[#1]:
>   Track State:           invisible incremental
>   Track Start Address:   0*2KB
>   Next Writable Address: 0*2KB
>   Free Blocks:           2297888*2KB
>   Track Size:            2297888*2KB
>  READ CAPACITY:          0*2048=0
>
> and it worked perfectly. Media is whatever no-name was cheapest at a
> computer show. They have been working fine in all my machines and cost
> me $50 for 4x100 spindles. The name is ValueDisk. I believe they will
> run up to 8x in a more modern burner, but my newest D/L unit is powered
> down for a drive install, so I am not motivated to test more tonight.

growisofs 6.0 works fine for me here too, with cheap or HP DVD-R media and a LITEON LDW-451S unit, although dvd+rw-mediainfo can't give me info on the LiteOn (2.6.15 kernel; older ones work).

But your drive, the Pioneer DVR-104, has a minimal DVD writing speed of 1x (see http://www.pioneerelectronics.com/pna/article/0,,2076_4249_47473,00.html). So my guess about the problem with the speed option of version 6.0 on drives with minimal burning speeds > 1x is still a possible explanation for the behaviour of growisofs with my drive.

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: dvd+rw-tools update [6.0, DVD-R DL]
Bill Davidsen [EMAIL PROTECTED] wrote:

> This sounds a bit confused. Are you able to describe your concern? You
> said star doesn't run as fast on Linux as Solaris; you can probably fix
> that problem by using O_DIRECT, so the writes and reads don't compete
> for the same buffer memory.

And I already mentioned that it is highly improbable that O_DIRECT will speed things up on Linux. A better buffer cache (this means improving Linux) helps much more than avoiding it.

Jörg

--
EMail: [EMAIL PROTECTED] (home)  Jörg Schilling  D-13353 Berlin
       [EMAIL PROTECTED] (uni)
       [EMAIL PROTECTED] (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
Re: dvd+rw-tools update [6.0, DVD-R DL]
Bill Davidsen [EMAIL PROTECTED] wrote:

> My belief is that if O_DIRECT is in the kernel headers it works. I
> would love to have time to test timing of O_DIRECT on (a) disk, (b)
> partition, (c) file on disk, and alignment on minimal vs. page size
> boundaries. Doing O_DIRECT writes of anything avoids buffer pool
> collisions. To test, either hack mkisofs to write with O_DIRECT or pipe
> into Dwriter or similar and see the difference in create time of a DVD
> image. I may actually do the hack and enable it when -o is used (and on
> Linux). If I do I'll put up the patch and post a link here. Then Joerg
> can reject it because it makes mkisofs run faster on Linux.

Statements like this one from self-styled Linux specialists are an important reason why Linux specialists constantly disqualify themselves :-( If you had spent the time you needed to write this on running a test, you would know that O_DIRECT makes file I/O slower than the standard method. Star needs only 40% of the system CPU time when using O_DIRECT, but slows down by 30%.

And BTW: You cannot use O_DIRECT on Linux without defining __USE_GNU, but doing this uncovers broken prototypes that prevent compilation. I used 04 instead of O_DIRECT for my tests.

Jörg
Re: dvd+rw-tools update [6.0, DVD-R DL]
Joerg Schilling [EMAIL PROTECTED] writes:

> Bill Davidsen [EMAIL PROTECTED] wrote:
>> This sounds a bit confused. Are you able to describe your concern?
>> You said star doesn't run as fast on Linux as Solaris; you can
>> probably fix that problem by using O_DIRECT, so the writes and reads
>> don't compete for the same buffer memory.
>
> And I already mentioned that it is highly improbable that O_DIRECT
> will speed things up on Linux. A better buffer cache (this means
> improving Linux) helps much more than avoiding it.

How would a buffer cache improve the situation for star? I may have missed the beginning of the discussion, but caching write-once-then-forget data seems pointless.

--
Matthias Andree
Re: dvd+rw-tools update [6.0, DVD-R DL]
Joerg Schilling wrote:

> Bill Davidsen [EMAIL PROTECTED] wrote:
>> My belief is that if O_DIRECT is in the kernel headers it works. [...]
>> Then Joerg can reject it because it makes mkisofs run faster on Linux.
>
> Statements like this one from self-styled Linux specialists are an
> important reason why Linux specialists constantly disqualify
> themselves :-( If you had spent the time you needed to write this on
> running a test, you would know that O_DIRECT makes file I/O slower
> than the standard method. Star needs only 40% of the system CPU time
> when using O_DIRECT, but slows down by 30%.
>
> And BTW: You cannot use O_DIRECT on Linux without defining __USE_GNU,
> but doing this uncovers broken prototypes that prevent compilation.

I'm attaching a tiny program to show that isn't the case. It's for zeroing out large files. If you have a disk-intensive application, you might run this to zero out, say, 50GB or so, and compare the impact on the application with dd from /dev/zero using a 1024k buffer size. The program has compiled on RH7.2 through FC4, SuSE, Ubuntu, etc., with no kernel headers or GNU defines needed; you want the POSIX behaviour, or at least I do.

> I used 04 instead of O_DIRECT for my tests.

Needed only on really old installs, since any distribution which shipped a kernel with the feature should also ship the user headers to compile against it. My source has a check; I am running a 2.6.15 kernel on a hacked RH7.3 base, so after the includes are read it will do the define if needed.
--
Robert Bogusta: "It seemed like a good idea at the time"

// O_DIRECT test - measure speed of direct write
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* this allows compilation on an OLD test machine (RH 7.3) */
#ifndef O_LARGEFILE
#define O_LARGEFILE 010
#endif /* old includes */
#ifndef O_DIRECT
#define O_DIRECT 04
#endif /* old includes */

#ifdef DIRECT
#define Attribs (O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE|O_DIRECT)
#else /* not direct */
#define Attribs (O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE)
#endif

#define MB (1024*1024)

int main(int argc, char *argv[])
{
	char *buffer;
	int i, j, NumMB, stat, fd;
	char *filename;

	// sanity check
	if (argc < 3) {
		fprintf(stderr,
		    "Minimum two args!\n\n"
		    " Usage: zerofile len_MB file1 [ file2 [... fileN ] ]\n");
		exit(1);
	}
	NumMB = atoi(argv[1]);

	// allocate the buffer, aligned in every case
	stat = posix_memalign((void **)&buffer, getpagesize(), MB);
	if (stat) {
		fprintf(stderr,
		    "Aligned buffer allocation of %d MB failed\n", NumMB);
		exit(2);
	}
	memset(buffer, 0, MB);	/* we are writing zeros */

	// create and write the files
	for (i = 2; i < argc; ++i) {
		filename = argv[i];
		fd = open(filename, Attribs, 0644);
		if (fd < 0) {
			fprintf(stderr,
			    "Unable to create file %s, skipping and continuing\n",
			    filename);
			continue;
		}
		// write the data to the file
		for (j = 0; j < NumMB; ++j) {
			stat = write(fd, buffer, MB);
			if (stat < MB) {
				fprintf(stderr,
				    "Write error on %s, short file\n"
				    " 1 MB write only sent %d bytes after %d MB written\n",
				    filename, stat, j);
				break;
			}
		}
		// close the file
		fsync(fd);
		close(fd);
	}
	exit(0);
}
Re: dvd+rw-tools update [6.0, DVD-R DL]
Matthias Andree wrote:

> Joerg Schilling [EMAIL PROTECTED] writes:
>> And I already mentioned that it is highly improbable that O_DIRECT
>> will speed things up on Linux. A better buffer cache (this means
>> improving Linux) helps much more than avoiding it.
>
> How would a buffer cache improve the situation for star? I may have
> missed the beginning of the discussion, but caching
> write-once-then-forget data seems pointless.

I think he means one which didn't do that useless caching, although most improvements also bite in some cases. I had a patch in which assumed that if a program wrote more than N bytes without a read or seek, the data should be sent to the drive NOW. It eliminated writing a full CD to buffer from cached data, then closing the file and having the drive go dead busy for a minute.

--
bill davidsen [EMAIL PROTECTED]
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
Re: dvd+rw-tools update [6.0, DVD-R DL]
Matthias Andree [EMAIL PROTECTED] wrote:

>>> You said star doesn't run as fast on Linux as Solaris; you can
>>> probably fix that problem by using O_DIRECT, so the writes and reads
>>> don't compete for the same buffer memory.
>>
>> And I already mentioned that it is highly improbable that O_DIRECT
>> will speed things up on Linux. A better buffer cache (this means
>> improving Linux) helps much more than avoiding it.
>
> How would a buffer cache improve the situation for star? I may have
> missed the beginning of the discussion, but caching
> write-once-then-forget data seems pointless.

A buffer cache usually does read-ahead and (in case it is well developed) clustering. As I already mentioned, the result from a short star test with O_DIRECT verifies that O_DIRECT makes programs like star nearly 30% slower. This is nothing new: similar results were obtained many years ago with similar features on other platforms. The only exception so far was DG-UX, but DG-UX has really slow buffered I/O by default.

Jörg