Re: [zfs-discuss] Verify files' checksums
Ian Collins <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > Are these I/O errors written to stdout or stderr or where?
>
> Yes, stderr.

OK, good, thanks.

> You will not be able to open the file.

What?! Even if there are errors I want to still be able to read the file to salvage what can be salvaged. E.g., if one byte in a picture file is wrong then it's quite likely I can still use the picture. If ZFS denies access to the whole file, or even to the whole block with the error, then the whole file is ruined. That's very bad. Are you sure there is no way to read the file anyway?

> One of the great benefits of ZFS is you don't have to manually verify
> checksums of files on disk. Unless you want to make sure they haven't
> been maliciously altered that is.

Malicious alteration is not the only way for unwanted changes to a disk.

Cheers, Marcus

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
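For what it's worth, the usual best-effort salvage for a file that returns read errors is dd with conv=noerror,sync, which skips past unreadable blocks instead of aborting (a sketch, assuming GNU dd; the paths are illustrative, and whether ZFS lets any of the file through at all depends on the failure mode):

```shell
# Best-effort copy of a damaged file: conv=noerror keeps reading past
# I/O errors instead of aborting, and conv=sync pads every short or
# failed read to a full block so offsets in the copy stay aligned.
# (Illustrative paths; point "src" at the real damaged file.)
src=/tmp/damaged.bin
dst=/tmp/salvaged.bin
printf 'stand-in for a damaged file' > "$src"   # demo input only
dd if="$src" of="$dst" bs=512 conv=noerror,sync 2>/dev/null
od -c "$dst" | head -2
```

The unreadable regions come out as NUL bytes in the copy, which is exactly the "one bad byte in a picture" case where a mostly-intact file is still usable.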
Re: [zfs-discuss] Verify files' checksums
Ian Collins <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > I couldn't see anything there describing either how to verify the
> > checksums of individual files or why that would be impossible.
>
> If you can read the file, the checksum is OK. If it were not, you
> would get an I/O error attempting to read it.

Are these I/O errors written to stdout or stderr or where?

Regards, Marcus
Re: [zfs-discuss] Verify files' checksums
"Scott Laird" <[EMAIL PROTECTED]> wrote: > On Sat, Oct 25, 2008 at 1:57 PM, Marcus Sundman <[EMAIL PROTECTED]> > wrote: > > I don't want to scrub several TiB of data just to verify a 2 MiB > > file. I want to verify just the data of that file. (Well, I don't > > mind also verifying whatever other data happens to be in the same > > blocks.) > > Just read the file. If the checksum is valid, then it'll read without > problems. If it's invalid, then it'll be rebuilt (if you have > redundancy in your pool) or you'll get I/O errors (if you don't). So what you're trying to say is "cat the file to /dev/null and check for I/O errors", right? And how do I check for I/O errors? Should I run "zpool status -v" and see if the file in question is listed there? Cheers, Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Verify files' checksums
"Johan Hartzenberg" <[EMAIL PROTECTED]> wrote: > On Sat, Oct 25, 2008 at 6:49 PM, Marcus Sundman <[EMAIL PROTECTED]> > wrote: > > Richard Elling <[EMAIL PROTECTED]> wrote: > > > Marcus Sundman wrote: > > > > How can I verify the checksums for a specific file? > > > > > > ZFS doesn't checksum files. > > > > AFAIK ZFS checksums all data, including the contents of files. > > > > > So a file does not have a checksum to verify. > > > > I wrote "checksums" (plural) for a "file" (singular). > > > > AH - Then you DO mean the ZFS built-in data check-summing - my > mistake. ZFS checksums allocations (blocks), not files. The checksum > for each block is stored in the parent of that block. These are not > shown to you but you can "scrub" the pool, which will see zfs run > through all the allocations, checking whether the checksums are valid. I don't want to scrub several TiB of data just to verify a 2 MiB file. I want to verify just the data of that file. (Well, I don't mind also verifying whatever other data happens to be in the same blocks.) > This PDF document is quite old but explains it fairly well: I couldn't see anything there describing either how to verify the checksums of individual files or why that would be impossible. OK, since there seems to be some confusion about what I mean, maybe I should describe the actual problems I'm trying to solve: 1) When I notice an error in a file that I've copied from a ZFS disk I want to know whether that error is also in the original file on my ZFS disk or if it's only in the copy. 2) Before I destroy an old backup copy of a file I want to know that the other copy, which is on a ZFS disk, is still OK (at least at that very moment). Naturally I could calculate new checksums for all files in question and compare the checksums, but for reasons I won't go into now this is not as feasible as it might seem, and obviously less efficient. 
Up to now I've been storing md5sums for all files, but keeping the files and their md5sums synchronized is a burden I could do without. Cheers, Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
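For the record, the manifest routine described above is nothing fancier than md5sum's own check mode (GNU coreutils syntax; on Solaris one would presumably use digest(1) instead, and the demo tree here is a stand-in):

```shell
# Build a checksum manifest for a tree, then verify all files later.
mkdir -p /tmp/photos
printf 'some data\n' > /tmp/photos/a.txt      # stand-in content
cd /tmp/photos
find . -type f ! -name MANIFEST.md5 -exec md5sum {} + > MANIFEST.md5
# ... later, re-read every file and compare against the manifest:
md5sum -c MANIFEST.md5       # prints one "<name>: OK" line per file
```

The synchronization burden is exactly the step this doesn't automate: MANIFEST.md5 has to be regenerated whenever a file legitimately changes.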
Re: [zfs-discuss] Verify files' checksums
Richard Elling <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > How can I verify the checksums for a specific file?
>
> ZFS doesn't checksum files.

AFAIK ZFS checksums all data, including the contents of files.

> So a file does not have a checksum to verify.

I wrote "checksums" (plural) for a "file" (singular).

- Marcus
[zfs-discuss] Verify files' checksums
How can I verify the checksums for a specific file?

- Marcus
Re: [zfs-discuss] volname
"James C. McPherson" <[EMAIL PROTECTED]> wrote: > Marcus Sundman wrote: > > [...] > blinder:jmcp $ for dev in `awk -F'"' '/sd/ {print > $2}' /etc/path_to_inst`; do prtconf -v /devices/$dev|egrep -i > "id1|dev.dsk.*s2" ; done > [...] > $ cfgadm -lav sata0 > [...] > Having the physical serial number reported makes it much easier > to match the physical drive with the logical - as long as you've > got the lid off :-) Nice, thx. Not that it's going to do me much good now, since it won't let me make volnames anymore at all. :-( > > Yes, I used the cXtYdZ syntax with zpool create, but the drives > > already had EFI labels. Did zpool re-label them anyway? > > I believe so. It would've been nice if zpool had given a warning, since it's not obvious that "zpool create" rewrites the EFI label (both old and newly written ones). And even if it did rewrite the labels it still _should_ keep the old volnames. Regards, Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] volname
"James C. McPherson" <[EMAIL PROTECTED]> wrote: > Marcus Sundman wrote: > > I couldn't figure out which controller got which numbers so I had > > to disconnect drives one by one > > I'm interested in what you did to figure out your drive > locations - did you use cfgadm, fmtopo or sestopo to figure > it out, or something closer to the hardware? Disconnect one drive, boot, look which drive is missing, shutdown, reconnect it and disconnect the next drive, boot, etc. > What sort of hardware, btw? A motherboard with 6 SATA ports (I think 4 is by the nvidia chipset and 2 by some other chip) and a sil3132-based PCI-E card with 2 SATA ports. The disks are 1 TB WD GP ones. > Oh, and if you gave the zpool create command the cXtYdZ > name, then the labels will have been over-written with GPT > (aka EFI) labels instead. Yes, I used the cXtYdZ syntax with zpool create, but the drives already had EFI labels. Did zpool re-label them anyway? Regards, Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] volname
A Darren Dunham <[EMAIL PROTECTED]> wrote:
> On Sat, Oct 11, 2008 at 03:19:49AM +0300, Marcus Sundman wrote:
> > I've used format's "volname" command to give labels to my drives
> > according to their physical location. I did quite a lot of work
> > labeling all my drives (I couldn't figure out which controller got
> > which numbers so I had to disconnect drives one by one, and they're
> > not hotpluggable => lots of reboots). However, when I created a
> > zpool of the drives all the labels vanished!
>
> If they were SMI labels before, then ZFS probably switched them to
> EFI/GPT labels, and the volname dropped out.

I wrote EFI labels on the disks immediately when I got them, before making the volnames, so I don't think that's it.

> > Since I didn't write them down
> > elsewhere I have to do it all over. Except that now it won't let me:
> > > format> volname
> > > Unable to get current partition map.
> > > Cannot label disk while its partitions are in use as described.
> > > format>
>
> "as described"? Did format tell you anything when you selected the
> disk other than "/dev/dsk/s0 is part of active ZFS pool blah..."?

Nope:

8<---
Specify disk (enter its number): 7
selecting c6t1d0
[disk formatted]
/dev/dsk/c6t1d0s0 is part of active ZFS pool safe. Please see zpool(1M).

FORMAT MENU:
--->8---

Cheers, Marcus
[zfs-discuss] volname
Hi

I've used format's "volname" command to give labels to my drives according to their physical location. I did quite a lot of work labeling all my drives (I couldn't figure out which controller got which numbers so I had to disconnect drives one by one, and they're not hotpluggable => lots of reboots). However, when I created a zpool of the drives all the labels vanished! Since I didn't write them down elsewhere I have to do it all over. Except that now it won't let me:

> format> volname
> Unable to get current partition map.
> Cannot label disk while its partitions are in use as described.
> format>

I get the same error message even if the device is offline (i.e., "zpool offline "). Am I doing something wrong, or is format just giving me the finger for no reason?

Cheers, Marcus
Re: [zfs-discuss] create raidz with 1 disk offline
"Brandon High" <[EMAIL PROTECTED]> wrote: > On Sat, Sep 27, 2008 at 4:02 PM, Marcus Sundman <[EMAIL PROTECTED]> > wrote: > > So, is it possible to create a 5 * 1 TB raidz with 4 disks (i.e., > > with one disk offline)? > > [...] > 1. Create a sparse file on an existing filesystem. The size should be > the same as your disks. > 2. Create the raidz with 4 of the drives and the sparse file. > [...] That's brilliant in its simplicity, and I see no reason why it wouldn't work. (ZFS even allows a larger sparse file than the size of the file system it's on.) I'm still waiting for my new disks and SATA-card, but I'll send a follow-up message once I've got them and tried your trick. Thanks! Regards, Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
[zfs-discuss] create raidz with 1 disk offline
Hi,

I'm about to convert my 3 * 1 TB raidz to a 5 * 1 TB raidz. Since raidz can't be grown like that I have to find some place to move the data to temporarily while I reformat the raidz. However, I'm short on disk space (which is why I'm adding 2 new disks). So, is it possible to create a 5 * 1 TB raidz with 4 disks (i.e., with one disk offline)? In that case I could use one of the 1 TB disks as temporary storage. (I would first move 1 TB of data from the old raidz to one of the new disks, then delete the old raidz and create the new 5-way raidz with 4 disks, then copy the 1 TB of data from the 5th disk to the new raidz and finally add the 5th disk to the raidz.)

Cheers, Marcus
Re: [zfs-discuss] raidz in zfs questions
> > 1. In zfs can you currently add more disks to an existing raidz?
> > This is important to me as i slowly add disks to my system one at a
> > time.
>
> No

Is this being worked on? Is it even planned? (I've looked at a bunch of FAQs and searched some mailing lists but I can't find the answers so I ask here.)

- Marcus
Re: [zfs-discuss] path-name encodings
"Anton B. Rang" <[EMAIL PROTECTED]> wrote: > > Do you happen to know where programs in (Open)Solaris look when they > > want to know how to encode text to be used in a filename? Is it > > LC_CTYPE? > > In general, they don't. Command-line utilities just use the sequence > of bytes entered by the user. Obviously that depends on the application. A command-line utility that interprets an normal xml file containing filenames know the characters but not the bytes. The same goes for command-line utilities that receive the filenames as text (e.g., some file transfer utility or daemon). > GUI-based software does as well, but the encoding used for user input > can sometimes be selected Hmm.. I'm usually programming at quite high a level, so I'm not very familiar with how stuff works under the hood... If I run xev on my linux box (I don't have X on any (Open)Solaris) and press the Ä-key on my keyboard it says "keycode 48" and "keysym 0xe4", and then "XLookupString gives 2 bytes: (c3 a4) "ä"". Thus at least XLookupString seems to know that I'm using UTF-8. Where did it (or whoever converted 0xe4 to 0xc3a4) get the needed info? - Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] raidz in zfs questions
Chris Gilligan <[EMAIL PROTECTED]> wrote:
> 2. in a raidz do all the disks have to be the same size?

Related question: Does a raidz have to be either only full disks or only slices, or can it be mixed? E.g., can you do a 3-way raidz with 2 complete disks and one slice (of equal size as the disks) on a 3rd, larger, disk?

- Marcus
Re: [zfs-discuss] path-name encodings
Bart Smaalders <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > Bart Smaalders <[EMAIL PROTECTED]> wrote:
> > > UTF8 is the answer here. If you care about anything more than
> > > simple ascii and you work in more than a single locale/encoding,
> > > use UTF8. You may not understand the meaning of a filename, but at
> > > least you'll see the same characters as the person who wrote it.
> >
> > I think you are a bit confused.
> >
> > A) If you meant that _I_ should use UTF-8 then that alone won't
> > help. Let's say the person who created the file used ISO-8859-1 and
> > named it 'häst', i.e., 0x68e47374. If I then use UTF-8 when
> > displaying the filename my program will be faced with the problem
> > of what to do with the second byte, 0xe4, which can't be decoded
> > using UTF-8. ("häst" is 0x68c3a47374 in UTF-8, in case someone
> > wonders.)
>
> What I mean is very simple:
>
> The OS has no way of merging your various encodings. If I create a
> directory, and have people from around the world create a file
> in that directory named after themselves in their own character sets,
> what should I see when I invoke:
>
> % ls -l | less
>
> in that directory?

Either (1) programs can find out what the encoding is, or (2) programs must assume the encoding is what some environment variable (or somesuch) is set to.

(1) The OS doesn't have to "merge" anything, just let the programs handle any conversions the programs see fit.

(2) The OS must transcode the filenames. If a filename is incompatible with the target encoding then the offending characters must be escaped.

> If you wish to share filenames across locales, I suggest you and
> everyone else writing to that directory use an encoding that will work
> across all those locales. The encoding that works well for this on
> Unix systems is UTF8, since it leaves '/' and NULL alone.

Again, that won't work. First of all there is no way to force programs to use UTF-8. I can't even force my own programs to do that. (E.g., unrar or unzip or tar or 7z (can't remember which one(s)) just dump the filename data to the fs in whatever encoding it was inside the archive, and I have at least one collaboration program that does likewise.)

Now, if I force the fs to only accept filenames compatible with UTF-8 (i.e., utf8only) then I risk losing files. I'd rather have files with incomprehensible filenames than not have them at all. OTOH, if I allow filenames incompatible with UTF-8 then my programs can't necessarily access them if I use UTF-8. I could use some 8-bits/char encoding (e.g., iso-8859-15), but I'd rather not, since the world is going the way of UTF-8 and so I'd just be dragging behind. And then I would also have problems with garbage filenames when other people use UTF-8 or some other encoding. Also, I'm quite sure I have files with names containing characters not in iso-8859-15.

So, you see, there is no way for me to use filenames intelligibly unless their encodings are knowable. (In fact I'm quite surprised that zfs doesn't (and even can't) know the encoding(s) of filenames. Usually Sun seems to make relatively sane design decisions. This, however, is more what I'd expect from linux with their overpragmatic "who cares if it's sane, as long as it kinda works" attitudes.)

Regards, Marcus
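One thing that is at least possible is detecting the mess after the fact: asking iconv to round-trip a name through UTF-8 tells you whether it is even decodable as UTF-8 (a sketch on literal byte strings rather than a real directory listing):

```shell
# Classify a byte string as valid or invalid UTF-8 by letting iconv
# try to decode it; iconv exits non-zero on an undecodable sequence.
check_utf8() {
    if printf '%b' "$1" | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1; then
        echo "valid UTF-8:   $1"
    else
        echo "invalid UTF-8: $1"
    fi
}
check_utf8 'h\0303\0244st'   # "häst" encoded as UTF-8
check_utf8 'h\0344st'        # "häst" encoded as ISO-8859-1 (bare 0xe4)
```

Of course, detection only tells you a name is *not* UTF-8; it still can't tell you which of the 8-bit encodings it actually is.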
Re: [zfs-discuss] path-name encodings
[EMAIL PROTECTED] (Joerg Schilling) wrote:
> Marcus Sundman <[EMAIL PROTECTED]> wrote:
> > [EMAIL PROTECTED] (Joerg Schilling) wrote:
> > > [...] ISO-8859-1 (the low 8 bits of UNICODE) [...]
> >
> > Unicode is not an encoding, but you probably mean "the low 8 bits of
> > UCS-2" or "the first 256 codepoints in Unicode" or somesuch.
>
> Unicode _is_ an encoding that uses 21 (IIRC) bits.

AFAIK you are incorrect. Unicode is a standard that, among other things, defines a _number_ for each character. A number does not equal 21 bits, even if it so happens that the highest codepoint number in the current version is no more than 21 bits long. Unicode defines (at least) 3 encodings to represent those characters: UTF-8, UTF-16 and UTF-32.

Well, it doesn't very much matter exactly how the terms are defined, as long as everybody knows what's what. So, I'm sorry for nitpicking.

- Marcus
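To make the codepoint-vs-encoding distinction concrete, here is the single codepoint U+00E4 ('ä', number 228) rendered in the three encoding forms with iconv (the BE variants are used so no byte-order mark appears in the output):

```shell
# One codepoint, three encodings: the byte sequences differ, but the
# character (and its Unicode number, U+00E4) stays the same.
a() { printf '\303\244'; }                       # "ä" as UTF-8 input
a | od -An -tx1                                  # c3 a4
a | iconv -f UTF-8 -t UTF-16BE | od -An -tx1     # 00 e4
a | iconv -f UTF-8 -t UTF-32BE | od -An -tx1     # 00 00 00 e4
```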
Re: [zfs-discuss] path-name encodings
[EMAIL PROTECTED] (Joerg Schilling) wrote:
> [...] ISO-8859-1 (the low 8 bits of UNICODE) [...]

Unicode is not an encoding, but you probably mean "the low 8 bits of UCS-2" or "the first 256 codepoints in Unicode" or somesuch.

Regards, Marcus
Re: [zfs-discuss] path-name encodings
"Anton B. Rang" <[EMAIL PROTECTED]> wrote: > > OK, thanks. I still haven't got any answer to my original question, > > though. I.e., is there some way to know what text the > > filename is, or do I have to make a more or less wild guess what > > encoding the program that created the file used? > > You have to guess. Ouch! Guessing sucks. (By the way, that's why I switched to ZFS with its internal checksums, so that I wouldn't have to guess if my data was OK.) Thanks for the answer, though. Do you happen to know where programs in (Open)Solaris look when they want to know how to encode text to be used in a filename? Is it LC_CTYPE? > NFS doesn't provide a mechanism to send the encoding with the > filename; I don't believe that CIFS does, either. Really?!? That's insane! How do programs know how to encode filenames to be sent over NFS or CIFS? > If you're writing the application, you could store the encoding as an > extended attribute of the file. This would be useful, for instance, > for an AFP server. OK. But then I'd have to hack a similar change into all other programs that I use, too. > > > The trick is that in order to support such things as > > > casesensitivity=false for CIFS, the OS needs to know what > > > characters are uppercase vs lowercase, which means it needs to > > > know about encodings, and reject codepoints which cannot be > > > classified as uppercase vs lowercase. > > > > I don't see why the OS would care about that. Isn't that the job of > > the CIFS daemon? > > The CIFS daemon can do it, but it would require that the daemon cache > the whole directory in memory (at least, to get reasonable > efficiency). I guess that depends on what file access functions there are for the file system. > If you leave it up to the CIFS daemon, you also wind up with problems > if you have a single sharepoint shared between local users, NFS & > CIFS -- the NFS client can create two files named "a" and "A", but > the CIFS client can only see one of those. 
Not necessarily. There could be some (nonstandard) way of accessing such duplicates (e.g., by having the CIFS daemon append "[dup-N]" or somesuch to the name). And even if that problem did exist it might still be OK for CIFS access to have that limitation. Regards, Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] path-name encodings
Bart Smaalders <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > Bart Smaalders <[EMAIL PROTECTED]> wrote:
> > > > I'm unable to find more info about this. E.g., what does "reject
> > > > file names" mean in practice? E.g., if a program tries to create a
> > > > file using an utf8-incompatible filename, what happens? Does the
> > > > fopen() fail? Would this normally be a problem? E.g., do tar and
> > > > similar programs convert utf8-incompatible filenames to utf8 upon
> > > > extraction if my locale (or wherever the fs encoding is taken
> > > > from) is set to use utf-8? If they don't, then what happens with
> > > > archives containing utf8-incompatible filenames?
> > >
> > > Note that the normal ZFS behavior is exactly what you'd expect: you
> > > get the filenames you wanted; the same ones back you put in.
> >
> > OK, thanks. I still haven't got any answer to my original question,
> > though. I.e., is there some way to know what text the filename is,
> > or do I have to make a more or less wild guess what encoding the
> > program that created the file used?
>
> How do you expect the filesystem to know this? Open(2) takes 3 args;
> none of them have anything to do with the encoding.

I don't expect the filesystem to know "this" (whatever you mean by "this"). I don't expect the filesystem not to, either. I just don't know, and therefore I ask.

> > OK, if I use utf8only then I know that all filenames can be
> > interpreted as UTF-8. However, that's completely unacceptable for
> > me, since I'd much rather have an important file with an
> > incomprehensible filename than not have that important file at all.
> > Also, what about non-UTF-8 encodings? E.g., is it possible to know
> > whether 0xe4 is "ä" (as in iso-8859-1) or "ф" (as in iso-8859-5)?
>
> There are two characters not allowed in filenames: NULL and '/'.
> Everything else is meaning imparted by the user, just like the
> contents of text documents.

You are confusing "characters" and "bytes". The former are encoded when transformed to the latter. '/' is a character, 0x2f is a byte. (Well, representations of a character and of a byte, respectively, if we're nitpicking.)

> > > The trick is that in order to support such things as
> > > casesensitivity=false for CIFS, the OS needs to know what
> > > characters are uppercase vs lowercase, which means it needs to
> > > know about encodings, and reject codepoints which cannot be
> > > classified as uppercase vs lowercase.
> >
> > I don't see why the OS would care about that. Isn't that the job of
> > the CIFS daemon?
>
> If my program attempts to open file "fred" in a case-insensitive
> filesystem and "FRED" exists, I would expect to get a handle to
> "FRED". In order for the filesystem to do this, the OS must be able
> to perform this comparison.

Well, yes, if the case-insensitivity is in the filesystem (and if the fs is in the kernel), but my point was that it wouldn't _have_to_ be in the filesystem. It's probably faster if it is, though.

> CIFS is in the kernel; case insensitivity is a property of the
> filesystem, not a layer added on by a daemon.

You probably mean "CIFS is in (Open)Solaris" and "case insensitivity is a property of ZFS".

> If not, I could create "fred" and "FRED" locally, and then which one
> would I get were I to open "FrEd" via CIFS?

I guess that would be up to the implementation (unless CIFS includes it in its specification).

> > As a matter of fact I don't see why the OS would need to
> > know how to decode any filename-bytes to text. However, I firmly
> > believe that user applications should have that opportunity. If the
> > encoding of filenames is not known (explicitly or implicitly) then
> > applications don't have that opportunity.
>
> The OS doesn't care; the user does. If a user creates a file named
> წყალსა in his home directory, but my encoding doesn't contain these
> characters, what should ls -l display?

I assume we're assuming encodings to be known here. (If the encodings are unknown/unspecified the user can't create a file named any particular character string, only raw data (bits/bytes).) What a particular program displays is up to the implementation, I guess. I've seen programs use escapes (e.g., \uc3\ua5), or '?', or empty squares, or small squares with hex-numbers in them. (I've also seen programs not displ
Re: [zfs-discuss] path-name encodings
Bart Smaalders <[EMAIL PROTECTED]> wrote:
> > I'm unable to find more info about this. E.g., what does "reject
> > file names" mean in practice? E.g., if a program tries to create a
> > file using an utf8-incompatible filename, what happens? Does the
> > fopen() fail? Would this normally be a problem? E.g., do tar and
> > similar programs convert utf8-incompatible filenames to utf8 upon
> > extraction if my locale (or wherever the fs encoding is taken from)
> > is set to use utf-8? If they don't, then what happens with archives
> > containing utf8-incompatible filenames?
>
> Note that the normal ZFS behavior is exactly what you'd expect: you
> get the filenames you wanted; the same ones back you put in.

OK, thanks. I still haven't got any answer to my original question, though. I.e., is there some way to know what text the filename is, or do I have to make a more or less wild guess what encoding the program that created the file used?

OK, if I use utf8only then I know that all filenames can be interpreted as UTF-8. However, that's completely unacceptable for me, since I'd much rather have an important file with an incomprehensible filename than not have that important file at all. Also, what about non-UTF-8 encodings? E.g., is it possible to know whether 0xe4 is "ä" (as in iso-8859-1) or "ф" (as in iso-8859-5)?

> The trick is that in order to support such things as
> casesensitivity=false for CIFS, the OS needs to know what characters
> are uppercase vs lowercase, which means it needs to know about
> encodings, and reject codepoints which cannot be classified as
> uppercase vs lowercase.

I don't see why the OS would care about that. Isn't that the job of the CIFS daemon? As a matter of fact I don't see why the OS would need to know how to decode any filename-bytes to text. However, I firmly believe that user applications should have that opportunity. If the encoding of filenames is not known (explicitly or implicitly) then applications don't have that opportunity.

- Marcus
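The 0xe4 ambiguity is easy to demonstrate with iconv: decoding the very same byte under the two charset assumptions yields two different characters (shown here transcoded to UTF-8 and dumped as hex):

```shell
# The byte 0xe4 means "ä" (U+00E4) if read as ISO-8859-1 but
# "ф" (U+0444) if read as ISO-8859-5 -- nothing in the byte itself
# says which interpretation is the intended one.
printf '\344' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1   # c3 a4 ("ä")
printf '\344' | iconv -f ISO-8859-5 -t UTF-8 | od -An -tx1   # d1 84 ("ф")
```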
[zfs-discuss] utf8only-property
So, I set utf8only=on and try to create a file with a filename that is a byte array that can't be decoded to text using UTF-8. What's supposed to happen? Should fopen(), or whatever syscall 'touch' uses, fail? Should the syscall somehow escape utf8-incompatible bytes, or maybe replace them with '?'s or somesuch? Or should it automatically convert the filename from the active locale's fs-encoding (LC_CTYPE?) to UTF-8?

- Marcus
Re: [zfs-discuss] Can ZFS be event-driven or not?
"Wee Yeh Tan" <[EMAIL PROTECTED]> wrote: > On Wed, Feb 27, 2008 at 10:42 PM, Marcus Sundman <[EMAIL PROTECTED]> > wrote: > > Darren J Moffat <[EMAIL PROTECTED]> wrote: > > > Marcus Sundman wrote: > > > > Nicolas Williams <[EMAIL PROTECTED]> wrote: > > > >> On Wed, Feb 27, 2008 at 05:54:29AM +0200, Marcus Sundman > > > >> wrote: > > > >>> Nathan Kroenert <[EMAIL PROTECTED]> wrote: > > > >>>> Are you indicating that the filesystem know's or should > > > >>>> know what an application is doing?? > > > >>> Maybe "snapshot file whenever a write-filedescriptor is > > > >>> closed" or somesuch? > > > >> Again. Not enough. Some apps (many!) deal with multiple > > > >> files. > > > > > > > > So what? Why would every file-snapshot have to be a file that's > > > > valid for the application(s) using it? (Certainly zfs snapshots > > > > don't provide that property either, nor any other > > > > backup-related system I've seen.) > > > > > > If it isn't how does the user or application know that is safe > > > to use that file ? > > > > Unless the files contain some checksum or somesuch then I guess it > > doesn't know it's safe. However, that's unavoidable unless the > > application can use a transaction-supporting fs api. > > Checksums only tell you the data file is good. If you have a whole > load of backups (one every nano-second) and none of them have a good > checksum, you are still very screwed. True. However, this is equally true for zfs snapshots. If I undestood the concept of CDP correctly then each zfs snapshot would provide a subset of the set of all versions in the CDP database. Thus, CDP couldn't possibly provide less protection than zfs snapshots (although it might be harder to find the right versions of files). So, if you think zfs snapshots provide enough protection then you can't claim CDP doesn't. > > > Is it okay to provide a snapshot of a file that is corrupt and > > > will cause further more serious data corruption in the > > > application ? 
> > > > Well, apparently so. That's what zfs snapshots do. That's what all > > backup tools do. Sure it would be better to have full transactions > > in the fs api, but without that I don't think it's possible to do > > any better than "the file might be corrupt or it might not, good > > luck if your file format doesn't support corruption-detection". > > A good backup practice increases (significantly) the likelihood of > getting a usable backup. E.g. you quiesce Oracle before you start > your backup to make sure that the datafiles you backup are consistent. True for both ZFS snapshots and CDP, except that with CDP you don't have to make the actual snapshot since that's automated. > Still, you are missing the point. What's the point of backing up if > you cannot use it for restoring your environment? I think you are missing the point if you think ZFS snapshots are capable of something CDP is not. Also, I though the author of the original message wasn't particularly interested in restoring the environment, but more about restoring individual files. As a kind of version history, or filesystem undo if you will. Maybe I misunderstood him. - Marcus ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Can ZFS be event-driven or not?
Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Marcus Sundman wrote:
> > Nicolas Williams <[EMAIL PROTECTED]> wrote:
> > > On Wed, Feb 27, 2008 at 05:54:29AM +0200, Marcus Sundman wrote:
> > > > Nathan Kroenert <[EMAIL PROTECTED]> wrote:
> > > > > Are you indicating that the filesystem knows or should know what
> > > > > an application is doing?
> > > > Maybe "snapshot file whenever a write-filedescriptor is closed" or
> > > > somesuch?
> > > Again. Not enough. Some apps (many!) deal with multiple files.
> >
> > So what? Why would every file-snapshot have to be a file that's
> > valid for the application(s) using it? (Certainly zfs snapshots
> > don't provide that property either, nor any other backup-related
> > system I've seen.)
>
> If it isn't how does the user or application know that it is safe to
> use that file?

Unless the files contain some checksum or somesuch then I guess it doesn't know it's safe. However, that's unavoidable unless the application can use a transaction-supporting fs api.

> Is it okay to provide a snapshot of a file that is corrupt and will
> cause further more serious data corruption in the application?

Well, apparently so. That's what zfs snapshots do. That's what all backup tools do. Sure it would be better to have full transactions in the fs api, but without that I don't think it's possible to do any better than "the file might be corrupt or it might not, good luck if your file format doesn't support corruption-detection".

- Marcus
Re: [zfs-discuss] path-name encodings
Darren J Moffat <[EMAIL PROTECTED]> wrote: > See the description of the normalization and utf8only properties in > the zfs(1) man page. > > I think this might help you. > > normalization =none | formD | formKCf That's apparently only for comparisons, so I don't see how it's relevant. > utf8only =on | off > > Indicates whether the file system should reject file > names that include characters that are not present in > the UTF-8 character code set. If this property is explicitly > set to "off," the normalization property must > either not be explicitly set or be set to "none." The > default value for the "utf8only" property is "off." This > property cannot be changed after the file system is > created. I'm unable to find more info about this. E.g., what does "reject file names" mean in practice? E.g., if a program tries to create a file using a utf8-incompatible filename, what happens? Does the fopen() fail? Would this normally be a problem? E.g., do tar and similar programs convert utf8-incompatible filenames to utf8 upon extraction if my locale (or wherever the fs encoding is taken from) is set to use utf-8? If they don't, then what happens with archives containing utf8-incompatible filenames? - Marcus
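To make the question concrete, this is the experiment I'd run if I had a scratch pool handy (pool/dataset names are made up, and I'm only guessing at where and how the failure shows up):

```shell
# Hypothetical pool/dataset names; utf8only can only be set at creation.
zfs create -o utf8only=on tank/strict

# A name that is valid UTF-8 should be accepted as usual:
touch /tank/strict/resumé

# bash's $'\xe9' injects the single ISO-8859-15 byte for 'é', which is
# not valid UTF-8 on its own -- does this create (i.e. the underlying
# open(2)/creat(2)) fail, and if so, with which errno?
touch /tank/strict/resum$'\xe9'
```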
Re: [zfs-discuss] Can ZFS be event-driven or not?
Nicolas Williams <[EMAIL PROTECTED]> wrote: > On Wed, Feb 27, 2008 at 05:54:29AM +0200, Marcus Sundman wrote: > > Nathan Kroenert <[EMAIL PROTECTED]> wrote: > > > Are you indicating that the filesystem know's or should know what > > > an application is doing?? > > > > Maybe "snapshot file whenever a write-filedescriptor is closed" or > > somesuch? > > Again. Not enough. Some apps (many!) deal with multiple files. So what? Why would every file-snapshot have to be a file that's valid for the application(s) using it? (Certainly zfs snapshots don't provide that property either, nor any other backup-related system I've seen.) - Marcus
Re: [zfs-discuss] path-name encodings
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote: > Marcus Sundman wrote: > > Are path-names text or raw data in zfs? I.e., is it possible to know > > what the name of a file/dir/whatever is, or do I have to make more > > or less wild guesses what encoding is used where? > > I'm not sure what you are asking here. When a zfs file system is > mounted, it looks like a normal unix file system, i.e., a tree of > files where intermediate nodes are directories and leaf nodes may be > directories or regular files. In other words, ls gives you the same > kind of output you would expect on any unix file system. As to > whether a file/directory name is text or binary, that depends > on the name used when creating the file/directory. As far as the > meta-data used to maintain the file system tree, most of this is > compressed. But your question makes me wonder if you have tried > zfs. If so, then I really am not sure what you are asking. If not, > maybe you should try it out... I am running it (in nexenta). Anyway, my question was whether path-names (files, dirs, links, sockets, etc) are text or raw data. Fundamentals: "raw data" is "a list of bits, usually in groups of 8 (i.e., bytes)", and "text" is "raw data + some way of knowing how to convert that data into characters, forming strings". Example: When you go to a web-page the webserver sends the bytes of the page along with a http-header named "Content-Type", which tells your browser how to interpret those bytes. Example: Some versioning systems, such as svn, are hardcoded to encode pathnames as UTF-8. So, although the encoding-metadata isn't available along with the data it is in the specification. So, once more, is it possible to know the pathnames (as text) on zfs, or are pathnames just raw data and I (or my programs) have to make more or less wild guesses about what encoding the user who created the file/dir/etc. used for its name? At least on linux it's the latter. 
IMO it really sucks to not be able to know the names of files/dirs/etc., because it always leads to problems. E.g., most (but not all) programs assume filenames should be encoded according to the current locale (let's say utf-8), so when a filename with another encoding (let's say iso-8859-15) is encountered various Evil(tm) things happen, such as not displaying the file(s) at all (e.g., an image viewer I've used), or replacing filenames with "?", or replacing parts of filenames with "?" and decoding the rest of the filename with an obviously incorrect encoding (e.g., ls). I've even seen programs crash when they can't decode a filename. - Marcus
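To illustrate the "pathnames are just raw data" point, here's a quick shell demonstration (any fs that, like typical Linux ones, accepts arbitrary bytes in names will do):

```shell
cd "$(mktemp -d)"
# Create a file whose name contains the raw byte 0xF6 -- the
# ISO-8859-15 encoding of 'ö', which is not valid UTF-8:
touch "$(printf 'sj\366berg.txt')"
# The byte is stored as-is; nothing anywhere records what encoding
# the creator had in mind:
ls | od -c
# Any UTF-8-assuming tool that now lists this directory has to guess,
# and typically prints '?' or otherwise mangles the name.
```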
Re: [zfs-discuss] Can ZFS be event-driven or not?
Nathan Kroenert <[EMAIL PROTECTED]> wrote: > Are you indicating that the filesystem know's or should know what an > application is doing?? Maybe "snapshot file whenever a write-filedescriptor is closed" or somesuch? - Marcus
[zfs-discuss] path-name encodings
Are path-names text or raw data in zfs? I.e., is it possible to know what the name of a file/dir/whatever is, or do I have to make more or less wild guesses what encoding is used where? - Marcus
Re: [zfs-discuss] missing files on copy
Mark Ashley <[EMAIL PROTECTED]> wrote: > It's simply a shell grokking issue, when you allow your (l)users to > self name your files then you will have spaces etc in the filename > (breaks shell arguments). In this case the '[E]' is breaking your > command line argument grokking. Can't be, because the '[E]' wasn't part of the command line arguments (it was in a subdirectory). - Marcus
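The difference is easy to demonstrate (made-up paths): the shell only glob-expands what is typed on the command line, not names a program reaches through its own recursion:

```shell
set -e
d="$(mktemp -d)"
mkdir "$d/music"
touch "$d/music/[E] track.mp3"

# The only word the shell could expand here is "$d/music", which has
# no glob characters; cp finds the bracketed name via its own
# readdir(), so the shell never sees it:
cp -r "$d/music" "$d/copy"
ls "$d/copy"

# Globbing would only matter if the bracketed name itself were typed
# unquoted as an argument, e.g.:  ls $d/copy/[E]*
```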
Re: [zfs-discuss] ZFS Home NAS - Configuration & Questions
Tim Cook <[EMAIL PROTECTED]> wrote: > I'm currently running the asus K8N-LR, and it works wonderfully. Thanks, but socket 939 is cold dead and buried. S939 CPUs are very expensive. DDR is over twice as expensive as DDR2. I can't tell if the motherboard is expensive or not because I just can't find it _anywhere_ in Finland (and it is thus definitely not "common", which was one of my few criteria). And to top it all off it has one of those whiny chipset fans, which is bad both for a _home_ NAS (because of the noise) and for reliability (because when the fan breaks, the heatsink it's attached to is way too small for the general air flow in the case). Regards, Marcus
Re: [zfs-discuss] ZFS Home NAS - Configuration & Questions
Marcus Sundman <[EMAIL PROTECTED]> wrote: > Richard Elling <[EMAIL PROTECTED]> wrote: > > It may be less expensive to purchase a new motherboard with 6 SATA > > ports on it. > > Sure, but which one? I've been trying to find one for many, many > months already, but it has turned out to be impossible to find anyone > that has tried one successfully and is willing to tell about it. I'm > currently looking at the NForce 570 SLI based GA-M57SLI-S4 > motherboard, but I don't know if it'll work well or not. Hah! Funny timing. After I wrote that, but before I pressed the Send-button I got a message from Nathan Kroenert (thanks, Nathan!) telling me that the GA-M57SLI-S4 works very well indeed. I think I'll order one of those tonight. - Marcus
Re: [zfs-discuss] ZFS Home NAS - Configuration & Questions
Richard Elling <[EMAIL PROTECTED]> wrote: > Marcus Sundman wrote: > > Kava <[EMAIL PROTECTED]> wrote: > > > >> Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card? > >> > > > > Why would you get a PCI-X card for a home NAS? I don't think I've > > ever seen a non-server motherboard with PCI-X. Are you sure you > > don't want a PCI-E card instead? > > > > Anyway, if someone is aware of some cheap (but reliable) SATA PCI-E > > card then I'd be very interested. > > It may be less expensive to purchase a new motherboard with 6 SATA > ports on it. Sure, but which one? I've been trying to find one for many, many months already, but it has turned out to be impossible to find anyone that has tried one successfully and is willing to tell about it. I'm currently looking at the NForce 570 SLI based GA-M57SLI-S4 motherboard, but I don't know if it'll work well or not. If someone knows of some common (and cheap) motherboard with 6+ SATA ports and gigabit ethernet then please tell. Also, if I just use on-board ports then: A) I'm stuck with as many ports as the motherboard has, and B) when the manufacturer stops making the motherboard I have to start my search all over again (and mobos become obsolete much, much quicker than expansion cards). However, although I'd rather find an inexpensive SATA PCI-E card, a mobo with 6+ SATA ports is just fine for me at this point. - Marcus
[zfs-discuss] Drives of different size
Let's say I have two 300 GB drives and one 500 GB drive. Can I put a RAID-Z on the three drives and a separate partition on the last 200 GB of the 500 GB drive? - Marcus
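I.e., something like this sketch, if slices are acceptable as vdev components (made-up device names; the 500 GB drive sliced beforehand with format(1M) into a ~300 GB s0 and a ~200 GB s1):

```shell
# Three ~300 GB vdevs: two whole drives plus one slice of the big drive
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0s0
# Separate pool (or other use) for the leftover ~200 GB slice
zpool create leftover c1t2d0s1
# Caveat I've read about: ZFS only auto-enables a disk's write cache
# when it is given the whole disk, so the sliced drive may be slower.
```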
Re: [zfs-discuss] ZFS Home NAS - Configuration & Questions
Kava <[EMAIL PROTECTED]> wrote: > Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card? Why would you get a PCI-X card for a home NAS? I don't think I've ever seen a non-server motherboard with PCI-X. Are you sure you don't want a PCI-E card instead? Anyway, if someone is aware of some cheap (but reliable) SATA PCI-E card then I'd be very interested. - Marcus
[zfs-discuss] Re: Cheap ZFS homeserver.
> So I was hoping that this board would work: [...]GA-M57SLI-S4 I've been looking at that very same board for the very same purpose. It has 2 gigabit NICs, 6 SATA ports, supports ECC memory and is passively cooled. And it's very cheap compared to most systems that people recommend for running OpenSolaris on. (A GA-M57SLI-S4, an Athlon64 LE-1620 and 2 * 1GB 800MHz DDR2 ECC all together sum up to a total of only 165-175 € here, which is a lot less than what the recommended SATA cards cost. Add 3 500GB disks and you have a pretty nice raid-z system for only a total of 440 € (assuming you already have a case and PSU, which I do). Or you could use 3 1TB disks instead and add a good UPS and still have the whole package for less than 1000 €.) There are not many reports about the nforce 570 sli chipset, but several people have got the nforce 570 chipset working without problems. Here is a system with the GA-M57SLI-S4 in the HCL: http://www.sun.com/bigadmin/hcl/data/systems/details/2714.html It says the SATA ports run in "Legacy Mode" (which means no hotswap or NCQ, but I don't know if it has any other downsides, anyone?) in Solaris Express Developer Edition 05/07. However, there seem to have been new MCP55 (all nf570 boards are mcp55-based) drivers released since then: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6296435 Has anyone tested the "new" mcp55 drivers with the SATA ports on an nforce 570 sli motherboard? - Marcus