Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-17 Thread Roland Rambau

Eric,

in my understanding (which I learned from more qualified people,
but I may be mistaken anyway), whenever we discuss a transfer rate
like x Mb/s, y GB/s or z PB/d, the prefix M, G, T or P refers to the
frequency and not to the data.

1 MB/s means "transfer bytes at 1 MHz", NOT "transfer megabytes at 1 Hz";

therefore it is 1'000'000 B/s (strictly speaking)


Of course the protocol overhead is usually much larger, so the small
1000:1024 difference is irrelevant anyway and can (and will) be neglected.
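
For the 1 TiB quiz quoted below: 1 TiB = 2^40 B = 1'099'511'627'776 B and
1 GB/s = 10^9 B/s, so the ideal transfer takes about 1'100 s, i.e. roughly
18.3 minutes - about 10% more than the 1'000 s a naive "1 TB over 1 GB/s"
estimate gives.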

  -- Roland





On 17.03.2010 04:45, Erik Trimble wrote:

On 3/16/2010 4:23 PM, Roland Rambau wrote:

Eric,

careful:

On 16.03.2010 23:45, Erik Trimble wrote:


Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
non-authoritative.


How long does it take to transmit 1 TiB over a 1 GB/sec transmission
link, assuming no overhead ?

See ?

hth

-- Roland



I guess folks have gotten lazy all over.

Actually, for networking, it's all "GigaBIT", but I get your meaning.
Which is why it's all properly labeled "1Gb" Ethernet, not "1GB" ethernet.

That said, I'm still under the impression that Giga = 1024^3 for
networking, just like Mega = 1024^2. After all, it's 100Mbit Ethernet,
which doesn't mean it runs at 100 MHz.

That is, on Fast Ethernet, I should be sending a max 100 x 1024^2 BITS
per second.


Data amounts (so far as I know universally) employ powers-of-2, while
frequencies are done in powers-of-10. Thus, baud (for modems) is in
powers-of-10, as are CPU/memory speeds. Memory (RAM of all sorts), bus
throughput (e.g. PCI-E), networking throughput, and even graphics
throughput are in powers-of-2.

If they want to use powers-of-10, then use the actual "normal" names, as
graphics performance ratings have done (i.e. 10 billion texels, not
"10 Gigatexels"). Take a look at Nvidia's product literature:

http://www.nvidia.com/object/IO_11761.html


It's just the storage vendors using the broken measurements. Bastards!





--


Roland Rambau Server and Solution Architects
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com

Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;
Geschäftsführer: Thomas Schröder
*** UNIX ** /bin/sh * FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-16 Thread Roland Rambau

Eric,

careful:

On 16.03.2010 23:45, Erik Trimble wrote:


Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
non-authoritative.


How long does it take to transmit 1 TiB over a 1 GB/sec transmission
link, assuming no overhead ?

See ?

  hth

  -- Roland



--


Roland Rambau Server and Solution Architects
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com

Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;
Geschäftsführer: Thomas Schröder
*** UNIX ** /bin/sh * FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread roland
making transactional, logging filesystems thin-provisioning aware should be hard 
to do, as every new and every changed block is written to a new location. 
so what applies to zfs should also apply to btrfs or nilfs or similar 
filesystems.

i'm not sure if there is a good way to make zfs thin-provisioning 
aware/friendly - so you should wait for what a zfs developer has to say about this.

not sure about vxfs, but i think vxfs is very different in its basic design 
and on-disk structure
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs upgrade freezes desktop

2009-12-30 Thread roland
seems my problem is unrelated.

after disabling the gui and working on the console only, i see no freezes. so it must 
be a problem of the desktop/X environment and not a kernel/zfs issue.

sorry for the noise.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs upgrade freezes desktop

2009-12-29 Thread roland
i have a problem which is perhaps related.

i installed opensolaris snv_130.
after adding 4 additional disks and creating a raidz on them with 
compression=gzip and dedup enabled, i got a reproducible system freeze (not sure, 
but the desktop/mouse cursor froze) directly after login - without actively 
accessing the disks at all.
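
for reference, the setup was roughly the following (the disk names here are
placeholders, not the exact devices i used):

  zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0
  zfs set compression=gzip tank
  zfs set dedup=on tank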

after removing the disks, all is fine again - no freeze.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Roland Rambau

Per,

Per Baatrup wrote:

Roland,

Clearly an extension of "cp" would be very nice when managing large files.
Today we are relying heavily on snapshots for this, but this requires disipline 
on storing files in separate zfs'es avioding to snapshot too many files that 
changes frequently.

The reason I was speaking about "cat" instead of "cp" is that in addition to copying a 
single file I would also like to concatenate several files into a single file. Can this be accomplished with 
your "(z)cp"?


No - "zcp" is a simpler case than what you proposed, and thats why
I pointed it out as a discussion case.  ( And it is clearly NOT
the same as 'ln'. )

Btw. I would be surprised to hear that this can be implemented
with current APIs;  you would need a call like (my fantasy here)
"write_existing_block()" where the data argument is not a pointer
to a buffer in memory but instead a reference to an already existing
data block in the pool. Based on such a call ( and a corresponding one
for read that returns those references in the pool ) IMHO an implementation
of the commands would be straightforward ( the actual work would be
in the implementation of those calls ).

This can certainly be done - I just doubt it already exists.

  -- Roland


--

**
Roland Rambau Platform Technology Team
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com
**
Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;  Geschäftsführer:
Thomas Schröder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates:   Martin Häring
*** UNIX * /bin/sh  FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Roland Rambau

Michael,

michael schuster wrote:

Roland Rambau wrote:

gang,

actually a simpler version of that idea would be a "zcp":

if I just cp a file, I know that all blocks of the new file
will be duplicates; so the cp could take full advantage of
the dedup without a need to check/read/write any actual data


I think they call it 'ln' ;-) and that even works on ufs.


quite similar but with a critical difference:

with hard links any modifications through either link are
seen by both links, since it stays a single file (note that
editors like vi do an implicit cp, they do NOT update the
original file )

That "zcp" ( actually it should be just a feature of 'cp' )
would be blockwise copy-on-write. It would have exactly
the same semantics as cp but just avoid any data movement,
since we can easily predict what the effect of a cp followed
by a dedup should be.
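
A trivial illustration of the difference (file names made up, works on any
filesystem):

  $ echo one > a
  $ ln a b          # hard link: one file, two names
  $ cp a c          # copy: an independent file
  $ echo two > a    # rewrite the file in place
  $ cat b           # the hard link sees the change
  two
  $ cat c           # the copy does not
  one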

  -- Roland




--

******
Roland Rambau Platform Technology Team
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com
**
Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;  Geschäftsführer:
Thomas Schröder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates:   Martin Häring
*** UNIX * /bin/sh  FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Roland Rambau

gang,

actually a simpler version of that idea would be a "zcp":

if I just cp a file, I know that all blocks of the new file
will be duplicates; so the cp could take full advantage of
the dedup without a need to check/read/write any actual data

  -- Roland

Per Baatrup wrote:

"dedup" operates on the block level leveraging the existing FFS checksums. Read 
"What to dedup: Files, blocks, or bytes" here http://blogs.sun.com/bonwick/entry/zfs_dedup

The trick should be that the zcat userland app already knows that it will 
generate duplicate files, so data reads and writes could be avoided altogether.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comstar thin provisioning space reclamation

2009-11-21 Thread roland
sdelete may be the easiest, but not the best tool here, since it's made for 
secure deletion and not made for filling a disk with zeroes quickly. 

i have no windows around here for performance testing, but dd may perform 
better:

http://www.chrysocome.net/dd

you should try "dd if=/dev/zero of=largefile.dat bs=1M" and remove that file 
afterwards

this should give the same effect as with sdelete, but it may perform better.

if you like to try that, please report the results here.

there are more tools like this around, but they are hard to find.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ls -l hang, process unkillable

2009-11-11 Thread roland
thanks.

we will try that if the error happens again - needed to reboot as a quick-fix, 
as the machine is in production

regards
roland
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ls -l hang, process unkillable

2009-11-11 Thread roland
hello, 

one of my colleagues has a problem with an application. the sysadmins 
responsible for that server told him that it was the application's fault, but i 
think they are wrong, and so does he.

from time to time the app gets unkillable, and when trying to list the contents 
of some dir which is being read/written by the app, "ls" can list the contents, 
but "ls -l" gets stuck and simply hangs. cp, rm, mv on files in that dir 
don't work either.

i think this is a solaris/kernel/zfs problem.

can somebody give a hint how to analyze/fix this? 
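
in case it helps, i can collect something like this the next time it happens
(assuming root access on the box):

  pgrep -fl 'ls -l'      # note the pid of the stuck ls
  pstack <pid>           # userland side of the stack
  echo "0t<pid>::pid2proc | ::walk thread | ::findstack -v" | mdb -k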

i can provide more input on request.

regards
roland
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zle compression ?

2009-11-10 Thread roland
via a posting on the zfs-fuse mailing list, i came across "zle" compression, which 
seems to be part of the dedup commit from some days ago:

http://hg.genunix.org/onnv-gate.hg/diff/e2081f502306/usr/src/uts/common/fs/zfs/zle.c

--snipp
 * Zero-length encoding.  This is a fast and simple algorithm to eliminate
 * runs of zeroes.  Each chunk of compressed data begins with a length byte, b.
 * If b < n (where n is the compression parameter) then the next b + 1 bytes
 * are literal values.  If b >= n then the next (256 - b + 1) bytes are zero.
--snipp

i'm curious - what does that mean?

does zfs have another compression scheme named "zle" now? 
if yes, why?

wasn't zero-length encoding already there, just as a builtin feature?

maybe that builtin has now become an option?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedup question

2009-11-02 Thread roland
>forgive my ignorance, but what's the advantage of this new dedup over
>the existing compression option?
 
it may provide another space saving advantage. depending on your data, the 
savings can be very significant.

>Wouldn't full-filesystem compression
>naturally de-dupe?
no. compression doesn't look back and forth; only the current data block is 
compressed, with redundant information within that block removed. compression != 
deduplication!
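
a quick way to see the difference for yourself (pool/dataset/file names made up):

  zfs set dedup=on tank/test
  cp bigfile /tank/test/copy1
  cp bigfile /tank/test/copy2       # identical data: dedup stores the blocks only once
  zpool get dedupratio tank         # close to 2.00x if the pool holds little else
  zfs get compressratio tank/test   # compression alone does not notice the duplicate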
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-09-24 Thread Roland Rambau

Richard, Tim,

yes, one might envision the X4275 as OpenStorage appliances, but
they are not. Exadata 2 is
 - *all* Sun hardware
 - *all* Oracle software (*)
and that combination is now an Oracle product: a database appliance.

All nodes run Oracle's Linux; as far as I understand - and that is not
sooo much - Oracle has offloaded certain database functionality into
the storage nodes. I would not assume that there is a hybrid storage
pool with a file system - it is a distributed database that knows how to
utilize flash storage. I see it as a first quick step.

  hth

  -- Roland

PS: (*) disregarding firmware-like software components like Service
Processor code or IB subnet managers in the IB switches, which are
provided by Sun



Richard Elling wrote:


On Sep 24, 2009, at 10:17 AM, Tim Cook wrote:




On Thu, Sep 24, 2009 at 12:10 PM, Richard Elling wrote:

On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:

I'm surprised no-one else has posted about this - part of the Sun 
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48 
or 96 GB of SLC, a built-in SAS controller and a super-capacitor for 
cache protection. 
http://www.sun.com/storage/disk_systems/sss/f20/specs.xml


At the Exadata-2 announcement, Larry kept saying that it wasn't a 
disk.  But there
was little else of a technical nature said, though John did have one 
to show.


RAC doesn't work with ZFS directly, so the details of the 
configuration should prove interesting.
 -- richard

Exadata 2 is built on Linux from what I read, so I'm not entirely sure 
how it would leverage ZFS, period.  I hope I heard wrong or the whole 
announcement feels like a bit of a joke to me.


It is not clear to me. They speak of "storage servers" which would be 
needed to implement the shared storage. These are described as Sun Fire X4275 
loaded with the FlashFire cards. I am not aware of a production-ready Linux 
file system which implements a hybrid storage pool. I could easily envision 
these as being OpenStorage appliances.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--

******
Roland Rambau Platform Technology Team
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com
**
Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;  Geschäftsführer:
Thomas Schröder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates:   Martin Häring
*** UNIX * /bin/sh  FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfssupport?

2009-09-15 Thread Roland Mainz
Robert Thurlow wrote:
> Roland Mainz wrote:
> 
> > Ok... does that mean that I have to create a ZFS filesystem to actually
> > test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
> > other options ?
> 
> By all means, test with ZFS.  But it's easy to do that:
> 
> # mkfile 64m /zpool.file
> # zpool create test /zpool.file
> # zfs list
test   67.5K  27.4M    18K  /test

I know... but AFAIK this requires "root" privileges which the test
suite won't have...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfssupport?

2009-09-15 Thread Roland Mainz
Ian Collins wrote:
> Roland Mainz wrote:
> > Norm Jacobs wrote:
> >> Roland Mainz wrote:
> >>> Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
> >>> "yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
> >>> etc.) are supported by tmpfs ?
> >>>
> >> I have some vague recollection that tmpfs doesn't support ACLs and it
> >> appears to be so...
> >
> > Is there any RFE which requests the implementation of NFSv4-like ACLs
> > for tmpfs yet ?
> >
> >> ZFS
> >>
> >> opensolaris% touch /var/tmp/bar
> >> opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
> >> opensolaris%
> >>
> >> TMPFS
> >>
> >> opensolaris% touch /tmp/bar
> >> opensolaris% chmod A=user:lp:r:deny /tmp/bar
> >> chmod: ERROR: Failed to set ACL: Operation not supported
> >> opensolaris%
> >>
> >
> > Ok... does that mean that I have to create a ZFS filesystem to actually
> > test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
> > other options ?
> Use function interposition.

Umpf... the matching code is linked with -Bdirect ... AFAIK I can't
interpose library functions linked with this option, right ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Roland Mainz
Norm Jacobs wrote:
> Roland Mainz wrote:
> > Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
> > "yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
> > etc.) are supported by tmpfs ?
> 
> I have some vague recollection that tmpfs doesn't support ACLs and it
> appears to be so...

Is there any RFE which requests the implementation of NFSv4-like ACLs
for tmpfs yet ?

> ZFS
> 
> opensolaris% touch /var/tmp/bar
> opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
> opensolaris%
> 
> TMPFS
> 
> opensolaris% touch /tmp/bar
> opensolaris% chmod A=user:lp:r:deny /tmp/bar
> chmod: ERROR: Failed to set ACL: Operation not supported
> opensolaris%

Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options ?

[1]=The idea is to have a test module which checks whether ACL
operations work correctly; however, the testing framework must only run
as a normal, unprivileged user...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Roland Mainz

Hi!



Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
"yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?

----

Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS export issue

2009-09-14 Thread roland
what you want is possible with linux nfs, but the solaris nfs developers don't like 
this feature and will not implement it.  see 
http://www.opensolaris.org/jive/thread.jspa?threadID=109178&start=0&tstart=0
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Read about ZFS backup - Still confused

2009-09-03 Thread roland
>I would like to duplicate this scheme using zfs commands.
you don't want to do that.

zfs is meant to be used as a filesystem on a backup server, not for 
long-term storage of data on removable media
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Roland Rambau

Jacob,

Jacob Ritorto wrote:

Torrey McMahon wrote:

3) Performance isn't going to be that great with their design 
but...they might not need it.



Would you be able to qualify this assertion?  Thinking through it a bit, 
even if the disks are better than average and can achieve 1000Mb/s each, 
each uplink from the multiplier to the controller will still have 
1000Gb/s to spare in the slowest SATA mode out there.  With (5) disks 
per multiplier * (2) multipliers * 1000GB/s each, that's 1Gb/s at 
the PCI-e interface, which approximately coincides with a meager 4x 
PCI-e slot.


they use an $85 PC motherboard - that does not have "meager 4x PCI-e slots",
it has one 16x and 3 *1x* PCIe slots, plus 3 PCI slots ( remember, long time
ago: 32-bit wide 33 MHz, probably shared bus ).

Also it seems that all external traffic uses the single GbE motherboard port.

  -- Roland


--

******
Roland Rambau Platform Technology Team
Principal Field Technologist  Global Systems Engineering
Phone: +49-89-46008-2520  Mobile:+49-172-84 58 129
Fax:   +49-89-46008-  mailto:roland.ram...@sun.com
**
Sitz der Gesellschaft: Sun Microsystems GmbH,
Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Amtsgericht München: HRB 161028;  Geschäftsführer:
Thomas Schröder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates:   Martin Häring
*** UNIX * /bin/sh  FORTRAN **
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to prevent /usr/bin/chmod from followingsymboliclinks?

2009-08-22 Thread Roland Mainz
Kris Larsen wrote:
> 
> Thanks. It works for GNU-style chmod usage.

Erm... technically this isn't GNU "chmod", it's a different "chmod"
implementation which includes GNU+BSD+MacOSX options...

> But aren't ACL's supported?

No, not yet... but it's on my todo list (the tricky part is to find the
person who originally added ACL support to Solaris's "chmod" since I
have a couple of questions...) ... 



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symboliclinks?

2009-08-22 Thread Roland Mainz
Kris Larsen wrote:
> 
> Hello!
> 
> How can I prevent /usr/bin/chmod from following symbolic links? I can't find 
> any -P option in the documentation (and it doesn't work either..). Maybe find 
> can be used in some way?
[snip]

Try:
1. Start ksh93
$ ksh93
2. Load "chmod" builtin command
$ builtin chmod
3. View help
$ chmod --man
or
$ chmod --help

Does that work for you ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] surprisingly poor performance

2009-08-11 Thread roland
>SSDs with capacitor-backed write caches
>seem to be fastest.

how do you distinguish them from ssds without one?
i never saw this explicitly mentioned in the specs.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-08-11 Thread roland
>I tried making my nfs mount to higher zvol level. But I cannot traverse to the 
>sub-zvols from this mount. 

i really wonder when someone will come up with a little patch which implements 
the crossmnt option for solaris nfsd (like the one that exists for linux nfsd).

ok, even if it's a hack - if it works, it just works, and using it with esx/nfs 
would be a killer feature.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-08-11 Thread roland
>Re-surfacing an old thread. I was wondering myself if there are any 
>home-use commercial NAS devices with zfs. I did find that there is 
>Thecus 7700. But, it appears to come with Linux, and use ZFS in FUSE, 
>but I (perhaps unjustly) don't feel comfortable with :)

no, you justly feel uncomfortable with that. i'm really curious how an 
enterprise-targeted nas device can implement zfs-fuse, which is known to have 
issues and is considered to be beta (if not alpha) quality software. or have they 
developed it internally to a stable version!? (as the cddl does not force them 
to release the sources afterwards - is that correct?)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I setting 'zil_disable' to increase ZFS/iscsi performance ?

2009-08-07 Thread roland
>Yes, but to see if a separate ZIL will make a difference the OP should 
>try his iSCSI workload first with ZIL then temporarily disable ZIL and 
>re-try his workload.

or you may use the zilstat utility
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread roland
doesn't solaris have the great builtin dtrace for issues like these?

if we knew in which syscall or kernel thread the system is stuck, we might get a 
clue...
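
i'm thinking of something along these lines (no idea if this is the right
approach; it assumes it is the zpool/zfs command itself that hangs):

  # where does the hung command sit in the kernel?
  echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k
  # which syscalls is it (not) making?
  dtrace -n 'syscall:::entry /execname == "zpool"/ { @[probefunc] = count(); }'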

unfortunately, i don't have any real knowledge of solaris kernel internals or 
dtrace...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-04 Thread roland
what exact type of sata controller do you use?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zdb assertion failure/zpool recovery

2009-08-02 Thread roland
>IIRC the corruption (i.e. pool being not importable) was caused 
>when I killed virtual box, because it was hung.

that scares me about using zfs inside virtual machines. is such an issue known with 
vmware?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] crossmnt ?

2009-07-30 Thread roland
Hello !

How can i export a filesystem /export1 so that sub-filesystems within that 
filesystem will be available and usable on the client side without additional 
"mount/share effort"?

this is possible with linux nfsd and i wonder how this can be done with solaris 
nfs.

i'd like to use /export1 as a datastore for ESX and create zfs sub-filesystems 
for each VM in that datastore, for better snapshot handling.
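
what i have so far (the names are just examples): sharenfs is inherited by the
children, but the client still has to mount each one separately - i have not
found an equivalent of the linux crossmnt behaviour, at least not for the
NFSv3 that ESX uses:

  zfs create export1/vm1
  zfs create export1/vm2
  zfs set sharenfs=rw export1    # vm1 and vm2 inherit the share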
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread roland
what's your disk controller?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-25 Thread roland
thanks for the explanation !

one more question:

> there are situations where the disks doing strange things
>(like lying) have caused the ZFS data structures to become wonky. The
>'broken' data structure will cause all branches underneath it to be
>lost--and if it's near the top of the tree, it could mean a good
>portion of the pool is inaccessible.

can snapshots also be affected by such an issue, or are they somewhat "immune" here?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-25 Thread roland
>As soon as you have more then one disk in the equation, then it is
>vital that the disks commit their data when requested since otherwise
>the data on disk will not be in a consistent state.

ok, but doesn't that refer only to the most recent data?
why can i lose a whole 10TB pool, including all the snapshots, given the 
logging/transactional nature of zfs?

isn't the data in the snapshots set to read-only, so that all blocks with snapshotted 
data don't change over time (and thus give a secure "entry" to a consistent 
point in time)?

ok, these are probably some short-sighted questions, but i'm trying to 
understand how things could go wrong with zfs and how issues like these happen.

on other filesystems, we have fsck as a last resort or tools to 
recover data from unmountable filesystems. 
with zfs i don't know of any of these, so it's that "will solaris mount my zfs 
after the next crash?" question which frightens me a little bit.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-25 Thread roland
>Running this kind of setup absolutely can give you NO garanties at all.
>Virtualisation, OSOL/zfs on WinXP. It's nice to play with and see it
>"working" but would I TRUST precious data to it? No way!

why not?
if i write some data through the virtualization layer and it goes straight through to 
the raw disk - what's the problem?
do a snapshot and you can be sure you have a safe state. or not?
you can check if you are consistent by doing a scrub. or not?
taking buffers/caches into consideration, you could possibly lose some 
seconds/minutes of work, but doesn't zfs use a transactional design which ensures 
consistency? 

so how can what's being reported here happen, if zfs takes so much care 
of consistency?

>When that happens, ZFS believes the data is safely written, but a power cut or 
>>crash can cause severe problems with the pool.

didn't i read a million times that zfs ensures an "always consistent state" and 
is self-healing, too?

so, if new blocks are always written at new positions - why can't we just roll 
back to a point in time (for example the last snapshot) which is known to be 
safe/consistent?

i don't give a shit about the last 5 minutes of work if i can recover my TB-sized 
pool instead.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS tale of woe and fail

2009-07-11 Thread roland
mhh, i think i'm afraid too, as i also need to use zfs on a single, large lun.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Adaptec SAS 2405 - anyone ?

2009-07-07 Thread roland
Hello, 

is anybody using this controller with opensolaris/snv ?

http://www.adaptec.com/de-DE/products/Controllers/Hardware/sas/entry/SAS-2405/

does it run out of the box ?

how does it perform with zfs? (especially when using it for zfs/nfs esx setup)

the driver from adaptec is for solaris 10 u4. If it does not work out of the 
box, can i use that driver with opensolaris/snv ?

thanks
roland
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-24 Thread roland
>Dennis is correct in that there are significant areas where 32-bit
>systems will remain the norm for some time to come. 

think of the hundreds of thousands of VMware ESX/Workstation/Player/Server 
installations on non-VT-capable cpus - even if the cpu has 64-bit capability, a 
VM cannot run in 64-bit mode if the cpu is missing VT support. And VT hasn't 
been available for that long; there are even recent CPUs which don't have VT 
support
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread roland
just a side-question:

>I folthis thread with much interest.

what are these "*" for ?

why is "followed" turned into "fol*" on this board?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] space in use by snapshots

2009-06-17 Thread roland
great, will try it tomorrow!

thanks very much!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] space in use by snapshots

2009-06-17 Thread roland
hello, 

i'm doing backups to several backup dirs where each is a sub-filesystem on 
/zfs, i.e. /zfs/backup1, /zfs/backup2

i do snapshots on a daily basis, but have a problem:
how can i see how much space is in use by the snapshots for each sub-fs? i.e. 
i want to see what's in use on /zfs/backup1 (that's easy, just du -s -h 
/zfs/backup1) and how much space the snapshots need (that seems not so easy)
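
is it something along these lines (assuming my build already has the usedby*
properties and i'm just overlooking it)?

  zfs list -r -o name,used,referenced,usedbysnapshots zfs
  # or, per sub-filesystem, sum its snapshots directly:
  zfs list -t snapshot -r -o name,used zfs/backup1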

thanks
roland
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-17 Thread roland
>Solaris is NOT a super-duper-plays-in-all-possible-spaces OS.

yes, i know - but it's disappointing that not even 32-bit and 64-bit x86 hardware 
are handled the same.
a 1TB limit on 32-bit, less stability on 32-bit.

sorry, but if you are used to linux, solaris is really weird.

issue here, limitation there

doh!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-16 Thread roland
so, we have a 128bit fs, but only support for 1tb on 32bit?

i'd call that a bug, isn't it? is there a bugid for this? ;)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-16 Thread roland
>the only problems i've run into are: slow (duh) and will not 
>take disks that are bigger than 1tb

do you think that 1tb limit is due to 32bit solaris ?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-15 Thread roland
so, besides performance there COULD be some stability issues.

thanks for the answers - i think i'll stay with 32-bit, even if there COULD be 
issues. (i'm happy to report them and help fix those)

i don't have free 64-bit hardware around for building storage boxes.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs on 32 bit?

2009-06-14 Thread roland
Hello, 

the ZFS best practices guide at 
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide says:

>*  Run ZFS on a system that runs a 64-bit kernel 


besides performance aspects, what are the cons of running zfs on 32 bit?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pause Solaris with ZFS compression busy by doing a cp?

2008-06-03 Thread roland
>Try running iostat in another ssh window, you'll see it can't even gather 
>stats every 5 seconds >(below is iostats every 5 seconds):
>Tue May 27 09:26:41 2008
>Tue May 27 09:26:57 2008
>Tue May 27 09:27:34 2008

that should not happen!
i'd call that a bug!

how does vmstat behave with lzjb compression?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LZO compression?

2008-04-12 Thread roland
nothing new on this? 

i'm really surprised that interest in alternative compression schemes is so 
low, especially given that lzo seems to compress better and be faster 
than lzjb.

has nobody at sun done further investigation?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Periodic ZFS disk accesses by ksh93... / was: Re: [dtrace-discuss] periodic ZFS disk accesses

2008-03-02 Thread Roland Mainz
Bill Shannon wrote:
> Roland Mainz wrote:
> > What's the exact filename and how often are the accesses ? Is this an
> > interactive shell or is this a script (an interactive shell session will
> > do periodical lookups for things like the MAIL*-variables (see ksh(1)
> > and ksh93(1) manual pages) while scripts may do random stuff as intended
> > by the script's author(s)) ?
> > And how does the output of $ set # look like ?
> 
> The filename is /home/shannon/.history.datsun, which is what I have
> HISTFILE set to.  Again, it's doing setattr, which it shouldn't be doing
> for $MAIL.  And, based on the dtrace output, the setattrs aren't at
> any obvious period.

Do you have a userland stack trace for these setattr calls ?
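
Something like the following should be able to catch one (assuming the
attribute change really ends up in zfs_setattr()):

  dtrace -n 'fbt:zfs:zfs_setattr:entry /execname == "ksh93"/ { ustack(); }'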

> Even stranger, despite the fact that I have something
> like eight shells running, the calls are coming from a shell from which I
> started another (superuser) shell, from which I'm running the dtrace
> command.

That sounds weird... is it possible that something in the interactive
environment may cause this ?

> What is "set #"?

"set" prints all shell variables (local, global, environment) to stdout
including all their values... the '$' character was thought as shell
prompt and the '#' character is the shell's  comment character to make
sure that nothing gets executed past this point when someone is
copy&pasting such lines from the email client to the terminal window...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Periodic ZFS disk accesses by ksh93... / was: Re: [dtrace-discuss] periodic ZFS disk accesses

2008-03-02 Thread Roland Mainz
Bill Shannon wrote:
> Jonathan Edwards wrote:
> > On Mar 1, 2008, at 4:14 PM, Bill Shannon wrote:
> >> Ok, that's much better!  At least I'm getting output when I touch files
> >> on zfs.  However, even though zpool iostat is reporting activity, the
> >> above program isn't showing any file accesses when the system is idle.
> >>
> >> Any ideas?
> >
> > assuming that you're running an interval (ie: zpool iostat -v 5) and
> > skipping past the initial summary .. you know it's not file read/write
> > activity .. you might want to check other vop calls .. eg:
> > http://blogs.sun.com/erickustarz/resource/zvop_times_fsid.d
> >
> > to see what's happening .. scrubs or silvering perhaps?
> 
> I ended up combining a few programs (cut & paste programming without
> really understanding what I was doing!) and ended up with a program
> that traces all the zfs_* entry points.  Based on the data I'm getting
> from that, correlated with the zpool iostat output, it appears that
> the culprit is ksh93!  It seems that ksh93 is doing a setattr call of
> some sort on the .history file.

What's the exact filename and how often are the accesses ? Is this an
interactive shell or is this a script (an interactive shell session will
do periodical lookups for things like the MAIL*-variables (see ksh(1)
and ksh93(1) manual pages) while scripts may do random stuff as intended
by the script's author(s)) ?
And how does the output of $ set # look like ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] path-name encodings

2008-02-27 Thread Roland Mainz
Roland Mainz wrote:
> Tim Haley wrote:
> > Roland Mainz wrote:
> > > Bart Smaalders wrote:
> > >> Marcus Sundman wrote:
> > >>> I'm unable to find more info about this. E.g., what does "reject file
> > >>> names" mean in practice? E.g., if a program tries to create a file
> > >>> using an utf8-incompatible filename, what happens? Does the fopen()
> > >>> fail? Would this normally be a problem? E.g., do tar and similar
> > >>> programs convert utf8-incompatible filenames to utf8 upon extraction if
> > >>> my locale (or wherever the fs encoding is taken from) is set to use
> > >>> utf-8? If they don't, then what happens with archives containing
> > >>> utf8-incompatible filenames?
> > >> Note that the normal ZFS behavior is exactly what you'd expect: you
> > >> get the filenames you wanted; the same ones back you put in.
> > >
> > > Does ZFS convert the strings to UTF-8 in this case or will it just store
> > > the multibyte sequence unmodified ?
> > >
> > ZFS doesn't muck with names it is sent when storing them on-disk.  The
> > on-disk name is exactly the sequence of bytes provided to the open(),
> > creat(), etc.  If normalization options are chosen, it may do some
> > manipulation of the byte strings *when comparing* names, but the on-disk
> > name should be untouched from what the user requested.
> 
> Ok... that was the part which I was _praying_ for... :-)
> 
> ... just some background (for those who may be puzzled by the statement
> above): The conversion to Unicode is not always "lossless" (Unicode is
> sometimes marketed as
> "convert-any-encoding-to-unicode-without-loosing-any-information") ...
> for example if you have a mixed-language ISO-2022 character sequence the
> conversion to Unicode will use the language information itself 

s/use/lose/ ... sorry...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] path-name encodings

2008-02-27 Thread Roland Mainz
Tim Haley wrote:
> Roland Mainz wrote:
> > Bart Smaalders wrote:
> >> Marcus Sundman wrote:
> >>> I'm unable to find more info about this. E.g., what does "reject file
> >>> names" mean in practice? E.g., if a program tries to create a file
> >>> using an utf8-incompatible filename, what happens? Does the fopen()
> >>> fail? Would this normally be a problem? E.g., do tar and similar
> >>> programs convert utf8-incompatible filenames to utf8 upon extraction if
> >>> my locale (or wherever the fs encoding is taken from) is set to use
> >>> utf-8? If they don't, then what happens with archives containing
> >>> utf8-incompatible filenames?
> >> Note that the normal ZFS behavior is exactly what you'd expect: you
> >> get the filenames you wanted; the same ones back you put in.
> >
> > Does ZFS convert the strings to UTF-8 in this case or will it just store
> > the multibyte sequence unmodified ?
> >
> ZFS doesn't muck with names it is sent when storing them on-disk.  The
> on-disk name is exactly the sequence of bytes provided to the open(),
> creat(), etc.  If normalization options are chosen, it may do some
> manipulation of the byte strings *when comparing* names, but the on-disk
> name should be untouched from what the user requested.

Ok... that was the part which I was _praying_ for... :-)

... just some background (for those who may be puzzled by the statement
above): The conversion to Unicode is not always "lossless" (Unicode is
sometimes marketed as
"convert-any-encoding-to-unicode-without-loosing-any-information") ...
for example if you have a mixed-language ISO-2022 character sequence the
conversion to Unicode will use the language information itself and
converting it back to an ISO-2022 sequence will result in a different
multibyte sequence than the original input (the issue could be
worked-around by inserting the "language tag" characters to preserve
this information but almost every converter doesn't do that (and since
these "tags" are outside the BMP you have to pray that everything in the
toolchain works with Unicode characters beyond 65535) ... ;-( ).



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] path-name encodings

2008-02-27 Thread Roland Mainz
Bart Smaalders wrote:
> Marcus Sundman wrote:
> > I'm unable to find more info about this. E.g., what does "reject file
> > names" mean in practice? E.g., if a program tries to create a file
> > using an utf8-incompatible filename, what happens? Does the fopen()
> > fail? Would this normally be a problem? E.g., do tar and similar
> > programs convert utf8-incompatible filenames to utf8 upon extraction if
> > my locale (or wherever the fs encoding is taken from) is set to use
> > utf-8? If they don't, then what happens with archives containing
> > utf8-incompatible filenames?
> 
> Note that the normal ZFS behavior is exactly what you'd expect: you
> get the filenames you wanted; the same ones back you put in.

Does ZFS convert the strings to UTF-8 in this case or will it just store
the multibyte sequence unmodified ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI initiator BUG

2007-12-21 Thread roland
i have difficulty understanding this:

you say that the device gets lost whenever the I/O error occurs.

you say that you cannot use ext3 or xfs, but reiser works.

with reiser, the device doesn't get lost on I/O error?

that's very weird.

what's your distro/kernel version?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-11-30 Thread roland
*bump*

just wanted to keep this in the discussion. i think it could be important for zfs 
if it could compress faster with a better compression ratio.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-16 Thread roland egle
We are having the same problem.

First with 125025-05 and then also with 125205-07
Solaris 10 update 4 - now with all patches


We opened a Case and got

T-PATCH 127871-02

we installed the Marvell Driver Binary 3 Days ago.

T127871-02/SUNWckr/reloc/kernel/misc/sata
T127871-02/SUNWmv88sx/reloc/kernel/drv/marvell88sx
T127871-02/SUNWmv88sx/reloc/kernel/drv/amd64/marvell88sx
T127871-02/SUNWsi3124/reloc/kernel/drv/si3124
T127871-02/SUNWsi3124/reloc/kernel/drv/amd64/si3124 

It seems that this resolves the device reset problem and the nfsd crash on
an x4500 with one raidz2 pool and a lot of zfs filesystems
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] first public offering of NexentaStor

2007-11-06 Thread roland
is there any pricing information available ?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-17 Thread roland
we are at $300 now - a friend of mine just added another $100
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HAMMER

2007-10-16 Thread roland
and what about compression?

:D
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs/zpools iscsi

2007-10-12 Thread roland
>I was courious if I could use zfs to have it shared on those two hosts 
no, that's not possible for now.

>but aparently I was unable to do it for obvious reasons. 
you will corrupt your data!

>On my linuc oracle rac I was using ocfs which works just as I need it

yes, because ocfs is built for that.
it's a cluster filesystem - that's what you need for this.
another name is "shared disk filesystem"
see wikipedia -> http://en.wikipedia.org/wiki/List_of_file_systems

>maybe if not now but in the future?
it has been discussed, iirc.

>is there anything that I could do at this moment to be able to have my two 
>other
>solaris clients see my zpool that I am presenting via iscsi to them both? 
zpool? i assume you mean zvol, correct ?

>Is there any solutions out there of this kind?
i'm not that deep into solaris, but iirc there isn't one for free.
veritas is quite popular, but you need to spend lots of bucks for this.
maybe SAM-QFS ?

regards
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-09 Thread roland
for those who are interested in lzo with zfs, i have made a special version of 
the patch taken from the zfs-fuse mailing list:

http://82.141.46.148/tmp/zfs-fuse-lzo.tgz

this file contains the patch in unified diff format and also a broken-out 
version (i.e. split into single files).

maybe this makes integrating into an onnv-tree easier and also is better for 
review.

i took a quick look and compared it to the onnv sources, and it looks like it's not 
too hard to integrate - most lines are new files, and the onnv files seem to be 
changed only a little. 

unfortunately i have no solaris build environment around for now, so i cannot 
give it a try and i also have no clue if this will compile at all. maybe the 
code needs much rework to be able to run in kernel space, maybe not - but a 
solaris kernel hacker will know better
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-08 Thread roland
instead of re-inventing the wheel, somebody at sun should wake up and go ask mr. 
oberhumer and pay him $$$ to get lzo into ZFS. 

this is taken from http://www.oberhumer.com/opensource/lzo/lzodoc.php :

Copyright
 -
 LZO is Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
 2005 Markus Franz Xaver Johannes Oberhumer

 LZO is distributed under the terms of the GNU General Public License (GPL).
 See the file COPYING.

 Special licenses for commercial and other applications which
 are not willing to accept the GNU General Public License
 are available by contacting the author.


so, lzo with opensolaris doesn't sound like a no-go to me.

if Sun doesn't jump in to pay for that - let's create an LZO-into-ZFS fund.

i'm in with the first $100. :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raid Edition drive with RAIDZ

2007-10-07 Thread roland
seems that standard drives are ok.
sun is using the Hitachi Deskstar 7K500 for its Sun Fire X4500/Thumper.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VFSTAB mounting for ProFTPD

2007-10-07 Thread roland
what you're looking for is called a bind mount, and that's a linux kernel 
feature.

i don't know if solaris has a perfect equivalent for this - maybe lofs is what 
you need.
see "man lofs"
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-07 Thread roland
any news on additional compression-schemes for zfs ?

this is an interesting research topic, imho :)

so, here are some more real-world tests with zfs-fuse + the lzo patch:

-LZO
zfs set compression=lzo mypool

time cp /vmware/vserver1/vserver1.vmdk /mypool

real7m8.540s
user0m0.708s
sys 0m24.839s

zfs get compressratio mypool
NAME    PROPERTY       VALUE   SOURCE
mypool  compressratio  1.74x   -

1.7G    vserver1.vmdk  compressed
3.0G    vserver1.vmdk  uncompressed

-LZJB
zfs set compression=lzjb mypool

time cp /vmware/vserver1/vserver1.vmdk /mypool

real7m16.392s
user0m0.709s
sys 0m25.107s

zfs get compressratio mypool
NAME    PROPERTY       VALUE   SOURCE
mypool  compressratio  1.47x   -

2.0G    vserver1.vmdk  compressed
3.0G    vserver1.vmdk  uncompressed

-GZIP
zfs set compression=gzip mypool

time cp /vmware/vserver1/vserver1.vmdk /mypool/

real12m54.183s
user0m0.653s
sys 0m24.933s

zfs get compressratio
NAME    PROPERTY       VALUE   SOURCE
mypool  compressratio  2.02x   -

1.5G    vserver1.vmdk  compressed
3.0G    vserver1.vmdk  uncompressed


btw - lzo-patch for zfs-fuse (does apply to latest zfs-fuse sources) is at 
http://groups.google.com/group/zfs-fuse/attach/a489f630aa4aa189/zfs-lzo.diff.bz2?part=4
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs + iscsi target + vmware esx server

2007-10-07 Thread roland
take a look at this one 
http://www.opensolaris.org/jive/thread.jspa?messageID=98176
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs + iscsi target + vmware esx server

2007-10-07 Thread roland
I don't have a solution for you, but at least some comments:

- I have read several complaints that ESX iSCSI is broken to some degree. There are known incompatibilities, and at least one CEO of a somewhat popular iSCSI software vendor recently made a statement to that effect.
- I have read more than once that software iSCSI in ESX is slow. Using iSCSI directly from inside a Windows VM would be faster than using a virtual disk on iSCSI storage connected via the ESX iSCSI initiator. I can try to dig up a pointer to those postings.
- Try reposting in storage-discuss; I think that forum is more iSCSI-related, and this may be an iSCSI target issue rather than a ZFS issue. Also see 
http://opensolaris.org/os/project/iscsitgt/ - Rick McNeal is your man here :)
- Consider using NFS! -> 
http://storagefoo.blogspot.com/2007/09/vmware-over-nfs.html
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] use 32-bit inode scripts on zfs?

2007-10-02 Thread roland
Could you give an example of what a 32-bit inode script is?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub halts

2007-08-07 Thread roland
>6564677 oracle datafiles corrupted on thumper

wow, must be a huuge database server!
:D
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-10 Thread roland
> Wouldn't ZFS's being an integrated filesystem make it easier for it to identify
> the file types vs. a standard block device with a filesystem overlaid upon it?
>
> I read in another post that with compression enabled, ZFS attempts to compress
> the data and stores it compressed if it compresses enough. As far as identifying
> the file type/data type, how about:
> 1.) The ZFS block compression system reads the ZFS file table to identify which
> blocks are the beginning of files (or, for new writes, the block compression
> system is notified that file.ext is being written on block  (e.g. block
> 9,000,201)).
> 2.) The ZFS block compression system reads block , identifies the file type
> (probably based on the file header) and applies the most appropriate compression
> format, or if none is found, the default.
>
> An approach for maximal compression:
> The algorithm selection could be
> 1.) attempt to compress using BWT, store compressed if BWT works better than no
> compression
> 2.) when the CPU is otherwise idle, use 10% of spare CPU cycles to "walk the
> disk", trying to recompress each block with each of the various supported
> compression algorithms, ultimately storing that block in the most space-efficient
> compression format.
>
> This technique would result in a file system that tends to compact its data ever
> more tightly as the data sits in it. It could be compared to 'settling' flakes in
> a cereal box... the contents may have had a lot of 'air space' before shipment,
> but are now 'compressed'. The recompression step might even be part of a periodic
> disk scrubbing step meant to check and recheck previously written data to make
> sure the sector it is sitting on isn't going bad.


This sounds really good - I like these ideas.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-08 Thread roland
>One thing ZFS is missing is the ability to select which files to compress.
Yes.

There is also no filesystem-based way to compress/decompress a whole filesystem after the fact. You can have 499 GB of data on a 500 GB partition, and if you need some more space you might think that turning on compression on that fs would solve your problem - but compression only affects files written after it is enabled. I wish there were something like "zfs set compression=gzip <fs>", followed by "zfs compress <fs>" and "zfs uncompress <fs>", and it would be nice if we could get compression information for single files (as with NTFS).
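In the meantime, the closest thing I know of is comparing a file's apparent size with the blocks actually allocated on disk - a rough per-file check, not a real feature (the file name is only an example):

  ls -l bigfile.dat     # logical (uncompressed) size in bytes
  du -k bigfile.dat     # kilobytes actually allocated after compression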

>Even a simple heuristic like "don't compress mp3,avi,zip,tar files"
Something like that already exists: afaik, ZFS tries to compress a data block, and if it doesn't compress well enough, the block is stored uncompressed. It has no "knowledge" of what type of data a file contains, though.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-07 Thread roland
Nice idea! :)

>We plan to start with the development of a fast implementation of a Burrows 
>Wheeler Transform based algorithm (BWT).

Why not start with LZO first? It's already in zfs-fuse on Linux, and it looks like it sits "in between lzjb and gzip" in terms of performance and compression ratio.
It remains to be demonstrated that it behaves similarly on Solaris.

see http://www.opensolaris.org/jive/thread.jspa?messageID=111031
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: cross fs/same pool mv

2007-07-01 Thread roland
> > You can just re-copy all of the data after enabling compression (it's fairly
> > easy to write a script, or just do something like:
> >
> > find . -xdev -type f | cpio -ocB | cpio -idmuv
> >
> > to re-write all of the data.
>
>  and to destroy the content of all files > 5k.

I tried the above for fun on /, got tons of "File  was modified while being copied" errors, and ended up reinstalling my system from scratch :D
(Not a problem, it was just a Nexenta test installation.)

Is there a reliable method of re-compressing a whole ZFS volume after turning on compression or changing the compression scheme?
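A sketch of what I plan to try next (dataset names are only examples): copy into a fresh dataset that already has the new compression setting, then swap the names:

  zfs create -o compression=gzip tank/data.new
  rsync -a /tank/data/ /tank/data.new/     # or cp -rp; the point is to copy into another fs
  zfs rename tank/data tank/data.old
  zfs rename tank/data.new tank/data
  zfs destroy tank/data.old                # only after verifying the copy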

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: LZO compression?

2007-07-01 Thread roland
This is on Linux with zfs-fuse (since no other ZFS implementation besides zfs-fuse has LZO support at this time).

BTW, here is some additional comparison - now with some real-world data.

Copying over a MySQL database directory (var/lib/mysql) of size 231M gives:

algorithm | time      | compressratio
lzo       | 0m41.152s | 2.31x
lzjb      | 0m42.976s | 1.83x
gzip      | 1m21.637s | 3.26x

I cannot tell whether LZO and lzjb performance is really comparable, since zfs-fuse/Linux may behave very differently from zfs/Solaris - but compared with lzjb, LZO compression seems to give better results overall.

BTW: besides the open-source LZO there is a closed-source "LZO Professional" which is even more optimized.

Maybe Sun should think about LZO in ZFS - despite the licensing issues. I'm sure those could be resolved somehow, maybe by spending an appropriate amount of bucks on Mr. Oberhumer.

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: LZO compression?

2007-06-30 Thread roland
Some other funny benchmark numbers:

I wondered how the performance/compressratio of lzjb, LZO and gzip would compare given an optimally compressible data stream.

Since ZFS handles repeating zeros quite efficiently (i.e. it allocates no space), I tried writing non-zero values.

The result is quite interesting, so I'm posting it here.

Writing a 200 MB file ( time dd if=/dev/zero bs=1024k count=200 | tr '\0' 'a' > test.dat ) gives the following results (compression used | time needed | compressratio):

lzo   | 0m10.872s | 118.32x  <- (!)
lzjb  | 0m11.886s | 27.94x
gzip  | 0m17.418s | 219.64x

regards
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS delegation script

2007-06-26 Thread Roland Mainz
Nicolas Williams wrote:
> On Wed, Jun 27, 2007 at 12:55:15AM +0200, Roland Mainz wrote:
> > Nicolas Williams wrote:
> > > On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
> > > > On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
> > > > > Couldn't wait for ZFS delegation, so I cobbled something together; see
> > > > > attachment.
> > > >
> > > > I forgot to slap on the CDDL header...
> > >
> > > And I forgot to add a -p option here:
> > >
> > > > #!/bin/ksh
> > >
> > > That should be:
> > >
> > > > #!/bin/ksh -p
> >
> > Uhm... that's no longer needed for /usr/bin/ksh in Solaris 10 and ksh93
> > never needed it.
> 
> But will ksh or ksh93 know that this script must not source $ENV?

Erm, I don't know what the correct behaviour is for Solaris ksh88... but
for ksh93 it's clearly defined that ${ENV} and /etc/ksh.kshrc are only
sourced for _interactive_ shell sessions by default - and that excludes
non-interactive scripts.

> Apparently ksh won't source it anyways; this was not clear from the man
> page.
> 
> Note that in the RBAC profile for this script the script gets run with
> privs=all, not euid=0, so checking that euid == uid is not sufficient.

What do you mean by that?

> > > Note that this script is not intended to be secure, just to keep honest
> > > people honest and from making certain mistakes.  Setuid-scripts (which
> > > this isn't quite) are difficult to make secure.
> >
> > Uhm... why ? You only have to make sure the users can't inject
> > data/code. David Korn provided some guidelines for such cases, see
> > http://mail.opensolaris.org/pipermail/shell-discuss/2007-June/000493.html
> > (mainly: avoid "eval", put all variable expansions in quotes, set IFS= at
> > the beginning of the script, and harden your script against unexpected
> > input (a classic example is $ myscript "$(cat /usr/bin/cat)" # , i.e. the
> > attempt to pass a giant binary string as an argument)) ... and I am
> > currently working on a new shell code style guideline at
> > http://www.opensolaris.org/os/project/shell/shellstyle/ with more stuff.
> 
> As you can see the script quotes user arguments throughout.  It's
> probably secure -- what I meant is that I make no guarantees about this
> script :)

Yes... I saw that... and I realised that the new ksh93 getopts, the pattern
matching (e.g. [[ "${pat}" == ~(Ei).*myregex.* ]] to replace something
like [ "$(echo "${pat}" | egrep -i ".*myregex.*")" != "" ] ) and the
associative arrays (e.g. using strings as indexes instead of numbers) would
be useful for this script.
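A tiny illustration of those ksh93 features (the values are made up):

  typeset -A quota                          # associative array, indexed by strings
  quota[alice]=10G
  quota[bob]=5G

  user="Alice"
  if [[ "${user}" == ~(Ei)alice ]] ; then   # case-insensitive ERE match, no egrep fork
      print "quota for alice: ${quota[alice]}"
  fi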

Anyway... the script looks good... I wish the script code in the OS/Net
Makefiles had that quality... ;-/



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS delegation script

2007-06-26 Thread Roland Mainz
Nicolas Williams wrote:
> On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
> > On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
> > > Couldn't wait for ZFS delegation, so I cobbled something together; see
> > > attachment.
> >
> > I forgot to slap on the CDDL header...
> 
> And I forgot to add a -p option here:
> 
> > #!/bin/ksh
> 
> That should be:
> 
> > #!/bin/ksh -p

Uhm... that's no longer needed for /usr/bin/ksh in Solaris 10 and ksh93
never needed it.

> Note that this script is not intended to be secure, just to keep honest
> people honest and from making certain mistakes.  Setuid-scripts (which
> this isn't quite) are difficult to make secure.

Uhm... why ? You only have to make sure the users can't inject
data/code. David Korn provided some guidelines for such cases, see
http://mail.opensolaris.org/pipermail/shell-discuss/2007-June/000493.html
(mainly: avoid "eval", put all variable expansions in quotes, set IFS= at
the beginning of the script, and harden your script against unexpected
input (a classic example is $ myscript "$(cat /usr/bin/cat)" # , i.e. the
attempt to pass a giant binary string as an argument)) ... and I am
currently working on a new shell code style guideline at
http://www.opensolaris.org/os/project/shell/shellstyle/ with more stuff.
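A minimal sketch of such a "hardened" script prologue along those lines (illustrative only, not taken from the script in question):

  #!/bin/ksh -p
  IFS=''                                    # neutralise word splitting on untrusted input
  PATH='/usr/bin:/usr/sbin' ; export PATH   # extra precaution: don't inherit a user PATH
  for arg in "$@" ; do                      # keep all expansions quoted, never eval them
      printf 'argument: %s\n' "$arg"
  done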



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs space efficiency

2007-06-24 Thread roland
Update on this:

I think I have been caught by an rsync trap.

It seems that using rsync locally (i.e. rsync --inplace localsource localdestination) and "remotely" (i.e. rsync --inplace localsource localhost:/localdestination) are two different things, and rsync handles the writing very differently.
When I push rsync through the network stack (i.e. localhost:/localdestination), it works as expected.

I need some more testing to be really sure, but for now things look more promising.
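The two invocations compared above (paths are only examples):

  # purely local: rsync defaults to --whole-file, so the delta algorithm is skipped
  # and --inplace doesn't buy much for snapshot efficiency
  rsync -a --inplace /data/vserver1.vmdk /tank/backup/

  # via the network stack (even to localhost): the delta algorithm is used and only
  # changed regions get rewritten - or force that locally with --no-whole-file
  rsync -a --inplace /data/vserver1.vmdk localhost:/tank/backup/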

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs space efficiency

2007-06-24 Thread roland
Whoops - I see I have posted the same message several times.

That was because I got an error message when posting and thought it hadn't gone through.

Could some moderator perhaps delete those duplicate posts?


Meanwhile, I did some tests and got very weird results.

First off, I tried "--inplace" to update a slightly changing large binary file - and it didn't give the result I expected.

After a snapshot, rewriting that file made it take exactly twice the space, although I only changed a few bytes of the source file. (I used the binary editor "bed" for that.)
I would have expected rsync to merge only the changed bytes into the destination file, so that the ZFS snapshot would stay space-efficient (because of the few write() calls).

It wasn't.

This changed when I additionally added "--append" - that does give space efficiency, but it only works as desired if I append data to the end of the source file. If I change the file in the middle (i.e. only changing bytes, not inserting new ones), then rsync tells me:

WARNING: test.dat failed verification -- update retained (will try again).

I don't know what rsync wants to tell me here.

It looks like this is not a real problem, but I wonder about that message.

I tested this on zfs-fuse and cannot believe that the result is due to zfs-fuse - but I will try it on Solaris to see whether it behaves differently.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs space efficiency

2007-06-23 Thread roland
>So, in your case, you get maximum
>space efficiency, where only the new blocks are stored, and the old
>blocks simply are referenced.

So I assume that whenever a block is read from file A and written unchanged to file B, ZFS recognizes this and just creates a new reference to file A's block?

That would be great.

I shouldn't ask so much but rather try it on my own :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs space efficiency

2007-06-23 Thread roland
Hello!

I am thinking of using ZFS to back up large binary data files (i.e. VMware VMs, Oracle databases) and want to rsync them at regular intervals from other systems to one central ZFS system with compression on.

I'd like to have historical versions and thus want to take a snapshot before each backup - i.e. before each rsync run.

Now I wonder:
If I have one large data file on ZFS, take a snapshot of the ZFS fs holding it, and then overwrite that file with a newer version containing slight differences - what happens to the real disk consumption on the ZFS side?
Do I need to handle this in a special way to make it space-efficient? Do I need to use rsync --inplace?

Typically, rsync writes a complete new (temporary) file based on the existing one and on what has changed at the remote site, and then replaces the old file with the new one via delete/rename. I assume this will eat up my backup space very quickly, even when using snapshots and even if only small parts of the large file are changing.
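Roughly the cycle I have in mind - a minimal sketch, all names are only examples:

  zfs snapshot tank/backup@$(date +%Y%m%d)       # keep the previous state around
  rsync -a --inplace vmhost:/vmware/vserver1.vmdk /tank/backup/
  # --inplace should rewrite only the changed regions of the existing file instead
  # of creating a temporary copy and renaming it over the old one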

Comments?

regards
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: New german white paper on ZFS

2007-06-20 Thread roland
Nice one!

I think this is one of the best and most comprehensive papers about ZFS I have seen.

regards
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: LZO compression?

2007-06-18 Thread roland
>Those are interesting results. 
Yes, indeed.

>Does this mean you've already written lzo support into ZFS? 
No, I'm not a "real" coder, but there is an LZO patch for zfs-fuse which has also been discussed here, afaik.
I'm not sure whether it has already been included in the zfs-fuse tree - I just wanted to check, but the website (http://www.wizy.org/wiki/ZFS_on_FUSE) seems to be down at the moment. If it takes longer to recover (that happens often but normally doesn't last long), I may still have the tree somewhere on the system where I tried zfs-fuse with the LZO patch and did those performance comparisons.

Also see:
http://groups.google.com/group/zfs-fuse/tree/browse_frm/month/2007-04/541ff514ecc880b2?rnum=31&_done=%2Fgroup%2Fzfs-fuse%2Fbrowse_frm%2Fmonth%2F2007-04%3F#doc_a489f630aa4aa189
http://groups.google.com/group/zfs-fuse/tree/browse_frm/month/2007-04/b85b3bd20725fec6?rnum=11&_done=%2Fgroup%2Fzfs-fuse%2Fbrowse_frm%2Fmonth%2F2007-04%3F#doc_b85b3bd20725fec6

>If not, that would be a great next step
If other performance comparisons give similar results, I really think so.
I'm not sure whether my results are reliable, but it's at least worth taking a closer look, because Eric Dillmann also said it performs better than lzjb - at least with zfs-fuse.

>licensing issues can be sorted out later..
Good attitude! :)

The zfs-fuse author/maintainer is Ricardo Correia, and the LZO patch was done by Eric Dillmann. I can provide contact data if you like.

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: LZO compression?

2007-06-17 Thread roland
The last number (2.99x) is the compression ratio, and it was much better than lzjb's.
I'm not sure whether there is some mistake here; I was quite surprised that it was so much better than lzjb.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Adding my own compression to zfs

2007-06-16 Thread roland
An LZO in-kernel implementation for Solaris/SPARC?
Your answer makes me believe it exists.

Could you comment?

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: LZO compression?

2007-06-16 Thread roland
BTW - is there some way to directly compare lzjb vs. LZO compression, to see which performs better and uses less CPU?

Here are the numbers from my little benchmark:

algorithm | time      | compressratio
lzo       | 6m39.603s | 2.99x
gzip      | 7m46.875s | 3.41x
lzjb      | 7m7.600s  | 1.79x

I'm just curious about these numbers - with LZO I got better speed and better compression than with lzjb.

Nothing against lzjb compression - it's pretty nice - but why not take a closer look here? Maybe there is some room for improvement....

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: LZO compression?

2007-06-16 Thread roland
>For what it's worth, at a previous job I actually ported LZO to an OpenFirmware 
>implementation. It's very small, doesn't rely on the standard libraries, and would be 
>trivial to get running in a kernel. (Licensing might be an issue, of course.)

Just out of personal interest - are you speaking of a kernel port?
Linux, Solaris, or which OS?
Could you tell which firmware?

Actually, in-kernel LZO compression for Linux has just been implemented and is being discussed on lkml (see 
http://marc.info/?l=linux-kernel&w=2&r=1&s=lzo&q=b).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs compression - scale to multiple cpu ?

2007-06-16 Thread roland
Hi!

I think I have read somewhere that ZFS gzip compression doesn't scale well, since the in-kernel compression isn't done multi-threaded.

Is this true - and if so, will it be fixed?

What about the default lzjb compression - is it different in this regard?

thanks
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs performance on fuse (Linux) compared to other fs

2007-04-23 Thread roland
> So, at this point in time that seems pretty discouraging for an everyday 
> user, on Linux.

Nobody said that zfs-fuse is ready for an everyday user in its current state! ;)

Although it runs pretty stably by now, major issues remain, and in particular it has not yet been optimized for performance.
You should give it some more time; perhaps performance will improve one day once some work is put into it.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: LZO compression?

2007-04-19 Thread roland
Please be cautious with these benchmarks and don't make early decisions based on them.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: .zfs snapshot directory in all directories

2007-02-26 Thread roland
for what purpose ?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: solaris - ata over ethernet - zfs - HPC

2007-02-06 Thread roland
>We've considered looking at porting the AOE _server_ module to Solaris,
>especially since the Solaris loopback driver (/dev/lofi) is _much_ more
>stable than the loopback module in Linux (the Linux loopback module is a
>stellar piece of crap). 

OK, it's quite old and probably not the most elegant solution - but unstable?
Could you back that up somehow?
I have used the loopback module for years and never had a problem.
Anyway, it's getting a competitor: a bugfixed version of the dm-loop device-mapper target was posted on dm-devel just today.

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Adding my own compression to zfs

2007-01-29 Thread roland
Hey, thanks for the overwhelming private lesson in English colloquialisms :D

Now back to the technical part :)

> # zfs create pool/gzip
> # zfs set compression=gzip pool/gzip
> # cp -r /pool/lzjb/* /pool/gzip
> # zfs list
> NAMEUSED  AVAIL  REFER  MOUNTPOINT
> pool/gzip  64.9M  33.2G  64.9M  /pool/gzip
> pool/lzjb   128M  33.2G   128M  /pool/lzjb
> 
> That's with a 1.2G crash dump (pretty much the most compressible file 
> imaginable). Here are the compression ratios with a pile of ELF binaries 
> (/usr/bin and /usr/lib):

> # zfs get compressratio
> NAME   PROPERTY   VALUE  SOURCE
> pool/gzip  compressratio  3.27x  -
> pool/lzjb  compressratio  1.89x  -

This looks MUCH better than I would ever have expected for smaller files. 

Any real-world data on how good or bad the compressratio is with lots of very small but well-compressible files, for example an untarred Linux source tree (evil for the Solaris evangelists)?

I'm rather curious how effectively gzip will compress here.

for comparison:

sun1:/comptest #  bzcat /tmp/linux-2.6.19.2.tar.bz2 |tar xvf -
--snipp--

sun1:/comptest # du -s -k *
143895  linux-2.6.19.2
1   pax_global_header

sun1:/comptest # du -s -k --apparent-size *
224282  linux-2.6.19.2
1   pax_global_header

sun1:/comptest # zfs get compressratio comptest
NAME  PROPERTY   VALUE  SOURCE
comptest tank  compressratio  1.79x  -
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Adding my own compression to zfs

2007-01-29 Thread roland
> Have a look at:
>
>  http://blogs.sun.com/ahl/entry/a_little_zfs_hack

Thanks for the link, Dick!

This sounds fantastic!

Is the source for that available (yet) somewhere?

>Adam Leventhal's Weblog
>inside the sausage factory

BTW - just wondering - is that an English phrase or some running gag? I saw it a while ago on another blog, so I'm wondering.

Greetings from the beer-and-sausage nation ;)

roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: can I use zfs on just a partition?

2007-01-28 Thread roland
> Take note though, that giving zfs the entire disk gives a possible
> performance win, as zfs will only enable the write cache for the disk
> if it is given the entire disk.

Really?
Why is that?
Is this tunable somewhere, and can I enable the write cache if I'm only using a dedicated partition?
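For illustration only - the device names are made up:

  # whole disk: zfs labels it with an EFI label and can safely enable the write cache
  zpool create tank c1t2d0

  # just one slice: zfs leaves the write cache alone, since other slices on the same
  # disk (ufs, swap, ...) might not tolerate it
  zpool create tank2 c1t2d0s3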
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Adding my own compression to zfs

2007-01-27 Thread roland
Is it planned to add other compression algorithms to ZFS?

lzjb is quite good and performs especially well, but I'd like to have better compression (bzip2?) - no matter how much performance drops with it.

regards
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thumper Origins Q

2007-01-24 Thread Roland Rambau

Chris,

well, "Thumper" is actually a reference to Bambi

The comment about being risque was refering to "Humper" as
a codename proposed for a related server
( and e.g. leo.org confirms that is has a meaning labelled as "[vulg.]" :-)

  -- Roland


Chris Ridd schrieb:

On 24/1/07 9:06, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:


But Fowler said the name was too risque (!).  Fortunately the name
"Thumper" stuck...


I assumed it was a reference to Bambi... That's what comes from having small
children :-)

Cheers,

Chris


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: question about self healing

2007-01-13 Thread roland
Thanks for the info!

> > can zfs protect my data from such single-bit-errors with a single drive ?
> >
>nope.. but it can tell you that it has occurred.

Can it also tell me (or can I use a tool to determine) which data/file is affected by such an error (and needs repair/restore from backup)?
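If I read the docs correctly, this should be the way to find out (the pool name is only an example):

  zpool scrub tank
  zpool status -v tank    # the "errors:" section lists files with permanent errors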
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] question about self healing

2007-01-13 Thread roland
I have come across an interesting article at 

http://www.anandtech.com/IT/showdoc.aspx?i=2859&p=5

It's about SATA vs. SAS/SCSI reliability, saying that typical desktop SATA drives 
"...on average experience an Unrecoverable Error every 12.5 terabytes written 
or read (EUR of 1 in 10^14 bits)."

Since the 1 TB drive is coming out very soon, this really makes me worry about data 
integrity on my backup disks, so the question is:

Will ZFS help detect/prevent such single-bit errors?

I'm fairly sure that it will help if I use a RAID-1 setup with ZFS - its self-healing 
will detect those single-bit errors and correct them - but what about a single-disk setup?

Can ZFS protect my data from such single-bit errors with a single drive?
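One thing I have read about but not yet tried: the "copies" property is supposed to store each block more than once even on a single disk, at the cost of space (the dataset name is only an example):

  zfs create tank/backup
  zfs set copies=2 tank/backup    # affects data written from now on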

regards
roland
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Why is "+" not allowed in a ZFS file system name ?

2007-01-10 Thread roland
# zpool create 500megpool /home/roland/tmp/500meg.dat
cannot create '500megpool': name must begin with a letter
pool name may have been omitted

Huh?
OK, no problem if special characters aren't allowed, but why _this_ weird-looking limitation?
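For the record, starting the name with a letter works (file-backed vdev just for testing):

  zpool create megpool500 /home/roland/tmp/500meg.dat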
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

