Re: [zfs-discuss] mimic size=1024m setting in /etc/vfstab when using rpool/swap (zfs root)

2010-01-31 Thread Prakash Kochummen
Thanks for the reply.

Sorry, I confused you too. When I mentioned UFS, I just meant the UFS root
scenario (pre-U6).

Suppose I have a 136 GB HDD as my boot disk, which has been sliced like this:
s0 - 80 GB (root slice)
s1 - 55 GB (swap slice)
s7 - (SVM metadb)

My understanding was that if I had 16 GB of memory, my "df -h /tmp" output
should show something like
tmp = 16 + 55 - (memory used by Solaris)

And if I specify size=512m in the /etc/vfstab file for swap, my /tmp would
still show $tmp size instead of 512m, but I would not be able to copy a 700 MB
ISO into /tmp.

Hope I am clear. 

Thanks


Re: [zfs-discuss] mimic size=1024m setting in /etc/vfstab when using rpool/swap (zfs root)

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 10:15 PM, Prakash Kochummen wrote:
> Hi,
> 
> While using UFS root, we had an option for limiting the /tmp size, using the
> mount -o size manual option or setting size=1024m in the vfstab.

This is no different when using ZFS root.  /tmp is, by default, a tmpfs file 
system.
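
The same vfstab mechanism still applies; a /etc/vfstab entry along these lines
(the size value is illustrative) caps tmpfs at 1 GB:

  swap  -  /tmp  tmpfs  -  yes  size=1024m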

> Do we have any comparable option available when we use ZFS root? If we
> execute
> zfs set size=1024m rpool/swap
> it resizes the whole swap area, which results in reducing the VM size.

Yes, but the equivalent for a UFS root would be to format the swap device and
change the partition size.  Clearly, the ZFS solution is simpler and safer.
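
(On a zvol the property is volsize, not size. A sketch of the usual sequence,
assuming the default rpool/swap layout:

  swap -d /dev/zvol/dsk/rpool/swap
  zfs set volsize=16g rpool/swap
  swap -a /dev/zvol/dsk/rpool/swap

The swap device should be removed from use before the zvol is resized.)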

> AFAIK, with the UFS option for limiting swap size, we are just limiting the
> filesystem behaviour of the tmpfs filesystem, while the kernel will still be
> able to use the swap space for paging etc. Is this a correct understanding?

By default, the Solaris 10 installer does not use UFS for swap.  So there is no
"ufs option for limiting swap size."
 -- richard



[zfs-discuss] mimic size=1024m setting in /etc/vfstab when using rpool/swap (zfs root)

2010-01-31 Thread Prakash Kochummen
Hi,

While using UFS root, we had an option for limiting the /tmp size, using the
mount -o size manual option or setting size=1024m in the vfstab.

Do we have any comparable option available when we use ZFS root? If we execute
zfs set size=1024m rpool/swap
it resizes the whole swap area, which results in reducing the VM size.

AFAIK, with the UFS option for limiting swap size, we are just limiting the
filesystem behaviour of the tmpfs filesystem, while the kernel will still be
able to use the swap space for paging etc. Is this a correct understanding?

Thanks in advance. 

Rgds
PK


Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Mark
I thank each of you for all of your insights. I think if this was a production 
system I'd abandon the idea of 2 drives and get a more capable system, maybe a 
2U box with lots of SAS drives so I could use RAIDZ configurations. But in this 
case, I think all I can do is try some things until I understand it better. 

Please continue to add to the discussion as I learn something each time someone 
posts a reply.
Thanks again


Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Christo Kutrovsky
Thanks Bill, that looks relevant. Note, however, that this only happens with
gzip compression - but it's definitely something I've experienced.

I've decided to wait for the next full release before upgrading. I was just 
wondering if the problem was resolved.

I'll migrate to COMSTAR soon; I hope the kernel-mode iSCSI will make a
difference.


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Jacob Ritorto
Hey Mark,
I spent *so* many hours looking for that firmware.  Would you please
post the link?  Did the firmware download you found come with FCode? I'm
running a Blade 2000 here (SPARC).

Thx
Jake

On Jan 26, 2010 11:52 AM, "Mark Nipper"  wrote:

> It may depend on the firmware you're running. We've
> got a SAS1068E based
> card in Dell R710 at...
Well, I may have misspoken.  I just spent a good portion of yesterday
upgrading to the latest firmware myself (downloaded from Supermicro's FTP site,
version 1.26.00 also, after I figured out I had to pass the -o option to
mptutil to force the flash, since it was complaining about a mismatched card
or some such), and I thought that the machine had locked up again later in
the day yesterday because I couldn't ssh into it.

To my surprise, though, I was able to log into the machine just fine this
morning directly on the console from the command line.  It seems the snv_125
bug with /dev/ptmx bit me (the "error: /dev/ptmx: Permission denied" problem
that required me to track down the release notes for snv_125 to figure out)
and the server was otherwise happy.  More importantly, the zpool activity had
all finished and I have three clean spares again!  Normally this amount of I/O
would have totally killed the machine!

So somewhere between upgrading the firmware to the latest version and
upgrading to snv_131, it looks like the problem may have actually been
addressed.  I'm guardedly optimistic at this point, given the previous
problems I've had so far with this on-board controller.

Interesting to hear that someone else with the same chip, but on an expansion
card, has no problems (my problems were with the on-board chip).



[zfs-discuss] quick overhead sizing for DDT and L2ARC

2010-01-31 Thread Daniel Carosone
Two related questions:

 - given an existing pool with dedup'd data, how can I find the
   current size of the DDT?  I presume some zdb work to find and dump the
   relevant object, but what specifically? 

 - what's the expansion ratio for the memory overhead of L2ARC entries?
   If I know my DDT can fit on a ssd of size X, that's good - but how
   much RAM do I need for that ssd to be usable effectively as l2 cache? 
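
(One candidate for the first, assuming a dedup-capable build, b128 or later:
zdb's -D family of options dumps DDT statistics, e.g.

  zdb -DD tank

prints entry counts and on-disk/in-core sizes per DDT object.)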

--
Dan.



Re: [zfs-discuss] Obtaining zpool volume size from a C coded application.

2010-01-31 Thread Petros Koutoupis
Ian,

Thank you very much for your response. On Friday, as I was writing my post, I
was navigating through the source code of the zpool userland binary and had
observed that method. I was just hoping that there was some other, simpler
way. I guess not - unless the time were taken to patch ZFS with an additional
supported ioctl() to return a total accessible "block" count for the zpool.
That is the beauty of open source.

Again, thank you very much for your reply. You have been very helpful.


Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Eric D. Mudama

On Sat, Jan 30 at 18:07, Frank Middleton wrote:

After more than a year or so of experience with ZFS on drive constrained
systems, I am convinced that it is a really good idea to keep the root pool
and the data pools separate. 


That's what we do at the office.  The data pool is a collection of
mirror vdevs and is backed up to another "live" system, while we just put
the boot disk on a single SSD with no extra redundancy.

It exposes us to a disk failure making the system unable to boot, but
I know I can re-install OpenSolaris from scratch on this system in
about 45 minutes (we're only about a dozen pfexec commands away from a
"default" installation), so the tradeoff was worth it to us given the
expected low failure rate of the SSD combined with how little the
rpool device gets written to in normal operation.

In a pinch, I can just enable smb on the backup machine in read-only
fashion to give people access to their files while the primary is
rebuilding.  If the backup pool design isn't fast enough, I could just
drag the disks over from the primary and import the pool on the backup
server.

--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org



Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Richard Elling
See also PSARC 2008/769 which considers 4 KB blocks for the entire OS
in a phased approach.
http://arc.opensolaris.org/caselog/PSARC/2008/769/inception.materials/design_doc
 -- richard



Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Kjetil Torgrim Homme
Mark Bennett  writes:

> Update:
>
> For the WD10EARS, the blocks appear to be aligned on the 4k boundary
> when zfs uses the whole disk (whole disk as EFI partition).
>
> Part      Tag    Flag    First Sector    Size        Last Sector
>  0        usr    wm      256             931.51Gb    1953508750
>
> calc: 256*512/4096 = 32

I'm afraid this isn't enough.  If you enable compression, any ZFS write
can be unaligned.  Also, if you're using raid-z with an odd number of
data disks, some (most?) of your stripes will be misaligned.

ZFS needs to use 4096 octets as its basic block to fully exploit
the performance of these disks.
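
You can check which allocation size ZFS chose for a vdev with zdb (pool name
hypothetical):

  zdb tank | grep ashift

ashift: 9 means 512-byte allocation units; these drives would want ashift: 12
(2^12 = 4096).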
-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game



Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-31 Thread Frank Cusack
On January 30, 2010 10:27:41 AM -0800 Michelle Knight 
 wrote:

I did this as a test because I am aware that zpools don't like drives
switching controllers without being exported first.


They don't mind it at all.  It's one of the great things about zfs.

What they do mind is being remounted on a system with a different
hostid without having been exported first.

-frank


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Mark Bennett
Update:

For the WD10EARS, the blocks appear to be aligned on the 4k boundary when zfs 
uses the whole disk (whole disk as EFI partition).

Part      Tag    Flag    First Sector    Size        Last Sector
 0        usr    wm      256             931.51Gb    1953508750

calc: 256*512/4096 = 32
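
(That is, the partition starts at sector 256, and 256 * 512 bytes = 131072
bytes = 32 * 4096 bytes, so the first sector lands exactly on a 4 KiB
boundary.)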

Mark.


Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Bill Sommerfeld

On 01/31/10 07:07, Christo Kutrovsky wrote:

I've also experienced similar behavior (short freezes) when running
zfs send|zfs receive with compression on LOCALLY on ZVOLs again.

Has anyone else experienced this? Do you know of any bug? This is on
snv_117.


you might also get better results after the fix to:

6881015 ZFS write activity prevents other threads from running in a 
timely manner


which was fixed in build 129.

As a workaround, try a lower gzip compression level -- higher gzip
levels usually burn lots more CPU without significantly increasing the
compression ratio.
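
For example (dataset name hypothetical):

  zfs set compression=gzip-1 tank/myvol

gzip-1 through gzip-9 are all valid values; plain compression=gzip is
equivalent to gzip-6.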

- Bill



Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Daniel Carosone
On Sat, Jan 30, 2010 at 06:07:48PM -0500, Frank Middleton wrote:
> On 01/30/10 05:33 PM, Ross Walker wrote:
>> Just install the OS on the first drive and add the second drive to form
>> a mirror. 
>
> After more than a year or so of experience with ZFS on drive constrained
> systems, I am convinced that it is a really good idea to keep the root pool
> and the data pools separate.

Odd.   I have come to the opposite conclusion.

As I became more comfortable using send|recv for rebuilding machines
and rearranging disks, most of the reasons for such separation went
away.  ZFS mechanisms (e.g. reservations, quotas and multiple BEs)
really are a better and more flexible way.

The remaining reasons come from the constraints on the root pool: single
non-raidz vdevs, no slog.  Those can be restrictive, but they share a
common factor: they both need more disks or controller ports, which you may
not have anyway.  Often, systems must be built to the tightest constraint,
and if that is the number of disks, then I'm more than happy to put
data in rpool in return for other benefits.

USB sticks and PATA-to-CF cards for rpool are useful alternatives,
almost as a way to cheat some extra ports and case space.  For me at
least, only as a way of working around the rpool constraints (e.g., it
lets me have raidz data in a typical generic PC with a 4-disk maximum).
I wouldn't use them *just* to keep data and BEs in separate pools; if
raidz ever becomes bootable, I'd switch to that.

There's something to be said for having a usable BE together
with your data pool, if you ever need to move the disks elsewhere
quickly because of a fault.  Your quick replacement board might not
have the PATA ports for your CF cards at all, for example.  I've taken
to replicating the datasets from rpool into the data pool in such
situations, just as a self-contained backup, even if they're not
bootable.

--
Dan.





Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Michelle Knight
Perhaps an expert could kindly chime in with an opinion on making the drives
one large zpool (rather than separate hard partitions) and using the various
options within ZFS to ensure that there is always disk space available to the
operating system (a zpool reservation) ... but the more I sit and think, the
less sure I am how that would work on the rpool.

There are so many ways of handling this. I still think I'd go with hard 
partitioning for the OS and data, but that is only because of my lack of 
overall experience with ZFS.


Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Michelle Knight
Correct that ... I have seen a bad batch of drives fail in close succession; 
down to a manufacturing problem.


Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Michelle Knight
Hi whitetr6,

An interesting situation, to which there is no "right" answer - in fact, many
different answers depending on where you put your priorities.

I'm with Frank on keeping data and OS separate. As you've only got two drives,
I'd put 30 to 40 GB as an OS pool on each drive (making each
individually bootable - I was helped out with the method on this thread -
http://opensolaris.org/jive/thread.jspa?messageID=454491) and then use the
remainder of each drive as data.

So you've sort of got...

Drive 1 - <-30gig OS-> <-720gig data->
Drive 2 - <-30gig OS-> <-720gig data->

...completely mirrored and independently bootable.
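
As a sketch (device and pool names hypothetical, slices per your own layout),
the two mirrors would be built roughly like this:

  zpool attach rpool c0t0d0s0 c0t1d0s0          # mirror the OS slices
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  zpool create data mirror c0t0d0s1 c0t1d0s1    # mirror the data slices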

OpenSolaris creates its boot partition as a zpool anyway, so it is relatively
straightforward to mirror it. I made a short video of the advice that I was
given, here - http://www.youtube.com/watch?v=tpzsSptzmyA - and that technique
will also cover you for installing on drives of different sizes, so if you
upgrade the hard drives later, this technique will hold solid and also enable
you to add another drive should you have to replace one. You might have to
adjust that advice if you're not using your entire drive for the system
partition.

The only thing I've ever seen take out both internal drives at the same time
is a power surge, either from an external source or from an overheated PSU
blowing in the PC. Surge protection and adequate cooling should minimise the
risks.

I'm assuming, based on what you've already written, that you've got a routine
to take snapshots and get them off the box.

Having an OS booting from USB is possible; from what I've seen of ZFS so far, I 
believe it would be possible to attach two USB keys and have them mirrored and 
bootable also! But personally I don't see any real need to do it this way.

So, if I were in your shoes, I'd partition as per above and run the two hard 
disks ... but have an external drive available for backup.

It would be worth practising the technique of mirroring the root partition, 
handling zpools and recovering from failures before committing data to it. The 
practice is well worth it IMHO.

I hope this helps.


Re: [zfs-discuss] Media server build

2010-01-31 Thread Tiernan OToole

Thanks for the info.

I will take the Napp-it question offline with Günther and see if I can
fix that.

An Intel NIC sounds like a plan... I was thinking of sticking two in anyway.
Just looking at the cards I have, though: they are GigaNIX 2032T, and
searching for them online turns up nothing when I include Solaris or
OpenSolaris... Pity...

Finally, I will use the 750 somewhere else and use the three 500s I have
here... should be enough to start with...



Tiernan OToole
Software Developer
Google Talk: lsmart...@gmail.com  Skype: tiernanotoole  MSN: lotas...@hotmail.com

On 31/01/2010 18:43, Günther wrote:

hello

napp-it is just a simple cgi-script.

Common reasons for error 500:
- file permissions: just set all files in /napp-it/.. to 777 recursively
  (rwx for all - napp-it will set them correctly at first call)

- files must be copied in ASCII mode
  (have you used WinSCP? otherwise you must take care of this)

- the cgi-bin folder must be allowed to hold executables
  (this is set in the Apache config file /etc/apache2/sites-enabled/000-default;
  it should be OK by default)

- missing modules (not a problem on Nexenta)

Please look at the Apache error log file at
/var/log/apache2/error.log

There you will find the reason for this error.

Your NICs:
I suppose your other NICs are not supported by default.
You can try to find and install drivers or (better):
forget them and buy an Intel NIC (about 50 euro, no problem).

Your HD:
if you build a RAID-1 or RAID-Z, the capacity depends on
the smallest drive, so your 750 GB HD would be used as 500 GB.

gea






Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-31 Thread William Bauer
This comment has only to do with booting an old drive on a different 
computer--a bit of a tangent to this discussion:

I've also used this to migrate to a new computer with larger disks. The only 
caveat I've run into is you need to move from SATA/AHCI to the same, or 
SATA/IDE to the same. They can be different controllers, but for me they have 
to match their AHCI mode. I'm sure someone has a method to address that issue.


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-31 Thread William Bauer
Richard already addressed this process, but I do this basic concept all the
time (moving to a larger disk or new computer).  I simply create the partition
on the new disk with format, then "zpool attach -f" the larger drive.  Once the
mirror has finished resilvering, use installgrub as normal.  Remove the smaller
drive and you're done.  You get the new, larger capacity, in my experience.
I've done this several times without an issue.
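
In command form, the sequence is roughly (device names are placeholders):

  zpool attach -f rpool c0t0d0s0 c1t0d0s0
  zpool status rpool      # wait for the resilver to finish
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
  zpool detach rpool c0t0d0s0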

This is a bit of an abbreviation, but don't overthink it.

I've also used this to migrate to a new computer with larger disks.  The only 
caveat I've run into is you need to move from SATA/AHCI to the same, or 
SATA/IDE to the same.  They can be different controllers, but for me they have 
to match their AHCI mode.  I'm sure someone has a method to address that issue.


Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Günther
hello

I also suggest using your 750 GB drives as a RAID-1 data pool.
I usually use one, or better two (RAID-1), 2.5" drives in the floppy bay
as the system drive.


gea
http://www.napp-it.org/hardware/


[zfs-discuss] Odd ZFS resilver time

2010-01-31 Thread William Bauer
I have an external disk that was offline yesterday, so today when I booted my 
system I made sure it was turned on.  ZFS of course brought it current with the 
pool (I have a 3 disk zfs mirror), and for the first time I saw this result for 
the resilver process:

resilver completed after 307445734561825859h50m with 0 errors

Note it ends in xx hours and 50 minutes.  When I convert the hours portion to 
octal or binary numbers, just because I'm curious, it reveals some interesting 
patterns.

I have no problem, but I just found this odd.  Any ideas what happened?  The 
resilver completed in a matter of minutes--under 10.


Re: [zfs-discuss] Media server build

2010-01-31 Thread Günther
Hello,

napp-it is just a simple cgi-script.

Common reasons for error 500:
- file permissions: just set all files in /napp-it/.. to 777 recursively
  (rwx for all - napp-it will set them correctly at first call)

- files must be copied in ASCII mode
  (have you used WinSCP? otherwise you must take care of this)

- the cgi-bin folder must be allowed to hold executables
  (this is set in the Apache config file /etc/apache2/sites-enabled/000-default;
  it should be OK by default)

- missing modules (not a problem on Nexenta)

Please look at the Apache error log file at
/var/log/apache2/error.log

There you will find the reason for this error.
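
For example:

  chmod -R 777 /napp-it
  tail /var/log/apache2/error.log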

Your NICs:
I suppose your other NICs are not supported by default.
You can try to find and install drivers or (better):
forget them and buy an Intel NIC (about 50 euro, no problem).

Your HD:
if you build a RAID-1 or RAID-Z, the capacity depends on
the smallest drive, so your 750 GB HD would be used as 500 GB.

gea


Re: [zfs-discuss] ZFS Snapshots

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 9:39 AM, Bob Friesenhahn wrote:

> On Sun, 31 Jan 2010, Tony MacDoodle wrote:
> 
>> Has anyone encountered any file corruption when snapping ZFS file systems? 
>> How does ZFS handle open files when compared to other file system types that 
>> use similar technology ie. Veritas, etc...??
> 
> I see that Richard did not really answer your question.

Yes, there are many cases where applications can get out of sync, as Bob 
mentions.
But from the file system perspective, ZFS snapshots are safe.

> 
> Zfs snapshot captures the exact state of data which is already committed to 
> disk.  Zfs may buffer written data up to 30 seconds before committing it to 
> disk.  Synchronous writes go to disk essentially immediately, and before 
> returning control back to the application.

Also, when you take a snapshot, the txgs will be committed prior to the snap.
 -- richard

> 
> Since written data may be in an inconsistent state, it is certainly quite 
> possible for a snapshot to contain "corrupted" data from the perspective of 
> the application, similar to the way that data may be corrupted after a system 
> panic or power failure.  An application which knows about the snapshot 
> mechanism can synchronise its data to disk before requesting a snapshot.
> 
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] ZFS Snapshots

2010-01-31 Thread Bob Friesenhahn

On Sun, 31 Jan 2010, Tony MacDoodle wrote:

Has anyone encountered any file corruption when snapping ZFS file 
systems? How does ZFS handle open files when compared to other file 
system types that use similar technology ie. Veritas, etc...??


I see that Richard did not really answer your question.

Zfs snapshot captures the exact state of data which is already 
committed to disk.  Zfs may buffer written data up to 30 seconds 
before committing it to disk.  Synchronous writes go to disk 
essentially immediately, and before returning control back to the 
application.


Since written data may be in an inconsistent state, it is certainly 
quite possible for a snapshot to contain "corrupted" data from the 
perspective of the application, similar to the way that data may be 
corrupted after a system panic or power failure.  An application which 
knows about the snapshot mechanism can synchronise its data to disk 
before requesting a snapshot.
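
A minimal sketch of that coordination (dataset and snapshot names
hypothetical):

  # have the application flush/fsync its files, or at least run:
  sync
  zfs snapshot tank/data@app-consistent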


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Christo Kutrovsky
Thanks for your replies.

I am aware of the 512-byte concept, hence my selection of 8 KB (matched to the
8 KB NTFS cluster size). Even a 20% reduction is still good; that's like
having 20% extra RAM (for cache).

I haven't experimented with the default lzjb compression; if I want to
compress something, I usually want it compressed well. Originally I had tried
64 KB, but then I discovered Windows does partial reads and writes (not entire
clusters), so I decided to pick 8 KB - something that fits in a 9k jumbo frame.

Either way, I think it's very bad for OS-level compression to cause your
server to stop responding to pings (other side effects aside).

I am running 117, so that fix should be in place. Nevertheless, it does point
out that there could be other things wrong with gzip compression and ZFS.


Re: [zfs-discuss] demise of community edition

2010-01-31 Thread Tom Bird

Richard Elling wrote:


It is not true that there is only a "horrible Gnome based installer."  Try the 
Automated
Installation (AI) version instead of the LiveCD if you've used JumpStart 
previously.

But if you just want a text-based installer and AI is overkill, then b131 is available 
with the Text Installer Project.  Downloads available on:

http://www.genunix.org
http://hub.opensolaris.org/bin/view/Project+caiman/TextInstallerProject


Thanks, this looks useful.


Nothing is sane about Solaris 10 installer, good riddance :-)


It wasn't that bad! :)

PS: sorry for this not being a specifically ZFS question, but ZFS is the
reason I use OpenSolaris, so there's a link in there somewhere.


Tom


Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 7:21 AM, Henrik Johansson wrote:
> Hello Christo,
> 
> On Jan 31, 2010, at 4:07 PM, Christo Kutrovsky wrote:
> 
>> Hello All,
>> 
>> I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9
>> and blocksize=8K. The server is a 2-core P4 3.0 GHz with 5 GB of RAM.

For NTFS, use recordsize=4 KB, but I wouldn't worry too much about compression
at that recordsize.

>> Whenever I start copying files from Windows onto the ZFS disk, after about
>> 100-200 MB have been copied the server starts to experience freezes. I have
>> iostat running, which freezes as well. Even pings on both of the network
>> adapters report either 4000 ms or timeouts while the freeze is happening.
>> 
>> I have reproduced the same behavior with a 1 GB test ZVOL. Whenever I do
>> sequential writes of 64 KB with compression=gzip-9 I experience the freezes.
>> With compression=off it's all good.

gzip-9 is a pig.  b115 includes the fix for:
CR6586537  async zio taskqs can block out userland commands
which greatly reduced this effect.  But you might consider the default gzip-6
instead.
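
On a zvol the relevant property is volblocksize, and it can only be set at
creation time; a sketch (name and size hypothetical):

  zfs create -V 100g -o volblocksize=4k -o compression=gzip-6 tank/ntfsvol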

Back to my note above, compression is done to records, but the size is still
512 byte sectors. In other words, there are 8 sectors in a 4 KB record, so
compression is bounded by 12.5% chunks.

>> I've also experienced similar behavior (short freezes) when running zfs 
>> send|zfs receive with compression on LOCALLY on ZVOLs again.
> 
> I think gzip in ZFS has a reputation for being somewhat heavy on system
> resources; that said, it would be nice if it did not have such a large impact
> on low-level functions. Have a look in the archive; search, for example, for
> "death-spiral" or "Death-spiral revisited". Have you also tried the default
> compression algorithm (lzjb, compression=on)?

Good idea, and for small records compression gains less.
 -- richard



Re: [zfs-discuss] demise of community edition

2010-01-31 Thread Torrey McMahon
This is a topic for indiana-discuss, not zfs-discuss. If you read 
through the archives of that alias you should see some pointers.


On 1/31/2010 11:38 AM, Tom Bird wrote:

Afternoon,

I note to my dismay that I can't get the "community edition" any more 
past snv_129, this version was closest to the normal way of doing 
things that I am used to with Solaris <= 10, the standard OpenSolaris 
releases seem only to have this horrible Gnome based installer that 
gives you only one option - install everything.


Am I just doing it wrong or is there another way to get OpenSolaris 
installed in a sane manner other than just sticking with community 
edition at snv_129?





Re: [zfs-discuss] ZFS Snapshots

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 6:55 AM, Tony MacDoodle wrote:

> Has anyone encountered any file corruption when snapping ZFS file systems?

I've had no problems. My first snapshot was in June 2006 and I've been regularly
snapshotting since then.

> How does ZFS handle open files when compared to other file system types that
> use similar technology ie. Veritas, etc...??

VxFS is not at all similar.  ZFS is a copy-on-write file system, so a snapshot
merely changes the free side of the alloc/free process.
 -- richard



Re: [zfs-discuss] demise of community edition

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 8:38 AM, Tom Bird wrote:

> Afternoon,
> 
> I note to my dismay that I can't get the "community edition" any more past 
> snv_129, this version was closest to the normal way of doing things that I am 
> used to with Solaris <= 10, the standard OpenSolaris releases seem only to 
> have this horrible Gnome based installer that gives you only one option - 
> install everything.

It is true that SXCE b130 is the last SXCE build and only available until 
31-jan-10.

It is not true that there is only a "horrible Gnome based installer."  Try the 
Automated
Installation (AI) version instead of the LiveCD if you've used JumpStart 
previously.

But if you just want a text-based installer and AI is overkill, then b131 is 
available 
with the Text Installer Project.  Downloads available on:
http://www.genunix.org
http://hub.opensolaris.org/bin/view/Project+caiman/TextInstallerProject

It is not true that the LiveCD installer installs everything.  It is only ~700
MBytes, while SXCE is > 3 GB.  The difference is that many things are installed
after the initial installation (e.g. OpenOffice, Adobe Reader, etc.).

> Am I just doing it wrong or is there another way to get OpenSolaris installed 
> in a sane manner other than just sticking with community edition at snv_129?

Nothing is sane about Solaris 10 installer, good riddance :-)
 -- richard



[zfs-discuss] demise of community edition

2010-01-31 Thread Tom Bird

Afternoon,

I note to my dismay that I can't get the "community edition" any more
past snv_129.  This version was closest to the normal way of doing things
that I am used to with Solaris <= 10; the standard OpenSolaris releases
seem only to have this horrible Gnome-based installer that gives you
only one option - install everything.


Am I just doing it wrong or is there another way to get OpenSolaris 
installed in a sane manner other than just sticking with community 
edition at snv_129?


--
Tom

// www.portfast.co.uk -- internet services and consultancy
// hosting from 1.65 per domain


[zfs-discuss] ? NFSv4 and ZFS: removing write_owner attribute does not stop a user changing file group ownership

2010-01-31 Thread Tim Thomas

Hi

I am accessing files in a ZFS file system via NFSv4.

I am not logged in as root.

File permissions look as expected when I inspect them with ls -v and ls -V

I only have owner and group ACLs...nothing for everyone.

bash-3.00$ id
uid=100(timt) gid=10001(ccbcadmins)
bash-3.00$ groups
ccbcadmins staff
bash-3.00$ ls -v testacl
-rwxrwx---+  1 timt ccbcadmins   0 Jan 31 16:24 testacl
 
     0:owner@:read_data/write_data/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/write_owner/synchronize:allow
     1:group@:read_data/write_data/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/write_owner/synchronize:allow

I can change the group ownership of a file to any group I am a member of,
but not to groups I am not a member of - this is as expected.


My question is: how do I make it so that I CANNOT change the group ownership
of files that I own?


I have changed the ACLs on the file so that owner and group do not have
write_owner permissions, but I can still change the group ownership as
before. I have tried removing write_owner from the allow permissions and
adding a deny ACL which denies write_owner permissions.
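
(For reference, the deny entries shown below were added with the Solaris chmod
ACL syntax, roughly:

  chmod A+owner@:write_owner:deny testacl
  chmod A+group@:write_owner:deny testacl
  chmod A+user:timt:write_owner:deny testacl )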


bash-3.00$ ls -v testacl
-rwxrwx---+  1 timt ccbcadmins   0 Jan 31 16:23 testacl
 0:user:timt:write_owner:deny
 1:group@:write_owner:deny
 2:owner@:write_owner:deny
 
     3:owner@:read_data/write_data/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/synchronize:allow
     4:group@:read_data/write_data/append_data/read_xattr/write_xattr/execute
         /delete_child/read_attributes/write_attributes/delete/read_acl
         /write_acl/synchronize:allow

but this makes no difference... I can still change the group ownership.

Clearly I am doing something wrong... or have incorrect expectations.

Anyone got any ideas on this?

Thanks

Tim
--
Tim Thomas
Open Storage Technical Specialist
Sun Microsystems UK

Mobile: +44 (0)7802-212209
DDI: +44 (0)161 905-8097
Email: tim.tho...@sun.com





Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Henrik Johansson
Hello Christo,

On Jan 31, 2010, at 4:07 PM, Christo Kutrovsky wrote:

> Hello All,
> 
> I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9 and
> blocksize=8K. The server is a 2-core P4 3.0 GHz with 5 GB of RAM.
> 
> Whenever I start copying files from Windows onto the ZFS disk, after about
> 100-200 MB have been copied the server starts to experience freezes. I have
> iostat running, which freezes as well. Even pings on both of the network
> adapters report either 4000 ms or timeouts while the freeze is happening.
> 
> I have reproduced the same behavior with a 1 GB test ZVOL. Whenever I do
> sequential writes of 64 KB with compression=gzip-9 I experience the freezes.
> With compression=off it's all good.
> 
> I've also experienced similar behavior (short freezes) when running zfs 
> send|zfs receive with compression on LOCALLY on ZVOLs again.

I think gzip in ZFS has a reputation for being somewhat heavy on system
resources; that said, it would be nice if it did not have such a large impact
on low-level functions. Have a look in the archive; search, for example, for
"death-spiral" or "Death-spiral revisited". Have you also tried the default
compression algorithm (lzjb, compression=on)?

Regards

Henrik
http://sparcv9.blogspot.com



[zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Christo Kutrovsky
Hello All,

I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9 and
blocksize=8K. The server is a 2-core P4 3.0 GHz with 5 GB of RAM.

Whenever I start copying files from Windows onto the ZFS disk, after about
100-200 MB have been copied the server starts to experience freezes. I have
iostat running, which freezes as well. Even pings on both of the network
adapters report either 4000 ms or timeouts while the freeze is happening.

I have reproduced the same behavior with a 1 GB test ZVOL. Whenever I do
sequential writes of 64 KB with compression=gzip-9 I experience the freezes.
With compression=off it's all good.

I've also experienced similar behavior (short freezes) when running zfs
send|zfs receive with compression on, LOCALLY, on ZVOLs again.

Has anyone else experienced this? Do you know of any bug? This is on snv_117.


[zfs-discuss] ZFS Snapshots

2010-01-31 Thread Tony MacDoodle
Has anyone encountered any file corruption when snapping ZFS file systems?
How does ZFS handle open files compared to other file system types that
use similar technology, i.e. Veritas, etc.?

Thanks