[zfs-discuss] ZFS Locking Up periodically

2009-04-01 Thread Andrew Robert Nicols
I've recently re-installed an X4500 running Nevada b109 and have been
experiencing ZFS lock ups regularly (perhaps once every 2-3 days).

The machine is a backup server and receives hourly ZFS snapshots from
another thumper - as such, the amount of zfs activity tends to be
reasonably high. After about 48-72 hours, the file system seems to lock
up and I'm unable to do anything with the ZFS system - e.g. zfs list, df on
the file system in question, zfs receive, etc. zpool still lists
information and a targeted zfs list does work. NFS also locks up. I'm
unable to test whether I can write, as the file system is read-only.

The structure of the pool is something like:
/thumperpool  -- Does not lock up
/thumperpool/mnt  -- Does lock up and is the file system receiving lots of
  snapshots

The server is currently in its locked state, so if anyone can suggest
useful diagnostics to run on it while it's like this, please get back to me
ASAP. I will need to restart the box this afternoon so that our backups
aren't too far out of sync.
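For reference, this is the sort of state I could capture while it's hung, before the reboot. A rough sketch using standard Solaris tools (output paths are arbitrary):

```shell
# Pool and error state - zpool usually still responds when zfs hangs
zpool status -v thumperpool > /var/tmp/zpool-status.txt
zpool iostat -v thumperpool 5 3 >> /var/tmp/zpool-status.txt

# Kernel thread stacks - often shows where the zfs threads are blocked
echo "::threadlist -v" | mdb -k > /var/tmp/threadlist.txt

# Stack of a hung userland process, e.g. a stuck 'zfs list'
zfs list & sleep 10; pstack $! > /var/tmp/zfs-list-stack.txt

# Any fault management / device error telemetry
fmdump -eV > /var/tmp/fmdump.txt
```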

Thanks in advance,

Andrew Nicols

-- 
Systems Developer

e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +44 (0)1524 5 10147

Lancaster University Network Services is a limited company registered in
England and Wales. Registered number: 4311892. Registered office:
University House, Lancaster University, Lancaster, LA1 4YW


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Casper Dik

River Tarnell wrote:
 Matthew Ahrens:
 ZFS user quotas (like other zfs properties) will not be accessible over NFS;
 you must be on the machine running zfs to manipulate them.
 
 does this mean that without an account on the NFS server, a user cannot see 
 his
 current disk use / quota?

That's correct.


So that's different from UFS with NFS, where rquotad and the NFS client 
code make sure that you can see the quota from the UFS server.

I know that this is one of the additional protocols developed for NFSv2 
and NFSv3; does NFSv4 have a similar mechanism to get the quota?

Is there any reason why rquota is not supported?  (It's not about 
manipulating quotas, only displaying them.)

Casper



Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Darren J Moffat

casper@sun.com wrote:

[...]


I know that this is one of the additional protocols developed for NFSv2 
and NFSv3; does NFSv4 have a similar mechanism to get the quota?


Is there any reason why rquota is not supported?  (It's not about 
manipulating quotas, only displaying them.)


If we had the .zfs/props/propname RFE implemented that would allow 
users to see this regardless of what file sharing protocol they use.

As well as lots of other very interesting info about the filesystem.

--
Darren J Moffat


Re: [zfs-discuss] Can this be done?

2009-04-01 Thread Michael Shadle
I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.

According to what I've heard before, ZFS should automagically
recognize this new location and have no problem, right?

Or do I need to do some sort of detach/etc. process first?


Re: [zfs-discuss] Can this be done?

2009-04-01 Thread Robert Thurlow

Michael Shadle wrote:

I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.

According to what I've heard before, ZFS should automagically
recognize this new location and have no problem, right?

Or do I need to do some sort of detach/etc. process first?


I've got a 4-way RaidZ pool made from IDE disks that I've
connected in three different ways:

- via a Firewire-to-IDE case
- via a 4-port PCI-to-IDE card
- via 4 SATA-to-IDE converters

Each transition resulted in different device IDs, but I
could always see them with 'zpool import'.  You should
'zpool export' your pool first.
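Concretely, the safe sequence looks something like this (pool name is made up; note that a root pool you are currently booted from cannot be exported, so this applies to data pools):

```shell
# Before moving the disks: cleanly export the pool
zpool export mypool

# ...power down, move the disks to the new controller, boot...

# List the pools visible on the new device paths
zpool import

# Import by name - ZFS finds the devices by their labels, not by path
zpool import mypool
```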

Rob T


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Robert Milkowski
Hello Matthew,

Tuesday, March 31, 2009, 9:16:42 PM, you wrote:

MA Robert Milkowski wrote:
 Hello Matthew,
 
 Excellent news.
 
 Wouldn't it be better if logical disk usage were accounted rather than
 physical? I mean, when compression is enabled, should quota be
 accounted based on the logical file size, or the physical size as in du?
MA The compressed space *is* the amount of space charged, same as struct stat's
MA st_blocks and du(1) (and the referenced property, and the used property,
MA etc).  I don't think that we ever report the uncompressed size; it's only
MA available indirectly by multiplying by the compressratio property.

What I mean is: assume user joe has a quota of 1GB.
So he creates a file filled with 1's which is 5GB in size.
The ls utility will confirm it is 5GB in size, while du will say it is
100MB (I haven't done the actual test, but you get the idea).

So from a user perspective, isn't it a little bit confusing that he
managed to write more data than he thinks he is allowed to?

From a sysadmin perspective, Nicolas is probably right that in most
cases they would care more about physical usage than logical - or would
they?
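The ls-vs-du gap is easy to reproduce outside ZFS with any compressor; a quick sketch of the effect, with plain gzip standing in for ZFS compression (file paths are arbitrary):

```shell
# A 10 MB file of a single repeated byte - the best case for compression
dd if=/dev/zero of=/tmp/ones.dat bs=1024 count=10240 2>/dev/null

# Logical size: what ls (and the user) sees
wc -c < /tmp/ones.dat

# Size after compression - on a compressed ZFS dataset, du and the
# quota accounting charge something analogous to this
gzip -c /tmp/ones.dat > /tmp/ones.dat.gz
wc -c < /tmp/ones.dat.gz
```

The compressed copy should come out at well under 1% of the logical size, so a 1GB physical quota really could hold several GB of data like this.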

-- 
Best regards,
 Robert Milkowski
   http://milek.blogspot.com



Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Richard Elling

Robert Milkowski wrote:

[...]

So from a user perspective, isn't it a little bit confusing that he
managed to write more data than he thinks he is allowed to?

From a sysadmin perspective, Nicolas is probably right that in most
cases they would care more about physical usage than logical - or would
they?


I think what this says is that, from a practical perspective, quotas are
either ineffective or incomprehensible for modern systems.  By ineffective
I mean that you cannot limit a user's use of space, only the space for
which the user is charged.  Perhaps we should rename it from
quota to goodwill, to borrow a term from the accounting world :-)
-- richard



Re: [zfs-discuss] tmpfs under zfs pool

2009-04-01 Thread A Darren Dunham
On Wed, Apr 01, 2009 at 12:41:25AM +, A Darren Dunham wrote:
 On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
  Hi all,
  Could someone give a hint if it's possible to create rpool/tmp, mount
  it as /tmp so that tmpfs has some disk-based back-end instead of
  memory-based size-limited one.
 
 You mean you want /tmp to be a regular ZFS filesystem instead of a
 tmpfs one.  Yes, that's possible.

 I'll have to try that and see if anything breaks.

Aside from being a bit slower to complete login, it seemed to work
okay.  (Worked much better after I set correct permissions on the
rpool/tmp directory).
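A minimal sketch of that setup, including the /tmp permissions mentioned above (dataset name assumed; on Solaris you would also remove the tmpfs /tmp entry from /etc/vfstab so it doesn't claim the mountpoint first):

```shell
# Create a ZFS dataset and mount it at /tmp
zfs create -o mountpoint=/tmp rpool/tmp

# /tmp needs mode 1777 (world-writable plus sticky bit) or logins break
chmod 1777 /tmp
```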

-- 
Darren


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Matthew Ahrens

Robert Milkowski wrote:

[...]

So from a user perspective, isn't it a little bit confusing that he
managed to write more data than he thinks he is allowed to?


Pleasant surprises tend to be tolerated :-)

--matt



Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Robert Milkowski
Hello Richard,

Wednesday, April 1, 2009, 5:32:25 PM, you wrote:

RE Robert Milkowski wrote:

[...]
RE I think what this says is that from a practical perspective, quotas are
RE either ineffective or incomprehensible for modern systems.  By ineffective
RE I mean that you cannot limit a user's use of space, you can only limit
RE that to which the user is accounted.  Perhaps we should change it from
RE quota to goodwill, to borrow a term from the accounting world :-)

After giving it a little more thought, I think the current approach
is better in most cases. A user could usually compress a file with an
external program anyway, and physical disk space is the main issue
that user/group quotas are trying to address.




-- 
Best regards,
 Robert Milkowski
   http://milek.blogspot.com



Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Nicolas Williams
On Wed, Apr 01, 2009 at 10:58:34AM +0200, casper@sun.com wrote:
 I know that this is one of the additional protocols developed for NFSv2 
 and NFSv3; does NFSv4 have a similar mechanism to get the quota?

Yes, NFSv4.0 and 4.1 both provide the same quota information retrieval
interface, three file/directory attributes:

 - quota_avail_hard
 - quota_avail_soft
 - quota_used

It's not clear whether the values returned for these attributes are
supposed to be specific to the credentials of the caller, but I assume
they are.  I don't know if the Solaris NFSv4 client and server support
this feature; the attributes are REQUIRED to implement in v4.1, but I'm
not sure whether that's also true in v4.0.

Nico
-- 


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Nicolas Williams
On Wed, Apr 01, 2009 at 10:04:47AM +0100, Darren J Moffat wrote:
 If we had the .zfs/props/propname RFE implemented that would allow 
 users to see this regardless of what file sharing protocol they use.
 As well as lots of other very interesting info about the filesystem.

Indeed!


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Matthew Ahrens

Mike Gerdts wrote:

On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:

River Tarnell wrote:

Matthew Ahrens:

ZFS user quotas (like other zfs properties) will not be accessible over
NFS;
you must be on the machine running zfs to manipulate them.

does this mean that without an account on the NFS server, a user cannot
see his
current disk use / quota?

That's correct.


Do you have a reason for not wanting this to be implemented, or are
you just avoiding scope creep?


The latter -- I just don't have time to tack on any more features with this. 
 We've filed RFE 6824968 to add support to rquotad to report on zfs user 
quotas.  I'll note this in the PSARC case as well.


This additional RFE is staffed, but we don't have an ETA right now.

--matt


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Bob Friesenhahn

On Wed, 1 Apr 2009, Matthew Ahrens wrote:


So from a user perspective isn't it a little bit confusing as he
managed to write more data than he thinks he is allowed to.


Pleasant surprises tend to be tolerated :-)


Until it comes time to back that data up.  It is conceivable for users 
to create a DoS against the backup system by intentionally creating many 
huge files which compress perfectly with ZFS, but not so perfectly for 
backup (assuming the backup even supports compression).


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Robert Milkowski
Hello Bob,

Wednesday, April 1, 2009, 7:14:46 PM, you wrote:

BF On Wed, 1 Apr 2009, Matthew Ahrens wrote:
 
 So from a user perspective isn't it a little bit confusing as he
 managed to write more data than he thinks he is allowed to.

 Pleasant surprises tend to be tolerated :-)

BF Until it comes time to back that data up.  It is conceivable for users
BF to create a DOS for the backup system by intentionally creating many
BF huge files which compress perfectly with ZFS, but not so perfectly for
BF backup (assuming that backup even supports compression).

c'mon - they can do it anyway even with a file system with no built-in
compression, as they can use any external program to compress their
data. And if you are really worried about it, then do not use
compression in ZFS.

Not to mention that such a case is rather unrealistic (well, maybe
except dd if=/dev/zero).


-- 
Best regards,
 Robert Milkowski
   http://milek.blogspot.com



Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-01 Thread Carson Gaspar
[ re-sending to the list address - stupid thunderbird still doesn't have 
reply-to-list :-( ]


Robert Milkowski wrote:

Hello Bob,

Wednesday, April 1, 2009, 7:14:46 PM, you wrote:

...

BF Until it comes time to back that data up.  It is conceivable for users
BF to create a DOS for the backup system by intentionally creating many
BF huge files which compress perfectly with ZFS, but not so perfectly for
BF backup (assuming that backup even supports compression).

c'mon - they can do it anyway even with file system with no built-in
compression as they can use any external program to compress their
data. And if you are really worried about it then do not use
compression in zfs.


But then the compressed file is backed up. Different case entirely.

And don't forget zfs send/recv - sadly they send the uncompressed data,
so they'd also suffer from this.

Not that I think it's worth counting virtual space, mind you - too much
effort for too little benefit, and adding more knobs adds complexity /
bugs / cost. But the potential for problems is real - more so if you
don't rule out malicious users.

--
Carson




Re: [zfs-discuss] is zpool export/import | faster than rsync or cp

2009-04-01 Thread Cindy . Swearingen

Hi Harry,

I was on vacation so am late to this discussion.

For this part of your question:

The zpool export/import feature is a pool-level operation for moving
the pool, disks, and data to another system.

For moving data from one pool to another pool, you would want to use
zfs send/recv, rsync, or tar, and so on.

Cindy

Harry Putnam wrote:

[...]

Harry wrote:


Now I'm wondering if the export/import sub commands might not be a
good bit faster.  



Ian Collins i...@ianshome.com answered:   


I think you are thinking of zfs send/receive.

I've never done a direct comparison, but zfs send/receive would be my
preferred way to move data between pools.



Why is that?  I'm too new to know what all it encompasses (and a bit
dense to boot)

Fajar A. Nugraha fa...@fajar.net writes:



On Sat, Mar 28, 2009 at 5:05 AM, Harry Putnam rea...@newsguy.com wrote:


Now I'm wondering if the export/import sub commands might not be a
good bit faster.


I believe the greatest advantage of zfs send/receive over rsync is not
speed, but rather zfs send -R, which will (from the man page):

    Generate a replication stream package, which will replicate
    the specified filesystem, and all descendant file systems, up
    to the named snapshot. When received, all properties, snapshots,
    descendent file systems, and clones are preserved.

pretty much allows you to clone a complete pool preserving its structure.
As usual, compressing the backup stream (whether rsync or zfs) might
help reduce transfer time a lot. My favorite is lzop (since it's very
fast), but gzip should work as well.




Nice... good reasons it appears.


Robert Milkowski mi...@task.gda.pl writes:



Hello Harry,



[...]



As Ian pointed out, you want zfs send|receive and not import/export.
For a first full copy, zfs send will not necessarily be noticeably
faster than rsync - it depends on the data. If, for example, you have
millions of small files, zfs send could be much faster than rsync.
But it shouldn't be slower in any case.

zfs send|receive really shines when it comes to sending incremental
changes.



Now that would be something to make it stand out.  Can you tell me a
bit more about how that would work... I mean, would you just keep
receiving only changes at one end, and how do they appear on the
filesystem?

There is a backup tool called `rsnapshot' that uses rsync but creates
hard links to all unchanged files and copies only the changed
files.  This is all put in a serial directory structure and ends up
taking a tiny fraction of the space that full backups would take, yet
retains a way to get to unchanged files right in the same directory
(the hard link).

Is what you're talking about similar in some way?
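For what it's worth, an incremental send/receive cycle of the kind Robert describes might look roughly like this (pool and snapshot names are made up):

```shell
# Initial full replication: send the first snapshot to the backup pool
zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs receive backup/data

# Later: send only the blocks that changed between the two snapshots
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
```

On the receiving side the filesystem looks like a normal, complete copy with both snapshots available - unlike rsnapshot's directory-per-run layout, the "increments" here are snapshots of a single filesystem.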

= * = * = * =
 
To all posters... many thanks for the input.

