Why is the sum of the per-disk bandwidth from `zpool iostat -v 1` less than
the pool total while watching `du /zfs` on opensol-20060605 bits?
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs
On Thu, Eric Schrock wrote:
> The problem is that statvfs() only returns two values (total blocks and
> free blocks) from which we have to calculate three values: size, free,
From statvfs(2) the following are returned in struct statvfs:
    fsblkcnt_t f_blocks;    /* total # of blocks on file system */
The problem is that statvfs() only returns two values (total blocks and
free blocks) from which we have to calculate three values: size, free,
and available space. Prior to pooled storage, available = size - free.
This isn't true with ZFS. On your local filesystem, df(1) recognizes it
as a ZFS filesystem
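For illustration, a minimal sketch of the calculation being described (this
is not df's source, just the statvfs(2) fields it has to work from); on ZFS
the three numbers are largely independent because free space is shared
pool-wide:

#include <stdio.h>
#include <sys/statvfs.h>

/* Print df-style numbers derived from statvfs(2). On a traditional
 * filesystem, size - used == avail; on ZFS that identity breaks. */
int
main(int argc, char **argv)
{
        struct statvfs vfs;
        const char *path = (argc > 1) ? argv[1] : "/";

        if (statvfs(path, &vfs) != 0) {
                perror("statvfs");
                return (1);
        }

        unsigned long long frsize = vfs.f_frsize;
        unsigned long long size   = vfs.f_blocks * frsize;
        unsigned long long free_  = vfs.f_bfree  * frsize;
        unsigned long long avail  = vfs.f_bavail * frsize;

        printf("size : %llu bytes\n", size);
        printf("used : %llu bytes\n", size - free_);
        printf("avail: %llu bytes\n", avail);
        return (0);
}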
bash-3.00# zfs list | grep 5201
nfs-s5-s8/d5201                331G  269G  314G  /nfs-s5-s8/d5201
nfs-s5-s8/[EMAIL PROTECTED]   17.2G     -  331G  -
nfs-s5-s8/[EMAIL PROTECTED]   9.29M     -  314G  -
600G - 17G = 583G
Hmmm... why, after setting a quota on a filesystem, is the reported size
over NFS lower?
NFS server (b39):
bash-3.00# zfs get quota nfs-s5-s8/d5201 nfs-s5-p0/d5110
NAME             PROPERTY  VALUE  SOURCE
nfs-s5-p0/d5110  quota     600G   local
nfs-s5-s8/d5201  quota     600G   local
bash-3.00#
bash-3.00# df -h
Hello Matthew,
Friday, June 9, 2006, 1:16:41 AM, you wrote:
MA> On Thu, Jun 08, 2006 at 03:43:08PM -0700, Robert Milkowski wrote:
>> According to zfs(1M)
>>
>> " -v Print verbose information about the stream and
>> the time required to perform the receive.
>> "
>
On Thu, Jun 08, 2006 at 03:43:08PM -0700, Robert Milkowski wrote:
> According to zfs(1M)
>
> " -v Print verbose information about the stream and
> the time required to perform the receive.
> "
>
> However, when using the -v option I only get:
>
> receiving full stream
According to zfs(1M):
"  -v      Print verbose information about the stream and
           the time required to perform the receive.
"
However, when using the -v option I only get:
receiving full stream of p6/[EMAIL PROTECTED] into nfs-s5-s8/[EMAIL PROTECTED]
Or maybe it's due to I p
Hello Matthew,
Thursday, June 8, 2006, 8:51:39 PM, you wrote:
MA> On Thu, Jun 08, 2006 at 11:46:32AM -0700, Robert Milkowski wrote:
>> I can't send/receive incremental for one filesystem. Other filesystems
>> on the same servers (some in the same pool) work ok - just problem
>> with that one. Eve
On Thu, Jun 08, 2006 at 11:46:32AM -0700, Robert Milkowski wrote:
> I can't send/receive incremental for one filesystem. Other filesystems
> on the same servers (some in the same pool) work ok - just a problem
> with that one. Even if I roll back the destination filesystem I still can't
> receive the incremental send.
On Thu, Jun 08, 2006 at 10:51:24AM +0200, Eric Vanden Meersch wrote:
> I am testing how many files/sec I can create on ZFS, with some script.
> I carelessly let it run until ... it made my system crash.
This is a bug. Can you provide a crash dump?
--matt
I can't send/receive incremental for one filesystem. Other filesystems on the
same servers (some in the same pool) work ok - just a problem with that one. Even
if I roll back the destination filesystem I still can't receive the incremental send.
I try to send an incremental from 'SRC HOST' to 'DST HOST'. On 'SR
On Thu, 2006-06-08 at 06:06, Alec Muffett wrote:
> the low-hanging fruit is a "view" as a window onto a static snapshot.
>
> why not just create the view instantly, then filter all operations
> which scan a directory from that point onwards, growing it
> incrementally on demand?
It seems
>> yeah, but doing a full tree walk under the covers in that first readdir
>> will Suck. I'd rather the view creation be a bit slower in order that
>> the first access to the view would be responsive.
>
>Pick your poison. O(N) creation or O(N) initial traversal.
the low-hanging fruit is
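To make that tradeoff concrete, here is a toy sketch of the lazy approach
under discussion; view_t, view_populate, and the fake entries are all
invented for illustration, not ZFS code. Creation is O(1), and the O(N)
walk is paid by the first directory scan:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy "view" onto a static snapshot: creation is O(1); the O(N)
 * tree walk is deferred until the first directory scan. */
typedef struct view {
        const char *snapshot;   /* snapshot the view is a window onto */
        bool populated;         /* has the walk run yet? */
        char **entries;         /* materialized entries */
        size_t nentries;
} view_t;

static view_t *
view_create(const char *snapshot)       /* O(1): instant creation */
{
        view_t *v = calloc(1, sizeof (*v));
        v->snapshot = snapshot;
        return (v);
}

static void
view_populate(view_t *v)        /* stand-in for the real O(N) filtered walk */
{
        static const char *fake[] = { "a.txt", "b.txt" };
        v->nentries = sizeof (fake) / sizeof (fake[0]);
        v->entries = malloc(v->nentries * sizeof (char *));
        for (size_t i = 0; i < v->nentries; i++)
                v->entries[i] = strdup(fake[i]);
        v->populated = true;
}

static void
view_readdir(view_t *v)         /* first scan pays the cost; later ones don't */
{
        if (!v->populated)
                view_populate(v);
        for (size_t i = 0; i < v->nentries; i++)
                printf("%s\n", v->entries[i]);
}

int
main(void)
{
        view_t *v = view_create("pool/fs@snap");
        view_readdir(v);        /* the O(N) walk happens here, on demand */
        return (0);
}

Growing the view incrementally, as suggested above, would amount to
replacing the single view_populate with per-directory fills.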
I carelessly let it run until ... it made my system crash.
Is that the expected behaviour?
Not funny ;-)
Could be (based solely on the presence of
zio_write_allocate_gang_members; no deep analysis)
6411261 busy intent log runs out of space on small pools.
-r
Good morning,
I am testing how many files/sec I can create on ZFS, with some script.
I carelessly let it run until ... it made my system crash.
Is that the expected behaviour? Is it then best practice to always use a
quota?
I am using Solaris 10 6/06 s10s_u2wos_09 SPARC (build 9 of U2)
panic
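The script itself wasn't posted; below is a hedged reconstruction of that
kind of test (COUNT and the filename pattern are invented), creating empty
files in a loop and reporting the rate:

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Toy files/sec test: create COUNT empty files in the current
 * directory and report the creation rate. Run it somewhere
 * disposable; without a quota it will happily fill the pool. */
#define COUNT   100000

int
main(void)
{
        char name[64];
        time_t start = time(NULL);

        for (int i = 0; i < COUNT; i++) {
                int fd;

                (void) snprintf(name, sizeof (name), "f%07d", i);
                fd = open(name, O_CREAT | O_WRONLY, 0644);
                if (fd < 0) {
                        perror("open");
                        return (1);
                }
                (void) close(fd);
        }

        double elapsed = difftime(time(NULL), start);
        if (elapsed < 1)
                elapsed = 1;
        printf("%d files in %.0f s (%.0f files/sec)\n",
            COUNT, elapsed, COUNT / elapsed);
        return (0);
}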