If the CPU seems to be idle, the latencytop tool can probably give you some clues.
It was developed for OpenSolaris, but Solaris 10 should work too (with glib 2.14
installed). You can get a copy of v0.1 at
http://opensolaris.org/os/project/latencytop/
To use latencytop, open a terminal and start "laten
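As a minimal sketch, assuming the v0.1 binary installs under the name
"latencytop" and that you run it with the privileges its DTrace scripts need:

# latencytop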
Bob,
Catching up late on this thread.
Would it be possible for you to collect the following data:
- /usr/sbin/lockstat -CcwP -n 5 -D 20 -s 40 sleep 5
- /usr/sbin/lockstat -HcwP -n 5 -D 20 -s 40 sleep 5
- /usr/sbin/lockstat -kIW -i 977 -D 20 -s 40 sleep 5
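(Roughly speaking: -C captures lock contention events and -H lock hold
times, while -kI profiles the kernel with a 977 Hz interrupt (-i 977);
-D 20 limits the output to the top 20 entries and -s 40 records stacks
40 frames deep. The trailing "sleep 5" just bounds the sampling interval.)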
Or if you have access to the GUD
Robert Thurlow wrote:
Harry Putnam wrote:
I think this has probably been discussed here, but I'm getting
confused about how to determine actual disk usage of zfs filesystems.
Here is an example:
$ du -sb callisto
46744 callisto
$ du -sb callisto/.zfs/snapshot
86076 callisto/.zfs/sna
Harry Putnam wrote:
I think this has probably been discussed here, but I'm getting
confused about how to determine actual disk usage of zfs filesystems.
Here is an example:
$ du -sb callisto
46744 callisto
$ du -sb callisto/.zfs/snapshot
86076 callisto/.zfs/snapshot
Two questions th
Thanks guys
Hua-Ying,
The partition table *is* confusing, so don't try to make sense of it. :-)
Partition or slice 2 represents the entire disk, cylinders 0-24317.
You created slice 0, which is cylinders 1-24316. Slice 8 is a reserved,
legacy area for boot info on some x86 systems. You can ignore it.
Looks
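A hedged way to double-check this yourself is to print the label with
prtvtoc against slice 2 of the disk from the thread:

# prtvtoc /dev/rdsk/c3d0s2

The output lists each slice's first and last sector, so you can confirm that
slice 2 spans the whole disk and slice 8 is the small reserved boot area.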
Hua-Ying Ling wrote:
On Mon, Jul 6, 2009 at 11:27 AM, wrote:
Hi Hua-Ying,
Some disks don't have target identifiers, like your c3d0
and c3d1 disks.
To attach your c3d1 disk, you need to relabel it with an
SMI label and provide a slice, s0, for example.
See the steps here:
http://www.solari
I'm confused by the output of the partition command, this is the
partition table created by the installer:
Current partition table (original):
Total disk cylinders available: 24318 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm
I think this has probably been discussed here, but I'm getting
confused about how to determine actual disk usage of zfs filesystems.
Here is an example:
$ du -sb callisto
46744 callisto
$ du -sb callisto/.zfs/snapshot
86076 callisto/.zfs/snapshot
Two questions then.
I do need to add
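For what it's worth, a sketch of a more direct way to answer this, assuming a
build new enough to have the usedby* properties (the dataset name here is
hypothetical; take it from "zfs list"):

$ zfs list -o space tank/callisto
$ zfs get used,referenced,usedbysnapshots tank/callisto

Note that du(1) under .zfs/snapshot counts blocks that snapshots share with
the live filesystem, so it overstates the extra space the snapshots pin.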
Hi,
I'd like to compress highly compressible (~4x) data on a file
server using ZFS compression, and still get good transfer speed. The
users are transferring several GB of data (typically, 8-10 GB). The
host is a X4150 with 16 GB of RAM.
Looking at ZFS layer described at http://www.ope
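As a sketch of the relevant knobs, with hypothetical dataset names (lzjb is
the cheap default algorithm; gzip-1 through gzip-9 trade CPU for ratio):

# zfs set compression=lzjb tank/export
# zfs get compressratio tank/export

On ~4x-compressible data, gzip-1 often gets most of gzip-9's ratio at a
fraction of the CPU cost, which matters for sustained multi-GB transfers.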
On Mon, Jul 06, 2009 at 09:08:44AM -0700, Daniel Liebster wrote:
> I ran a zfs destroy on a 20TB volume on a Thumper running snv_117, and it's
> been 2 hours now with a huge amount of read activity. In the past (2008.06),
> destroy came back within minutes.
> Is this expected in snv_117? and if n
DL Consulting wrote:
Just reread your response. If the send/recv fails, the snapshot should NOT turn
up on chucky (the recv machine), right? However, it is turning up, but the
original on the sending machine is being destroyed by something (which I'm
guessing is the time-slider-cleanup cronjob be
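The script itself isn't shown; a typical incremental pipeline of the kind
described, with hypothetical dataset and snapshot names, would be:

# zfs send -i tank/fs@prev tank/fs@today | ssh chucky /usr/sbin/zfs recv -d backup

A zfs recv that is interrupted mid-stream discards the partial snapshot, so a
snapshot visible on chucky normally means the receive completed.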
I ran a zfs destroy on a 20TB volume on a Thumper running snv_117, and it's been
2 hours now with a huge amount of read activity. In the past (2008.06), destroy
came back within minutes.
After a couple of hours , activity still looks like:
----------  -----  -----  -----  -----  -----  -----
dat
Andre van Eyssen wrote:
On Mon, 6 Jul 2009, Gary Mills wrote:
As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs. If we
have another one like that, our director is likely to instruct us to
throw away all our Solaris toys
+--
| On 2009-07-07 01:29:11, Andre van Eyssen wrote:
|
| On Mon, 6 Jul 2009, Gary Mills wrote:
|
| >As for a business case, we just had an extended and catastrophic
| >performance degradation that was the result of two Z
On Mon, 6 Jul 2009, Gary Mills wrote:
As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs. If we
have another one like that, our director is likely to instruct us to
throw away all our Solaris toys and convert to Microsoft
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote:
> Gary Mills wrote:
> >On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote:
> >
> >>ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC
> >>instead of the Solaris page cache. But mmap() uses the latter. So if
Hi Hua-Ying,
Some disks don't have target identifiers, like your c3d0
and c3d1 disks.
To attach your c3d1 disk, you need to relabel it with an
SMI label and provide a slice, s0, for example.
See the steps here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2
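A minimal sketch of that relabel (this rewrites the label on c3d1, destroying
anything on it; the slice layout and pool name below are hypothetical):

# format -e c3d1
format> label
[choose SMI when prompted]
format> partition
[give slice 0 the cylinders you want, then label again]

After that, c3d1s0 can be attached, e.g. zpool attach rpool c3d0s0 c3d1s0.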
On Mon, 6 Jul 2009, Boyd Adamson wrote:
This is probably encouraged by documentation like this:
The memory mapping interface is described in Memory Management
Interfaces. Mapping files is the most efficient form of file I/O for
most applications run under the SunOS platform.
Found at:
http:
Phil Harman writes:
> Gary Mills wrote:
> The Solaris implementation of mmap(2) is functionally correct, but the
> wait for a 64-bit address space rather moved the attention of
> performance tuning elsewhere. I must admit I was surprised to see so
> much code out there that still uses mmap(2) for
Already tried that ... :-(
# zpool destroy -f emcpool2
cannot open 'emcpool2': no such pool
#
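Since zpool can't even open the pool, destroy has nothing to work on. A common
last resort in this situation was to overwrite the ZFS labels at the front of
the LUN with dd (device path hypothetical; this is irreversible):

# dd if=/dev/zero of=/dev/rdsk/cXtYdZs0 bs=1024k count=100

ZFS keeps two more labels at the end of the device, so a thorough wipe zeroes
the tail as well.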
DL Consulting writes:
> It takes daily snapshots and sends them to another machine as a
> backup. The sending and receiving is scripted and run from a
> cronjob. The problem is that some of the snapshots disappear from
> monster after they've been sent to the backup machine.
Do not use the snaps
Ketan writes:
> I had a pool which was exported, and due to some issues on my SAN I
> was never able to import it again. Can anyone tell me how I can
> destroy the exported pool to free up the LUN?
I did that once; I *think* that was with the "-f" option to "zpool
destroy".
Regards, Juergen.
Hua-Ying Ling wrote:
When I use "cfgadm -a", it seems to list only USB devices?
#cfgadm -a
Ap_Id          Type      Receptacle   Occupant      Condition
usb2/1         unknown   empty        unconfigured  ok
usb2/2         unknown   empty
Hi,
The format output shows the 2nd disk as:
1. c3d1
which suggests a hardware issue: the OS did not recognize its label/size.
Please check the physical disk and its connectivity, and once that's OK, run
"devfsadm" and try the "format" command again.
Thanks & Regards,
Vikash Gupta
Extn: 88-220-7318
Cell
If you want to use the entire disk in a zpool, you "should" use the
notation without the trailing slice part, i.e., "c2d0". (SATA-attached disks
do not have the "t?" target part, whereas SCSI and SCSI-emulated
devices, like CD-ROMs and USB, do.)
If you are using just a part of a disk, one partit
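A sketch of the two forms, with hypothetical pool and device names:

# zpool create tank c2d0     (whole disk; ZFS writes an EFI label)
# zpool create tank c2d0s0   (just slice 0 of an SMI-labeled disk)

Giving ZFS the whole disk also lets it enable the drive's write cache.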
When I use "cfgadm -a", it seems to list only USB devices?
#cfgadm -a
Ap_Id          Type      Receptacle   Occupant      Condition
usb2/1         unknown   empty        unconfigured  ok
usb2/2         unknown   empty        unconfigured  ok
usb