Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible
with sas-addresses such as this in zpool status output:
NAME STATE READ WRITE CKSUM
cuve
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write ops using iostat, but that doesn't tell me how contiguous the data
I am not sure you can monitor actual mechanical seeks short
of debugging and interrogating the HDD firmware - because
the firmware is the last piece of logic in the chain of caching,
queuing and issuing actual commands to the disk heads.
For example, a long logical IO spanning several cylinders
would
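Short of that, a workable proxy is the LBA distance between successive
requests per device, which the DTrace io provider exposes: small deltas mean
mostly-sequential access, large ones imply head movement. A minimal sketch
(the read-only predicate and aggregation name are my choices, and LBA
distance only approximates real seeks, for exactly the reasons above):
# dtrace -n '
io:::start
/args[0]->b_flags & B_READ/
{
        /* block distance from the previous request on this device */
        this->d = args[0]->b_blkno > last[args[1]->dev_statname] ?
            args[0]->b_blkno - last[args[1]->dev_statname] :
            last[args[1]->dev_statname] - args[0]->b_blkno;
        @seek[args[1]->dev_statname] = quantize(this->d);
        last[args[1]->dev_statname] = args[0]->b_blkno;
}'
Ctrl-C then prints a power-of-two histogram of the deltas per disk.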
What is the output of:
# echo | format
On 5/19/2011 3:55 AM, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible
with sas-addresses such as this in zpool status output:
NAME
On 2011-05-19 17:00, Jim Klimov wrote:
I am not sure you can monitor actual mechanical seeks short
of debugging and interrogating the HDD firmware - because
the firmware is the last piece of logic in the chain of caching,
queuing and issuing actual commands to the disk heads.
For example, a long logical
On 19 May 2011, at 08:55, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single
path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with
sas-addresses such as this in zpool status output:
NAME STATE
On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write
On 03-05-11 17:55, Brandon High wrote:
-H: Hard links
If you're going to do this for 2 TB of data, remember to expand your swap
space first (or have tons of memory). Rsync will need it to store every
inode number in the directory.
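For reference, a typical invocation of the flag in question (paths are
illustrative); note that -a does not imply -H, so the per-inode table, and
the memory it costs, is strictly opt-in:
# rsync -aH /tank/data/ /backup/data/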
The same format as in zpool status:
3. c9t5000C50025D5A266d0 SEAGATE-ST91000640SS-AS02-931.51GB
/pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5a266,0
4. c9t5000C50025D5AF66d0 SEAGATE-ST91000640SS-AS02-931.51GB
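Note that the t-component of the c#t#d# name is the drive's SAS WWN - the
same w5000c50025d5a266 that appears in the device path above - so it can be
extracted mechanically (a trivial sketch):
# echo c9t5000C50025D5A266d0 | sed 's/^c[0-9]*t//; s/d[0-9]*$//'
5000C50025D5A266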
On 05/19/2011 03:35 PM, Tomas Ögren wrote:
On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Evaldas Auryla
Is there an easy way to map these sas-addresses to the physical disks in
the enclosure?
Of course in an ideal world, when a disk needs to be pulled, hardware would
know about
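One option, if LSI's sas2ircu utility is installed (the 9200-8e is a
SAS2008-based HBA it supports), is that it reports enclosure and slot
numbers next to each drive's SAS address, which makes the mapping
mechanical; a sketch, with controller index 0 as an assumption:
# sas2ircu LIST
# sas2ircu 0 DISPLAY | egrep 'Enclosure #|Slot #|SAS Address'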
On 19 May 2011, at 14:44, Evaldas Auryla wrote:
Hi Chris, there is no sestopo on this box (Solaris Express 11 151a), fmtopo
-dV works nicely, although it's a bit of overkill having to manually parse
the output :)
You need to install pkg:/system/io/tests.
Chris
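To avoid parsing the full -dV dump by hand, filtering the plain FMRI listing
down to bay nodes is usually enough; a sketch, with the output shape being
illustrative:
# /usr/lib/fm/fmd/fmtopo | grep 'bay='
hc://.../ses-enclosure=0/bay=11/disk=0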
I'll add my 2 cents, since I just suffered some pretty bad pool corruption a
few months ago and went through a lot of pain to get most of it restored. See
http://www.opensolaris.org/jive/thread.jspa?messageID=512687 for the gory
details.
Steps you should take:
1) as mentioned above, delete
If you cannot even boot to single user mode on the server, boot from SXCE or
openindiana, then:
1. import syspool:
# zpool import syspool
2. mount affected rootfs:
# mkdir /a; mount -F zfs syspool/rootfs-nmu-### /a
3. remove zpool.cache:
# rm -f /a/etc/zfs/zpool.cache
4. rebuild boot archive:
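Assuming /a is still the mount point from step 2, the usual command here is:
# bootadm update-archive -R /a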
On Thu, 19 May 2011 15:39:50 +0200, Frank Van Damme wrote:
On 03-05-11 17:55, Brandon High wrote:
-H: Hard links
If you're going to do this for 2 TB of data, remember to expand your swap
space first (or have tons of memory). Rsync will need it to store every
inode number in the directory.
On Thu, May 19 at 9:55, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are
visible with sas-addresses such as this in zpool status output:
NAME STATE READ
IIRC there are tools from LSI, like the MegaRAID software, that may display
some of this info.
Not sure whether you can use Common Array Manager.
On 5/19/2011 11:20 AM, Eric D. Mudama wrote:
On Thu, May 19 at 9:55, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path,
On Thu, May 19, 2011 at 5:35 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor
On May 19, 2011, at 5:35 AM, Sašo Kiselkov wrote:
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (50) sequentially read a
large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write ops using
On May 19, 2011, at 12:55 AM, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single
path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with
sas-addresses such as this in zpool status output:
NAME STATE
I thought this was interesting - it looks like we have a failing drive in our
mirror, but the two device nodes in the mirror are the same:
pool: tank
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for
Just a random thought: if two devices have the same IDs and seem to work in
turns, are you certain you have a mirror and not two paths to the same
backend?
A few years back I was given a box to support with sporadically failing
drives, which turned out to be two paths to the same external array,
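A quick way to check, with c1t0d0 and c2t0d0 standing in for the two device
names from zpool status (illustrative): if both /dev/dsk links resolve to
the same /devices path, or mpathadm shows one logical unit with two paths,
it's the same disk:
# ls -l /dev/dsk/c1t0d0s0 /dev/dsk/c2t0d0s0
# mpathadm list lu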
I just got a call from another of our admins, as I am the resident ZFS
expert, and they have opened a support case with Oracle, but I figured
I'd ask here as well, as this forum often provides better, faster
answers :-)
We have a server (M4000) with 6 FC-attached SE-3511 disk arrays
(some
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
Over the past few months I have seen mention of FreeBSD a couple of times in
regard to ZFS. My question is: how stable (reliable) is ZFS on this platform?
On May 19, 2011, at 2:09 PM, Paul Kraus p...@kraus-haus.org wrote:
I just got a call from another of our admins, as I am the resident ZFS
expert, and they have opened a support case with Oracle, but I figured
I'd ask here as well, as this forum often provides better, faster
answers :-)