On Wed, Aug 26, 2009 at 12:09 AM, Duncan Groenewald
dagroenew...@optusnet.com.au wrote:
That was a typo, missing an s - I copied the incorrect line from the
terminal...
sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/isci/macbook_dg
Blog is here...
The intended use is NFS storage backing some VMware servers running a
range of different VMs, including Exchange, Lotus Domino, SQL Server
and Oracle. :-) It's a very random workload, and all the research I've
done points to mirroring as the better option for delivering higher
total IOPS. The
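For what it's worth, the striped-mirror layout that that research points to would look something like this (a rough sketch only - the disk names are made up and the NFS property is just the stock one):

zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0
zfs create tank/vmware
zfs set sharenfs=on tank/vmware

Every extra mirror vdev adds independent spindles for random reads, which is why mirrors tend to win over raidz for this kind of load.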
Here's one horror story of mine - ZFS taking over 20 minutes to flag a drive as
faulty, with the entire pool responding so slowly during those 20 minutes that
it crashed six virtual machines running off the pool:
http://www.opensolaris.org/jive/thread.jspa?messageID=369265#369265
There are some
# cat /etc/release
Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 15 December 2008
# zpool status tww
pool: tww
Okay, I'm trying to do whatever I can NONDESTRUCTIVELY to fix this. I have
almost 5TB of data that I can't afford to lose (baby pictures and videos,
etc.). Since no one has seen this problem before, maybe someone can tell me
what I need to do to make a backup of what I have now so I can try
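(For reference, the usual nondestructive route - if the pool will import at all - is a recursive snapshot plus send/receive to a second pool. A sketch, where "backup" is a hypothetical destination pool:)

zfs snapshot -r mypool@rescue
zfs send -R mypool@rescue | zfs receive -Fd backup

or, over the network:

zfs send -R mypool@rescue | ssh otherhost zfs receive -Fd backup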
I'm using the Caviar Green drives in a 5-disk config.
I downloaded the WDTLER utility and set all the drives to have a 7-second
timeout, like the RE series have.
WDTLER boots into a small DOS app and you have to hit a key to adjust each drive, so this could take a while for a large raidz2.
On 21.08.09 14:52, No Guarantees wrote:
Every time I attempt to import a particular RAID-Z pool, my system hangs.
Specifically, if I open up a gnome terminal and input '$ pfexec zpool import
mypool', the process never completes and I never get back to the prompt. If
I open up another terminal,
I've been running ZFS under FreeBSD, where it's experimental, and I've had
nothing but great luck, so I guess it depends on a number of things. I
went with FreeBSD because the hardware I had wasn't supported in
Solaris. I expected problems but honestly, it's been rock solid; it's
survived all
Hello all,
I used liveUSB Creator to create an OpenSolaris LiveUSB. It boots fast and easily.
Mainly I'll need it to deploy packages (use it for JumpStart or so). I'll need
to put it on a USB stick (16GB) and be able to reach it from the network.
So far so good, but now the issue is that
On Wed, 26 Aug 2009, Tristan Ball wrote:
Complete disk failures are comparatively rare, while media or transient
errors are far more common. As a media I/O or transient error on the
It seems that this assumption is not always the case. The
expensive small-capacity SCSI/SAS enterprise
Hi Tim Cook.
If I were building my own system again, I would prefer not to go with
consumer hard drives.
I had a raidz pool containing eight drives on a snv108 system; after
rebooting, four of the eight drives were so broken they could not be
seen by format, let alone the zpool they belonged to.
Hi,
I'd appreciate it if anyone can point me to how to identify poorly performing disks
that might have dragged down the performance of the pool. Also, the system logged
the following error about one of the drives. Does it show the disk was having a
problem?
Aug 17 13:45:56 zfs1.domain.com scsi: [ID 107833
If I were building my own system again, I would prefer not to go with consumer
hard drives.
I had a raidz pool containing eight drives on a snv108 system; after
rebooting, four of
the eight drives were so broken they could not be seen by format, let alone the
zpool they
belonged to.
This was with
But the real question is whether the enterprise drives would have
avoided your problem.
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-26, at 11:38, Troels Nørgaard Nielsen t...@t86.dk wrote:
Hi Tim Cook.
If I were building my own system again, I would prefer not to go with
consumer
On Aug 25, 2009, at 9:38 PM, Tristan Ball wrote:
What I’m worried about is the time period where the pool is
resilvering to the hot spare. For example: one half of a mirror has
failed completely, and the mirror is being rebuilt onto the spare –
if I get a read error from the remaining half
On 08/25/09 10:46 PM, Tim Cook wrote:
On Wed, Aug 26, 2009 at 12:22 AM, thomas tjohnso...@gmail.com wrote:
I'll admit, I was cheap at first and my
fileserver right now is consumer drives. You
can bet all my future purchases will be of the
On Wed, Aug 26, 2009 at 11:45 AM, Neal Pollack neal.poll...@sun.com wrote:
Luck or design/usage ?
Let me explain; I've also had many drives fail over the last 25
years of working on computers, I.T., engineering, manufacturing,
and building my own PCs.
Drive life can be directly affected
Hi,
I'm using the latest official Solaris version (10/06 plus patches) and
I'm getting a problem. After importing a zpool, sometimes when I do
mount -F zfs zpool/fs /mountpoint
the command freezes, and after that there is no way I can umount, destroy, or do anything;
all commands hang,
even rebooting.
You can try:
zpool iostat -v pool_name 1
This will show you IO on each vdev at one second intervals. Perhaps you will
see different IO behavior on any suspect drive.
-Scott
For the release - sorry, I meant:
Solaris 10 10/08 s10s_u6wos_07b SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
The latest official Solaris 10 is actually 05/09. There are update patch
bundles available
on Sunsolve for free download that will take you to 05/09. It may well
be worth applying
these to see if they remedy the problem for you. They certainly allow
you to bring ZFS up to version
10 from
Actually, I did apply the latest recommended patches:
SunOS VL-MO-ZMR01 5.10 Generic_139555-08 sun4v sparc SUNW,SPARC-Enterprise-T5120
but still the same problem.
Perhaps you are not doing much import/export?
When I don't, I don't experience much of a problem,
but when I do, ouch ...
a reboot will
Hi Richard,
So you have to wait for the sd (or other) driver to
timeout the request. By
default, this is on the order of minutes. Meanwhile,
ZFS is patiently awaiting a status on the request. For
enterprise class drives, there is a limited number
of retries on the disk before it reports an
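(One knob that's sometimes suggested for this: shortening the sd driver's per-command timeout in /etc/system. I'm quoting this from memory, so treat the tunable name and value as an assumption and test it first:)

* /etc/system - shorten the sd per-command timeout from the 60-second default
set sd:sd_io_time = 10

It needs a reboot to take effect, and a shorter timeout cuts both ways on drives that are slow but still recovering.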
serge goyette wrote:
actually i did apply the latest recommended patches
Recommended patches and upgrade clusters are different, by the way.
10_Recommended != Upgrade Cluster. The upgrade cluster will effectively upgrade
the system to the Solaris release that the upgrade cluster
is
Also, you may wish to look at the output of 'iostat -xnce 1'.
You can post that to the list if you have a specific problem.
You want to be looking for increasing error counts and specifically 'asvc_t'
for the service times on the disks. A higher number for asvc_t may help to
isolate
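(A quick way to watch for that without eyeballing the whole table - a hedged one-liner that assumes plain 'iostat -xn' output, where asvc_t is the 8th column, and an arbitrary 200 ms threshold:)

iostat -xn 1 | awk '$NF != "device" && $8 > 200 { print $NF, "asvc_t =", $8 }'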
Hmm,
I know about the recommended and the update patch bundles.
According to the README,
Patch 139555-08 is the kernel patch associated with the Solaris 10 5/09 release
(Solaris 10 Update 7),
so I believe I'm up to date.
I understand I'm a bit vague, but I cannot provide any zpool or zfs output
until
On Aug 26, 2009, at 1:17 PM, thomas wrote:
Hi Richard,
So you have to wait for the sd (or other) driver to
timeout the request. By
default, this is on the order of minutes. Meanwhile,
ZFS is patiently awaiting a status on the request. For
enterprise class drives, there is a limited number
of
Maybe you can run a Dtrace probe using Chime?
http://blogs.sun.com/observatory/entry/chime
Initial Traces - Device IO
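If Chime is more than you need, a raw DTrace one-liner with the io provider gives a similar first look (run as root; Ctrl-C prints the per-device request counts):

dtrace -n 'io:::start { @[args[1]->dev_statname] = count(); }'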
No, unfortunately fixing the typo does not fix it !!
Still stuck !!
Cool - just found the problem. I had to upgrade the zpool using
zpool upgrade storagepool
onwards...
All fixed now ... Backup has been running for maybe a minute or two and has
backed up over 1GB.
Thanks guys...
Here is the complete command set I used...
Creating the ZFS iSCSI target using COMSTAR:
1. DO NOT use zfs set shareiscsi=on ...
2. MAKE SURE your zpool is upgraded. Run zpool upgrade <poolname>. (The rest of the sequence is sketched below.)
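For completeness, the remaining steps went roughly like this (a sketch rather than a verbatim paste - the zvol path and GUID are placeholders and the target/view options are just the defaults, so check against the COMSTAR docs):

svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/<pool>/<volume>
stmfadm add-view <GUID printed by sbdadm create-lu>
itadm create-target

Then point the initiator at the target's IQN and the LU shows up as a plain disk.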
We have a situation where all of the spares in a set of pools have
gone into a faulted state and now, apparently, we can't remove them
or otherwise de-fault them. I'm confident that the underlying disks
are fine, but ZFS seems quite unwilling to do anything with the spares
situation.
(The
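(For anyone hitting this later, the commands one would normally try against a stuck spare are below - whether they actually work here is exactly the open question, and the pool/device names are placeholders:)

zpool clear tank c3t5d0
zpool remove tank c3t5d0
zpool status -x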
Running iostat -nxce 1, I saw write sizes alternate between two raidz groups
in the same pool.
At any one time, drives on controller 1 have larger writes (3-10 times) than the ones on
controller 2:
                extended device statistics              ---- errors ----
    r/s    w/s    ...