Ding Honghui wrote:
Hi,
My Solaris storage hangs. I logged in to the console and there are
messages[1] displayed on the console.
I can't log in on the console and it seems I/O is totally blocked.
The system is Solaris 10u8 on a Dell R710 with a Dell MD3000 disk array.
Two HBA cables connect the server and the MD3000.
The symptom is
I am creating a custom Solaris 11 Express CD used for disaster recovery.
I have included the necessary files on the system to run zfs commands
without error (no apparent missing libraries or drivers). However, when
I create a zvol, the device in /devices and the link to
/dev/zvol/dsk/rpool do n
On Mon, Aug 15, 2011 at 2:07 PM, Ray Van Dolson wrote:
> Looks interesting... specs around the same as the old X-25E. We have
> heard however, that Intel will be announcing a true successor to their
> X-25E line shortly.
I think it's the 710 and 720 that you're referring to.
The 710 is MLC-HET
> From: Ray Van Dolson [mailto:rvandol...@esri.com]
> Sent: Monday, August 15, 2011 12:26 PM
>
> On the Intel SSD 320 Series, the spare capacity reserved at the
> factory is 7% to 11% (depending on the SKU) of the full NAND
> capacity. For better random write performance and endurance, the
>
On Mon, Aug 15, 2011 at 01:38:36PM -0700, Brandon High wrote:
> On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson wrote:
> > Are any of you using the Intel 320 as ZIL? It's MLC based, but I
> > understand its wear and performance characteristics can be bumped up
> > significantly by increasing the
On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson wrote:
> Are any of you using the Intel 320 as ZIL? It's MLC based, but I
> understand its wear and performance characteristics can be bumped up
> significantly by increasing the overprovisioning to 20% (dropping
> usable capacity to 80%).
Intel re
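The capacity cost of the 20% overprovisioning mentioned above is simple arithmetic; a quick sketch (the 300 GB drive size is illustrative, not from this thread):

```shell
# Usable capacity left after reserving a percentage of raw NAND
# as spare area for the controller.
raw=300   # raw capacity in GB (illustrative)
op=20     # overprovisioning percentage
usable=$(( raw * (100 - op) / 100 ))
echo "${usable} GB usable"   # prints: 240 GB usable
```

With the factory default of 7%, the same drive would leave 279 GB usable.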
Hello Stu Whitefish and List,
On August 15, 2011 at 21:17 you wrote in [1]:
>> 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
>> kernel panic, even when booted from different OS versions
> Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
> from Oracle) several
Given I can boot to single user mode and elect not to import or mount any
pools, and that later I can issue an import against only the pool I need, I
don't understand how this can help.
Still, given that nothing else seems to help I will try this and get back to
you tomorrow.
Thanks,
Jim
-
Hi Paul,
> 1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
> system works fine
I don't remember at this point which disks were which, but I believe it was 0
and 1 because during the first install there were only 2 drives in the box
because I had only 2 drives.
> 2. add two mo
In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>I'm sorry, I don't understand this suggestion.
>
>The pool that won't import is a mirror on two drives.
Disconnect all but the two mirrored drives that you must import
and try to import from a S11X Live
I'm sorry, I don't understand this suggestion.
The pool that won't import is a mirror on two drives.
- Original Message -
> From: LaoTsao
> To: Stu Whitefish
> Cc: "zfs-discuss@opensolaris.org"
> Sent: Monday, August 15, 2011 5:50:08 PM
> Subject: Re: [zfs-discuss] Kernel panic on zp
I am catching up here and wanted to see if I correctly understand the
chain of events...
1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
system works fine
2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
determine these disks are fine
3. copy data to save to
On Mon, August 15, 2011 12:25, Ray Van Dolson wrote:
> Perhaps this is it. Pulled the recommendation from Intel's Solid-State
> Drive 320 Series in Server Storage Applications whitepaper.
>
> Section 4.1:
[...]
> On the Intel SSD 320 Series, the spare capacity reserved at the
> factory is 7%
iirc if you use only the two hdds, you can import the zpool
can you try to import with -R with only those two hdds attached at a time
Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D
On Aug 15, 2011, at 13:42, Stu Whitefish wrote:
> Unfortunately this panics the same exact way. Thanks for the suggestion
> though.
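LaoTsao's suggestion above can be sketched as the following (the pool name tank comes from earlier in the thread; the alternate-root path is arbitrary; this is an illustration, not a tested recovery procedure):

```shell
# With only the two mirrored drives physically connected, import the
# pool under an alternate root so its mounts don't collide with the
# running (live CD) system.
mkdir -p /tmp/mnt1
zpool import -f -R /tmp/mnt1 tank
```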
imho, not a good idea: if hdds in any two of your raid0 arrays fail, the zpool is dead
if possible just do single-hdd raid0 volumes, then use zfs to do the mirroring
raidz or raidz2 would be the last choice
Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D
On Aug 12, 2011, at 21:34, Tom Tang wrote:
> Suppose I want to build a 100-dr
Unfortunately this panics the same exact way. Thanks for the suggestion though.
- Original Message -
> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.""
> To: zfs-discuss@opensolaris.org
> Cc:
> Sent: Monday, August 15, 2011 3:06:20 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool imp
On Aug 11, 2011, at 1:16 PM, Ray Van Dolson wrote:
> On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote:
>> On 08/12/11 08:00 AM, Ray Van Dolson wrote:
>>> Are any of you using the Intel 320 as ZIL? It's MLC based, but I
>>> understand its wear and performance characteristics can be bum
On 8/15/2011 11:25 AM, Stu Whitefish wrote:
Hi. Thanks I have tried this on update 8 and Sol 11 Express.
The import always results in a kernel panic as shown in the picture.
I did not try an alternate mountpoint though. Would it make that much
difference?
try it
- Original Message -
On Fri, Aug 12, 2011 at 06:53:22PM -0700, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Ray Van Dolson
> >
> > For ZIL, I
> > suppose we could get the 300GB drive and overcommit to 95%!
>
> What kind of benefi
Hi Doug,
The "vms" pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a zpool replace on it
afterwards.
If not, you'll need to restore from backup, pref
On Fri, 12 Aug 2011, Tom Tang wrote:
Suppose I want to build a 100-drive storage system, wondering if
there is any disadvantages for me to setup 20 arrays of HW RAID0 (5
drives each), then setup ZFS file system on these 20 virtual drives
and configure them as RAIDZ?
The main concern would be
Hi. Thanks I have tried this on update 8 and Sol 11 Express.
The import always results in a kernel panic as shown in the picture.
I did not try an alternate mountpoint though. Would it make that much
difference?
- Original Message -
> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.""
>
D'oh. I shouldn't answer questions first thing Monday morning.
I think you should test this configuration with and without the
underlying hardware RAID.
If RAIDZ is the right redundancy level for your workload,
you might be pleasantly surprised with a RAIDZ configuration
built on the h/w raid array in
Help - I've got a bad disk in a zpool and need to replace it. I've got
an extra drive that's not being used, although it's still marked like
it's in a pool. So I need to get the "xvm" pool destroyed, c0t5d0
marked as available, and replace c0t3d0 with c0t5d0.
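What's being asked for might look like the following sketch (the degraded pool's name isn't given in the message, so mypool is a stand-in; zpool destroy is irreversible, so double-check device names first):

```shell
# Free the spare drive from its stale pool, then swap it in for the
# failing disk and let ZFS resilver onto it.
zpool destroy xvm                     # clears the old pool label on c0t5d0
zpool replace mypool c0t3d0 c0t5d0    # substitute the spare for the bad disk
zpool status mypool                   # watch resilver progress
```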
may be try the following
1) boot the s10u8 cd into single user mode (when booting from cdrom, choose
Solaris, then choose single user mode (6))
2) when asked to mount rpool, just say no
3)mkdir /tmp/mnt1 /tmp/mnt2
4)zpool import -f -R /tmp/mnt1 tank
5)zpool import -f -R /tmp/mnt2 rpool
On 8/15/2011 9:12 AM, Stu
David Wragg wrote:
I've not done anything different this time from when I created the original
(512b) pool. How would I check ashift?
For a zpool called "export"...
# zdb export | grep ashift
ashift: 12
^C
#
As far as I know (although I don't have any WD's), all the current 4k
sectorsiz
I've not done anything different this time from when I created the original
(512b) pool. How would I check ashift?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Did you 4k align your partition table and is ashift=12?
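The alignment half of that question can be checked by hand: read the slice's starting sector from prtvtoc (or format) and verify it is a multiple of 8, since 8 x 512-byte sectors = 4 KiB. A sketch, with the start sector as a stand-in value:

```shell
# With 512-byte logical sectors, a slice is 4 KiB-aligned when its
# starting sector is a multiple of 8 (8 * 512 B = 4096 B).
start=2048    # example start sector, as reported by prtvtoc or format
if [ $(( start % 8 )) -eq 0 ]; then
  echo "slice is 4k aligned"
else
  echo "slice is NOT 4k aligned"
fi
```

The ashift half is answered by the zdb check shown earlier in the thread.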
Hi all, first post to this mailing list so please forgive me if I miss
something obvious. Earlier this year I went over 80% disk utilisation on my
home server and saw performance start to degrade. I migrated from the old pool
of 4 x 1TB WD RE2-GPs (raidz1) to a new pool made of 6 x 2TB WD EURS (
Over provisioning does not directly increase flash performance, but allows
for greater reliability as the drive ages by improving garbage collection
efforts and reducing write amplification. This article doesn't provide any
sources, but it explains the concept at a very basic level -
http://thessd
Suppose I want to build a 100-drive storage system; I am wondering if there are any
disadvantages to setting up 20 arrays of HW RAID0 (5 drives each), then putting a
ZFS file system on these 20 virtual drives configured as RAIDZ?
I understand people always say ZFS doesn't prefer HW RAID. Under
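For concreteness, the proposed layout would look something like this (device names are hypothetical; each cXtYd0 below stands for one 5-drive HW RAID0 LUN, and the pool name is a placeholder):

```shell
# 20 virtual drives, each a 5-disk HW RAID0 LUN, combined into one
# single-parity RAIDZ vdev. Losing any single physical disk kills an
# entire LUN; RAIDZ survives one lost LUN, but a disk failure in a
# second LUN before resilver completes loses the whole pool.
zpool create bigpool raidz \
  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
  c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 \
  c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0
```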
> On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
> wrote:
>> # zpool import -f tank
>>
>> http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
>
> I encourage you to open a support case and ask for an escalation on CR
> 7056738.
>
> --
> Mike Gerdts
Hi Mike,
Unfortunately I
Hi Ian,
> I would use a newer (express maybe) system.
I did and it panics. Posted screenshot last week.
> Recent OpenSolaris based builds have a handy utility usbcopy.
Thanks, I used that to create a bootable Solaris 11 Express. Hiroshi did a
great job!
>> This is really frustrating. I have
- Original Message -
> From: Brian Wilson
> To: zfs-discuss@opensolaris.org
> Cc:
> Sent: Thursday, August 4, 2011 2:57:26 PM
> Subject: Re: [zfs-discuss] Wrong rpool used after reinstall!
>
> I'm curious - would it work to boot from a live CD, go to shell, and
> deport/import/rename t
> > We've migrated from an old samba installation to a new box with
> > openindiana, and it works well, but... It seems Windows now honours
> > the executable bit, so that .exe files for installing packages, are
> > no longer directly executable. While it is positive that windows
> > honours this b