I’m testing the new online zpool expansion feature of Solaris 10 9/10. My
zpool was created using the entire disk (i.e., no slice number was used). When I
resize my LUN on our SAN (an HP-EVA4400), the EFI label does not change.
On the zpool, I have autoexpand=on, and I’ve tried using zpool
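For reference, the procedure on Solaris 10 9/10 boils down to two commands; the pool and device names below are placeholders, not the actual ones:

zpool set autoexpand=on tank
# ask ZFS to grow into the newly available LUN space
zpool online -e tank c2t0d0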
On Tue, Oct 12, 2010 at 9:30 AM, Alexander Lesle gro...@tierarzt-mueller.de
wrote:
Hello guys,
I want to build a new NAS and I am searching for a controller.
At supermicro I found this new one with the LSI 2008 controller.
On Tue, October 12, 2010 18:31, Bob Friesenhahn wrote:
On Tue, 12 Oct 2010, Saxon, Will wrote:
Another article concerning Sandforce performance:
http://www.anandtech.com/show/3667/6
[...]
When I read this I thought that it kind of eliminated Sandforce
drives from consideration as SLOG
I have a Dell R710 which has been flaky for some time. It crashes about
once per week. I have literally replaced every piece of hardware in it, and
reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are using Dell hardware, with what
degree of success, and in
Do you have dedup on? Remove large files, zfs destroy a snapshot or a zvol,
and you'll see hangs like the ones you are describing.
Turning off dedup is the best option..
If you want dedup, get more RAM, and more, and more, and.. add an SSD cache
device.. then it usually works OK..
Right now I'm fighting an
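A quick way to check and disable it (pool name hypothetical; note that dedup=off only affects newly written data, existing DDT entries remain until the blocks are rewritten):

zfs get dedup tank
zfs set dedup=off tank
# rough view of the dedup table (DDT) size and memory footprint
zdb -DD tank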
From: Markus Kovero [mailto:markus.kov...@nebula.fi]
Sent: Wednesday, October 13, 2010 10:43 AM
Hi, we've been running OpenSolaris on Dell R710s with mixed results;
some work better than others, and we've been struggling with the same issue
as you are with the latest servers.
I suspect some kind
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Steve Radich, BitShop, Inc.
Do you have dedup on? Removing large files, zfs destroy a snapshot, or
a zvol and you'll see hangs like you are describing.
Thank you, but no.
I'm running sol
How consistent are your problems? If you change something and things get
better or worse, will you be able to notice?
Right now, I think I have improved matters by changing the PERC to
WriteThrough instead of WriteBack. Yesterday the system crashed several
times before I changed that, and
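If the PERC is LSI-based and MegaCli happens to be installed, the cache policy can be flipped without a reboot; this is an untested sketch, not verified on this exact controller:

# set all logical drives on all adapters to WriteThrough
MegaCli -LDSetProp WT -LAll -aAll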
Hi,
I have some Dell R710 and Dell R410 servers running OSOL (snv_130 or
snv_134) attached to a Supermicro chassis, and the PERC is only used for
the root disks.
I did get some issues with this type of server, but here's
what I did that made them quite stable:
- disable virtualization support
in
The Broadcom NIC was also a problem for me; if you downgrade the
FW to the 4.x series, everything is fine...
But i think there's a new updated driver somewhere...
Bruno
On Wed, 13 Oct 2010 14:58:32 +, Markus Kovero
markus.kov...@nebula.fi
wrote:
How consistent are your problems? If
Hi Ed,
I have been using the Dell r710 for a while. You might try
disabling c-states, as the problem you saw is identical to one I
was seeing (disk i/o stops working, other things are ok). Since
disabling c-states, I haven't seen the problem again.
max
On Oct 13, 2010, at 4:56 PM, Edward Ned
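One way to keep the CPUs out of deep C-states from within Solaris rather than the BIOS is /etc/power.conf; keyword support varies by release, so treat this as a sketch:

# /etc/power.conf
cpupm enable
cpu-deep-idle disable

Then run pmconfig to make the settings take effect.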
On Tue, Oct 12, 2010 at 08:49:00PM -0700, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
I have a pool with a single SLOG device rated at Y iops.
If I add a second (non-mirrored) SLOG device
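For reference, a second non-mirrored SLOG is just another top-level log vdev, and ZFS spreads log writes across both; device names below are hypothetical:

# add a second, striped log device
zpool add tank log c3t1d0
# or, to mirror the existing log device c3t0d0 instead
zpool attach tank c3t0d0 c3t1d0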
Here are some more findings...
The Nexenta box has 3 pools:
syspool: made of 2 mirrored (hardware RAID) local SAS disks
pool_sas: made of 22 15K SAS disks in ZFS mirrors on 2 JBODs on 2 controllers
pool_sata: made of 42 SATA disks in 6 RAIDZ2 vdevs on a single controller
When we copy data from
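For context, a layout like pool_sas is typically built with each mirror spanning the two controllers/JBODs, along these lines (hypothetical device names):

zpool create pool_sas \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0
# ... and so on for the remaining pairs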
Wanted to test the zfs diff command and ran into this.
I turned off all windows sharing.
The rpool has normal permissions for .zfs/shares.
How do I fix this?
Dirk
r...@osolpro:/data/.zfs# zfs diff d...@10aug2010 d...@13oct2010
Cannot stat /data/.zfs/shares/: unable to generate diffs
pwd
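For anyone who hasn't seen it, zfs diff compares two snapshots and prints one line per changed path, prefixed M (modified), + (added), - (removed) or R (renamed). The output below is a made-up illustration, not from this system:

zfs diff tank/data@snap1 tank/data@snap2
M       /tank/data/docs/report.txt
+       /tank/data/docs/new.txt
-       /tank/data/docs/old.txt
R       /tank/data/docs/a.txt -> /tank/data/docs/b.txt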
As a home user, here are my thoughts.
WD = ignore (TLER issues, parking issues, etc)
I recently built up a server on OSOL running Samsung 1.5TB drives. They are
green, but don't seem to have the irritating features found on the WD
green drives. They are 5400RPM, but seem to transfer data
On 10/13/10 10:20 AM, dirk schelfhout wrote:
Wanted to test the zfs diff command and ran into this.
I turned off all windows sharing.
The rpool has normal permissions for .zfs/shares.
How do I fix this?
Dirk
r...@osolpro:/data/.zfs# zfs diff d...@10aug2010 d...@13oct2010
Cannot stat
On Wed, Oct 13 at 10:13, Edward Ned Harvey wrote:
I have a Dell R710 which has been flaky for some time. It crashes about
once per week. I have literally replaced every piece of hardware in it,
and reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are
cd /data/.zfs
sche...@osolpro:/data/.zfs$ ls -alt
ls: cannot access shares: Operation not supported
total 4
drwxr-xr-x 19 schelfd staff 25 2010-10-13 18:57 ..
dr-xr-xr-x  2 root    root   2 2010-10-13 17:44 snapshot
dr-xr-xr-x  4 root    root   4 2009-01-28 23:08 .
??????????  ? ?       ?      ?                ? shares
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
Out of curiosity, did you run into this:
http://blogs.everycity.co.uk/alasdair/2010/06/broadcom-nics-dropping-
out-on-solaris-10/
I personally haven't had the Broadcom problem. When my
Dell R710 ... Solaris 10u9 ... With stability problems ...
Notice that I have several CPUs whose current_cstate is higher than the
supported_max_cstate.
Logically, that sounds like a bad thing. But I can't seem to find
documentation that defines the meaning of supported_max_cstates, to verify
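The raw values can be pulled per CPU with kstat's module:instance:name:statistic selector:

kstat -p cpu_info:::current_cstate
kstat -p cpu_info:::supported_max_cstates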
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Dell R710 ... Solaris 10u9 ... With stability problems ...
Notice that I have several CPUs whose current_cstate is higher than the
supported_max_cstate.
One more data
On 13 Oct 2010, at 18:30, Edward Ned Harvey wrote:
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
Out of curiosity, did you run into this:
http://blogs.everycity.co.uk/alasdair/2010/06/broadcom-nics-dropping-
out-on-solaris-10/
I
Hi James,
I'm looking into this and will get back to you shortly.
Thanks,
Cindy
On 10/13/10 00:14, James Patterson wrote:
I’m testing the new online zpool expansion feature of Solaris 10 9/10. My
zpool was created using the entire disk (i.e., no slice number was used). When I
resize my LUN
The only thing that still stands out is that network
operations (iSCSI and NFS) to external drives are
slow, correct?
Yes, that pretty much sums it up.
Just for completeness, what happens if you scp a file
to the three different pools? If the results are the
same as NFS and iSCSI, then I
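A simple way to run that test; the host name and mount points below are guesses (Nexenta typically mounts data pools under /volumes):

# create a 1 GB test file and push it to each pool over ssh
mkfile 1g /tmp/testfile
scp /tmp/testfile nexenta:/volumes/pool_sas/
scp /tmp/testfile nexenta:/volumes/pool_sata/
scp /tmp/testfile nexenta:/syspool/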
'Edward Ned Harvey' wrote:
I have a Dell R710 which has been flaky for some time. It crashes
about once per week. I have literally replaced every piece of hardware
in it, and reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are using Dell hardware, with
what
From: Henrik Johansen [mailto:hen...@scannet.dk]
The 10g models are stable - especially the R905's are real workhorses.
Would you generally consider all your machines stable now?
Can you easily pdsh to all those machines?
kstat | grep current_cstate ; kstat | grep supported_max_cstates
I'd
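Something like the following would fan that check out across a cluster; the host range is hypothetical:

pdsh -w 'node[01-20]' \
  'kstat -p cpu_info:::current_cstate; kstat -p cpu_info:::supported_max_cstates'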
Budy, if you are using raid-5 or raid-6 underneath ZFS, then you should know
that raid-5/6 might corrupt data. See here for lots of technical articles on why
raid-5 is bad:
http://www.baarf.com/
raid-6 is no better. I can show you links about raid-6 not being safe either.
It is a good thing you run ZFS,
Would it be possible to install OpenSolaris to a USB
disk and boot from it and try? That would take 1-2h
and could maybe help you narrow things down further?
I'm a little afraid of losing my data. It wouldn't be the end of the world, but
I'd rather avoid that. I'll do it only as a last resort.
Ian
--
More stuff...
We ran the same tests on another Nexenta box with fairly similar hardware and
had the exact same issues. The two boxes have the same models of HBAs, NICs
and JBODs but different CPUs and motherboards.
Our next test is to try with a different kind of HBA, we have a Dell H800
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of dirk schelfhout
Wanted to test the zfs diff command and ran into this.
What's zfs diff? I know it's been requested, but AFAIK it's not implemented
yet. Is that new feature being developed now
Folks,
If I have 20 disks to build a raidz3 pool, do I create one big raidz3 vdev or do
I create multiple raidz3 vdevs? Is there any advantage to having multiple
raidz3 vdevs in a single pool?
Thank you in advance for your help.
Regards,
Peter
--
This message posted from opensolaris.org
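To make the trade-off concrete: each raidz3 vdev delivers roughly the random IOPS of a single disk, so two vdevs double random IOPS at the cost of six parity disks instead of three. A two-vdev layout would be created like this (hypothetical device names):

zpool create tank \
  raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 \
  raidz3 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0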
Hello Peter,
Read the ZFS Best Practices Guide to start. If you still have questions, post
back to the list.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pool_Performance_Considerations
-Scott
On Oct 13, 2010, at 3:21 PM, Peter Taps wrote:
Folks,
If I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
If I have 20 disks to build a raidz3 pool, do I create one big raidz3
vdev or do I create multiple raidz3 vdevs? Is there any advantage to
having multiple raidz3 vdevs in a single
On Oct 13, 2010, at 12:59 PM, Orvar Korvar wrote:
On the other hand, ZFS is safe. There are research papers showing that ZFS
detects and corrects all errors. You want to see them?
I would. URLs please?
-- richard
--
OpenStorage Summit, October 25-27, Palo Alto, CA