Hi,
I've been searching around on the Internet to find some help with this, but have been unsuccessful so far.
I have some performance issues with my file server. I have an OpenSolaris server with a Pentium D 3GHz CPU, 4GB of memory, and a RAIDZ1 over 4 x Seagate (ST31500341AS) 1.5TB SATA
There is a lot there to reply to... but I will try and help...
Re. TLER: do not worry about TLER when using ZFS. ZFS will handle it either way and will NOT time out and drop the drive. It may wait a long time, but it will not time out and drop the drive - nor will it have an issue if you do
That's because NFS adds synchronous writes to the mix (e.g. the client needs to know certain transactions made it to nonvolatile storage in case the server restarts, etc.). The simplest safe solution, although not cheap, is to add an SSD log device to the pool.
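Adding a separate log device is a single zpool operation; a minimal sketch, assuming a pool named tank and the SSD at c1t2d0 (both names are hypothetical, substitute your own):

```shell
# Attach the SSD as a dedicated intent-log (slog) device.
zpool add tank log c1t2d0

# Confirm it shows up under a "logs" section in the pool layout.
zpool status tank
```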
On 23 Jul 2010, at 08:11, Sigbjorn
I've found the Seagate 7200.12 1TB drives and Hitachi 7K2000 2TB drives to be by far the best.
I've read lots of horror stories about any WD drive with 4k sectors; it's best to stay away from them.
I've also read plenty of people say that the green drives are terrible.
On Wed, Jul 21, 2010 at 12:42 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
Are there any drawbacks to partitioning an SSD in two parts and using L2ARC on one partition and ZIL on the other? Any thoughts?
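For reference, the split setup being asked about would look something like this (pool, device and slice names are hypothetical; on Solaris you would typically create the slices with format first):

```shell
# One slice as read cache (L2ARC), the other as the intent log (ZIL).
zpool add tank cache c1t2d0s0
zpool add tank log   c1t2d0s1
```

The usual caveat is that both workloads then compete for the same device's write bandwidth.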
--
This message posted from opensolaris.org
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:
Hi,
I've been searching around on the Internet to find some help with this, but have been unsuccessful so far.
I have some performance issues with my file server. I have an OpenSolaris server with a Pentium D 3GHz
Thomas Burgess wrote:
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie sigbj...@nixtra.com
mailto:sigbj...@nixtra.com wrote:
Hi,
I've been searching around on the Internet to find some help with this, but have been unsuccessful so far.
I have some performance issues with
I agree, I get appalling NFS speeds compared to CIFS/Samba, i.e. CIFS/Samba of 95-105MB/s and NFS of 5-20MB/s.
Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI target... as I am getting 2-5MB/s on that too.
On 23 Jul 2010, at 09:18, Andrew Gabriel andrew.gabr...@oracle.com wrote:
Thomas Burgess wrote:
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie sigbj...@nixtra.com
mailto:sigbj...@nixtra.com wrote:
Hi,
I've been searching around on the Internet to find some help with
this, but
I see I have already received several replies, thanks to all!
I would not like to risk losing any data, so I believe a ZIL device would be the way for me. I see these exist at different prices. Any reason why I would not buy a cheap one? Like the Intel X25-V SSD 40GB 2.5"?
What size of ZIL
On Fri, July 23, 2010 10:42, tomwaters wrote:
I agree, I get appalling NFS speeds compared to CIFS/Samba, i.e. CIFS/Samba of 95-105MB/s and NFS of 5-20MB/s.
Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI target... as I am getting 2-5MB/s on that too.
Sent from my iPhone
On 23 Jul 2010, at 09:42, tomwaters tomwat...@chadmail.com wrote:
I agree, I get appalling NFS speeds compared to CIFS/Samba, i.e. CIFS/Samba of 95-105MB/s and NFS of 5-20MB/s.
Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI target... as I am
On 23/07/2010 10:02, Sigbjorn Lie wrote:
On Fri, July 23, 2010 10:42, tomwaters wrote:
I agree, I get appalling NFS speeds compared to CIFS/Samba, i.e. CIFS/Samba of 95-105MB/s and NFS of 5-20MB/s.
Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI target... as I am
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:
I see I have already received several replies, thanks to all!
I would not like to risk losing any data, so I believe a ZIL device would be the way for me. I see these exist at different prices. Any reason why I would
Hi guys, I physically removed disks from a pool without offlining the pool first... (yes, I know). Anyway, I now want to delete/destroy the pool, but zpool destroy -f dvr says: cannot open 'dvr': no such pool
I cannot offline it or delete it!
I want to reuse the name dvr, but how do I do this?
If all disks were actually removed, renaming /etc/zfs/zpool.cache and rebooting
should do the trick.
I am not sure, but you may have to import the root pool at the next reboot.
F.
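The suggestion above amounts to something like the following (standard OpenSolaris cache path; whether the root pool needs re-importing depends on your setup):

```shell
# Move the stale cache aside so the dead pool isn't re-opened at boot.
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
init 6

# After reboot, re-import any pools that are still wanted.
zpool import -a
```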
tomwaters wrote:
Hi guys, I physically removed disks from a pool without offlining the pool
first...(yes I know)
Hello,
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes on the replaced disks.
I'm thinking a small performance degradation would sometimes be better than the increased risk window (where a
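For watching the resilver itself, something like the following helps tell whether the replaced disk is bandwidth- or IOPS-bound (pool name tank is hypothetical):

```shell
# Overall resilver progress and estimated completion time.
zpool status -v tank

# Per-device throughput and busy%, refreshed every 5 seconds.
iostat -xn 5
```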
On Fri, July 23, 2010 11:21, Thomas Burgess wrote:
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:
I see I have already received several replies, thanks to all!
I would not like to risk losing any data, so I believe a ZIL device would
be the way for me. I see
On 23/07/2010 10:53, Sigbjorn Lie wrote:
The X25-V does up to 25k random read IOPS and up to 2.5k random write IOPS, so that would seem okay for approx $80. :)
What about mirroring? Do I need mirrored ZIL devices in case of a power outage?
Note there is not a ZIL device, there is a
It was fine on the reboot... so even though zpool destroy threw up the errors, it did remove them... just needed a reboot to refresh/remove it in the zpool list.
thanks.
___
zfs-discuss mailing list
From: Robert Milkowski [mailto:mi...@task.gda.pl]
[In raidz] The issue is that each zfs filesystem block is basically
spread across
n-1 devices.
So every time you want to read back a single fs block you need to wait
for all n-1 devices to provide you with a part of it - and keep in mind
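A back-of-envelope sketch of why this matters for random reads, using assumed per-disk numbers (not from the thread):

```shell
# Assume each SATA disk does ~100 random-read IOPS.
DISK_IOPS=100

# raidz1: each block spans n-1 disks, so the whole vdev delivers
# roughly one disk's worth of random-read IOPS.
echo "raidz1 vdev:      ~${DISK_IOPS} IOPS"

# Two 2-way mirrors of the same 4 disks: each pair serves reads
# independently, so the pool scales with the number of vdevs.
echo "2 mirrored pairs: ~$(( 2 * DISK_IOPS )) IOPS"
```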
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Phil Harman
Milkowski and Neil Perrin's zil synchronicity [PSARC/2010/108] changes
with sync=disabled, when the changes work their way into an available
The fact that people run unsafe
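For context, the setting under discussion is per-dataset; a sketch, with a hypothetical dataset name, and noting that it deliberately trades safety for speed (clients can lose acknowledged writes if the server crashes):

```shell
# Only on builds where the PSARC/2010/108 sync property has integrated.
zfs set sync=disabled tank/nfsshare
zfs get sync tank/nfsshare
```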
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sigbjorn Lie
What size of ZIL device would be recommended for my pool consisting of
Get the smallest one. Even an unrealistically high-performance scenario cannot come close to using 32G. I am
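A common sizing rule of thumb is a few seconds' worth of synchronous write throughput, since the log only holds data not yet committed by a transaction group; a sketch with assumed numbers:

```shell
# Assume a saturated 1Gb/s NFS link (~120MB/s) and ~10s between
# transaction-group commits (both figures are assumptions).
THROUGHPUT_MB_S=120
SECONDS_BUFFERED=10
echo "~$(( THROUGHPUT_MB_S * SECONDS_BUFFERED )) MB of slog in use, worst case"
```

Which is why even a small SSD is far larger than the log will ever need.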
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sigbjorn Lie
What about mirroring? Do I need mirrored ZIL devices in case of a power
outage?
You don't need mirroring for the sake of *power outage* but you *do* need
mirroring for the sake
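Should you want it, mirroring the log device uses the same syntax as mirroring data (device names hypothetical):

```shell
# Two SSDs as a mirrored slog; the log survives losing either one.
zpool add tank log mirror c1t2d0 c1t3d0
```

As I recall, the usual argument for the mirror is that on older pool versions losing an unmirrored slog could render the pool unimportable.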
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Phil Harman
Milkowski and Neil Perrin's zil synchronicity [PSARC/2010/108] changes
with sync=disabled, when the changes work their way into an available
The fact that
Edward Ned Harvey wrote:
From: Robert Milkowski [mailto:mi...@task.gda.pl]
[In raidz] The issue is that each zfs filesystem block is basically
spread across
n-1 devices.
So every time you want to read back a single fs block you need to wait
for all n-1 devices to provide you with a part of it
Hi Ryan,
You are seeing this CR:
http://bugs.opensolaris.org/view_bug.do?bug_id=6916574
zpool add -n displays incorrect structure
This is a display problem only.
Thanks,
Cindy
On 07/22/10 15:54, Ryan Schwartz wrote:
I've got a system running s10x_u7wos_08 with only half of the disks
Phil Harman wrote:
Not the thread hijack, but I assume a SSD ZIL will similarly improve
an iSCSI target...as I am getting 2-5MB on that too.
Yes, it generally will. I've seen some huge improvements with iSCSI,
but YMMV depending on your config, application and workload.
Sorry this isn't
On Jul 23, 2010, at 2:31 AM, Giovanni Tirloni wrote:
Hello,
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes on the replaced disks.
This is lower than I expect, but it may be IOPS-bound.
On 07/23/10 02:31, Giovanni Tirloni wrote:
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes on the replaced disks.
What build of opensolaris are you running? There were some recent
On Fri, Jul 23, 2010 at 11:59 AM, Richard Elling rich...@nexenta.com wrote:
On Jul 23, 2010, at 2:31 AM, Giovanni Tirloni wrote:
Hello,
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes on
On Fri, Jul 23, 2010 at 12:50 PM, Bill Sommerfeld
bill.sommerf...@oracle.com wrote:
On 07/23/10 02:31, Giovanni Tirloni wrote:
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
E.g. iostat shows 5MB/s writes on the
Symptoms:
1. System remains pingable
2. When trying to ssh in, the terminal hangs after entering the password
3. At the console, the terminal hangs after entering the password
4. Problem persists after disabling snapshots/compression/dedup
Solution:
Hard reboot (A+F1 does not work)
Configuration:
Supermicro Mobo
24 x
On Jul 23, 2010, at 9:10 AM, Ruslan Sivak wrote:
I have recently upgraded from NexentaStor 2 to NexentaStor 3 and somehow one
of my volumes got corrupted. It's showing up as a socket. Has anyone seen
this before? Is there a way to get my data back? It seems like it's still
there, but
On 7/23/2010 3:39 AM, tomwaters wrote:
There is a lot there to reply to... but I will try and help...
Re. TLER: do not worry about TLER when using ZFS. ZFS will handle it either way and will NOT time out and drop the drive. It may wait a long time, but it will not time out and drop the drive
Hi,
Is it true? Any way to find it in every hierarchy?
Thanks.
Fred
From: Arne Jansen [mailto:sensi...@gmx.net]
Can anyone else confirm or deny the correctness of this statement?
As I understand it that's the whole point of raidz. Each block is its
own
stripe.
Nope, that doesn't count for confirmation. It is at least theoretically
possible to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Linder, Doug
On a related note - all other things being equal, is there any reason to choose NFS over iSCSI, or vice versa? I'm currently looking at this
iSCSI and NFS are completely
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fred Liu
Is it true? Any way to find it in every hierarchy?
Yup. Nope.
If you use ZFS, you make a filesystem at whatever level you need it, in
order for the .zfs directory to be available
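A sketch of what that looks like in practice (dataset and snapshot names hypothetical):

```shell
# Every ZFS filesystem exposes its own .zfs; make it visible if you like
# (it is hidden from directory listings by default).
zfs set snapdir=visible tank/home
zfs snapshot tank/home@demo
ls /tank/home/.zfs/snapshot    # the "demo" snapshot appears here
```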
Thanks.
But too many file systems may be an issue for management, and a normal user cannot create file systems.
I think it should work the way NetApp's snapshots do.
It is a pity.
Thanks.
Fred
-Original Message-
From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
Sent: Saturday, July
Fundamentally, my recommendation is to choose NFS if your clients can use it. You'll get a lot of potential advantages from the NFS/ZFS integration, and hence better performance. Plus you can serve multiple clients, etc.
The only reason to use iSCSI is when you don't have a choice, IMO. You
should only
On Jul 23, 2010, at 7:33 PM, Fred Liu fred_...@issi.com wrote:
Thanks.
But too many file systems may be an issue for management, and a normal user cannot create file systems.
The ability to create or snapshot a file system can easily be delegated to a
user.
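Delegation is done with zfs allow; a sketch, with the user and dataset names hypothetical:

```shell
# Let user 'fred' create, snapshot and mount filesystems under his home.
zfs allow fred create,snapshot,mount tank/home/fred

# Show what has been delegated on the dataset.
zfs allow tank/home/fred
```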
I think it should go like
I think it should work the way NetApp's snapshots do.
There was a long thread on this topic earlier this year. Please see the
archives for details.
Do you have the URL? I don't have a long subscription
I too do not have a long subscription, and I would be interested in
the subject line