On Tue, Oct 19, 2010 at 6:13 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net
wrote:
Now, is there a way, manually or automatically, to somehow balance the data
across these LVOLs? My first guess is that doing this _automatically_ will
require block pointer rewrite, but then, is there a way to hack
Hi all
I'm reposting this since I don't know if the first one made it to the list.
I created a 1TB sparse file on the box recently using dd from /dev/zero, in
order to get a rough idea of write performance, and when I deleted it
the space was not freed.
This is on a raidz1, 4x1TB SATA drives.
I removed the file using a simple rm /mnt/tank0/temp/mytempfile.bin.
It's definitely gone. But the space hasn't been freed.
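For anyone wanting to reproduce this, a rough sketch of the test; the paths and sizes are illustrative, not necessarily the original poster's exact commands:

  # write out a ~1TB file of zeros, then watch the space accounting
  dd if=/dev/zero of=/mnt/tank0/temp/mytempfile.bin bs=1M count=1048576
  zfs list tank0
  rm /mnt/tank0/temp/mytempfile.bin
  sync
  # space normally returns once the delete is committed in a later txg;
  # snapshots, or the bug referenced below, can keep it held
  zfs list tank0
  zpool list tank0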
I have been pointed in the direction of this bug
http://bugs.opensolaris.org/view_bug.do?bug_id=6792701
It was apparently introduced in build 94 and at that time we had a
If you do a dd to the storage from the heads, do you still get the same
issues?
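A minimal local dd check of the kind being suggested might look like this (file path and sizes are just placeholders):

  # sequential write straight into the pool, bypassing NFS/iSCSI
  dd if=/dev/zero of=/tank/test/ddtest.bin bs=1M count=10240
  # and read it back
  dd if=/tank/test/ddtest.bin of=/dev/null bs=1M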
On 31 Oct 2010 12:40, Ian D rewar...@hotmail.com wrote:
I get that more cores don't necessarily mean better performance, but I doubt
that both the latest AMD CPUs (the Magny-Cours) and the latest Intel CPUs
(the
If you do a dd to the storage from the heads, do
you still get the same issues?
no, local reads/writes are great, they never choke. It's whenever NFS or iSCSI
are involved and the reads/writes are done from a remote box that we
experience the problem. Local operations barely affect the
What if you connect locally via NFS or iSCSI?
SR
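One way to test what SR is suggesting, keeping the NFS stack in play but taking the network out of the picture, is a loopback mount on the server itself. A sketch, with dataset and mount point names invented for illustration:

  # share the filesystem, then mount it back over NFS on the same host
  zfs set sharenfs=on tank/test
  mkdir -p /mnt/nfstest
  mount -F nfs localhost:/tank/test /mnt/nfstest
  # now repeat the same workload against /mnt/nfstest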
Hello,
I realize this is a perpetual topic, but after re-reading all the messages I
had saved on the subject I find that I am still uncertain. I am finally ready
to add a couple of mirrored SSD ZIL drives to my ZFS box and would like to know
the current recommendations on make and model. I was
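Whatever make and model you end up with, attaching the pair as a mirrored log device is a one-liner; a sketch, with device names that will obviously differ on your box:

  # add two SSDs as a mirrored separate log (slog) device
  zpool add tank log mirror c4t0d0 c4t1d0
  zpool status tank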
Check your TXG settings; it could be a timing issue, a Nagle's algorithm issue, or a TCP
buffer issue. Also check the system setup properties.
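For what it's worth, a sketch of where those knobs live on Solaris/OpenSolaris; the values shown are examples only, not recommendations:

  # txg sync interval, set in /etc/system and reboot:
  #   set zfs:zfs_txg_timeout = 5
  # Nagle: send small writes immediately instead of coalescing
  ndd -set /dev/tcp tcp_naglim_def 1
  # TCP send/receive buffer sizes
  ndd -set /dev/tcp tcp_xmit_hiwat 1048576
  ndd -set /dev/tcp tcp_recv_hiwat 1048576
  # check current values with ndd -get, e.g.
  ndd -get /dev/tcp tcp_naglim_def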
On 1 Nov 2010 19:36, SR rraj...@gmail.com wrote:
What if you connect locally via NFS or iSCSI?
SR
- Original Message -
Likely you don't have enough RAM or CPU in the box.
The Nexenta box has 256G of RAM and the latest X7500 series CPUs. That
said, the load does get crazy high (like 35+) very quickly. We can't
figure out what's taking so much CPU. It happens even when
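For hunting down where the cycles are going, the usual Solaris tools apply; a sketch, with nothing Nexenta-specific assumed:

  # per-thread microstate accounting, refreshed every 5 seconds
  prstat -mLc 5
  # per-CPU view; high %sys or smtx points at kernel/lock time
  mpstat 5
  # kernel profiling and lock contention for 30 seconds
  lockstat -kIW -D 20 sleep 30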
- Original Message -
Hello,
I realize this is a perpetual topic, but after re-reading all the
messages I had saved on the subject I find that I am still uncertain.
I am finally ready to add a couple of mirrored SSD ZIL drives to my
ZFS box and would like to know the current
Maybe you are experiencing this:
http://opensolaris.org/jive/thread.jspa?threadID=119421
You doubt AMD or Intel CPUs suffer from bad cache
management?
To rule that out, we've tried using an older server (about 4 years
old) as the head, and we see the same pattern. There it's actually even more obvious that
it consumes a whole lot of CPU cycles. Using the same box as a Linux-based NFS
Maybe you are experiencing this:
http://opensolaris.org/jive/thread.jspa?threadID=11942
It does look like this... Is this really the expected behaviour? That's just
unacceptable. It is so bad it sometimes drops connections and fails copies and
SQL queries...
Ian
After upgrading to snv_150, I'm getting continual writes at about 30
wps. The M40 is much more sluggish and has been doing this since last
Thursday, after the upgrade. iosnoop constantly shows large zpool-rpool
writes on an idle system:
UID PID D BLOCK SIZE COMM PATHNAME
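For reference, one way to capture just that traffic with the DTraceToolkit iosnoop (the grep pattern is simply whatever appears in the COMM column):

  # keep only the zpool-rpool I/O from the live trace
  iosnoop | grep zpool-rpool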
On 18 Oct 2010, at 08:44, Habony, Zsolt zsolt.hab...@hp.com wrote:
Hi,
I have seen a similar question on this list in the archive but
haven’t seen the answer.
Can I avoid striping across top-level vdevs?
If I use a zpool which is one LUN from the
On 18 Oct 2010, at 12:40, Habony, Zsolt wrote:
Is there a way to avoid it, or can we be sure that the problem does not
exist at all?
Grow the existing LUN rather than adding another one.
The only way to have ZFS not stripe is to not give it devices to stripe
over. So stick with simple
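A sketch of the grow-the-LUN route, once the array has resized the LUN underneath; pool and device names are placeholders:

  # let the pool pick up LUN growth automatically ...
  zpool set autoexpand=on mypool
  # ... or expand a specific device by hand
  zpool online -e mypool c0t0d0
  zpool list mypool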
Congratulations Ed, and welcome to open systems…
Ah, but Nexenta is open and has no vendor lock-in. What you probably
should have done is bank everything on Illumos and Nexenta, a winning
combination by all accounts.
But then again, you could have used Linux on any hardware as well.
I've been having the same problems, and it appears to be from a remote
monitoring app that calls zpool status and/or zfs list. I've also found
problems with PERC and I'm finally replacing the PERC cards with SAS5/E
controllers (which are much cheaper anyway). Every time I reboot, the PERC
Jim,
They are running Solaris 10 11/06 (u3) with kernel patch 142900-12. See
inline for the rest...
On 10/25/10 11:19 AM, Jim Mauro wrote:
Hi Jim - cross-posting to zfs-discuss, because 20X is, to say the
least, compelling.
Obviously, it would be awesome if we had the opportunity to
This combination of tunables is probably a worst-case set for doing
sequential or multi-block reads, particularly from a COW file system.
We know that disaggregation can occur due to small, random writes,
and that this can result in an increase in IOPS required to do
sequential or multi-block
On 26 Oct 2010, at 16:21, Matthieu Fecteau wrote:
Hi,
I'm planning to use the replication scripts on that page:
http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
It uses Time Slider (other approaches are possible) to take snapshots and uses zfs
send/receive to
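The underlying mechanism those scripts wrap is plain incremental send/receive; a bare-bones sketch, with dataset, snapshot and host names invented for illustration:

  # initial full copy to the remote pool
  zfs send tank/data@snap1 | ssh backuphost zfs receive -F backup/data
  # later, send only the delta between two snapshots
  zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data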
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and a week being 168 hours, that is an awfully long resilver.
On Oct 30, 2010, at 12:25 PM, Cuyler Dingwell wrote:
It would have been nice if performance didn't take a nose dive when nearing
(and not even at) capacity. In my case I would have preferred if the
necessary space was reserved and I got a space issue before degrading to the
point of
On Nov 1, 2010, at 5:09 PM, Ian D rewar...@hotmail.com wrote:
Maybe you are experiencing this:
http://opensolaris.org/jive/thread.jspa?threadID=11942
It does look like this... Is this really the expected behaviour? That's just
unacceptable. It is so bad it sometimes drops connections and
On Nov 1, 2010, at 3:33 PM, Mark Sandrock mark.sandr...@oracle.com wrote:
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mark Sandrock
I'm working with someone who replaced a failed 1TB drive (50%
utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon,
Ross Walker wrote:
On Nov 1, 2010, at 5:09 PM, Ian D rewar...@hotmail.com wrote:
Maybe you are experiencing this:
http://opensolaris.org/jive/thread.jspa?threadID=11942
It does look like this... Is this really the expected behaviour? That's just
unacceptable. It is so bad it
On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done,
On 11/ 2/10 11:55 AM, Ross Walker wrote:
On Nov 1, 2010, at 3:33 PM, Mark Sandrock mark.sandr...@oracle.com wrote:
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be