2010/12/8 taemun tae...@gmail.com:
Dedup? Taking a long time to boot after a hard reboot following a lockup?
I'll bet that it hard-locked whilst deleting some files or a dataset that
was dedup'd. After the delete is started, it spends *ages* cleaning up the
DDT (the table containing a list of dedup'd
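The DDT cleanup described above can be put in perspective by looking at how large the table actually is. A minimal sketch, assuming a pool named "tank" (hypothetical; substitute your own pool):

```shell
# Print a summary of the dedup table (DDT) for pool "tank":
# total entries, on-disk size, and in-core size per entry.
zdb -D tank

# Add a histogram of reference counts; large entry counts here mean
# freeing dedup'd blocks has that many DDT entries to update.
zdb -DD tank
```

A DDT with tens of millions of entries that does not fit in ARC is exactly the situation where a large delete grinds for hours.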
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Also, if you have an NFS datastore which is not available at the time of ESX
bootup, then the NFS datastore doesn't come online, and there seems to be no
way of telling
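One workaround for a datastore that missed the boot-time mount is to remove and re-add it from the ESX service console. A sketch assuming the classic esxcfg-nas tool; the server name, export path, and datastore label are hypothetical:

```shell
# List the NFS datastores this ESX host knows about and their state.
esxcfg-nas -l

# Re-add (remount) an NFS datastore that failed to mount at boot:
# -o = NFS server, -s = exported share path, final argument = label.
esxcfg-nas -a -o nfs-server.example.com -s /export/vmstore vmstore1
```
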
On 9 Dec. 2010, at 13:41, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Also, if you have an NFS datastore which is not available at the time of ESX
bootup, then the NFS datastore doesn't
Looking for a little help, please. A contact from Oracle (Sun)
suggested I pose the question to this email list.
We're using ZFS on Solaris 10 in an application with so many
directory/subdirectory layers, and so many small files (~1-2 KB), that we
ran out of inodes (over 30 million!).
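A first step in diagnosing a situation like this is simply measuring it. A minimal sketch, assuming the affected tree lives at /pool/app (hypothetical path):

```shell
# Count the files (and hence objects/"inodes") in the affected tree,
# staying on one filesystem with -xdev.
find /pool/app -xdev | wc -l

# On Solaris, df -o i reports inode usage. This is primarily meaningful
# for UFS; on ZFS, file nodes are allocated dynamically, bounded by
# available pool space rather than a fixed inode table.
df -o i /pool/app
```

If the limit being hit is really a fixed inode table, that points at a UFS filesystem (or a ZFS quota) somewhere in the stack rather than ZFS itself.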
2010/12/8 gon...@comcast.net:
To explain the slow-delete problem further:
It is absolutely critical for ZFS to manage the incoming data rate.
This is done reasonably well for write transactions.
Delete transactions, prior to dedup, were very lightweight, nearly free,
so these are not
On 09 December, 2010 - David Strom sent me these 0,7K bytes:
Looking for a little help, please. A contact from Oracle (Sun)
suggested I pose the question to this email list.
We're using ZFS on Solaris 10 in an application where there are so many
directory-subdirectory layers, and a lot of
On Thu, Dec 9, 2010 at 1:23 PM, David Strom dst...@ciesin.columbia.edu wrote:
Looking for a little help, please. A contact from Oracle (Sun) suggested I
pose the question to this email list.
We're using ZFS on Solaris 10 in an application where there are so many
directory-subdirectory layers, and
Hi All,
Is there a way to tune ZFS prefetch on a per-pool basis? I have
a customer who is seeing slow performance on a pool that contains
multiple tablespaces from an Oracle database; looking at the LUNs
associated with that pool, they are constantly at 80% -
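As far as I know there is no supported per-pool prefetch knob; the file-level prefetch tunable is system-wide. A sketch of the two usual ways to flip it on Solaris 10 (both affect every pool on the host):

```shell
# Persistent: add the tunable to /etc/system and reboot.
echo 'set zfs:zfs_prefetch_disable = 1' >> /etc/system

# Live: patch the kernel variable with mdb; takes effect immediately
# but does not survive a reboot.
echo 'zfs_prefetch_disable/W 1' | mdb -kw
```

For an OLTP-style Oracle workload doing small random reads, prefetch can inflate the read load on the LUNs, so testing with it disabled is a reasonable experiment.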
Hello Tony,
If the hardware hasn't changed, I'd look at the workload on the database
server. If the customer is taking regular Statspack snapshots, they might be
able to see what's causing the extra activity. They can use AWR or the
diagnostic pack, if they are licensed, to see the offending SQL or
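For sites without an AWR license, the Statspack workflow above boils down to taking two snapshots around the slow period and diffing them. A minimal sketch; the perfstat credentials are the conventional defaults and may differ on your system:

```shell
sqlplus perfstat/perfstat <<'EOF'
-- Take a snapshot now; take another after the slow interval.
EXEC statspack.snap;

-- Then generate a report between any two snapshot IDs:
-- @?/rdbms/admin/spreport.sql
EOF
```

The top-SQL and wait-event sections of the report usually point straight at whatever is driving the extra I/O.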
I've also found this
http://developers.sun.com/solaris/docs/wp-oraclezfsconfig-0510_ds_ac2.pdf
On 9 December 2010 20:22, Jabbar aja...@gmail.com wrote:
Hello Tony,
If the hardware hasn't changed I'd look at the workload on the database
server. If the customer is taking regular statspack
Hi
I'd certainly look at the SQL being run and examine the explain plan; in
particular, SQL_TRACE, TIMED_STATISTICS, and TKPROF will really
highlight issues.
See the following for AUTOTRACE, which can generate an explain plan, etc.
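The tracing tools named above fit together roughly like this. A sketch with hypothetical credentials, table, and trace-file names:

```shell
sqlplus scott/tiger <<'EOF'
-- AUTOTRACE shows the explain plan without fetching rows to the client.
SET AUTOTRACE TRACEONLY EXPLAIN

-- SQL_TRACE + TIMED_STATISTICS write a raw trace of each statement,
-- with timings, to the server's user_dump_dest directory.
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

SELECT COUNT(*) FROM emp;
EOF

# Format the raw trace into a readable per-statement report,
# excluding recursive SYS statements.
tkprof ora_12345.trc report.txt sys=no
```

The TKPROF output breaks each statement into parse/execute/fetch counts and times, which is where the real cost usually shows up.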
Hi all. From much of the documentation I've seen, the advice is to set
readonly=on on volumes on the receiving side during send/receive
operations. Is this still a requirement?
I've been trying send/receive while NOT setting the receiver to
readonly and haven't seen any problems, even though
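For reference, the conservative pattern the documentation recommends looks like this. A sketch with hypothetical pool, filesystem, and snapshot names:

```shell
# Keep the receiving filesystem read-only between incremental receives,
# so nothing writes to it while its contents are being replaced.
zfs set readonly=on backup/data

# Send the incremental stream between two snapshots into it.
zfs send -i tank/data@snap1 tank/data@snap2 | zfs recv backup/data
```

The motivation is that zfs recv rolls the target back to the incremental's base snapshot, so any local writes made in between would either be destroyed or cause the receive to fail; readonly=on prevents them from happening at all.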
On 12/10/10 12:31 PM, Moazam Raja wrote:
Hi all, from much of the documentation I've seen, the advice is to set
readonly=on on volumes on the receiving side during send/receive
operations. Is this still a requirement?
I've been trying the send/receive while NOT setting the receiver to
readonly
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins i...@ianshome.com wrote:
On 12/10/10 12:31 PM, Moazam Raja wrote:
So, is it OK to send/recv while the receive volume is write-enabled?
A write can fail if a filesystem is unmounted for update.
True, but ZFS recv will not normally unmount a