Richard Elling wrote:
Miles Nordin wrote:
ave == Andre van Eyssen an...@purplecow.org writes:
et == Erik Trimble erik.trim...@sun.com writes:
ea == Erik Ableson eable...@mac.com writes:
edm == Eric D. Mudama edmud...@bounceswoosh.org writes:
ave The LSI SAS controllers with
Hi,
in nfs-discuss, Andrew Watkins has brought up the question of why an inheritable
ACE is split into two ACEs when a descendant directory is created.
Ref:
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_acl.c#1506
I must admit that I had observed this
Hi all,
I have a ZFS mirror of two 500GB disks; I'd like to up these to 1TB disks. How
can I do this? I must break the mirror as I don't have enough controller ports
on my system board. My current mirror looks like this:
r...@beleg-ia:/share/media# zpool status share
pool: share
state: ONLINE
Thank you for your reply.
I had read the blog. The most interesting thing is WHY there is no performance
improvement when any compression is set.
Compressed reads do less I/O than uncompressed reads, and decompression is
faster than compression,
so if lzjb writes are faster than uncompressed writes, the lzjb
On Wed, 24 Jun 2009 03:14:52 PDT
Ben no-re...@opensolaris.org wrote:
If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
resilver, then detach c5d0s0 and add another 1TB drive and attach
that to the zpool, will that up the storage of the pool?
That will do the trick perfectly. I
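Spelled out, the swap would look roughly like this (the 1TB device names
c6d0/c6d1 are hypothetical placeholders; a sketch, not tested commands):

  zpool detach share c5d1s0          # break the mirror
  zpool attach share c5d0s0 c6d0     # attach the first 1TB drive
  zpool status share                 # wait until the resilver completes
  zpool detach share c5d0s0          # drop the second 500GB disk
  zpool attach share c6d0 c6d1       # attach the second 1TB drive

The extra space only shows up once both sides of the mirror are the larger
size; on older builds an export/import of the pool may be needed to see it.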
Hi,
In the company where I'm working we use the zpool status -x output in
monitoring scripts to check the health of all ZFS pools. Everything is OK
except on a few systems where the zpool status -x output is exactly the
same as zpool status. I'm not sure, but it looks like this behavior is not
OS-version specific (I
How to turn off the timeslider snapshots on certain file systems?
http://wikis.sun.com/display/OpenSolarisInfo/How+to+Manage+the+Automatic+ZFS+Snapshot+Service
Thank you, very handy stuff!
BTW - will zfs automatically delete snapshots when I run low on disk space?
--
With respect,
Nik
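For the per-filesystem part of the question, the auto-snapshot service
honors a ZFS user property, so something like this (the dataset name is a
hypothetical example) turns Time Slider snapshots off for one filesystem:

  zfs set com.sun:auto-snapshot=false rpool/export/media
  zfs set com.sun:auto-snapshot:frequent=false rpool/export/media  # per schedule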
Hi,
I have OpenSolaris 2009.06 currently installed on a 160 GB IDE drive.
I want to replace this with a 2-way mirror 30 GB SATA SSD boot setup.
I found these 2 threads which seem to answer some questions I had, but I still
have some questions.
dick hoogendijk wrote:
On Wed, 24 Jun 2009 03:14:52 PDT
Ben no-re...@opensolaris.org wrote:
If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
resilver, then detach c5d0s0 and add another 1TB drive and attach
that to the zpool, will that up the storage of the pool?
That
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself in a mess
and having to reinstall, thus losing my
Hi,
I'm getting involved in a pre-production test and want to be sure of the
means I'll have to use.
Take: 2 SunFire x4150, 1 Cisco 3750 Gb switch,
1 private VLAN on the Gb ports of the switch.
1 x4150 is going to be ESX4 aka vSphere Server ( 1
cindy.swearin...@sun.com writes:
Hi Harry,
Are you attempting this change when logged in as yourself or
as root?
my user
The top section of this procedure describes how to add yourself
to the zfssnap role. Otherwise, if you are doing this step as a
non-root user, it probably won't work.
my
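For reference, assigning the role is a one-liner as root (the user name
harry is a made-up example):

  usermod -R zfssnap harry    # grant the zfssnap role to the user
  su zfssnap                  # assume the role before the snapshot steps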
Many thanks Thomas,
I have a test machine so I shall try it on that before I try it on my main
system.
Thanks very much once again,
Ben
Nils Goroll wrote:
Hi,
I just noticed that Mark Shellenbaum has replied to the same question in
a thread ACL not being inherited correctly on zfs-discuss.
Sorry for the noise.
Out of curiosity, I would still be interested in answers to this question:
Is there a reason why inheritable
On 24.06.09 17:10, Thomas Maier-Komor wrote:
Ben wrote:
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but I haven't stressed it
yet ;-)
Therefore, I don't know if 1Gb full duplex per port will be enough, and I
don't know either whether I'll have to put some sort of redundant access
from ESX to the SAN, etc.
It might be easier to look for the pool status thusly
zpool get health poolname
-- richard
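A minimal monitoring sketch built on that suggestion, with a hypothetical
pool name and alert recipient:

  #!/bin/sh
  # alert if the pool reports anything other than ONLINE
  health=`zpool get health tank | awk 'NR==2 {print $3}'`
  if [ "$health" != "ONLINE" ]; then
      echo "pool tank is $health" | mailx -s "zpool health alert" root
  fi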
Tomasz Kłoczko wrote:
Hi,
In the company where I'm working we use the zpool status -x output in
monitoring scripts to check the health of all ZFS pools. Everything is OK
except on a few systems where
The first 2 disks: a hardware mirror of 146GB with a Sol10 UFS filesystem on it.
The other 6 will be used as a raidz2 ZFS volume of 535G,
with compression and shareiscsi=on.
I'm going to CHAP-protect it soon...
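For reference, a minimal sketch of that kind of setup using the legacy
shareiscsi support (device and dataset names are hypothetical):

  zpool create tank raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
  zfs set compression=on tank
  zfs create -V 535g tank/esxvol      # zvol to export as an iSCSI LUN
  zfs set shareiscsi=on tank/esxvol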
you're not going to get the random read/write performance you need
for a VM backend out
See this thread for information on load testing for vmware:
http://communities.vmware.com/thread/73745?tstart=0&start=0
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare what
you get with what
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
Yes, this does sound very similar. It looks to me like data from read
files is clogging the ARC so that there is no more room for more
writes when ZFS periodically goes to commit unwritten data.
I'm wondering if changing
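One speculative knob (my assumption, not something this thread confirms):
capping the ARC in /etc/system so cached read data leaves headroom for the
write buffer, e.g. on a machine with plenty of RAM:

  * Cap the ZFS ARC at 4 GB; the value is a guess and must be sized
  * to the machine - this is speculation, not a confirmed fix.
  set zfs:zfs_arc_max = 0x100000000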
Bottom line with virtual machines is that your IO will be random by
definition since it all goes into the same pipe. If you want to be
able to scale, go with RAID 1 vdevs. And don't skimp on the memory.
Our current experience hasn't shown a need for an SSD for the ZIL but
it might be
On Wed, 24 Jun 2009, Ethan Erchinger wrote:
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
Yes, this does sound very similar. It looks to me like data from read
files is clogging the ARC so that there is no more room for more
writes when ZFS periodically goes to commit
Ok, this is getting weird. I just ran a zpool clear, and now it says:
# zpool clear zfspool
# zpool status
pool: zfspool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool
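If you do take the suggested action, the upgrade itself is quick, though
note that systems running older ZFS code will no longer be able to import
the pool afterwards:

  zpool upgrade -v        # list the on-disk versions this system supports
  zpool upgrade zfspool   # move the pool to the newest supported version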
Ben wrote:
Hi all,
I have a ZFS mirror of two 500GB disks; I'd like to up these to 1TB disks. How
can I do this? I must break the mirror as I don't have enough controller ports
on my system board. My current mirror looks like this:
r...@beleg-ia:/share/media# zpool status share
pool: share
Chookiex wrote:
Thank you for your reply.
I had read the blog. The most interesting thing is WHY there is no
performance improvement when any compression is set?
There are many potential reasons, so I'd first try to identify what your
current bandwidth limiter is. If you're running out of CPU on
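A quick way to check for a CPU limit (my suggestion, with a hypothetical
pool name) is to watch the CPUs and the pool while the compressed write
workload runs:

  mpstat 5            # sustained 0% idle across CPUs points at compression cost
  zpool iostat tank 5 # pool throughput over the same interval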
I think this is the board that shipped in the original T2000 machines
before they began putting the sas/sata onboard: LSISAS3080X-R
Can anyone verify this?
Justin Stringfellow wrote:
Richard Elling wrote:
Miles Nordin wrote:
ave == Andre van Eyssen an...@purplecow.org writes:
et == Erik
Dennis is correct in that there are significant areas where 32-bit
systems will remain the norm for some time to come.
Think of the hundreds of thousands of VMware ESX/Workstation/Player/Server
installations on non-VT-capable CPUs - even if the CPU has 64-bit capability, a
VM cannot run in
jr == Jacob Ritorto jacob.rito...@gmail.com writes:
jr I think this is the board that shipped in the original
jr T2000 machines before they began putting the sas/sata onboard:
jr LSISAS3080X-R
jr Can anyone verify this?
can't verify but FWIW i fucked it up:
I
Thomas Maier-Komor wrote:
Ben wrote:
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself in a mess
and
Hey sbreden! :o)
No, I haven't tried to tinker with my drives. They have been functioning all
the time. I suspect (cannot remember) that each SATA slot on the card has a
number attached to it? Can anyone confirm this? If I am right, OpenSolaris
will say something like disk 6 is broken and on
Wouldn't it make sense for the timing technique to be used if the data is
coming in at a rate slower than the underlying disk storage?
But then if the data starts to come at a faster rate, ZFS needs to start
streaming to disk as quickly as it can, and instead of re-ordering writes in
blocks,
Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Marcelo Leal wrote:
Hello Bob,
I think that is related to my post about zio_taskq_threads and TXG
sync:
( http://www.opensolaris.org/jive/thread.jspa?threadID=105703&tstart=0 )
Roch did say that this is on top of the performance problems, and in
milosz wrote:
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare
what you get with what you need. Just because striping 3 mirrors *will* give
David Magda wrote:
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but I haven't stressed it
yet ;-)
Therefore, I don't know if 1Gb full duplex per port will be enough, and I
don't know
On Wed, 24 Jun 2009, Marcelo Leal wrote:
I think that is the purpose of the current implementation:
http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle But it seems
like it is not that easy... as I understood what Roch said, it seems
like the cause is not always a heavy writer.
I see
On Thu, 25 Jun 2009, Ian Collins wrote:
I wonder whether a filesystem property 'streamed' might be appropriate? This
could act as a hint to ZFS that the data is sequential and should be streamed
direct to disk.
ZFS does not seem to offer an ability to stream direct to disk other
than perhaps
- the VMs will be mostly low-IO systems:
-- WS2003 with Trend OfficeScan, WSUS (for 300 XP) and RDP
-- Solaris 10 with SRSS 4.2 (Sun Ray server)
(File and DB servers won't move to VM+SAN in the near future)
I thought - but could be wrong - that those systems could afford a high
latency
On Wed, Jun 24 at 15:38, Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Orvar Korvar wrote:
I thought of exchanging my PCI card with a PCIe card variant instead
to reach higher speeds. PCI-X is legacy. The problem with PCIe cards
is that soon SSD drives will be common. A ZFS raid with SSD
On Jun 24, 2009, at 16:54, Philippe Schwarz wrote:
Out of curiosity, any reason why you went with iSCSI and not NFS? There
seems to be some debate on which is better under which circumstances.
iSCSI instead of NFS?
Because of the overwhelming difference in transfer rate between
them, In
Bob Friesenhahn wrote:
On Wed, 24 Jun 2009, Marcelo Leal wrote:
I think that is the purpose of the current implementation:
http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle But it seems
like it is not that easy... as I understood what Roch said, it seems
like the cause is not always a
On Wed, 24 Jun 2009, Eric D. Mudama wrote:
The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.
In the available 5 months ago category, the Intel X25-E will write
sequentially at ~170MB/s according to the
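For context, attaching such an SSD as a dedicated log device is a
one-liner; the device names below are hypothetical:

  zpool add tank log c4t0d0                  # single slog device
  zpool add tank log mirror c4t0d0 c4t1d0    # mirrored slog, safer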
On Wed, 24 Jun 2009, Richard Elling wrote:
The new code keeps track of the amount of data accepted in a TXG and the
time it takes to sync. It dynamically adjusts that amount so that each TXG
sync takes about 5 seconds (txg_time variable). It also clamps the limit to
no more than 1/8th of
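For the curious, a kernel variable like that can be inspected (and, with
care, changed) via mdb; a generic sketch, assuming the txg_time name quoted
above:

  echo "txg_time/D" | mdb -k         # print the current value in decimal
  echo "txg_time/W 0t10" | mdb -kw   # set it to 10 seconds on the live kernel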
Hi Mykola,
Yes, if you are speaking of the automatic TimeSlider snapshots,
the snapshots are rotated. I think the threshold is 80% full
disk space.
Cheers,
Cindy
Mykola Maslov wrote:
How to turn off the timeslider snapshots on certain file systems?
On Wed, 24 Jun 2009, Richard Elling wrote:
The new code keeps track of the amount of data accepted in a TXG and the
time it takes to sync. It dynamically adjusts that amount so that each TXG
sync takes about 5 seconds (txg_time variable). It also clamps the limit to
no more than
Does anyone know if the problems related to the panics dismissed as duplicates
of 6746456 ever resulted in Solaris 10 patches? It sounds like they were
actually solved in OpenSolaris, but S10 still panics predictably when Linux NFS
clients try to change a nobody UID/GID on a ZFS-exported
On Wed, Jun 24, 2009 at 6:32 PM, Simon Breden no-re...@opensolaris.org wrote:
FIRST QUESTION:
Although it seems possible to add a drive to form a mirror for the ZFS boot
pool 'rpool', the main problem I see is that in my case, I would be
attempting to form a mirror using a smaller drive
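For what it's worth, the usual attach sequence looks like this (device
names are hypothetical, and the new disk needs an SMI label with a slice
covering the space):

  zpool attach rpool c0d0s0 c1t0d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

Note that zpool attach refuses a device smaller than the existing one,
which is exactly the snag raised above.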