I plan on removing the second USAS-L8i and connecting all 16 drives to the first USAS-L8i when I need more storage capacity. I have no doubt that it will work as intended; I will report to the list otherwise.
I'm a little late to the party here. First, I'd like to thank those pioneers
who
# zpool import -R /mnt newpool
but I don't really have a way to check this...
Thanks
Jay
I may be missing something here, but from the setup he is describing his raid-z should be seeing four 1TB drives. Thus in theory he should be able to lose both 500GB drives and still recover, since they are only viewed as a single drive in the raid-z. The main drawbacks being performance, and
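A minimal sketch of the layout being described, assuming the two 500GB disks are concatenated into one 1TB SVM metadevice (all device names are hypothetical):

  # Concatenate the two 500GB disks into a single 1TB metadevice
  metainit d10 2 1 c1t4d0s0 1 c1t5d0s0
  # Use it as one raid-z member alongside three real 1TB disks
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 /dev/md/dsk/d10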
From your description, it sounds like you are looking for an independent NAS hardware box? In which case, using FreeNAS or OpenSolaris to handle the hardware and present iSCSI volumes to your VMs is a pretty simple solution. If you're instead looking for one box to handle both data storage and
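On OpenSolaris, presenting a ZFS volume over iSCSI is a one-property affair via the legacy shareiscsi path; a minimal sketch, with pool and volume names made up:

  # Create a 100GB zvol and advertise it as an iSCSI target
  zfs create -V 100g tank/vmstore
  zfs set shareiscsi=on tank/vmstore
  iscsitadm list target   # confirm the target is visible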
Really, if you're just talking a handful of drives then hardware RAID may be the simplest solution for now. However, I also would be inclined to use separate NAS and VM servers. Even with ECC you can put together a NAS box for a few hundred (or use existing hardware), plus what you need for a
In terms of capability and performance, ESXi is well above anything you're getting from VMware Server, even just using the free utilities. The issues to consider are complexity and hardware support. You shouldn't have a problem with hardware if you do your homework before you buy. However, the
Specifically, it sounds like you want to write a script to rsync over ssh. You may need to do some manual work to keep IPs updated if either side isn't static.
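A minimal sketch of such a script, with the host and paths made up:

  #!/bin/sh
  # Push local data to the remote box over ssh; run it from cron.
  REMOTE=backupuser@192.0.2.10
  rsync -az --delete -e ssh /tank/data/ "$REMOTE:/backup/data/"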
This actually sounds a little like what Microsoft is trying to accomplish in Win7 with libraries. They will act as standard folders if you treat them as such, but they are really designed to group different pools of files into one easy place. You just have to configure it to pull from local and
I'm no expert, but if I was in the same situation I would definitely keep the integrity check on. Especially since you're only running a RAID-5, the sooner you know there is a problem the better. Even if ZFS cannot fix it for you, it can still be a useful tool. Basically a few errors may not be
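A minimal sketch of keeping that check useful (pool name hypothetical):

  zfs get checksum tank   # checksums are on by default; confirm they haven't been disabled
  zpool scrub tank        # force a full verification pass over the pool
  zpool status -v tank    # see which files, if any, are affected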
I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum errors reported by zpool status. The errors show up over time on every disk in the pool. In normal operation there might be errors on two or three disks each day, and sometimes there are enough errors that it
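One way to tell whether the counters are still climbing (pool name hypothetical):

  zpool status -v tank   # note the per-disk CKSUM counts
  zpool clear tank       # zero the counters
  # ...after a day of normal load, check whether they have climbed again:
  zpool status -v tank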
hi richard,
the bugs database ... figures ... now that you said it, it's really
quite obvious :)
thanks, and thanks for the hint towards the drivers-discuss forum.
bye,
jay
I would think only casesensitivity=mixed should have to be set at creation time, and that casesensitivity=insensitive could be set at any time. Hmmm.
We don't allow this for a couple of reasons. If the file system was
case-sensitive or mixed and you suddenly make it insensitive,
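For reference, a sketch of how the property behaves (dataset name made up):

  # casesensitivity can only be chosen when the dataset is created:
  zfs create -o casesensitivity=mixed tank/cifs_share
  # Trying to change it afterwards is rejected (error message approximate):
  zfs set casesensitivity=insensitive tank/cifs_share
  # cannot set property for 'tank/cifs_share': 'casesensitivity' is readonly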
I tried this question in the CIFS forum and didn't get any responses, but maybe
it is more appropriate for this forum.
I have many large zfs filesystems on Solaris 10 servers that I would like to
upgrade to OpenSolaris so the filesystems can be shared using the CIFS Service
(I'm currently
On Wed, Dec 10, 2008 at 11:46 AM, Nico wrote:
On Wed, Dec 10, 2008 at 10:51 AM, Jay Anderson wrote:
I have many large zfs filesystems on Solaris 10 servers that I would like
to upgrade to OpenSolaris so the filesystems can be shared using the CIFS
Service (I'm currently using Samba). ZFS
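For what it's worth, a sketch of what the switch to the in-kernel CIFS service looks like once on OpenSolaris (workgroup and dataset names hypothetical):

  svcadm enable -r smb/server    # enable the kernel CIFS server
  smbadm join -w WORKGROUP       # join a workgroup (or a domain with -u)
  zfs set sharesmb=on tank/home  # publish the dataset over SMB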
messages to the effect that I need to import the pool first.
Suggestions?
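For reference, the usual sequence when a command wants the pool imported first (pool name hypothetical):

  zpool import          # list pools the system can see but hasn't imported
  zpool import tank     # import the named pool, then retry the failing command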
thanks
Jay
Hardware:
Two Sun Fire T2000s running Sol10 8/07 s10s_u4wos_12b (SPARC) under control of Veritas Cluster.
Sun StorageTek 6140 storage array.
High-level configuration:
The 6140 is set up in RAID-5. Several volumes were
The slides from my ZFS presentation at OSCON (as well as some additional information) are available at http://www.meangrape.com/2007/08/oscon-zfs/
Jay Edwards
[EMAIL PROTECTED]
http://www.meangrape.com
Hello!
Can you please share your experiences with ZFS deployment for Oracle databases in production use?
Why did you choose to deploy the database on ZFS?
What features of ZFS are you using?
What tuning was done during ZFS setup?
How big are the databases?
Thank you,
Jay
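For context, the tuning most often discussed for Oracle on ZFS looks like this (values illustrative, assuming an 8K db_block_size):

  # Match the data-file recordsize to the database block size
  zfs create -o recordsize=8k tank/oradata
  # Redo logs are written sequentially; the default 128K recordsize is fine
  zfs create tank/oralogs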
The V120 has 4GB of RAM. On the HDS side we are in RAID-5 on the LUN and not sharing any ports on the McDATA, but with so much cache we aren't close to taxing the disk. You mentioned the 50MB/s on the throughput, and that's something we've been wondering about around here, as to what the average is
To answer your question: yes, I did expect the same or better performance than standard UFS, based on all the hype and on Sun's "blazing performance" claim that ZFS is based on a transactional object model that removes most of the traditional constraints on the order of issuing I/Os, which results in huge
Thanks Robert, I was hoping something like that had turned up. A lot of what I will need to use ZFS for will be sequential writes at this time.
Ran 3 tests using mkfile to create a 6GB file on a UFS and a ZFS file system.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1: UFS mounted LUN (2m2.373s)
Test 2: UFS mounted LUN with directio option (5m31.802s)
Test 3: ZFS LUN (single LUN in a pool) (3m13.126s)
Sun Fire V120
1 QLogic 2340
Solaris 10
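Presumably the three runs looked something like this (mount point and device names hypothetical):

  time mkfile -v 6gb /ufs/tmpfile              # test 1: plain UFS
  mount -F ufs -o remount,forcedirectio /ufs   # test 2: same FS with directio
  time mkfile -v 6gb /ufs/tmpfile
  time mkfile -v 6gb /tank/tmpfile             # test 3: ZFS dataset in the pool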