Dave Stubbs wrote:
I don't mean to be offensive, Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
(a.k.a. the worst file system ever) in a production
environment. Microsoft Windows is a horribly
unreliable operating system
Same here. I've got a test server at work running 15x 500GB SATA disks on a
pair of AOC-SAT2-MV8 cards. It suffered some 20 minutes of slow response when a
disk started to fail, which caused a few problems for the clients, but
the data is still there.
However, my home system has been
I've been running ZFS on FreeBSD and I've had no problems. ZFS is still
considered experimental in FreeBSD, but it's working wonderfully. I have three
raidz1 vdevs with four 1TB drives each, and I've had several power outages and
I've yanked out disks just to see what would happen; it's been fine. I
On Fri, 31 Jul 2009, Brian wrote:
I must say this thread has also damaged the view I
have of ZFS.
I've been considering just getting a RAID 5
controller and going the
Linux route I had planned on.
Thankfully, the ZFS users who have never lost a pool
do not spend much
time posting.
On 25.07.09 00:30, Rob Logan wrote:
The post I read said the OpenSolaris guest crashed, and the guy clicked
the "power off guest" button on the virtual machine.
I seem to recall the guest hung. 99% of Solaris hangs (without
a crash dump) are hardware in nature. (My experience, backed by
an uptime
Nah, that didn't seem to do the trick.
Also tried this:
http://blogs.sun.com/thaniwa/entry/en_opensolaris_installation_into_usb
But that didn't seem to work either. After unmounting and rebooting, I get the
same error message as in my previous post.
Don't know if there is much more to do...
On Fri, Jul 31, 2009 at 12:46 AM, roland <no-re...@opensolaris.org> wrote:
Hello!
How can I export a filesystem /export1 so that sub-filesystems within that
filesystem will be available and usable on the client side without
additional mount/share effort?
This is possible with Linux nfsd.
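A minimal sketch of how this usually looks on the Solaris side (the pool name
tank and the host name server are made up for illustration): the sharenfs
property is inherited by child datasets, so each sub-filesystem is shared
automatically, and an NFSv4 client can cross into them via mirror mounts
without extra mount entries.

# zfs set sharenfs=on tank/export1
(children of tank/export1 inherit sharenfs and are shared automatically)
# mount -F nfs -o vers=4 server:/export1 /mnt/export1
(on an NFSv4 client, mirror mounts pull in the sub-filesystems as they
are traversed)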
Nah, that didn't seem to do the trick.
After unmounting
and rebooting, I get the same error message as in my
previous post.
Did you get these SCSI error messages during the installation
to the USB stick, too?
Another thing that confuses me: the "unit attention /
medium may have changed" message is
Some preliminary speed tests; not too bad for a 32-bit PCI card.
http://lundman.net/wiki/index.php/Lraid5_iozone
Jorgen Lundman wrote:
Finding a SATA card that would work with Solaris, be hot-swappable, and
have more than 4 ports sure took a while. Oh, and be reasonably priced ;)
Double the price of
Hi,
I'm using a zvol someone else created (and then used as
an iSCSI target, via: iscsitadm ... -b /dev/zvol ...).
I see that AVAIL has a size of 33GB, yet the VOLSIZE is 24GB:
# zfs list -t volume -o name,avail,used,volsize iscsi-pool/log_1_1
NAME                AVAIL   USED  VOLSIZE
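One way to see why AVAIL can exceed VOLSIZE (a sketch, using the dataset
named above): AVAIL reports free space at the pool level, while VOLSIZE is
only the logical size presented to the initiator, so for a sparse zvol with
no refreservation the two numbers are unrelated.

# zfs get volsize,refreservation,used iscsi-pool/log_1_1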
I was wondering if this is a known problem...
I am running stock b118 bits. The system has a UFS root
and a single zpool (with multiple NFS, SMB, and iSCSI
exports).
I powered off my machine last night and powered it on this
morning, and it hung during boot. It hung when reading the
zpool disks. It
On Sat, 2009-08-01 at 22:31 +0900, Jorgen Lundman wrote:
Some preliminary speed tests, not too bad for a pci32 card.
http://lundman.net/wiki/index.php/Lraid5_iozone
I don't know anything about iozone, so the following may be null and
void.
I find the results suspect. 1.2GB/s read, and 500MB/s
On Sat, 1 Aug 2009, Louis-Frédéric Feuillette wrote:
I find the results suspect. 1.2GB/s read and 500MB/s write! These are
impressive numbers indeed. I then looked at the file sizes that iozone
used... How much memory do you have? It seems like the files would be
able to comfortably fit in memory.
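If the goal is to keep the cache out of the numbers, one common approach
(a sketch; the file size and path are examples) is to use a test file at
least twice the size of RAM and limit iozone to sequential write and read:

# iozone -i 0 -i 1 -s 8g -r 128k -f /tank/iozone.tmp

Here -s 8g sets the file size, -r 128k the record size, and -i 0 / -i 1
select the sequential write and read tests.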
On Fri, 31 Jul 2009 15:43:11 -0400
Mark Johnson mark.john...@sun.com wrote:
One thing that could be related is that I was running
a scrub when I had powered off the system. The scrub
started up again after I had imported the pool.
Anyone know if this is a known problem?
I know people
Hi Jorgen,
warning ... weird idea inside ...
Ah, it just occurred to me that perhaps, for our specific problem, we
will buy two X25-Es and replace the root mirror. The OS and ZIL logs
can live together, and /var can go in the data pool. That way we would
not need to rebuild the data pool and all
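Roughly what that layout would look like in zpool terms (a sketch only;
the device names are invented, and in practice the root pool is normally
built by the installer): mirror the two X25-Es for the OS, then hand a
slice of each to the data pool as a mirrored log device.

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# zpool add datapool log mirror c1t0d0s1 c1t1d0s1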
Mario Goebbels wrote:
An introduction to btrfs, from somebody who used to work on ZFS:
http://www.osnews.com/story/21920/A_Short_History_of_btrfs
*very* interesting article. Not sure why James didn't link directly to
it, but it's courtesy of Valerie Aurora (formerly Henson)
Are there any messages with "Error level: fatal"?
Not that I know of; however, I can check. But I'm
unable to find out what to change in GRUB to get
verbose output rather than just the splash image.
Edit the GRUB commands: delete all splashimage,
foreground, and background lines, and delete
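For reference, a trimmed menu.lst entry might look something like this
(a sketch based on the OpenSolaris defaults; the findroot argument depends
on the system): the splashimage, foreground, and background lines are gone,
and -v is added for verbose boot output.

title OpenSolaris (verbose)
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -v
module$ /platform/i86pc/$ISADIR/boot_archive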
OK, I have redone the initial tests with 4G instead. Graphs are in the same
place.
http://lundman.net/wiki/index.php/Lraid5_iozone
I also mounted it over NFSv3 and ran more iozone against it. Alas, I
started with 100mbit, so it has taken quite a while. It is constant at
11MB/s, though. ;)
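(For scale: 100 Mbit/s works out to 12.5 MB/s raw, so a steady 11 MB/s is
essentially wire speed once TCP and NFS overhead are accounted for.)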