This is from build 125.
--
Great! So if I want another build, for instance b125, I just change step 10?
10) pkg -R /mnt install entire@0.5.11-0.125
Yes?
What is this "0.5.11" thing? Should that be changed too, if I try to install
b125? Like "0.5.12-0.125"?
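If I have understood the IPS version string right, the pattern would be roughly this (a sketch; I am assuming the obfuscated package name is the "entire" incorporation):

  # 0.5.11 is the SunOS 5.11 release part and stays the same;
  # only the branch after the dash tracks the build number.
  pkg -R /mnt install entire@0.5.11-0.125   # b125
  pkg -R /mnt install entire@0.5.11-0.126   # b126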
--
A zpool consists of vdevs (groups of discs). You can create a mirror of your
250GB discs. And later you can add another group of discs to your zpool, on the
fly. Each group of discs should have redundancy, for instance mirror, raidz1 or
raidz2. So you can add a vdev to a zpool on the fly, but you can
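Something like this, roughly (the device names are made up):

  zpool create tank mirror c1t0d0 c1t1d0   # the initial 250GB mirror
  zpool add tank mirror c1t2d0 c1t3d0      # later: grow the pool with a second mirror vdev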
"I can't boot into an older version because the last version I had was b118
which doesn't have zfs version 19 support. I've been looking to see if there's
a way to downgrade via IPS but that's turned up a lot of nothing."
If someone can tell me which files are needed for the driver I can extract them.
I saw the same checksum error problem when I booted into b126. I haven't dared
to try b126 again; I use b125 now, without problems. Here is my hardware:
Intel Q9450 + P45 Gigabyte EP45-DS3P motherboard + Ati 4850
I have the same AOC SATA controller card. And some Samsung Spinpoint F1, 1TB
drives. Bran
I can confirm that Tim is right, I have done it myself.
--
Ok, so you changed drives and you still see errors? Are the drives brand new or
used? What kind of drives, and which brand? 2TB? And if you reboot into an
earlier build such as b125, you don't see any errors, right?
Right now I am running b125. I don't dare to run b126 if your observation is
correct.
I read about some guy who shut off his RAID when he didn't use it. He had a
large system disc that he used for temporary storage, so he copied everything to
the temp storage and immediately shut down the RAID.
--
Also, read this:
http://c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html
--
Such functionality is in the ZFS code now. It will be available to us later:
http://c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html
--
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
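A minimal sketch of what I mean, assuming a pool named tank and a build new enough to have dedup:

  zfs create -o dedup=on tank/deduped       # new dataset with dedup enabled
  cp -rp /tank/olddata/. /tank/deduped/     # data gets deduplicated as it is written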
--
I have the same card and might have seen the same problem. Yesterday I upgraded
to b126 and started to migrate all my data to an 8-disc raidz2 connected to such
a card. And suddenly ZFS reported checksum errors. I thought the drives were
faulty. But you suggest the problem could have been the drive
Ok, ctrl-x or whatever combination killed the zfs send. It took some
time, though. Problem solved. Thanks.
--
I am doing a large zfs send | zfs receive and suddenly, during the zfs send,
one drive is faulted. I try to break this zfs send and examine the faulty drive
so the zpool stops being in DEGRADED mode. I can not stop this zfs send. I tried
kill -9 PID
CTRL-X
CTRL-Z
CTRL-D
CTRL-C
but nothing can stop this.
So the solution is to never fill more than 90% of the disk space, damn it?
--
TO CONCLUDE:
Ok, it seems that I have misjudged the speed of zfs send, and I remembered
wrongly. This is because of the large amount of data I transferred; it felt
like it took forever. But now I clocked it, and 239GB was transferred in about
one hour (from a 5-disc raidz1 onto a single disc), giving roughly 68MB/sec.
When I create a zfs filesystem there are lots of options. Which options are
recommended?
I use CIFS, and so I choose casesensitivity=mixed. But it turns out that
(due to a bug, fixed in b127) if I have non-UTF8 characters in a file name, I
can not see the file in listings. So I should use
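The creation would look roughly like this, I believe (a sketch; casesensitivity and utf8only can only be set when the filesystem is created):

  zfs create -o casesensitivity=mixed -o utf8only=on -o nbmand=on tank/share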
Hey, buy them a beer from me, too! (Easy for me to say, as I can't shell out
money). ZFS developers rock. Hard. And long.
--
I am trying to back up a large zfs file system to two different identical hard
drives. I have therefore started two commands to back up "myfs", and when they
have finished, I will back up "nextfs":
zfs send mypool/myfs@now | zfs receive backupzpool1/now & zfs send
mypool/myfs@now | zfs receive backupzpool2/now
Would this be possible to implement on top of ZFS? Maybe it is a dumb idea, I
don't know. What do you think, and how could this be improved?
Assume all files are put in the zpool, helter-skelter. And then you can create
arbitrary different filters that show you the files you want to see.
As of now, you h
And, ZFS likes 64-bit CPUs. I had a 32-bit P4 and 1GB RAM. It worked fine, but I
only got 20-30MB/sec. A 64-bit CPU and 2-3GB RAM gives you over 100MB/sec.
--
You don't need a HW raid card with ZFS; ZFS prefers to work alone. The best
solution is to ditch the HW raid card.
I strongly advise you to use raidz2 (like raid-6), because if you use raidz1
(like raid-5) and a drive fails, you have to swap that disc and repair your zfs
raid. That will cause lot
You could add these new drives to your zpool: create a new vdev as a raidz1 or
raidz2 vdev, and then add it to your zpool. I suggest raidz2, because that
gives you greater reliability.
However, you can not remove a vdev. In the future, say that you have swapped
your original d
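Adding five new drives as a second raidz2 group would look roughly like this (hypothetical device names):

  zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
  # careful: there is no way to remove a raidz vdev again once it is added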
So is there is a Change Request on this?
--
Many sysadmins recommend raidz2. The reason is, if a drive breaks and you have
to rebuild your array, it will take a long time with a large drive. With a 4TB
drive or larger, it could take a week to rebuild your array! During that week,
there will be heavy load on the rest of the drives, which
Have you considered buying support? Maybe you will get guaranteed help then?
--
I had this same question. I was recommended to use rsync or zfs send; I used
both just to be safe. With zfs send, you create a snapshot and then send the
snapshot. After deleting the snapshot on the target, you have identical copies.
rsync seems to be commonly used for this task as well.
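The zfs send variant goes roughly like this (pool and snapshot names made up):

  zfs snapshot mypool/myfs@backup
  zfs send mypool/myfs@backup | zfs receive backuppool/myfs
  zfs destroy backuppool/myfs@backup   # drop the snapshot on the target afterwards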
This controller card, you have turned off any raid functionality, yes? ZFS has
total control of all discs, by itself? No hw raid intervening?
--
I asked the same question about one year ago here, and the posts poured in.
Search for my user id? There is more info in that thread about which is best:
ZFS vs ZFS+HWraid
--
Adam,
Thanx for your answer. I often read your blog. :o)
Anyway, will it be possible to change from raidz1 to raidz2? I am switching to
raidz2 because if a drive fails, there is a high chance that another drive will
fail while rebuilding the zpool (I've heard).
I want to know my options to migrate t
Will BP rewrite allow adding a drive to a raidz1 to get raidz2? And what is the
status of BP rewrite? Far away? Not started yet? Planning?
--
Yes, vdevs allow you to expand your zpool. A zpool consists of groups of hard
drives. Your zpool consists of one group of drives; that group contains 4
drives of 2TB each. You can easily add another group of drives to your zpool.
You can not change the number of discs in a group, but you can
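In other words, you grow the pool by adding a whole new group, not by resizing the existing one. A sketch (made-up device names):

  zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
  # zpool attach only extends mirrors; it can not add a disc to a raidz group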
Sorry to hear that, but you do know that VirtualBox is not really stable?
VirtualBox does show some instability from time to time. Haven't you read the
VirtualBox forums? I would advise against VirtualBox for saving all your data
in ZFS. I would use OpenSolaris without virtualization. I hope your
With dedup, will it be possible somehow to identify files that are identical
but have different names? Then I can find and remove all duplicates. I know that
with dedup, removal is not really needed because the duplicate will just be a
reference to an existing file. But nevertheless I want to kee
Hey sbreden! :o)
No, I haven't tried to tinker with my drives. They have been functioning all the
time. I suspect (I can not remember) that each SATA slot on the card has a
number attached to it? Can anyone confirm this? If I am right, OpenSolaris will
say something about "disc 6 is broken" and on
I use the AOC-SAT2-MV8 in an ordinary PCI slot. The PCI slot maxes out at
150MB/sec or so; that is the fastest you will get. That card works very well
with Solaris/OpenSolaris: it is detected automatically, etc. I've heard though
that it does not work with hot-swapping discs, so avoid that.
However, in a PCI-
Ok, so you mean the comments are mostly FUD and bullshit? Because there are no
bug reports from the whiners? Could this be the case? Is it mostly FUD? Hmmm...?
--
I totally agree with you. I am just concerned about ZFS' reputation.
If there are complaints, what should SUN do? Should the complaints be taken
seriously or not? I love ZFS, and I don't want it to lose its credibility.
BTW, ZFS rocks. Hard.
--
I've asked the same question about 32-bit. I created a thread and asked; it was
something like "does 32-bit ZFS fragment RAM?" or something similar. As I
remember it, 32-bit had some issues, mostly due to RAM fragmentation or
something similar. The result was that you had to restart your server a
In the comments there are several people complaining about losing data. That
doesn't sound too good. It takes a long time to build a good reputation, and 5
minutes to ruin it. We don't want ZFS to lose its reputation as an uber file
system.
--
According to this webpage, there are some errors that make ZFS unusable under
certain conditions. That is not really optimal for an enterprise file system.
In my opinion the ZFS team should focus on bug fixing instead of adding new
functionality. The functionality that exists far surpasses an
Seagate7,
You are not using ZFS correctly. You have misunderstood how it is used. If you
don't follow the manual (which you haven't), then any filesystem will cause
problems and corruption, even ZFS or NTFS or FAT32, etc. You must use ZFS
correctly. Start by reading the manual.
For ZFS to be able
Thank you. I will spread the word.
--
Too bad. I will follow this thread. I, and others, hope you find a solution. We
would like to hear about this setup.
--
I'll second that. A wiki page on the ZFS wiki, with best practices and
recommendations about adding SSDs, would be great. There is not much information
on this subject, I feel. Case scenarios, blogs, etc. showing some numbers.
--
So are there no guidelines for how to add an SSD disk as a home user? Which is
the best SSD disk to add? What percentage improvements are typical? Or will a
home user not benefit from adding an SSD drive? Is it only enterprise SSD drives
that work, together with some esoteric software from Fishworks?
Ok, thanks for your help guys! :o)
One last question: how do I know when the spare sectors are running out? SMART
is not available for Solaris, right? Are there any warnings that pop up in
ZFS? Will scrubbing reveal that there are errors? How will I know?
--
Ok. Just to confirm: a modern disk already has some spare capacity which is not
normally utilized by ZFS, UFS, etc. If the spare capacity is used up, then the
disc should be replaced.
--
You have a raid with 5 terabyte discs. Now some bad sectors arise, so the
discs differ in size. What happens with the ZFS raid? Will there be serious
trouble? Or is this only a problem when the ZFS raid is 100% full?
Of old, in Linux you didn't allocate the entire disc. Instead you let 100MB be
free
Maybe add a timer or something? When doing a "destroy", ZFS would keep
everything for 1 minute or so before overwriting. This way the disk won't get
as fragmented. And if you had fat fingers and typed wrong, you have up to one
minute to undo. That would catch 80% of the mistakes?
--
Imagine 10 SATA discs in raidz2 and one or two SSD drives as a cache. Each
Vista client reaches ~90MB/sec to the server, using Solaris CIFS and iSCSI. So
you want to use iSCSI with this. (iSCSI allows ZFS to export a file system as a
native SCSI disc to a desktop PC. The desktop PC can mount thi
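On the older OpenSolaris builds, exporting a ZFS volume over iSCSI looked roughly like this (a sketch with made-up names; newer builds use COMSTAR instead of the shareiscsi property):

  zfs create -V 100g tank/vistadisk     # a 100GB raw volume
  zfs set shareiscsi=on tank/vistadisk  # export it as an iSCSI target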
Thanx for your answers guys. :o)
I am not contemplating trying this for my ZFS raid, as the SSD drives are
expensive right now. I just want to be able to answer questions when I convert
Windows/Linux people to Solaris, and therefore I collect info. Has anyone tried
this on a blog? Would be cool to blog
I understand Fishworks has an L2ARC cache, which as I have understood it is an
SSD drive used as a cache?
I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar
vein? Would it be easy to do? What would be the impact? Has anyone tried this?
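If I have understood the docs right, it would be something like this (hypothetical device name):

  zpool add tank cache c4t0d0   # the SSD becomes an L2ARC read cache
  # a separate log device (slog) would be: zpool add tank log c4t1d0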
--
If zfs says that one disk is broken, how do I locate it? It says that disk
c0t3d0 is broken. Which disk is that? Must I locate them during install?
In Thumper it is possible to issue a ZFS command and the corresponding disk's
lamp will flash? Is there any "zlocate" command that will flash a par
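Short of a blinking lamp, I believe the usual way to map the name to a physical disc is something like:

  zpool status -v tank   # shows which cXtYdZ device is faulted
  format                 # lists every disc; c0t3d0 means controller 0, target 3
  cfgadm -al             # on many systems, shows which SATA port each target sits on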
So ZFS is not hindered at all if you use it in conjunction with HW raid? ZFS
can utilize all functionality and "heal corrupted blocks" without problems,
even with HW raid?
--
What does this mean? Does it mean that ZFS + HW raid with raid-5 is not able
to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you
should use only ZFS?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
"ZFS works well with storage based protecte
Ok, I've upgraded the motherboard's BIOS and installed ZFS with b105 over the
existing UFS b104. It works better now. The disk sounds almost like normal,
barely audible. But sometimes it goes back to sounding like hell. Very seldom
now. I don't get it. Why does UFS do this? Hmm...
--
I've read about some Areca bug(?) being fixed in SXCE b105?
--
So you recommend ZFS + HW raid, instead of only ZFS? It is preferable to add HW
raid to ZFS?
--
Oh, thanx for your very informative answer. I've added a link to your
information in this thread:
But... Sorry, I wrote it wrong. I meant "I will not recommend against HW raid
+ ZFS anymore" instead of "... recommend against HW raid".
The Windows people's question is:
which is better?
1. HW rai
Got some more information about HW raid vs ZFS:
http://www.opensolaris.org/jive/thread.jspa?messageID=326654#326654
--
Ok, I draw the conclusion that there is no consensus on this. Nobody really
knows for sure.
I am in the process of converting some Windows guys to ZFS, and they think that
HW raid + ZFS should be better than ZFS alone. I tell them they should ditch
their HW raid, but I can not really motivate why
Some people use ZFS with hardware raid. The recommendation is to use only ZFS,
without HW raid.
What are the advantages/disadvantages? Should you use a HW raid in conjunction
with ZFS, if possible? Or only ZFS? Which is best?
--
I have taken a Samsung 500GB from my old ZFS raid. I have created a 100GB
Windows XP partition and installed WinXP. The rest of the disk is unformatted.
Then I wanted to install SXCE b104, so I started the SXCE install with ZFS.
But it refused to install, saying that the partitions overlap and
A question: why do you want to use HW raid together with ZFS? I thought ZFS
performed better when it is in total control? Would the results have been
better with no HW raid controller and only ZFS?
--
Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
--
Cindy and you all, thanx for your answers! I have got us several more
OpenSolaris converts meanwhile. One guy said, "Why didn't I try ZFS before??" :o)
A quick question about scenario A):
My old 4 Samsung 500GB drives are a raidz1. If I exchange each drive and finally
add a hot spare, it is not the same re
I have a ZFS raid with 4 Samsung 500GB disks. I now want 5 Samsung 1TB drives
instead. So I connect the 5 drives, create a raidz1 zpool and copy the content
from the old zpool to the new zpool.
Is there a way to safely copy the zpool? To make sure that it really has been
copied safely? Ideall
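A rough sketch of how I imagine the safe copy (made-up pool names):

  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -F -d newpool
  zpool scrub newpool   # re-checks every block's checksum on the new pool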
It is not recommended to fill more than 90% of any file system, I think. For
instance, NTFS can behave very badly when it runs out of space. It is similar to
filling up your RAM when you have no swap space: the computer starts to
thrash badly. Not recommended. Avoid 90% and above, and you hav
"ECC theory tells, that you need a minimum distance of 3
to correct one error in a codeword, ergo neither RAID-5 or RAID-6
are enough: you need RAID-2 (which nobody uses today)."
What is "RAID-2"? Is it raidz2?
--
I've studied all the links here. But I want information about the HW raid
controllers, not about ZFS, because I have plenty of ZFS information now. The
closest thing I got was
www.baarf.org
where in one article he states that "raid5 never does parity check on reads".
I've written that to the Linux guys. An
Que? So what can we deduce about HW raid? There are some controller cards that
do background consistency checks? And error detection of various kinds?
--
This is good information, guys. Do we have some more facts and links about HW
raid and its data integrity, or lack thereof?
--
On a Linux forum, I've spoken about ZFS end-to-end data integrity. I wrote
things like "upon writing data to disc, ZFS reads it back, compares it to the
data in RAM, and corrects it otherwise". I also wrote that ordinary HW raid
doesn't do this check. After a heated discussion, I now start to wonder
Ok, so I could partition a drive into two parts and treat each of the
partitions as one drive? And then I exchange one partition at a time with a
whole new drive? That sounds neat. Must I format the drive into two zfs
partitions? Or UFS partitions? ZFS doesn't have partitions?
And another thing
I don't understand. The other hard drive connected to the temporary SATA port,
mustn't it be very big? I copy the entire old zpool to the temp drive,
disconnect the old zpool and create a new one. Then I send from the temp drive
to the new zpool? The temp drive must be very large to hold the entire old
zpool.
The problem is this. I have an 8 SATA port card. 4 of the slots (0-3) are
occupied with 500 GB drives in a ZFS raid named "ZFSraid1". I now want to buy 5
terabyte drives to create a new ZFS raid, which will have the name "ZFSraid2".
How can I move the data from ZFSraid1 to ZFSraid2 so I can
I have a ZFS raid and wonder if it is possible to move the ZFS raid from one
SATA port to another? I've heard that someone assembled the SATA connections
differently and the ZFS raid wouldn't work.
Say that I have an 8 SATA port controller card with 4 drives in a ZFS raid.
SATA ports 0-3 are occu
Ok, thanx for your input, guys. So Bvian's comment is still valid. I will tell
the Linux guys that "OpenSolaris on 32-bit will fragment the memory to the point
that you have to reboot once in a while. It shouldn't corrupt your data when it
runs out of RAM."
Vodevick.
--
It's not me. There are people on Linux forums that want to try out Solaris + ZFS
and this is a concern for them. What should I tell them? That it is not fixed?
That they have to reboot every week? Does anyone know?
--
I see this old post about ZFS fragmenting the RAM on 32-bit systems. This makes
the memory run out. Is it still true, or has it been fixed?
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-July/003506.html
--
OpenSolaris + ZFS achieves 120MB/sec read speed with 4 SATA 7200 rpm discs;
440MB/sec read speed and 220MB/sec write speed with 7 SATA discs; and
2GB/sec write speed with 48 discs (on a SUN Thumper X4500).
I have links to the websites where I've read this.
--
For those of you who want to build a NAS, this is mandatory reading, I think.
Read all the comments too.
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I use a P45 mobo, Intel Q9450, ATI 4850 and 4 GB RAM, plus an AOC SATA card
with 8 SATA slots and 4 Samsung 500GB drives. Works excellently in a
Would it help to insert the raid into another computer and import it there?
--
I've upgraded the pool, because CIFS required that, if I remember correctly.
I wondered if it was possible to use b80 and upgrade only ZFS. If that is not
possible, I am considering changing my computer hardware. I have to think
about this. Too bad about the regressions for bleeding edge hardware, though.
I don't think the motherboard is on the HCL. But everything worked fine in b90.
I realize I haven't provided all the necessary info. Here is more info:
http://www.opensolaris.org/jive/thread.jspa?threadID=69654&tstart=0
The thing is, I've upgraded ZFS to the newest version with b95. And b95 is very
unstable.
Thanx for your suggestions. Maybe we can come to a satisfactory conclusion
together?
I have a freshly installed b95. With a new install of b90 or so, everything
worked fine.
If I wait for OpenSolaris in October, then maybe I can access my ZFS raid? Or
should I sell my computer and get an old one
I, like several others, have severe problems with the latest builds of SXCE.
After b93-94 or so, everything became extremely unstable, to the point of
rendering my Solaris totally useless. This is written from a Windows machine.
http://www.opensolaris.org/jive/thread.jspa?threadID=69654&tstart=0
The
I use an Intel Q9450 + P45 mobo + ATI 4850 + ZFS + VirtualBox.
I have installed WinXP. It works well and is stable. There are features not
implemented yet, though, for instance USB.
I suggest you try VB yourself. It is ~20MB and installs quickly. I used it on a
1GB RAM P4 machine. It worked fine. If
You can also try SunRay ultra-thin clients. Read about them on www.sun.com; the
forum is called filibeto. Google for "filibeto sunray".
--
Remember, you can not remove a device from a pool, so be careful what you add.
--
Wouldn't it be nice to break out all file systems into separate zfs file
systems? Then you could snapshot each file system individually, just like each
user having his own filesystem that I can snapshot independently of the other
users.
As of now, if I do a snapshot of /, then ever
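A sketch of what I mean (hypothetical names):

  zfs create rpool/export/home/alice
  zfs create rpool/export/home/bob
  zfs snapshot rpool/export/home/alice@monday   # snapshots only alice's files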
For the enterprise server target, memory cost is secondary? If the company is
run well, RAM cost is secondary? For the enterprise target market, RAM
shouldn't be an issue.
For the consumer market, RAM should be an issue. But ZFS is not targeted at the
consumer market. Yet? ZFS is still being polished fo
Ouch, that seems slow. Do you think ZFS is still the best solution for this
workload, or would, for instance, Veritas do better?
--
Ok, so when I am reinstalling from build 68 to build 91ish, I can upgrade my
ZFS raid. Then I have to upgrade both the zpool and the zfs??? Should I upgrade
the zpool first, and then zfs? Is the order of the upgrade important?
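As far as I understand, the two upgrades are separate commands, and the pool is usually done first:

  zpool upgrade -a   # upgrades the on-disk pool version
  zfs upgrade -a     # upgrades the filesystem version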
--
Robert,
thanx a lot for your warning! Now I know that I must be extremely cautious
before adding my 4 new drives to my existing raidz with 4 drives.
I will create a new vdev with the 4 new discs and add it to the existing zpool,
as you have suggested. Thanx a lot for your help! :o)
--
This sounds like a pain.
Is it possible for you to buy support from SUN on this matter, if this is
really important to you?
--
I'm using the AOC card with 8 SATA-2 ports too. It got detected automatically
during the Solaris install. Works great, and it is cheap. I've heard that it
uses the same chipset as the X4500 Thumper with 48 drives?
In a PCI slot, the PCI bus bottlenecks at ~150MB/sec, or so.
In a PCI-X slot, you will reach
So, it basically boils down to this: what operations can I do with a vdev? Any
links? I've googled a bit, but there is no comprehensive list of what I can do.