Thanks for the continuing flow of information. I already have all of the
equipment. I'm actually upgrading my main computer to a new Core 2 Duo setup
which is why this hardware is going to the file server. I think I'm going to
try a 64-bit install using the four 500GB drives in a RAID-Z config.
> > 1. evacuating a vdev resulting in a smaller pool for all raid configs - ?
> >
> > 2. adding new vdev and rewriting all existing data to new larger stripe - ?
> >
> > 3. expanding stripe width for raid-z1 and raid-z2 - ?
> >
> > 4. live conversion between different raid kinds on
On 8 May, 2007, at 22.51, Cyril Plisko wrote:
So I quickly hacked together a script which defines the necessary
complete clauses (yes, I am a tcsh user). After playing with it
for a while I decided to share it with the community in the hope that
it may be improved/extended and be a useful tool in day-to-day work.
Hi !
I was in the middle of very long and boring transatlantic flight and
played with zfs and gzip compression on my laptop. And I just
thought that it may be quite useful for your shell to be able to
autocomplete the arguments to the zfs/zpool command.
So I quickly hacked together a script which defines the necessary complete clauses.
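Before the script itself is posted, a minimal tcsh complete clause for zpool might look like the following (this is a sketch with an abridged subcommand list, not the actual script):

```shell
# tcsh completion sketch: complete the first argument of zpool
# from a fixed list of its common subcommands (list abridged)
complete zpool 'p/1/(create destroy add remove list iostat status import export upgrade)/'
```

With this loaded, typing `zpool st<TAB>` in tcsh would complete to `zpool status`. A full script would add similar clauses for zfs and complete dataset and pool names in later argument positions.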
I would personally avoid the P4 chip. They are power hogs and will cost you
more money in the long run than getting a low-end Core 2 Duo, which should be
faster and not much more expensive. Make sure you keep power consumption in mind
when you pick up a power supply and video card too. The always o
The ZFS boot disk support we need has to be SPARC-based and our standards would
require that the support be present in the commercial Solaris 10 code-base,
unfortunately.
Your comments and observations are welcome input. I thank you very much for
your time and effort in replying. :-)
Best Regards,
Onboard RAID solutions actually do all their work on your CPU, so you won't be
using that for anything if you use ZFS. You just want them acting like regular
SATA controllers.
Just run the Solaris hardware compatibility thinger (google it), or compare
your hardware to the supported hardware list.
With b62, he can do ZFS mirror for root in addition to the raidz for the
data but it would still require a second root drive...
Malachi
On 5/8/07, Paul Armstrong <[EMAIL PROTECTED]> wrote:
I'd recommend getting a second 80GB disk and mirroring your root as
well.
UFS+SDS for root (don't forget a live upgrade slice) and ZFS for the other disks.
> Probably RAID-Z as you don't have enough disks to be interesting for doing
> 1+0.
> Paul
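As a sketch of what the second-root-drive setup could look like once ZFS boot is in place on b62 (pool and device names here are made up, not from the thread):

```shell
# attach a second disk to an existing single-disk root pool,
# turning it into a two-way mirror (device names hypothetical)
zpool attach rootpool c0t0d0s0 c0t1d0s0

# watch the resilver finish before relying on the mirror
zpool status rootpool
```

The data pool built from the other four disks would stay a separate raidz pool; only root gets the mirror.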
How do you configure ZFS RAID 1+0?
Will the following lines do it correctly?
zpool create -f zfs_raid1 mirror c0t1d0 c1t1d0
zpool add zfs_raid1 mirror c2t1d0 c3t1d0
zpool add zfs_raid1 mirror c4t1d0 c5t1d0
A
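Those commands do build a 1+0-style pool: ZFS stripes writes across the three mirror vdevs. The same layout can also be created in one command; a sketch:

```shell
# three two-way mirrors in a single pool;
# ZFS stripes across the mirror vdevs automatically
zpool create zfs_raid1 mirror c0t1d0 c1t1d0 \
                       mirror c2t1d0 c3t1d0 \
                       mirror c4t1d0 c5t1d0

# should list three mirror vdevs under the pool
zpool status zfs_raid1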
While trying some things earlier in figuring out how zpool iostat is supposed
to be interpreted, I noticed that ZFS behaves kind of weird when writing data.
Not to say that it's bad, just interesting. I wrote 160MB of zeroed data with
dd. I had zpool iostat running with a one-second interval.
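For reference, the experiment can be reproduced roughly like this (the pool name and file path are assumptions, not from the post):

```shell
# sample per-pool throughput every second in the background
zpool iostat tank 1 &

# write 160 MB of zeroed data, then flush so it reaches the pool
dd if=/dev/zero of=/tank/zeros bs=1M count=160
sync

# stop the background iostat
kill %1
```

The bursty pattern in the iostat output reflects ZFS batching writes into transaction groups rather than streaming them out continuously.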
Robert Milkowski wrote:
Hello Matthew,
Tuesday, May 8, 2007, 1:04:56 AM, you wrote:
MA> Pawel Jakub Dawidek wrote:
This is what I see on Solaris (hole is 4GB):
# /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
real 23.7
# /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
Bryan,
Thanks for your suggestion. I am looking at this as more of a DR solution.
However, I might be able to use your method if my data can be a little old.
Perhaps this way I could sync the data nightly with a remote site to make sure
that I am no more than 24 hours behind in the case of a disaster.
Torrey,
Yes. I am used to dealing with the array based software you mention in your
post as well as filesystem based products like AVS and Veritas Volume
Replicator.
After reading some documentation sent to me by Cindy Swearingen (Thanks Cindy!)
about zfs send and receive it seems that it is
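A nightly sync along those lines could be sketched with incremental send/receive (the dataset, snapshot, and host names are hypothetical):

```shell
# one-time full replication to the remote site
zfs snapshot tank/data@base
zfs send tank/data@base | ssh drhost zfs receive backup/data

# each night: take a new snapshot and send only the delta
# since the previous one
zfs snapshot tank/data@nightly
zfs send -i tank/data@base tank/data@nightly | ssh drhost zfs receive backup/data
```

Only the blocks changed between the two snapshots cross the wire, which keeps the nightly window small even on a multi-TB dataset.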
Group,
MOST people want a system to work without doing
ANYTHING when they turn on the system.
So yes, the thought of people buying another
drive and installing it in a brand new system
would be insane for this group of buyers.
Mitchell Erblich
Hi!
I installed snv_62 a few days ago and thought to give zfs boot a try. I
followed the manual instructions and was able to get everything up and running
with a zfs boot/root environment.
After some tweaking (adding a 2nd disk to the root pool, moving some files from
some old zfs fs to the
If you were really worried about it you could mount with forcedirectio or
something; however, if you read the post, I mentioned doing it as a spare. A
spare isn't active until there's a problem, so you'd only be running with the
filedevice temporarily in theory anyway.
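A file-backed hot spare along those lines might be set up like this (the pool name and file path are made up for illustration):

```shell
# create a sparse 1 GB file and register it as a hot spare;
# it stays AVAIL and completely idle until a real device fails
mkfile -n 1g /var/tmp/sparefile
zpool add tank spare /var/tmp/sparefile

# the spare should appear under a "spares" section as AVAIL
zpool status tank
```

If a disk in the pool faults, ZFS resilvers onto the file device automatically, and `zpool replace` with a real disk retires it again.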
I hope it will be released soon. I asked Jiri about it and didn't get a
negative reply so I am optimistic now.
Steve
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman
Hi Jiri,
> 3.0.25rc1 was released 2 days ago, so the "final
> version" will be available soon. The vfs_zfsacl.c module
> will be tested soon, so I think it is a question of 2-3
> weeks.
3 weeks after you posted this...can I ask you to update the community about the
availability of vfs_zfsacl.c module? Eve
Hi.
bash-3.00# uname -a
SunOS 5.10 Generic_125101-04 i86pc i386 i86pc
Server is x4100.
NFS server under Sun Cluster with ZFS.
I issued 'zpool import' to see other pools available to import on one node, and
then I got very slow NFS access to that node; nfsd went up to ~3500 threads (from
~500).
Hello Claus,
Saturday, April 28, 2007, 4:27:58 PM, you wrote:
CG> Speaking of backup-software. I heard that legato supports zfs but is
CG> anyone using zfs and legato or some other backup-software that can
CG> handle multi-TB-file systems (in production)?
Legato+ZFS works here in production.
-
Hello Ian,
Thursday, May 3, 2007, 10:20:20 PM, you wrote:
IC> Roch Bourbonnais wrote:
>>
>> with recent bits ZFS compression is now handled concurrently with many
>> CPUs working on different records.
>> So this load will burn more CPUs and achieve its results
>> (compression) faster.
>>
IC> Wou
I've used ZFS to back up my laptops to an external USB disk that formed one
half of the mirror for a long while.
See: http://blogs.sun.com/chrisg/entry/external_usb_disk_drive
I recently stopped doing that in favour of doing the same to iSCSI luns hosted
on ZFS ZVOLs on a server:
http://blogs.
Hello Matthew,
Tuesday, May 8, 2007, 1:04:56 AM, you wrote:
MA> Pawel Jakub Dawidek wrote:
>> This is what I see on Solaris (hole is 4GB):
>>
>> # /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
>> real 23.7
>> # /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
>>
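A 4 GB hole file like the one being read above can be created without Solaris-specific tools, e.g. with dd (the path is an assumption):

```shell
# seek 4 GiB into a new file and write a single 1 KiB block;
# everything before the written block is a hole, so almost
# no space is actually allocated on disk
dd if=/dev/zero of=/tmp/hole bs=1024 count=1 seek=4194304

ls -l /tmp/hole    # apparent size: just over 4 GiB
du -k /tmp/hole    # actual allocation: a few KiB
```

On Solaris, `mkfile -n 4g /tmp/hole` produces an equivalent sparse file. Reading it back with dd then measures how fast the filesystem can synthesize zeros for the hole, which is the comparison being made in the timings above.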
John Smith wrote:
> Sorry about that, the specific processor in question is the Pentium D 930
> which supports 64 bit computing through the Extended Memory 64 Technology.
> It was my initial reaction to say I'd go with 32 bit computing because my
> general experience with 64-bit is Windows, Lin
I'd recommend getting a second 80GB disk and mirroring your root as well.
UFS+SDS for root (don't forget a live upgrade slice) and ZFS for the other
disks.
Probably RAID-Z as you don't have enough disks to be interesting for doing 1+0.
Paul
Sorry about that, the specific processor in question is the Pentium D 930 which
supports 64 bit computing through the Extended Memory 64 Technology. It was my
initial reaction to say I'd go with 32 bit computing because my general
experience with 64-bit is Windows, Linux, and some FreeBSD. Gen
Same here, needed ASAP. It's a shame Jiri can't release his work-in-progress
code, I've asked for a prerelease (even untested) version several times.
:(
John Smith wrote:
> The original thought was 3 of the drives as storage, and one of the drives as
> parity. So that would yield around 1.4TB of usable storage.
Then raidz is your only option.
> I hadn't given any thought to running 64 bit. This system is being built
> from the ground up. I
The original thought was 3 of the drives as storage, and one of the drives as
parity. So that would yield around 1.4TB of usable storage. I hadn't given
any thought to running 64 bit. This system is being built from the ground up.
I guess in the back of my head I had assumed it would be 32