Before the zoneadm attach or boot, you must create the configuration on the
second host, either manually or with the detached configuration from the first host.
zonecfg -z heczone 'create -a /hecpool/zones/heczone'
zoneadm -z heczone attach    (to attach, the requirements must be fulfilled:
packages and patches in sync)
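For reference, a hedged sketch of the zone side of the move, assuming the pool and zone names used in this thread (hecpool, heczone) and that /hecpool has already been imported on the second host (see the pool export/import step later in the thread):
# first host
zoneadm -z heczone halt
zoneadm -z heczone detach        # stores the detached configuration in the zonepath
# second host
zonecfg -z heczone 'create -a /hecpool/zones/heczone'    # recreate the config from the detached data
zoneadm -z heczone attach        # packages and patches must match the source host
zoneadm -z heczone boot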
Joubert Nel wrote:
Hi,
Stupid question I'm sure - I've just upgraded to Solaris Express Dev Edition
(05/07) by installing over my previous Solaris 10 installation (intentionally,
so as to get a clean setup).
The install is on Disk #1.
I also have a Disk #2, which was the sole disk in a ZFS po
On Wed, Jun 20, 2007 at 05:54:49PM -0700, Joubert Nel wrote:
> Hi,
>
> Stupid question I'm sure - I've just upgraded to Solaris Express Dev
> Edition (05/07) by installing over my previous Solaris 10 installation
> (intentionally, so as to get a clean setup). The install is on Disk
> #1.
>
> I a
Hi,
Stupid question I'm sure - I've just upgraded to Solaris Express Dev Edition
(05/07) by installing over my previous Solaris 10 installation (intentionally,
so as to get a clean setup).
The install is on Disk #1.
I also have a Disk #2, which was the sole disk in a ZFS pool under Solaris 10.
I have created a ZFS pool and I have installed a zone in the pool. For example,
my pool name is hecpool (/hecpool) and I have installed my zone at
/hecpool/zones/heczone. Is there a way to migrate all of my pool data
and zones to another Sun host if my pools are created on p
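A minimal sketch of moving the pool itself, assuming the disks are physically moved to (or are visible from) the new host:
# old host
zpool export hecpool
# new host, once the disks are attached
zpool import             # with no argument: lists pools available for import
zpool import hecpool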
Hi Ed,
This BP was added as a lesson learned about not mixing these
models, because it's too confusing to administer, and for no other reason.
I'll update the BP to be clear about this.
I'm sure someone else will answer your NFSv3 question. (I'd like
to know too).
Cindy
Ed Ravin wrote:
Looking over t
Looking over the info at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_and_NFS_Server_Performance
I see this:
Do not mix NFS legacy shared ZFS file systems and ZFS NFS shared file
systems. Go with ZFS NFS shared file systems.
Other than which command tur
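For context, the two sharing models the guide contrasts look roughly like this (hecpool/export is a hypothetical dataset):
# ZFS-managed NFS sharing: the share is a property of the dataset
zfs set sharenfs=on hecpool/export       # share options (e.g. rw) can be given instead of "on"
# legacy sharing: ZFS stays out of it; share(1M)/dfstab manage the export
zfs set sharenfs=off hecpool/export
share -F nfs -o rw /hecpool/export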
Thanks, Constantin! That sounds like the right answer for me.
Can I use send and/or snapshot at the pool level? Or do I have
to use it on one filesystem at a time? I couldn't quite figure this
out from the man pages.
--chris
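For what it's worth: zfs snapshot takes -r to snapshot an entire dataset hierarchy at once, while (at least in builds current at the time of this thread) zfs send operates on one snapshot at a time, so a pool-level move looks roughly like the sketch below (pool and dataset names are made up):
zfs snapshot -r tank@move                                   # one consistent snapshot across every dataset in the pool
zfs send tank/home@move | zfs receive newpool/home          # then send each file system individually
zfs send tank/projects@move | zfs receive newpool/projects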
On 20-Jun-07, at 12:23 PM, Richard L. Hamilton wrote:
Hello,
I'm quite interested in ZFS, like everybody else I suppose, and am about
to install FBSD with ZFS.
On that note, I have a different first question to start with. I
personally am a Linux fanboy, and would love to see/use ZFS on Linux
Oliver Schinagl wrote:
So basically, what you are saying is that on FBSD there's no performance
issue, whereas on Solaris there can be (if write caches aren't enabled).
Solaris plays it safe by default. You can, of course, override that safety.
Whether it is a performance win seems to be the sub
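The "plays it safe" part refers to ZFS only enabling a disk's write cache when it is given the whole disk to manage; a rough illustration (device names are made up, and the two commands are alternatives, not a sequence):
zpool create tank c1t0d0       # whole disk: ZFS labels it and can safely enable the write cache
zpool create tank c1t0d0s0     # a slice: the write cache is left alone, since other consumers may share the disk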
mike wrote:
I would be interested in hearing if there are any other configuration
options to squeeze the most space out of the drives. I have no issue
with powering down to replace a bad drive, and I expect that at most
one drive will fail at a time.
This is what is known as "famous las
On Jun 20, 2007, at 1:25 PM, mario heimel wrote:
Linux is the first operating system that can boot from RAID-1+0,
RAID-Z or RAID-Z2 ZFS, really cool trick to put zfs-fuse in the
initramfs.
( Solaris can only boot from single-disk or RAID-1 pools )
http://www.linuxworld.com/news/2007/06180
On Wed, Jun 20, 2007 at 01:25:35PM -0700, mario heimel wrote:
> Linux is the first operating system that can boot from RAID-1+0,
> RAID-Z or RAID-Z2 ZFS, really cool trick to put zfs-fuse in the
> initramfs. ( Solaris can only boot from single-disk or RAID-1 pools )
Note that this method is much
Linux is the first operating system that can boot from RAID-1+0, RAID-Z or
RAID-Z2 ZFS, really cool trick to put zfs-fuse in the initramfs.
( Solaris can only boot from single-disk or RAID-1 pools )
http://www.linuxworld.com/news/2007/061807-zfs-on-linux.html
http://groups.google.com/group/zfs-
Will Murnane wrote:
On 6/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Huitzi,
Awesome graphics! Do we have your permission to use them? :-)
I might need to recreate them in another format.
The numbers don't look quite right. Shouldn't the first image have a
600GB zpool as a result, not
Nice one!
I think this is one of the best and most comprehensive papers about ZFS I have
seen.
Regards,
Roland
On Wed, Jun 20, 2007 at 09:48:08AM -0700, Eric Schrock wrote:
> On Wed, Jun 20, 2007 at 12:45:52PM +0200, Pawel Jakub Dawidek wrote:
> >
> > Will be nice to not EFI label disks, though:) Currently there is a
> > problem with this - zpool created on Solaris is not recognized by
> > FreeBSD, because
On Wed, 2007-06-20 at 12:45 +0200, Pawel Jakub Dawidek wrote:
> Will be nice to not EFI label disks, though:) Currently there is a
> problem with this - zpool created on Solaris is not recognized by
> FreeBSD, because FreeBSD claims GPT label is corrupted.
Hmm. I'd think the right answer here is
[EMAIL PROTECTED] wrote:
One of the reasons I switched back from X/JFS to ReiserFS on my Linux
box was that I couldn't shrink the FS on top of my LVM, which was highly
annoying. Also, sometimes you might want to just remove a disk from your
array: say you set up a mirrored ZFS with 2 120GB disks.
>One of the reasons I switched back from X/JFS to ReiserFS on my Linux
>box was that I couldn't shrink the FS on top of my LVM, which was highly
>annoying. Also, sometimes you might want to just remove a disk from your
>array: say you set up a mirrored ZFS with 2 120GB disks. 4 years
>later, you get
I'm reading the administration guide PDF and noticed that it claims that
at the moment ZFS does not support shrinking of the pool. Will this
feature be added in the future? Also, expanding a RAID-Z is not yet
supported; will this also change?
One of the reasons I switched back from X/JFS to Reiser
mike wrote:
> On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
>
>> One disk can be one vdev.
>> A 1+1 mirror can be a vdev, too.
>> A n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev too.
>>
>> - Then you concatenate vdevs to create a pool. Pools can be extended by
>> adding more vdev
Dominik Saar wrote:
Hi there,
I see strange behavior when I create a ZFS pool on an EMC PowerPath
pseudo device.
I can create a pool on emcpower0a
but not on emcpower2a:
zpool dumps core with "invalid argument".
That's my second machine with PowerPath and ZFS;
the first one works fine,
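A minimal reproduction of what the post describes, with the device names from the post and a made-up pool name:
zpool create emcpool emcpower0a     # succeeds on the affected machine
zpool create emcpool emcpower2a     # zpool dumps core with "invalid argument"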
On Wed, Jun 20, 2007 at 12:23:18PM -0400, Torrey McMahon wrote:
> James C. McPherson wrote:
> >Roshan Perera wrote:
> >>
> >>>But Roshan, if your pool is not replicated from ZFS' point of view,
> >>>then all the multipathing and raid controller backup in the world will
> >>>not make a difference.
>
After researching this further, I found that there are some known
performance issues with NFS + ZFS. I tried transferring files via SMB, and
got write speeds on average of 25MB/s.
So I will have my UNIX systems use SMB to write files to my Solaris server.
This seems weird, but it's fast. I'm sure
On 6/20/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Also, how does replication at the ZFS level use more storage - I'm
assuming raw block - then at the array level?
Just to add to the previous comments. In the case where you have a SAN
array pro
On Wed, Jun 20, 2007 at 12:45:52PM +0200, Pawel Jakub Dawidek wrote:
>
> Will be nice to not EFI label disks, though:) Currently there is a
> problem with this - zpool created on Solaris is not recognized by
> FreeBSD, because FreeBSD claims GPT label is corrupted. On the other
> hand, creating ZF
Pawel Jakub Dawidek wrote:
> On Wed, Jun 20, 2007 at 01:45:29PM +0200, Oliver Schinagl wrote:
>
>> Pawel Jakub Dawidek wrote:
>>
>>> On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
>>>
>>>
> On that note, i have a different first question to start with. I
>>>
James C. McPherson wrote:
Roshan Perera wrote:
But Roshan, if your pool is not replicated from ZFS' point of view,
then all the multipathing and raid controller backup in the world will
not make a difference.
James, I agree from a ZFS point of view. However, from the EMC or the
customer point
On Wed, Jun 20, 2007 at 01:45:29PM +0200, Oliver Schinagl wrote:
>
>
> Pawel Jakub Dawidek wrote:
> > On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
> >
> >>> On that note, i have a different first question to start with. I
> >>> personally am a Linux fanboy, and would love to
On 6/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Huitzi,
Awesome graphics! Do we have your permission to use them? :-)
I might need to recreate them in another format.
The numbers don't look quite right. Shouldn't the first image have a
600GB zpool as a result, not 400GB? Similarly, t
On 6/20/07, mike <[EMAIL PROTECTED]> wrote:
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote:
> I would not risk raidz on that many disks. A nice compromise may be 14+2
> raidz2, which should perform nicely for your workload and be pretty reliable
> when the disks start to fail.
Would anyone on
http://www.bsdcan.org/2007/schedule/events/43.en.html
Direct link to the presentation:
http://www.bsdcan.org/2007/schedule/attachments/27-Porting_ZFS_file_system_to_FreeBSD_Pawel_Jakub_Dawidek.pdf
And presentation for Asia BSDCon 2007:
http://asiabsdcon.org/papers/P16-slides.pdf
http://asiabsdco
On 20 June, 2007 - Oliver Schinagl sent me these 1,9K bytes:
> Also what about full disk vs full partition, e.g. make 1 partition to
> span the entire disk vs using the entire disk.
> Is there any significant performance penalty? (So not having a disk
> split into 2 partitions, but 1 disk, 1 parti
> > A 6 disk raidz set is not optimal for random reads, since each disk in
> > the raidz set needs to be accessed to retrieve each item.
>
> I don't understand, if the file is contained within a single stripe, why
> would it need to access the other disks, if the checksum of the stripe
> is OK?
Huitzi,
Awesome graphics! Do we have your permission to use them? :-)
I might need to recreate them in another format.
Someone was kind enough to point out the error in this example yesterday
and I fixed it in the opensolaris.../zfs version, found here:
http://opensolaris.org/os/community/zfs/d
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote:
I would not risk raidz on that many disks. A nice compromise may be 14+2
raidz2, which should perform nicely for your workload and be pretty reliable
when the disks start to fail.
Would anyone on the list not recommend this setup? I could li
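A sketch of what the suggested 14+2 layout would look like as a single raidz2 vdev (controller/target numbers are made up):
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
                         c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
With 16 disks in a raidz2 vdev, two disks' worth of space goes to parity, so roughly 14 disks' worth is usable.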
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of mike
> Sent: Wednesday, June 20, 2007 9:30 AM
>
> I would prefer something like 15+1 :) I want ZFS to be able to detect
> and correct errors, but I do not need to squeeze all the performance
> out of it (I'll be using it as a home
Hi Mike,
> If I was to plan for a 16 disk ZFS-based system, you would probably
> suggest me to configure it as something like 5+1, 4+1, 4+1 all raid-z
> (I don't need the double parity concept)
>
> I would prefer something like 15+1 :) I want ZFS to be able to detect
> and correct errors, but I d
On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
One disk can be one vdev.
A 1+1 mirror can be a vdev, too.
A n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev too.
- Then you concatenate vdevs to create a pool. Pools can be extended by
adding more vdevs.
- Then you create ZFS file s
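In command form, the same idea (hypothetical device names):
# a pool made of two vdevs, each a 1+1 mirror
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# later, extend the pool two disks at a time by adding another mirror vdev
zpool add tank mirror c1t4d0 c1t5d0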
Roshan Perera wrote:
But Roshan, if your pool is not replicated from ZFS' point of view,
then all the multipathing and raid controller backup in the world will
not make a difference.
James, I agree from a ZFS point of view. However, from the EMC or the
customer point of view they want to do the
Hi,
> How are paired mirrors more flexible?
well, I'm talking about a small home system. If the pool gets full, the
way to expand with RAID-Z would be to add 3+ disks (typically 4-5).
With mirror only, you just add two. So in my case it's just about
the granularity of expansion.
The reasoning is
Roshan Perera wrote:
Hi all,
Is there a place where I can find a ZFS best practices guide to use against
DMX, and a roadmap for ZFS?
Also, the customer is now looking at big ZFS installations in production.
Would you guys happen to know, or tell me where I can find, details of the numbers
of current installatio
Constantin Gonzalez wrote:
> Hi,
>
>
>> I'm quite interested in ZFS, like everybody else I suppose, and am about
>> to install FBSD with ZFS.
>>
>
> welcome to ZFS!
>
>
>> Anyway, back to business :)
>> I have a whole bunch of different sized disks/speeds. E.g. 3 300GB disks
>> @ 40mb,
Hi all,
Is there a place where I can find a ZFS best practices guide to use against DMX,
and a roadmap for ZFS?
Also, the customer is now looking at big ZFS installations in production. Would
you guys happen to know, or tell me where I can find, details of the numbers of current
installations? We are looking
Pawel Jakub Dawidek wrote:
> On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
>
>>> On that note, i have a different first question to start with. I
>>> personally am a Linux fanboy, and would love to see/use ZFS on linux. I
>>> assume that I can use those ZFS disks later with a
On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
> >On that note, i have a different first question to start with. I
> >personally am a Linux fanboy, and would love to see/use ZFS on linux. I
> >assume that I can use those ZFS disks later with any os that can
> >work/recognizes ZFS c
Mario Goebbels wrote:
>> A 6 disk raidz set is not optimal for random reads, since each disk in
>> the raidz set needs to be accessed to retrieve each item.
>>
>
> I don't understand, if the file is contained within a single stripe, why
> would it need to access the other disks, if the checks
> A 6 disk raidz set is not optimal for random reads, since each disk in
> the raidz set needs to be accessed to retrieve each item.
I don't understand, if the file is contained within a single stripe, why
would it need to access the other disks, if the checksum of the stripe
is OK? Also, why wou
> Correction:
>
> SATA Controller is a Silicon Image 3114, not a 3112.
Do these slow speeds only appear when writing via NFS or generally in
all scenarios? Just asking, because Solaris' ata driver doesn't
initialize settings like block mode, prefetch and such on IDE/SATA
drives (that is if ata a
> But Roshan, if your pool is not replicated from ZFS'
> point of view, then all the multipathing and raid
> controller backup in the world will not make a difference.
James, I agree from a ZFS point of view. However, from the EMC or the customer
point of view they want to do the replication at t
> I had the same question last week decided to take a similar approach.
> Instead of a giant raidz of 6 disks, i created 2 raidz's of 3 disks
> each. So when I want to add more storage, I just add 3 more disks.
Even if you've created a giant 6 disk RAID-Z, apart from a formal
warning requiring the
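In command form, the two-raidz-sets approach mentioned above (device names are made up):
# two 3-disk RAID-Z vdevs in one pool
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c1t3d0 c1t4d0 c1t5d0
# later, grow the pool three disks at a time by adding another RAID-Z vdev
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0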
> Hello,
>
> I'm quite interested in ZFS, like everybody else I
> suppose, and am about
> to install FBSD with ZFS.
>
> On that note, i have a different first question to
> start with. I
> personally am a Linux fanboy, and would love to
> see/use ZFS on linux. I
> assume that I can use those ZFS
Hi,
> I'm quite interested in ZFS, like everybody else I suppose, and am about
> to install FBSD with ZFS.
welcome to ZFS!
> Anyway, back to business :)
> I have a whole bunch of different sized disks/speeds. E.g. 3 300GB disks
> @ 40mb, a 320GB disk @ 60mb/s, 3 120gb disks @ 50mb/s and so on.
>
Hi Chris,
> What is the best (meaning fastest) way to move a large file system
> from one pool to another pool on the same machine. I have a machine
> with two pools. One pool currently has all my data (4 filesystems), but it's
> misconfigured. Another pool is configured correctly, and I want t
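Since this is the question the send/snapshot follow-up earlier in this digest refers to, a hedged sketch of the snapshot/send/receive approach on a single machine (pool and dataset names are made up):
zfs snapshot oldpool/data@move
zfs send oldpool/data@move | zfs receive newpool/data
zfs destroy -r oldpool/data      # only after verifying the copy in newpool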