On 08/09/2009, at 2:01 AM, Ross Walker wrote:
On Sep 7, 2009, at 1:32 AM, James Lever wrote:
Well, an MD1000 holds 15 drives, so a good compromise might be two 7-drive
RAIDZ2s with a hot spare... That should provide 320 IOPS instead of
160, a big difference.
The issue is interactive responsiveness
Would I just do the following then:
> zpool create -f zone1 c1t1d0s0
> zfs create zone1/test1
> zfs create zone1/test2
Would I then use zfs set quota=xxxG to handle disk usage?
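For what it's worth, a rough sketch of how that quota step might look (the dataset names are the ones from your example; the sizes are placeholders):

# zfs set quota=100G zone1/test1
# zfs set quota=200G zone1/test2
# zfs get quota zone1/test1 zone1/test2

The zfs get at the end just confirms the values that were applied.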
Mike Gerdts wrote:
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda wrote:
Thanks for the info Mike.
Just so I'm clear. You suggest 1) create a single zpool from my LUN, 2) create a
single ZFS filesystem, 3) create 2 zones in the ZFS filesystem. Sound right?
Correct
Well I would ac
On Wed, 23 Sep 2009, Ray Clark wrote:
My understanding is that if I use "zfs set checksum=" to
change the algorithm, this will change the checksum algorithm
for all FUTURE data blocks written, but it does not in any way change
the checksum for previously written data blocks.
This is correct. Th
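As a concrete illustration (pool and dataset names here are hypothetical), switching the algorithm is just a property change:

# zfs set checksum=sha256 tank/data
# zfs get checksum tank/data

Blocks written from that point on carry the new checksum; blocks already on disk keep whatever checksum they were written with until the data itself is rewritten, for example by copying it or by zfs send | zfs receive into a fresh dataset.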
My understanding is that if I use "zfs set checksum=" to change the
algorithm, this will change the checksum algorithm for all FUTURE data
blocks written, but it does not in any way change the checksum for previously
written data blocks.
I need to corroborate this understanding. Could someone p
On Wed, Sep 23, 2009 at 8:12 PM, David Magda wrote:
> On Sep 23, 2009, at 20:48, bertram fukuda wrote:
>
>> What if we have no plans on moving or cloning the zone, I can get away
>> with only one pool, right?
>
> Sure.
>
>> If I'm doing a separate FS for each zone, do I just slice up my LUN,
>> create
On Sep 23, 2009, at 20:48, bertram fukuda wrote:
What if we have no plans on moving or cloning the zone, I can get
away with only one pool, right?
Sure.
If I'm doing a separate FS for each zone, do I just slice up my LUN,
create a FS for each zone and I'm done?
One pool from the LUN ('zpool
David,
What if we have no plans on moving or cloning the zone, I can get away with
only one pool, right?
If I'm doing a separate FS for each zone, do I just slice up my LUN, create
a FS for each zone and I'm done?
Thanks,
Bert
To: Developers and Students
You are invited to participate in the first OpenSolaris Security Summit
OpenSolaris Security Summit
Tuesday, November 3rd, 2009
Baltimore Marriott Waterfront
700 Aliceanna Street
Baltimore, Maryland 21202
Join us as we explore the latest trends of OpenSolaris Sec
Thanks Richard and Jim,
Your answers helped me show the customer that there was no issue
with ZFS and the HDS.
I went onsite to see the problem, and as Jim suggested, the customer
had only looked at the %b and average service time and thought there was a problem.
The server is running an Or
While a resilver was running, we attempted a recursive snapshot which
resulted in a kernel panic:
panic[cpu1]/thread=ff00104c0c60: assertion failed: 0 ==
zap_remove_int(mos, next_clones_obj, dsphys->ds_next_snap_obj, tx) (0x0 ==
0x2), file: ../../common/fs/zfs/dsl_dataset.c, line: 1869
Richard,
I compared the libzfs_jni source code and it's pretty different
from what we're doing. libzfs_jni is essentially a JNI wrapper to
(yet?) another set of zfs-related programs written in C. zfs for Java,
on the other hand, is a Java wrapper to the functionality of (and only
of) libzfs. I
On Sep 23, 2009, at 08:13, Mike Gerdts wrote:
That is, at time t1 I have zones z1 and z2 on host h1. I think that
at some time
in the future I would like to move z2 to host h2 while leaving z1 on
h1.
You can have a single pool, but it's probably good to have each zone
in its own file sys
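If each zone lives in its own dataset, moving z2 later can be as simple as snapshotting and sending just that dataset. A minimal sketch (the pool and dataset names below are placeholders, not from the original mail):

# zfs snapshot tank/zones/z2@migrate
# zfs send tank/zones/z2@migrate | ssh h2 zfs receive tank/zones/z2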
> zfs share -a
Ah-ha! Thanks.
FYI, I got between 2.5x and 10x improvement in performance, depending on the
test. So tempting :)
-Scott
On 23 September, 2009 - Scott Meilicke sent me these 0,5K bytes:
> Thank you both, much appreciated.
>
> I ended up having to put the flag into /etc/system. When I disabled
> the ZIL and umount/mounted without a reboot, my ESX host would not see
> the NFS export, nor could I create a new NFS conn
Thank you both, much appreciated.
I ended up having to put the flag into /etc/system. When I disabled the ZIL and
umount/mounted without a reboot, my ESX host would not see the NFS export, nor
could I create a new NFS connection from my ESX host. I could get into the file
system from the host i
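For reference, the /etc/system entry that persistently disables the ZIL on those builds (as documented in the Evil Tuning Guide) is:

set zfs:zil_disable = 1

followed by a reboot. Disabling the ZIL trades away synchronous-write safety, so it is really a measurement tool rather than a production setting.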
Cindy: AWESOME! Didn't know about that property, I'll make sure I set it :).
All I did to replace the drives was to power off the machine (the failed drive
had hard-locked the SCSI bus, so I had to anyways). Once the machine was
powered off, I pulled the bad drive, inserted the new drive, and
On 23 Sept 2009, at 19:07, Neil Perrin wrote:
On 09/23/09 10:59, Scott Meilicke wrote:
How can I verify if the ZIL has been disabled or not? I am trying
to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 |
Dustin,
You didn't describe the process that you used to replace the disk, so it's
difficult to comment on what happened.
In general, you physically replace the disk and then let ZFS know that
the disk is replaced, like this:
# zpool replace pool-name device-name
This process is described here:
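A minimal sketch with made-up names, assuming the pool is called tank and the failed disk is c1t3d0:

# zpool offline tank c1t3d0      (if the system is staying up)
... physically swap the disk ...
# zpool replace tank c1t3d0
# zpool status tank              (watch the resilver complete)

If the new disk goes into the same slot, zpool replace with a single device argument is enough.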
Tim: I couldn't do a zpool scrub, since the pool was marked as UNAVAIL.
Believe me, I tried :)
Bob: Ya, I realized that after I clicked send. My brain was a little frazzled,
so I completely overlooked it.
Solaris 10u7 - Sun E450
ZFS pool version 10
ZFS filesystem version 3
-Dustin
On 09/23/09 10:59, Scott Meilicke wrote:
How can I verify if the ZIL has been disabled or not?
I am trying to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 | mdb -kw
- this only temporarily disables the z
How can I verify if the ZIL has been disabled or not?
I am trying to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 | mdb -kw
and then rebooted. However, I do not see any benefits for my NFS workload.
Thanks,
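One quick way to check the current value of the tunable (a sketch, run as root):

# echo zil_disable/D | mdb -k

This prints the variable in decimal: 0 means the ZIL is enabled, 1 means it has been disabled. Note that on those builds the setting is only picked up when a filesystem is mounted, so filesystems mounted before the change still use the ZIL.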
Awesome!!! Thanks for your help.
On Wed, 23 Sep 2009, Dustin Marquess wrote:
Okay.. I "fixed" it by powering the server off, removing the new drive, letting
the pool come up degraded, and then doing zpool replace.
I'm assuming what happened was ZFS saw that the disk was online,
tried to use it, and then noticed that the chec
On Wed, Sep 23, 2009 at 10:57 AM, Dustin Marquess wrote:
> Okay.. I "fixed" it by powering the server off, removing the new drive,
> letting the pool come up degraded, and then doing zpool replace.
>
> I'm assuming what happened was ZFS saw that the disk was online, tried to
> use it, and then no
Okay.. I "fixed" it by powering the server off, removing the new drive, letting
the pool come up degraded, and then doing zpool replace.
I'm assuming what happened was ZFS saw that the disk was online, tried to use
it, and then noticed that the checksums didn't match (of course) and marked the
Here is my menu.lst.
I tried to change the zpool mountpoints, but that did not work either.
menu.lst=
j...@opensolaris:~# more /a/boot/grub/menu.lst
splashimage /boot/grub/splash.xpm.gz
background 215ECA
timeout 30
default 10
#-- ADDED BY BOOTADM - DO NOT EDIT --
title OpenSolaris 2008.11 snv
On Wed, Sep 23, 2009 at 3:32 AM, vattini giacomo wrote:
> Hi there, I've been able to restore my zpool on a live CD and reinstall
> GRUB, but booting from the HD it hangs for a while and then nothing comes up
> j...@opensolaris:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
I replaced a bad disk in a RAID-Z2 pool, and now the pool won't come online.
Status shows nothing helpful at all. I don't understand why this is, since I
should be able to lose 2 drives, and I only replaced one!
# zpool status -v pool
  pool: pool
 state: UNAVAIL
 scrub: none requested
config:
I wonder if a taskq pool does not suffer from a similar
effect to the one observed for the nfsd pool:
6467988 Minimize the working set of nfsd threads
Created threads round-robin out of the taskq loop, doing little
work but waking up at least once every 5 minutes, and so they are never
reaped.
-r
Nils Goroll
> The only thing that jumps out at me is the ARC size -
> 53.4GB, or
> most of your 64GB of RAM. This in-and-of-itself is
> not necessarily
> a bad thing - if there are no other memory consumers,
> let ZFS cache
> data in the ARC. But if something is coming along to
> flush dirty
> ARC pages period
(posted to zfs-discuss)
Hmmm...this is nothing in terms of load.
So you say that the system becomes sluggish/unresponsive
periodically, and you noticed the xcall storm when that
happens, correct?
Refresh my memory - what is the frequency and duration
of the sluggish cycles?
Could you capture a
I'm cross-posting to zfs-discuss, as this is now more of a ZFS
query than a dtrace query at this point, and I'm not sure if all the ZFS
experts are listening on dtrace-discuss (although they probably
are... :^).
The only thing that jumps out at me is the ARC size - 53.4GB, or
most of your 64GB o
Hi list,
I have a question about setting up zfs send-receive functionality (between
remote machines) as a non-root user.
"server1" - is a server where "zfs send" will be executed
"server2" - is a server where "zfs receive" will be executed.
I am using the following zfs structure:
[server1]$ zfs l
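A hedged sketch of the delegation that usually has to be in place for this to work as a non-root user (user name, pool and dataset names below are placeholders, not taken from the original mail):

[server1]# zfs allow backupuser send,snapshot tank/data
[server2]# zfs allow backupuser create,mount,receive tank/backup
[server1]$ zfs send tank/data@snap1 | ssh backupuser@server2 /usr/sbin/zfs receive tank/backup/data

Depending on the build, the receiving user may also need OS-level mount privileges (e.g. via an RBAC profile) for the receive to succeed.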
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda wrote:
> Thanks for the info Mike.
>
> Just so I'm clear. You suggest 1) create a single zpool from my LUN, 2) create
> a single ZFS filesystem, 3) create 2 zones in the ZFS filesystem. Sound right?
Correct
--
Mike Gerdts
http://mgerdts.blogspot.co
2009/9/23 bertram fukuda
> Thanks for the info Mike.
>
> Just so I'm clear. You suggest 1) create a single zpool from my LUN, 2)
> create a single ZFS filesystem, 3) create 2 zones in the ZFS filesystem. Sound
> right?
>
> You can create zfs filesystems for each zone and you also can delegate zfs
fi
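A minimal sketch of such a delegation (pool, dataset and zone names are hypothetical):

# zfs create tank/zone1-data
# zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=tank/zone1-data
zonecfg:zone1:dataset> end
zonecfg:zone1> commit
zonecfg:zone1> exit

After the zone is rebooted, the dataset shows up inside zone1, where its properties and child filesystems can be managed directly by the zone administrator.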
Cross-posted to ZFS-Discuss per Vikram's suggestion.
Summary: I upgraded to snv_123 and the system hangs on boot. snv_121 and
earlier are working fine.
Booting with -kv, the system still hung, but after a few minutes, the system
continued, spit out more text (referring to disks, but I could no
Thanks for the info Mike.
Just so I'm clear. You suggest 1) create a single zpool from my LUN, 2) create a
single ZFS filesystem, 3) create 2 zones in the ZFS filesystem. Sound right?
Thanks again,
Bert
On Wed, Sep 23, 2009 at 7:04 AM, bertram fukuda wrote:
> I have a 1TB LUN being presented to me from our storage team. I need to
> create 2 zones and share the storage between them. Would it be best to
> repartition the LUN (two 500GB slices), create 2 separate storage pools then
> assign them
I have a 1TB LUN being presented to me from our storage team. I need to create
2 zones and share the storage between them. Would it be best to repartition
the LUN (two 500GB slices), create 2 separate storage pools, then assign them
separately to each zone? If not, what would be the recommended
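Based on the answers elsewhere in this thread, the single-pool approach would look roughly like this (device, pool and filesystem names are placeholders):

# zpool create tank c3t0d0
# zfs create -o quota=500G tank/zone1
# zfs create -o quota=500G tank/zone2

That is, one pool on the whole LUN and one filesystem per zone, with quotas rather than slices dividing the space.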
Hi there, I've been able to restore my zpool on a live CD and reinstall GRUB, but
booting from the HD it hangs for a while and then nothing comes up
j...@opensolaris:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  66.2G  5.65G    78K  /a
rpool/ROOT