willing to go through more hackery if needed.
(If I need to destroy and re-create these LUNS on the storage array, I can do
that too, but I'm hoping for something more host based)
--Jason
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?
Just recently, it would seem a poorly timed unscheduled poweroff while
NWAM was attempting to update nsswitch.conf left me with a 0 byte
nsswitch.conf (which when t
on-updated versions of everything.
On Tue, Oct 16, 2012 at 2:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
> > Behalf Of Jason
Hi,
One of my server's zfs faulted and it shows following:
NAME       STATE    READ WRITE CKSUM
backup     UNAVAIL     0     0     0  insufficient replicas
  raidz2-0 UNAVAIL     0     0     0  insufficient replicas
    c4t0d0 ONLINE      0     0     0
c
I've done mpxio over multiple IP links in Linux using multipathd. Works just
fine. It's not part of the initiator, but it accomplishes the same thing.
It was a Linux IET target. Need to try it here with a COMSTAR target.
-Original Message-
From: Ross Walker
Sender: zfs-discuss-boun...@op
On Tue, Dec 21, 2010 at 7:58 AM, Jeff Bacon wrote:
> One thing I've been confused about for a long time is the relationship
> between ZFS, the ARC, and the page cache.
>
> We have an application that's a quasi-database. It reads files by
> mmap()ing them. (writes are done via write()). We're talki
Use the Solaris cp (/usr/bin/cp) instead
On Wed, Mar 16, 2011 at 8:59 AM, Fred Liu wrote:
> It is from ZFS ACL.
>
> Thanks.
>
> Fred
>
> From: Fred Liu
> Sent: Wednesday, March 16, 2011 9:57 PM
> To: ZFS Discussions
> Subject: GNU 'cp -p' can't work well with ZFS-based-NFS
>
> Alw
option somewhere to allow sharing tank/nfs/vmware and the zfs
filesystems mounted into that directory tree? It would make for a very neat
solution if it did.
If not I can get around it with one nfs mount per virtual machine, but that is
extra overhead I was hoping to avoid.
Thanks in advance
Ja
I've been looking to build my own cheap SAN to explore HA scenarios with VMware
hosts, though not for a production environment. I'm new to opensolaris but I
am familiar with other clustered HA systems. The features of ZFS seem like
they would fit right in with attempting to build an HA storage
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically they abandoned using that because there was
a huge performance hit using ZFS over NFS. I didn’t get the specifics but his
group is usually pretty sharp. I’ll have to check back with him.
Specifically, I remember storage vMotion and jumbo frames being among the last
features supported on NFS. That's just the impression I get from past releases;
perhaps they are doing better with that now.
I know the performance problem had specifically to do with ZFS and the way it
handled something. I know lots of i
So aside from the NFS debate, would this 2 tier approach work? I am a bit
fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to
the VMware host as a raw device. Is that possible or is my understanding
wrong? Also could it be defined as a clustered resource?
True, though an enclosure for shared disks is expensive. This isn't for
production but for me to explore what I can do with x86/x64 hardware. The idea
being that I can just throw up another x86/x64 box to add more storage. Has
anyone tried anything similar?
I guess I should come at it from the other side:
If you have 1 iscsi target box and it goes down, you're dead in the water.
If you have 2 iscsi target boxes that replicate and one dies, you are OK but
you then have to have a 2:1 total storage to usable ratio (excluding expensive
shared disks).
Hi all,
Longtime reader, first time poster. Sorry for the lengthy intro, and I'm not
really sure the title matches what I'm trying to get at... I am trying to find
a solution where making use of a zfs filesystem can shorten our backup window.
Currently, our backup solution takes data from ufs or
> In general, your backup software should handle making
> incremental dumps, even from a split mirror. What are
> you using to write data to tape? Are you simply
> dumping the whole file system, rather than using
> standard backup software?
>
We are using Veritas Netbackup 5 MP4. It is performing
Sickness, which case are you using? I've been looking for something that
supports many HDDs. Thanks.
Thanks, did it come with the hardware to mount HDD's in 5.25" slots?
-bash-3.2$ zfs share tank
cannot share 'tank': share(1M) failed
-bash-3.2$
how do i figure out what's wrong?
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 1010M 227G 1010M /tank
# zfs create tank/storage
cannot share 'tank/storage': share(1M) failed
filesystem successfully created, but not shared
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 1010M 227G 1010M /tank
# zfs set sharenfs=on tank/storage
cannot share 'tank/storage': share(1M) failed
#
i'm fairly certain i installed the nfs stuff, and it looks like the rpc stuff
is running
http://www.student.cs.uwaterloo.ca/~jouellet/netstat.txt
' doesn't match any instances
#
# share -F nfs -o ro /tank/storage
Invalid protocol specified: nfs
#
> Jason, are you getting the same return value when you try to set sharesmb=on
# zfs set sharesmb=on tank/storage
cannot share 'tank/storage': share(1M) failed
SMB: Unable
> You're missing the server bits, check for the following packages:
> SUNWnfsskr, SUNWnfssr, and SUNWnfssu
> -- richard
i added those packages and rebooted then did
# svcadm enable network/nfs/server
but nfs still doesn't work
# zfs share tank/storage
cannot share 'tank/storage': share(1M) fail
i got all nfs/server dependencies online, but nfs/server is disabled because
"No NFS filesystems are shared"
# svcs -l nfs/server
fmri         svc:/network/nfs/server:default
name         NFS server
enabled      false (temporary)
state        disabled
next_state   none
state_time   Sun Feb 17 21:
> Try sharing something else, maybe:
> share -F nfs /mnt
>
> After that, you should see the services started.
> Once you get that to work, then try sharing the
> zfs file systems. Your problems aren't zfs related...
> at least not yet.
> -- richard
# share -F nfs /mnt
share: illegal option -- F
that doesn't work
it looks like something may be corrupt; maybe something didn't get installed
properly, or i have a bad disc. for some reason my share command doesn't have
an -F option
i'm going to get a new disc and reinstall everything
thanks for the help everyone
btw, my machine doesn't have a DNS name, so i had to enter a phony one to get
nfs/server online
can that have any ill effects?
es. I wonder if that
improvement didn't make it into sol10U2?
-Jason
Sent via BlackBerry from Cingular Wireless
-Original Message-
From: eric kustarz <[EMAIL PROTECTED]>
Date: Tue, 27 Jun 2006 15:55:45
To: Steve Bennett <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris
On Wed, Mar 31, 2010 at 7:53 PM, Erik Trimble wrote:
> Brett wrote:
>>
>> Hi Folks,
>>
>> I'm in a shop that's very resistant to change. The management here are
>> looking for major justification of a move away from ufs to zfs for root file
>> systems. Does anyone know if there are any whitepapers/b
On Thu, Apr 1, 2010 at 9:06 AM, David Magda wrote:
> On Wed, March 31, 2010 21:25, Bart Smaalders wrote:
>
>> ZFS root will be the supported root filesystem for Solaris Next; we've
>> been using it for OpenSolaris for a couple of years.
>
> This is already supported:
>
>> Starting in the Solaris 1
I have been searching this forum and just about every ZFS document i can find
trying to find the answer to my questions. But i believe the answer i am
looking for is not going to be documented and is probably best learned from
experience.
This is my first time playing around with OpenSolaris
Thank you for the replies guys!
I was actually already planning to get another 4 GB of RAM for the box right
away anyway, but thank you for mentioning it! As there appear to be a couple of
ways to "skin the cat" here, I think I am going to try both a 14-spindle RaidZ2
and a 2 x 7 RaidZ2 configura
Ahh,
Thank you for the reply, Bob; that is the info I was after. It looks like I
will be going with the 2 x 7 RaidZ2 option.
And just to clarify: as far as expanding this pool in the future, my only
option is to add another 7-spindle RaidZ2 array, correct?
Thanks for all the help guys !
I am booting from a single 74gig WD raptor attached to the motherboards onboard
SATA port.
--
Freddie,
now you have brought up another question :) I had always assumed that I would
just use OpenSolaris for this file server build, as I had not actually done any
research regarding other operating systems that support ZFS. Does anyone have
any advice as to whether I should be consideri
Since I already have OpenSolaris installed on the box, I probably won't jump
over to FreeBSD. However, someone has suggested I look into www.nexenta.org,
and I must say it is quite interesting. Someone correct me if I am wrong, but
it looks like it is OpenSolaris-based and has basically
ev
Well, I would like to thank everyone for their comments and ideas.
I finally have this machine up and running with Nexenta Community Edition and
am really liking the GUI for administering it. It suits my needs perfectly and
is running very well. I ended up going with 2 x 7 RaidZ2 vdevs in one poo
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today). I think an additional option for
the snapdir property ('directory' ?) that provides this behavior (with
suitable warnings ab
It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).
On Wed, Apr 21, 2010 at 6:01 PM, Brandon High wrote:
> On Wed, Apr 21, 2010 at 10:38 AM, Edward Ned Harvey
> wrote
If you're just wanting to do something like the netapp .snapshot
(where it's in every directory), I'd be curious if the CIFS shadow
copy support might already have done a lot of the heavy lifting for
this. That might be a good place to look
On Mon, May 3, 2010 at 7:25 PM, Peter Jeremy
wrote:
> On
Well, the GUI I think is just Windows; it's all just APIs that are
presented to Windows.
On Mon, May 3, 2010 at 10:16 PM, Edward Ned Harvey
wrote:
>> From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
>> Behalf Of Jason King
>>
>> If you're
In the meantime, you can use autofs to do something close to this if
you like (sort of like the pam_mkhomedir module) -- you can have it
execute a script that returns the appropriate auto_user entry (given a
username as input). I wrote one a long time ago that would do a zfs
create if the dataset
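Such an executable autofs map might look roughly like the following sketch. The script name, dataset layout, and server path are my assumptions for illustration, not the original script:

```shell
#!/bin/ksh
# /etc/auto_home_zfs -- executable autofs map (hypothetical name).
# autofs passes the map key (the username) as $1 and expects a
# map entry on stdout.
user="$1"
ds="rpool/export/home/${user}"

# Create the home dataset on first reference, like pam_mkhomedir would.
if ! zfs list "$ds" >/dev/null 2>&1; then
    zfs create "$ds" || exit 1
fi

# Emit the auto_home entry for this key.
echo "localhost:/export/home/${user}"
```

The map would be hooked up by marking the script executable and referencing it from auto_master (e.g. a line such as `/home  /etc/auto_home_zfs`).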
Jason
--
Hi,
something like this
Disk #   Slice 1   Slice 2
1        raid5     raid0
2        raid5     raid0
3        raid5     raid0
I want to have some fast scratch space (raid0) and some protected (raidz)
Greetings
J
Ok,
I got it working; however, I set up two partitions on each disk using fdisk
inside of format.
What's the difference compared to slices? (I checked with gparted.)
Bye
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensola
On Thu, Jun 10, 2010 at 11:32 PM, Erik Trimble wrote:
> On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:
>>
>> On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal
>> wrote:
>>
>>>
>>> We at KQInfotech, initially started on an independent port of ZFS to
>>> linux.
>>> When we posted our progress
On Mon, Jul 12, 2010 at 11:09 AM, Garrett D'Amore wrote:
> On Mon, 2010-07-12 at 17:05 +0100, Andrew Gabriel wrote:
>> Linder, Doug wrote:
>> > Out of sheer curiosity - and I'm not disagreeing with you, just wondering
>> > - how does ZFS make money for Oracle when they don't charge for it? Do
>
So, my Areca controller has been complaining via email of read errors for a
couple days on SATA channel 8. The disk finally gave up last night at 17:40.
I got to say I really appreciate the Areca controller taking such good care of
me.
For some reason, I wasn't able to log into the server las
lot of attempts out there, but nothing I've found is comprehensive.
Jason
On Wed, Oct 14, 2009 at 4:23 PM, Eric Schrock wrote:
> On 10/14/09 14:17, Cindy Swearingen wrote:
>>
>> Hi Jason,
>>
>> I think you are asking how do you tell ZFS that you want to replace t
X read errors in Y minutes", Then we can really see
what happened.
Jason
On Wed, Oct 14, 2009 at 4:32 PM, Eric Schrock wrote:
> On 10/14/09 14:26, Jason Frank wrote:
>>
>> Thank you, that did the trick. That's not terribly obvious from the
>> man page though. The man
On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins wrote:
> Dale Ghent wrote:
>>
>> So looking at the README for patch 14144[45]-09, there are ton of ZFS
>> fixes and feature adds.
>>
>> The big features are already described in the update 8 release docs, but
>> would anyone in-the-know care to comment
On Thu, Oct 15, 2009 at 9:25 AM, Enda O'Connor wrote:
>
>
> Jason King wrote:
>>
>> On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins wrote:
>>>
>>> Dale Ghent wrote:
>>>>
>>>> So looking at the README for patch 14144[45]-09,
its beefs with Sun does). But, I can
live with detaching them if I have to.
Another thing that would be nice would be to receive notification of
disk failures from the OS via email or SMS (like the vendor I
previously alluded to), but I know I'm talking crazy now.
Jason
On Thu, Oct 2
On Sun, Nov 8, 2009 at 7:55 AM, Robert Milkowski wrote:
>
> fyi
>
> Robert Milkowski wrote:
>>
>> XXX wrote:
>>>
>>> | Have you actually tried to roll-back to previous uberblocks when you
>>> | hit the issue? I'm asking as I haven't yet heard about any case
>>> | of the issue witch was not solved
failed (5)
I've searched the forums and they've been very helpful but I don't see anything
about this. I created a pool with the internal sata drives and there are no
issues transferring data on those ports. What should I try to isolate and
hopefully resolve the issue
On Thu, Dec 3, 2009 at 9:58 AM, Bob Friesenhahn
wrote:
> On Thu, 3 Dec 2009, Erik Ableson wrote:
>>
>> Much depends on the contents of the files. Fixed size binary blobs that
>> align nicely with 16/32/64k boundaries, or variable sized text files.
>
> Note that the default zfs block size is 128K a
On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens wrote:
> Michael Schuster wrote:
>>
>> Mike Gerdts wrote:
>>>
>>> On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have
ee0a0aRCRD
--
Jason Fortezzo
forte...@mechanicalism.net
On Wed, Feb 10, 2010 at 6:45 PM, Paul B. Henson wrote:
>
> We have an open bug which results in new directories created over NFSv4
> from a linux client having the wrong group ownership. While waiting for a
> patch to resolve the issue, we have a script running hourly on the server
> which finds d
On Sat, Feb 13, 2010 at 9:58 AM, Jim Mauro wrote:
> Using ZFS for Oracle can be configured to deliver very good performance.
> Depending on what your priorities are in terms of critical metrics, keep in
> mind
> that the most performant solution is to use Oracle ASM on raw disk devices.
> That is
My problem is when you have 100+ luns divided between OS and DB,
keeping track of what's for what can become problematic. It becomes
even worse when you start adding luns -- the chance of accidentally
grabbing a DB lun instead of one of the new ones is non-trivial (then
there's also the chance th
If you're doing anything with ACLs, the GNU utilities have no
knowledge of ACLs, so GNU chmod will not modify them (nor will GNU ls
show ACLs), you need to use /bin/chmod and /bin/ls to manipulate them.
It does sound though that GNU chmod is explicitly testing and skipping
any entry that's a link
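For anyone following along, a quick illustration of the difference between the two toolchains (the file path is hypothetical):

```shell
# Solaris /bin/ls -v prints the full NFSv4/ZFS ACL; GNU ls won't show it.
/bin/ls -v /tank/data/report.txt

# Solaris /bin/chmod can edit ACL entries; here, granting a user read access.
/bin/chmod A+user:webservd:read_data:allow /tank/data/report.txt

# GNU chmod only manipulates the permission bits and ignores ACL entries.
/usr/gnu/bin/chmod 640 /tank/data/report.txt
```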
Could also try /usr/gnu/bin/ls -U.
I'm working on improving the memory profile of /bin/ls (as it gets
somewhat excessive when dealing with large directories), which as a
side effect should also help with this.
Currently /bin/ls allocates a structure for every file, and doesn't
output anything unt
Did you try adding:
nfs4: mode = special
vfs objects = zfsacl
To the shares in smb.conf? While we haven't done extensive work on
S10, it appears to work well enough for our (limited) purposes (along
with setting the acl properties to passthrough on the fs).
On Fri, Feb 26, 2010 at
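Putting the pieces described above together, the configuration looks roughly like this (share and dataset names are placeholders):

```shell
# In smb.conf, per share:
#   [data]
#       path = /tank/data
#       nfs4: mode = special
#       vfs objects = zfsacl
#
# On the ZFS side, set the ACL properties to passthrough:
zfs set aclinherit=passthrough tank/data
zfs set aclmode=passthrough tank/data   # aclmode exists on S10-era releases
```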
We have a running zpool with a 12 disk raidz3 vdev in it ... we gave ZFS the
full, raw disks ... all is well.
However, we built it on two LSI 9211-8i cards and we forgot to change from IR
firmware to IT firmware.
Is there any danger in shutting down the OS, flashing the cards to IT firmware,
a
on LSI 9211-8i)
To: "Jason Usher"
Cc: zfs-discuss@opensolaris.org
Date: Tuesday, July 17, 2012, 5:05 PM
Hi Jason,
I have done this in the past. (3x LSI 1068E - IBM BR10i).
Your pool has no tie with the hardware used to host it (including your HBA).
You could change all your hardware, and s
Hi,
I have a ZFS filesystem with compression turned on. Does the "used" property
show me the actual data size, or the compressed data size ? If it shows me the
compressed size, where can I see the actual data size ?
I also wonder about checking status of dedupe - I created my pool without
de
--- On Fri, 9/21/12, Sašo Kiselkov wrote:
> > I have a ZFS filesystem with compression turned
> on. Does the "used" property show me the actual data
> size, or the compressed data size ? If it shows me the
> compressed size, where can I see the actual data size ?
>
> It shows the allocated n
Oh, and one other thing ...
--- On Fri, 9/21/12, Jason Usher wrote:
> > It shows the allocated number of bytes used by the
> > filesystem, i.e.
> > after compression. To get the uncompressed size,
> > multiply "used" by "compressratio"
--- On Mon, 9/24/12, Richard Elling wrote:
I'm hoping the answer is yes - I've been looking but do not see it ...
none can hide from dtrace!
# dtrace -qn 'dsl_dataset_stats:entry {this->ds =
(dsl_dataset_t *)arg0; printf("%s\tcompressed size = %d\tuncompressed
size=%d\n", this->ds->ds_dir->dd_m
--- On Tue, 9/25/12, Volker A. Brandt wrote:
> Well, he is telling you to run the dtrace program as root in
> one
> window, and run the "zfs get all" command on a dataset in
> your pool
> in another window, to trigger the dataset_stats variable to
> be filled.
>
> > none can hide from dtrace
I can think of two rather ghetto ways to go.
1. write data then set the read-only property. If you need to make updates
cycle back to rw, write data, set read only.
2. Write data, snapshot the fs, expose the snapshot instead of the r/w file
system. Your mileage may vary depending on the impleme
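Sketched out with a hypothetical dataset name, the two approaches look like:

```shell
# Approach 1: keep the filesystem read-only except during updates.
zfs set readonly=on tank/export
zfs set readonly=off tank/export   # cycle back to rw...
# ... write the updates ...
zfs set readonly=on tank/export    # ... then lock it down again

# Approach 2: write to the live fs, but expose an immutable snapshot.
zfs snapshot tank/export@published
# clients access /tank/export/.zfs/snapshot/published instead of the live tree
```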
::spa -ev
::arc
Kind regards,
Jason
Replace it. Resilvering should not be as painful if all your disks are
functioning normally.
HyperDrive5 = ACard ANS9010
I have personally been wanting to try one of these for some time as a
ZIL device.
On 12/29/2010 06:35 PM, Kevin Walker wrote:
You do seem to misunderstand ZIL.
ZIL is quite simply write cache and using a short stroked rotating
drive is never going to provide a pe
oller and a CSE-SAS-833TQ SAS backplane.
Have run ZFS with both Solaris and FreeBSD without a problem for a
couple years now. Had one drive go bad, but it was caught early by
running periodic scrubs.
--
Jason Fortezzo
forte...@mechanicalism.net
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt wrote:
> On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
> wrote:
>
>>On Jan 6, 2009, at 14:21, Rob wrote:
>>
>>> Obviously ZFS is ideal for large databases served out via
>>> application level or web servers. But what other practical ways are
>>> there to
On Fri, Feb 20, 2009 at 2:59 PM, Darin Perusich
wrote:
> Hello All,
>
> I'm in the process of migrating a file server from Solaris 9, where
> we're making extensive use of POSIX-ACLs, to ZFS and I have a question
> that I'm hoping someone can clear up for me. I'm using ufsrestore to
> restore the
On Mon, Mar 9, 2009 at 5:31 PM, Jan Hlodan wrote:
> Hi Tomas,
>
> thanks for the answer.
> Unfortunately, it didn't help much.
> However I can mount all file systems, but system is broken - desktop
> wont come up.
>
> "Could not update ICEauthority file /.ICEauthority
> There is a problem with the
On Tue, Jun 30, 2009 at 1:36 PM, Erik Trimble wrote:
> Bob Friesenhahn wrote:
>>
>> On Tue, 30 Jun 2009, Neal Pollack wrote:
>>
>>> Actually, they do quite a bit more than that. They create jobs, generate
>>> revenue for battery manufacturers, and tech's that change batteries and do
>>> PM maintena
Mark J Musante wrote:
On Tue, 30 Jun 2009, John Hoogerdijk wrote:
i've setup a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
device. to see how well it works, i ran bonnie++, but never saw any
io's on the log device (using iostat -nxce) . pool status is good -
no issues or errors.
John Hoogerdijk wrote:
so i guess there is some porting to do - no O_DIRECT in solaris...
anyone have bonnie++ 1.03e ported already?
For your purposes, couldn't you replace O_DIRECT with O_SYNC as a hack?
If you're trying to benchmark the log device, the important thing is to
generate synch
This is an odd question, to be certain, but I need to find out what size a 1.5
TB drive is to help me create a sparse/fake array.
Basically, if I could have someone do a dd if=<1.5 TB disk> of= and
then post the ls -l size of that file, it would greatly assist me.
Here's what I'm doing:
I hav
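For the sparse/fake-array part, once the exact byte count comes back, a sparse backing file can be created without actually writing 1.5 TB of zeros. The size below is a guess at a typical 1.5 TB drive, not a measured value:

```shell
size=1500301910016   # hypothetical byte count of a 1.5 TB drive

# mkfile -n records the size without allocating the blocks (Solaris)
mkfile -n ${size} /tank/fake/disk1

# equivalent with dd: seek to the last byte and write a single zero
dd if=/dev/zero of=/tank/fake/disk2 bs=1 count=1 seek=$((size - 1))
```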
As you can add multiple vdevs to a pool, my suggestion would be to do several
smaller raidz1 or raidz2 vdevs in the pool.
With your setup - assuming 2 HBAs @ 24 drives each your setup would have
yielded 20 drives usable storage (about) (assuming raidz2 with 2 spares on each
HBA) and then mirror
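As a sketch of that layout (device names invented), the pool would be built from several raidz2 vdevs plus spares, and extended later by adding another vdev:

```shell
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
  spare c1t7d0 c2t7d0

# later, grow the pool by adding another raidz2 vdev
zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
```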
Thanks for the reply!
The reason I'm not waiting until I have the disks is mostly because it will
take me several months to get the funds together and in the meantime, I need
the extra space 1 or 2 drives gets me. Since the sparse files will only take
up the space in use, if I've migrated 2 of
@now > /datapool/data/Temp/test.zfs
What am I doing wrong? Why won't the whole thing copy? I've tried an
incremental from origin to @now, but it still doesn't work right...
Thanks for all your help.
-Jason
It does seem to come up regularly... perhaps someone with access could
throw up a page under the ZFS community with the conclusions (and
periodic updates as appropriate)..
On Fri, Sep 25, 2009 at 3:32 AM, Erik Trimble wrote:
> Nathan wrote:
>>
>> While I am about to embark on building a home NAS
s. Fewer than 50% of the
total
were available,
Jan 23 18:51:38 newponit so panic to ensure data integrity.
Regards,
Jason
I tried it and it worked great. Even cloned my boot environment, and BFU'd the
clone and it seemed to work (minus a few unrelated annoyances I haven't tracked
down yet). I'm quite excited about the possibilities :)
I am wondering though, is it possible to skip the creation of the pool and have
I've had at least some success (tried it once so far) doing a BFU to cloned
filesystem from a b62 zfs root system, I could probably document that if
there is interest.
I have not tried taking a new ISO and installing the new packages ontop of a
cloned fileystem though.
On 5/31/07, Lori Alt <[EMA
Just playing around a bit w/ zfs + zfs root (no particularly good
reason other than to just mess around a bit), and I hit an issue that
I suspect is simple to fix, but I cannot seem to figure out what that
is.
I wanted to try (essentially) doing a very manual install to an empty
zfs filesystem.
So
On 9/13/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 10:54:41AM -0600, Lori Alt wrote:
> > In-place upgrade of zfs datasets is not supported and probably
> > never will be (LiveUpgrade will be the way to go with zfs because
> > the cloning features of zfs make it a natu
On 9/25/07, Gregory Shaw <[EMAIL PROTECTED]> wrote:
>
>
>
> On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:
>
> Dale Ghent wrote:
> On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
> The problem with this is that wrong information is much worse than no
> information, there is no way to automat
Apparently with zfs boot, if the zpool is a version grub doesn't
recognize, it merely ignores any zfs entries in menu.lst, and
apparently instead boots the first entry it thinks it can boot. I ran
into this myself due to some boneheaded mistakes while doing a very
manual zfs / install at the summi
hat the entire sequence of events is here,
> so I'm not sure if there's a bug. Perhaps you could elaborate.
>
> Lori
>
> Jason King wrote:
> > Apparently with zfs boot, if the zpool is a version grub doesn't
> > recognize, it merely ignores any zfs en
I am using ZFS on FreeBSD 7.0_beta3. This is the first time I have used ZFS,
and I have run into something that I am not sure is normal, but am very
concerned about.
SYSTEM INFO:
hp 320s (storage array)
12 disks (750GB each)
2GB RAM
1GB flash drive (running the OS)
When I take a disk
Edit the kernel$ line and add '-k' at the end. That should drop you
into the kernel debugger after the panic (typing '$q' will exit the
debugger, and resume whatever it was doing -- in this case likely
rebooting).
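For reference, the edited menu.lst entry would look something like this (paths are the stock OpenSolaris defaults; the findroot line will differ per system):

```
title OpenSolaris (boot with kmdb)
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -k
module$ /platform/i86pc/$ISADIR/boot_archive
```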
On Dec 18, 2007 6:26 PM, Michael Hale <[EMAIL PROTECTED]> wrote:
>
>
> Begin forwa
On Thu, May 8, 2008 at 8:59 PM, EchoB <[EMAIL PROTECTED]> wrote:
> I cannot recall if it was this (-discuss) or (-code) but a post a few
> months ago caught my attention.
> In it someone detailed having worked out the math and algorithms for a
> flexible expansion scheme for ZFS. Clearly this is
On Wed, May 14, 2008 at 6:42 PM, Dave Koelmeyer
<[EMAIL PROTECTED]> wrote:
> Hi All, first time caller here, so please be gentle...
>
> I'm on OpenSolaris 2008.05, and following the really useful guide here to
> create a CIFs share in domain mode:
>
> http://blogs.sun.com/timthomas/entry/configuri
On Tue, Jul 1, 2008 at 8:10 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Mike Gerdts wrote:
>>>
>>> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]>
>>> wrote:
Instead we should take it comple