On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins i...@ianshome.com wrote:
I do have to suffer a slow, glitchy WAN to a remote server and rather than
send stream files, I broke the data *on the remote server* into a more
fine grained set of filesystems than I would do normally. In this case, I
On Fri, Oct 5, 2012 at 1:28 PM, Ian Collins i...@ianshome.com wrote:
I do have a lot of what would appear to be unnecessary filesystems, but
after losing the WAN 3 days into a large transfer, a change of tactic was
required!
I've recently (last year or so) gone the other way, and have made
2012/1/3 Christopher Hearn christopher.he...@cchmc.org:
On Jan 3, 2012, at 9:35 AM, Svavar Örn Eysteinsson wrote:
Hello.
I'm planning to replace my old Apple XRAID and XSAN Filesystem (1.4.2) Fibre
environment.
This setup only hosted AFP and CIFS for a large advertising agency.
Now that
On Tue, Dec 20, 2011 at 2:11 PM, Gregg Wonderly gregg...@gmail.com wrote:
On 12/19/2011 8:51 PM, Frank Cusack wrote:
If you don't detach the smaller drive, the pool size won't increase. Even
if the remaining smaller drive fails, that doesn't mean you have to detach
it. So yes, the pool
Of course I meant 'zpool *' not 'zfs *' below.
On Tue, Dec 20, 2011 at 4:27 PM, Frank Cusack fr...@linetwo.net wrote:
On Tue, Dec 20, 2011 at 2:11 PM, Gregg Wonderly gregg...@gmail.com wrote:
On 12/19/2011 8:51 PM, Frank Cusack wrote:
If you don't detach the smaller drive, the pool size
If you don't detach the smaller drive, the pool size won't increase. Even
if the remaining smaller drive fails, that doesn't mean you have to detach
it. So yes, the pool size might increase, but it won't be unexpected. It
will be because you detached all smaller drives. Also, even if a
You can just do fdisk to create a single large partition. The attached
mirror doesn't have to be the same size as the first component.
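As a sketch of what that looks like in practice (pool and device names hypothetical): the attached disk may be larger, and the mirror simply stays at the size of the smaller side until every smaller device is gone.

```shell
# Attach a (possibly larger) second disk to an existing single-disk pool;
# the resulting mirror keeps the capacity of the smaller member.
zpool attach tank c1t0d0 c1t1d0

# The extra space only becomes usable after all smaller devices are
# detached (and, on newer pools, autoexpand is enabled):
zpool set autoexpand=on tank
zpool detach tank c1t0d0
```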
On Thu, Dec 15, 2011 at 11:27 PM, Gregg Wonderly gregg...@gmail.com wrote:
Cindy, will it ever be possible to just have attach mirror the surfaces,
including
Yes, except if your root pool is on a USB stick or removable media.
On Thu, Dec 15, 2011 at 3:20 PM, Anonymous Remailer (austria)
mixmas...@remailer.privacy.at wrote:
On Solaris 10, if I install using ZFS root on only one drive, is there a way
to add another drive as a mirror later? Sorry if
It can still be done for USB, but you have to boot from alternate media to
attach the mirror.
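For the non-USB case on Solaris 10 x86, the usual sequence is roughly the following (device names hypothetical; the second disk needs a Solaris label with a big-enough s0):

```shell
# Copy the slice layout of the existing root disk to the new disk
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

# Attach the new slice as a mirror of the root pool
zpool attach rpool c0t0d0s0 c0t1d0s0

# Install GRUB on the new disk so either half can boot on its own
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```

Let the resilver finish (zpool status rpool) before treating the mirror as redundant.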
On Thu, Dec 15, 2011 at 3:41 PM, Frank Cusack fr...@linetwo.net wrote:
Yes, except if your root pool is on a USB stick or removable media.
On Thu, Dec 15, 2011 at 3:20 PM, Anonymous Remailer
Corruption? Or just loss?
On Sun, Dec 11, 2011 at 1:27 PM, Matt Breitbach
matth...@flash.shanje.com wrote:
I would say that it's highly recommended. If you have a pool that needs
to be imported and it has a faulted, unmirrored log device, you risk data
corruption.
-Matt Breitbach
I haven't been able to get this working. To keep it simpler, next I am
going to try usbcopy of the live USB image in the VM, and see if I can boot
real hardware from the resultant live USB stick.
On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha l...@fajar.net wrote:
On Tue, Nov 22, 2011 at
On Tue, Nov 29, 2011 at 10:39 PM, Fajar A. Nugraha l...@fajar.net wrote:
On Wed, Nov 30, 2011 at 1:25 PM, Frank Cusack fr...@linetwo.net wrote:
I haven't been able to get this working. To keep it simpler, next I am
going to try usbcopy of the live USB image in the VM, and see if I can
boot
I have a Sun machine running Solaris 10, and a Vbox instance running
Solaris 11 11/11. The vbox machine has a virtual disk pointing to
/dev/disk1 (rawdisk), seen in sol11 as c0t2.
If I create a zpool on the Sun s10 machine, on a USB stick, I can take that
USB stick and access it through the vbox
On Mon, Nov 21, 2011 at 9:04 PM, Fajar A. Nugraha w...@fajar.net wrote:
So basically the question is if you install solaris on one machine,
can you move the disk (in this case the usb stick) to another machine
and boot it there, right?
Yes, but one of the machines is a virtual machine.
The
On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha w...@fajar.net wrote:
On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack fr...@linetwo.net wrote:
If we ignore the vbox aspect of it, and assume real hardware with real
devices, of course you can install on one x86 hardware and move the
drive
On Mon, Nov 21, 2011 at 9:59 PM, Fajar A. Nugraha w...@fajar.net wrote:
On Tue, Nov 22, 2011 at 12:53 PM, Frank Cusack fr...@linetwo.net wrote:
On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha w...@fajar.net
wrote:
On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack fr...@linetwo.net
wrote
On Mon, Nov 21, 2011 at 10:06 PM, Frank Cusack fr...@linetwo.net wrote:
grub does need to have an idea of the device path, maybe in vbox it's seen
as the 3rd disk (c0t2), so the boot device name written to grub.conf is
disk3 (whatever the terminology for that is in grub-speak), but when I
On 12-10-11 02:27, Richard Elling wrote:
On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote:
Honestly? I don't remember. Might be a leftover setting from a year
ago. By now, I figured out I need to update the boot archive in
order for the new setting to take effect at boot time, which
for a storage server. Can
you explain your reasoning?
Honestly? I don't remember. Might be a leftover setting from a year
ago. By now, I figured out I need to update the boot archive in
order for the new setting to take effect at boot time, which apparently
involves booting in safe mode.
--
Frank Van Damme
2011/10/8 James Litchfield jim.litchfi...@oracle.com:
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000).
Otherwise the setting is ignored. The value is in bytes, not pages.
Well, I've now set it to 0x800 and it stubbornly stays at 2048 MB...
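For reference, a fragment that should take effect (assuming the goal is a 1 GB minimum; the value is in bytes, so 1 GB is 0x40000000):

```
# /etc/system fragment: ARC minimum of 1 GB. Values under 64 MB are
# ignored, and the unit is bytes, not pages. Takes effect after the
# boot archive is updated and the system rebooted.
set zfs:zfs_arc_min = 0x40000000
```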
--
Frank Van
Hello,
quick and stupid question: I'm racking my brain over how to tune
zfs_arc_min on a running system. There must be some magic word to pipe
into mdb -kw but I forgot it. I tried /etc/system but it's still at the
old value after reboot:
ZFS Tunables (/etc/system):
set zfs:zfs_arc_min
, possibly?
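The magic word being asked for is probably mdb's write format; a sketch on a live system (value hypothetical, 1 GB here; /Z writes a 64-bit value):

```shell
# Read the current value (unsigned 64-bit, printed in hex)
echo 'zfs_arc_min/J' | mdb -k

# Write a new value into the running kernel, e.g. 1 GB
echo 'zfs_arc_min/Z 0x40000000' | mdb -kw
```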
--
Frank Van Damme
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
___
zfs-discuss mailing list
zfs-discuss
On 26-07-11 12:56, Fred Liu wrote:
Any alternatives, if you don't mind? ;-)
VPNs, openssl piped over netcat, a password-protected zip file,... ;)
ssh would be the most practical, probably.
--
to providing cheap storage).
--
Frank Van Damme
On 15-07-11 04:27, Edward Ned Harvey wrote:
Is anyone from Oracle reading this? I understand if you can't say what
you're working on and stuff like that. But I am merely hopeful this work
isn't going into a black hole...
Anyway. Thanks for listening (I hope.) ttyl
If they aren't,
On 12-07-11 13:40, Jim Klimov wrote:
Even if I batch background RM's so a hundred processes hang
and then they all at once complete in a minute or two.
Hmmm. I only run one rm process at a time. You think running more
processes at the same time would be faster?
On 14-07-11 12:28, Jim Klimov wrote:
Yes, quite often it seems so.
Whenever my slow dcpool decides to accept a write,
it processes a hundred pending deletions instead of one ;)
Even so, it took quite a few pool or iscsi hangs and then
reboots of both server and client, and about a week
On 15-06-11 05:56, Richard Elling wrote:
You can even have applications like databases make snapshots when
they want.
Makes me think of a backup utility called mylvmbackup, which is written
with Linux in mind - basically it locks mysql tables, takes an LVM
snapshot and releases the lock (and
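The same lock/snapshot/release pattern maps directly onto ZFS; a minimal sketch, assuming the data files live on a dataset tank/mysql (all names hypothetical):

```shell
# Hold FLUSH TABLES WITH READ LOCK while the snapshot is taken by
# keeping a single mysql client session open; 'system' shells out
# from the client, so the lock is still held during the snapshot.
mysql -u root <<'EOF'
FLUSH TABLES WITH READ LOCK;
system zfs snapshot tank/mysql@backup
UNLOCK TABLES;
EOF
```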
On 15-06-11 14:30, Simon Walter wrote:
Anyone know how Google Docs does it?
Anyone from Google on the list? :-)
Seriously, this is the kind of feature to be found in Serious CMS
applications, like, as already mentioned, Alfresco.
2011/6/10 Tim Cook t...@cook.ms:
While your memory may be sufficient, that cpu is sorely lacking. Is it even
64bit? There's a reason intel couldn't give those things away in the early
2000s and amd was eating their lunch.
A Pentium 4 is 32-bit.
--
Frank Van Damme
2011/6/1 lance wilson lance.wil...@gmail.com:
The problem is that nfs clients that connect to my solaris 11 express server
are not inheriting the acl's that are set for the share. They create files
that don't have any acl assigned to them, just the normal unix file
permissions. Can someone
a
few days, not withstanding the fact it's supposed to be a hard limit.
I call for an arc_data_max setting :)
--
Frank Van Damme
On 26-05-11 13:38, Edward Ned Harvey wrote:
Perhaps a property could be
set, which would store the DDT exclusively on that device.
Oh yes please, let me put my DDT on an SSD.
But what if you lose it (the vdev), would there be a way to reconstruct
the DDT (which you need to be able to delete
Seagates, refurbished. I'd scrub
once a week, that'd probably suck on raidz2, too?
Thanks.
Sequential? Let's suppose no spares.
4 mirrors of 2 = sustained bandwidth of 4 disks
raidz2 with 8 disks = sustained bandwidth of 6 disks
So :)
--
Frank Van Damme
. If you dedup
once, and later disable dedup, the system won't bother checking to see if
there are duplicate blocks anymore. So the DDT won't need to be in
arc+l2arc. I should say shouldn't.
Except when deleting deduped blocks.
--
Frank Van Damme
On 24-05-11 22:58, LaoTsao wrote:
With the various forks of open-source projects,
e.g. ZFS, OpenSolaris, OpenIndiana, etc., they are all different.
There is no guarantee they will be compatible.
I hope at least they'll try. Just in case I want to import/export zpools
between Nexenta and OpenIndiana?
On 25-05-11 14:27, joerg.moellenk...@sun.com wrote:
Well, first of all, ZFS development has no standards body, and in the end
everything has to be measured by compatibility with the Oracle ZFS
implementation
Why? Given that ZFS is Solaris ZFS just as well as Nexenta ZFS just as
well as illumos ZFS, by
On 20-05-11 01:17, Chris Forgeron wrote:
I ended up switching back to FreeBSD after using Solaris for some time
because I was getting tired of weird pool corruptions and the like.
Did you ever manage to recover the data you blogged about on Sunday,
February 6, 2011?
On 03-05-11 17:55, Brandon High wrote:
-H: Hard links
If you're going to do this for 2 TB of data, remember to expand your swap
space first (or have tons of memory). Rsync will need it to store every
inode number in the directory.
On 10-05-11 06:56, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
BTW, here's how to tune it:
echo arc_meta_limit/Z 0x3000 | sudo mdb -kw
echo ::arc | sudo mdb -k | grep meta_limit
On 09-05-11 14:36, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
So now I'll change meta_max and
see if it helps...
Oh, you know what? Never mind.
I just looked at the source, and it seems
On 09-05-11 15:42, Edward Ned Harvey wrote:
in my previous
post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
I have the same thing. But as I sit here and run more and more extensive
tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it
only
On 06-05-11 05:44, Richard Elling wrote:
As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
decreases. With one notable exception, destroying a dataset or snapshot
requires
the DDT entries for the destroyed blocks to be updated. This is why people can
go for
idea, or would it even help to set primarycache=metadata too, to not
let RAM fill up with file data?
P.S. the system is: NexentaOS_134f (I'm looking into newer OpenSolaris
variants with bugs fixed/better performance, too).
--
Frank Van Damme
2011/4/26 achim...@googlemail.com achim...@googlemail.com:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Hi!
We are setting up a new file server on an OpenIndiana box (oi_148). The
pool is running version 28, so the aclmode option is gone. The server
has to serve files to Linux, OSX and
2011/1/27 Ryan John john.r...@bsse.ethz.ch:
-Original Message-
From: Frank Lahm [mailto:frankl...@googlemail.com]
Sent: 25 January 2011 14:50
To: Ryan John
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Changed ACL behavior in snv_151 ?
John,
welcome onboard!
2011
2011/1/27 Garrett D'Amore garr...@nexenta.com:
We are working on a change to illumos (and NexentaStor) to revive
acl_mode... lots and lots of people have had very bad experiences as a
result of that particular change.
We had to put a chmod() wrapper into our app (Netatalk) to work around
that.
John,
welcome onboard!
2011/1/25 Ryan John john.r...@bsse.ethz.ch:
I'm sharing file systems using SMB and NFS, and since I've upgraded to
snv_151, when I do a chmod from an NFS client, I lose all the NFSv4 ACLs.
http://opensolaris.org/jive/thread.jspa?threadID=134162
I'd summarize as
2010/12/24 Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com:
From: Frank Lahm [mailto:frankl...@googlemail.com]
With Netatalk for AFP he _is_ running a database: any AFP server needs
to maintain a consistent mapping between _not reused_ catalog node ids
(CNIDs
2010/12/24 Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
Now, I am wondering if using a mirror of such 15k SAS drives would be a
good-enough fit for a
On 12/16/10 10:24 AM -0500 Linder, Doug wrote:
Tim Cook wrote:
Claiming you'd start paying for Solaris if they gave you ZFS for free
in Linux is absolutely ridiculous.
*Start* paying? You clearly have NO idea what it costs to run Solaris in
a production environment with support.
In my
artificial restrictions. Anything that advances that, I'm for.
CDDL is close to that, much closer than GPL.
-frank
On 12/16/10 11:32 AM +0100 Joerg Schilling wrote:
Note that while there exist
numerous papers from lawyers that consistently explain which parts of
the GPLv2 violate US law and thus are void,
Can you elaborate?
, there's really not much point in debating it.
And if they don't, it will be Sad, both in terms of useful code not
being available to a wide community to review and amend, as in terms
of Oracle not really getting the point about open source development.
--
Frank Van Damme
figures for
the gain it gives
- http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913
--
Frank Van Damme
unresponsive for a potentially very long time (a phenomenon known as
bricking).
As I recall, a lot of fixes came in the 140-series kernels to fix this.
Anything 145 and above should be OK.
I'm on 134f. No wonder.
--
Frank Van Damme
?
--
Frank Van Damme
Thank you all for your help.
Have a nice day!
I am a newbie on Solaris.
We recently purchased a Sun SPARC M3000 server. It comes with 2 identical hard
drives. I want to set up a RAID 1. After searching on Google, I found that the
hardware RAID was not working with the M3000. So I am here to look for help on
how to set up ZFS to use RAID 1.
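A hedged sketch of the usual answer on SPARC (device names hypothetical): install with a ZFS root on the first disk, then attach the second and make it bootable.

```shell
# Give the second disk the same slice layout as the first
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

# Attach it to the root pool, turning the pool into a two-way mirror
zpool attach rpool c0t0d0s0 c0t1d0s0

# On SPARC, install the ZFS boot block on the new disk
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0

# Wait for the resilver to complete before relying on the mirror
zpool status rpool
```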
2010/10/25 Cindy Swearingen cindy.swearin...@oracle.com:
You can't simulate the aclmode-less world in the upcoming release
by setting aclmode to discard in b134.
The reason you see your aclmode discarded is that aclmode applies
to both chmod operations and file/dir create operations.
Yes,
Hi list,
while preparing for the changed ACL/mode_t mapping semantics coming
with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are
not inherited when aclmode is set to passthrough for the filesystem.
This very much puzzles me. Example:
$ uname -a
SunOS os 5.11 snv_134 i86pc i386
of $$$?
Cheers -- Frank
and it turned out to be the CPU itself getting
the actual checksum wrong /only on one particular file/, and even then only
when the ambient temperature was high. So ZFS is good at ferreting out
obscure hardware problems :-).
Cheers -- Frank
/09/10 17:00, Frank Middleton wrote:
This is a hypothetical question that could actually happen:
Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0
and for some reason c0t0d0s0 goes off line, but comes back
on line after a shutdown. The primary boot disk would then
be c0t0d0s0 which would have
On 9/8/10 9:32 AM -0400 Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
The 9/10 Update appears to have been released. Some of the more
noticeable
ZFS stuff that made it in:
More at:
of 13,027,407 entries, meaning
it's 6,670,032,384 bytes big. So suppose our data grows by a factor of
12, it will take 80 GB. So, it would be best to buy a 128 GB SSD as
L2ARC cache. Correct?
Thanks for enlightening me,
--
Frank Van Damme
On 8/19/10 10:48 AM +0200 Joerg Schilling wrote:
1) The OpenSource definition
http://www.opensource.org/docs/definition.php section 9 makes it very
clear that an OSS license must not restrict other software and must not
prevent bundling different works under different licenses on one medium.
On 8/18/10 3:58 PM -0400 Linder, Doug wrote:
Erik Trimble wrote:
That said, stability vs new features has NOTHING to do with the OSS
development model. It has everything to do with the RELEASE model.
[...]
All that said, using the OSS model for actual *development* of an
Operating System is
On 8/17/10 9:14 AM -0400 Ross Walker wrote:
On Aug 16, 2010, at 11:17 PM, Frank Cusack frank+lists/z...@linetwo.net
wrote:
On 8/16/10 9:57 AM -0400 Ross Walker wrote:
No, the only real issue is the license and I highly doubt Oracle will
re-release ZFS under GPL to dilute its competitive
On 8/17/10 3:31 PM +0900 BM wrote:
On Tue, Aug 17, 2010 at 5:11 AM, Andrej Podzimek and...@podzimek.org
wrote:
Disclaimer: I use Reiser4
A Killer FS™. :-)
LOL
On 8/16/10 9:57 AM -0400 Ross Walker wrote:
No, the only real issue is the license and I highly doubt Oracle will
re-release ZFS under GPL to dilute its competitive advantage.
You're saying Oracle wants to keep zfs out of Linux?
On 8/14/10 10:18 PM -0700 Richard Elling wrote:
On Aug 13, 2010, at 7:06 PM, Frank Cusack wrote:
Interesting POV, and I agree. Most of the many distributions of
OpenSolaris had very little value-add. Nexenta was the most interesting
and why should Oracle enable them to build a business
On 8/13/10 8:56 PM -0600 Eric D. Mudama wrote:
On Fri, Aug 13 at 19:06, Frank Cusack wrote:
Interesting POV, and I agree. Most of the many distributions of
OpenSolaris had very little value-add. Nexenta was the most interesting
and why should Oracle enable them to build a business
On 8/13/10 11:21 PM -0400 Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Cusack
I haven't met anyone who uses Solaris because of OpenSolaris.
What rock do you live under?
Very few people would bother paying
On 8/14/10 7:58 AM -0500 Russ Price wrote:
My guess is that the theoretical Solaris Express 11 will be crippled by
any or all of: missing features, artificial limits on functionality, or a
restrictive license. I consider the latter most likely, much like the OTN
On 8/14/10 3:15 PM -0400 Dave
On 8/15/10 12:39 AM +0100 Kevin Walker wrote:
and Oracle are very, very greedy...
Let's not get all soft about OpenSolaris now ... all public companies
are very, very greedy. They exist solely to make money. It's awesome
that they make things that are useful, but it's just a way to meet
the
On 8/13/10 3:39 PM -0500 Tim Cook wrote:
Quite frankly, I think there will be an even faster decline of Solaris
installed base after this move. I know I have no interest in pushing it
anywhere after this mess.
I haven't met anyone who uses Solaris because of OpenSolaris.
-corp does or
doesn't do...
Interesting POV, and I agree. Most of the many distributions of
OpenSolaris had very little value-add. Nexenta was the most interesting
and why should Oracle enable them to build a business at their expense?
-frank
On 07/19/10 07:26, Andrej Podzimek wrote:
I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven't had a
serious issue with any of them so far.
Moblin/Meego ships with btrfs by default. COW file system on a
cell phone :-). Unsurprisingly for a read-mostly file system it
seems pretty
this.
Making the new drive bootable is the real problem since it will probably
not have the same identifier. For sure you'd have to edit grub on the
new drive and perhaps run grub interactively to install a boot loader.
Hope this helps -- Frank
On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:
It would be nice to have applications request to be notified
before a snapshot is taken, and when those that have requested
notification have acknowledged that they're ready, the snapshot
would be taken; and then another notification sent that it was
On 7/16/10 3:07 PM -0500 David Dyer-Bennet wrote:
On Fri, July 16, 2010 14:07, Frank Cusack wrote:
On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:
It would be nice to have applications request to be notified
before a snapshot is taken, and when those that have requested
notification have
On 7/15/10 9:49 AM +0900 BM wrote:
On Thu, Jul 15, 2010 at 5:57 AM, Paul B. Henson hen...@acm.org wrote:
ZFS is great. It's pretty much the only reason we're running Solaris.
Well, if this is the the only reason, then run FreeBSD instead. I run
Solaris because of the kernel architecture and
-- Frank
On 6/26/10 9:47 AM -0400 David Magda wrote:
Crikey. Who's the genius who thinks of these URLs?
SEOs
, shut down as normal
(ie, don't tell zfs you are about to do anything different) and then
just boot with the one disk, now in degraded state but otherwise ok.
Like you, I learned this the hard way!
-frank
On 6/18/10 9:46 PM -0700 Cott Lang wrote:
I split a mirror to reconfigure and recopy it. I detached one drive,
reconfigured it ... all after unplugging the remaining pool drive during
a shutdown to verify no accidents could happen.
By detach, do you mean that you ran 'zpool detach'?
Should naming the root pool something unique (rpool-nodename) be a
best practice?
--
Frank Contrepois
Coblan srl
On 6/10/10 11:07 PM -0700 Dave Koelmeyer wrote:
I trimmed, and then got complained at by a mailing list user that the
context of what I was replying to was missing. Can't win :P
There's a big difference between trim and remove.
The worst is when people quote 3-4 paragraphs, respond inline to
On 6/4/10 11:46 AM -0700 Brandon High wrote:
Be aware that Solaris on x86 has two types of partitions. There are
fdisk partitions (c0t0d0p1, etc) which is what gparted, windows and
other tools will see. There are also Solaris partitions or slices
(c0t0d0s0). You can create or edit these with the
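The truncated sentence presumably ends with format(1M); for illustration (disk name hypothetical):

```shell
# fdisk partitions (p0-p4) are what gparted, Windows, etc. see:
fdisk /dev/rdsk/c0t0d0p0        # edit the fdisk partition table

# Solaris slices (s0-s7) live inside the Solaris fdisk partition:
format                          # pick the disk, then use the 'partition' menu
prtvtoc /dev/rdsk/c0t0d0s2      # print the current slice table
```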
On 6/2/10 11:10 PM -0400 Roman Naumenko wrote:
Well, I didn't explain it very clearly. I meant that the size of a raidz
array can't be changed.
For sure zpool add can do the job with a pool. Not with a raidz
configuration.
Well in that case it's invalid to compare against Netapp since they
can't do
On 6/3/10 8:45 AM +0200 Juergen Nickelsen wrote:
Richard Elling rich...@nexenta.com writes:
And some time before I had suggested to a my buddy zfs for his new
home storage server, but he turned it down since there is no
expansion available for a pool.
Heck, let him buy a NetApp :-)
based on that (alone).
-frank
says this is a limitation of the
installer.
-frank
, although it doesn't seem very
important now.
Regards -- Frank
causes zfs to forget everything it knows about the device
being detached.
-frank
Many many moons ago, I submitted a CR into bugs about a
highly reproducible panic that occurs if you try to re-share
a lofi mounted image. That CR has AFAIK long since
disappeared - I even forget what it was called.
This server is used for doing network installs. Let's say
you have a 64 bit iso
expansive
interpretation that covered just about any form of software distribution.
I'm no supporter of Oracle's business practices, but I am 90% sure that
Sleepycat changed their license before the Oracle acquisition. Yes,
it was particularly onerous before they went to standard GPL.
-frank
be worth trying again
when we eventually get to go past b134.
HTH -- Frank