few lines above, another test (for a valid bootfs name) does get
bypassed in the case of clearing the property.
Don't know if that alone would fix it.
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracle Pre-Sales
Guillemont Park | M
etween 30 minutes and 4
hours into a scrub, and with it scrubs run successfully.
-Andrew
>>> Demian Phillips 5/23/2010 8:01 AM >>>
On Sat, May 22, 2010 at 11:33 AM, Bob Friesenhahn
wrote:
> On Fri, 21 May 2010, Demian Phillips wrote:
>
>> For years I have been run
if NV ZIL. Trouble is that no other operating systems or
filesystems work this well with such relatively tiny amounts of NV
storage, so such a hardware solution is very ZFS-specific.
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracl
up on the ARC (memory) anyway. If you don't have enough
RAM for this to help, then you could add more memory, and/or an SSD as a
L2ARC device ("cache" device in zpool command line terms).
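For reference, a minimal sketch of the commands involved (the pool and device names here are hypothetical):
zpool add tank cache c1t2d0     # add an SSD as an L2ARC ("cache") device
zpool iostat -v tank            # the cache device then shows up in the pool layout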
--
Andrew Gabriel
Support for thin reclamation depends on the SCSI "WRITE SAME" command; see this
draft of a document from T10:
http://www.t10.org/ftp/t10/document.05/05-270r0.pdf.
I spent some time searching the source code for support for "WRITE SAME", but I
wasn't able to find much. I assume that if it
900', but it
still said the dataset did not exist.
Finally I exported the pool, and after importing it, the snapshot was
gone, and I could receive the snapshot normally.
Is there a way to clear a "partial" snapshot without an export/import
cycle?
Thanks,
Andrew
[1]
http://mail.o
The correct URL is:
http://code.google.com/p/maczfs/
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Rich Teer
Sent: Sunday, April 25, 2010 7:11 PM
To: Alex Blewitt
Cc: ZFS discuss
Subject: Re: [zfs-discuss] Mac OS X c
3 - community edition
Andrew
On Apr 18, 2010, at 11:15 PM, Richard Elling wrote:
> Nexenta version 2 or 3?
> -- richard
>
> On Apr 18, 2010, at 7:13 PM, Andrew Kener wrote:
>
>> Hullo All:
>>
>> I'm having a problem importing a ZFS pool. When I first
this or something similar before. Thanks in advance for any
suggestions.
Andrew Kener
to another RFE/BUG and the
pause/resume requirement got lost. I'll see about reinstating it.
--
Andrew
Hi all,
Great news - by attaching an identical size RDM to the server and then grabbing
the first 128K using the command you specified, Ross
dd if=/dev/rdsk/c8t4d0p0 of=~/disk.out bs=512 count=256
we then proceeded to inject this into the faulted RDM and lo and behold the
volume recovered!
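For completeness, the restore direction is presumably just the capture command reversed, something along these lines (the target device name is a placeholder, and this is only ever safe against the already-faulted volume):
dd if=~/disk.out of=/dev/rdsk/cXtYdZp0 bs=512 count=256    # write the saved 128K label area back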
dd
Hi again,
Out of interest, could this problem have been avoided if the ZFS configuration
didn't rely on a single disk, i.e. RAIDZ etc.?
Thanks
er to discover this before you reduce the pool
redundancy/resilience, whilst it's still fixable.
--
Andrew
eed barely SATA
controllers at all by today's standards, as I think they always pretend to
be PATA to the host system.
--
Andrew
0 and /dev/dsk/c8t4d0 but neither of them are valid.
Kind Regards
Andrew
nfortunately I can't rely on connecting using an
iSCSI initiator within the OS to attach the volume, so I guess I have to dive
straight into checking the MBR at this stage. I'll no doubt need some help here
so please forgive me if I fall at the first hurdle.
Kind Regards
Andre
Ok,
The fault appears to have occurred regardless of the attempts to move to
vSphere as we've now moved the host back to ESX 3.5 from whence it came and the
problem still exists.
Looks to me like the fault occurred as a result of a reboot.
Any help and advice would be greatly appreciated.
-
that fixed the problem, but
unfortunately, typing zpool status and zpool import finds nothing even though
format and format -e display the 1TB volume.
Are there any known problems or ways to reimport a supposed lost/confused zpool
on a new host?
Thanks
Andrew
it's because I left the NFSv4 domain setting at the default.
(I'm just using NFSv3, but trying to come up with an explanation. In
any case, using the FQDN works.)
-Andrew
:
a) spread out the deleting of the snapshots, and
b) create more snapshots more often (and conversely delete more
snapshots, more often), so each one contains less accumulated space to
be freed off.
--
Andrew
c6t5d0    -      -      4     18   294K   126K
c7t2d0    -      -      4     18   282K   124K
c0t6d0    -      -      7     19   446K   124K
c5t7d0    -      -      7     21   452K   122K
-         -      -      -      -      -
Jesse Reynolds wrote:
Does ZFS store a log file of all operations applied to it? It feels like someone has gained access and run 'zfs destroy mailtmp' to me, but then again it could just be my own ineptitude.
Yes...
zpool history rpool
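If it helps, the long and internal formats add more detail (same pool name as in the example above):
zpool history -l rpool    # long format: also records the user, hostname and zone for each command
zpool history -i rpool    # include internally logged events as well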
--
Andr
ould be a useful ZFS dataset parameter.
--
Andrew
Darren J Moffat wrote:
On 12/02/2010 09:55, Andrew Gabriel wrote:
Can anyone suggest how I can get around the above error when
sending/receiving a ZFS filesystem? It seems to fail when about 2/3rds
of the data have been passed from send to recv. Is it possible to get
more diagnostics out?
You
m is
currently running build 125 and receiving system something approximating
to 133, but I've had the same problem with this filesystem for all
builds I've used over the last 2 years.
--
Cheers
Andrew Gabriel
hen I demonstrate this on the
SSD/Flash/Turbocharge Discovery Days I run in the UK from time to time (the
name changes over time;-).
--
Andrew Gabriel
ctories,
which runs in a few seconds and lists both added/changed files and deleted
files.
http://opensolaris.org/jive/message.jspa?messageID=434176#434176
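Not having that post to hand, the approach it describes presumably boils down to an rsync dry run against the snapshot directory, roughly like this (dataset and snapshot names hypothetical):
rsync -avn --delete /tank/fs/ /tank/fs/.zfs/snapshot/daily-1/
The -n makes it a dry run, so nothing is written to the read-only snapshot; the output lists files added or changed since the snapshot was taken, and --delete additionally reports files that have since been removed.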
-Andrew
ich
has ended up with a customer buying Thumpers or Amber Road systems
from Sun. (but that's my job, I guess;-)
--
Andrew
n you might
want to invest in solid state disk swap devices, which will go some way
towards reducing the factor of 1000 I mentioned above. (Take note of
aligning swap to the 4k flash i/o boundaries.)
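As a sketch of one way to do that with a zvol-backed swap device on an SSD pool (pool and volume names hypothetical, x86 4k pagesize assumed):
zfs create -V 8G -b 4k rpool/swap4k        # volblocksize matched to the 4k flash page size
swap -a /dev/zvol/dsk/rpool/swap4k         # add it as an additional swap device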
Probably lots of other possibilities too, given more than a co
so it doesn't need to swap.
Then it doesn't matter what the performance of the swap device is.
--
Andrew Gabriel
on which types
of file systems, we're limited to guessing.
--
Andrew
), and they now have
affordable storage for their projects, which makes them viable once more.
--
Andrew
much
better with hybrid storage pools.
So these drives sound to me to have been designed specifically for ZFS!
It's hard to imagine any other filesystem which can exploit them so
completely.
--
Andrew
ed them to be long
in some cases when things do go wrong and timeouts and retries are
triggered.
--
Andrew
On Thu, Dec 10, 2009 at 09:50:43AM +, Andrew Robert Nicols wrote:
> We've been using ZFS for about two years now and make a lot of use of zfs
> send/receive to send our data from one X4500 to another. This has been
> working well for the past 18 months that we've been doin
We've been using ZFS for about two years now and make a lot of use of zfs
send/receive to send our data from one X4500 to another. This has been
working well for the past 18 months that we've been doing the sends.
I recently upgraded the receiving thumper to Solaris 10 u8 and since then,
I've been
hi,
I'm re-sending this because I'm hoping that someone has some answers
to the following questions. I'm working a hot Escalation on AmberRoad
and am trying to understand what's under zfs' hood.
thanks
Solaris RPE
/andrew rutz
On 11/25/09 13:55, andrew.r...@sun.c
months ago.
Has anyone else seen this?
Thanks,
Andrew
--
Systems Developer
e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +44 (0)1524 5 10147
Lancaster University Network Services is a limited company registered in
England and Wales. Registered number: 04311892. Registe
eful.
Note: I'm running sol10u8. I expect this to work fine on recent OpenSolaris
also, but I have not tested that. The only change required to make
zfs-auto-snapshot v0.12 work on sol10u8 was changing ksh93 to dtksh in the
shebang line.
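In other words, the whole change was to the interpreter line at the top of the script, roughly (exact paths assumed, not checked against the script itself):
#!/usr/bin/ksh93       (original shebang, assumed)
#!/usr/dt/bin/dtksh    (replacement interpreter that exists on Solaris 10 update 8)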
Andrew Daugherity
Systems Analyst
Division of
) to have used computers back in the days
when they all did this anyway...
Funny how things go full circle...
--
Andrew
hi,
I hear rumors that a read-prefetch implementation has been
putback to opensolaris/ON. If so, which build is it available
in? Will it be putback to S10?
tnx
--
Andrew Rutz    andrew.r...@sun.com
Solaris RPE Ph: (x64089) 512
he l2arc literally simply a *larger* ARC? eg, does the l2arc
obey the normal cache property where "everything that is in the L1$
(eg, ARC) is also in the L2$ (eg, l2arc)"? I have a feeling that
the set-theoretic intersection of ARC and L2ARC is empty (for some
reason).
o
export/zones/s...@20091122 0 - 5.21G -
a20$
All the ones with USED = 0 haven't changed. Don't know if this info is
available without spinning up disks though.
--
Andrew Gabriel
activities. It would be
really nice to have a speed knob on these operations, which you can vary
while the activity progresses, depending on other uses of the system.
--
Andrew
;t use much CPU, but it does interfere with
the interactive response of the desktop. I suspect this is due to the
i/o's it queues up on the disks.
--
Andrew
rpreted as something else, ZFS
doesn't care.
--
Andrew
7;s still in triage so probably not yet visible externally.
Part of this RFE relates to a requirement for vanity naming of disks,
although your requirement is a little different. If you are on support,
you should get yourself added to the RFE, together with your precise
requirements as above.
a replica of the "source".
FWIW, I'm using rsync 3.0.6 from opencsw. Older rsync should work fine but may
take longer to run.
-Andrew
>>> Richard Elling 11/9/2009 7:33 PM >>>
Seems to me that you really want auditing. You can configure the audit
system to on
.1TB-and-growing FS) seems like a great idea, if I could find a working
tool. It looks like dircmp(1) might be a possibility, but I'm open to
suggestions. I suppose I could use something like AIDE or tripwire, although
that seems a bit like swatting a fly with a sledgehammer.
Thanks,
Andre
d/receive, but I need a way to see what changed in the past day.
Thanks,
Andrew Daugherity
Systems Analyst
Division of Research & Graduate Studies
Texas A&M University
>>> Trevor Pretty 10/26/2009 5:16 PM >>>
Paul
Being a script hacker like you the only kludge
A Darren Dunham wrote:
On Wed, Nov 04, 2009 at 09:59:05AM +, Andrew Gabriel wrote:
It can be done by careful use of fdisk (with some risk of blowing away
the data if you get it wrong), but I've seen other email threads here
that indicate ZFS then won't mount the pool, becau
he
end of the partition is).
You could create a separate zpool in the spare fdisk partition. Not good
for performance, but probably fine for infrequently accessed data.
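As a minimal sketch (device and pool names hypothetical):
zpool create datapool c0d0p2       # p2 being the spare fdisk primary partition
zfs create datapool/archive        # filesystems for the infrequently accessed data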
--
Andrew
user.
--
Andrew
t_27 svc:/system/cron:default
Why zpool scrub was not done?
What did the output of the cron job say?
My guess would be that zpool wasn't in the PATH.
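In that case the fix is usually just to use the full path in the crontab entry, something along these lines (schedule and pool name hypothetical):
0 3 * * 0 /usr/sbin/zpool scrub tank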
--
Andrew
externally though).
The relevant part of the document just shows an example zpool replace
command.
Another part of the document gives advice which I suspect is incorrect
(and certainly generally unsuitable), and I'm guessing it's just
someone's internal notes, a
but if ZFS is smart enough to block several threads on fsync at once,
batch up their work to a single ZIL write-and-sync, then the
three-instance scheme will have no benefit.
ZFS does exactly this.
I demonstrate it on the SSD Discovery Days I run periodi
@200908271200
347 r...@thumper1:~> zfs rollback -r thumperpool/m...@200908270100
cannot destroy 'thumperpool/m...@200908271200': dataset already exists
This is an X4500 running Solaris U8. I'm running zpool version 15 and zfs
version 2.
Any guidance much appreciated.
Andre
rade" SSDs, please let me know!
I haven't seen that on the X25-E disks I hammer as part of the demos on
the "Turbocharge Your Apps" discovery days I run.
--
Andrew
we will at least let them know when their multi-million dollar
storage system silently drops a bit, which they tend to do far more often
than most customers realise.
--
Andrew
t. (Some of the MTA testing
standards do permit message duplication on unexpected MTA outage, but
never any loss, or at least didn't 10 years ago when I was working in
this area.) An MTA is basically a transactional database, and (if
properly written), the requirements on the u
like one you create explicitly, and it shows up in
"zfs list".
--
Andrew
ving a file from one or more snapshots at the same time as removing the source ...
Rudolf
--
Andrew
different nics (bge and e1000).
Unless you have some specific reason for thinking this is a zfs issue,
you probably want to ask on the crossbow-discuss mailing list.
--
Andrew
This is what my /var/adm/messages looks like:
Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed: ss
== NULL, file: ../../common/fs/zfs/space_map.c, line: 109
Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice]
I'm getting the same thing now.
I tried moving my 5-disk raidz and 2-disk mirror over to another machine, but
that machine would keep panicking (not ZFS-related panics). When I brought the
array back over, I started getting this as well. My mirror array is unaffected.
snv111b (2009.06 release)
it puts hot data on the outer edge of a
disk and uses slower parts of disks for less performance-critical data (things
like backups), so it certainly could decide what goes into flash.
--
Andrew
up to the next recordsize
boundary, are we guaranteed to be able to get that from the blocksize
reported by statvfs?
--
Andrew Deason
adea...@sinenomine.net
On Mon, 21 Sep 2009 18:20:53 -0400
Richard Elling wrote:
> On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:
>
> > On Mon, 21 Sep 2009 17:13:26 -0400
> > Richard Elling wrote:
> >
> >> You don't know the max overhead for the file before it is
> >>
this, it seems like the worst case is when copies=3.
Is that max with copies=3? Assume copies=1; what is it then?
--
Andrew Deason
adea...@sinenomine.net
t commit delay.
But solving the general problem for me isn't necessary. If I could just
get a ballpark estimate of the max overhead for a file, I would be fine.
I haven't paid attention to it before, so I don't even have an
intuitive feel for what it is.
--
Andrew Deason
adea...
nly possible for other applications to fill up the disk. We
just need to ensure that we don't fill up the disk to block other
applications. You may think this is fruitless, and just from that
description alone, it may be. But you must understand that with
efficiency somewhat. Files are truncated to 0 and grow again quite often
in busy clients. But that's an efficiency issue; we'd still be able to
stay within the configured limit that way.
But anyway, 128k may be fine for me, but what about if someone sets
thei
er of filesystem-specific steps needed to be taken to set up the
cache. You don't need to do anything special for a tmpfs cache, for
instance, or ext2/3 caches on linux.
--
Andrew Deason
adea...@sinenomine.net
pothetical case of the metadata compression ratio being
effectively the same as without compression, what would it be then?
--
Andrew Deason
adea...@sinenomine.net
the space calculations to be made /before/ the
write is done. It may be possible to change that, but I'd rather not, if
possible (and I'd have to make sure there's not a significant speed hit
in doing so).
--
Andrew Deason
adea...@sinenomine.net
else.
Plus ideally
you want this as EFI unless you need to put OpenSolaris into that pool
to boot from it - but sounds like you don't.
--
Andrew
and any additional parameters needed).
If we just have a way of knowing in advance how much disk space we're
going to take up by writing a certain amount of data, we should be
okay.
Or, if anyone has any other ideas on how to overcome this, it would be
welcomed.
--
Andrew Deason
adea...@s
.
> The case has been identified and I've just received an IDR,which I will
> test next week. I've been told the issue is fixed in update 8, but I'm
> not sure if there is an nv fix target.
>
> I'll post back once I've abused a test system for a while.
rmware.
Unfortunately, the zpool and zfs versions are too high to downgrade
thumper1 to.
I've tried upgrading thumper1 to 117 and now 121. We were originally
running 112. I'm still seeing exactly the same issues though.
What can I do in an attempt to find out what is causing these lockup
abels are involved though.)
--
Andrew Gabriel
Jeff Victor wrote:
I am trying to mirror an existing zpool on OpenSolaris 2009.06. I
think I need to delete two alternate cylinders...
The existing disk in the pool (c7d0s0):
Part      Tag    Flag     Cylinders        Size            Bloc
t functionality is expected to be, based on the structure
of the other commands.
--
Andrew
sed by zfs as metadata? and/or: why wouldn't the READ-CAPACITY
command return a value that's a lot closer to 33GB?
tnx
/andrew
(I work in Sun's Sustaining group)
On 08/03/09 12:02, cindy.swearin...@sun.com wrote:
Hi Andrew,
The AVAIL column indicates the pool size, not the vo
14.0K DMU dnode
    1    4    16K     8K   24.4G   38.0K  zvol object  <<<<<<
    2    1    16K    512     512      1K  zvol prop
thanks
/andrew
36G
disks on runs build 117 w/ zfs pool version 15.)
It also happens (at least in snv_113) just by rebooting (which can be
really bad news if you need it not to happen).
--
Andrew
14.0K DMU dnode
    1    4    16K     8K   24.4G   38.0K  zvol object  <<<<<<
    2    1    16K    512     512      1K  zvol prop
thanks
/andrew
I
guess I miss something here. Can you explain to me why the above would
be better (nice to have) than "zfs create whate...@now"?
Many more systems have a mkdir (or equivalent) command than have a zfs
command.
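Presumably the point being that, with snapshot permission delegated, even a remote NFS client can take a snapshot with a bare mkdir in the .zfs/snapshot directory (paths hypothetical):
mkdir /net/server/export/home/.zfs/snapshot/before-upgrade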
--
Andrew
___
zfs-disc
s to capacity, although
improvements in other HDD performance characteristics have been very
disappointing this decade (e.g. IOPs haven't improved much at all,
indeed they've only seen a 10-fold improvement over the last 25 years).
--
Andrew
_
or a 500GB zpool with 8 filesystems and 3,500 snapshots.
--
Andrew
se
it is the config we use for the root/boot pool.
Not so much a 2 disk mirror, but a single vdev mirror (e.g. also a 3-way
mirror, having one split off).
This is asked for by multiple customers at every ZFS Discovery Day we do.
See 6849185 and 5
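For what it's worth, the capability being asked for there is what later surfaced as zpool split, which detaches one half of each mirror into a new, importable pool (pool names hypothetical):
zpool split tank tank-copy      # make a new pool from the detached halves
zpool import tank-copy          # bring it up, possibly on another host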
you can't write new data to the SSD, but it still has all your existing
data available for reading. (At least for Enterprise SSD's -- I don't
know much about the MLC consumer drives.)
--
Andrew
5-M drives referred to are Intel's Mainstream drives, using MLC flash.
The Enterprise grade drives are X25-E, which currently use SLC flash
(less dense, more reliable, much longer lasting/more writes). The
expected lifetime is similar to an Enterprise grade hard dri
now, and I've crossed my fingers and typed
zpool detach export c3d0
and on both occasions, it has detached the FAULTED disk rather than the
ONLINE disk, which is exactly what I wanted. However, was I just lucky
each time, or is there logic to pick the FAULTED disk when the disk n
very time-consuming process.
Is there a more proper way to approach this issue? Should I be filing
a bug report?
What release and/or build?
--
Andrew
On Wed, Jul 08, 2009 at 09:41:12AM +0100, Andrew Robert Nicols wrote:
> On Wed, Jul 08, 2009 at 08:31:54PM +1200, Ian Collins wrote:
> > Andrew Robert Nicols wrote:
> >
> >> The thumper running 112 has continued to experience the issues described by
> >> Ian and
On Wed, Jul 08, 2009 at 08:31:54PM +1200, Ian Collins wrote:
> Andrew Robert Nicols wrote:
>
>> The thumper running 112 has continued to experience the issues described by
>> Ian and others. I've just upgraded to 117 and am having even more issues -
>> I'm unable
181800
cannot destroy 'thumperpool/m...@200906181900': dataset already exists
As a result, I'm a bit scuppered. I'm going to try going back to my 112
installation instead to see if that resolves any of my issues.
All of our thumpers have the following disk configuration:
hots received with
"zfs receive -d" to inherit their mountpoint from the pool they are imported
into, and/or explicitly override it?
Thanks,
Andrew Daugherity
Systems Analyst
Division of Research & Graduate Studies
Texas A&M University
toys and convert to Microsoft products.
If you change platform every time you get two bugs in a product, you
must cycle platforms on a pretty regular basis!
You often find the change is towards Windows. That very rarely has the
same rules applied, so things then stick there.
--
Andrew
controller _and_
driver. For SATA disks, the controller must have a Solaris driver which
drives the disks in native SATA mode (such as nv_sata(7D)), and not IDE
compatibility mode (such as SATA disks driven by ata(7D)).
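A quick way to check which driver actually has the disks (the driver names listed are just examples):
prtconf -D | egrep -i 'nv_sata|ahci|ata'   # shows which driver each device node is bound to
cfgadm -al                                 # SATA attachment points generally only appear under a native SATA framework driver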
--
Andrew
___
zfs-discuss ma
gress by issuing zpool status commands as a non-privileged user.
--
Andrew