ank.
>If you suspect STMF, then try
> stmfadm list-lu -v
Bingo!
Deleted the LU and destroyed the volume.
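For anyone hitting the same "dataset is busy" symptom, the cleanup John describes would look roughly like this (the GUID and pool/volume names below are made up for illustration):

```shell
# List all STMF logical units with backing-store details;
# a zvol exported as an LU shows up here and keeps the dataset busy.
stmfadm list-lu -v

# Delete the LU (hypothetical GUID), then destroy the backing zvol:
stmfadm delete-lu 600144F0C8E5A90000004E9B2A610001
zfs destroy tank/iscsivol
```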
John
groenv...@acm.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
mp or swap device
and that iSCSI is disabled.
On Solaris 11.1, how would I determine what is keeping it busy?
John
groenv...@acm.org
rive.
If it imports, you will be able to run installgrub(1M).
>By the way, whatever the error message is when booting, it disappears so
>quickly I can't read it, so I am only guessing that this is the reason.
Boot with kernel debugger so you ca
Gregg Wonderly wrote:
> Have you tried importing the pool with that drive completely unplugged?
Thanks for your reply. I just tried that. zpool import now says:
pool: d
id: 13178956075737687211
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be im
I seem to have managed to end up with a pool that is confused about its child
disks. The pool is faulted with corrupt metadata:
pool: d
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
# pstack core
John
groenv...@acm.org
Replacing the SANs is cost prohibitive.
On Fri, Nov 23, 2012 at 10:24 AM, Tim Cook wrote:
>
>
> On Fri, Nov 23, 2012 at 9:49 AM, John Baxter wrote:
>
>>
>> We have the need to encrypt our data, approximately 30TB on three ZFS
>> volumes under Solaris 10. The vol
After searching for dm-crypt and ZFS on Linux and finding too little
information, I shall ask here. Please keep in mind this in the context of
running this in a production environment.
We have the need to encrypt our data, approximately 30TB on three ZFS
volumes under Solaris 10. The volumes curren
Hello everybody,
I just wanted to share my experience with a (partially) broken SSD that was in
use in a ZIL mirror.
We experienced a dramatic performance problem with one of our zpools, serving
home directories. Mainly NFS clients were affected. Our SunRay infrastructure
came to a complete ha
-Original message-
To: zfs-discuss@opensolaris.org;
From: Carsten John
Sent: Tue 11-09-2012 13:08
Subject:[zfs-discuss] Sol11 time-slider / snapshot not starting [again]
> Hello everybody,
>
> my time-slider service on a Sol11 machine died. I already
>
Hello everybody,
my time-slider service on a Sol11 machine died. I already uninstalled and
reinstalled the time-slider package, restarted the manifest-import service etc., but no
success.
/var/svc/log/application-time-slider:default.log:
--snip--
[ Sep 11 12:40:04 Enabled. ]
[ Sep 11 12:40:04 Execu
On 07/29/12 14:52, Bob Friesenhahn wrote:
My opinion is that complete hard drive failure and block-level media
failure are two totally different things.
That would depend on the recovery behavior of the drive for
block-level media failure. A drive whose firmware does excessive
(reports of up
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
valid - the block would have to be re-read once for the first
rewrite of its half; it might be taken from cache for the
second half's rew
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3 can
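The 64-bit speed difference Sašo mentions is easy to measure yourself. A quick, unscientific Python benchmark (function name and buffer size are my own choices): on CPUs without dedicated SHA instructions, sha512 usually beats sha256 on large buffers because it processes 128-byte blocks with 64-bit arithmetic.

```python
import hashlib
import time

def mb_per_s(algo: str, data: bytes, rounds: int = 20) -> float:
    """Rough single-threaded hash throughput in MB/s."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algo, data).digest()
    elapsed = time.perf_counter() - start
    return (len(data) * rounds) / elapsed / 1e6

buf = bytes(1 << 20)  # 1 MiB of zeros
for algo in ("sha256", "sha512"):
    print(f"{algo}: {mb_per_s(algo, buf):.0f} MB/s")
```

Results vary by CPU; modern processors with SHA-NI extensions can make sha256 the faster of the two.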
_post_swapgs+0x149()
# pkg info entire| grep Summary
Summary: entire incorporation including Support Repository Update
(Oracle Solaris 11 11/11 SRU 8.5).
John
groenv...@acm.org
-Original message-
To: Carsten John ;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins
Sent: Thu 05-07-2012 21:40
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
> On 07/ 5/12 11:32 PM, Carsten John wrote:
> > -Original message-
> &
-Original message-
To: Carsten John ;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins
Sent: Thu 05-07-2012 11:35
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
> On 07/ 5/12 09:25 PM, Carsten John wrote:
>
> > Hi Ian,
> >
> >
-Original message-
To: Carsten John ;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins
Sent: Thu 05-07-2012 09:59
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
> On 07/ 5/12 06:52 PM, Carsten John wrote:
> > Hello everybody,
> >
> >
Hello everybody,
for some reason I cannot find the zfs-autosnapshot service facility any more.
I already reinstalled time-slider, but it refuses to start:
RuntimeError: Error reading SMF schedule instances
Details:
['/usr/bin/svcs', '-H', '-o', 'state',
'svc:/system/filesystem/zfs/auto-snaps
On 07/04/12 16:47, Nico Williams wrote:
I don't see that the munmap definition assures that anything is written to
"disk". The system is free to buffer the data in RAM as long as it likes
without writing anything at all.
Oddly enough the manpages at the Open Group don't make this clear. So
I
-Original message-
To: Jim Klimov ;
CC: ZFS Discussions ;
From: Carsten John
Sent: Wed 27-06-2012 08:48
Subject:Re: [zfs-discuss] snapshots slow on sol11?
> -Original message-
> CC: ZFS Discussions ;
> From: Jim Klimov
> Sent: Tue 26-0
-Original message-
CC: ZFS Discussions ;
From: Jim Klimov
Sent: Tue 26-06-2012 22:34
Subject:Re: [zfs-discuss] snapshots slow on sol11?
> 2012-06-26 23:57, Carsten John wrote:
> > Hello everybody,
> >
> > I recently migrated a file server (NFS &
Hello everybody,
I recently migrated a file server (NFS & Samba) from OpenSolaris (Build 111) to
Sol11. Since the move we have been facing random (or seemingly random) outages of
our Samba service. As we have moved several folders (like Desktop and ApplicationData)
out of the usual profile to a folder inside th
On 06/16/12 12:23, Richard Elling wrote:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way
when you format, start with cylinder 1, do not use 0
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
Maybe not for core Solaris, but it i
On 06/15/12 15:52, Cindy Swearingen wrote:
It's important to identify your OS release to determine if
booting from a 4k disk is supported.
In addition, whether the drive is really 4096p or 512e/4096p.
In message <201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk>, tpc...@mklab.ph.rhul.ac.uk writes:
>Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your "zpool history" is hanging due to lack of
RAM.
Joh
In message <008c01cd4812$7399c180$5acd4480$@net>, David Combs writes:
>Actual newsgroup for zfs-discuss?
Did you try Gmane's interface?
http://groups.google.com/groups?selm=jo43q0%24no50%241%40tr22n12.aset.psu.edu>
John
groenv...@acm.org
>different updates.
$ pkg info entire
John
groenv...@acm.org
On 05/29/12 07:26, bofh wrote:
ashift:9 is that standard?
Depends on what the drive reports as physical sector size.
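To check what a pool actually ended up with, the ashift recorded in the cached pool configuration can be inspected (pool name is an example):

```shell
# ashift is recorded per top-level vdev in the pool config;
# 9 means 512-byte sectors, 12 means 4096-byte sectors.
zdb -C tank | grep ashift
```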
On 05/29/12 08:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the seagate green barracuda IIRC, and performance for
just about everything was 2
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now (which
are 512 byte sector).
Anyone offer up suggestions of either 3 or preferably 4TB drives that
actually work well with Z
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has been fixed (according to Oracle support) since build 164, but there
is no fix for Sola
-Original message-
To: zfs-discuss@opensolaris.org;
From: John D Groenveld
Sent: Fri 30-03-2012 21:47
Subject:Re: [zfs-discuss] kernel panic during zfs import [ORACLE should
notice this]
> In message <4f735451.2020...@oracle.com>, Deepak Honnalli writes:
>
about the modalities of transferring the core
> file though. I will ask around and see if I can help you here.
How to Upload Data to Oracle Such as Explorer and Core Files [ID 1020199.1]
John
groenv...@acm.org
:2091882785479247::NO:RP,6:P6_LPI:27242443094470222098916>
John
groenv...@acm.org
-Original message-
To: zfs-discuss@opensolaris.org;
From: Borja Marcos
Sent: Thu 29-03-2012 11:49
Subject:[zfs-discuss] Puzzling problem with zfs receive exit status
>
> Hello,
>
> I hope someone has an idea.
>
> I have a replication program that copies a dataset from
eping systems running and not clicking through flash overloaded support portals
>searching for CSIs, I'm giving the relevant information to the list now.
If the Flash interface is broken, try the non-Flash MOS site:
http://SupportHTML.Oracle.COM/>
John
groenv...@acm.org
-Original message-
To: zfs-discuss@opensolaris.org;
From: Deepak Honnalli
Sent: Wed 28-03-2012 09:12
Subject:Re: [zfs-discuss] kernel panic during zfs import
> Hi Carsten,
>
> This was supposed to be fixed in build 164 of Nevada (6742788). If
> you are still seeing
-Original message-
To: ZFS Discussions ;
From: Paul Kraus
Sent: Tue 27-03-2012 15:05
Subject:Re: [zfs-discuss] kernel panic during zfs import
> On Tue, Mar 27, 2012 at 3:14 AM, Carsten John wrote:
> > Hallo everybody,
> >
> > I have a Solaris 11 box
Hello everybody,
I have a Solaris 11 box here (Sun X4270) that crashes with a kernel panic
during the import of a zpool (some 30TB) containing ~500 zfs filesystems after
reboot. This causes a reboot loop, until I boot single user and remove
/etc/zfs/zpool.cache.
From /var/adm/messages:
sav
upport
Solaris running on third-party hardware.
http://www.oracle.com/webfolder/technetwork/hcl/hcts/index.html>
John
groenv...@acm.org
laotsu said:
> well check this link
>
> https://shop.oracle.com/pls/ostore/product?p1=SunFireX4270M2server&p2=&p3=&p4=&sc=ocom_x86_SunFireX4270M2server&tz=-4:00
>
> you may not like the price
Hahahah! Thanks for the laugh. The dual 10Gbe PCI card breaks my budget. I'm
not going t
On Fri Mar 23 at 10:06:12 2012 laot...@gmail.com wrote:
> well
> use component of x4170m2 as example you will be ok
> intel cpu
> lsi sas controller non raid
> sas 7200 rpm hdd
> my 2c
That sounds too vague to be useful unless I could afford an X4170M2. I
can't build a custom box and I don't have th
Bob Friesenhahn wrote:
> On Thu, 22 Mar 2012, The Honorable Senator and Mrs. John Blutarsky wrote:
> >
> > This will be a do-everything machine. I will use it for development, hosting
> > various apps in zones (web, file server, mail server etc.) and running other
> >
Ladies and Gentlemen,
I'm thinking about spending around 1,250 USD for a tower format (desk side)
server with RAM but without disks. I'd like to have 16G ECC RAM as a
minimum and ideally 2 or 3 times that amount and I'd like for the case to
have room for at least 6 drives, more would be better but
MegaCli -PDMakeJBOD -PhysDrv[E0:S0,E1:S1,...] -aALL
John
groenv...@acm.org
Hello everybody,
I set up a script to replicate all zfs filesystems (some 300 user home
directories in this case) within a given pool to a "mirror" machine. The basic
idea is to send the snapshots incrementally if the corresponding snapshot exists
on the remote side or send a complete snapshot if
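The incremental-or-full logic described above can be sketched roughly like this (pool, host, and snapshot names are invented for the example):

```shell
#!/bin/sh
# Replicate one filesystem: incremental send if the remote side
# already has the previous snapshot, otherwise a full send.
FS=tank/home/user1
PREV=backup-2012-03-01
CURR=backup-2012-03-02
REMOTE=mirrorhost

if ssh "$REMOTE" zfs list -t snapshot "$FS@$PREV" >/dev/null 2>&1; then
    # Remote has the previous snapshot: send only the delta.
    zfs send -i "$FS@$PREV" "$FS@$CURR" | ssh "$REMOTE" zfs recv -F "$FS"
else
    # No common snapshot: fall back to a complete send.
    zfs send "$FS@$CURR" | ssh "$REMOTE" zfs recv -F "$FS"
fi
```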
Hi everybody,
are there any problems to expect if we try to export/import a zfs pool from
opensolaris (intel) (zpool version 14) to solaris 10 (sparc) (zpool version 19)?
thanks
Carsten
In message <4f435ca9.8010...@tuneunix.com>, nathank writes:
>Is there actually a fix to allow manual setting of ashift now that I
No.
http://docs.oracle.com/cd/E23824_01/html/821-1462/zpool-1m.html>
John
groenv...@acm.org
2 out of the box or will
Yes.
John
groenv...@acm.org
On 01/25/12 09:08, Edward Ned Harvey wrote:
Assuming the failure rate of drives is not linear, but skewed toward higher
failure rate after some period of time (say, 3 yrs) ...
See section 3.1 of the Google study:
http://research.google.com/archive/disk_failures.pdf
although section 4.2 o
On 01/24/12 17:06, Gregg Wonderly wrote:
What I've noticed, is that when I have my drives in a situation of small
airflow, and hence hotter operating temperatures, my disks will drop
quite quickly.
While I *believe* the same thing and thus have over provisioned
airflow in my cases (for both dri
On 01/16/12 11:08, David Magda wrote:
The conclusions are hardly unreasonable:
While the reliability mechanisms in ZFS are able to provide reasonable
robustness against disk corruptions, memory corruptions still remain a
serious problem to data integrity.
I've heard the same thing said ("use
On 01/08/12 10:15, John Martin wrote:
I believe Joerg Moellenkamp published a discussion
several years ago on how L1ARC attempt to deal with the pollution
of the cache by large streaming reads, but I don't have
a bookmark handy (nor the knowledge of whether the
behavior is still acc
On 01/08/12 20:10, Jim Klimov wrote:
Is it true or false that: ZFS might skip the cache and
go to disks for "streaming" reads?
I don't believe this was ever suggested. Instead, if
data is not already in the file system cache and a
large read is made from disk should the file system
put this d
On 01/08/12 11:30, Jim Klimov wrote:
However for smaller servers, such as home NASes which have
about one user overall, pre-reading and caching files even
for a single use might be an objective per se - just to let
the hard-disks spin down. Say, if I sit down to watch a
movie from my NAS, it is
On 01/08/12 09:30, Edward Ned Harvey wrote:
In the case of your MP3 collection... Probably the only thing you can do is
to write a script which will simply go read all the files you predict will
be read soon. The key here is the prediction - There's no way ZFS or
solaris, or any other OS in th
http://docs.oracle.com/cd/E23823_01/html/819-5461/ggset.html#gkdep>
| How to Create a Mirrored ZFS Root Pool (Postinstallation)
John
groenv...@acm.org
xpress SRU 12 to S11 FCS.
Solaris 11 11/11 still spews the "I/O request is not aligned with
4096 disk sector size" warnings but zpool(1M) create's label
persists and I can export and import between systems.
John
groenv...@acm.org
S11, I'll reproduce
on a more recent kernel build.
Thanks,
John
groenv...@acm.org
In message <4e9db04b.80...@oracle.com>, Cindy Swearingen writes:
>This is CR 7102272.
Anyone out there have Western Digital's competing 3TB Passport
drive handy to duplicate this bug?
John
groenv...@acm.org
;
>SunOS 5.11 151.0.1.12 i386
>
>You might retry this on more recent bits, like the EA release,
>which I think is b 171.
Doubtful I'll find time to install EA before S11 FCS's
November launch.
>I'll still file the CR.
Thank you.
John
groenv...@acm.org
# uname -srvp
SunOS 5.11 151.0.1.12 i386
# zpool destroy foobar
# newfs /dev/rdsk/c1t0d0s0
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
The device sector size 4096 is not supported by ufs!
John
groenv...@acm.org
In message <201110150202.p9f22w2n000...@elvis.arl.psu.edu>, John D Groenveld
writes:
>I'm baffled why zpool import is unable to find the pool on the
>drive, but the drive is definitely functional.
Per Richard Elling, it looks like ZFS is unable to find
the requisite labels for
I'm baffled why zpool import is unable to find the pool on the
drive, but the drive is definitely functional.
John
groenv...@acm.org
In message <4e970387.3040...@oracle.com>, Cindy Swearingen writes:
>Any USB-related messages in /var/adm/messages for this device?
Negative.
cfgadm(1M) shows the drive and format->fdisk->analyze->read
runs merrily.
John
groenv...@acm.org
ot import it.
John
groenv...@acm.org
nd c1t0d0 fdisk partitions and
Solaris slices presumably hunting for pools.
John
groenv...@acm.org
In message <4e95cb2a.30...@oracle.com>, Cindy Swearingen writes:
>What is the error when you attempt to import this pool?
"cannot import 'foo': no such pool available"
John
groenv...@acm.org
# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
can't
import it.
I thought weird USB connectivity issue, but I can run
"format -> analyze -> read" merrily.
Anyone seen this bug?
John
groenv...@acm.org
On 09/12/11 10:33, Jens Elkner wrote:
Hmmm, at least if S11x, ZFS mirror, ICH10 and the cmdk (IDE) driver are involved,
I'm 99.9% confident that "a while" turns out to be only some days or weeks
- no matter what Platinum-Enterprise-HDDs you use ;-)
On Solaris 11 Express with a dual drive mirror,
http://wdc.custhelp.com/app/answers/detail/a_id/1397/~/difference-between-desktop-edition-and-raid-%28enterprise%29-edition-drives
ce when run from within
VirtualBox?
John
groenv...@acm.org
and try to import from a S11X LiveUSB.
John
groenv...@acm.org
Is there a list of zpool versions for development builds?
I found:
http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
where it says Solaris 11 Express is zpool version 31, but my
system has BEs back to build 139 and I have not done a zpool upgrade
since installing this system but it
Hello everybody,
is there any known way to configure the point-in-time *when* the time-slider
will snapshot/rotate?
With hundreds of zfs filesystems, the daily snapshot rotation slows down a big
file server significantly, so it would be better to have the snapshots rotated
outside the usual w
derivative version for testing?
--
John Wren Kennedy
Delphix
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com/
On 06/28/11 02:55, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Carsten John
>>
>> Now I'm wondering about the best option to replace t
Hello everybody,
some time ago a SSD within a ZIL mirror died. As I had no SSD available
to replace it, I dropped in a normal SAS harddisk to rebuild the mirror.
In the meantime I got the warranty replacement SSD.
Now I'm wondering about the best op
ystem, recreate the rpool
on slice 0 of that fdisk partition, use beadm(1M) to copy
your BE back to your new rpool, and then restore any other ZFS
from those snapshots.
John
groenv...@acm.org
ward is
>still unknown.
Ask Keith Block and company's sales critter about "Hardware from Oracle
- Pricing for Education (HOPE)":
http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/364419.pdf>
John
groenv...@acm.org
following are some thoughts if it's not too late:
> 1 SuperMicro 847E1-R1400LPB
I guess you meant the 847E16-R1400LPB; the SAS1 version makes no sense
> 1 SuperMicro H8DG6-F
not the best choice, see below why
> 171 Hitachi 7K3000 3TB
I'd go for the more environmentally
> -Original Message-
> From: Frank Lahm [mailto:frankl...@googlemail.com]
> Sent: 25 January 2011 14:50
> To: Ryan John
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Changed ACL behavior in snv_151 ?
> John,
> welcome onboard!
> 2011/1/25
I’m using /usr/bin/chmod
From: phil.har...@gmail.com [mailto:phil.har...@gmail.com]
Sent: 25 January 2011 14:50
To: Ryan John; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Changed ACL behavior in snv_151 ?
Which chmod are you using? (check your PATH)
- Reply message -
From
through
Anyone any ideas?
On a snv_134 system, the ACLs are retained.
Regards
John
I'm trying to roll back from a bad patch install on Solaris 10. From the
failsafe BE I tried to roll back, but zfs is asking me for 'allow rollback'
permissions. It's hard for me to tell exactly because the messages are
scrolling off the screen before I can read them. Any help would be appr
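For reference, delegated permissions of that kind are managed with zfs allow; something like the following (user, pool, and snapshot names are examples) would grant what the error is asking for:

```shell
# Grant a user the rollback permission (plus mount, which rollback needs):
zfs allow jdoe mount,rollback rpool/ROOT/s10be

# Then the rollback itself, to a pre-patch snapshot:
zfs rollback -r rpool/ROOT/s10be@prepatch
```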
ther half for another ZFS pool, but I can't figure out
>how to access it.
Use beadm(1M) to duplicate your BE to a USB disk, then boot it,
then format/fdisk your workstation disk, then use beadm(1M) to
duplicate your BE back to your workstat
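The beadm round-trip described above can be sketched as follows (BE and pool names are invented; the USB pool must also be made bootable with installgrub before it will boot):

```shell
# Copy the active BE onto a pool created on the USB disk:
beadm create -p usbpool s11be-usb
beadm activate s11be-usb      # boot from the USB copy next

# ...after repartitioning the internal disk and recreating rpool,
# copy the BE back onto it:
beadm create -p rpool s11be
```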
In message <201008112022.o7bkmc2j028...@elvis.arl.psu.edu>, John D Groenveld writes:
>I'm stumbling over BugID 6961707 on build 134.
I see the bug has been stomped in build 150. Awesome!
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6961707>
In which build di
Jeff Bacon wrote:
> I have a bunch of sol10U8 boxes with ZFS pools, most all raidz2 8-disk
> stripe. They're all supermicro-based with retail LSI cards.
>
> I've noticed a tendency for things to go a little bonkers during the
> weekly scrub (they all
Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync
writes?
What I mean is, doesn't the ZIL eventually need to make it to the pool, and if
the pool as a whole (spinning disks) can't keep up with 30+ vm's of write
requests, couldn't you fill up the ZIL that way?
--
This
was sending my service order requests to
/dev/null, but someone manually entered it after I submitted
web feedback.
John
groenv...@acm.org
has anyone here hit this and gotten it
resolved?
Is the pool corrupted on disk?
John
groenv...@acm.org
. JBOD, RAID zvols on both controllers.
--
John
Hello all. I am new...very new to OpenSolaris and I am having an issue and have
no idea what is going wrong. So I have 5 drives in my machine, all 500gb. I
installed OpenSolaris on the first drive and rebooted. Now what I want to do
is add a second drive so they are mirrored. How does one do t
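The usual answer for mirroring an existing ZFS root (device names below are hypothetical) is zpool attach plus installing the boot blocks on the new disk:

```shell
# Attach the second disk to the root pool, turning it into a mirror:
zpool attach rpool c7t0d0s0 c7t1d0s0

# Watch the resilver complete:
zpool status rpool

# Make the new disk bootable (x86):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0
```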
Could you import it back on the original server with
zpool import -f newpool rpool?
Jay
-Original Message-
From: Brandon High [mailto:bh...@freaks.com]
Sent: Wednesday, June 16, 2010 2:19 PM
To: Seaman, John
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] mount zfs boot disk
d back onto the filesystem, is there another way to
de-dedup the pool?
Thanks,
John
On Jun 13, 2010, at 10:17 PM, Erik Trimble wrote:
> Hernan F wrote:
>> Hello, I tried enabling dedup on a filesystem, and moved files into it to
>> take advantage of it. I had about 700GB of fi
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
--
John
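Two common approaches, depending on whether the datasets should stay present but unmounted, or the whole pool should be skipped at boot (pool and dataset names are examples):

```shell
# Keep the pool imported but stop a dataset from mounting automatically:
zfs set canmount=noauto tank/scratch

# Or keep the whole pool out of /etc/zfs/zpool.cache so it is not
# imported at boot; import it by hand when needed:
zpool set cachefile=none tank
zpool export tank
zpool import tank
```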
OK, I got a core dump, what do I do with it now?
It is 1.2G in size.
On Wed, May 19, 2010 at 10:54 AM, John Andrunas wrote:
> Hmmm... no coredump even though I configured it.
>
> Here is the trace though I will see what I can do about the coredump
>
> r...@cluster:/export/