I was just wondering:
I added a SLOG/ZIL to my new system today... I noticed that the L2ARC shows
up under its own heading, but the SLOG/ZIL doesn't. Is this correct?
see:
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
--
Greetings -
I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am
in what seems to be a weird situation regarding this pool. Maybe someone can
help.
I used to boot off of this pool in FreeBSD, so the bootfs property got set:
r...@nexenta:~# zpool get bootfs tank
NAME
--On 24 May 2010 23:41 -0400 rwali...@washdcmail.com wrote:
I haven't seen where anyone has tested this, but the MemoRight SSD (sold
by RocketDisk in the US) seems to claim all the right things:
http://www.rocketdisk.com/vProduct.aspx?ID=1
pdf specs:
Is there any way you can display the parent of a dataset with the zfs (get/list)
command? I do not need to list all of a dataset's children by using -r just to
get the parent of a child. There are ways of grepping and doing some preg
matches, but I was wondering if there is any way by
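For what it's worth, since a dataset's parent is simply its name with the last slash-separated component removed, a minimal shell sketch can derive it without any grepping. This is not a zfs built-in; the helper name here is made up for illustration:

```shell
# Illustrative helper, not a zfs feature: a dataset's parent is the
# name with the last /component stripped off.
parent_dataset() {
  case $1 in
    */*) printf '%s\n' "${1%/*}" ;;   # tank/home/user -> tank/home
    *)   printf '%s\n' "$1"      ;;   # top-level dataset: the pool itself
  esac
}

# Example: parent_dataset tank/home/user   -> tank/home
```

Piped together with `zfs list -H -o name`, this gives the parent of any dataset in one step.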
Hi,
I know the general discussion is about flash SSD's connected through
SATA/SAS or possibly PCI-E these days. So excuse me if I'm asking
something that makes no sense...
I have a server that can hold 6 U320 SCSI disks. Right now I put in 5
300GB for a data pool, and 1 18GB for the root pool.
On Tue, May 25, 2010 at 10:08:57AM +0100, Karl Pielorz wrote:
--On 24 May 2010 23:41 -0400 rwali...@washdcmail.com wrote:
I haven't seen where anyone has tested this, but the MemoRight SSD (sold
by RocketDisk in the US) seems to claim all the right things:
The last couple of times I've read this question, people normally responded
with:
It depends
You might not even NEED a slog; there is a script floating around which can
help determine that...
If you could benefit from one, it's going to be IOPS which help you, so if
the USB drive has more
--On 25 May 2010 15:28 +0300 Pasi Kärkkäinen pa...@iki.fi wrote:
I've tried contacting Intel to find out if it's true their enterprise
SSD has no cache protection on it, and what the effect of turning the
write cache off would have on both performance and write endurance, but
not heard
On Tue, May 25, 2010 at 01:52:47PM +0100, Karl Pielorz wrote:
--On 25 May 2010 15:28 +0300 Pasi Kärkkäinen pa...@iki.fi wrote:
I've tried contacting Intel to find out if it's true their enterprise
SSD has no cache protection on it, and what the effect of turning the
write cache off would
Roy,
Thanks for your reply.
I did get a new drive and attempted the approach (as you suggested, prior to
your reply); however, once booted off the OpenSolaris Live CD (or the rebuilt new
drive), I was not able to import the rpool (which I had established had sector
errors). I expect I should
eon:1:~#zdb -l /dev/rdsk/c1d0
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
Hi Thomas,
This looks like a display bug. I'm seeing it too.
Let me know which Solaris release you are running and
I will file a bug.
Thanks,
Cindy
On 05/25/10 01:42, Thomas Burgess wrote:
I was just wondering:
I added a SLOG/ZIL to my new system today...i noticed that the L2ARC
shows up
try to zdb -l /dev/rdsk/c1d0s0
2010/5/25 h bajsadb...@pleasespam.me
eon:1:~#zdb -l /dev/rdsk/c1d0
LABEL 0
failed to unpack label 0
LABEL 1
Is there a best practice on keeping a backup of the zpool.cache file? Is it
possible? Does it change with changes to vdevs?
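On the backup question, one minimal sketch is to keep timestamped copies, since the file is rewritten whenever the vdev configuration changes. The helper name and paths below are illustrative assumptions, not a standard tool (the cache normally lives at /etc/zfs/zpool.cache):

```shell
# Illustrative helper, not a standard command: copy a zpool.cache file
# into a backup directory with a timestamp suffix, keeping older copies.
backup_zpool_cache() {
  src=${1:-/etc/zfs/zpool.cache}
  dest=${2:-/var/backups/zfs}
  mkdir -p "$dest" &&
  cp "$src" "$dest/zpool.cache.$(date +%Y%m%d%H%M%S)"
}
```

Running something like this after any zpool add/attach/remove keeps the backup current with the vdev layout.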
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 5/25/2010 2:55 AM, Vadim Comanescu wrote:
Is there any way you can display the parent of a dataset with the zfs
(get/list) command? I do not need to list all of a dataset's
children by using -r just to get the parent of a child. There
are ways of grepping and doing some preg
Hi Reshekel,
You might review these resources for information on using ZFS without
having to hack code:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Administration Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
I will add a section on
I am running the latest release from the genunix page
uname -a output:
SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris
On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Hi Thomas,
This looks like a display bug. I'm seeing it too.
Let me know
On Tue, May 25, 2010 at 2:55 AM, Vadim Comanescu va...@syneto.net wrote:
Is there any way you can display the parent of a dataset with the zfs (get/list)
command? I do not need to list all of a dataset's children by using -r just to
get the parent of a child. There are ways of
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL performance wise.
I know it doesn't have a supercap, so let's say data loss
On 5/25/2010 8:24 AM, Brandon High wrote:
On Tue, May 25, 2010 at 2:55 AM, Vadim Comanescu va...@syneto.net wrote:
Is there any way you can display the parent of a dataset with the zfs (get/list)
command? I do not need to list all of a dataset's children by using -r just to
get
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
solar...@nedharvey.comwrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL
On Tue, May 25, 2010 at 1:58 AM, Reshekel Shedwitz reshe...@spam.la wrote:
Ultimately, I would like to just set the bootfs property back to default, but
this seems to be beyond my ability. There are some checks in libzfs_pool.c
that I can bypass in order to set the value back to its default
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
I've been thinking lately that I'm not sure I like the root pool being
unprotected, but I can't afford to give up another drive bay.
I'm guessing you won't be able to use the
Cindy,
Thanks for your reply. The important details may have been buried in my post, so I
will repeat them to make things clearer:
(1) This was my boot pool in FreeBSD, but I do not think the partitioning
differences are really the issue. I can import the pool to nexenta/opensolaris
just
On Tue, May 25, 2010 at 1:58 AM, Reshekel Shedwitz
reshe...@spam.la wrote:
Ultimately, I would like to just set the bootfs
property back to default, but this seems to be beyond
my ability. There are some checks in libzfs_pool.c
that I can bypass in order to set the value back to
its
From: Thomas Burgess [mailto:wonsl...@gmail.com]
Just data loss.
WRONG!
I didn't ask about losing my ZIL.
I asked about power loss taking out my pool.
As I recall:
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL performance wise. My question
is,
The USB stack in OpenSolaris is ... complex (STREAMs based!), and
probably not the most performant or reliable portion of the system.
Furthermore, the mass storage layer, which encapsulates SCSI, is not
tuned for a high number of IOPS or low latencies, and the stack makes
different
Reshekel Shedwitz wrote:
r...@nexenta:~# zpool set bootfs= tank
cannot set property for 'tank': property 'bootfs' not supported on EFI labeled devices
r...@nexenta:~# zpool get bootfs tank
NAME PROPERTY VALUE SOURCE
tank  bootfs    tank   local
Could this be related to the way
On 5/25/2010 11:39 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
I've been thinking lately that I'm not sure I like the root pool being
unprotected, but I can't afford to give up another drive
On Mon, 24 May 2010, Thomas Burgess wrote:
It's a sandforce sf-1500 model but without a supercap. Here's some info on it:
Maximum Performance
* Max Read: up to 270MB/s
* Max Write: up to 250MB/s
* Sustained Write: up to 235MB/s
* Random Write 4k: 15,000 IOPS
* Max 4k IOPS: 50,000
On May 25, 2010, at 7:46 AM, thomas wrote:
Is there a best practice on keeping a backup of the zpool.cache file?
Same as anything else, but a little bit easier because you can
snapshot the root pool. Thus far, the only real use for the backups
is for a manual recovery of missing top-level
At least to me, this was not clearly asking about losing the ZIL, and was
not clearly asking about power loss. Sorry for answering the question you
thought you didn't ask.
I was only responding to your response of WRONG!!! The guy wasn't wrong in
regard to my questions. I'm sorry for
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 24 May 2010, Thomas Burgess wrote:
It's a sandforce sf-1500 model but without a supercap. Here's some info
on it:
Maximum Performance
* Max Read: up to 270MB/s
* Max Write: up to 250MB/s
Also, let me note, it came with a 3-year warranty, so I expect it to last at
least 3 years... but if it doesn't, I'll just return it under the warranty.
On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess wonsl...@gmail.com wrote:
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn
On Tue, 25 May 2010, Thomas Burgess wrote:
The Apollo reentry vehicle was able to reach amazing speeds, but only for a
single use.
What exactly do you mean?
What I mean is what I said. A set of specifications which are all
written as maximums (i.e. peak) is pretty useless. Perhaps if
From: Thomas Burgess [mailto:wonsl...@gmail.com]
I might be somewhat confused as to how the ZIL
works, but I thought the point of the ZIL was to pretend a write
actually happened when it may not have actually been flushed to disk
yet...
No. How the ZIL works is like this:
Whenever a process
On Tue, May 25, 2010 at 8:47 AM, Reshekel Shedwitz reshe...@gmail.com wrote:
Could this be related to the way FreeBSD's zfs partitioned my disk? I thought
ZFS used EFI by default though (except for boot pools).
Looks like it. Solaris thinks that it's EFI partitioned.
By default, Solaris uses
On Tue, May 25, 2010 at 2:08 AM, Karl Pielorz kpielorz_...@tdx.co.uk wrote:
I've tried contacting Intel to find out if it's true their enterprise SSD
has no cache protection on it, and what the effect of turning the write
The E in X25-E does not mean enterprise. It means extreme. Like
the EE
Hi all, is there any procedure to recover a filesystem from an offline pool, or
bring a pool online quickly?
Here is my issue.
* One 700GB Zpool
* 1 filesystem with compression turn on (only using few MB)
* Tried to migrate another filesystem from a different pool with a dedup stream,
with
zfs send
Greetings,
I see repeatable crashes on some systems after upgrading.. the signature is
always the same:
operating system: 5.11 snv_139 (i86pc)
panic message: BAD TRAP: type=e (#pf Page fault) rp=ff00175f88c0 addr=0
occurred in module genunix due to a NULL pointer dereference
a manual recovery of missing top-level vdevs
-- a rare event.
Yes, but so rare that I never thought it would trouble me. In my mind it was only
the slog, and losing the last few seconds wouldn't hurt. So I don't have a backup,
a snapshot, nor the original zpool.cache file.
Is there any solution
Hi--
I apologize for misunderstanding your original issue.
Regardless of the original issues and the fact that current Solaris
releases do not let you set the bootfs property on a pool that has a
disk with an EFI label, the secondary bug here is not being able to
remove a bootfs property
eon:6:~#zdb -l /dev/rdsk/c1d0s0
LABEL 0
version: 22
name: 'videodrome'
state: 0
txg: 55561
pool_guid: 5063071388564101079
hostid: 919514
hostname: 'Videodrome'
top_guid:
--On 25 May 2010 11:15 -0700 Brandon High bh...@freaks.com wrote:
On Tue, May 25, 2010 at 2:08 AM, Karl Pielorz kpielorz_...@tdx.co.uk
wrote:
I've tried contacting Intel to find out if it's true their enterprise
SSD has no cache protection on it, and what the effect of turning the
write
The
As I am looking at this further, I convince myself that this should really be an
assert.
(I am running release builds, so asserts do not fire.)
I think in a debug build, I should be seeing the !list_empty() assert in:
list_remove(list_t *list, void *object)
{
list_node_t *lold =
On May 25, 2010, at 12:33 PM, R. Eulenberg wrote:
a manual recovery of missing top-level vdevs
-- a rare event.
Yes, but so rare that I never thought it would trouble me. In my mind it was only
the slog, and losing the last few seconds wouldn't hurt. So I don't have a
backup, a snapshot, nor
On Tue, May 25, 2010 at 1:58 AM, Reshekel Shedwitz reshe...@spam.la wrote:
I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am
in what seems to be a weird situation regarding this pool. Maybe someone can
help.
I used to boot off of this pool in FreeBSD, so the
Cindy,
Thanks. Same goes to everyone else on this thread.
I actually solved the issue - I booted back into FreeBSD's Fixit mode and was
still able to import the pool (wouldn't have been able to if I upgraded the
pool version!). FreeBSD's zpool command allowed me to unset the bootfs
property.
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
I've been thinking lately that I'm not sure I like the root pool being
unprotected, but I can't afford to give up another drive bay.
I'm guessing
On Wed, 2010-05-05 at 10:35 -0600, Evan Layton wrote:
Do you have any of the older BEs like build 134 that you can boot back
to and see if those will allow you to set the bootfs property on the
root pool? It's just really strange that out of nowhere it started
thinking that the device is
Hello, As an avid fan of the application of flash technologies to the storage stratum, I researched the DMCache project (maintained here). It appears that the DMCache project is quite a bit behind L2ARC but headed in the right direction. I found the LWN article very interesting, as it is effectively a