Hello, As an avid fan of applying flash technologies to the storage stratum, I researched the DMCache project (maintained here). It appears that the DMCache project is quite a bit behind L2ARC but headed in the right direction. I found the LWN article very interesting as it is effectively a
On Wed, 2010-05-05 at 10:35 -0600, Evan Layton wrote:
> >> Do you have any of the older BEs like build 134 that you can boot back
> >> to and see if those will allow you to set the bootfs property on the
> >> root pool? It's just really strange that out of nowhere it started
> >> thinking that the
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
I've been thinking lately that I'm not sure I like the root pool being
unprotected, but I can't afford to give up another drive bay.
I'm guessing yo
Cindy,
Thanks. Same goes to everyone else on this thread.
I actually solved the issue - I booted back into FreeBSD's "Fixit" mode and was
still able to import the pool (I wouldn't have been able to if I had upgraded the
pool version!). FreeBSD's zpool command allowed me to unset the bootfs
property.
On Tue, May 25, 2010 at 1:58 AM, Reshekel Shedwitz wrote:
> I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am
> in what seems to be a weird situation regarding this pool. Maybe someone can
> help.
>
> I used to boot off of this pool in FreeBSD, so the bootfs property
On May 25, 2010, at 12:33 PM, R. Eulenberg wrote:
>> a manual recovery of missing top-level vdevs
>> -- a rare event.
> Yes, but so rare that I never thought it would trouble me. In my mind it was only
> the slog, and losing the last few seconds wouldn't hurt. So I don't have a
> backup, a snapshot nei
As I look at this further, I am convinced this should really be an
assert.
(I am running release builds, so asserts do not fire.)
I think in a debug build, I should be seeing the !list_empty() assert in:
list_remove(list_t *list, void *object)
{
list_node_t *lold = list_d2l(list, object);
...
--On 25 May 2010 11:15 -0700 Brandon High wrote:
On Tue, May 25, 2010 at 2:08 AM, Karl Pielorz
wrote:
I've tried contacting Intel to find out if it's true their "enterprise"
SSD has no cache protection on it, and what the effect of turning the
write
The "E" in X25-E does not mean "enterpri
eon:6:~#zdb -l /dev/rdsk/c1d0s0
LABEL 0
version: 22
name: 'videodrome'
state: 0
txg: 55561
pool_guid: 5063071388564101079
hostid: 919514
hostname: 'Videodrome'
top_guid: 1508059
Hi--
I apologize for misunderstanding your original issue.
Regardless of the original issues and the fact that current Solaris
releases do not let you set the bootfs property on a pool that has a
disk with an EFI label, the secondary bug here is not being able to
remove a bootfs property on
> a manual recovery of missing top-level vdevs
> -- a rare event.
Yes, but so rare that I never thought it would trouble me. In my mind it was only the
slog, and losing the last few seconds wouldn't hurt. So I don't have a backup,
a snapshot, nor the original zpool.cache file.
Is there any solution f
Greetings,
I see repeatable crashes on some systems after upgrading.. the signature is
always the same:
operating system: 5.11 snv_139 (i86pc)
panic message: BAD TRAP: type=e (#pf Page fault) rp=ff00175f88c0 addr=0
occurred in module "genunix" due to a NULL pointer dereference
list_remove+
Hi All, is there any procedure to recover a filesystem from an offline pool or
bring a pool online quickly?
Here is my issue.
* One 700GB Zpool
* 1 filesystem with compression turned on (only using a few MB)
* Tried to migrate another filesystem from a different pool with a dedup stream,
with
zfs send
On Tue, May 25, 2010 at 2:08 AM, Karl Pielorz wrote:
> I've tried contacting Intel to find out if it's true their "enterprise" SSD
> has no cache protection on it, and what the effect of turning the write
The "E" in X25-E does not mean "enterprise". It means "extreme". Like
the "EE" series CPUs t
On Tue, May 25, 2010 at 8:47 AM, Reshekel Shedwitz wrote:
> Could this be related to the way FreeBSD's zfs partitioned my disk? I thought
> ZFS used EFI by default though (except for boot pools).
Looks like it. Solaris thinks that it's EFI partitioned.
By default, Solaris uses SMI for boot volu
> From: Thomas Burgess [mailto:wonsl...@gmail.com]
>
> I might be somewhat confused to how the ZIL
> works but i thought the point of the ZIL was to "pretend" a write
> actually happened when it may not have actually been flushed to disk
> yet...
No. How the ZIL works is like this:
Whenever a p
On Tue, 25 May 2010, Thomas Burgess wrote:
The Apollo reentry vehicle was able to reach amazing speeds, but only for a
single use.
What exactly do you mean?
What I mean is what I said. A set of specifications which are all
written as "maximums" (i.e. peak) is pretty useless. Perhaps if y
Also, let me note, it came with a 3-year warranty, so I expect it to last at
least 3 years... but if it doesn't, I'll just return it under the warranty.
On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess wrote:
>
>
> On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us>
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 24 May 2010, Thomas Burgess wrote:
>
>>
>> It's a Sandforce SF-1500 model but without a supercap. Here's some info
>> on it:
>>
>> Maximum Performance
>>
>> * Max Read: up to 270MB/s
>> * Max Wr
>
>
> At least to me, this was not clearly "not asking about losing zil" and was
> not clearly "asking about power loss." Sorry for answering the question
> you
> thought you didn't ask.
>
I was only responding to your response of WRONG!!! The guy wasn't wrong in
regards to my questions. I'm s
On May 25, 2010, at 7:46 AM, thomas wrote:
> Is there a best practice on keeping a backup of the zpool.cache file?
Same as anything else, but a little bit easier because you can
snapshot the root pool. Thus far, the only real use for the backups
is for a manual recovery of missing top-level vdev
On Tue, 25 May 2010, Karl Pielorz wrote:
The X25-E's do apparently honour the 'Disable Write Cache' command - without
write cache, there is no cache to flush - all data is written to flash
immediately - presumably before it's ACK'd to the host.
There is always a cache, even if it is just a 4
On Mon, 24 May 2010, Thomas Burgess wrote:
It's a Sandforce SF-1500 model but without a supercap. Here's some info on it:
Maximum Performance
* Max Read: up to 270MB/s
* Max Write: up to 250MB/s
* Sustained Write: up to 235MB/s
* Random Write 4k: 15,000 IOPS
* Max 4k IOPS: 50,000
On 5/25/2010 11:39 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Kyle McDonald
>>
>> I've been thinking lately that I'm not sure I like the root pool being
>> unprotected, but I can't afford to give up another
Reshekel Shedwitz wrote:
r...@nexenta:~# zpool set bootfs= tank
cannot set property for 'tank': property 'bootfs' not supported on EFI labeled devices
r...@nexenta:~# zpool get bootfs tank
NAME  PROPERTY  VALUE  SOURCE
tank  bootfs    tank   local
Could this be related to the way FreeBSD
The USB stack in OpenSolaris is ... complex (STREAMs based!), and
probably not the most performant or reliable portion of the system.
Furthermore, the mass storage layer, which encapsulates SCSI, is not
tuned for a high number of IOPS or low latencies, and the stack makes
different assumption
> From: Thomas Burgess [mailto:wonsl...@gmail.com]
> > Just dataloss.
> WRONG!
>
> I didn't ask about losing my zil.
>
> I asked about power loss taking out my pool.
As I recall:
> I recently got a new SSD (ocz vertex LE 50gb)
>
> It seems to work really well as a ZIL performance wise. My que
> On Tue, May 25, 2010 at 1:58 AM, Reshekel Shedwitz
> wrote:
> > Ultimately, I would like to just set the bootfs
> property back to default, but this seems to be beyond
> my ability. There are some checks in libzfs_pool.c
> that I can bypass in order to set the value back to
> its default of "-",
Cindy,
Thanks for your reply. The important details may have been buried in my post, so I
will repeat them here to make it clearer:
(1) This was my boot pool in FreeBSD, but I do not think the partitioning
differences are really the issue. I can import the pool to nexenta/opensolaris
just fi
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Kyle McDonald
>
> I've been thinking lately that I'm not sure I like the root pool being
> unprotected, but I can't afford to give up another drive bay.
I'm guessing you won't be able to use
On Tue, May 25, 2010 at 1:58 AM, Reshekel Shedwitz wrote:
> Ultimately, I would like to just set the bootfs property back to default, but
> this seems to be beyond my ability. There are some checks in libzfs_pool.c
> that I can bypass in order to set the value back to its default of "-", but
>
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Nicolas Williams
> >
> > > I recently got a new SSD (ocz vertex LE 50gb)
> > >
> > > It seems to work really well as a ZIL perform
On 5/25/2010 8:24 AM, Brandon High wrote:
On Tue, May 25, 2010 at 2:55 AM, Vadim Comanescu wrote:
Is there any way you can display the parent of a dataset by zfs (get/list)
command? I do not need to list for example for a dataset all its children
by using -r just to get the parent of a ch
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nicolas Williams
>
> > I recently got a new SSD (ocz vertex LE 50gb)
> >
> > It seems to work really well as a ZIL performance wise.
> > I know it doesn't have a supercap so lets' say datalos
On Tue, May 25, 2010 at 2:55 AM, Vadim Comanescu wrote:
> Is there any way you can display the parent of a dataset by zfs (get/list)
> command? I do not need to list for example for a dataset all its children
> by using -r just to get the parent of a child. There are ways of grepping
> and doin
I am running the latest release from the genunix page
uname -a output:
SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris
On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:
> Hi Thomas,
>
> This looks like a display bug. I'm seeing it too.
>
> Let m
Hi Reshekel,
You might review these resources for information on using ZFS without
having to hack code:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Administration Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
I will add a section on migrat
On 5/25/2010 2:55 AM, Vadim Comanescu wrote:
Is there any way you can display the parent of a dataset by zfs
(get/list) command? I do not need to list for example for a dataset
all its children by using -r just to get the parent of a child. There
are ways of grepping and doing some preg matc
Is there a best practice on keeping a backup of the zpool.cache file? Is it
possible? Does it change with changes to vdevs?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/
try to "zdb -l /dev/rdsk/c1d0s0"
2010/5/25 h
> eon:1:~#zdb -l /dev/rdsk/c1d0
>
> LABEL 0
>
> failed to unpack label 0
>
> LABEL 1
> -
Hi Thomas,
This looks like a display bug. I'm seeing it too.
Let me know which Solaris release you are running and
I will file a bug.
Thanks,
Cindy
On 05/25/10 01:42, Thomas Burgess wrote:
I was just wondering:
I added a SLOG/ZIL to my new system today... I noticed that the L2ARC
shows up u
eon:1:~#zdb -l /dev/rdsk/c1d0
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
---
Roy,
Thanks for your reply.
I did get a new drive and attempted the approach (as you suggested, prior to
your reply); however, once booted off the OpenSolaris Live CD (or the rebuilt new
drive), I was not able to import the rpool (which I had established had sector
errors). I expect I should hav
On Tue, May 25, 2010 at 01:52:47PM +0100, Karl Pielorz wrote:
>
> --On 25 May 2010 15:28 +0300 Pasi Kärkkäinen wrote:
>
>>> I've tried contacting Intel to find out if it's true their "enterprise"
>>> SSD has no cache protection on it, and what the effect of turning the
>>> write cache off would ha
--On 25 May 2010 15:28 +0300 Pasi Kärkkäinen wrote:
I've tried contacting Intel to find out if it's true their "enterprise"
SSD has no cache protection on it, and what the effect of turning the
write cache off would have on both performance and write endurance, but
not heard anything back yet.
The last couple of times I've read this question, people normally responded
with:
It depends
you might not even NEED a slog, there is a script floating around which can
help determine that...
If you could benefit from one, it's going to be IOPS which help you... so if
the USB drive has more io
On Tue, May 25, 2010 at 10:08:57AM +0100, Karl Pielorz wrote:
>
>
> --On 24 May 2010 23:41 -0400 rwali...@washdcmail.com wrote:
>
>> I haven't seen where anyone has tested this, but the MemoRight SSD (sold
>> by RocketDisk in the US) seems to claim all the right things:
>>
>> http://www.rocketdisk.
Hi,
I know the general discussion is about flash SSD's connected through
SATA/SAS or possibly PCI-E these days. So excuse me if I'm asking
something that makes no sense...
I have a server that can hold 6 U320 SCSI disks. Right now I put in 5
300GB for a data pool, and 1 18GB for the root pool.
I
Is there any way you can display the parent of a dataset by zfs (get/list)
command? I do not need to list for example for a dataset all its children
by using -r just to get the parent of a child. There are ways of grepping
and doing some preg matches but i was wondering if there is any way by do
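Not that I know of a built-in option, but since a dataset's parent is simply its name minus the last path component, plain dirname on the output of `zfs list -H -o name` does the job. A sketch using a hypothetical dataset name (no zfs needed to run it):

```shell
# Sketch only: derive a dataset's parent from its name alone.
# "tank/home/user1" is a hypothetical dataset; on a live system the
# name would come from `zfs list -H -o name`.
name="tank/home/user1"
parent=$(dirname "$name")
echo "$parent"    # -> tank/home
```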
--On 24 May 2010 23:41 -0400 rwali...@washdcmail.com wrote:
I haven't seen where anyone has tested this, but the MemoRight SSD (sold
by RocketDisk in the US) seems to claim all the right things:
http://www.rocketdisk.com/vProduct.aspx?ID=1
pdf specs:
http://www.rocketdisk.com/Local/Files/Pr
Greetings -
I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am
in what seems to be a weird situation regarding this pool. Maybe someone can
help.
I used to boot off of this pool in FreeBSD, so the bootfs property got set:
r...@nexenta:~# zpool get bootfs tank
NAME P
I was just wondering:
I added a SLOG/ZIL to my new system today... I noticed that the L2ARC shows
up under its own heading, but the SLOG/ZIL doesn't. Is this correct?
see:
              capacity     operations    bandwidth
pool       alloc   free   read  write   read  write
---------  -----  -----  -----  -----  -----  -----