Hello,
I've looked around Google and the zfs-discuss archives but have not been
able to find a good answer to this question (and the related questions
that follow it):
How well does ZFS handle unexpected power failures? (e.g. environmental
power failures, power supply dying, etc.)
Does it c
> On Wed, 24 Jun 2009, Lejun Zhu wrote:
>
>> There is a bug in the database about reads blocked by writes which may
>> be related:
>>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=6471212
>>
>> The symptom is sometimes reducing queue depth makes read perform better.
>
> I have been ba
On Mon, Jun 29, 2009 at 2:48 PM, Bob Friesenhahn wrote:
> On Wed, 24 Jun 2009, Lejun Zhu wrote:
>
>> There is a bug in the database about reads blocked by writes which may be
>> related:
>>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=6471212
>>
>> The symptom is sometimes reducing queue depth
On Wed, 24 Jun 2009, Lejun Zhu wrote:
There is a bug in the database about reads blocked by writes which may be
related:
http://bugs.opensolaris.org/view_bug.do?bug_id=6471212
The symptom is sometimes reducing queue depth makes read perform better.
I have been banging away at this issue wit
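A sketch, assuming the queue depth in question is the zfs_vdev_max_pending
tunable (default 35 in that era); the value here is purely illustrative:
# echo zfs_vdev_max_pending/W0t10 | mdb -kw
or persistently, via /etc/system:
set zfs:zfs_vdev_max_pending = 10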
On 06/27/09 23:50, Ian Collins wrote:
Leela wrote:
So no one has any idea?
About what?
This was in regard to a question sent to the install-discuss alias on
6/18 and later copied to zfs-discuss. I have answered it on the install
alias, in case anyone is following the issue.
Lori
> try to be spread across different vdevs.
% zpool iostat -v
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
---------  -----  -----  -----  -----  -----  -----
z           686G   434G     40      5  2.46M   271K
  c1t0d0s7  250G   194G
Hi,
Is there a time frame for when L2ARC will be available in Solaris 10? With
the latest U7 release, L2ARC appears to be disabled (operation not
supported on this type of pool).
Thanks,
Ravi
--
Ravi Kota
ISV Engineering
Sun Microsystems, Inc.
Phone: 408-228-1264, x69401
Mobile: 408-393-362
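For context, the operation that produces that error is presumably adding a
cache device (pool and device names hypothetical):
# zpool add tank cache c0t5d0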
Hi Patrick,
To answer your original question, yes, you can create your root swap
and dump volumes before you run the lucreate operation. LU won't change
them if they are already created.
Keep in mind that you'll need approximately 10 GB of disk space for the
ZFS root BE and the swap/dump volume
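A sketch of pre-creating them, assuming an rpool root pool and purely
illustrative sizes:
# zfs create -V 4G rpool/swap
# zfs create -V 4G rpool/dump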
On 29.06.09 23:01, Carsten Aulbert wrote:
One question:
Where can I find more about CR 6827199? I logged into sun.com with my
service contract enabled log-in but I cannot find it there (or the
search function does not like me too much).
You can try bugs.opensolaris.org too:
http://bugs.openso
Hi Mark,
Mark J Musante wrote:
>
> OK, looks like you're running into CR 6827199.
>
> There's a workaround for that as well. After the zpool import, manually
> zfs umount all the datasets under /atlashome/BACKUP. Once you've done
> that, the BACKUP directory will still be there. Manually moun
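A sketch of those first steps (the dataset name is hypothetical; repeat the
umount for each dataset under BACKUP):
# zpool import atlashome
# zfs umount atlashome/BACKUP/somedataset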
On Mon, 29 Jun 2009, Carsten Aulbert wrote:
s11 console login: root
Password:
Last login: Mon Jun 29 10:37:47 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
s11:~# zpool export atlashome
s11:~# ls -l /atlashome
/atlashome: No such file or directory
s11:~# zpool import at
On 06/28/09 08:41, Ross wrote:
Can't you just boot from an OpenSolaris CD, create a ZFS pool on the new
device, and just do a ZFS send/receive directly to it? So long as there's
enough space for the data, a send/receive won't care at all that the systems
are different sizes.
I don't know wha
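A sketch of the send/receive approach Ross describes (pool, device, and
snapshot names hypothetical):
# zpool create newpool c2t0d0
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -Fd newpool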
Hi
Mark J Musante wrote:
>
> Do a zpool export first, and then check to see what's in /atlashome. My
> bet is that the BACKUP directory is still there. If so, do an rmdir on
> /atlashome/BACKUP and then try the import again.
Sorry, I meant to copy this earlier:
s11 console login: root
Passwor
> On Mon, 29 Jun 2009, NightBird wrote:
>
>> I checked the output of iostat. svc_t is between 5 and 50, depending on
>> when data is flushed to the disk (CIFS write pattern). %b is between 10
>> and 50.
>> %w is always 0.
>> Example:
>> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w
Jose Luis Barquín Guerola wrote:
Thank you "Relling" and "et151817" for your answers.
So, just to end the post:
Relling, suppose the following situation:
one zpool in "Dynamic Stripe" with two disks, one of 100MB and the second
of 200MB.
If the spread is "stochastic spreading of data across vdevs
On Mon, 29 Jun 2009, NightBird wrote:
I checked the output of iostat. svc_t is between 5 and 50, depending on when
data is flushed to the disk (CIFS write pattern). %b is between 10 and 50.
%w is always 0.
Example:
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd27 31.5 127.0
NightBird wrote:
On Fri, 26 Jun 2009, Richard Elling wrote:
All the tools I have used show no IO problems. I think the problem is
memory but I am unsure on how to troubleshoot it.
Look for latency, not bandwidth. iostat will show latency at the
devi
> On Fri, 26 Jun 2009, Richard Elling wrote:
>
>>> All the tools I have used show no IO problems. I think the problem is
>>> memory but I am unsure on how to troubleshoot it.
>>
>> Look for latency, not bandwidth. iostat will show latency at the
>> device level.
>
> Unfortunately, the
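A concrete way to watch per-device latency as Richard suggests (interval in
seconds, purely illustrative):
# iostat -xnz 5
The asvc_t column is the average service time per device in milliseconds.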
On Mon, Jun 29 at 11:43, Patrick O'Sullivan wrote:
I've had success with the SIIG SC-SAE012-S2. PCIe and no problems
booting off of it in 2008.11.
I think there's a 4-port version of the 1068e-based chips from LSI,
and I believe this is it:
http://www.lsi.com/storage_home/products_home/host_b
On Mon, 29 Jun 2009, Carsten Aulbert wrote:
Is there any way to force zpool import to re-order that? I could delete
all stuff under BACKUP, however given the size I don't really want to.
Do a zpool export first, and then check to see what's in /atlashome. My
bet is that the BACKUP directory
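Sketched out, using the path from this thread:
# zpool export atlashome
# ls /atlashome
# rmdir /atlashome/BACKUP
# zpool import atlashome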
Thank you "Relling" and "et151817" for your answers.
So just to end the post:
Relling supouse the next situation:
One zpool in "Dinamic Stripe" with two disk, one of 100MB and the second
with 200MB
if the spread is "stochastic spreading of data across vdevs" you will have the
double of poss
Hi
a small addendum. It seems that all sub-ZFS datasets below /atlashome/BACKUP
are already mounted when the mount of /atlashome/BACKUP is attempted:
# zfs get all atlashome/BACKUP|head -15
NAME PROPERTY VALUE SOURCE
atlashome/BACKUP type filesystem
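One way to see which of those children are already mounted (mounted is a
native read-only property):
# zfs list -r -o name,mountpoint,mounted atlashome/BACKUP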
I've had success with the SIIG SC-SAE012-S2. PCIe and no problems
booting off of it in 2008.11.
On Jun 27, 2009, at 3:02 PM, Simon Breden
wrote:
Hi,
Does anyone know of a reliable 2 or 4 port SATA card with a solid
driver, that plugs into a PCIe slot, so that I can benefit from the
h
Erik Trimble wrote:
Jose Luis Barquín Guerola wrote:
Hello.
I have a question about how ZFS works with "Dynamic Stripe".
Well, start with the following situation:
- 4 disks of 100MB in stripe format under ZFS.
- We have used 75% of the stripe, so we have 100MB free. (easy)
Well, we add a new disk
This can occur if the location of the end of the partition has
changed. This could be due to the partition actually shrinking, or to
more than one partition referencing the same starting block but
different ending locations. Check your
partition configuration in format and debug with "z
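One way to inspect where each slice starts and ends (device name
hypothetical):
# prtvtoc /dev/rdsk/c1t0d0s2
Overlapping slices show up as rows sharing a first sector but differing in
sector count.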
> # swap -d /dev/zvol/dsk/rpool/swap
> # zfs set volsize=8G rpool/swap
> # swap -a /dev/zvol/dsk/rpool/swap
I'm still a bit fuzzy about how swap/dump and ZFS interact. If I have a
pool:
pool: pool1
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
po
Jose Luis Barquín Guerola wrote:
Hello.
I have a question about how ZFS works with "Dynamic Stripe".
Well, start with the following situation:
- 4 disks of 100MB in stripe format under ZFS.
- We have used 75% of the stripe, so we have 100MB free. (easy)
Well, we add a new disk of 100MB to the pool. S
I upgraded PowerPath on my system and exported the ZFS pool. After the
upgrade I was able to import the pool, but after a reboot I'm not able to
import the pool and it fails with an error:
zpool import emcpool1
cannot import 'emcpool1': invalid vdev configuration
and digging a little bit into it I found fo
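One thing worth trying when device paths change under multipathing (a
guess, not a confirmed fix): point the import at an explicit device
directory with -d:
# zpool import -d /dev/dsk emcpool1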
Hello.
I have a question about how ZFS works with "Dynamic Stripe".
Well, start with the following situation:
- 4 disks of 100MB in stripe format under ZFS.
- We have used 75% of the stripe, so we have 100MB free. (easy)
Well, we add a new disk of 100MB to the pool. So we have 200MB free but only
100M
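For reference, the situation described could be set up like this (device
names hypothetical):
# zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool add tank c1t4d0
After the add, new writes are striped dynamically across all five disks,
with allocation favoring the emptier vdev.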
And I just found out that one of my disks in the pool is showing missing labels:
r...@essapl020-u006 # zdb -l /dev/dsk/emcpower0c
LABEL 0
version=4
name='emcpool1'
state=0
txg=6973090
pool_
Hi,
I've browsed the archives but there does not seem to be a nice solution
to this one (happening on a Solaris 10u5 production machine):
zpool export atlashome
zpool import atlashome
cannot mount '/atlashome/BACKUP': directory is not empty
(from old emails I gathered that the output of zfs list
didn't help .. tried
r...@essapl020-u006 # zpool import
pool: emcpool1
id: 5596268873059055768
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-
On 29.06.09 11:41, Ketan wrote:
I'm having the following issue .. I import the zpool and it shows the pool
imported correctly
'zpool import' only shows which pools are available to import. In order to
actually import the pool you need to run:
zpool import emcpool1
but after a few seconds when I issue comma
Hi,
you have to upgrade your pool:
The pool is formatted using an older on-disk version.
# zpool upgrade -v
Then it should work fine.
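Note that zpool upgrade -v only prints the list of supported on-disk
versions; the upgrade itself takes the pool name (emcpool1 in this thread):
# zpool upgrade emcpool1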
Kind regards,
Moutacim
Ketan wrote:
I'm having the following issue .. I import the zpool and it shows the pool imported correctly but after a few seconds when I i
Hi,
you have to upgrade your pool:
The pool is formatted using an older on-disk version.
Then it should work fine.
Kind regards,
Moutacim
Ketan wrote:
I'm having the following issue .. I import the zpool and it shows the pool imported correctly but after a few seconds when I issue the command zpool lis
I'm having the following issue .. I import the zpool and it shows the pool
imported correctly, but after a few seconds when I issue the command zpool
list it does not show any pool, and when I try to import it again it says a
device is missing in the pool .. what could be the reason for this .. and
yes, this all star
So there is no possibility to do this with or before the lucreate command?
Hm. Well,
thank you anyway then
Hi, it is recommended to delete the old swap and make a new one:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=8G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
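To confirm the resized volume is back in use, one can then list the active
swap devices:
# swap -l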
kind regards,
Moutacim
Patrick Bittner wrote:
Thanks for your quick answer, but that is exactly what I am trying: to manage this