Richard - the l2arc is c1t13d0. What tools can be used to show the l2arc stats?
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
raidz1      2.68T   580G    543    453  4.22M  3.70M
  c1t1d0        -      -    258    102   689K   358K
  c1t2d0        -      -    256    103   684K   354K
  c1t3d0        -      -    258    102   690K   359K
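For the L2ARC stats question above: one option, assuming a standard OpenSolaris build, is to read the ARC kstats directly; the l2_* counters cover the cache device (tools like arcstat.pl read the same kstats):
# kstat -p zfs:0:arcstats | grep l2_
Comparing l2_hits against l2_misses, and watching l2_size grow, gives a quick feel for how warm the cache device is.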
I've had this happen to me too. I found some dtrace scripts at the
time that showed that the file system was spending too much time
finding available 128k blocks or the like, as each disk was nearly
full, even though combined I still had 140GB left of my 3TB pool. The
SPA code I believe it was w
I have two snv_126 systems. I'm trying to zfs send a recursive snapshot
from one system to another:
# zfs send -v -R tww/opt/chro...@backup-20091225 |\
ssh backupserver "zfs receive -F -d -u -v tww"
...
found clone origin tww/opt/chroots/a...@ab-1.0
receiving incremental stream of tww/opt
One thing that bugged me is that I cannot ssh as myself to my box when a zpool
import is running. It just hangs after accepting my password.
I had to convert root from a role to a user and ssh as root to my box.
I now know why this is: when I log in, /usr/sbin/quota gets called. This must
do a
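A quick way to confirm what quota is blocked on, from a session that is already open (a sketch; real output will vary):
# pgrep -lf quota
# pstack `pgrep quota`
If it is hung in an ioctl against the importing pool, the blocked syscall shows at the top of the stack.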
On Sun, Dec 27, 2009 at 8:40 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sun, 27 Dec 2009, Tim Cook wrote:
>
> How is that going to prevent blocks being spread all over the disk when
>> you've got files several GB in size being written concurrently and deleted
>> at random? A
On Sun, 27 Dec 2009, Tim Cook wrote:
How is that going to prevent blocks being spread all over the disk
when you've got files several GB in size being written concurrently
and deleted at random? And then throw in a mix of small files as
well, kiss that goodbye.
There would certainly be bloc
The best place to start looking at disk-related performance problems
is iostat.
Slow disks will show high service times. There are many options, but I
usually use something like:
iostat -zxcnPT d 1
Ignore the first line. Look at the service times. They should be
below 10ms
for goo
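As a rough illustration (made-up numbers, output trimmed), asvc_t is the active service time in milliseconds and %b the percent busy; a disk sitting well above 10ms asvc_t under modest load is the one to look at:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  210.0    5.0 1680.0   40.0  0.0  1.2    0.0   25.4   0  92 c1t3d0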
On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sun, 27 Dec 2009, Tim Cook wrote:
>
>>
>> That is ONLY true when there's significant free space available/a fresh
>> pool. Once those files have been deleted and the blocks put back into the
>> free pool,
Lately my zfs pool in my home server has degraded to a state where it can be
said it doesn't work at all. Read speed is slower than I can read from the
internet on my slow DSL line... This is compared to just a short while ago,
when I could read from it at over 50mb/sec over the network.
My
On Sun, 27 Dec 2009, Tim Cook wrote:
That is ONLY true when there's significant free space available/a
fresh pool. Once those files have been deleted and the blocks put
back into the free pool, they're no longer "sequential" on disk,
they're all over the disk. So it makes a VERY big differe
On Sun, Dec 27, 2009 at 1:38 PM, Roch Bourbonnais
wrote:
>
> Le 26 déc. 09 à 04:47, Tim Cook a écrit :
>
>
>>
>> On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
>> wrote:
>>
>> I've started porting a video streaming application to opensolaris on
On Sun, Dec 27 at 12:57, Errol Neal wrote:
I'm looking at upgrading a box serving a lun via iSCSI (Comstar) currently
running snv_121.
The initiators are cluster pair running SLES11.
Any gotchas that I should be aware of?
I bumped into 4 of the ~12 warnings in the release notes. I don't
have
I have a pool in the same state. I deleted a file set that was compressed and
deduped and had a bunch of zero blocks in it. The delete ran for a while and
then it hung. Trying to import with any combination of -f or -fF or -fFX gives
the same results you guys get. zdb -eud shows all my file
Here is iostat output of my disks being read:
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   45.3    0.0   27.6    0.0  0.0  0.6    0.0   13.3   0  60 c3d0
   44.3    0.0   27.0    0.0  0.0  0.3    0.0    7.7   0  34 c3d1
   43.5    0.0   27.4    0.0  0.0  0.5    0.0   12.6
It sounds like you have less data on yours; perhaps that is why yours freezes
faster.
Whatever mine is doing during the import, it reads my disks now for nearly
24 hours, and then starts writing to the disks.
The reads start out fast, then they just sit, going at something like 20k /
second on
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach wrote:
> Brent,
>
> I had known about that bug a couple of weeks ago, but that bug has been filed
> against v111 and we're at v130. I have also searched the ZFS part of this
> forum and really couldn't find much about this issue.
>
> The other issu
On Sun, 27 Dec 2009, Hillel Lubman wrote:
Thanks, that's what I wanted to know. But why can't OpenSolaris boot
from its own RAID-Z? Is it a GRUB related limitation? It would make
sense to be able to boot from RAID-Z if it's such an integral part
of ZFS.
Yes, it is a GRUB limitation. Anythi
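The supported alternative is a mirrored root pool rather than raidz; a minimal sketch with hypothetical device names (the second slice needs an SMI label and at least the same size):
# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0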
Bob Friesenhahn wrote:
> OpenSolaris can only boot from a single drive, or a mirror pair. It can't
> boot from
> raidz. This means that you need to dedicate one or two drives (or
> partitions) for the root pool.
Thanks, that's what I wanted to know. But why can't OpenSolaris boot from its
own RA
On Sun, 27 Dec 2009, Hillel Lubman wrote:
May be this question was already asked here, so I'm sorry for redundancy.
What is a minimal amount of hard drives for enabling RAID-Z on
OpenSolaris? Is it possible to have only 4 identical hard drives,
and to install a whole system on them with softw
This isn't an option for me. The current machine is going to be totally
upgraded: new motherboard, new RAM (ECC), new controller cards and 9 new hard drives.
The current pool is 3 raidz1 vdevs with 4 drives each (all 1 TB).
It's about 65% full.
If I have to use some other filesystem that is an opt
Maybe this question was already asked here, so I'm sorry for redundancy.
What is the minimal number of hard drives needed to enable RAID-Z on OpenSolaris? Is
it possible to have only 4 identical hard drives, and to install a whole system
on them with software RAID-Z underneath? Or enabling RAID-Z i
Le 26 déc. 09 à 04:47, Tim Cook a écrit :
On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
wrote:
I've started porting a video streaming application to opensolaris on
ZFS, and am hitting some pretty weird performance issues. The thing
I'm
t
4 Gigabytes. The hang on my system happens much faster. I can watch the drives
light up and run iostat but 3 minutes in like clockwork everything gets hung
and I'm left with a blinking cursor at the console that newlines but doesn't do
anything. Although if I run kmdb and hit f1-a I can get int
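If mdb still answers from another session once things wedge, a stack summary of the ZFS kernel threads is often the fastest clue (a generic sketch, not from this particular hang):
# echo "::stacks -m zfs" | mdb -k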
I don't know how much progress has been made on this, but back when I moved
from FreeBSD (an older version, maybe the first to have stable ZFS) to Solaris,
this couldn't be done since they were not quite compatible yet. I got some new
drives since the ones I had were dated, copied the data to th
I'm looking at upgrading a box serving a lun via iSCSI (Comstar) currently
running snv_121.
The initiators are a cluster pair running SLES11.
Any gotchas that I should be aware of?
Sys Specs are:
2 Xeon E5410 procs
8 GB RAM
12 10K Savios
1 X25-E as ZIL
Supermicro rebranded LSI SAS controller
Th
Thanks for the mdb syntax - I wasn't sure how to set it using mdb at
runtime, which is why I used /etc/system. I was quite intrigued to find
out that the Solaris kernel was in fact designed for being tuned at
runtime using some generic debugging mechan
Are there any negative consequences as a result of a force import? I mean
STUNT; "Sudden Totally Unexpected and Nasty Things"
-Me
On Sun, Dec 27, 2009 at 17:55, Sriram Narayanan wrote:
> opensolaris has a newer version of ZFS than Solaris. What you have is
> a pool that was not marked as exporte
Also, if you don't care about the existing pool and want to create a
new pool on the same devices, you can go ahead and do so.
The format command will list the storage devices available to you.
-- Sriram
On 12/27/09, Sriram Narayanan wrote:
> opensolaris has a newer version of ZFS than Solaris
opensolaris has a newer version of ZFS than Solaris. What you have is
a pool that was not marked as exported for use on a different OS
install.
Simply force import the pool using zpool import -f
-- Sriram
On 12/27/09, Havard Kruger wrote:
> Hi, in the process of building a new fileserver and I'
On Sun, 27 Dec 2009, Cyril Plisko wrote:
gzip compression is not supported in the GRUB ZFS reader. You should avoid
using it for the boot filesystem. You may try to revert the compression setting
to "off" or "on" (which defaults to lzjb) and try to boot that way.
(That is, if you didn't rewrite any critical da
OK, I'll take a stab at it...
On Dec 26, 2009, at 9:52 PM, Brad wrote:
repost - Sorry for ccing the other forums.
I'm running into an issue where there seems to be a high number of
read iops hitting disks, and physical free memory is fluctuating
between 200MB and 450MB out of 16GB total. We h
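A quick first check for where that memory went (the ARC is the usual suspect) is ::memstat plus the ARC size kstat; a generic sketch, not from the original post:
# echo ::memstat | mdb -k
# kstat -p zfs:0:arcstats:size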
I know I'm a bit late to contribute to this thread, but I'd still like to
add my $0.02. My "gut feel" is that we (generally) don't yet understand the
subtleties of disk drive failure modes as they relate to 1.5 or 2TB+ drives.
Why? Because those large drives have not been widely available until
Hi, in the process of building a new fileserver I'm currently playing
around with various operating systems. I created a pool in Solaris before I
decided to try OpenSolaris as well, so I installed OpenSolaris 2009.06, but I
forgot to destroy the pool I created in Solaris, so now I can't imp
On Fri, Dec 25, 2009 at 5:49 PM, Michael Armstrong wrote:
> Hi, I currently have 4x 1tb drives in a raidz configuration. I want to add
> another 2 x 1tb drives, however if i simply zpool add, i will only gain an
> extra 1tb of space as it will create a second raidz set inside the existing
> tank/
You could revert to the @install snapshot (via the LiveCD) and see if
that works for you.
-- Sriram
On 12/27/09, Tomas Bodzar wrote:
> So I booted from Live CD and then :
>
> zpool import
> pfexec zpool import -f rpool
> pfexec zfs set compression=off rpool
> pfexec zpool export rpool
>
> and re
On 26/12/2009 12:22, Saso Kiselkov wrote:
Thank you, the post you mentioned helped me move a bit forward. I tried
putting:
zfs:zfs_txg_timeout = 1
BTW: you can tune it on a live system without needing to reboot.
mi...@r600:~# echo zfs_txg_timeo
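The command above is cut off; the usual shape of that runtime tuning looks like the sketch below (check the current value first, and note it does not persist across reboots):
# echo zfs_txg_timeout/D | mdb -k
# echo zfs_txg_timeout/W0t1 | mdb -kw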
Hi, I currently have 4x 1TB drives in a raidz configuration. I want to
add another 2x 1TB drives, however if I simply zpool add, I will only
gain an extra 1TB of space as it will create a second raidz set inside
the existing tank/pool. Is there a way to add my new drives into the
existing
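A raidz vdev can't be widened in place, so with two new drives the usual choice is to add them as a second vdev, most often a mirror; a sketch with hypothetical device names (new data then stripes across both vdevs rather than expanding the raidz):
# zpool add tank mirror c6t2d0 c6t3d0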
Hi,
I just picked up one of these cards and had a few questions.
After installing it I can see it via scanpci, but any devices I've connected to
it don't show up in iostat -En. Is there anything specific I need to do to
enable it?
Do any of you experience the bug mentioned below (worried about usi
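For the devices-not-appearing part, the generic first steps on Solaris are to rebuild the device links and check the controller's attachment points (not specific to this card):
# devfsadm -Cv
# cfgadm -al
# format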
So I booted from Live CD and then :
zpool import
pfexec zpool import -f rpool
pfexec zfs set compression=off rpool
pfexec zpool export rpool
and rebooted, but still the same problem.
On 12/26/2009 10:41 AM, Saso Kiselkov wrote:
Would an upgrade to the development repository of 2010.02 do the same?
I'd like to avoid having to do a complete reinstall, since I've got
quite a bit of custom software in the system already in various pl
You should be able to boot with the Live CD and then import the pool, I would
think...
On Sun, Dec 27, 2009 at 4:40 AM, Tomas Bodzar wrote:
> Uh, but why system allowed that if it's not running? And how to revert it
> as I can't boot even to single user mode? Is there a way to do that with
> Live CD
Uh, but why system allowed that if it's not running? And how to revert it as I
can't boot even to single user mode? Is there a way to do that with Live CD?
On Sun, Dec 27, 2009 at 11:25 AM, Tomas Bodzar wrote:
> Hi all,
>
> I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows
> because snv_130 doesn't boot anymore after installation of VirtualBox guest
> additions. Older builds before snv_129 were running fine too. I like some
Hi Tomas,
On 27/12/2009, at 7:25 PM, Tomas Bodzar wrote:
> pfexec zpool set dedup=verify rpool
> pfexec zfs set compression=gzip-9 rpool
> pfexec zfs set devices=off rpool/export/home
> pfexec zfs set exec=off rpool/export/home
> pfexec zfs set setuid=off rpool/export/home
grub doesn’t support g