Hi,
I don't think anyone owns the list, and like anyone else you are very
welcome to ask any question.
L2ARC caches the zpool, so if your iSCSI LUN is a zvol or a file on ZFS
it will be cached.
Please use COMSTAR if you need performance.
You are correct that you will only need a couple of g
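On the ZIL/L2ARC sharing question quoted below: one way to do it is to slice the SSD and give one slice to the log and one to the cache. A minimal sketch, assuming a pool called tank and an SSD at c1t2d0 that has already been sliced with format(1M) (names are hypothetical, adjust to your layout):

  zpool add tank log c1t2d0s0     # small slice as a separate intent log (ZIL)
  zpool add tank cache c1t2d0s1   # rest of the SSD as an L2ARC cache device

Keep in mind that the log and cache workloads will then compete for the same device.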
>
> Next to that I am reading all kinds of performance benefits from using separate
> devices
> for the ZIL (write) and the Cache (read). I was wondering if I could
> share a single SSD between both the ZIL and the Cache device?
>
> Or is this not recommended?
>
>
> i asked something similar recently. The answe
Hi all,
Sorry for spamming your mailing list,
but since I could not find a direct answer on the internet or in the archives, I am giving
this a try!
I am building a ZFS filesystem to export iSCSI LUNs.
Now I was wondering if the L2ARC has the ability to cache non-filesystem iSCSI
LUNs?
Or does it only
I'm hoping someone has already come across this problem and solved it.
I'm using xVM to create 2 NFS fileservers. The Dom0 has a zpool with 8
zvols in it. 4 of the zvols are used for 4 DomUs. 2 of the DomUs are
fileservers, and each fileserver attaches 2 more of the zvols for the user
filespace
Mattias Pantzare wrote:
On Sun, Jan 10, 2010 at 16:40, Gary Gendel wrote:
I've been using a 5-disk raidZ for years on an SXCE machine, which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b and I'm finding that
On Jan 8, 2010, at 7:49 PM, bank kus wrote:
> dd if=/dev/urandom of=largefile.txt bs=1G count=8
>
> cp largefile.txt ./test/1.txt &
> cp largefile.txt ./test/2.txt &
>
> That's it; now the system is totally unusable after launching the two 8G
> copies. Until these copies finish, no other applicati
On Sun, Jan 10, 2010 at 09:54:56AM -0600, Bob Friesenhahn wrote:
> WTF?
urandom is a character device and is returning short reads (note the
0+n vs n+0 counts). dd is not padding these out to the full blocksize
(conv=sync) or making multiple reads to fill blocks (conv=fullblock).
Evidently the ur
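If the goal is simply to get a full-sized 8G file out of /dev/urandom, a minimal sketch (same file name as above; conv=sync pads each short read out to bs with NULs, so the tail of each block is zeros rather than random data):

  dd if=/dev/urandom of=largefile.txt bs=1G count=8 conv=sync
  # GNU dd can instead keep reading until each block is full:
  # dd if=/dev/urandom of=largefile.txt bs=1G count=8 iflag=fullblock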
On Sun, 10 Jan 2010, Lutz Schumann wrote:
Talking about read performance. Assuming a reliable ZIL disk (cache
flush = working): The ZIL can guarantee data integrity, however if
the backend disks (aka pool disks) do not properly implement cache
flush - a reliable ZIL device does not "workaroun
Actually, the performance decrease when disabling the write cache on the SSD is
approx. 3x (i.e. 66%).
Setup:
node1 = Linux Client with open-iscsi
server = comstar (cache=write through) + zvol (recordsize=8k, compression=off)
--- with SSD-Disk-write cache disabled:
node1:/mnt/ssd# iozone -
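For reference, a minimal sketch of how a zvol backing store like the one above might be created and registered with COMSTAR (pool, size and volume names are hypothetical, and note that zvols take volblocksize rather than recordsize; mapping the LU to a view/target with stmfadm is omitted here):

  zfs create -V 20G -o volblocksize=8k -o compression=off tank/iscsivol
  sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol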
Hello Arnaud,
Thanks for your reply.
We have a system (2 x Xeon 5410, Intel S5000PSL mobo and 8 GB memory) with
12 x 500 GB SATA disks on an Areca 1130 controller. rpool is a mirror over 2
disks; 8 disks in raidz2, 1 spare. We have 2 aggr links.
Our goal is an ESX storage system; I am using I
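To make that layout concrete, a sketch of how the data pool might be created (pool and device names are hypothetical placeholders for the non-rpool disks):

  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
      spare c2t8d0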
On Sun, 10 Jan 2010, Henrik Johansson wrote:
As an interesting aside, on my Solaris 10U8 system (plus a zfs IDR), dd
(Solaris or GNU) does
not produce the expected file size when using /dev/urandom as input:
Do you feel this is related to the filesystem? Is there any difference betw
I managed to disable the write cache (I did not know of a tool on Solaris, however
hdadm from the EON NAS binary_kit does the job):
Same power-disruption test with the Seagate HDD and write cache disabled ...
---
r...@nex
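For anyone without the EON binary_kit: on many Solaris systems the write cache can also be toggled interactively with format in expert mode. A rough sketch from memory (menu names may differ by driver, so treat this as an assumption to verify):

  format -e            # select the disk, then:
  format> cache
  cache> write_cache
  write_cache> disable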
Hello Bob,
On Jan 10, 2010, at 4:54 PM, Bob Friesenhahn wrote:
> On Sun, 10 Jan 2010, Phil Harman wrote:
>> In performance terms, you'll probably find that block sizes beyond 128K add
>> little benefit. So I'd suggest something like:
>>
>> dd if=/dev/urandom of=largefile.txt bs=128k count=65536
A very interesting thread
(http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/)
and some thinking about the design of SSDs led to an experiment I did with
the Intel X25-M SSD. The question was:
Is my data safe, once it has reached the di
On Sun, Jan 10, 2010 at 16:40, Gary Gendel wrote:
> I've been using a 5-disk raidZ for years on an SXCE machine, which I converted to
> OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
> was fixed.
>
> So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors
place a sync call after dd ?
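For example (same file and block size as suggested earlier in the thread):

  dd if=/dev/urandom of=largefile.txt bs=128k count=65536 && sync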
On Sun, 10 Jan 2010, Phil Harman wrote:
In performance terms, you'll probably find that block sizes beyond 128K add
little benefit. So I'd suggest something like:
dd if=/dev/urandom of=largefile.txt bs=128k count=65536
dd if=largefile.txt of=./test/1.txt bs=128k &
dd if=largefile.txt of=./test/2.txt bs=128k &
I've been using a 5-disk raidZ for years on an SXCE machine, which I converted to
OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which
was fixed.
So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on
random disks. If I repeat the scrub, it will fix error
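A minimal way to watch this happening (pool name is hypothetical):

  zpool scrub tank
  zpool status -v tank    # compare the READ/WRITE/CKSUM counters per device between scrubs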
Hello again,
On Jan 10, 2010, at 5:39 AM, bank kus wrote:
> Hi Henrik
> I have 16 GB RAM on my system; on a lesser-RAM system dd does cause problems, as
> I mentioned above. My __guess__ is that dd is probably sitting in some in-memory
> cache, since du -sh doesn't show the full file size until I do a sync.
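That guess is easy to check; a small sketch (file name as in the earlier examples):

  dd if=/dev/urandom of=largefile.txt bs=128k count=65536
  du -sh largefile.txt    # may report well under 8G while the data is still dirty in memory
  sync
  du -sh largefile.txt    # now shows the full size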
Hi Phil
You make some interesting points here:
-> yes, bs=1G was a lazy thing
-> the GNU cp I'm using does __not__ appear to use mmap;
open64, open64, read, write, close, close is the relevant sequence
-> replacing cp with dd (128K * 64K) does not help; no new apps can be launched
until the copies
We had a similar problem on Areca 1680. It was caused by a drive that
didn't properly reset (took ~2 seconds each time, according to the drive
tray's LED).
Replacing the drive solved this problem, but then we hit another problem
which you can see in this thread :
http://opensolaris.org/jive/thre
> No, sorry Dennis, this functionality doesn't exist yet, but
> is being worked,
> but will take a while, lots of corner cases to handle.
>
> James Dickens
> uadmin.blogspot.com
1) dammit
2) looks like I need to do a full offline backup and then restore
to shrink a zpool.
As usual, Thanks
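A sketch of that backup-and-restore path using zfs send/receive (snapshot and target pool names are hypothetical; the target pool must already exist and be large enough):

  zfs snapshot -r array03@migrate
  zfs send -R array03@migrate | zfs receive -F -d newpool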
What version of Solaris / OpenSolaris are you using? Older versions use
mmap(2) for reads in cp(1). Sadly, mmap(2) does not jive well with ZFS.
To be sure, you could check how your cp(1) is implemented using truss(1)
(i.e. does it do mmap/write or read/write?)
I find it interesting that ZFS'
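A minimal sketch of that truss check (trace file and paths are just examples):

  truss -t open,open64,mmap,mmap64,read,write,close -o cp.trace cp largefile.txt /test/1.txt
  # then inspect cp.trace: mmap/mmap64 on the source file's descriptor means the mmap path,
  # while a read/write loop means the read path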
No, sorry Dennis, this functionality doesn't exist yet, but is being worked,
but will take a while, lots of corner cases to handle.
James Dickens
uadmin.blogspot.com
On Sun, Jan 10, 2010 at 3:23 AM, Dennis Clarke wrote:
>
> Suppose the requirements for storage shrink (it can happen). Is it
> p
Suppose the requirements for storage shrink (it can happen). Is it
possible to remove a mirror set from a zpool?
Given this :
# zpool status array03
  pool: array03
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features
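For what it is worth, detaching one half of a mirror is already possible; it is removing a whole top-level mirror vdev from the pool that is not supported yet. A sketch with a hypothetical device name:

  zpool detach array03 c3t4d0    # drops one side of a mirror; the (now single-disk) vdev stays in the pool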
Hi, it seems you might have some kind of hardware issue there; I have no way
of reproducing this.
Yours
Markus Kovero
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of bank kus
Sent: 10. tammikuuta 2010 7:21
To: zfs-discus