On 15 Feb 2010, at 23:33, Bob Beverage wrote:
>> On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff wrote:
>> I've seen exactly the same thing. Basically, terrible transfer rates
>> with Windows and the server sitting there completely idle.
>
> I am also seeing this behaviour. It started somewhere around snv111 but
> I am not sure exactly when.
> The other thing I've noticed with all of the "destroyed a large dataset
> with dedup enabled and it's taking forever to import/destroy" questions
> is that the process runs so, so, so much faster with 8+ GiB of RAM.
> Almost to a man, everyone who reports these 3, 4, or more day destroys
On Feb 15, 2010, at 8:43 PM, heinz zerbes wrote:
>
> Gents,
>
> We want to understand the mechanism of zfs a bit better.
>
> Q: what is the design/algorithm of zfs in terms of reclaiming unused blocks?
> Q: what criteria are there for zfs to start reclaiming blocks?
The answer to these questions
The system in question has 8 GB of RAM. It never paged during the
import (unless I was asleep at that point, but anyway).
It ran for 52 hours, then started doing 47% kernel CPU usage. At this
stage, dtrace stopped responding, and so iopattern died, as did
iostat. It was also increasing RAM usage rapidly.
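For anyone watching a run like this, one way to keep an eye on it (a
sketch only, assuming a stock OpenSolaris box; iopattern is the
DTraceToolkit script, and ::arc is the mdb dcmd that reports ARC memory
use):

   iostat -xn 10          # per-device throughput and service times
   echo ::arc | mdb -k    # ARC size/target, to track kernel RAM growth
   ./iopattern 10         # random vs. sequential I/O mix (DTraceToolkit)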
> RFE open to allow you to store [DDT] on a separate top level VDEV
hmm, add to this spare, log and cache vdevs; it's to the point of making
another pool and thinly provisioning volumes (sketch below) to maintain
partitioning flexibility.
taemun: hey, thanks for closing the loop!
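For reference, a minimal sketch of that workaround (pool, device and
volume names are made up; -s makes the zvols sparse, i.e. thinly
provisioned, so no space is reserved up front):

   zpool create ddpool mirror c2t0d0 c2t1d0
   zfs create -s -V 200g ddpool/vol1    # sparse zvol, no reservation
   zfs create -s -V 200g ddpool/vol2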
Gents,
We want to understand the mechanism of zfs a bit better.
Q: what is the design/algorithm of zfs in terms of reclaiming unused blocks?
Q: what criteria are there for zfs to start reclaiming blocks?
Issue at hand is an LDOM or zone running in a virtual (thin-provisioned)
disk on a NFS server
The DDT is stored within the pool, IIRC, but there is an RFE open to
allow you to store it on a separate top-level VDEV, like a SLOG.
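Until such an RFE lands, the DDT can at least be inspected in place;
assuming a pool named tank, zdb prints the dedup table histogram, which
helps with sizing RAM:

   zdb -DD tank    # DDT entries, on-disk/in-core sizes, refcount histogram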
The other thing I've noticed with all of the "destroyed a large dataset
with dedup enabled and it's taking forever to import/destroy" ... wrote:
> Just thought I'd chi
Just thought I'd chime in for anyone who had read this - the import
operation completed this time, after 60 hours of disk grinding.
:)
On Sun, Feb 14, 2010 at 11:08:52PM -0600, Tracey Bernath wrote:
> Now, to add the second SSD ZIL/L2ARC for a mirror.
Just to be clear: mirror the ZIL by all means, but don't mirror L2ARC;
just add more devices and let them load-balance. This is especially true
if you're sharing SSD writes with the ZIL, as
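In zpool terms that advice looks roughly like this (device names are
placeholders):

   zpool add tank log mirror c4t0d0 c4t1d0   # slog: mirror it
   zpool add tank cache c4t2d0 c4t3d0        # l2arc: stripe, don't mirror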
On Mon, Feb 15, 2010 at 01:45:57PM +0100, Bogdan Ćulibrk wrote:
> One more thing regarding SSD: would it be useful to throw in an
> additional SAS/SATA drive to serve as L2ARC? I know SSD is the most
> logical thing to put as L2ARC, but will a conventional drive be of
> *any* help as L2ARC?
Only i
> On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff wrote:
> I've seen exactly the same thing. Basically, terrible transfer rates
> with Windows and the server sitting there completely idle.
I am also seeing this behaviour. It started somewhere around snv111 but
I am not sure exactly when.
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff wrote:
> I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
> box, and am experiencing absolutely poor / unusable performance.
...
>
> From here, I discover the iSCSI target on our Windows Server 2008 R2
> file server, a
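For context, the target side of a setup like this is typically a zvol
exported through COMSTAR; a rough sketch only, with made-up names and the
LU GUID elided:

   zfs create -V 500g tank/iscsivol
   sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol   # prints the LU GUID
   stmfadm add-view 600144f0...                    # GUID from previous step
   itadm create-target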
Hi--
From your pre-promotion output, both fs1-patch and snap1 are referencing
the same 16.4 GB, which makes sense. I don't see how fs1 could be a clone
of fs1-patch because it should be REFER'ing 16.4 GB as well in your
pre-promotion zfs list.
If you snapshot, clone, and promote, then the sna
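For anyone following along, the sequence under discussion is (dataset
names from the thread, pool name assumed):

   zfs snapshot tank/fs1-patch@snap1
   zfs clone tank/fs1-patch@snap1 tank/fs1
   zfs promote tank/fs1   # snap1 moves to fs1; fs1-patch now depends on it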
Thank you, Darren and Richard. I think this gives me what I wanted.
Yi
On Mon, Feb 15, 2010 at 3:13 PM, Darren J Moffat wrote:
> On 15/02/2010 19:15, Yi Zhang wrote:
>>> Can you create a zvol and use that for ufs? Slow, but ...
>>>
>>> Casper
>>
>> Casper, thanks for the tip! Actually
On 15/02/2010 19:15, Yi Zhang wrote:
>> Can you create a zvol and use that for ufs? Slow, but ...
>>
>> Casper
>
> Casper, thanks for the tip! Actually I'm not sure if this would work
> for me. I wanted to use directio to bypass the file system cache when
> reading/writing files. That's why I chose UFS instead of ZFS
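Casper's suggestion spelled out, as a sketch (size and names invented;
forcedirectio bypasses the UFS cache, though the zvol underneath still
goes through the ARC):

   zfs create -V 10g tank/ufsvol
   newfs /dev/zvol/rdsk/tank/ufsvol
   mount -o forcedirectio /dev/zvol/dsk/tank/ufsvol /mnt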
On Feb 15, 2010, at 11:15 AM, Yi Zhang wrote:
> On Mon, Feb 15, 2010 at 1:48 PM, wrote:
>>
>>> Hi,
>>>
>>> I recently installed OpenSolaris 2009.06 on a 10GB primary partition on
>>> my laptop. I noticed there wasn't any option for customizing the
>>> slices inside the solaris partition. After
On Mon, Feb 15, 2010 at 1:48 PM, wrote:
>
>>Hi,
>>
>>I recently installed OpenSolaris 2009.06 on a 10GB primary partition on
>>my laptop. I noticed there wasn't any option for customizing the
>>slices inside the solaris partition. After installation, there was
>>only a single slice (0) occupying t
Thanks!
>Hi,
>
>I recently installed OpenSolaris 2009.06 on a 10GB primary partition on
>my laptop. I noticed there wasn't any option for customizing the
>slices inside the solaris partition. After installation, there was
>only a single slice (0) occupying the entire partition. Now the
>problem is that I n
On 02/15/10 10:26, Nick wrote:
>> There is no doubt that it is both a bug and expected behavior and is
>> related to deduplication being enabled.
> Is it expected because it's a bug, or is it a bug that is not going to
> be fixed and so I should expect it? Is there a bug/defect I can keep an
> eye on in
Hi,
I recently installed OpenSolaris 2009.06 on a 10GB primary partition on
my laptop. I noticed there wasn't any option for customizing the
slices inside the solaris partition. After installation, there was
only a single slice (0) occupying the entire partition. Now the
problem is that I need to s
> There is no doubt that it is both a bug and expected behavior and is
> related to deduplication being enabled.
Is it expected because it's a bug, or is it a bug that is not going to be
fixed and so I should expect it? Is there a bug/defect I can keep an eye
on in one of the OpenSolaris
Yes, send and receive will do the job. See the zfs manpage for details.
James Dickens
http://uadmin.blogspot.com
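For example, to move everything to a new pool in one go (pool names
assumed; -R sends the datasets recursively along with their properties):

   zfs snapshot -r tank@migrate
   zfs send -R tank@migrate | zfs receive -d newpool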
On Mon, Feb 15, 2010 at 11:56 AM, Tiernan OToole wrote:
> Good morning all.
>
> I am in the process of building my V1 SAN for media storage in house, and I
> am already thinking of the V2 build...
Good morning all.
I am in the process of building my V1 SAN for media storage in house, and I
am already thinking of the V2 build...
Currently, there are 8 250GB HDDs and 3 500GB disks. The 8 250s are in a
RAIDZ2 array, and the 3 500s will be in RAIDZ1...
At the moment, the current case is quite full
On Mon, 15 Feb 2010, Nick wrote:
> I'm using the latest OpenSolaris dev build (132) and I have my
> storage pools and volumes upgraded to the latest available versions.
> I am using deduplication on my ZFS volumes, set at the highest
> volume level, so I'm not sure if this has an impact. Can anyone
I've seen threads like this around this ZFS forum, so forgive me if I'm
covering old ground. I currently have a ZFS configuration where I have
individual drives presented to my OpenSolaris machine and I'm using ZFS
to do a RAIDZ-1 on the drives. I have several filesystems and volumes on
this s
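(For reference, a config like the one described would be built along
these lines; device and dataset names are placeholders:)

   zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0
   zfs create tank/fs1           # filesystems...
   zfs create -V 50g tank/vol1   # ...and volumes on the same pool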
Thanks. That makes sense. This is raidz2.
Hi Charles,
What kind of pool is this?
The SIZE and AVAIL amounts will vary depending on the ZFS redundancy and
whether the deflated or inflated amounts are displayed.
I attempted to explain the differences in the zpool list/zfs list
display here:
http://hub.opensolaris.org/bin/view/Communi
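The short version: for a raidz pool named, say, tank, zpool list reports
the raw (inflated) space across all devices, while zfs list reports what
is usable after redundancy, so the two won't match:

   zpool list tank    # SIZE/FREE include parity
   zfs list -r tank   # USED/AVAIL are net of redundancy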
Hi,
Sending to zfs-discuss too, since this seems to be related to the zfs
receive operation.
The following only holds true when the replication stream (i.e. the delta
between snap1 and snap2) is more than about 800GB.
If I proceed with this command, the transfer fails after some variable
amount
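For reference, the kind of command involved (names assumed): an
incremental stream of just the delta between the two snapshots, piped to
the receiving side:

   zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh host zfs receive -F backup/fs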
For those following the saga:
With the prefetch problem fixed, and data coming off the L2ARC instead of
the disks, the system switched from IO-bound to CPU-bound. I opened up the
throttles with some explicit PARALLEL hints in the Oracle commands, and we
were finally able to max out the single SSD:
Kjetil and Richard, thanks for this.
Kjetil Torgrim Homme wrote:
> Bogdan Ćulibrk writes:
>> What are my options from here? To move onto a zvol with greater
>> blocksize? 64k? 128k? Or will I get into other trouble going that way
>> when I have small reads coming from domU (ext3 with default blocksize
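Note that volblocksize can only be set at creation time, so moving to a
bigger block means creating a new zvol and copying the data across; a
sketch with assumed names:

   zfs create -V 20g -o volblocksize=64k tank/domu1-new
   dd if=/dev/zvol/rdsk/tank/domu1 of=/dev/zvol/rdsk/tank/domu1-new bs=1024k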
zfs ml wrote:
> sorry, scratch the above - I didn't see this:
>> 9. domUs have ext3 mounted with: noatime,commit=120
> Is the write traffic because you are backing up to the same disks that
> the domUs live on?
Yes, it is.
Yes, if you value your data you should change from USB drives to normal
drives. I have heard that USB can do some strange things; a normal
connection such as SATA is more reliable.
Hi,
If I may - you mentioned that you use ICH10 over ahci. As far as I know,
ICH10 is not officially supported by the ahci module. I have also tried it
myself on various ICH10 systems without success. OSOL wouldn't even
install on pre-130 builds, and I haven't tried since.
Regards,
Tonmaus
Richard Elling wrote:
...
As you can see, so much has changed, hopefully for the better, that
running performance benchmarks on old software just isn't very interesting.
NB: Oracle's Sun OpenStorage systems do not use Solaris 10, and if they
did, they would not be competitive in the market. The n