On Dec 30, 2009, at 9:35 PM, Ross Walker wrote:
On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow,
~35-44MB/s at 1MB blocksize writes. I then teste
On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow,
~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs
file system with the same te
On Wed, Dec 30, 2009 at 8:55 PM, Steffen Plotner wrote:
> Hello,
>
> I was doing performance testing, validating zvol performance in
> particular, and found zvol write performance to be slow, ~35-44MB/s at
> 1MB blocksize writes. I then tested the underlying zfs file system with the
> same t
On Dec 30, 2009, at 2:24 PM, Ragnar Sundblad wrote:
On 30 dec 2009, at 22.45, Richard Elling wrote:
On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
Richard,
That's an interesting question, whether it's worth it or not. I guess
the question is always who the targets for ZFS are (I assume
Hello,
I was doing performance testing, validating zvol performance in particular,
and found zvol write performance to be slow, ~35-44MB/s at 1MB blocksize
writes. I then tested the underlying zfs file system with the same test and got
121MB/s. Is there any way to fix this? I really woul
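For reference, a comparison along those lines can be reproduced with dd against a test zvol and a test dataset; the pool name "tank", the volume/dataset names, and the sizes below are placeholders, not the poster's actual setup:

    # test zvol and test filesystem (example names and sizes; compression off,
    # since /dev/zero would otherwise compress away)
    zfs create -V 10g tank/testvol
    zfs create tank/testfs

    # sequential 1MB writes to the raw zvol device
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=4096

    # the same write pattern against a file in the dataset
    dd if=/dev/zero of=/tank/testfs/bigfile bs=1024k count=4096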
I'm just wondering what some of you might do with your systems.
We have an EMC Clariion unit that I connect several Sun machines to. I allow
the EMC to do its hardware raid5 for several LUNs and then I stripe them
together. I considered using raidz and just configuring the EMC as a JBOD, bu
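For illustration, the two layouts being weighed would look roughly like this; the device names are made up:

    # stripe several hardware-RAID5 LUNs presented by the Clariion
    zpool create tank c2t0d0 c2t1d0 c2t2d0

    # alternative: present JBOD-style LUNs and let ZFS provide the redundancy
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0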
Just thought I would let you all know that I followed what Alex suggested along
with what many of you pointed out and it worked! Here are the steps I followed:
1. Break root drive mirror
2. zpool export filesystem
3. run the command to enable MPxIO and reboot the machine
4. zpool import filesystem
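On Solaris the sequence above corresponds roughly to the following commands; the device name is a placeholder and stmsboot -e is the usual way to enable MPxIO, so treat this as a sketch rather than the poster's exact commands:

    zpool detach rpool c0t1d0s0   # 1. break the root drive mirror (example device)
    zpool export filesystem       # 2. export the data pool
    stmsboot -e                   # 3. enable MPxIO; it prompts for the required reboot
    zpool import filesystem       # 4. import the pool under the new multipathed device names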
On Wed, Dec 30, 2009 at 7:08 AM, Thomas Burgess wrote:
>
> I'm about to build a ZFS based NAS and i'd like some suggestions about how to
> set up my drives.
>
> The case i'm using holds 20 hot swap drives, so i plan to use either 4 vdevs
> with 5 drives or 5 vdevs with 4 drives each (and a hot s
On Wed, 30 Dec 2009, Mike Gerdts wrote:
Should the block size be a tunable, so that it can match the page size of
SSDs (typically 4K, right?) and upcoming hard disks that sport a sector size
> 512 bytes?
Enterprise SSDs are still in their infancy. The actual page size of
an SSD could be almost anything. Due t
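As a rough sketch of what can be tuned today (pool and dataset names are placeholders): the sector-size assumption (ashift) is baked in at pool creation and can be inspected with zdb, while record size and zvol block size are per-dataset properties:

    zdb -C tank | grep ashift                        # sector-size exponent the pool was built with
    zfs set recordsize=4k tank/fs                    # per-dataset record size
    zfs create -V 10g -o volblocksize=4k tank/vol    # zvol block size, fixed at creation time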
Will I be able to see which files were "affected" by dedup, or can I do a
zfs send/receive to another filesystem to clean it up?
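A send/receive into a dataset with dedup disabled should rewrite the blocks; a minimal sketch, with "tank/data" standing in for the real dataset name:

    zfs set dedup=off tank/data                                # new writes stop being deduped
    zfs snapshot tank/data@cleanup
    zfs send tank/data@cleanup | zfs receive tank/data_clean   # fresh, un-deduped copy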
On 30 dec 2009, at 22.45, Richard Elling wrote:
> On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
>
>> Richard,
>>
>> That's an interesting question, whether it's worth it or not. I guess the
>> question is always who the targets for ZFS are (I assume everyone, though in
>> reality priorities
On Wed, Dec 30, 2009 at 3:12 PM, Richard Elling
wrote:
> If the allocator can change, what sorts of policies should be
> implemented? Examples include:
> + should the allocator stick with best-fit and encourage more
> gangs when the vdev is virtual?
> + should the allocato
On Dec 30, 2009, at 12:25 PM, Andras Spitzer wrote:
Richard,
That's an interesting question, whether it's worth it or not. I guess the
question is always who the targets for ZFS are (I assume everyone,
though in reality priorities have to be set as developer
resources are limited). For a ho
Richard Elling wrote:
>>> Why can't each file also have an "expiration date/time" field, e.g.,
>>> the date/time when the operating system will delete it automatically?
>>> This could be usable for backups, camera raw files, internet browser
>>> cached files, etc
On Dec 30, 2009, at 12:41 PM, Tomas Ögren wrote:
On 30 December, 2009 - Dennis Yurichev sent me these 0,7K bytes:
Hi.
Why can't each file also have an "expiration date/time" field, e.g.,
the date/time when the operating system will delete it automatically?
This could be usable for backups, camera raw fi
Ack..
I've just re-read your original post. :-) It's clear you are talking
about support for thin devices behind the pool, not features inside the
pool itself.
Mea culpa.
So I guess we wait for trim to be fully supported.. :-)
T.
On 31/12/2009 8:09 AM, Tristan Ball wrote:
To some exten
now this is getting interesting :-)...
On Dec 30, 2009, at 12:13 PM, Mike Gerdts wrote:
On Wed, Dec 30, 2009 at 1:40 PM, Richard Elling
wrote:
On Dec 30, 2009, at 10:53 AM, Andras Spitzer wrote:
Devzero,
Unfortunately that was my assumption as well. I don't have source
level
knowledge of
To some extent it already does.
If what you're talking about is filesystems/datasets, then all
filesystems within a pool share the same free space, which is
functionally very similar to each filesystem within the pool being
thin-provisioned. To get a "thick" filesystem, you'd need to set at
l
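For example, a dataset can be made effectively "thick" by giving it a reservation; names and sizes here are placeholders:

    zfs set reservation=100g tank/thick       # guarantee 100G of pool space to this dataset
    zfs set refreservation=100g tank/thick    # variant that excludes snapshot space from the guarantee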
On 30 December, 2009 - Dennis Yurichev sent me these 0,7K bytes:
> Hi.
>
> Why can't each file also have an "expiration date/time" field, e.g.,
> the date/time when the operating system will delete it automatically?
> This could be usable for backups, camera raw files, internet browser
> cached files, etc.
Richard,
That's an interesting question, whether it's worth it or not. I guess the question
is always who the targets for ZFS are (I assume everyone, though in reality
priorities have to be set as developer resources are limited). For a home
office, no doubt thin provisioning is not much of a use
On 30/12/2009 20:12, ono wrote:
I tried the deduplication feature but the performance of my fileserver
dived from writing 50MB/s via CIFS to 4MB/s.
what happens to the deduped blocks when you set dedup=off?
are they written back to disk?
is the dedup table deleted or is it still there?
Tur
Hi.
Why can't each file also have an "expiration date/time" field, e.g.,
the date/time when the operating system will delete it automatically?
This could be usable for backups, camera raw files, internet browser
cached files, etc.
On Wed, 30 Dec 2009, Richard Elling wrote:
Disagree. Scrubs and resilvers are IOPS bound.
This is a case of "it depends". On both of my Solaris systems, scrubs
seem to be bandwidth-limited. However, I am not using raidz or SATA
and the drives are faster than the total connectivity.
Bob
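One way to see which resource a scrub is bound by is simply to watch the pool while it runs; "tank" is a placeholder pool name:

    zpool scrub tank
    zpool status tank         # scrub progress and estimated completion
    zpool iostat -v tank 5    # per-vdev bandwidth and operations per second
    iostat -xn 5              # per-disk %busy and service times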
On Wed, Dec 30, 2009 at 1:40 PM, Richard Elling
wrote:
> On Dec 30, 2009, at 10:53 AM, Andras Spitzer wrote:
>
>> Devzero,
>>
>> Unfortunately that was my assumption as well. I don't have source level
>> knowledge of ZFS, though based on what I know there wouldn't be an easy way to
>> do it. I'm not
I tried the deduplication feature but the performance of my fileserver
dived from writing 50MB/s via CIFS to 4MB/s.
what happens to the deduped blocks when you set dedup=off?
are they written back to disk?
is the dedup table deleted or is it still there?
thanks
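As far as I know, turning dedup off only affects new writes: blocks deduplicated earlier stay shared (and the dedup table stays on disk) until they are rewritten or freed. A quick way to look at the state of things, with placeholder names:

    zfs set dedup=off tank/share    # new writes are no longer deduplicated
    zdb -DD tank                    # dump the dedup table (DDT) histogram still held by the pool
    zpool get dedupratio tank       # overall dedup ratio for the pool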
On 12/30/2009 2:40 PM, Richard Elling wrote:
There are a few minor bumps in the road. The ATA PASSTHROUGH
command, which allows TRIM to pass through the SATA drivers, was
just integrated into b130. This will be more important to small servers
than SANs, but the point is that all parts of the sof
On Dec 30, 2009, at 10:53 AM, Andras Spitzer wrote:
Devzero,
Unfortunately that was my assumption as well. I don't have source
level knowledge of ZFS, though based on what I know there wouldn't be
an easy way to do it. I'm not even sure it's only a technical
question, but a design question,
On Dec 30, 2009, at 11:01 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Thomas Burgess wrote:
Just curious, but in your "ideal" situation, is it considered best
to use 1 controller for each vdev or use a different controller for
each device in the vdev (I'd guess the latter but I've been wr
> making transactional, logging filesystems
> thin-provisioning aware should be hard to do, as
> every new and every changed block is written to a new
> location. so what applies to zfs, should also apply to btrfs or
> nilfs or similar filesystems.
>
> i`m not sure if there is a good way to make z
On Dec 30, 2009, at 10:56 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Richard Elling wrote:
He's limited by GbE, which can only do 100 MB/s or so...
the PCI busses, bridges, memory, controllers, and disks will
be mostly loafing, from a bandwidth perspective. In other
words, don't worry abo
On Wed, Dec 30, 2009 at 19:23, roland wrote:
> making transactional, logging filesystems thin-provisioning aware should be
> hard to do, as every new and every changed block is written to a new location.
> so what applies to zfs, should also apply to btrfs or nilfs or similar
> filesystems.
If t
On Wed, Dec 30, 2009 at 2:01 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 30 Dec 2009, Thomas Burgess wrote:
>
>>
>> Just curious, but in your "ideal" situation, is it considered best to use
>> 1 controller for each vdev or use a different controller for each device in
>> t
On Wed, 30 Dec 2009, Thomas Burgess wrote:
Just curious, but in your "ideal" situation, is it considered best
to use 1 controller for each vdev or use a different controller for
each device in the vdev (I'd guess the latter but I've been wrong
before)
From both a fault-tolerance standpoint,
On Wed, 30 Dec 2009, Richard Elling wrote:
He's limited by GbE, which can only do 100 MB/s or so...
the PCI busses, bridges, memory, controllers, and disks will
be mostly loafing, from a bandwidth perspective. In other
words, don't worry about it.
Except that cases like 'zfs scrub' and resilve
On Wed, Dec 30, 2009 at 1:17 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 30 Dec 2009, Thomas Burgess wrote:
>
>>
>> and, onboard with 6 sata ports... so what would be the best method of
>> connecting the drives if i go with 4 raidz vdevs or 5 raidz vdevs?
>>
>
> Try to
Devzero,
Unfortunately that was my assumption as well. I don't have source-level
knowledge of ZFS, though based on what I know there wouldn't be an easy way to do
it. I'm not even sure it's only a technical question, but a design question,
which would make it even less feasible.
Apart from the te
On Wed, 30 Dec 2009, Dr. Martin Mundschenk wrote:
I have a mac mini running as an OSOL box. The OS is installed on the
internal hard drive on the vdrive rpool. On rpool there is no
redundancy.
If I add an external block device (USB / Firewire) to rpool to
mirror the internal hard drive and
On Wed, 30 Dec 2009, Richard Elling wrote:
are written because you also assume that you can later read either side. For
ZFS, if only one side of the mirror is written, you know the bad side is bad
because of the checksum. The checksum is owned by the parent, which is
an important design decision
On Dec 30, 2009, at 10:26 AM, tom wagner wrote:
Yeah, still no joy on getting my pool back. I think I might have to
try grabbing another server with a lot more memory and slapping the
HBA and the drives in that. Can ZFS deal with a controller change?
Yes.
-- richard
On Dec 30, 2009, at 10:17 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Thomas Burgess wrote:
and, onboard with 6 sata ports... so what would be the best
method of connecting the drives if i go with 4 raidz vdevs or 5
raidz vdevs?
Try to distribute the raidz vdevs as evenly as possib
Yeah, still no joy on getting my pool back. I think I might have to try
grabbing another server with a lot more memory and slapping the HBA and the
drives in that. Can ZFS deal with a controller change?
making transactional, logging filesystems thin-provisioning aware should be hard
to do, as every new and every changed block is written to a new location.
so what applies to zfs should also apply to btrfs or nilfs or similar
filesystems.
I'm not sure if there is a good way to make zfs thin-prov
On Wed, 30 Dec 2009, Thomas Burgess wrote:
and, onboard with 6 sata ports... so what would be the best
method of connecting the drives if i go with 4 raidz vdevs or 5
raidz vdevs?
Try to distribute the raidz vdevs as evenly as possible across the
available SATA controllers. In other wo
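A sketch of what "distribute evenly" might look like for the 20-drive build discussed above, with c1-c3 standing in for the three 8-port cards and c4 for the onboard ports (all device names invented): each raidz vdev draws at most two disks from any one controller, so I/O is spread out and no single controller carries a whole vdev:

    zpool create tank \
      raidz c1t0d0 c1t1d0 c2t0d0 c3t0d0 c4t0d0 \
      raidz c1t2d0 c2t1d0 c2t2d0 c3t1d0 c4t1d0 \
      raidz c1t3d0 c2t3d0 c3t2d0 c3t3d0 c4t2d0 \
      raidz c1t4d0 c2t4d0 c3t4d0 c4t3d0 c4t4d0 \
      spare c4t5d0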
seems, my problem is unrelated.
after disabling the GUI and working console-only, I see no freezes, so it must
be a problem of the desktop/X environment and not a kernel/zfs issue.
sorry for the noise.
On Dec 30, 2009, at 9:35 AM, Bob Friesenhahn wrote:
On Tue, 29 Dec 2009, Ross Walker wrote:
Some important points to consider are that every write to a raidz
vdev must be synchronous. In other words, the write needs to
complete on all the drives in the stripe before the write may
return
On Wed, Dec 30, 2009 at 12:35 PM, Bob Friesenhahn
wrote:
> On Tue, 29 Dec 2009, Ross Walker wrote:
>>
>>> Some important points to consider are that every write to a raidz vdev
>>> must be synchronous. In other words, the write needs to complete on all the
>>> drives in the stripe before the writ
On Tue, 29 Dec 2009, Ross Walker wrote:
Some important points to consider are that every write to a raidz vdev must
be synchronous. In other words, the write needs to complete on all the
drives in the stripe before the write may return as complete. This is also
true of "RAID 1" (mirrors) whi
On Dec 30, 2009, at 7:50 AM, Thomas Burgess wrote:
ok, but how should i connect the drives across the controllers?
Don't worry about the controllers. They are at least an order of
magnitude more reliable than the disks and if you are using HDDs,
then you will have plenty of performance.
-- ri
Hi!
I wonder if the following scenario works:
I have a mac mini running as an OSOL box. The OS is installed on the internal
hard drive on the vdrive rpool. On rpool there is no redundancy.
If I add an external block device (USB / Firewire) to rpool to mirror the
internal hard drive and if the
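If the external disk is at least as large as the internal one, attaching it converts rpool into a two-way mirror; a sketch with invented device names (on x86 the new half also needs boot blocks installed):

    zpool attach rpool c0t0d0s0 c2t0d0s0    # mirror the internal slice onto the external disk
    zpool status rpool                      # wait for the resilver to complete
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t0d0s0   # make the new half bootable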
ok, but how should I connect the drives across the controllers?
I'll have 3 pci-x cards, each with 8 sata ports,
2 pci-x buses at 133 MHz and 2 at 100 MHz,
and onboard with 6 sata ports... so what would be the best method of
connecting the drives if I go with 4 raidz vdevs or 5 raidz vdevs?
On 29-Dec-09, at 11:53 PM, Ross Walker wrote:
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn
wrote:
...
However, zfs does not implement "RAID 1" either. This is easily
demonstrated since you can unplug one side of the mirror and the
writes to the zfs mirror will still succeed, catching
Hello,
On Dec 30, 2009, at 2:08 PM, Thomas Burgess wrote:
> I'm about to build a ZFS based NAS and i'd like some suggestions about how to
> set up my drives.
>
> The case i'm using holds 20 hot swap drives, so i plan to use either 4 vdevs
> with 5 drives or 5 vdevs with 4 drives each (and a ho
Ok, I figured out that apparently I was the idiot in this story, again.
I forgot to set SO_RCVBUF on my network sockets higher, so that's why I
was dropping input packets.
The zfs_txg_timeout=1 flag is still necessary (or else dropping occurs
when com
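For anyone wanting to reproduce the zfs_txg_timeout=1 setting, the usual routes are a live mdb poke or an /etc/system entry; a sketch (the value 1 is just the one mentioned above):

    # change the transaction group timeout on a running system
    echo "zfs_txg_timeout/W0t1" | mdb -kw

    # or make it persistent across reboots by adding this line to /etc/system
    set zfs:zfs_txg_timeout = 1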
I can't answer your question - but I would like to see more details about the
system you are building (sorry if off topic here). What motherboard and what
compact flash adapters are you using?
I know dedup is on the roadmap for the 7000 series, but I don't think it is
officially supported yet, since we would have seen a note about the release of
the software on the FishWorks Wiki
http://wikis.sun.com/display/FishWorks/Software+Updates
I'm about to build a ZFS based NAS and i'd like some suggestions about how
to set up my drives.
The case i'm using holds 20 hot swap drives, so i plan to use either 4 vdevs
with 5 drives or 5 vdevs with 4 drives each (and a hot spare inside the
machine)
The motherboard i'm getting has 4 pci-x sl
Hi,
Has anyone heard about any plans for ZFS to support thin devices? I'm
talking about the thin device feature of SAN frames (EMC, HDS) which provides
more efficient space utilization. The concept is similar to ZFS with the pool
and datasets, though the pool in this case is in the SAN f
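On the ZFS side, the closest analogue to a SAN thin device today is a sparse zvol, which advertises its full size without reserving the space up front; a minimal sketch with placeholder names:

    zfs create -V 100g tank/thickvol       # ordinary zvol: 100G reserved from the pool immediately
    zfs create -s -V 100g tank/thinvol     # sparse zvol: space consumed only as blocks are written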
Richard,
well… I am willing to experiment with dedup but I am quite unsure how to
share my results effectively. That is, what would be the interesting data that
would help improve ZFS/dedup, and how should that data be presented?
I reckon that just from sharing general issues, some hard fa
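In case it helps, the numbers that are easy to collect and compare are the pool-wide dedup ratio and the DDT histogram; "tank" is a placeholder:

    zpool get dedupratio tank    # overall dedup ratio
    zpool list tank              # capacity, allocation, and the DEDUP column in dedup-capable builds
    zdb -DD tank                 # dedup table histogram: entry counts, reference counts, sizes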