On Thu, Jun 18, 2009 at 10:56 AM, Dave Ringkor no-re...@opensolaris.org wrote:
But what if I used zfs send to save a recursive snapshot of my root pool on
the old server, booted my new server (with the same architecture) from the
DVD in single user mode and created a ZFS pool on its local
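The approach Dave describes can be sketched with zfs send/receive. The pool names and stream file path here are hypothetical, and (as later replies in the thread caution) system-specific state may still need manual fixing:

```shell
# On the old server: recursive snapshot of the root pool, then a
# replication stream (-R preserves descendant datasets and properties).
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate > /backup/rpool.stream

# On the new server, booted single-user from DVD, after creating a
# pool on its local disk:
zfs receive -Fd rpool < /backup/rpool.stream
```

Boot blocks would still have to be installed (installgrub on x86, installboot on SPARC) and host-specific configuration adjusted, so this is a starting point rather than a turnkey migration.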
Bob Friesenhahn wrote:
On Wed, 17 Jun 2009, Haudy Kazemi wrote:
usable with very little CPU consumed.
If the system is dedicated to serving files rather than also being
used interactively, it should not matter much what the CPU usage is.
CPU cycles can't be stored for later use. Ultimately,
2009/6/18 Timh Bergström timh.bergst...@diino.net:
USB sticks have proven a bad idea with ZFS mirrors
I think USB sticks are a bad idea for mirrors in general... :-)
ZFS on iSCSI *is* flaky
OK, so what is the status of your bug report about this? Was it ignored or
just rejected?..
Flaming people on
yeah. many of those ARM systems will be low-power
builtin-crypto-accel builtin-gigabit-MAC based on Orion and similar,
NAS (NSLU2-ish) things begging for ZFS.
So what's the boot environment they use?
cd It's true for most of the Intel Atom family (Zxxx and Nxxx but
cd not the 230 and
Hi Joseph ;
You can't share SSDs between pools (at least for today) unless you slice them.
Also, it's better to use two SSDs for the L2ARC, as depending on your system
there can be slight limitations when using a single SSD.
Best regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun
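A sketch of the slicing workaround Mertol mentions, with hypothetical device and pool names: the SSD is partitioned into two slices with format(1M), and each pool then gets one slice as a cache device.

```shell
# Hypothetical: c2t0d0 is the SSD, already sliced into s0 and s1
# via format(1M). Each pool gets one slice as an L2ARC cache device.
zpool add tank cache c2t0d0s0
zpool add backup cache c2t0d0s1
```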
On 18 June 2009 at 09:42, Bogdan M. Maryniuk bogdan.maryn...@gmail.com wrote:
ZFS on iSCSI *is* flaky
OK, so what is the status of your bug report about this? Was it ignored or
just rejected?..
No bug report because I don't think it's the file system's fault, and
why bother when disappearing vdevs
Hi Cindy and Christo,
this is a good example of how useless ZFS ACLs are. Nobody understands how to
use them!
Please note in Cindy's examples above:
You cannot use file_inherit on files. Inheritance can only be set on
directories. Depending on the zfs aclinherit mode, the result may not be
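A small example of the point above, with a made-up user and path: the ACE carrying file_inherit goes on the directory, and files created afterwards pick it up.

```shell
# Set an inheritable ACE on the directory (not on a file):
chmod A+user:alice:read_data/write_data:file_inherit:allow /tank/shared
ls -dv /tank/shared          # shows the file_inherit ACE on the directory
touch /tank/shared/newfile
ls -v /tank/shared/newfile   # the new file carries the inherited entry
```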
On Thu, 18 Jun 2009, Haudy Kazemi wrote:
for text data, LZJB compression had negligible performance benefits (task
times were unchanged or marginally better) and less storage space was
consumed (1.47:1).
for media data, LZJB compression had negligible performance benefits (task
times were
Hi all,
(down to the wire here on EDU grant pricing :)
i'm looking at buying a pair of 7110's in the EDU grant sale.
The price is sure right. I'd use them in a mirrored, cold-failover
config.
I'd primarily be using them to serve a vmware cluster; the current config
is two standalone ESX
correct ratio of arc to l2arc?
from http://blogs.sun.com/brendan/entry/l2arc_screenshots
It costs some DRAM to reference the L2ARC, at a rate proportional to record
size. For example, it currently takes about 15 Gbytes of DRAM to reference
600 Gbytes of L2ARC - at an 8 Kbyte ZFS record size.
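The 15 Gbyte figure above can be sanity-checked with the commonly cited estimate of roughly 200 bytes of ARC header per L2ARC record (the exact header size varies by release):

```shell
# ~200 bytes of DRAM per L2ARC record is a rough estimate; 600 GB of
# L2ARC at an 8 KB recordsize then needs on the order of 15 GB of DRAM.
l2arc_bytes=600000000000   # 600 GB of L2ARC
recordsize=8192            # 8 KB ZFS recordsize
header=200                 # approx. bytes of ARC header per L2ARC record
records=$(( l2arc_bytes / recordsize ))
dram_bytes=$(( records * header ))
dram_gb=$(( dram_bytes / 1000000000 ))   # integer GB; true value ~14.6 GB
echo "records: $records, DRAM: ~$dram_gb GB"
```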
hi Dirk,
How might we explain find, run on a Linux client against an NFS-mounted
file system under the 7000, taking significantly longer (i.e. performance
behaving as though the command were run from Solaris)? Not sure if find
would have the intelligence to differentiate between file system
Hi Jose,
Well it depends on the total size of your Zpool and how often these
files are changed.
I was at a customer, a huge internet provider, who had 40 X4500s
with standard Solaris, using ZFS.
All the machines were equipped with 48x 1TB disks. The machines were
used to provide the
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
bmm That's why I think that saying "my $foo crashes, therefore it
bmm is all crap" is a bad idea: either help to fix it or just don't
bmm use it,
First, people are allowed to
correct ratio of arc to l2arc?
from http://blogs.sun.com/brendan/entry/l2arc_screenshots
Thanks Rob. Hmm...that ratio isn't awesome.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Thu, Jun 18, 2009 at 11:51:44AM -0400, Dan Pritts wrote:
I'm curious about a couple of things that would be unsupported.
Specifically, whether they are merely unsupported or have specifically
been crippled in the software.
We have not crippled the software in any way, but we have designed an
Cor Beumer - Storage Solution Architect wrote:
Hi Jose,
Well it depends on the total size of your Zpool and how often these
files are changed.
...and the average size of the files. For small files, it is likely
that the default
recordsize will not be optimal, for several reasons. Are
Hey ZFS experts,
Where is the ZFS metadata stored? Can it be viewed through some commands?
Here is my requirement: I have a machine with lots of ZFS filesystems on it
under a couple of zpools, and there is this other new machine with empty disks;
what I want now is the similar layout of
Ethan Erchinger wrote:
correct ratio of arc to l2arc?
from http://blogs.sun.com/brendan/entry/l2arc_screenshots
Thanks Rob. Hmm...that ratio isn't awesome.
TANSTAAFL
A good SWAG is about 200 bytes for L2ARC directory in the ARC for
each record in the L2ARC.
So if your recordsize
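Working the same ~200-byte SWAG in the other direction, with a hypothetical 4 GB of ARC set aside for L2ARC headers, shows how strongly the amount of indexable L2ARC depends on recordsize:

```shell
arc_budget=4000000000   # hypothetical: 4 GB of ARC spent on L2ARC headers
header=200              # approx. bytes of ARC header per L2ARC record
for recordsize in 8192 131072; do
  records=$(( arc_budget / header ))
  l2arc_gb=$(( records * recordsize / 1000000000 ))
  echo "recordsize $recordsize: ~$l2arc_gb GB of L2ARC indexable"
done
```

At an 8 KB recordsize the 4 GB budget indexes roughly 163 GB of L2ARC; at the default 128 KB it indexes roughly 2.6 TB.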
Hi Nikhil,
take a look at the output from 'zpool history'. You should get/see all
the
information you need to be able to recreate your configuration.
http://docs.sun.com/app/docs/doc/819-5461/gdswe?a=view
Cheers,
Henrik
On Jun 18, 2009, at 8:47 PM, Nikhil wrote:
Hey ZFS experts,
Where
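To illustrate Henrik's suggestion: 'zpool history' prints every zpool and zfs command ever run against a pool, which can then be replayed by hand on the new machine. The pool name and output lines here are illustrative:

```shell
zpool history tank
# Typical output:
#   History for 'tank':
#   2009-06-18.09:15:12 zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
#   2009-06-18.09:16:03 zfs create tank/home
#   2009-06-18.09:16:40 zfs set compression=on tank/home

# Strip the leading timestamps to get commands suitable for replay:
zpool history tank | grep '^[0-9]' | cut -d' ' -f2-
```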
We have a 7110 on try and buy program.
We tried using the 7110 with XEN Server 5 over iSCSI and NFS. Nothing seems to
solve the slow write problem. Within the VM, we observed around 8MB/s on
writes. Read performance is fantastic. Some troubleshooting was done with local
SUN rep. The
On Thu, Jun 18, 2009 at 12:12:16PM +0200, Cor Beumer - Storage Solution
Architect wrote:
What they noticed on the X4500 systems was that when the zpool became
filled to about 50-60%, the performance of the system
dropped enormously.
They do claim this has to do with the fragmentation
Hi Dave,
Until the ZFS/flash support integrates into an upcoming Solaris 10
release, I don't think we have an easy way to clone a root pool/dataset
from one system to another system because system specific info is still
maintained.
Your manual solution sounds plausible but probably won't work
On 18-Jun-09, at 12:14 PM, Miles Nordin wrote:
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
...
tt /. is no person...
... you and I both know it's plausible
speculation that Apple delayed unleashing ZFS on their consumers
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI
testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS.
Setup: Dell 2950 with a SAS HBA and SATA 3x5 raidz (15 disks, no separate ZIL),
iSCSI using vmware ESXi 3.5 software initiator.
Scott
--
This
There's a configuration issue in there somewhere. I have a ZFS based
system serving up to some ESX servers working great with a few
exceptions.
At first, performance was awful, but there was some confusion about how to
optimize network traffic on ESX, so I installed a fresh one using only
the
With XenServer 4 and NFS you had to grow the disks (modified manually from
thin to fat) in order to get decent performance.
On Fri, Jun 19, 2009 at 7:06 AM, lawrence ho no-re...@opensolaris.org wrote:
We have a 7110 on try and buy program.
We tried using the 7110 with XEN Server 5 over iSCSI
Hey Lawrence,
Make sure you're running the latest software update. Note that this forum
is not the appropriate place to discuss support issues. Please contact your
official Sun support channel.
Adam
On Thu, Jun 18, 2009 at 12:06:02PM -0700, lawrence ho wrote:
We have a 7110 on try and buy
Gary Mills wrote:
On Thu, Jun 18, 2009 at 12:12:16PM +0200, Cor Beumer - Storage Solution
Architect wrote:
What they noticed on the X4500 systems was that when the zpool became
filled to about 50-60%, the performance of the system
dropped enormously.
They do claim this has to do with
cd == Casper Dik casper@sun.com writes:
yeah. many of those ARM systems will be low-power
builtin-crypto-accel builtin-gigabit-MAC based on Orion and
similar, NAS (NSLU2-ish) things begging for ZFS.
cd So what's the boot environment they use?
i think it is called
Toby,
On 17-Jun-09, at 7:37 AM, Orvar Korvar wrote:
Ok, so you mean the comments are mostly FUD and bull shit? Because
there are no bug reports from the whiners? Could this be the case? It
is mostly FUD? Hmmm...?
Having read the thread, I would say without a doubt.
Slashdot was never
On Thu, Jun 18, 2009 at 4:28 AM, Miles Nordin car...@ivy.net wrote:
djm http://opensolaris.org/os/project/osarm/
yeah. many of those ARM systems will be low-power
builtin-crypto-accel builtin-gigabit-MAC based on Orion and similar,
NAS (NSLU2-ish) things begging for ZFS.
Are they feasible
Fajar A. Nugraha wrote:
On Thu, Jun 18, 2009 at 4:28 AM, Miles Nordin car...@ivy.net wrote:
djm http://opensolaris.org/os/project/osarm/
yeah. many of those ARM systems will be low-power
builtin-crypto-accel builtin-gigabit-MAC based on Orion and similar,
NAS (NSLU2-ish) things begging
On Fri, Jun 19, 2009 at 11:16 AM, Erik Trimble erik.trim...@sun.com wrote:
I can't say as to the entire Atom line of stuff, but I've found the Atoms
are OK for desktop use, and nowhere near powerful enough for even a basic
NAS server. The demands of wire-speed Gigabit, ZFS, and