On Oct 7, 2010, at 11:40 AM, Jim Sloey wrote:
> One of us found the following:
>
> The presence of snapshots can cause some unexpected behavior when you attempt
> to free space. Typically, given appropriate permissions, you can remove a
> file from a full file system, and this action results in
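For anyone hitting this: the space of a deleted file stays allocated for as long
as any snapshot still references its blocks. A quick way to see where the space
went (the dataset name below is only an example):

  zfs list -o space rpool/export/home
  zfs list -t snapshot -r rpool/export/home

If USEDSNAP is large, destroying the snapshot(s) that reference the old blocks is
what actually frees the space; removing the live file alone will not.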
On Oct 9, 2010, at 1:18 PM, sridhar surampudi wrote:
> Hi,
>
> What is the right way to check the versions of zfs and zpool?
zpool get version POOLNAME
zfs get version DATASET
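If you are scripting around the CLI, a minimal sketch (pool and dataset names
below are placeholders) that captures both values:

  #!/bin/sh
  # Grab the pool format version and the file system (zfs) version.
  # "tank" and "tank/data" are placeholder names.
  pool_ver=$(zpool get version tank | awk 'NR==2 {print $3}')
  fs_ver=$(zfs get -H -o value version tank/data)
  echo "zpool version: $pool_ver, zfs version: $fs_ver"

zpool upgrade -v and zfs upgrade -v list every version the installed software
supports, which helps when you need to know what the system could run, not just
what the existing pool was created with.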
> I am writing a piece of code which calls the zfs command line. Before
> actually initiating and going ahead I want to
On Oct 8, 2010, at 10:01 AM, Bob Friesenhahn wrote:
> Regardless, nothing beats raidz3 based on computable statistics.
Well, no, not really. It all depends on the number of sets and the MTTR.
Consider the case where you have 1 set of raidz3 and 2 sets of 3-way
mirrors. The raidz3 set can only sta
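For anyone who wants to put rough numbers on that, the usual first-order MTTDL
approximations (textbook formulas, not figures from this thread; N = drives per
set, MTBF and MTTR are per drive) are:

  raidz3, one set of N drives:  MTTDL ~ MTBF^4 / (N*(N-1)*(N-2)*(N-3)*MTTR^3)
  3-way mirror, one set:        MTTDL ~ MTBF^3 / (6*MTTR^2)
  pool of k independent sets:   divide the per-set MTTDL by k

The MTTR term is what moves: a wide raidz3 set resilvers slowly while a small
mirror set resilvers quickly, so the winner depends on resilver time and the
number of sets, not just the parity count.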
Are we living in the past?
In the bad old days, UNIX systems spoke NFS and Windows systems spoke
CIFS. Creating a file system was expensive -- slices, partitions,
etc.
With ZFS, file systems (datasets) are relatively inexpensive.
So, are we putting too many constraints into a system
Ok,
Let's think about this for a minute. The log drive is c1t11d0 and it appears
to be almost completely unused, so we probably can rule out a ZIL bottleneck.
I run Ubuntu booting iSCSI against OSol 128a and the writes do not appear to be
synchronous. So, writes aren't the issue.
From the
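For reference, per-vdev activity -- including the dedicated log device -- can be
watched with (the pool name is a placeholder):

  zpool iostat -v tank 5

A log vdev that sits near zero write ops under load means the workload is not
issuing synchronous writes, so a faster slog would not change anything.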
On Sun, Oct 10, 2010 at 3:18 AM, sridhar surampudi
wrote:
> Hi,
>
> What is the right way to check the versions of zfs and zpool?
>
> I am writing a piece of code which calls the zfs command line. Before
> actually initiating and going ahead I want to check which version of zfs
> and zpool pres
Hi,
What is the right way to check the versions of zfs and zpool?
I am writing a piece of code which calls the zfs command line. Before actually
initiating and going ahead I want to check which versions of zfs and zpool are
present on the system.
As an example, "zpool split" is not present on prior
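A hedged way to gate on a newer subcommand from a script (MIN_VER below is a
placeholder -- check zpool upgrade -v on a system that has the feature to find
the real minimum):

  #!/bin/sh
  # Only attempt "zpool split" if the installed software claims a new-enough
  # pool version. MIN_VER is an assumed placeholder, not an authoritative value.
  MIN_VER=28
  cur=$(zpool upgrade -v | awk '/^ *[0-9]+ / {v=$1} END {print v}')
  if [ -n "$cur" ] && [ "$cur" -ge "$MIN_VER" ]; then
      echo "zpool split should be available (software supports version $cur)"
  else
      echo "zpool software version ${cur:-unknown} looks too old for split" >&2
  fi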
> We're aware of that. The original plan was to use
> mirrored DDRDrive X1s but we're experiencing
> stability issues. Chris George is being very
> responsible and will help us investigate that
> once we figure out our most pressing performance
> problems.
I feel I need to add to my commen
> If you have a single SSD for dedicated log, that will
> surely be a bottleneck
> for you.
We're aware of that. The original plan was to use mirrored DDRDrive X1s but
we're experiencing stability issues. Chris George is being very responsible
and will help us investigate that once we f
>> What sort of drives are these? It looks like iSCSI or FC device names, and
>> not local drives
>
> The "Pool_sas" is made of 15K SAS drives on external JBOD arrays (Dell
> MD1000) connected on mirrored LSI 9200-8e SAS HBAs.
>
> The "Pool_sata" is made of SATA drives on other JBODs.
>
> Th
>A couple of notes: we know the "Pool_sata" is resilvering, but we're
>concerned about the performance of the other pool ("Pool_sas"). We also know
>that we're not using jumbo frames as for some reason it makes the Linux box
>crash. Could that explain it all?
>What sort of drives are thes
A couple of notes: we know the "Pool_sata" is resilvering, but we're concerned
about the performance of the other pool ("Pool_sas"). We also know that we're
not using jumbo frames as for some reason it makes the Linux box crash. Could
that explain it all? What sort of drives are these? It l
> I'll suggest trying something completely different, like, dd if=/dev/zero
> bs=1024k | pv | ssh othermachine 'cat > /dev/null' ... Just to verify there
> isn't something horribly wrong with your hardware (network).
>
> In Linux, run "ifconfig" ... You should see "errors:0"
>
> Make sure each
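A bounded version of that test, so it finishes on its own (the remote host name
is from the original suggestion, the interface name is a placeholder, and pv has
to be installed on the sending side):

  # Push about 1 GB of zeros across the wire and watch the rate pv reports.
  dd if=/dev/zero bs=1024k count=1024 | pv | ssh othermachine 'cat > /dev/null'

  # On the Linux side the error counters should stay at zero.
  ifconfig eth0 | grep -i errors
  netstat -i

If pv reports far less than you expect, or the error counters climb, fix the
network before tuning ZFS.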