You could try to build it from source on your FreeNAS machine. If it
builds, I guess it will work.
Here is the project page: http://www.maier-komor.de/mbuffer.html
BR
Sebastian
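For reference, a typical from-source build looks roughly like this (a minimal
sketch; the version and install prefix are placeholders, and on FreeBSD/FreeNAS
you may need gmake instead of make):

  # download a release tarball from the project page first, then:
  tar xzf mbuffer-<version>.tgz && cd mbuffer-<version>
  ./configure --prefix=/usr/local
  make          # use gmake here if plain make fails
  make install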
On 05.05.2015 at 19:38, openindiana-discuss-requ...@openindiana.org wrote:
Message: 3
Date: Tue, 05 May 2015 15:30
On 05.05.2015 at 14:00, openindiana-discuss-requ...@openindiana.org wrote:
As a side note: just piping to ssh is excruciatingly slow; using netcat
("nc") speeds things up at least fourfold.
The method that worked fine for me was using mbuffer. (I couldn't make
netcat work, shame on me, but I also
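Roughly, the pipeline being discussed looks like this (hostnames, port,
snapshot names and buffer sizes are placeholders; exact flags vary between
mbuffer versions):

  # on the receiving host: listen and feed the stream into zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

  # on the sending host: pipe zfs send through mbuffer over the network
  zfs send -R tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090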
... openindiana-discuss-requ...@openindiana.org wrote:
Message: 2
Date: Thu, 26 Mar 2015 08:34:01 -0500 (CDT)
From: Bob Friesenhahn
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] rsyncd configuration
Message-ID:
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flow
Hi,
I am trying to solve a problem that I have ignored for quite a long
time. The issue is that "messages" are flooded with rsync permission
errors, and that some files are not backed up properly. What I have
found so far is the following:
- rsyncd is running as "root"
- the issue is the same
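One thing worth checking in this situation is the per-module uid/gid in
rsyncd.conf; by default a module runs as "nobody" even when the daemon itself
is started as root. A minimal sketch (module name and path are hypothetical):

  # /etc/rsyncd.conf
  [backup]
      path = /tank/backup
      read only = false
      # without these, transfers run as nobody and can hit permission errors
      uid = root
      gid = root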
Hi Geoff and Jason (and anybody else interested),
thanks for your comments.
I have been using arc_summary.pl. From what I understand the stats are
since boot time, which is around 700 days in my case, so the impact of
any tuning measures will not reflect there. Still there are some findings:
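One way around the since-boot problem is to sample the raw kstats at an
interval and look at the deltas; a sketch (statistic names are from
zfs:0:arcstats):

  # print ARC hit/miss counters every 10 seconds, 6 samples
  kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses 10 6
  # current ARC size and target maximum
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c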
Hi,
I have a machine with 12 GB RAM and 20 GB total pool size. The L2ARC for one
pool is a 107 GB SSD. I am wondering a bit about:
- why more of the RAM is not used by the ARC (I have run the ARC
optimisation script from the evil tuning guide; the machine is exclusively a
file server, so no other apps steal RAM)
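For reference, the tuning in question usually comes down to setting
zfs_arc_max in /etc/system and rebooting; the value below is only an
example, not a recommendation:

  # /etc/system - cap or raise the ARC target maximum (example: 10 GiB)
  set zfs:zfs_arc_max=10737418240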
Hi,
today I had a faulted disk in one of my pools. For some reason, the hot
spare did not kick in; its status remained "available".
The last thing I did about that hot spare was a zpool clear poolname,
upon which it became available again automatically.
Now, strangely enough, that pool had
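If a spare refuses to kick in automatically, it can be attached by hand;
a sketch with hypothetical device names:

  # replace the faulted disk with the configured hot spare manually
  zpool replace poolname c3t5d0 c3t9d0   # c3t5d0 = faulted disk, c3t9d0 = spare
  zpool status -x poolname               # verify the spare is resilvering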
Hi,
I just wanted to share an experience I had with an issue that seems
similar to what Clement BRIZARD reported recently.
Monday morning I found our ZFS Backup server hanging with a message that
a disk was gone. The machine was hanging in 'zpool status' indefinitely.
Shutdown failed, too. So, ent
Once the resilvering is done, should I do a scrub job?
On 23/10/2013 10:16, Sebastian Gabler wrote:
From 'zpool status' this looks very much like an issue I had myself
2.5 years ago with a pool containing WD20EARS and some other 2 TB
disks. It started exactly when I moved the
From 'zpool status' this looks very much like an issue I had myself 2.5
years ago with a pool containing WD20EARS and some other 2 TB disks. It
started exactly when I moved them from the Intel ICH to an LSI SAS 1068
controller, and added more disks to the enclosure. I decided then to
destroy th
Basically, there are two mechanisms supported:
1. anonymous access using the guestok=true parameter in zfs
2. mapping user ids using idmap.
The former is simple, the latter not so much. You need to decide which
path you want to follow. It could be that narrowing down the use case
also helps th
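Roughly, the two paths look like this on the illumos CIFS server (share,
pool and account names are hypothetical):

  # 1. anonymous (guest) access
  zfs set sharesmb=name=public,guestok=true tank/public

  # 2. name-based identity mapping for authenticated access
  idmap add winuser:alice@EXAMPLE.COM unixuser:alice
  idmap list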
Hi,
I am trying to mount a ZFS fileset as oradata on an OL6.4 client. Using
client defaults (=nfs4) I can't chown the mountpoint to oracle:oinstall.
The error I am getting is "invalid argument". As far as my research goes, this
has to do with name resolution/idmap issues. I found some how-tos to
debug
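For what it is worth, the usual suspect for "invalid argument" on chown over
NFSv4 is a mismatched NFSv4 domain between client and server; aligning it
looks roughly like this (the domain is a placeholder):

  # on the OI/Solaris server: show and set the NFSv4 mapping domain
  sharectl get -p nfsmapid_domain nfs
  sharectl set -p nfsmapid_domain=example.com nfs

  # on the OL6.4 client: set the same domain in /etc/idmapd.conf
  #   [General]
  #   Domain = example.com
  # then: service rpcidmapd restart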
On Jun 17, 2013, at 1:36 PM, Sebastian Gabler wrote:
>Dear Bill, Peter, Richard, and Saso.
>
>Thanks for the great comments.
>
>Now, changing to reverse gear, isn't it more likely to lose data by having a
pool that spans across multiple HBAs than if you connect all dr
Dear Bill, Peter, Richard, and Saso.
Thanks for the great comments.
Now, changing to reverse gear, isn't it more likely to lose data by
having a pool that spans across multiple HBAs than if you connect all
drives to a single HBA? I mean, unless you make sure that there are
never any more driv
Hi,
it occurred to me that obviously some ZFS Storage systems only feature a
single SAS HBA, including the ZFSSA 7320. At least, as far as I understand.
From what I saw in the 7320 documentation, each of the two HBA ports is
connected to each of the two ports of a shelf, which should protect fro
:
Recommendations for fast storage
Message-ID:<0b43e9ea-10fd-41af-81ef-31644ff49...@richardelling.com>
Content-Type: text/plain; charset=windows-1252
Terminology warning below...
On Apr 18, 2013, at 3:46 AM, Sebastian Gabler wrote:
>On 18.04.2013 03:09, openindiana-discuss-requ...@openin
On 19.04.2013 at 11:22, openindiana-discuss-requ...@openindiana.org wrote:
Message: 1
Date: Thu, 18 Apr 2013 16:03:32 -0500
From: Timothy Coalson
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] vdev reliability was:
Recommendations for fast storage
Message-I
On 18.04.2013 at 16:28, openindiana-discuss-requ...@openindiana.org wrote:
Message: 1
Date: Thu, 18 Apr 2013 12:17:47 +
From: "Edward Ned Harvey (openindiana)"
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Recommendations for fast storage
Message-ID:
On 18.04.2013 at 03:09, openindiana-discuss-requ...@openindiana.org wrote:
Message: 1
Date: Wed, 17 Apr 2013 13:21:08 -0600
From: Jan Owoc
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Recommendations for fast storage
Message-ID:
Content-Type: text/plain;
On 17.04.2013 at 11:16, "Edward Ned Harvey (openindiana)" wrote:
It's a fact that NAND has a finite number of write cycles, and it gets slower
to write the more times it has been rewritten.
AFAIC, these are two facts, and the latter is much more relevant in
production. Someone mentioned it ear
Content-Type: text/plain; charset="us-ascii"
>> From: Sebastian Gabler [mailto:sequoiamo...@gmx.net]
>> Sent: Saturday, April 13, 2013 11:38 AM
>>
>> - zfs send mainbranch@1 -R > /pool2/mainbranch.dmp for each nfs, iscsi,
>> smb
>It is advisable, if possible, t
On 13.04.2013 at 17:37, Sebastian Gabler wrote:
Hi Jim and Edward,
thanks for your comments.
Taking them into account, I decided to apply the following method:
- clean up: all the file sets that should remain on the other
pool in the future have been migrated using zfs send | receive
Hi Jim and Edward,
thanks for your comments.
Taking them into account, I decided to apply the following method:
- clean up: all the file sets that should remain on the other
pool in the future have been migrated using zfs send | receive. That
worked well. Mount points were moved (the
I am planning to restructure one of the zpools on my file server,
re-organizing the vdevs. The process will draw upon zfs send | zfs recv
back and forth to another local zpool.
The zpool in question contains (nested) file sets and zvols. Some of the
file sets will remain on the other pool to max
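Per dataset tree, that back-and-forth migration typically looks something
like this (pool, dataset and snapshot names are placeholders):

  # snapshot recursively and replicate the whole tree to the scratch pool
  zfs snapshot -r tank/data@migrate1
  zfs send -R tank/data@migrate1 | zfs receive -u -d pool2

  # after re-creating the vdev layout, send everything back the same way
  zfs snapshot -r pool2/data@migrate2
  zfs send -R pool2/data@migrate2 | zfs receive -u -d tank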
Hello,
there is a reproducible issue I am seeing copying files over Solaris
CIFS (OI 151a) to Windows clients.
On the same client, I copy different large files from the same share
simultaneously in 2 sessions
Session 1 is established as \\myhostname\mysharename\file1
Session 2 is establish
Hi Robbie,
thanks for your elaborate thoughts. I am trying to follow, so another
round of comments from my side inserted
On 07.03.2013 at 20:37, openindiana-discuss-requ...@openindiana.org wrote:
--
Message: 3
Date: Thu, 7 Mar 2013 11:36:04 -0500
From: Robbie Crash
Hi Robbie, I have inserted some comments:
On 06.03.2013 at 01:38, openindiana-discuss-requ...@openindiana.org wrote:
--
Message: 6
Date: Tue, 5 Mar 2013 11:55:40 -0500
From: Robbie Crash
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss]
around the
bug noted by Oracle in the CIFS server, and/or backport the fix to the
current Illumos code base / my OI installation to benefit from the fix without
migrating to a different distro.
Best,
Sebastian
From: Sebastian Gabler [mailto:sequoiamobil at gmx.net]
Sent: Thursday, February 28, 2013 6:03
Hi,
we have been suffering for quite some time now from the notorious 0x8007003
error on Windows 7 desktops in our network when accessing shares on our
OI based file server over SMB. I am using the Solaris CIFS server.
It's an on-and-off issue, and I always had the impression that it had
something t
Hi,
I am considering updating the firmware on two SSDs in my file server. The
models are Intel X25-M and OCZ Vertex 2. Obviously, the manufacturers
provide proprietary tools for DOS or Windows to flash the drives. I
am looking for what would be the least intrusive method, and for the
Intel
>On 01/24/2013 03:57 PM, Sebastian Gabler wrote:
>> Hello,
>>
>> I have been using a share via NFS as an ESXi datastore for more than a year. 3
>> hosts have root access. I have added a vCenter server appliance to
>> manage the ESXi hosts, and added the vCenter server'
Hello,
I have been using a share via NFS as an ESXi datastore for more than a year.
Three hosts have root access. I have added a vCenter server appliance to
manage the ESXi hosts, and added the vCenter server's IP address to the
allowed hosts using "zfs set sharenfs=root=host1:host2:host3:vcsa
dataset/shar
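Spelled out fully, the property setting being described looks roughly like
this (hostnames and dataset are placeholders; the root= list has to match the
names the server actually resolves for those clients):

  # grant root access to the three ESXi hosts and the vCenter appliance
  zfs set sharenfs=rw,root=esx1:esx2:esx3:vcsa tank/datastore
  # verify what is actually exported
  share | grep datastore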
On 29.11.2012 at 16:48, openindiana-discuss-requ...@openindiana.org wrote:
Beware, the Intel 313 SSD seems to have no power loss protection:
http://ark.intel.com/products/66290/Intel-SSD-313-Series-24GB-mSATA-3Gbs-25nm-SLC
The ZIL relies on this feature.
(from Michael)
and
Anyhow, if at all p
...disk for ZIL
Message-ID: <50b75948.5060...@cos.ru>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 2012-11-29 12:43, Sebastian Gabler wrote:
>I have bought and installed an Intel SSD 313 20 GB to use as a ZIL for one
>or more pools. I am running OpenIndiana on an x86 platfor
Hi,
I have bought and installed a 20 GB Intel SSD 313 to use as a ZIL for one
or more pools. I am running OpenIndiana on an x86 platform, not SPARC. As
4 GB should suffice, I am considering partitioning the drive in order to
assign each partition to one pool (ATM there are 2 pools on the server, but I
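The partition-per-pool idea would end up as something like the following
(device and slice names are hypothetical; the slices would be created with
format(1M) first):

  # after creating two ~4 GB slices on the SSD:
  zpool add pool1 log c4t2d0s0
  zpool add pool2 log c4t2d0s1
  zpool status pool1 pool2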
Message: 7
Date: Tue, 30 Oct 2012 22:03:13 +0400
From: Jim Klimov
To: Discussion list for OpenIndiana
Subject:
Message-ID:<50901661.9050...@cos.ru>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
2012-10-30 19:21, Sebastian Gabler wrote:
>Whereas that
On 23.10.2012 at 13:52, Sebastian Gabler wrote:
Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand
it is nowadays d
Hi,
I am facing a problem with zfs receive through ssh. As usual, root
can't log in over ssh; the login users can't receive a zfs stream (rights
problem), and pfexec is disabled on the target host (as I understand it
is nowadays the default for OI151_a...)
What are the suggestions to solve this? I
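One common approach is ZFS delegated administration, so that an ordinary
login user can receive the stream over ssh; a sketch with hypothetical user
and dataset names:

  # on the target host, once, as root (e.g. from the console):
  zfs allow backupuser create,mount,receive tank/backups

  # then from the source host:
  zfs send tank/data@snap1 | ssh backupuser@target zfs receive tank/backups/data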
I should have pointed out better that my intention is to use a ZIL SSD
in combination with "spinning rust", not with MLC SSDs behind it. In
general, SLCs are more suitable as a ZIL because they can sustain write
rates more continuously, for a wider variety of workloads, and sustain
writes more
Hi,
from what I understood from negative experience with a 12-drive SSD RAID
set built with MDRaid on Linux, and from answers to a related question I
raised recently on this list, it is not so easy to engineer a
configuration using a large number of SSDs anyhow. The budget option,
using SATA SS
Hi,
I noticed lately that if I set a ZFS filesystem to sharesmb=off, the
share is only disabled on a subsequent restart of the CIFS server.
In the meantime it remains accessible, although it is removed from the
sharemgr list. Has anybody else come across this already?
BR
Sebastian
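For reference, the restart that makes the removal take effect is the SMF
service restart:

  # restart the in-kernel CIFS/SMB server so dropped shares really disappear
  svcadm restart svc:/network/smb/server:default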
Hi,
I am facing problems with the Solaris cp command:
$ /usr/bin/cp -pr . is raising the following errors:
"cp: cannot create ./ too many open files"
"cp: cannot open too many open files"
"cp: Failed to preserve extended system attributes of directory"
Any idea how to fix that on 5.11 oi_151a
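A first thing to try for the "too many open files" part is raising the file
descriptor limit in the copying shell before retrying (a sketch; the
destination path is a placeholder and the hard limit may cap the value):

  # check and raise the descriptor limit in the current shell, then retry
  ulimit -n
  ulimit -n 65536
  /usr/bin/cp -pr . /path/to/destination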
Hi,
I was asked to set up a pool for storing Oracle datafiles. The databases
are for test purposes, so no ultra-high resiliency is
required. There is, however, a requirement for really good read
I/O, and the available budget is comparatively low. I thought of
using NFS4, or A
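A common starting point for a dataset holding Oracle datafiles is matching
recordsize to the database block size and separating the redo logs; a sketch,
assuming an 8 KiB db_block_size and hypothetical dataset names:

  # datafiles: recordsize matched to the assumed 8 KiB db_block_size
  zfs create -o recordsize=8k -o logbias=throughput tank/oradata
  # redo logs: default recordsize, latency-optimised ZIL behaviour
  zfs create -o logbias=latency tank/oralog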