thanks for all the feedback. Some followup questions:
If the OS will see all 4 cores, will it also make use of all 4 cores for ZFS?
i.e. is ZFS fully multi-threaded?
Is there any point to running ZFS over just two disks? Without the extra SATA
ports I'm thinking I may have to abandon this idea. The
I've been looking at this board myself for the same thing
The blog below is regarding the D945GCLF, but comparing the two, it looks like
the processor is the only thing that is different (single core vs. dual core).
http://blogs.sun.com/PotstickerGuru/entry/solaris_running_on_intel_atom
--
T
It depends on what you're doing.
I have an AMD Sempron LE-1100 (1.9 GHz) doing NAS duty for MythTV and it seems
to do OK.
If the board you quoted is what you're getting, I think it's a 64-bit chip - the
Intel site says it's an Atom 330.
Solaris should use all its cores/threads - Intel have added a load
I'm running ZFS on Nevada (b94 and b98) on two machines at home, both
with 4 GB of RAM. One has a quad-core Intel Core 2 with ECC RAM, the other
has normal RAM and a low-power dual-core Athlon 64. Both seem to be
working great.
On Thu, Oct 23, 2008 at 2:04 PM, Peter Bridge <[EMAIL PROTECTED]> wrote:
>
On Thu, 23 Oct 2008, Pramod Batni wrote:
> On 10/23/08 08:19, Paul B. Henson wrote:
> >
> > Ok, that leads to another question, why does creating a new ZFS filesystem
> > require determining if any of the existing filesystems in the dataset are
> > mounted :)?
>
> I am not sure. All the checking i
I'm looking to buy some new hardware to build a home ZFS-based NAS. I know ZFS
can be quite CPU- and memory-hungry, and I'd appreciate some opinions on the
following combination:
Intel Essential Series D945GCLF2
Kingston ValueRAM DIMM 2GB PC2-5300U CL5 (DDR2-667) (KVR667D2N5/2G)
Firstly, does it sound
No problem. I didn't use mirrored slogs myself, but that's certainly
a step up for reliability.
It's pretty easy to create a boot script to re-create the ramdisk and
re-attach it to the pool too. So long as you use the same device name
for the ramdisk you can add it each time with a simple "zpoo
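As a hedged sketch (not the poster's actual script), a boot-time script along these lines could re-create the RAM disk and re-attach it as a log device; the pool name "tank", ramdisk name "slog0", and size 512m are assumptions, and this assumes a Solaris system with ramdiskadm available:

```shell
#!/bin/sh
# Sketch of a boot script: re-create a RAM disk and re-attach it
# to the pool as a separate intent log (slog) device.
# "tank", "slog0", and 512m are placeholder assumptions.

ramdiskadm -a slog0 512m                 # creates /dev/ramdisk/slog0
zpool add tank log /dev/ramdisk/slog0    # re-attach as the pool's slog
```

Using the same ramdisk name on every boot keeps the device path stable, which is what makes re-adding it in a script this simple.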
On Thu, Oct 23, 2008 at 4:49 PM, Laurent Burnotte
<[EMAIL PROTECTED]>wrote:
>
> => is there in ZFS an automatic mechanism during Solaris 10 boot that
> prevents the import of pool B (mounted at /A/B) before trying to import
> pool A, or do we have to use legacy mounts and an /etc/vfstab entry?
>
This is fine
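If explicit control over mount ordering is wanted anyway, the legacy-mount alternative the question mentions could be sketched like this; the dataset name "B" and mountpoint /A/B are taken from the example, everything else is an assumption:

```shell
# Sketch of the legacy-mount alternative: hand the dataset's mount
# over to /etc/vfstab so the OS controls when it is mounted.
zfs set mountpoint=legacy B

# then add a line to /etc/vfstab (fields: device to mount, device to
# fsck, mount point, FS type, fsck pass, mount at boot, options):
# B  -  /A/B  zfs  -  yes  -
```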
Laurent Burnotte wrote:
> Hi experts,
>
> Short question
>
> What happens if we have cross-zpool mounts?
>
> meaning :
>
> zpool A -> should be mounted in /A
> zpool B -> should be mounted in /A/B
I have exactly that situation on my home system:
Where A is the boot/root pool (rpool) and B is m
Hi experts,
Short question:
What happens if we have cross-zpool mounts?
Meaning:
zpool A -> should be mounted at /A
zpool B -> should be mounted at /A/B
=> is there in ZFS an automatic mechanism during Solaris 10 boot that
prevents the import of pool B (mounted at /A/B) before trying to import pool A
Hello Armin,
Thursday, October 23, 2008, 10:13:23 AM, you wrote:
AO> Good morning,
AO> I experience file corruption on a ZFS filesystem in a two-node cluster.
AO> The filesystem holds the data file of a VirtualBox Windows guest
AO> instance. It is placed in one resource group together with the
AO> gds-scrip
On 10/23/08 08:19, Paul B. Henson wrote:
On Tue, 21 Oct 2008, Pramod Batni wrote:
Why does creating a new ZFS filesystem require enumerating all existing
ones?
This is to determine if any of the filesystems in the dataset are mounted.
Ok, that leads to another question, wh
On Thu, 23 Oct 2008, Constantin Gonzalez wrote:
>
> This is what the customer told me. He uses rsync and he is ok with restarting
> the rsync whenever the NFS server restarts.
Then remind your customer to tell rsync to inspect the data rather
than trusting time stamps. Rsync will then run quite
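With stock rsync, that means the --checksum (-c) flag, which compares file contents rather than size and modification time; this is a generic command sketch, and the src/ and dest/ paths are placeholders:

```shell
# Sketch: force rsync to verify file contents with checksums instead
# of trusting size/mtime metadata. src/ and dest/ are placeholders.
rsync -a --checksum src/ dest/
```

This is slower than the default quick check, since every file on both sides must be read and checksummed, but it will catch files whose contents changed without their timestamps changing.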
Hi,
Bob Friesenhahn wrote:
> On Thu, 23 Oct 2008, Constantin Gonzalez wrote:
>>
>> Yes, we're both aware of this. In this particular situation, the customer
>> would restart his backup job (and thus the client application) in case
>> the
>> server dies.
>
> So it is ok for this customer if their
On Thu, 23 Oct 2008, Constantin Gonzalez wrote:
>
> Yes, we're both aware of this. In this particular situation, the customer
> would restart his backup job (and thus the client application) in case the
> server dies.
So it is ok for this customer if their backup becomes silently
corrupted and th
Hi,
yes, using slogs is the best solution.
Meanwhile, using mirrored slogs from other servers' RAM disks running on UPSs
seems like an interesting idea, if UPS-backed RAM is deemed reliable enough
for the purposes of the NFS server.
Thanks for suggesting this!
Cheers,
Cons
Hi,
Bob Friesenhahn wrote:
> On Wed, 22 Oct 2008, Neil Perrin wrote:
>> On 10/22/08 10:26, Constantin Gonzalez wrote:
>>> 3. Disable ZIL[1]. This is of course evil, but one customer pointed out to
>>> me
>>> that if a tar xvf were writing locally to a ZFS file system, the writes
>>> would
Good morning,
I experience file corruption on a ZFS filesystem in a two-node cluster. The
filesystem holds the data file of a VirtualBox Windows guest instance. It is
placed in one resource group together with the gds-scripts which manage the
virtual machine startup and probe:
clresourcegroup create vb1
Hi,
>> - The ZIL exists on a per filesystem basis in ZFS. Is there an RFE
>> already
>>that asks for the ability to disable the ZIL on a per filesystem
>> basis?
>
> Yes: 6280630 zil synchronicity
good, thanks for the pointer!
> Though personally I've been unhappy with the exposure that z