On Wed, May 27, 2009 at 09:24:17PM +0200, Wojciech Puchar wrote:
I haven't looked at the ZFS code but this sort of thing is exactly why
all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when
the first thing I have to do with a new compiler is to work out the
proper typedefs to create them.
int, short and char are portable, only other things mu
On Wed, May 27, 2009 at 11:52:33AM -0500, Kirk Strauser wrote:
> > you talk about performance or if it works at all?
> Both, really. If they have to code up macros to support identical
> operations (such as addition) on both platforms
OK, talking about performance:
- 64-bit addition/subtraction on a 32-bit computer: 2 instructions instead
of one (ADD+ADC)
- 64-bit NOT, XOR, AND, OR and compare/t
On Wednesday 27 May 2009 11:40:51 am Wojciech Puchar wrote:
> you talk about performance or if it work at all?
Both, really. If they have to code up macros to support identical operations
(such as addition) on both platforms, and accidentally forget to use the macro
in some place, then voila:
Wojciech, I have to ask: are you actually a programmer or are you repeating
Yes, I am. If you are interested: I wrote programs for x86, ARM (ARM7TDMI),
MIPS32 (4Kc), and once for Alpha. I have quite good knowledge of ARM and
MIPS
On Wednesday 27 May 2009 09:52:42 am Wojciech Puchar wrote:
> ZFS should work on i386. As far as I know there aren't any killer bugs that
> are architecture specific, but I'm no expert. Unless your aim is to learn
Unless someone assumes that the size of a pointer is 4 bytes: a program
written in portable C will work as well in 64-bit mode as in 32-bit mode.
___
I really don't have any hard data on ZFS performance relative to UFS + geom.
so please test yourself :)
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"
Gary Gatten wrote:
What about with PAE and/or other extension schemes?
Doesn't help with the KVM requirement, and still only provides a 4GB address
space for any single process.
If it's just memory requirements, can I assume if I don't have a $hit
load of storage and billions of files it will
Wojciech Puchar wrote:
> You can make ZFS work on i386, but it requires very careful tuning and is
> not going to work brilliantly well for particularly large or
> high-throughput filesystems.
You mean "high transfer" like reading/writing huge files? Anyway, not
faster than properly configured UFS + maybe gstripe/gmirror.
> ZFS is thoroughly 64-bit and uses 64-bit math pervasively. That means you
> have to emulate all those operations with 2 32-bit values, and on the
> register-starved x86 platform you end up with absolutely horrible
> performance.
No, this difference isn't that great. It doesn't use much less CPU on the
- Filesystem sizes are dynamic. They all grow and shrink inside the same
pool, so you don't have to worry about making one too large or too small.
There are actually almost no filesystems: just one filesystem with many
"upper descriptors" and a separate per-filesystem quota.
just to make happ
10-4, thanks!
-Original Message-
From: owner-freebsd-questi...@freebsd.org
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Kirk Strauser
Sent: Tuesday, May 26, 2009 2:00 PM
To: freebsd-questions@freebsd.org
Subject: Re: FreeBSD & Software RAID
On Tuesday 26 May 2009 01:44:5
Wojciech hates it for some reason, but I wouldn't let that deter you. I'm
same == incredibly low performance.
Of course, with an overmuscled CPU not much used for anything else, it may
not be a problem.
On Tuesday 26 May 2009 01:44:51 pm Gary Gatten wrote:
> What about with PAE and/or other extension schemes?
>
> If it's just memory requirements, can I assume if I don't have a $hit
> load of storage and billions of files it will work "ok" with 4GB of RAM?
> I guess I'm just making sure there isn't
…exists on the i386 architecture?
-Original Message-
From: owner-freebsd-questi...@freebsd.org
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Matthew Seaman
Sent: Tuesday, May 26, 2009 1:38 PM
To: Gary Gatten
Cc: freebsd-questions@freebsd.org
Subject: Re: FreeBSD & Software RAID
Gar
Gary Gatten wrote:
Why avoid ZFS on x86?
Because in order to deal most effectively with disk arrays of 100s or 1000s
of GB as are typical nowadays, ZFS requires more than the 4GB of addressable
RAM[*] that the i386 arch can provide.
You can make ZFS work on i386, but it requires very careful tuning and is
not going to work brilliantly well for particularly large or
high-throughput filesystems.
On Tue, May 26, 2009 at 01:15:41PM -0500, Gary Gatten wrote:
> Why avoid ZFS on x86?
That's because ZFS works best with huge amounts of (kernel) RAM, and
i386's 32-bit addressing doesn't provide enough space.
Btw, I've tried ZFS on two FreeBSD/amd64 test machines with 8GB and
16GB of RAM, and it loo
Why avoid ZFS on x86?
-Original Message-
From: owner-freebsd-questi...@freebsd.org
[mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Kirk Strauser
Sent: Tuesday, May 26, 2009 12:39 PM
To: freebsd-questions@freebsd.org
Subject: Re: FreeBSD & Software RAID
On Monday 25 May 200
On Monday 25 May 2009 08:57:48 am Howard Jones wrote:
> I'm was half-considering switching to ZFS, but the most positive thing I
> could find written about that (as implemented on FreeBSD) is that it
> "doesn't crash that much", so perhaps not. That was from a while ago
> though.
Wojciech hates it for some reason, but I wouldn't let that deter you.
Sweet thanks for the info. Building one of those boxes is next in the list.
On 5/26/09, Steve Bertrand wrote:
Wojciech Puchar wrote:
> you are right. you can't be happy of warm house without getting really
> cold some time :)
>
> that's why it's excellent that ZFS (and few other things) is included
> in FreeBSD but it's COMPLETELY optional.
>
Well, I switched from the heater that doesn't work and is poorly
but of course lots of people like to make their life harder
No I am not making life harder at all ... I have 6x500gb hard disks I
want in a good solid raid 5 type configuration. So you are somewhat wide
of the mark in your assumptions.
That's a reason. Just don't forget that RAID-Z is MUCH closer
-Original Message-
From: Wojciech Puchar [mailto:woj...@wojtek.tensor.gdynia.pl]
Sent: 25 May 2009 18:54
To: Graeme Dargie
Cc: FreeBSD-Questions@freebsd.org; Howard Jones; Valentin Bud
Subject: RE: FreeBSD & Software RAID
> Ok granted this is a server sat in my house and it i
It makes a certain degree of sense. Sometimes things have to be done
wrong for us to realize how good we had it before. How would we know how
great FreeBSD is if we didn't have Linux? I had to look at ZFS to decide
not to use it when I rebuild my storage this week due to a failing
drive.
you are
Ok granted this is a server sat in my house and it is not a "mission"
critical server in a large business, personally I have can live with ZFS
taking a bit longer vs resilience.
simply gmirror and UFS gives the same. much simpler, much faster.
but of course lots of people like to make their life harder
On Mon, May 25, 2009 at 07:09:15PM +0200, Wojciech Puchar wrote:
-Original Message-
From: Wojciech Puchar [mailto:woj...@wojtek.tensor.gdynia.pl]
Sent: 25 May 2009 18:09
To: FreeBSD-Questions@freebsd.org
Cc: Howard Jones; Graeme Dargie; Valentin Bud
Subject: Re: FreeBSD & Software RAID
>
> I have looked at ZFS recently. Appears to be a
I have looked at ZFS recently. Appears to be a memory hog; needs about 1
GB, especially if large file transfers may occur over gigabit ethernet.
While it CAN be set up on a 256MB machine with a few flags in
loader.conf (it should be autotuned anyway), it generally takes as much
memory as it's
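For reference, the loader.conf tuning being alluded to looked roughly like this on FreeBSD 7.x/i386. The specific values are illustrative, drawn from the common advice of that era (cap the ARC and enlarge the kernel address space), not from this thread:

```
# /boot/loader.conf -- illustrative ZFS tuning for a low-memory i386 box
vm.kmem_size="512M"           # enlarge the kernel memory map
vm.kmem_size_max="512M"
vfs.zfs.arc_max="160M"        # cap the ZFS ARC so it can't exhaust kernel memory
vfs.zfs.prefetch_disable="1"  # prefetch tends to hurt with little RAM
```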
I use gmirror, but I once tried gvinum and it didn't work well.
I think simply use mirroring. ZFS will introduce 100 times more problems
than it solves.
On Mon, May 25, 2009 at 07:37:59PM +0300, Valentin Bud wrote:
> On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie
> wrote:
>
> > Can anyone with experience of software RAID point me in the right
> > direction please? I've used gmirror before with no trouble, but nothing
> > fancier.
[76 lines trimmed]
On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie wrote:
-Original Message-
From: Howard Jones [mailto:howard.jo...@network-i.net]
Sent: 25 May 2009 14:58
To: freebsd-questions@freebsd.org
Subject: FreeBSD & Software RAID
Hi,
Can anyone with experience of software RAID point me in the right
direction please? I've used gmirror before with no trouble, but nothing
fancier.
Hi,
I remember building a RAID5 on gvinum with 3 500GB hard drives some
months ago, and it took horribly long to initialize the raid5 (several
hours).
It seems to be a one-time job, because since the RAID finished its
initialization the machine starts up/reboots within normal times.
The documen
Hi,
Can anyone with experience of software RAID point me in the right
direction please? I've used gmirror before with no trouble, but nothing
fancier.
I have a set of brand new 1TB drives, a Sil3124 SATA card and a FreeBSD
7.1-p4 system.
I created a RAID 5 set with gvinum:
drive d0 device /dev/a
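The configuration above was cut off by the archive. Purely for illustration, a gvinum RAID 5 configuration in the style of the FreeBSD Handbook's vinum examples looks roughly like this; the device names, volume name, and stripe size are placeholders, not recovered from the message:

```
# illustrative gvinum config sketch, not the poster's actual file
drive d0 device /dev/ad4
drive d1 device /dev/ad6
drive d2 device /dev/ad8
volume raid5vol
  plex org raid5 512k
    sd drive d0
    sd drive d1
    sd drive d2
```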
…don't see any logical reason for such a requirement, however.
- Original Message -
From: "Bill Moran" <[EMAIL PROTECTED]>
To: "Moti Levy" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, June 19, 2003 9:49 AM
Subject: Re: FreeBSD - software raid
Moti Levy wrote:
Hi,
before I do the unthinkable and use Linux for a server , I ask for your
help.
I have a set of 4 IDE drives .
I need to build a file server that'll run samba/nfs
I've done this. Works very well.
I want to use all 4 drives as a raid 5 array and use the combined space for
storage.
Hi,
before I do the unthinkable and use Linux for a server , I ask for your
help.
I have a set of 4 IDE drives .
I need to build a file server that'll run samba/nfs
I want to use all 4 drives as a raid 5 array and use the combined space for
storage.
is there a way to do it with FreeBSD ?
I looked