On 28.11.20 05:51, Nick Holland wrote:
> I've heard that from a lot of people.
> And yet, those same people, when pressed, will tell you that a ZFS-equipped
> system will crash much more often than simpler file systems. That's one
> heck of a real penalty to pay for a theoretical advantage.

On 2020-11-27 16:03, Karel Gardas wrote:
> To me this looks like too much praying for luck. With such an amount of
> data, I would stay with ZFS...
> Good luck!
> Karel
On 11/14/20 1:50 PM, Mischa wrote:
Hi All,
I am currently in the process of building a large filesystem with
12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
central, mostly download, platform with around 100 concurrent connections.
> On 15 Nov 2020, at 20:57, Kenneth Gober wrote:
> On Sun, Nov 15, 2020 at 8:59 AM Mischa wrote:
>
>> On 15 Nov at 14:52, Otto Moerbeek wrote:
>>> fsck will get slower once you start filling it, but since your original
>>> fs had about 104k files I expect it not getting too bad. If the speed

On Sun, Nov 15, 2020 at 02:57:49PM -0500, Kenneth Gober wrote:
> On Sun, Nov 15, 2020 at 8:59 AM Mischa wrote:
>
> > On 15 Nov at 14:52, Otto Moerbeek wrote:
> > > fsck will get slower once you start filling it, but since your original
> > > fs had about 104k files I expect it not getting too [...]

On Sun, Nov 15, 2020 at 8:59 AM Mischa wrote:
> On 15 Nov at 14:52, Otto Moerbeek wrote:
> > fsck will get slower once you start filling it, but since your original
> > fs had about 104k files I expect it not getting too bad. If the speed
> > for your usecase is good as well I guess you should [...]
> [...]ion is that one time the server crashed and I had to do a fsck
> during the next boot. It took around 10 hours for the 12TB.
> This might be something to keep in mind if you want to use this
> on a server.
> But if my memory serves me well Otto did some changes to fsck on
> ffs2, so maybe that's a lot faster now.
>
> I hope this helps you a little bit!
> Greetings from Vienna
> Leo
>
> On 14.11.2020 13:50, Mischa wrote:
> > I am currently in the process of building a large filesystem with
> > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
> > central, mostly download, platform with [...]
On 14 Nov at 16:49, Johan Huldtgren wrote:
> hello,
>
> On 2020-11-14 13:50, Mischa wrote:
> > Hi All,
> >
> > I am currently in the process of building a large filesystem with
> > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
> > central, mostly download, platform with [...]
hello,
On 2020-11-14 13:50, Mischa wrote:
> Hi All,
>
> I am currently in the process of building a large filesystem with
> 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
> central, mostly download, platform with around 100 concurrent
> connections.
But if my memory serves me well Otto did some changes to fsck on ffs2, so
maybe that's a lot faster now.
I hope this helps you a little bit!
Greetings from Vienna
Leo

On 14.11.2020 13:50, Mischa wrote:
I am currently in the process of building a large filesystem with
12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
central, mostly download, platform with around 100 concurrent
connections.
Hi All,
I am currently in the process of building a large filesystem with
12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a
central, mostly download, platform with around 100 concurrent
connections.
The current system is running FreeBSD with ZFS and I would like to [...]
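The quoted capacity checks out; a quick sketch of the arithmetic (assuming a plain RAID6 layout with two parity disks, and the usual decimal-TB vs binary-TiB gap):

```python
# RAID6 keeps (N - 2) disks' worth of data capacity.
disks, size_tb = 12, 6
data_tb = (disks - 2) * size_tb          # 60 TB of raw data capacity
data_tib = data_tb * 1e12 / 2**40        # decimal TB vs binary TiB
print(f"{data_tb} TB raw = {data_tib:.1f} TiB, roughly the ~55TB quoted")
```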
Hi!
On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote:
[...]
The fsck_ffs code allocates a number of arrays directly depending on
the # of inodes in setup(), totalling 4 bytes per inode. Some other
data is also needed, so it's no surprise you hit the 1G data space limit.
Any chance to get rid of that 1G limit that seems more and more
arbitrary nowadays? [...]
On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote:
Hi!
On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote:
[...]
The fsck_ffs code allocates a number of arrays directly depending on
the # of inodes in setup(), totalling 4 bytes per inode. Some other
data is [...]
Hi!
On Mon, May 19, 2008 at 03:00:08PM +0200, Otto Moerbeek wrote:
On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote:
On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote:
[...]
Any chance to get rid of that 1G limit that seems more and more
arbitrary nowadays? [...]
On Mon, May 19, 2008 at 03:12:22PM +0200, Hannah Schroeter wrote:
Hi!
On Mon, May 19, 2008 at 03:00:08PM +0200, Otto Moerbeek wrote:
On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote:
On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote:
[...]
Any chance to [...]
On 2008-05-19, Hannah Schroeter [EMAIL PROTECTED] wrote:
> Who does still use sbrk() after OpenBSD's malloc uses mmap only?
grepping an unpacked ports tree picks up at least emacs, spice,
boehm-gc, erlang, and some Mozilla software. Some of these are
known to use sbrk for sure, some are possible [...]
It is very arbitrary. But it's not so easy to fix. OK, the diff is only
about 8 lines, but it's the other things like testing and compat that
make it hard.

On May 19, 2008, at 8:38 AM, Hannah Schroeter [EMAIL PROTECTED] wrote:
> Hi!
> On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek [...]
On Fri, May 09, 2008 at 11:16:28AM -0400, Will wrote:
> Here are the requested outputs.
OK, your filesystem indeed uses default block and fragment sizes. The
# of inodes is about 238M.
The fsck_ffs code allocates a number of arrays directly depending on
the # of inodes in setup(), totalling 4 bytes per inode. Some other
data is also needed, so it's no surprise you hit the 1G data space limit.
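Otto's figures are easy to verify; a back-of-envelope sketch using the 4-bytes-per-inode number from the message above (the size of the additional data is not specified, so only the per-inode arrays are counted):

```python
# fsck_ffs allocates ~4 bytes of state per inode in setup().
inodes = 238_000_000              # ~238M inodes on this filesystem
core_bytes = inodes * 4           # the per-inode arrays alone
print(f"{core_bytes / 2**30:.2f} GiB of per-inode arrays")
# ~0.89 GiB before any other allocations, so the 1 GiB per-process
# data limit is quickly exceeded.
```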
Thanks for taking a look.
I will play with larger fragment/block sizes unless anyone suggests otherwise.
-William

On Mon, May 12, 2008 at 11:49 AM, Otto Moerbeek [EMAIL PROTECTED] wrote:
> On Fri, May 09, 2008 at 11:16:28AM -0400, Will wrote:
> > Here are the requested outputs.
> OK, your [...]
On Thu, May 08, 2008 at 05:18:26PM -0400, Will wrote:
> I did see that, but did not realize that the 1GB limit is not a
> user-configurable feature.
> Even so, the FAQ implies that a 2TB filesystem is possible with
> default options, which is what I have.
It might be the 2TB limit is a little too [...]
Here are the requested outputs.
output of `df -i`:
Filesystem  512-blocks     Used    Avail Capacity iused  ifree %iused  Mounted on
/dev/sd0a       314480   101152   197604    34%    2189  23409     9%  /
/dev/sd0h      8264196       92  7850896     0%      20 545642     0%  /home
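A quick consistency check of the rows above (a sketch; it only verifies that the percentage columns match the block and inode counts):

```python
# Sanity-check the df -i rows: Capacity = used / (used + avail),
# %iused = iused / (iused + ifree), both rounded to whole percent.
rows = {
    "/dev/sd0a": (314480, 101152, 197604, 34, 2189, 23409, 9),
    "/dev/sd0h": (8264196, 92, 7850896, 0, 20, 545642, 0),
}
for fs, (blocks, used, avail, cap, iused, ifree, piused) in rows.items():
    assert round(100 * used / (used + avail)) == cap
    assert round(100 * iused / (iused + ifree)) == piused
    print(fs, "consistent")
```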
Hello all,
I just upgraded to 4.3, and I would like to congratulate the devs on
another wonderful release! shutdown -p works and the wbng sensor
support was a nice surprise. However, the most useful feature to me
was the support for ffs2.
I upgraded without a hitch, and repartitioned from a 1TB [...]
Isn't this the 1GB application limit mentioned in FAQ 14.7? "By the
time one gets to a 2TB file system with default fragment and block
sizes, fsck will require 1GB RAM to run, which is the application limit
under OpenBSD. Larger fragments and/or blocks will reduce the number of
inodes, and [...]"
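The FAQ's 1GB figure follows directly from the defaults; a sketch assuming newfs's usual inode density of one inode per four fragments (an assumption, not stated in the thread):

```python
# Why a 2TB FFS with default sizes pushes fsck toward 1GB of RAM.
fs_bytes = 2 * 2**40            # 2TB filesystem
frag = 2048                     # default fragment size
bytes_per_inode = 4 * frag      # assumed density: one inode per 8KB

inodes = fs_bytes // bytes_per_inode
fsck_bytes = inodes * 4         # ~4 bytes of fsck state per inode (see thread)
print(f"{inodes / 1e6:.0f}M inodes -> {fsck_bytes / 2**30:.2f} GiB of fsck arrays")
# Doubling the fragment size halves the inode count, and with it
# fsck's memory footprint.
```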
I did see that, but did not realize that the 1GB limit is not a
user-configurable feature.
Even so, the FAQ implies that a 2TB filesystem is possible with
default options, which is what I have.
relevant output of df:
Filesystem  512-blocks  Used  Avail  Capacity  Mounted on
/dev/sd0i [...]
On Wed, 18 Oct 2006, Derick Siddoway wrote:
> This is what I see:
> [EMAIL PROTECTED]:~$ df
> Filesystem                 512-blocks       Used       Avail Capacity  Mounted on
> /dev/wd0a                    74826724   27903788    43181600    39%    /
> se-nas01:/fs04/prodstfs01  4181818080 1654186208 [...]

This is what I see:
[EMAIL PROTECTED]:~$ df
Filesystem                 512-blocks       Used       Avail Capacity  Mounted on
/dev/wd0a                    74826724   27903788    43181600    39%    /
se-nas01:/fs04/prodstfs01  4181818080 1654186208 -1767335424    40%    /data
[EMAIL PROTECTED]:~$ df -h
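One plausible explanation for the negative Avail on the NFS mount, sketched below, is a signed 32-bit overflow: total minus used exceeds 2^31 blocks. This is a hypothesis for illustration; the thread itself does not diagnose it.

```python
# The negative Avail for the NFS mount above is consistent with the
# server's free-block count being squeezed into a signed 32-bit field.
import ctypes

total_512 = 4181818080   # 512-byte blocks reported for the export
used_512 = 1654186208

avail = total_512 - used_512                 # 2527631872, > 2**31 - 1
avail_as_int32 = ctypes.c_int32(avail).value # wraps around to negative
print(avail_as_int32)    # -1767335424, exactly the Avail that df printed
```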