Re: Large Filesystem

2020-11-28 Thread infoomatic
On 28.11.20 05:51, Nick Holland wrote: > I've heard that from a lot of people. > And yet, those same people, when pressed, will tell you that a ZFS-equipped > system will crash much more often than simpler file systems. That's one > heck of a real penalty to pay for a theoretical advantage. > >

Re: Large Filesystem

2020-11-27 Thread Nick Holland
On 2020-11-27 16:03, Karel Gardas wrote: ... > To me this looks like too much praying for luck. With such an amount of data, > I would stay with ZFS... I've heard that from a lot of people. And yet, those same people, when pressed, will tell you that a ZFS-equipped system will crash much more often

Re: Large Filesystem

2020-11-27 Thread Karel Gardas
, I would stay with ZFS... Good luck! Karel On 11/14/20 1:50 PM, Mischa wrote: Hi All, I am currently in the process of building a large filesystem with 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a central, mostly download, platform with around 100 concurr

Re: Large Filesystem

2020-11-16 Thread Mischa
> On 15 Nov 2020, at 20:57, Kenneth Gober wrote: > On Sun, Nov 15, 2020 at 8:59 AM Mischa wrote: > >> On 15 Nov at 14:52, Otto Moerbeek wrote: >>> fsck will get slower once you start filling it, but since your original >>> fs had about 104k files I expect it not to get too bad. If the speed

Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
On Sun, Nov 15, 2020 at 02:57:49PM -0500, Kenneth Gober wrote: > On Sun, Nov 15, 2020 at 8:59 AM Mischa wrote: > > > On 15 Nov at 14:52, Otto Moerbeek wrote: > > > fsck will get slower once you start filling it, but since your original > > > fs had about 104k files I expect it not to get too

Re: Large Filesystem

2020-11-15 Thread Kenneth Gober
On Sun, Nov 15, 2020 at 8:59 AM Mischa wrote: > On 15 Nov at 14:52, Otto Moerbeek wrote: > > fsck will get slower once you start filling it, but since your original > > fs had about 104k files I expect it not to get too bad. If the speed > > for your usecase is good as well I guess you should

Re: Large Filesystem

2020-11-15 Thread Mischa
ion is that one time the server > > > > > > > > > crashed and I > > > > > > > > > had to do a fsck during the next boot. It took around 10 > > > > > > > > > hours for the 12TB. > > > > > > > > > This migh

Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
10 hours > > > > > > > > for the 12TB. > > > > > > > > This might be something to keep in mind if you want to use this > > > > > > > > on a server. > > > > > > > > But if my memory

Re: Large Filesystem

2020-11-15 Thread Mischa
> > > > > > on a server. > > > > > > > But if my memory serves me well otto did some changes to fsck on > > > > > > > ffs2, so > > > > > > > maybe that's a lot faster now. > >

Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
t; > > > > > the 12TB. > > > > > > This might be something to keep in mind if you want to use this on > > > > > > a server. > > > > > > But if my memory serves me well otto did some changes to fsck on > > > > &g

Re: Large Filesystem

2020-11-15 Thread Mischa
> > > > > But if my memory serves me well otto did some changes to fsck on > > > > > ffs2, so > > > > > maybe that's a lot faster now. > > > > > > > > > > I hope this helps you a little bit! > > > > >

Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
> > > > 12TB. > > > > This might be something to keep in mind if you want to use this on a > > > > server. > > > > But if my memory serves me well otto did some changes to fsck on ffs2, > > > > so > > > > maybe that's a lot fa

Re: Large Filesystem

2020-11-14 Thread Otto Moerbeek
es me well otto did some changes to fsck on ffs2, so > > > maybe that's a lot faster now. > > > > > > I hope this helps you a little bit! > > > Greetings from Vienna > > > Leo > > > > > > On 14.11.2020 at 13:50 Mischa wrote: >

Re: Large Filesystem

2020-11-14 Thread Mischa
On 14 Nov at 16:49, Johan Huldtgren wrote: > hello, > > On 2020-11-14 13:50, Mischa wrote: > > Hi All, > > > > I am currently in the process of building a large filesystem with > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a

Re: Large Filesystem

2020-11-14 Thread Mischa
> Leo > > > > On 14.11.2020 at 13:50 Mischa wrote: > > > I am currently in the process of building a large filesystem with > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a > > > central, mostly download, platform wit

Re: Large Filesystem

2020-11-14 Thread Johan Huldtgren
hello, On 2020-11-14 13:50, Mischa wrote: > Hi All, > > I am currently in the process of building a large filesystem with > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a > central, mostly download, platform with around 100 concurrent > connec

Re: Large Filesystem

2020-11-14 Thread Otto Moerbeek
serves me well otto did some changes to fsck on ffs2, so > maybe that's a lot faster now. > > I hope this helps you a little bit! > Greetings from Vienna > Leo > > On 14.11.2020 at 13:50 Mischa wrote: > > I am currently in the process of building a large filesystem wi

Re: Large Filesystem

2020-11-14 Thread Leo Unglaub
a little bit! Greetings from Vienna Leo On 14.11.2020 at 13:50 Mischa wrote: I am currently in the process of building a large filesystem with 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a central, mostly download, platform with around 100 concurrent connec

Large Filesystem

2020-11-14 Thread Mischa
Hi All, I am currently in the process of building a large filesystem with 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as a central, mostly download, platform with around 100 concurrent connections. The current system is running FreeBSD with ZFS and I would like t
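The "~55TB" figure in the original post checks out; a quick sanity sketch (assuming raid6 reserves two disks' worth of space for parity, and that "6TB" drives are decimal terabytes while usable space is reported in binary TiB):

```python
# RAID6 usable-capacity check for 12 x 6TB drives.
# Assumptions: raid6 spends two disks' worth of capacity on parity;
# "6TB" means 6 * 10^12 bytes; usable space is shown in TiB (2^40).
disks = 12
parity_disks = 2
drive_bytes = 6 * 10**12

usable_bytes = (disks - parity_disks) * drive_bytes
usable_tib = usable_bytes / 2**40

print(round(usable_tib, 1))  # -> 54.6, matching the quoted "~55TB"
```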

Re: fsck large filesystem, memory limit problem

2008-05-19 Thread Hannah Schroeter
Hi! On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote: [...] The fsck_ffs code allocates a number of arrays directly depending on the # of inodes in setup(), totalling 4 bytes per inode. Some other data is also needed, so it's no surprise you hit the 1G data space limit. Any chance

Re: fsck large filesystem, memory limit problem

2008-05-19 Thread Otto Moerbeek
On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote: Hi! On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote: [...] The fsck_ffs code allocates a number of arrays directly depending on the # of inodes in setup(), totalling 4 bytes per inode. Some other data is

Re: fsck large filesystem, memory limit problem

2008-05-19 Thread Hannah Schroeter
Hi! On Mon, May 19, 2008 at 03:00:08PM +0200, Otto Moerbeek wrote: On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote: On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote: [...] Any chance to get rid of that 1G limit that seems more and more arbitrary nowadays? I

Re: fsck large filesystem, memory limit problem

2008-05-19 Thread Otto Moerbeek
On Mon, May 19, 2008 at 03:12:22PM +0200, Hannah Schroeter wrote: Hi! On Mon, May 19, 2008 at 03:00:08PM +0200, Otto Moerbeek wrote: On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote: On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote: [...] Any chance to

Re: fsck large filesystem, memory limit problem

2008-05-19 Thread Stuart Henderson
On 2008-05-19, Hannah Schroeter [EMAIL PROTECTED] wrote: Who does still use sbrk() after OpenBSD's malloc uses mmap only? grepping an unpacked ports tree picks up at least emacs, spice, boehm-gc, erlang, and some Mozilla software. Some of these are known to use sbrk for sure, some are possible

Re: fsck large filesystem, memory limit problem

2008-05-19 Thread Ted Unangst
It is very arbitrary. But it's not so easy to fix. OK, the diff is only about 8 lines, but it's the other things like testing and compat that make it hard. On May 19, 2008, at 8:38 AM, Hannah Schroeter [EMAIL PROTECTED] wrote: Hi! On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek

Re: fsck large filesystem, memory limit problem

2008-05-12 Thread Otto Moerbeek
On Fri, May 09, 2008 at 11:16:28AM -0400, Will wrote: Here are the requested outputs. OK, your filesystem indeed uses default block and fragment sizes. The # of inodes is about 238M. The fsck_ffs code allocates a number of arrays directly depending on the # of inodes in setup(), totalling 4
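Otto's figures can be checked directly (a rough sketch; the ~238M inode count and the ~4 bytes per inode are taken from his message above):

```python
# fsck_ffs array-size estimate from the figures quoted in the thread.
inodes = 238 * 10**6        # "about 238M" inodes
bytes_per_inode = 4         # arrays totalling ~4 bytes per inode

array_bytes = inodes * bytes_per_inode
print(array_bytes)          # 952000000 bytes, i.e. roughly 0.89 GiB --
                            # close enough that the "other data" pushes
                            # the process into the 1G data space limit
```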

Re: fsck large filesystem, memory limit problem

2008-05-12 Thread Will
Thanks for taking a look. I will play with larger fragment/block sizes unless anyone suggests otherwise. -William On Mon, May 12, 2008 at 11:49 AM, Otto Moerbeek [EMAIL PROTECTED] wrote: On Fri, May 09, 2008 at 11:16:28AM -0400, Will wrote: Here are the requested outputs. OK, your

Re: fsck large filesystem, memory limit problem

2008-05-09 Thread Otto Moerbeek
On Thu, May 08, 2008 at 05:18:26PM -0400, Will wrote: I did see that, but did not realize that the 1GB limit is not a user-configurable feature. Even so, the FAQ implies that a 2TB filesystem is possible with default options, which is what I have. It might be that the 2TB limit is a little too

Re: fsck large filesystem, memory limit problem

2008-05-09 Thread Will
Here are the requested outputs. output of `df -i`: Filesystem 512-blocks Used Avail Capacity iused ifree %iused Mounted on /dev/sd0a 314480 101152 197604 34% 2189 23409 9% / /dev/sd0h 8264196 92 7850896 0% 20 545642 0% /home
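The %iused column in df -i output is simply iused / (iused + ifree), rounded; recomputing it from the two filesystems in the snippet above reproduces df's numbers:

```python
# Recompute %iused = iused / (iused + ifree) for the listed filesystems.
# (iused, ifree) pairs copied from the df -i output in the message.
for iused, ifree in [(2189, 23409), (20, 545642)]:
    pct = round(100 * iused / (iused + ifree))
    print(pct)  # -> 9, then 0, matching the %iused column
```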

fsck large filesystem, memory limit problem

2008-05-08 Thread Will
Hello all, I just upgraded to 4.3, and I would like to congratulate the devs on another wonderful release! shutdown -p works and the wbng sensor support was a nice surprise. However, the most useful feature to me was the support for ffs2. I upgraded without a hitch, and repartitioned from a 1TB

Re: fsck large filesystem, memory limit problem

2008-05-08 Thread David J. Stillman
Isn't this the 1GB application limit mentioned in FAQ 14.7 - By the time one gets to a 2TB file system with default fragment and block sizes, fsck will require 1GB RAM to run, which is the application limit under OpenBSD. Larger fragments and/or blocks will reduce the number of inodes, and
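The FAQ's 1GB figure follows from the defaults; a sketch of the arithmetic, assuming the default newfs density of roughly one inode per 8192 bytes of data space and the ~4 bytes per inode that fsck_ffs allocates (per Otto's message elsewhere in these threads):

```python
# Why a 2TB ffs with default block/fragment sizes drives fsck to ~1GB.
# Assumptions: one inode per 8192 bytes (default inode density), and
# fsck_ffs arrays totalling about 4 bytes per inode.
fs_bytes = 2 * 2**40          # 2 TiB filesystem
bytes_per_inode = 8192        # assumed default density
fsck_bytes_per_inode = 4

inodes = fs_bytes // bytes_per_inode
print(inodes)                                  # 268435456 inodes
print(inodes * fsck_bytes_per_inode // 2**30)  # 1 (GiB) -- the limit
```

Larger fragments and blocks raise bytes_per_inode, which is exactly why the FAQ suggests them as the workaround.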

Re: fsck large filesystem, memory limit problem

2008-05-08 Thread Will
I did see that, but did not realize that the 1GB limit is not a user-configurable feature. Even so, the FAQ implies that a 2TB filesystem is possible with default options, which is what I have. relevant output of df: Filesystem 512-blocks Used Avail Capacity Mounted on /dev/sd0i

Re: df reports negative available space on large filesystem

2006-10-19 Thread Otto Moerbeek
On Wed, 18 Oct 2006, Derick Siddoway wrote: This is what I see: [EMAIL PROTECTED]:~$ df Filesystem 512-blocks Used Avail Capacity Mounted on /dev/wd0a 74826724 27903788 43181600 39% / se-nas01:/fs04/prodstfs01 4181818080 1654186208

df reports negative available space on large filesystem

2006-10-18 Thread Derick Siddoway
This is what I see: [EMAIL PROTECTED]:~$ df Filesystem 512-blocks Used Avail Capacity Mounted on /dev/wd0a 74826724 27903788 43181600 39% / se-nas01:/fs04/prodstfs01 4181818080 1654186208 -1767335424 40% /data [EMAIL PROTECTED]:~$ df -h
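The specific negative number is consistent with a signed 32-bit overflow: blocks minus used for that NFS export exceeds 2^31 - 1 and wraps to exactly the value df printed. A sketch of the arithmetic only, not a claim about where in df/NFS the truncation happens (the real Avail calculation also involves the minfree reserve):

```python
# The negative Avail matches (blocks - used) wrapped to signed int32.
blocks = 4181818080   # 512-byte blocks, from the df output above
used   = 1654186208

avail = blocks - used                      # 2527631872 > 2**31 - 1
wrapped = (avail + 2**31) % 2**32 - 2**31  # reinterpret as signed int32

print(wrapped)  # -> -1767335424, exactly the value df reported
```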