Marcus MERIGHI writes:
> > vfs = catia fruit streams_xattr
>
> I run a Samba server that does not have these options set - but
> successfully serves iOS/macOS clients.
You need those extra attributes if you want to use your Samba
share for TimeMachine backups.
--lyndon
Hello,
I was configuring Samba on my OpenBSD 7.2 server and wanted to support
iOS/iPad OS and macOS clients.
The documentation for Samba states that the following vfs options are
required to support these clients:
/etc/samba/smb.conf
. . .
vfs = catia fruit streams_xattr
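Pulled together from the fragments above, a minimal share definition using those modules might look like the sketch below. Note that in smb.conf(5) the parameter is spelled `vfs objects`; the share name, path, and user are illustrative only, and `fruit:time machine` is needed only for the Time Machine use case Lyndon mentions:

```
[global]
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream
    fruit:model = MacSamba

[timemachine]
    path = /srv/timemachine
    valid users = backupuser
    fruit:time machine = yes
```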
We had a similar challenge. My advice would be to leave all the filesystems as
they are and add the 'sync' flag.
It slows down disk access, but makes the filesystem resilient to power
outages. See the sync option: https://man.openbsd.org/mount
On Tuesday, 29 November 2022 at 09:03:28 am AWST,
Hi,
In the case of CPE, I do not think that programs will be added or
removed after the start of operation, so I think that the following
partition can be mounted as read-only to reduce the impact of power
failure.
/bin
/sbin
/usr
/usr/local (with wxallowed)
For the rest, you can just mount
On Mon, Nov 28, 2022, at 4:06 PM, Tom Smyth wrote:
> /dev/sd0a / ffs rw,softdep,noatime 1 1
> /dev/sd0d /usr/local ffs rw,wxallowed,nodev,softdep,noatime 1 1
softdep is a useful option for metadata-heavy workloads, but it is not a
magical go-fast flag. While it's possible that characterization
sorry there was an omission in my /etc/fstab
I had left out the softdep,noatime flags on the filesystems that were
running off the disk using FFS
Thanks
#begin corrected /etc/fstab##
/dev/sd0a / ffs rw,softdep,noatime 1 1
/dev/sd0d /usr/local ffs
Hello, Folks,
I'm reviewing our filesystem setup for the OpenBSD CPEs that we deploy in the
field, in order to minimise the impact of power outages / customer interference
on the boxes.
We install a 4GB root partition /
and a 2GB /usr/local (to allow the wxallowed flag for that filesystem).
We use mfs
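Combining the suggestions in this thread (read-only system partitions, the sync flag on anything writable, mfs for scratch space), a hypothetical CPE fstab might look like this; the device names, partition layout, and mfs size are illustrative only:

```
/dev/sd0a /          ffs rw,sync,noatime 1 1
/dev/sd0d /usr       ffs ro,nodev 1 2
/dev/sd0e /usr/local ffs ro,nodev,wxallowed 1 2
/dev/sd0f /var       ffs rw,sync,noatime,nodev,nosuid 1 2
swap      /tmp       mfs rw,nodev,nosuid,-s=64m 0 0
```

Read-only partitions cannot be dirtied by a power cut and need no fsck; remount them read-write (mount -u) only for upgrades.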
On 13/2/20 5:17 am, jeanfrancois wrote:
> Good evening,
>
> Very good videos are available from one of the developer of EXT2/3/4
> recommended to see.
>
> https://www.youtube.com/watch?v=2mYDFr5T4tY
>
> OpenBSD's FFS code looks awesome.
It's mature, and not worth chucking out anytime soon as
Good evening,
Very good videos are available from one of the developer of EXT2/3/4
recommended to see.
https://www.youtube.com/watch?v=2mYDFr5T4tY
OpenBSD's FFS code looks awesome.
Jean-François
Le 09/01/2020 à 03:25, Theo de Raadt a écrit :
Xiyue Deng wrote:
It would be better to
On 11:08 Fri 10 Jan, Janne Johansson wrote:
> By using the parts that OpenBSD is made up of, and not automatically moving
> to other OSes as soon as you leave the comfort zone.
I'm not sure, but it seems like from a user perspective there is nothing
wrong with amd(8). Only that it keeps using
On 20:06 Thu 09 Jan, Marc Espie wrote:
> It's been that way for ages. But no-one volunteered
> to work on this.
Does anyone even know about this? Aside from OpenBSD developers (who have
their plates full already), how can an average person find out that there
is a rusty piece of code that should be
If you want a useful project related to filesystems,
try the automounter.
Yes, that ancient code.
Look very closely. It has tendrils in NFSv2.
And some people, most prominently Theo, use amd(8).
Write an automounter that does not depend on NFSv2,
and then, most probably we can kill NFSv2.
On 09/01/2020 05:15, Xiyue Deng wrote:
Some guy asks whether there's any plan to improve file system
performance, the answer given is the code is right there if you want to
contribute. Then some other guy offers a proposal to start working on
it, and the answer now becomes you are hardly
On 09.01.2020 16:10, Ingo Schwarze wrote:
https://www.youtube.com/watch?v=HTD9Gow1wTU
And Bob gave a talk about VFS hacking at the very same
event. Might be an eye-opener for those "proposing to help".
https://www.youtube.com/watch?v=rVb8jdlP4gE
(somehow the slides didn't make it to /papers/?)
in a hackathon, and it wasn't that
one. But yeah, that talk certainly relates to the point I was
trying to make. Not my area of work really, but as far as I
understand, some things mentioned in the talk have changed a lot,
while others didn't at all, and Bob certainly still has plenty of
opportunities
On Thu, Jan 09, 2020 at 12:47:31PM +0300, Consus wrote:
> Relax, it was a joke.
Whatever, what I wrote wasn't just directed at you.
misc@ sucks a lot lately.
Den tors 9 jan. 2020 kl 02:11 skrev Ingo Schwarze :
>
> Are you aware that even Bob Beck@ is seriously scared of some
> parts of our file system code, and of touching some parts of it?
> Yes, this Bob Beck, who isn't really all that easily scared:
>
> https://www.youtube.com/watch?v=GnBbhXBDmwU
On 18:15 Wed 08 Jan, Xiyue Deng wrote:
> It would be better to point out where to start, what hard problems to
> solve, what work has been done in this area that people can continue
> to work on.
They don't remember as there is no bugtracker.
gwes writes:
Suggestion: to improve file system performance,
first document the bad behavior in detail.
Begin with examples of traces/logs of disk accesses associated
with file system operations.
Include scenarios (one hopes reproducible ones) to provoke
bad behavior.
Are reads worse than writes?
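As a minimal sketch of the kind of reproducible scenario being asked for here (the file size and block size are arbitrary choices, not from the thread):

```shell
# Time a sequential 64MB write, then a sequential read of the same file.
f=$(mktemp)
time dd if=/dev/zero of="$f" bs=1048576 count=64 2>/dev/null
sync                          # flush dirty buffers before timing the read
time dd if="$f" of=/dev/null bs=1048576 2>/dev/null
rm -f "$f"
```

Run alongside iostat(8) or ktrace(1), this is one way to collect the disk-access traces suggested above; note the read-back will largely come from the buffer cache unless the file exceeds RAM.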
In this case, a clear answer is really required.
>> > There is few code that is as difficult as a file system.
>> > There is few code that is as closely entangled with the hardest
>> > parts of the kernel as file system code.
>> > There is few code where touching it is as dangerous as touching
>> > file system code.
>> > There are few areas of the system where people get as upset
>> > when you break it as with file systems. You literally make people
>> > lose their personal data, and when they realize something went wrong,
>> > it's usually too late, the data is usually already gone.
On 9/1/20 12:20 pm, Theo de Raadt wrote:
>> and the answer now becomes you are hardly qualified for such kind of
>> work.
> I suspect you are also unqualified.
>
You don't become qualified by writing words on a mailing list… and while
I acknowledge a lack of experience in the area, I do
Xiyue Deng wrote:
> It would be better to point out where to start, what
> hard problems to solve, what work has been done in this area that people
> can continue to work on.
Looking at that list, noone here owes you any of those.
Do your own homework.
Re-reading the thread is remarkable.
Ingo Schwarze wrote:
> Even if you had, let's say, a whole year to spend full-time, you
> would not really be making any sense right now. So, could we drop
> this thread, please?
Ingo, you know that's impossible.
These are people on misc, their self-importance and optimism know
no bounds.
Hi Karel,
Thanks for the correction...
I thought ZFS was bigger than that ;)
Thanks
On Wednesday, 8 January 2020, Karel Gardas wrote:
On 1/8/20 12:44 PM, Tom Smyth wrote:
As far as I'm aware there are 2 concerns about ZFS:
1) its license is not BSD/ISC; you can use it and make money and not be sued,
but it is more restrictive than BSD/ISC.
Yes, CDDL seems to be a no-go based on past CDDL discussion which is
available
> - If we could clean-room implement a BSD-licensed
> EXT3/EXT4/BTRFS/XFS/JFS/whatever, following style(8), would there be
> interest in supporting that in OpenBSD?
And which "we" are you referring to here? Did you mean yourself,
or are you hoping that "somebody" will do it?
> There's merit in
Howdy Stuart,
On Wed, 8 Jan 2020 at 11:17, Stuart Longland wrote:
>
> On 8/1/20 1:25 am, Karel Gardas wrote:
> > And yes, ffs performance sucks, but nor me nor you provide any diff to
> > change that so we can just shut up and use what's available.
>
> Okay, question is if not ffs, then what?
>
small!)
EXT4 is also very widespread and stable, and seems to offer decent
performance.
ZFS and BTRFS are much newer, and more complicated with software RAID
functionality built in. I think these would be harder to implement from
scratch.
DIY file systems don't seem like a good plan for success…
On Mon, Dec 11, 2017 at 08:30:54AM -0700, Steve Williams wrote:
> Hi,
>
> cpio has always been my "go to" for file system duplication because it will
> re-create device nodes.
Both pax and tar do that as well.
-Otto
Hi,
cpio has always been my "go to" for file system duplication because it
will re-create device nodes.
Cheers,
Steve Williams
On 10/12/2017 11:03 AM, webmas...@bennettconstruction.us wrote:
On Mon, December 11, 2017 4:28 am, Robert Paschedag wrote:
>>
>
> Is "rsync" not an option?
>
+1 for rsync. never had a problem.
cheers.
--
x9p | PGP : 0x03B50AF5EA4C8D80 / 5135 92C1 AD36 5293 2BDF DDCC 0DFA 74AE
1524 E7EE
> Wait, you previously said your problem was with symlinks *permissions* but
> now you're saying *ownership*! I can confirm that restore(8) didn't
> preserve the permissions (thus the patch I sent), but as long as you ran it
> with sufficient privilege it should have always restored symlink
On Sun, Dec 10, 2017 at 3:24 PM, wrote:
...
>
> > > > dump
> > > > I had to move /usr/local to a bigger partition. growfs,
> > > > etc. I kept the /usr/local untouched and then dumped it
> > > > to the new partition, expecting a true duplication.
> > > > Nope.
>
>
> 'pax' and 'tar' are actually the same binary so they have the same
> limitation from the file formats that are supported, as well as any purely
> internal limitations. "pax -rw" actually has file format limitations by
> design, so it doesn't automagically free you from those
Hello,
Did you try pax?
Something like: pax -rw -pe
I don't know if this is the best tool, but I'm using it to duplicate a 1TB
drive (having lots of hard links) onto another one.
I've done it a couple of times, and I have not seen issues.
rgds
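vincent's invocation, expanded into a self-contained sketch. The temporary directories here are stand-ins for real source and destination filesystems; since pax(1) and tar(1) are the same binary on OpenBSD, the equivalent tar pipe is shown as the runnable part:

```shell
# The thread's form, run from the source tree:  pax -rw -pe . /mnt/new
# Equivalent tar pipe; -p preserves permissions on extract.
SRC=$(mktemp -d); DST=$(mktemp -d)     # stand-ins for real mount points
echo data > "$SRC/file"
ln "$SRC/file" "$SRC/hardlink"         # hard links are preserved by the copy
ln -s file "$SRC/symlink"              # symlinks are recreated, not followed
(cd "$SRC" && tar -cf - .) | (cd "$DST" && tar -xpf -)
ls -l "$DST"
```

pax's -pe additionally preserves ownership and times, which requires root; the same applies to recreating device nodes.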
I'm not able to try it right now, but would gtar
accomplish what our tar doesn't for this?
As in, maybe pull something out of it into our tar?
Chris Bennett
Forgive problems with this email.
I saw how my emails showed up on marc.info
Scary. This is just temporary.
OK. I've tried to use both methods and just don't
get true duplication.
tar
It can't work with file and directory names
that are OK in the filesystem, but too long for tar itself.
Quite a while
to more manageable chunks if at all possible.
Some of the benefits to "chunking" your data:
* If you can mount "full" file systems read-only, your fsck time drops to zero for those.
* You buy storage when you need it instead of in advance. Since storage
almost always gets bigger and cheaper
One 27T drive?!
I've got a 27T drive, single partition, about half full. Combination of
big files and lots of small ones. 32G of ECC RAM. Hardware RAID5 ATM
though I've used software RAID5 on the same array and that was good too.
I keep offline backups of everything. I think it takes around an hour to
fsck,
On 2017 May 26 (Fri) at 11:35:49 -0300 (-0300), Friedrich Locke wrote:
I created a 24T disk with ffs2. I
Hi folks,
does anybody here run OpenBSD with a file system bigger than 10TB?
How much time does boot take to bring the system up (I mean fsck)?
Are you using FFS2? With softdep?
Thanks.
Hi,
Could someone recommend good books on File Systems and File System
Programming, please?
Thanks
Siju
On Aug 13 09:45:17, Luis Useche wrote:
A fast file system for UNIX by McKusick, a classic one.
The design and implementation
On Thu, Aug 13, 2009 at 09:45:17AM -0400, Luis Useche wrote:
I can recommend some papers and books I know for theory as well as
programming.
Papers (you can find them on the internet):
A fast file system for UNIX by McKusick, a classic one.
Or in /usr/share/doc/smm/05.fastfs; make ps and
On Thu, Aug 13, 2009 at 8:04 PM, Otto Moerbeek o...@drijf.net wrote:
Or in /usr/share/doc/smm/05.fastfs; make ps and you have it nicely
formatted in postscript.
Thanks a million Luis, Jan and Otto.
--Siju
This is a patch to 4.3 release fsck_ffs to reduce the block map
memory usage in almost all cases. It uses a sparse representation
where regions of all zeros or all ones require no memory.
In the worst case (every region contains both ones and zeros)
it increases memory usage by less than 2%. In
OK,
Here is the source of the problem: the cache file generated by
webazolver. Based on the information from the webalizer
software:
Cached DNS addresses have a TTL (time to live) of 3 days. This may be
changed at compile time by editing the dns_resolv.h
You are wrong in thinking sparse files are a problem. Having sparse
files is quite a nifty feature, I would say.
Are we talking about webazolver or OpenBSD?
I'd argue that relying on the OS handling sparse files this way instead
of handling your own log data in an efficient way *is* a problem,
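The behaviour under discussion is easy to demonstrate: for a sparse file, the apparent size reported by ls and the blocks actually allocated (reported by du) diverge. A sketch with a hypothetical 10MB hole:

```shell
f=$(mktemp)
# Write a single byte at offset 10MB; everything before it is a hole.
dd if=/dev/zero of="$f" bs=1 count=1 seek=10485760 2>/dev/null
ls -l "$f"    # apparent size: 10485761 bytes
du -k "$f"    # allocated: a few KB at most
rm -f "$f"
```

This is also why copying such files naively balloons them: tools that read the file see a stream of zeros and write them all out. rsync, for example, only recreates holes when given its -S/--sparse option.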
On Tue, Jan 17, 2006 at 02:36:44PM -0500, Daniel Ouellet wrote:
[...] But having a
file that is, let's say, 1MB of valid data grow very quickly to 4 and
6GB, and take time to rsync between servers, in one instance
filled the file system and created other problems. (: I wouldn't
Hi all,
First let me start with my apology to some of you for having wasted
your time!
As much as this was/is interesting and puzzling to me, and as much as I am
obviously trying to get my head around this issue and the usage of sparse
files, the big picture of it is obviously something missing in
Ted Unangst wrote:
run du on both filesystems and compare the results.
OK, just because I am curious more than I think there is a problem, and
because I am still puzzled by what Otto and Ted said, here is what I
did, and the answer to Otto's question as well.
- Both systems run 3.8. (www1
Otto Moerbeek wrote:
Now I agree that the difference you are seeing is larger than I would
expect. I would run ls -laR or du -k on the filesystems and diff the
results to see if the contents are really the same. My bet is that
you'll discover some files that are not on the system with a smaller
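Otto's suggestion can be scripted directly. In this sketch two throwaway directories stand in for the two servers' filesystems, and the diff of the sorted du listings pinpoints the path that exists on only one side:

```shell
a=$(mktemp -d); b=$(mktemp -d)
echo data > "$a/common"; cp "$a/common" "$b/common"
echo extra > "$a/only-on-a"            # the discrepancy we want to surface
(cd "$a" && du -ak . | sort -k2) > "$a.du"
(cd "$b" && du -ak . | sort -k2) > "$b.du"
diff "$a.du" "$b.du" || true           # diff exits non-zero when they differ
```

On the real servers you would generate each listing locally and compare them on one side; per-file sizes will also expose any file that is sparse on one system but not the other.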
Just a bit more information on this.
As I couldn't understand if this was an AMD64 issue, as illogical as that
might be, I decided to put that to the test. So, I pulled out another
AMD64 server; it's running 3.8, same fsize and bsize, one drive, etc.
I used rsync to mirror the content and the
Here is something I can't quite get my head around, and I don't really
understand why it is, other than maybe the fsize of each mount point
not being processed properly on AMD64, but that's just an idea. See lower below
for why I think it might be the case. In any case, I would welcome a
logical
Since the bsize and fsize differ, it is expected that the used kbytes of the
file systems differ. Also, the inode table size will not be the same.
You're comparing apples and oranges.
BTW, you don't say which version(s) you are running. That's bad. since
some bugs were fixed
> Since the bsize and fsize differ, it is expected that the used kbytes of the
> file systems differ. Also, the inode table size will not be the same.
Not sure that I would agree fully with that, but I defer to your
judgment. Yes, there will and should be differences in usage