Re: desire for journaled filesystem

2023-09-10 Thread John Holland
I appreciate your taking the time to send that, and also thanks to Nick 
Holland for posting earlier.



I'm writing this on the laptop in question.


It seems like, as with so many things, it's hard to know who to believe 
and things are probably more nuanced than some would say. That's my way 
of kind of waffling. There's no shortage of ZFS fanboys, so it's 
interesting to see what seems to be a pretty informed opinion to the 
contrary.



I haven't noticed any issues with this machine since the kernel 
panic/fsck issue.  Certainly the links you posted show people having bad 
experiences with (or in spite of) ZFS.



When I restarted the machine and it dropped to a shell saying there 
were filesystem issues, the ensuing fsck process was worse than any I 
had seen before. That kind of alarmed me.




Thanks,

John



On 9/10/23 06:26, tetrosalame wrote:

On 05/09/2023 14:54, John Holland wrote:
I just had a kernel panic when reloading a firefox tab pointed at 


[...]

I've really been enjoying OpenBSD but I think it could really use a 
journaled filesystem. I believe I have the correct options in fstab for 


[...]

Journals *might* make your filesystem consistent faster. Except when 
they don't. When things go bad, you come to learn that journals are 
different: some cover only metadata, while others pretend to care 
about your data too (then you read the code and find out they mostly 
don't). Complex things tend to fail really hard.


To get the system up quickly in case of disaster, OpenBSD has altroot; 
see daily(8). From there it is just fsck's job, or time to discover 
whether your backups are in good shape. OpenBSD has solid backup tools 
in base, too (dump, tar, openrsync...).


> OpenZFS? License issues? Hammer? Anything?

Please, read this 2017 message from Nick Holland:
https://marc.info/?l=openbsd-misc&m=148894780327753&w=2

Re-read the paragraph about ZFS. I couldn't agree more. And when he 
wrote that, ZFS was, for the most part, Solaris -> Illumos based. Back 
then, it kinda worked. Now, OpenZFS is ZFS on Linux and shares the 
same bad attitude.
Sometimes people use ZFS to "avoid" fsck, because, you know, ZFS (and 
HAMMER and God-knows-what-the-latest-Linux-thing-is-named) doesn't 
need fsck. Except when it does (read this horror story: 
http://www.michellesullivan.org/blog/1726).


If I got a cent every time I heard "sorry friend, your pool is gone, 
restore from backup", I could retire now.







Re: desire for journaled filesystem

2023-09-10 Thread tetrosalame

On 05/09/2023 14:54, John Holland wrote:
I just had a kernel panic when reloading a firefox tab pointed at 


[...]

I've really been enjoying OpenBSD but I think it could really use a 
journaled filesystem. I believe I have the correct options in fstab for 


[...]

Journals *might* make your filesystem consistent faster. Except when 
they don't. When things go bad, you come to learn that journals are 
different: some cover only metadata, while others pretend to care 
about your data too (then you read the code and find out they mostly 
don't). Complex things tend to fail really hard.


To get the system up quickly in case of disaster, OpenBSD has altroot; 
see daily(8). From there it is just fsck's job, or time to discover 
whether your backups are in good shape. OpenBSD has solid backup tools 
in base, too (dump, tar, openrsync...).
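
For instance, a minimal altroot setup might look like this (the DUID 
and partition here are made up; see daily(8) and fstab(5) for the 
details):

# /etc/fstab: a spare partition on a second disk, marked "xx" so it
# is never mounted automatically
0a1b2c3d4e5f6a7b.a /altroot ffs xx 0 0

# tell the nightly daily(8) run to duplicate / onto it
echo ROOTBACKUP=1 >> /etc/daily.local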


> OpenZFS? License issues? Hammer? Anything?

Please, read this 2017 message from Nick Holland:
https://marc.info/?l=openbsd-misc&m=148894780327753&w=2

Re-read the paragraph about ZFS. I couldn't agree more. And when he 
wrote that, ZFS was, for the most part, Solaris -> Illumos based. Back 
then, it kinda worked. Now, OpenZFS is ZFS on Linux and shares the 
same bad attitude.
Sometimes people use ZFS to "avoid" fsck, because, you know, ZFS (and 
HAMMER and God-knows-what-the-latest-Linux-thing-is-named) doesn't 
need fsck. Except when it does (read this horror story: 
http://www.michellesullivan.org/blog/1726).


If I got a cent every time I heard "sorry friend, your pool is gone, 
restore from backup", I could retire now.






Re: desire for journaled filesystem

2023-09-08 Thread Rudolf Leitgeb
If push comes to shove, the journaling file system may lose more data, 
but it will be consistent. FFS will have written as much as possible, 
sometimes without association with an inode; that's when people 
encounter full lost+found directories.

Neither file system will correctly record the most recent additions,
but will most likely hold on to all the old stuff. Backup is therefore
of little help in most situations like this.

From this perspective the main difference is that a journaling FS 
will be consistent and bootable after each and every crash (and I've 
seen hundreds), whereas I have positively seen instances in which FFS 
would throw me into a root console and ask for a manual fsck.

The latter is not much of a problem on a desktop (assuming you know 
how to clean up), but it is definitely a big nuisance for routers or 
off-site machines. I do agree that home routers (like my EdgeRouter) 
should run on a read-only FS to avoid this problem the right way, not 
scream for FS journals.
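
In fstab(5) terms, such a read-only setup might be as simple as this 
sketch (the DUID is invented; a memory filesystem takes the few bits 
that must stay writable, size syntax per mount_mfs(8)):

# root mounted read-only: nothing for fsck to repair after a crash
52fa38c43a9e11d0.a / ffs ro,noatime 1 1
# scratch space lives in RAM and vanishes at reboot
swap /tmp mfs rw,nodev,nosuid,-s=64m 0 0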

On Wed, 2023-09-06 at 08:05 +0200, Janne Johansson wrote:
> On Tue, 5 Sep 2023 at 20:53, John Holland  wrote:
> > 
> > I have a backup that is at least 2 days old offsite at a friend’s
> > house. It would be a bit of a pain to go retrieve it, but I could
> > do that.
> > 
> > Short of that, I have 4000+ files in lost+found with names like
> > #1094827. What can I do with those? I tried running “file” on the
> > first 50 via xargs and they mostly at least purport to be some sort
> > of intact file. How can I determine what they are? Please don’t
> > suggest that I manually use “file” and then an appropriate program
> > to examine each one in turn
> > 
> 
> Those "files" are fragments of files, named after the inode number,
> which you get when fsck finds a not-complete chain of
> directory-entry/filename -> inode -> linked list of file-contents.
> 
> While fsck can't figure out the filename and where in the directory
> structure it is meant to belong, or possibly if it is only some part
> of a whole file, it will give you a chance to recover at least partial
> contents from the lost+found folder. Sometimes this might be awesome
> if you can dig out some key or pw needed for something super
> important, sometimes you get half of a database file and that is
> probably close to zero usefulness.
> 
> That said, if it was (as written later) browser cache and partial
> downloads, it is not very surprising that data files exist which are
> not yet linked during the download, or temp files unlinked for later
> deletion by the FS, had the computer not crashed. If you had something
> like zfs, those half-written or half-deleted files might just have
> been totally missing instead of ending up in lost+found, since they
> represent a point-in-time in which the FS is not in a consistent
> state, so the end result would mostly have been the same, this data is
> not visible under your home account after the crash.
> 
> Journaling has some great advantages, like write aggregation if your
> journal can be placed on a faster device and when it comes to quick
> checkups after crashes, an empty journal often means the fs was not in
> a broken state and probably needs less or no total checkup by fsck
> tools, which is nice.
> It will not fix a half-downloaded ISO or unlinked temp files that you
> for some reason want to look at afterwards, nor will the journal fix
> any kind of broken sectors, though checksumming file systems will of
> course help you find the errors before handing the bad sectors over to
> your applications.
> 



Re: desire for journaled filesystem

2023-09-08 Thread Gregory Edigarov
On Wed, 6 Sep 2023 22:52:59 -0400
Nick Holland  wrote:

> On 9/6/23 08:23, John Holland wrote:
> > Janne-
> > 
> > Thanks for all that useful information.
> > 
> > Others: this is a ThinkPad that's not on all the time, so a cron
> > backup is not that good. I actually back up manually, currently
> > using "borg" for that. I mostly just do email and web on it so
> > there's probably nothing serious lost. In a few days I will have
> > the external disk with the backup back here and I may see what I
> > can find on it. My /home partition has a lot of data on it because
> > I built an AWS OpenBSD machine image on it. But it would be good to
> > see whether my system is working correctly.
> 
> Cats are fuzzy
> Fire is hot
> Journaling file systems are complicated
> Backups are important.

Well, speaking about backups, what I (somewhat) miss on OpenBSD is 
the ability to make a snapshot of a filesystem (in the style of 
FreeBSD's mksnap), but I can definitely live without it.
My sources are in git, my data backups live in borg, and the system is 
subject to reinstall in case of disaster.
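
For what it's worth, the borg side of that is only a few commands 
(the repository path here is invented):

# one-time: create an encrypted repository
borg init --encryption=repokey /mnt/backup/borg-repo
# each run: a new archive, deduplicated against the previous ones
borg create /mnt/backup/borg-repo::'{hostname}-{now}' ~/
# keep a week of dailies and a month of weeklies
borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg-repo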

 
 



Re: desire for journaled filesystem

2023-09-07 Thread Janne Johansson
On Fri, 8 Sep 2023 at 03:47, Steve Litt  wrote:
>
> My main computer is Void Linux. If I had to restore from backup every
> time the disks became mildly messed up, all my time would be spent
> backing up and restoring.
>
> I remember back in the '90s and early '00s, before journalling, every
> system crash was grounds for an ulcer.

Then again, ext2/3/4 run in async mode for all operations, which is 
why e2fsck takes such a long time. The act of creating a new file 
needs at least four operations (allocating space for the contents, 
adding a filename entry to the directory, creating an inode for the 
metadata and writing out the actual contents).

If you run async file systems, these can happen in any random order,
and if you have a crash while files are being created (and deleted)
any of these may or may not have happened. BSD ffs does these mostly
in order (where softdep can change/delay some of them) which means
that fsck for ffs can know that if step 3 isn't done, step 4 will not
have started either.

For e2fsck, all possible combinations must be explored. Adding to 
this, ext filesystems don't seem to have any kind of way to express "I 
found an unchecked error so I am in need of a detailed fsck", which is 
why distros using ext2 would have "magic" files: touch /autofsck at 
boot and remove it at clean shutdown, to indicate whether the last 
shutdown was good or bad.
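
Roughly, that scheme amounts to this (an illustrative sketch, not any 
particular distro's actual script):

# early in boot: the marker survived, so the last shutdown was unclean
if [ -e /autofsck ]; then
        fsck -y
fi
touch /autofsck
# at the end of a clean shutdown only:
rm -f /autofsck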

Even with this simplistic method, they would STILL force a fsck every 
100 days or 58 reboots, because, well, you can't tell if there ever 
was an error during the last 100 days, since there is no method to 
mark the known-broken fs as needing fsck.

In the light of this, the need for a journal (even at the cost of 
slightly more IO at times) becomes obvious. The fine folks over at the 
penguin camp would rather write "I am about to create 
/tmp/tmp.FSGSGRg3" to a journal, then send those four operations, then 
clear the journal entry again, just so the four ops in the middle can 
stay async, than "suffer" some ordering in the file system operations.

Now, BSD can run softdep, which speeds some writes up at some cost and 
some added risk, and you can certainly mount async and add really 
large risks on top; but for each of those two steps I would make very 
sure that I had either useless data or (as suggested) good backups in 
place.
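
In fstab(5) terms that trade-off is one word; the OP's /home line with 
async swapped in would read:

# fastest and least safe: only for data you can afford to lose
1f08fbc2b303f0ef.k /home ffs rw,async,noatime,nodev,nosuid 1 2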

As Nick wrote, BSD people tend to like the fact that when your IO 
subsystem says "the data is on the disk", it actually is there. Ext4 
had a nice period* when "on the disk" meant "it will be on the disk in 
two and a half minutes", even for atomic operations. You can imagine 
how many people managed to have issues or lose power in that span of 
150 seconds. I think they shortened the time, but the amount of tears 
needed for the "go fast even if you go in the wrong direction" crowd 
to change their minds was quite large.

To me, it is like USB writing speeds. OpenBSD will have dog-slow 
speed, but it will also allow you to unmount the device when the write 
is actually finished. Other common OSes will tell you "done!" in a few 
seconds, yet the stick is still blinking, and when you ask to unmount 
it still takes the full amount of time, because the OS was just lying 
to you about the writes being finished. If I have to wait 30 seconds 
to write a large ISO to my stick, I'd rather have the computer show me 
it is working, instead of it pretending I wrote the file in "three" 
seconds while I read comics for 27 seconds before unmounting, so I 
don't notice the discrepancy.

*)  
https://www.pointsoftware.ch/2014/02/05/linux-filesystems-part-4-ext4-vs-ext3-and-why-delayed-allocation-is-bad/

-- 
May the most significant bit of your life be positive.



Re: desire for journaled filesystem

2023-09-07 Thread Steve Litt
My main computer is Void Linux. If I had to restore from backup every
time the disks became mildly messed up, all my time would be spent
backing up and restoring.

I remember back in the '90s and early '00s, before journalling, every
system crash was grounds for an ulcer.

I didn't know that the main OpenBSD filesystem didn't journal.

SteveT


Ronan Viel said on Tue, 5 Sep 2023 16:37:19 +0200

>What about backup? 
>tar is ready to use.
>
>Ronan
>
>> On 5 Sep 2023, at 16:12, John Holland  wrote:
>> 
>> I just had a kernel panic when reloading a firefox tab pointed at
>> facebook. After restarting, all the filesystems had errors but /home
>> was particularly bad and caused the boot to stop and prompt if I
>> wanted to enter a root shell.
>> 
>> 
>> I eventually got fsck to mark the /home filesystem clean but it
>> found >4000 lost files that it moved to lost+found. I am not so
>> experienced with this, running "file" on a few of them shows that
>> they may be intact files but they have numeric names now.
>> 
>> 
>> I've really been enjoying OpenBSD but I think it could really use a
>> journaled filesystem. I believe I have the correct options in fstab
>> for the best results:
>> 
>> 1f08fbc2b303f0ef.k /home ffs rw,softdep,noatime,nodev,nosuid 1 2
>> 
>> 
>> I was just thinking how much I was enjoying OpenBSD compared to some
>> others when this happened.
>> 
>> 
>> OpenZFS? License issues? Hammer? Anything?
>>   
>



Re: desire for journaled filesystem

2023-09-06 Thread Nick Holland

On 9/6/23 08:23, John Holland wrote:

Janne-

Thanks for all that useful information.

Others: this is a ThinkPad that's not on all the time, so a cron backup
is not that good. I actually back up manually, currently using "borg"
for that. I mostly just do email and web on it so there's probably
nothing serious lost. In a few days I will have the external disk with
the backup back here and I may see what I can find on it. My /home
partition has a lot of data on it because I built an AWS OpenBSD machine
image on it. But it would be good to see whether my system is working
correctly.


Cats are fuzzy
Fire is hot
Journaling file systems are complicated
Backups are important.

That's four mostly unrelated topics.  I'd argue there is more
connection between cats and journaled file systems than there is
between journaled filesystems and backups, in that both cats and
fancy file systems can be adorably cute and cause lots of data
loss (and the backups are how you recover from bad file systems
and cat mischief).

Put bluntly, I turn my OpenBSD machines off by yanking power
from them often, and I've been doing that for well over 20 years
(since OpenBSD v2.5).  Sometimes accidentally (bad laptop battery,
power outage, tripping over a power cord), sometimes out of lazy
indifference and not feeling like logging in and doing it right.
Yeah, I get lots of scary looking messages about my data being
turned to hash, but you know how often I've had actual data loss
because of that?  Only when I didn't save my work (which does
happen too often).

The ONLY time I remember I had an "event" that caused actual file
system corruption that wasn't easily fixed with a routine fsck
was when a SCSI controller literally fell out of the computer
while the computer was on. Yeah, ended up reformatting that one.
Pretty sure your journaled file systems would have been in
pretty much the same place, and ZFS would have shit itself on
an unrelated computer across the room in sympathy.  (oh, there
was that incident with the nail gun going through the hard disk,
but I'm pretty sure no FS was gonna save that one).

You know how often your beloved "journaling file system" would
have saved my data?  I can't think of one time.  I'm sure someone
somewhere will swear it saved their data, but that's hard to
prove.  I'm just lacking experience losing data on FFS that
could have been saved by a "better" FS.  Twenty+ years of power
outages, broken hardware, testing software, tripping over power
cords and being lazy with hundreds of machines, and can't say I
ever said, "Gee, I wish I had a journaled file system, that
really would have saved me right there".

You know how often those piece of shit Linux File System of the
Month have bit me in various ways?  A lot.  Just spent the last
week dealing with a problem that turned out to be 100% CAUSED
by BTRFS.  A problem that just wouldn't have been a thing if it
was running FFS.  It was literally "features" taking down a
customer facing system, over and over.

You are trying to "fix" a non-problem by making things more
complicated.  Not gonna work the way you expect.

Nick.



Re: desire for journaled filesystem

2023-09-06 Thread John Holland

Janne-

Thanks for all that useful information.

Others: this is a ThinkPad that's not on all the time, so a cron backup 
is not that good. I actually back up manually, currently using "borg" 
for that. I mostly just do email and web on it so there's probably 
nothing serious lost. In a few days I will have the external disk with 
the backup back here and I may see what I can find on it. My /home 
partition has a lot of data on it because I built an AWS OpenBSD machine 
image on it. But it would be good to see whether my system is working 
correctly.



I still think it would be nice if OpenBSD could use a journaling 
filesystem, but I do not have the expertise to do anything to contribute 
to that.


Regards,

John


On 9/6/23 02:05, Janne Johansson wrote:

On Tue, 5 Sep 2023 at 20:53, John Holland  wrote:

I have a backup that is at least 2 days old offsite at a friend’s house. It 
would be a bit of a pain to go retrieve it, but I could do that.

  Short of that, I have 4000+ files in lost+found with names like #1094827. 
What can I do with those? I tried running “file” on the first 50 via xargs and 
they mostly at least purport to be some sort of intact file. How can I 
determine what they are? Please don’t suggest that I manually use “file” and 
then an appropriate program to examine each one in turn


Those "files" are fragments of files, named after the inode number,
which you get when fsck finds a not-complete chain of
directory-entry/filename -> inode -> linked list of file-contents.

While fsck can't figure out the filename and where in the directory
structure it is meant to belong, or possibly if it is only some part
of a whole file, it will give you a chance to recover at least partial
contents from the lost+found folder. Sometimes this might be awesome
if you can dig out some key or pw needed for something super
important, sometimes you get half of a database file and that is
probably close to zero usefulness.

That said, if it was (as written later) browser cache and partial
downloads, it is not very surprising that data files exist which are
not yet linked during the download, or temp files unlinked for later
deletion by the FS, had the computer not crashed. If you had something
like zfs, those half-written or half-deleted files might just have
been totally missing instead of ending up in lost+found, since they
represent a point-in-time in which the FS is not in a consistent
state, so the end result would mostly have been the same, this data is
not visible under your home account after the crash.

Journaling has some great advantages, like write aggregation if your
journal can be placed on a faster device and when it comes to quick
checkups after crashes, an empty journal often means the fs was not in
a broken state and probably needs less or no total checkup by fsck
tools, which is nice.
It will not fix a half-downloaded ISO or unlinked temp files that you
for some reason want to look at afterwards, nor will the journal fix
any kind of broken sectors, though checksumming file systems will of
course help you find the errors before handing the bad sectors over to
your applications.





Re: desire for journaled filesystem

2023-09-06 Thread Stuart Henderson
On 2023-09-05, Ronan Viel  wrote:
> What about backup? 
> tar is ready to use.

The file formats currently supported by tar/pax don't work for all 
situations (they can't store all filenames). For a base OS tool for 
backup, dump is probably a better choice.
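
A minimal dump/restore round trip, for instance (the output path is 
invented; see dump(8) and restore(8)):

# full (level 0) dump of /home, recording the date in /etc/dumpdates
doas dump -0au -f /mnt/backup/home.dump /home
# browse the dump and pull files back out interactively
doas restore -i -f /mnt/backup/home.dump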

https://marc.info/?l=openbsd-tech&m=169393159502429&w=2 will eventually help 
for tar.




Re: desire for journaled filesystem

2023-09-05 Thread Janne Johansson
On Tue, 5 Sep 2023 at 20:53, John Holland  wrote:
>
> I have a backup that is at least 2 days old offsite at a friend’s house. It 
> would be a bit of a pain to go retrieve it, but I could do that.
>
>  Short of that, I have 4000+ files in lost+found with names like #1094827. 
> What can I do with those? I tried running “file” on the first 50 via xargs 
> and they mostly at least purport to be some sort of intact file. How can I 
> determine what they are? Please don’t suggest that I manually use “file” and 
> then an appropriate program to examine each one in turn
>

Those "files" are fragments of files, named after the inode number,
which you get when fsck finds a not-complete chain of
directory-entry/filename -> inode -> linked list of file-contents.

While fsck can't figure out the filename and where in the directory
structure it is meant to belong, or possibly if it is only some part
of a whole file, it will give you a chance to recover at least partial
contents from the lost+found folder. Sometimes this might be awesome
if you can dig out some key or pw needed for something super
important, sometimes you get half of a database file and that is
probably close to zero usefulness.

That said, if it was (as written later) browser cache and partial
downloads, it is not very surprising that data files exist which are
not yet linked during the download, or temp files unlinked for later
deletion by the FS, had the computer not crashed. If you had something
like zfs, those half-written or half-deleted files might just have
been totally missing instead of ending up in lost+found, since they
represent a point-in-time in which the FS is not in a consistent
state, so the end result would mostly have been the same, this data is
not visible under your home account after the crash.

Journaling has some great advantages, like write aggregation if your
journal can be placed on a faster device and when it comes to quick
checkups after crashes, an empty journal often means the fs was not in
a broken state and probably needs less or no total checkup by fsck
tools, which is nice.
It will not fix a half-downloaded ISO or unlinked temp files that you
for some reason want to look at afterwards, nor will the journal fix
any kind of broken sectors, though checksumming file systems will of
course help you find the errors before handing the bad sectors over to
your applications.

-- 
May the most significant bit of your life be positive.



Re: desire for journaled filesystem

2023-09-05 Thread deich...@placebonol.com
A couple of questions: did you let the OpenBSD installer create the 
filesystems, or did you define a custom layout?

FWIW, you should have a pretty good idea of what is in /home. I reckon 
you could ignore the lost+found contents, as they would be related to 
some application that was running when the fault occurred.

73
diana 

On September 5, 2023 11:31:26 AM MDT, John Holland  
wrote:
>I have a backup that is at least 2 days old offsite at a friend’s house. It 
>would be a bit of a pain to go retrieve it, but I could do that. 
>
> Short of that, I have 4000+ files in lost+found with names like #1094827. 
> What can I do with those? I tried running “file” on the first 50 via xargs 
> and they mostly at least purport to be some sort of intact file. How can I 
> determine what they are? Please don’t suggest that I manually use “file” and 
> then an appropriate program to examine each one in turn
>
>> On Sep 5, 2023, at 1:17 PM, Andreas Kähäri  wrote:
>> 
>> On Tue, Sep 05, 2023 at 08:54:58AM -0400, John Holland wrote:
>>> I just had a kernel panic when reloading a firefox tab pointed at facebook.
>>> After restarting, all the filesystems had errors but /home was particularly
>>> bad and caused the boot to stop and prompt if I wanted to enter a root
>>> shell.
>>> 
>>> 
>>> I eventually got fsck to mark the /home filesystem clean but it found >4000
>>> lost files that it moved to lost+found. I am not so experienced with this,
>>> running "file" on a few of them shows that they may be intact files but they
>>> have numeric names now.
>> [cut]
>> 
>> 
>> A regular external backup would have saved your data no matter what
>> filesystem you might have been using.  There are a few different backup
>> solutions available in the ports tree.  I use restic, both on OpenBSD
>> and macOS.
>> 
>> 
>> -- 
>> Andreas (Kusalananda) Kähäri
>> SciLifeLab, NBIS, ICM
>> Uppsala University, Sweden
>> 
>> .
>> 
>


Re: desire for journaled filesystem

2023-09-05 Thread Geoff Steckel

On 9/5/23 15:41, Rudolf Leitgeb wrote:

On Tue, 2023-09-05 at 14:16 -0400, John Holland wrote:

So this gave me the list of the files with what they seem to be in
groups. I think a lot of them are browser cache, jpegs, pngs... I
looked at some of the gzipped ones and they were web pages and css
files.

There are some that don't make sense, for instance #16251989 is listed
as "ISO Media" and contains binary data and then some HTML.

So: great surprise, most of these lost+found files are browser cache
stuff, which was created just before the big crash. This is the kind of
stuff which hadn't been written to disk yet. It's extremely unlikely
that you'll find a file in lost+found which you hadn't touched in the
hours/days before the crash.

I would suggest you ignore this lost+found directory for now; only if
you can't find some important data, search this directory for strings
which should be in the missing file.

Maybe that's why OpenBSD never cared about a journaling FS ...


What I've done to recover filenames, etc.

restore your backup somewhere

doas find  -type f -exec cksum {} + > sums.restored
doas find  -type f -exec cksum {} + > sums.damaged

munge with sed, awk, uniq & friends
-> list of undamaged files
-> list of possible undamaged files in orphan directories
-> list of new? files & clues to where they lived
then
  file &c on interesting orphans

Not necessarily quick, but as Mr. Leitgeb suggests, this should give
a very strong clue as to which files are caches (and thus ignorable)
and where the orphaned files should live.
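
One way to do that munging, assuming cksum's usual "checksum size 
path" output and filenames without spaces:

# key each file by checksum+size, then join the two lists; matched
# lines pair a lost+found orphan with its original path
awk '{print $1"-"$2, $3}' sums.restored | sort > restored.keyed
awk '{print $1"-"$2, $3}' sums.damaged  | sort > damaged.keyed
join restored.keyed damaged.keyed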

There have been studies about typical lifetimes of files. IIRC there's
a glob of short-lived browser, object, temp, etc. files, and another
glob of system files updated regularly but infrequently.

Most live for weeks to years.

A cron job which does hierarchical dumps daily, weekly, etc., with the
output sent offsite, might help.
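
Something like this in root's crontab, say (paths and times invented; 
dump levels above 0 only save what changed since the last lower level):

# weekly full dump on Sunday, incrementals the rest of the week
0 1 * * 0   /sbin/dump -0au -f /mnt/backup/home.0 /home
0 1 * * 1-6 /sbin/dump -1au -f /mnt/backup/home.1 /home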



Re: desire for journaled filesystem

2023-09-05 Thread Rudolf Leitgeb
On Tue, 2023-09-05 at 14:16 -0400, John Holland wrote:
> So this gave me the list of the files with what they seem to be in
> groups. I think a lot of them are browser cache, jpegs, pngs... I
> looked at some of the gzipped ones and they were web pages and css
> files.
> 
> There are some that don't make sense, for instance #16251989 is
> listed as "ISO Media" and contains binary data and then some HTML.

So: great surprise, most of these lost+found files are browser cache 
stuff, which was created just before the big crash. This is the kind of
stuff which hadn't been written to disk yet. It's extremely unlikely
that you'll find a file in lost+found which you hadn't touched in the 
hours/days before the crash.

I would suggest you ignore this lost+found directory for now; only if
you can't find some important data, search this directory for strings 
which should be in the missing file.

Maybe that's why OpenBSD never cared about a journaling FS ...



Re: desire for journaled filesystem

2023-09-05 Thread Ronan Viel
What about backup? 
tar is ready to use.

Ronan

> On 5 Sep 2023, at 16:12, John Holland  wrote:
> 
> I just had a kernel panic when reloading a firefox tab pointed at facebook. 
> After restarting, all the filesystems had errors but /home was particularly 
> bad and caused the boot to stop and prompt if I wanted to enter a root shell.
> 
> 
> I eventually got fsck to mark the /home filesystem clean but it found >4000 
> lost files that it moved to lost+found. I am not so experienced with this, 
> running "file" on a few of them shows that they may be intact files but they 
> have numeric names now.
> 
> 
> I've really been enjoying OpenBSD but I think it could really use a journaled 
> filesystem. I believe I have the correct options in fstab for the best 
> results:
> 
> 1f08fbc2b303f0ef.k /home ffs rw,softdep,noatime,nodev,nosuid 1 2
> 
> 
> I was just thinking how much I was enjoying OpenBSD compared to some others 
> when this happened.
> 
> 
> OpenZFS? License issues? Hammer? Anything?
> 



Re: desire for journaled filesystem

2023-09-05 Thread John Holland
Sorry to kind of have an attitude with the "please don't suggest..."   
-- I realized I could kind of do something like that.



as root, using bash I ran

#cd /home/lost+found

#echo "" > ~jholland/list; for i in `ls`; do file $i >> ~jholland/list; done

and then as me (jholland) I ran

cat list | sort -k 2 | less

So this gave me the list of the files with what they seem to be in 
groups. I think a lot of them are browser cache, jpegs, pngs... I looked 
at some of the gzipped ones and they were web pages and css files.
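
(In hindsight a single pipeline would have done the same, assuming the 
shell can expand all 4000-odd names at once:

file /home/lost+found/* | sort -t: -k2 | less

sorting on the description that file prints after the colon.)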


There are some that don't make sense, for instance #16251989 is listed 
as "ISO Media " and contains binary data and then some HTML.


I will have the backup from a couple days ago back here in a week if I 
need it.





On 9/5/23 13:31, John Holland wrote:

I have a backup that is at least 2 days old offsite at a friend’s house. It 
would be a bit of a pain to go retrieve it, but I could do that.

  Short of that, I have 4000+ files in lost+found with names like #1094827. 
What can I do with those? I tried running “file” on the first 50 via xargs and 
they mostly at least purport to be some sort of intact file. How can I 
determine what they are? Please don’t suggest that I manually use “file” and 
then an appropriate program to examine each one in turn


On Sep 5, 2023, at 1:17 PM, Andreas Kähäri  wrote:

On Tue, Sep 05, 2023 at 08:54:58AM -0400, John Holland wrote:

I just had a kernel panic when reloading a firefox tab pointed at facebook.
After restarting, all the filesystems had errors but /home was particularly
bad and caused the boot to stop and prompt if I wanted to enter a root
shell.


I eventually got fsck to mark the /home filesystem clean but it found >4000
lost files that it moved to lost+found. I am not so experienced with this,
running "file" on a few of them shows that they may be intact files but they
have numeric names now.

[cut]


A regular external backup would have saved your data no matter what
filesystem you might have been using.  There are a few different backup
solutions available in the ports tree.  I use restic, both on OpenBSD
and macOS.


--
Andreas (Kusalananda) Kähäri
SciLifeLab, NBIS, ICM
Uppsala University, Sweden

.







Re: desire for journaled filesystem

2023-09-05 Thread John Holland
I have a backup that is at least 2 days old offsite at a friend’s house. It 
would be a bit of a pain to go retrieve it, but I could do that. 

 Short of that, I have 4000+ files in lost+found with names like #1094827. What 
can I do with those? I tried running “file” on the first 50 via xargs and they 
mostly at least purport to be some sort of intact file. How can I determine 
what they are? Please don’t suggest that I manually use “file” and then an 
appropriate program to examine each one in turn

> On Sep 5, 2023, at 1:17 PM, Andreas Kähäri  wrote:
> 
> On Tue, Sep 05, 2023 at 08:54:58AM -0400, John Holland wrote:
>> I just had a kernel panic when reloading a firefox tab pointed at facebook.
>> After restarting, all the filesystems had errors but /home was particularly
>> bad and caused the boot to stop and prompt if I wanted to enter a root
>> shell.
>> 
>> 
>> I eventually got fsck to mark the /home filesystem clean but it found >4000
>> lost files that it moved to lost+found. I am not so experienced with this,
>> running "file" on a few of them shows that they may be intact files but they
>> have numeric names now.
> [cut]
> 
> 
> A regular external backup would have saved your data no matter what
> filesystem you might have been using.  There are a few different backup
> solutions available in the ports tree.  I use restic, both on OpenBSD
> and macOS.
> 
> 
> -- 
> Andreas (Kusalananda) Kähäri
> SciLifeLab, NBIS, ICM
> Uppsala University, Sweden
> 
> .
> 



Re: desire for journaled filesystem

2023-09-05 Thread Andreas Kähäri
On Tue, Sep 05, 2023 at 08:54:58AM -0400, John Holland wrote:
> I just had a kernel panic when reloading a firefox tab pointed at facebook.
> After restarting, all the filesystems had errors but /home was particularly
> bad and caused the boot to stop and prompt if I wanted to enter a root
> shell.
> 
> 
> I eventually got fsck to mark the /home filesystem clean but it found >4000
> lost files that it moved to lost+found. I am not so experienced with this,
> running "file" on a few of them shows that they may be intact files but they
> have numeric names now.
[cut]


A regular external backup would have saved your data no matter what
filesystem you might have been using.  There are a few different backup
solutions available in the ports tree.  I use restic, both on OpenBSD
and macOS.
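
For the curious, basic restic usage is only a few commands (the 
repository location here is made up):

# one-time repository setup; prompts for a password
restic init --repo /mnt/backup/restic-repo
# each backup run is deduplicated against earlier ones
restic backup --repo /mnt/backup/restic-repo /home
# list the snapshots stored in the repository
restic snapshots --repo /mnt/backup/restic-repo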


-- 
Andreas (Kusalananda) Kähäri
SciLifeLab, NBIS, ICM
Uppsala University, Sweden

.



Re: desire for journaled filesystem

2023-09-05 Thread John Holland

I turned off softdep.

My backups aren't super-current so I don't see restoring from backup as 
a good idea.


My point was, OpenBSD is behind some (most?) other OSes in the 
filesystem department. I know, "patches welcome" :) ..




On 9/5/23 08:54, John Holland wrote:
I just had a kernel panic when reloading a firefox tab pointed at 
facebook. After restarting, all the filesystems had errors but /home 
was particularly bad and caused the boot to stop and prompt if I 
wanted to enter a root shell.



I eventually got fsck to mark the /home filesystem clean but it found 
>4000 lost files that it moved to lost+found. I am not so experienced 
with this, running "file" on a few of them shows that they may be 
intact files but they have numeric names now.



I've really been enjoying OpenBSD but I think it could really use a 
journaled filesystem. I believe I have the correct options in fstab 
for the best results:


1f08fbc2b303f0ef.k /home ffs rw,softdep,noatime,nodev,nosuid 1 2


I was just thinking how much I was enjoying OpenBSD compared to some 
others when this happened.



OpenZFS? License issues? Hammer? Anything?





Re: desire for journaled filesystem

2023-09-05 Thread Manuel Solis
As the book "OpenBSD Mastery: Filesystems" (Michael W. Lucas, 2023) 
reads on the back cover:

“Many users assume that their advanced filesystem is better than UFS
because they have so many features—snapshots, checksums, compression,
sophisticated caching algorithms, and so on—while all UFS has ever done is
muck about putting data on disk. But, conversely, UFS users believe their
filesystem is better for exactly the same reasons.”
—Hitchhikers Guide to OpenBSD

And it goes on in Chapter 0:
" OpenBSD includes many standard tools for disk management. Its Unix File
System has been continuously used for decades and is both robust and
well–understood. While it lacks features found in newer filesystems like
ZFS and btrfs, the OpenBSD developers have never been seriously interested
in file system features. A file system should put data on disk. That data
should be safely stored and reliably read. That’s it. Error checking?
Deduplication? No. The operating system has other tools for ensuring data
integrity and compactness."

And in Chapter 1:
" While many Unix-like operating systems dump everything onto the disk and
hope things work out, OpenBSD uses a meticulous partitioning scheme to
improve confidentiality, integrity, and availability. Additionally,
OpenBSD’s multi–architecture design demands abstraction layers between the
storage hardware and the user–visible file system. Taking a little time to
understand these two systems and how they interact will greatly simplify
your work as a system administrator."

I am the type of guy who likes to hold the pages to read them, which 
is why I quote the book, but you could also look on the internet or in 
the papers: https://www.openbsd.org/events.html,
https://www.openbsd.org/papers/eurobsdcon2022-krw-blockstobooting.pdf

I suggest you read and investigate to understand the difference in 
approach between:
a) offering lots of choices that you will never use or that are poorly 
maintained, or
b) shipping the one that can be audited and maintained.
Personally I have grown to see and love that about OpenBSD. The same 
happens with filesystems, as well as with Bluetooth, Docker, Wine, and 
all the stuff that some could desire at one point just because they 
are used to it; but if you want to use OpenBSD, you should be ready to 
learn and enjoy the tools available. I think it is similar to the 
people who install some Linux distro and just want to work with MS 
Office and Photoshop: use LibreOffice and Gimp, or don't waste time, 
go back to Windows and be happy!

I hope it helps you to clarify your concern.

Manuel

On Tue, 5 Sep 2023 at 8:12, John Holland () wrote:

> I just had a kernel panic when reloading a firefox tab pointed at
> facebook. After restarting, all the filesystems had errors but /home was
> particularly bad and caused the boot to stop and prompt if I wanted to
> enter a root shell.
>
>
> I eventually got fsck to mark the /home filesystem clean but it found
>  >4000 lost files that it moved to lost+found. I am not so experienced
> with this, running "file" on a few of them shows that they may be intact
> files but they have numeric names now.
>
>
> I've really been enjoying OpenBSD but I think it could really use a
> journaled filesystem. I believe I have the correct options in fstab for
> the best results:
>
> 1f08fbc2b303f0ef.k /home ffs rw,softdep,noatime,nodev,nosuid 1 2
>
>
> I was just thinking how much I was enjoying OpenBSD compared to some
> others when this happened.
>
>
> OpenZFS? License issues? Hammer? Anything?
>
>

-- 
Lic. Manuel Solís Vázquez


Re: desire for journaled filesystem

2023-09-05 Thread Dave Voutila


John Holland  writes:

> I just had a kernel panic when reloading a firefox tab pointed at
> facebook. After restarting, all the filesystems had errors but /home
> was particularly bad and caused the boot to stop and prompt if I
> wanted to enter a root shell.
>
>
> I eventually got fsck to mark the /home filesystem clean but it found
>>4000 lost files that it moved to lost+found. I am not so experienced
> with this, running "file" on a few of them shows that they may be
> intact files but they have numeric names now.
>
>
> I've really been enjoying OpenBSD but I think it could really use a
> journaled filesystem. I believe I have the correct options in fstab
> for the best results:
>
> 1f08fbc2b303f0ef.k /home ffs rw,softdep,noatime,nodev,nosuid 1 2

You don't mention what version you're running, but I would say don't run
with softdep if you prefer data safety over "performance."

It's effectively a no-op in -current IIRC. So if you're on 7.3 or
earlier, ditch it.
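
Ditching it is just a matter of removing the option and remounting, 
something like:

# /etc/fstab, the OP's /home line with softdep dropped:
1f08fbc2b303f0ef.k /home ffs rw,noatime,nodev,nosuid 1 2
# apply without a reboot:
doas mount -u -o rw,noatime,nodev,nosuid /home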

>
>
> I was just thinking how much I was enjoying OpenBSD compared to some
> others when this happened.
>
>
> OpenZFS? License issues? Hammer? Anything?



desire for journaled filesystem

2023-09-05 Thread John Holland
I just had a kernel panic when reloading a firefox tab pointed at 
facebook. After restarting, all the filesystems had errors but /home was 
particularly bad and caused the boot to stop and prompt if I wanted to 
enter a root shell.



I eventually got fsck to mark the /home filesystem clean but it found 
>4000 lost files that it moved to lost+found. I am not so experienced 
with this, running "file" on a few of them shows that they may be intact 
files but they have numeric names now.



I've really been enjoying OpenBSD but I think it could really use a 
journaled filesystem. I believe I have the correct options in fstab for 
the best results:


1f08fbc2b303f0ef.k /home ffs rw,softdep,noatime,nodev,nosuid 1 2


I was just thinking how much I was enjoying OpenBSD compared to some 
others when this happened.



OpenZFS? License issues? Hammer? Anything?