Re: [syzbot] [hfs?] WARNING in hfs_write_inode
On Fri, Jul 21, 2023 at 11:03:28AM +1000, Finn Thain wrote:
> On Fri, 21 Jul 2023, Dave Chinner wrote:
> > > I suspect that this is one of those catch-22 situations: distros are
> > > going to enable every feature under the sun. That doesn't mean that
> > > anyone is actually _using_ them these days.
>
> I think the value of filesystem code is not just a question of how often
> it gets executed -- it's also about retaining access to the data collected
> in archives, museums, galleries etc. that is inevitably held in old
> formats.

That's an argument for adding support to tar, not for maintaining
read/write support.

> > We need to be much more proactive about dropping support for unmaintained
> > filesystems that nobody is ever fixing despite the constant stream of
> > corruption- and deadlock-related bugs reported against them.
>
> IMO, a stream of bug reports is not a reason to remove code (it's a reason
> to revert some commits).
>
> Anyway, that stream of bugs presumably flows from the unstable kernel API,
> which is inherently high-maintenance. It seems that a stable API could be
> more appropriate for any filesystem for which the on-disk format is fixed
> (by old media, by unmaintained FLOSS implementations or abandoned
> proprietary implementations).

You've misunderstood.  Google have decided to subject the entire kernel
(including obsolete unmaintained filesystems) to stress tests that it's
never had before.  IOW, these bugs have been there since the code was
merged.  There's nothing to back out.  There's no API change to blame.
It's always been buggy, and it's never mattered before.

It wouldn't be so bad if Google had also decided to fund people to fix
those bugs, but no, they've decided to dump them on public mailing lists
and berate developers into fixing them.
Re: [syzbot] [hfs?] WARNING in hfs_write_inode
On Thu, Jul 20, 2023 at 05:38:52PM -0400, Jeffrey Walton wrote:
> On Thu, Jul 20, 2023 at 2:39 PM Matthew Wilcox wrote:
> > On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > > > Then we should delete the HFS/HFS+ filesystems.  They're orphaned in
> > > > MAINTAINERS and if distros are going to do such a damnfool thing,
> > > > then we must stop them.
> > >
> > > Both HFS and HFS+ work perfectly fine. And if distributions or users
> > > are so sensitive about security, it's up to them to blacklist
> > > individual features in the kernel.
> > >
> > > Both HFS and HFS+ have been the default filesystem on MacOS for 30
> > > years and I don't think it's justified to introduce such a hard
> > > compatibility breakage just because some people are worried about
> > > theoretical evil maid attacks.
> > >
> > > HFS/HFS+ are mandatory if you want to boot Linux on a classic Mac or
> > > PowerMac and I don't think it's okay to break all these systems
> > > running Linux.
> >
> > If they're so popular, then it should be no trouble to find somebody
> > to volunteer to maintain those filesystems.  Except they've been
> > marked as orphaned since 2011 and effectively were orphaned several
> > years before that (the last contribution I see from Roman Zippel is
> > in 2008, and his last contribution to hfs was in 2006).
>
> One data point may help.  I've been running Linux on an old PowerMac
> and an old Intel MacBook since about 2014 or 2015 or so.  I have needed
> the HFS/HFS+ filesystem support for about 9 years now (including that
> "blessed" support for the Apple Boot partition).
>
> There's never been a problem with Linux and the Apple filesystems.
> Maybe it speaks to the maturity/stability of the code that already
> exists.  The code does not need a lot of attention nowadays.
>
> Maybe the orphaned status is the wrong metric to use to determine
> removal.  Maybe a better metric would be installation base, i.e., how
> many users use the filesystem.
I think you're missing the context.  There are bugs in how this
filesystem handles intentionally corrupted filesystems.  That's being
reported as a critical bug because apparently some distributions
automount HFS/HFS+ filesystems presented to them on a USB key.

Nobody is being paid to fix these bugs.  Nobody is volunteering to fix
these bugs out of the kindness of their heart.  What choice do we have
but to remove the filesystem, regardless of how many happy users it has?
Re: [syzbot] [hfs?] WARNING in hfs_write_inode
On Thu, Jul 20, 2023 at 07:50:47PM +0200, John Paul Adrian Glaubitz wrote:
> > Then we should delete the HFS/HFS+ filesystems.  They're orphaned in
> > MAINTAINERS and if distros are going to do such a damnfool thing,
> > then we must stop them.
>
> Both HFS and HFS+ work perfectly fine. And if distributions or users are
> so sensitive about security, it's up to them to blacklist individual
> features in the kernel.
>
> Both HFS and HFS+ have been the default filesystem on MacOS for 30 years
> and I don't think it's justified to introduce such a hard compatibility
> breakage just because some people are worried about theoretical evil
> maid attacks.
>
> HFS/HFS+ are mandatory if you want to boot Linux on a classic Mac or
> PowerMac and I don't think it's okay to break all these systems running
> Linux.

If they're so popular, then it should be no trouble to find somebody
to volunteer to maintain those filesystems.  Except they've been
marked as orphaned since 2011 and effectively were orphaned several
years before that (the last contribution I see from Roman Zippel is
in 2008, and his last contribution to hfs was in 2006).
An fc4 user appears
On Wed, Mar 17, 2010 at 10:52:22AM +0100, Josip Rodin wrote:
> On Tue, Mar 16, 2010 at 06:16:05PM -0700, Mr Ian Primus wrote:
> > So, now I have Linux installed.  Next task: getting the fibre channel
> > working.  I'm currently digging through the configuration options on
> > the kernel.  (Downloaded 2.6.33.1 from kernel.org.)  I can't for the
> > life of me find the Sun fibre channel driver.  I could have sworn it
> > was SOC or something like that...
>
> On Tue, Mar 16, 2010 at 06:46:56PM -0700, Mr Ian Primus wrote:
> > Doing some more digging, it seems that the Sun fibre channel driver is
> > missing from recent versions of the kernel; the newest one I've been
> > able to find that contains it is 2.6.23.  On this kernel, it's in
> > drivers/fc4, and Fibre Channel support is a menu option right in the
> > main menu after make menuconfig.  I can't find this driver in the newer
> > kernels.  Does anyone know what happened to it?  Did it get merged in
> > with another driver?  I really want to get this A5000 working :)
>
> fc4 was removed in October 2007 with this commit:
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=1ecd3902c6e16c2445165b872c49e73770b72da7

Yep, I removed it.  Resurrecting it will be quite a task.  Best of luck.
I'm willing to answer questions (please cc linux-scsi), but not do the
work.

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take
such a retrograde step."

-- 
To UNSUBSCRIBE, email to debian-sparc-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20100318184952.gb3...@parisc-linux.org
Re: RFC and status report: Kernel upgrades for woody-sarge upgrades
On Thu, Mar 24, 2005 at 02:31:55PM +0100, Frank Lichtenheld wrote:
> As many of you may know, on some machines users will need to install a
> current kernel before they will be able to upgrade woody to sarge (or
> better: glibc of woody to glibc of sarge).  I've tried to use the
> available information to provide the needed files for these kernel
> upgrades.
>
> To my knowledge the affected machines/architectures are currently
> hppa64, sparc sun4m (only some of them) and 80386.

It's all hppa machines, not just hppa64.

> I've prepared the necessary backports and some rudimentary
> documentation and put it online at
> http://higgs.djpig.de/upgrade/upgrade-kernel/

I'll give it a try now.

-- 
"Next the statesmen will invent cheap lies, putting the blame upon the
nation that is attacked, and every man will be glad of those
conscience-soothing falsities, and will diligently study them, and refuse
to examine any refutations of them; and thus he will by and by convince
himself that the war is just, and will thank God for the better sleep he
enjoys after this process of grotesque self-deception." -- Mark Twain
Packages stuck in Uploaded
You probably just want to give this package back to the buildd:

devel/ocamldsort_0.14.1-1: Uploaded by buildd-vore [optional:out-of-date]
    Previous state was Building until 2003 Aug 31 19:33:27

-- 
"It's not Hollywood.  War is real, war is primarily not about defeat or
victory, it is about death.  I've seen thousands and thousands of dead
bodies.  Do you think I want to have an academic debate on this
subject?" -- Robert Fisk
Re: a small C program to test xdm's /dev/mem reading on your architecture
On Mon, Aug 26, 2002 at 09:06:00AM -0400, Carlos O'Donell wrote:
> Done. I've submitted the output for HPPA boxes running 32 and 64-bit
> kernels. Looks like they pass without any problem. I'll pass on the

yes, but it may well crash them.  some parts of /dev/mem map random IO
addresses which may not take kindly to being read in an unprovoked
manner.

-- 
Revolutions do not require corporate support.
Re: a small C program to test xdm's /dev/mem reading on your architecture
On Mon, Aug 26, 2002 at 09:10:54PM +0200, Marcus Brinkmann wrote:
> Also, reading /dev/mem doesn't sound very secure at all (even if it
> works) because the patterns in the memory of a computer are probably
> predictable and a lot of information can be observed from the outside
> (which processes are running etc).

why do you assume that xdm uses the raw result from /dev/mem?  running,
say, md5 over the results would give you something so close to random
that i doubt you could find a difference.

-- 
Revolutions do not require corporate support.
Re: Kaffe porting issues
On Wed, Dec 12, 2001 at 02:34:22PM -0500, John R. Daily wrote:
> I'm cc'ing the ports in question in hopes that some of these problems
> can be solved by developers more familiar with those platforms.  I
> haven't included s390, because the failure to build there seems to
> involve several missing dependencies, and I would guess that the lack
> of kaffe on that platform won't affect the package's ability to get
> into woody.

the rule is that if a package has built on an architecture, it must
continue to build there.  so for arches like s390 and hppa which have
never built it, there is no problem.  having said that, it would of
course be nice to have kaffe build on those architectures :-)

> MIPS(EL)
>
> On mips, the current problem with both 1.0.6 and CVS is that an include
> of sigcontext.h should actually be asm/sigcontext.h; the i386-specific
> headers included with kaffe properly handle this via #ifdef, so that
> can be easily remedied.

Er.  Userspace should _never_ include asm/* headers.

> The mips/CVS kaffe code, though, has another problem: mips1 and mips2
> do not support the 'movn' assembler instruction used, so that will have
> to be rewritten, or the code compiled exclusively for mips3.  That's
> not so bad; it only eliminates r3000-series processors and earlier.

in terms of which boxes, i think that eliminates everything prior to the
indigo, which is prehistory as far as i'm concerned :-)

-- 
Revolutions do not require corporate support.
Re: Kaffe porting issues
On Wed, Dec 12, 2001 at 12:56:53PM -0700, Tom Tromey wrote:
> I'm unaware of a libffi port to HPPA.  If there is one, it hasn't been
> integrated into the main libffi sources :-(

Nobody's made libffi compile on debian/hppa yet:

$ madison libffi2
  libffi2 |          1:3.0.2-4 |  testing | alpha, arm, i386, ia64, m68k, powerpc, sparc
  libffi2 |          1:3.0.2-4 | unstable | arm, ia64, m68k, powerpc, sparc
  libffi2 | 1:3.0.3-0pre011209 | unstable | alpha, i386

-- 
Revolutions do not require corporate support.