Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On Fri, May 22, 2020 at 5:40 PM antlists wrote:
> On 22/05/2020 19:23, Rich Freeman wrote:
> > A big problem with drive-managed SMR is that it basically has to
> > assume the OS is dumb, which means most writes are in-place with no
> > trims, assuming the drive even supports trim.
>
> I think the problem with the current WD Reds is, in part, that the ATA-4
> spec is required to support trim, but the ATA-3 spec is the current
> version. Whoops ...

Probably was thought up by the same genius who added the 3.3V reset pin to the SATA standard.

-- Rich
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On 22/05/2020 19:23, Rich Freeman wrote:
> A big problem with drive-managed SMR is that it basically has to assume
> the OS is dumb, which means most writes are in-place with no trims,
> assuming the drive even supports trim.

I think the problem with the current WD Reds is, in part, that the ATA-4 spec is required to support trim, but the ATA-3 spec is the current version. Whoops ...

So yes, even if the drive does support trim, it has no way of telling the OS that it does ...

Cheers,
Wol
Re: [gentoo-user] Courier Sub-addressing
On Fri, May 22, 2020 at 04:17:24PM +0100, antlists wrote:
> If I understand what you are attempting correctly (not a given!) then what
> you are trying won't work. You're confusing multiple *folders* with multiple
> *users*.

Sorry, my original e-mail was quite vague. Consider that I have a few folders in my INBOX maildir, created with maildirmake(1) -f: Sent, Trash, Drafts, AcademicMatters, etc. Should an e-mail be sent to ash+academicmatt...@suugaku.co.uk, it should automatically be redirected into the appropriate sub-maildir. As the AcademicMatters folder is a folder inside the `ash` maildir, there is only a single user involved, with multiple folders.

> This is, I believe, an RFC so Courier is simply implementing the spec.
> That's probably why there is precious little Courier reference material, it
> assumes you have the RFC to hand ...

It seems that sub-addressing is defined in RFC 5233 [1]. Further discussion specific to the Sieve language is found in RFC 5228 [2]. To use the example in [1], an e-mail addressed to `ken+si...@example.org` is delivered to the mailbox `sieve` belonging to `ken`. In my case, this would be sending mail to the `AcademicMatters` mailbox owned by `ash`.

> I don't know what happens with your "-" example, but it just looks wrong to
> me.

In my original message, I complained that the server was throwing an error stating that the corresponding entry could not be found in the virtual users table. It seems that the `recipient_delimiter` attribute in Postfix's main.cf can specify an arbitrary delimiter, so a plus, hyphen, or any other legal character can be used to denote the sub-address. With `recipient_delimiter = +`, e-mail sent to ash+*@suugaku.co.uk now ends up in my inbox. Inspecting the text after the delimiter and moving the message to the correct folder accordingly is a job for Courier's `maildrop`, I suspect.
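As a sketch of that last step, a maildrop rule along these lines might do it (hypothetical and untested — it assumes Postfix records the full recipient, including the +suffix, in a Delivered-To header, and that the sub-maildir already exists):

```
# ~/.mailfilter — hypothetical sketch, untested.
# Assumes the sub-address survives into the Delivered-To header, and
# that the folder was created with `maildirmake -f AcademicMatters`.
if ( /^Delivered-To:.*ash\+academicmatters@suugaku\.co\.uk/ )
    to "$HOME/Maildir/.AcademicMatters/"
```

The trailing slash tells maildrop the destination is a maildir rather than an mbox file.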
> It should be looking for an AcademicMatters POP account, and then
> delivering the mail to a user account called ash on the server called
> AcademicMatters.

I don't really understand this sentence, sorry. How can a user account called `ash` also be called `AcademicMatters`? `AcademicMatters` is a subdirectory inside the `ash` user's inbox.

On Fri, May 22, 2020 at 08:52:23AM -0400, james wrote:
> Yes, but with mail-client/Thunderbird. The tricks (with thunderbird) are
> mostly related to how you set up your filters, and the order of the filters.

I would rather do this on the server, as I access my e-mail from various machines, many of which are not listening for mail constantly. I also dislike Thunderbird, as I find it too heavy for a mail client; (Neo)Mutt has served me well for a long time.

> Do post your findings, as I'm sure others would appreciate a robust (gentoo)
> solution, particularly if the feature list supports cell phones (android
> and/or apple cell phones) and those text/emails.

I think the problem you're posing is a very different one to mine: I am only concerned with filtering e-mail to particular folders based on the address to which the mail was sent. Your problem seems to be far more generalised and large-scale.

[1] https://tools.ietf.org/html/rfc5233
[2] https://tools.ietf.org/html/rfc5228

--
Ashley Dixon
suugaku.co.uk

2A9A 4117 DA96 D18A 8A7B B0D2 A30E BF25 F290 A8AA
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On Fri, May 22, 2020 at 2:08 PM antlists wrote:
> So what you could do is allocate one zone of CMR to every four or five
> zones of SMR and just reshingle each SMR as the CMR filled up. The
> important point is that zones can switch from CMR cache to SMR filling
> up, to full SMR zones decaying as they are re-written.

I get how this will provide more flexibility. However, there is a big problem here. Unless you are using TRIM, you have no idea what space is in use vs free. Once a block has been written to once, it needs to be forever treated as occupied. So this cache really is only useful when the drive is brand new. Once it has all been written once, you're limited to dedicated CMR regions for cache, because all the SMR areas are shingled.

If you are using TRIM, then this does give you more flex space, but only if enough overlapping space is unused, and you do need to reshingle to write to that unused space. Depending on the degree of overlap, you still have only a fraction of the disk available for your cache.

> Which is why I'd break it down to maybe 2GB zones. If as the zone fills
> it streams, but is then re-organised and re-written properly when time
> permits, you've not got too large chunks of metadata.
> ...
> The problem with drives at the moment is they run out of CMR cache, so
> they have to rewrite all those blocks WHILE THE USER IS STILL WRITING.
> The point of my idea is that they can repurpose disk as SMR or CMR as
> required, so they don't run out of cache at the wrong time ...

You still have a limited cache, and if it fills up you hit the performance wall. The question is just whether it is more efficient to have flex space that can be PMR or SMR, or dedicated space that is PMR-only. I think that depends greatly on whether you can assume the use of TRIM, and how much free space the drive will have in general. Since PMR is less dense, you have to give up a lot of SMR space for any PMR use of that space.
A big problem with drive-managed SMR is that it basically has to assume the OS is dumb, which means most writes are in-place with no trims, assuming the drive even supports trim. -- Rich
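The point above — that without TRIM the drive can never tell used space from free space — can be illustrated with a toy model (purely illustrative; the numbers are made up):

```python
# Toy model: a drive only knows an LBA is free if it has never been
# written, or if the OS explicitly TRIMs it. Without TRIM, the pool of
# known-free space shrinks monotonically over the drive's life.

class BlockTracker:
    def __init__(self, num_lbas):
        self.num_lbas = num_lbas
        self.ever_written = set()

    def write(self, lba):
        # Once written, the drive must treat the block as occupied.
        self.ever_written.add(lba)

    def trim(self, lba):
        # Only possible if the OS actually issues TRIM for freed blocks.
        self.ever_written.discard(lba)

    def assumed_free(self):
        return self.num_lbas - len(self.ever_written)

t = BlockTracker(100)
for lba in range(60):
    t.write(lba)
print(t.assumed_free())   # 40 — and without TRIM this never goes back up
t.trim(10)
print(t.assumed_free())   # 41 — TRIM lets the drive reclaim the block
```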
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On 22/05/2020 18:20, Rich Freeman wrote:
> On Fri, May 22, 2020 at 12:47 PM antlists wrote:
> > What puzzles me (or rather, it doesn't, it's just cost cutting), is why
> > you need a *dedicated* cache zone anyway.
> >
> > Stick a left-shift register between the LBA track and the hard drive,
> > and by switching this on you write to tracks 2,4,6,8,10... and it's a
> > CMR zone. Switch the register off and it's an SMR zone writing to all
> > tracks.
>
> Disclaimer: I'm not a filesystem/DB design expert.
>
> Well, I'm sure the zones aren't just 2 tracks wide, but that is worked
> around easily enough. I don't see what this gets you though. If you're
> doing sequential writes you can do them anywhere as long as you're doing
> them sequentially within any particular SMR zone. If you're overwriting
> data then it doesn't matter how you've mapped them with a static mapping
> like this, you're still going to end up with writes landing in the middle
> of an SMR zone.

Let's assume each shingled track overwrites half the previous write. Let's also assume a shingled zone is 2GB in size. My method converts that into a 1GB CMR zone, because we're only writing to every second track. I don't know how these drives cache their writes before re-organising, but this means that ANY disk zone can be used as cache, rather than having a (too small?) dedicated zone...

So what you could do is allocate one zone of CMR to every four or five zones of SMR and just reshingle each SMR zone as the CMR filled up. The important point is that zones can switch from CMR cache, to SMR filling up, to full SMR zones decaying as they are re-written.

The other thing is, why can't you just stream writes to an SMR zone, especially if we try to localise writes so that, let's say, all LBAs in Gig 1 go to the same zone... okay, if we run out of zones to re-shingle to, then the drive is going to grind to a halt, but it will be much less likely to crash into that barrier in the first place.
> I'm not 100% following you, but if you're suggesting remapping all
> blocks so that all writes are always sequential, like some kind of
> log-based filesystem, your biggest problem here is going to be metadata.
> Blocks logically are only 512 bytes, so there are a LOT of them. You
> can't just freely remap them all because then you're going to end up
> with more metadata than data. I'm sure they are doing something like
> that within the cache area, which is fine for short bursts of writes,
> but at some point you need to restructure that data so that blocks are
> contiguous or otherwise following some kind of pattern so that you
> don't have to literally remap every single block.

Which is why I'd break it down to maybe 2GB zones. If as the zone fills it streams, but is then re-organised and re-written properly when time permits, you've not got too large chunks of metadata. You need a btree to work out where each zone is stored, then each zone has a btree to say where its blocks are stored. Oh, and these drives are probably 4K blocks only - most new drives are.

> Now, they could still reside in different locations, so maybe some
> sequential group of blocks are remapped, but if you have a write to one
> block in the middle of a group you need to still read/rewrite all those
> blocks somewhere. Maybe you could use a COW-like mechanism like zfs to
> reduce this somewhat, but you still need to manage blocks in larger
> groups so that you don't have a ton of metadata.

The problem with drives at the moment is they run out of CMR cache, so they have to rewrite all those blocks WHILE THE USER IS STILL WRITING. The point of my idea is that they can repurpose disk as SMR or CMR as required, so they don't run out of cache at the wrong time ... Yes, metadata may bloom under pressure, but give the drives a break and they can grab a new zone, do an SMR ordered stream, and shrink the metadata.
> With host-managed SMR this is much less of a problem because the host
> can use extents/etc to reduce the metadata, because the host already
> needs to map all this stuff into larger structures like
> files/records/etc. The host is already trying to avoid having to track
> individual blocks, so it is counterproductive to re-introduce that
> problem at the block layer. Really the simplest host-managed SMR
> solution is something like f2fs or some other log-based filesystem that
> ensures all writes to the disk are sequential. Downside to flash-based
> filesystems is that they can disregard fragmentation on flash, but you
> can't disregard that for an SMR drive because random disk performance
> is terrible.

Which is why you have small(ish) zones, so logically close writes are hopefully physically close as well ...

> > Even better, if we have two independent heads, we could presumably
> > stream updates using one head, and re-shingle with the other. But
> > that's more cost ...
>
> Well, sure, or if you're doing things host-managed then you stick the
> journal on an SSD and then do the writes to the SMR drive
> opportunistically. You're basically describing a system where you have
> independent drives for the journal and the data areas.
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
Rich Freeman wrote:
> On Fri, May 22, 2020 at 12:15 PM Dale wrote:
> > The thing about the one I have now in use by LVM for /home, one is SMR and
> > one is PMR. Even if the OS is aware, does it even know which drive the data
> > is going to end up being stored on? I'm pretty sure since the PMR drive was
> > in use before the SMR that the PMR is likely full. From my understanding,
> > LVM doesn't balance the data out. It will fill up a drive and then move on
> > to the next. If you add another, it will fill up the 2nd drive and then
> > start on the newly added drive. Maybe it does do some magic but does the OS
> > know which drive data is going to hit?
>
> So, as far as I'm aware nothing on linux is natively optimized for
> SMR. I'm sure some people use host-managed SMR drives on linux for
> application-specific writing, but they're probably writing raw blocks
> without using a filesystem.
>
> However, if anything was going to be made SMR-aware then it would have
> to be implemented at all block layers, just like barrier support. I
> think back in the early days of barrier support some layers didn't
> support it, and a barrier can only make it from the filesystem to the
> drive if it is implemented in lvm+mdadm+driver and so on.
>
> If somebody added SMR support to ext4 but not to LVM then ext4
> wouldn't detect the LV as an SMR drive, because lvm wouldn't pass that
> through.
>
> I suspect sooner or later a solution will emerge, but it could be a
> while. I suspect any solution would be for drives that could be set
> to be host-managed, because otherwise you're working around an extra
> layer of obfuscation. Maybe you could have trim support without
> further optimization, but that obviously isn't ideal.

That's why I want to avoid them if at all possible. The best way to know what I'm getting is to get what I know works best, because those drives have been around for ages. To your point though, it would likely take quite some effort to make every layer aware of SMR.
Like you said, everything has to detect it and be able to work with it, or it fails and defaults to what we know slows things down, to a crawl in some large write situations.

I read some of the link that was posted. I'm going to try to read it again and hopefully finish it later, after I dig up and replace about 100 feet of sewer line. A tractor and really, REALLY wet soil do not go well with a sewer line only a few inches underground. Gonna have to go deeper and put some better fill in this time.

Dale

:-)  :-)
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On Fri, May 22, 2020 at 12:47 PM antlists wrote:
> What puzzles me (or rather, it doesn't, it's just cost cutting), is why
> you need a *dedicated* cache zone anyway.
>
> Stick a left-shift register between the LBA track and the hard drive,
> and by switching this on you write to tracks 2,4,6,8,10... and it's a
> CMR zone. Switch the register off and it's an SMR zone writing to all
> tracks.

Disclaimer: I'm not a filesystem/DB design expert.

Well, I'm sure the zones aren't just 2 tracks wide, but that is worked around easily enough. I don't see what this gets you though. If you're doing sequential writes you can do them anywhere as long as you're doing them sequentially within any particular SMR zone. If you're overwriting data then it doesn't matter how you've mapped them with a static mapping like this, you're still going to end up with writes landing in the middle of an SMR zone.

> The other thing is, why can't you just stream writes to a SMR zone,
> especially if we try and localise writes so lets say all LBAs in Gig 1
> go to the same zone ... okay - if we run out of zones to re-shingle to,
> then the drive is going to grind to a halt, but it will be much less
> likely to crash into that barrier in the first place.

I'm not 100% following you, but if you're suggesting remapping all blocks so that all writes are always sequential, like some kind of log-based filesystem, your biggest problem here is going to be metadata. Blocks logically are only 512 bytes, so there are a LOT of them. You can't just freely remap them all, because then you're going to end up with more metadata than data. I'm sure they are doing something like that within the cache area, which is fine for short bursts of writes, but at some point you need to restructure that data so that blocks are contiguous or otherwise following some kind of pattern, so that you don't have to literally remap every single block.
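To put rough numbers on the metadata concern above (the 8 TB capacity and 8-byte table entry are illustrative assumptions, not figures from the thread):

```python
# Back-of-envelope cost of a naive per-block remapping table, as
# discussed above: one table entry per logical block. Capacity and
# entry size are assumed for illustration.

def remap_table_size(capacity_bytes, block_size, entry_bytes):
    """Size of a flat LBA -> physical-location table."""
    num_blocks = capacity_bytes // block_size
    return num_blocks * entry_bytes

capacity = 8 * 10**12            # assume an 8 TB drive
for block in (512, 4096):        # logical 512 B vs physical 4 KiB blocks
    table = remap_table_size(capacity, block, 8)
    print(f"{block} B blocks: {capacity // block:,} entries, "
          f"table ≈ {table / 10**9:.0f} GB")
```

With 512-byte blocks the table alone is on the order of 125 GB, which is why per-block remapping of the whole drive is a non-starter and extents or zone-granular maps are needed.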
Now, they could still reside in different locations, so maybe some sequential group of blocks are remapped, but if you have a write to one block in the middle of a group, you still need to read/rewrite all those blocks somewhere. Maybe you could use a COW-like mechanism like zfs to reduce this somewhat, but you still need to manage blocks in larger groups so that you don't have a ton of metadata.

With host-managed SMR this is much less of a problem, because the host can use extents/etc to reduce the metadata: the host already needs to map all this stuff into larger structures like files/records/etc. The host is already trying to avoid having to track individual blocks, so it is counterproductive to re-introduce that problem at the block layer. Really, the simplest host-managed SMR solution is something like f2fs or some other log-based filesystem that ensures all writes to the disk are sequential. The downside to flash-based filesystems is that they can disregard fragmentation on flash, but you can't disregard that for an SMR drive, because random disk performance is terrible.

> Even better, if we have two independent heads, we could presumably
> stream updates using one head, and re-shingle with the other. But that's
> more cost ...

Well, sure, or if you're doing things host-managed then you stick the journal on an SSD and then do the writes to the SMR drive opportunistically. You're basically describing a system where you have independent drives for the journal and the data areas. Adding an extra head on a disk (or just having two disks) greatly improves performance, especially if you're alternating between two regions constantly.

-- Rich
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On Fri, May 22, 2020 at 12:15 PM Dale wrote:
> The thing about the one I have now in use by LVM for /home, one is SMR and
> one is PMR. Even if the OS is aware, does it even know which drive the data
> is going to end up being stored on? I'm pretty sure since the PMR drive was
> in use before the SMR that the PMR is likely full. From my understanding,
> LVM doesn't balance the data out. It will fill up a drive and then move on
> to the next. If you add another, it will fill up the 2nd drive and then
> start on the newly added drive. Maybe it does do some magic but does the OS
> know which drive data is going to hit?

So, as far as I'm aware, nothing on linux is natively optimized for SMR. I'm sure some people use host-managed SMR drives on linux for application-specific writing, but they're probably writing raw blocks without using a filesystem.

However, if anything was going to be made SMR-aware, then it would have to be implemented at all block layers, just like barrier support. I think back in the early days of barrier support some layers didn't support it, and a barrier can only make it from the filesystem to the drive if it is implemented in lvm+mdadm+driver and so on.

If somebody added SMR support to ext4 but not to LVM, then ext4 wouldn't detect the LV as an SMR drive, because lvm wouldn't pass that through.

I suspect sooner or later a solution will emerge, but it could be a while. I suspect any solution would be for drives that could be set to be host-managed, because otherwise you're working around an extra layer of obfuscation. Maybe you could have trim support without further optimization, but that obviously isn't ideal.

-- Rich
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On 22/05/2020 16:43, Rich Freeman wrote:
> On Fri, May 22, 2020 at 11:32 AM Michael wrote:
> > An interesting article mentioning WD Red NAS drives which may actually
> > be SMRs and how latency increases when cached writes need to be
> > transferred into SMR blocks.
>
> Yeah, there is a lot of background on this stuff.
>
> You should view a drive-managed SMR drive as basically a journaled
> filesystem/database masquerading as a virtual drive. One where the
> keys/filenames are LBAs, and all the files are 512 bytes long. :)
>
> Really even most spinning drives are this way due to the 4k physical
> sectors, but this is something much easier to deal with and handled by
> the OS with aligned writes as much as possible. SSDs have similar
> issues but again the impact isn't nearly as bad and is more easily
> managed by the OS with TRIM/etc.
>
> A host-managed SMR drive operates much more like a physical drive, but
> in this case the OS/application needs to be SMR-aware for performance
> not to be absolutely terrible.

What puzzles me (or rather, it doesn't, it's just cost cutting) is why you need a *dedicated* cache zone anyway.

Stick a left-shift register between the LBA track and the hard drive, and by switching this on you write to tracks 2,4,6,8,10... and it's a CMR zone. Switch the register off and it's an SMR zone writing to all tracks.

The other thing is, why can't you just stream writes to an SMR zone, especially if we try to localise writes so that, let's say, all LBAs in Gig 1 go to the same zone... okay, if we run out of zones to re-shingle to, then the drive is going to grind to a halt, but it will be much less likely to crash into that barrier in the first place.

Even better, if we have two independent heads, we could presumably stream updates using one head, and re-shingle with the other. But that's more cost ...

Cheers,
Wol
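The left-shift idea above can be sketched as a toy mapping (hypothetical model, simplified to single-track granularity; not an actual drive interface):

```python
# Sketch of the proposed shift-register remapping: with it enabled,
# logical track n maps to physical track 2n, so writes hit every other
# track and never overlap a shingled neighbour (CMR mode). With it
# disabled, tracks are used densely (SMR mode). The cost is capacity:
# a zone in CMR mode holds only half its shingled capacity.

def physical_track(logical_track, cmr_mode):
    return logical_track << 1 if cmr_mode else logical_track

zone_tracks = 8
cmr = [physical_track(t, True) for t in range(zone_tracks // 2)]
smr = [physical_track(t, False) for t in range(zone_tracks)]
print(cmr)  # [0, 2, 4, 6] -> even tracks only, no shingle overlap
print(smr)  # [0, 1, 2, 3, 4, 5, 6, 7] -> full shingled density
```

This matches the 2GB-zone-becomes-1GB-cache arithmetic mentioned later in the thread: skipping every second track halves the usable capacity of a zone used as CMR cache.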
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
Rich Freeman wrote:
> On Fri, May 22, 2020 at 11:32 AM Michael wrote:
> > An interesting article mentioning WD Red NAS drives which may actually
> > be SMRs and how latency increases when cached writes need to be
> > transferred into SMR blocks.
>
> Yeah, there is a lot of background on this stuff.
>
> You should view a drive-managed SMR drive as basically a journaled
> filesystem/database masquerading as a virtual drive. One where the
> keys/filenames are LBAs, and all the files are 512 bytes long. :)
>
> Really even most spinning drives are this way due to the 4k physical
> sectors, but this is something much easier to deal with and handled by
> the OS with aligned writes as much as possible. SSDs have similar
> issues but again the impact isn't nearly as bad and is more easily
> managed by the OS with TRIM/etc.
>
> A host-managed SMR drive operates much more like a physical drive, but
> in this case the OS/application needs to be SMR-aware for performance
> not to be absolutely terrible.

The thing about the one I have now in use by LVM for /home: one is SMR and one is PMR. Even if the OS is aware, does it even know which drive the data is going to end up being stored on? I'm pretty sure, since the PMR drive was in use before the SMR, that the PMR is likely full. From my understanding, LVM doesn't balance the data out. It will fill up a drive and then move on to the next. If you add another, it will fill up the 2nd drive and then start on the newly added drive. Maybe it does do some magic, but does the OS know which drive data is going to hit?

It seems to me that we could end up stuck with SMR or pay a premium for PMR. That's the part that worries me. I'm not saying SMR isn't good for a lot of folks, but for us power-type users, it matters. You get into servers and it matters a whole lot, I'd imagine. Maybe I need to buy some drives before I can't even get them at an affordable price at all???

Dale

:-)  :-)

P. S. Thanks to Michael for the info. I'll read it in a bit.
Having a little sewer problem. Dirty job. -_o
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On Fri, May 22, 2020 at 11:32 AM Michael wrote:
> An interesting article mentioning WD Red NAS drives which may actually
> be SMRs and how latency increases when cached writes need to be
> transferred into SMR blocks.

Yeah, there is a lot of background on this stuff.

You should view a drive-managed SMR drive as basically a journaled filesystem/database masquerading as a virtual drive. One where the keys/filenames are LBAs, and all the files are 512 bytes long. :)

Really even most spinning drives are this way due to the 4k physical sectors, but this is something much easier to deal with and handled by the OS with aligned writes as much as possible. SSDs have similar issues, but again the impact isn't nearly as bad and is more easily managed by the OS with TRIM/etc.

A host-managed SMR drive operates much more like a physical drive, but in this case the OS/application needs to be SMR-aware for performance not to be absolutely terrible.

-- Rich
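The "journaled database masquerading as a drive" analogy can be sketched as a toy model (purely illustrative — real firmware uses persistent translation tables and zone-sized staging areas, not an in-memory dict):

```python
# Toy model of a drive-managed SMR translation layer: random writes are
# staged in a journal (the CMR cache), keyed by LBA, and a background
# "reshingle" step later flushes them sequentially into shingled zones.
# Reads must check the journal first, since it holds the newest copy.

class ToySMRDrive:
    def __init__(self):
        self.journal = {}   # LBA -> data, staged in the CMR cache
        self.zones = {}     # LBA -> data, after sequential rewrite

    def write(self, lba, data):
        # Random writes never land in place; they go through the cache.
        self.journal[lba] = data

    def read(self, lba):
        # Most recent copy wins: cache first, then shingled zones.
        if lba in self.journal:
            return self.journal[lba]
        return self.zones.get(lba)

    def reshingle(self):
        # Background step: flush cached blocks in LBA order, emptying
        # the cache. When this runs out of room mid-workload, you get
        # the latency cliff described in the article.
        for lba in sorted(self.journal):
            self.zones[lba] = self.journal[lba]
        self.journal.clear()

d = ToySMRDrive()
d.write(100, b"a")
d.write(7, b"b")
d.reshingle()
d.write(100, b"c")   # an overwrite goes back through the cache
print(d.read(100))   # b'c'
print(d.read(7))     # b'b'
```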
Re: [gentoo-user] Seagate ST8000NM0065 PMR or SMR plus NAS SAS SATA question
On Sunday, 10 May 2020 21:52:54 BST antlists wrote:
> On 10/05/2020 20:11, Rich Freeman wrote:
> > > I did find a WD Red 8TB drive. It costs a good bit more. It's a good
> > > deal but still costs more. I'm going to keep looking. Eventually I'll
> > > either spend the money on the drive or find a really good deal. My
> > > home directory is at 69% so I got some time left. Of course, my
> > > collection is still growing. o_O
> >
> > In theory the 8TB reds are SMR-free.
>
> I thought I first found it on this list - wasn't it reported that the
> 1TB and 8TB were still CMR but everything between was now SMR? Pretty
> naff since the SMR drives all refuse to add to a raid array, despite
> being advertised as "for NAS and RAID". Under UK law that would be a
> slam dunk RMA as "unfit for purpose".
>
> Try the "Red Pro", which apparently are still all CMR. To the best of my
> knowledge the Seagate Ironwolves are still SMR-free, and there's also
> the Ironwolf Pros.
>
> I've got two Ironwolves, but they're 2018-vintage. I think they're Red
> equivalents.
>
> Cheers,
> Wol

An interesting article mentioning WD Red NAS drives which may actually be SMRs, and how latency increases when cached writes need to be transferred into SMR blocks:

https://blocksandfiles.com/2020/04/15/shingled-drives-have-non-shingled-zones-for-caching-writes/
Re: [gentoo-user] Courier Sub-addressing
On 21/05/2020 21:14, Ashley Dixon wrote:
> Hello,
>
> I am attempting to set up sub-addressing on my Courier mail server,
> allowing senders to directly deliver messages to a particular folder in
> my mailbox. For example, I want to provide my University with the address
> `ash-academicmatt...@suugaku.co.uk` to force all their messages into the
> "AcademicMatters" subdirectory.
>
> Unfortunately, I can't find any official Courier documentation regarding
> sub-addressing. I have found [1], however I'm not sure it will apply as
> I am using virtual mailboxes.

If I understand what you are attempting correctly (not a given!) then what you are trying won't work. You're confusing multiple *folders* with multiple *users*.

I'm probably not describing this right, but let's say you've got a small business, with a POP3 email account of "busin...@isp.co.uk". However, you've set up a central server with each user having their own account, eg John, Mary & Sue. So you configure Sue's mail client to have an address of "Sue <sue+business@isp.co.uk>".

Out on the internet, smtp servers look at the @isp.co.uk bit to deliver it to the right mailserver. Your ISP sees "sue+business", *ignores* the bit in front of the plus, and puts it in the "business" pop account. Your local mailserver now pulls down the email, ignores the bit *after* the +, and shoves it in Sue's email.

This is, I believe, an RFC, so Courier is simply implementing the spec. That's probably why there is precious little Courier reference material: it assumes you have the RFC to hand ...

I don't know what happens with your "-" example, but it just looks wrong to me. It should be looking for an AcademicMatters POP account, and then delivering the mail to a user account called ash on the server called AcademicMatters. Internet email addresses and domains are read right-to-left (Janet used to be left-to-right, but the Americans won, as usual).

Cheers,
Wol
Re: [gentoo-user] Courier Sub-addressing
On 5/21/20 4:14 PM, Ashley Dixon wrote:
> Hello,
>
> I am attempting to set up sub-addressing on my Courier mail server,
> allowing senders to directly deliver messages to a particular folder in
> my mailbox. For example, I want to provide my University with the address
> `ash-academicmatt...@suugaku.co.uk` to force all their messages into the
> "AcademicMatters" subdirectory.
>
> Unfortunately, I can't find any official Courier documentation regarding
> sub-addressing. I have found [1], however I'm not sure it will apply as
> I am using virtual mailboxes.
>
> When attempting to send e-mail to myself using sub-addressing, my server
> complains that the address is not found in the virtual users table,
> suggesting that it is entirely unaware of the sub-addressing notation.
>
> Has anyone here managed to get this working? I believe it is sometimes
> referred to as "plus-addressing", however it seems that Courier uses a
> hyphen as opposed to a plus symbol (+).
>
> Cheers,
> Ashley.
>
> [1] https://www.talisman.org/~erlkonig/misc/courier-email-subaddressing/

Hello Ashley,

Yes, but with mail-client/Thunderbird. The tricks (with thunderbird) are mostly related to how you set up your filters, and the order of the filters. On thunderbird, the 'message filters' entry under the three horizontal bars on the top right of the base screen is the starting point. Then I have to click various selections, sometimes 2 or 3 times. It's tricky with thunderbird and filtering incoming mail, ymmv. No clue on a courier mail server.

BUT, I'd be most interested in testing/verifying what you come up with, as thunderbird is a very bloated pig of an app. And I'm looking for a unified system where email, browser links, locally edited files (vi/vim/etc), and browser-saved files and links are all unified into one 'common-logical' viewer/storage that renders the various ways we can save data, whether local or net-based resources. I'd be most interested in *any* unifying scheme, gentoo-centric.
30++ years of linux/bsd/linux has given me a very rich source of resources, docs and wonderful emails. But data-harvesting needs a modern approach. /usr/portage/mail-* does not have many robust and secure options. I've even thought about returning to a sendmail server, as a unifying start, but that's probably not a wise idea. Do post your findings, as I'm sure others would appreciate a robust (gentoo) solution, particularly if the feature list supports cell phones (android and/or apple cell phones) and those text/emails. hth, James
Re: [gentoo-user] nvidia-drivers-440.82-r3 failing to compile: Kernel configuration is invalid
On 21/05/2020 20:25, Ashley Dixon wrote:
> On Thu, May 21, 2020 at 08:13:38PM +0100, Ján Zahornadský wrote:
> > when updating the system today, a new revision of nvidia-drivers ebuild
> > fails with
> >
> >   ERROR: Kernel configuration is invalid.
> >   include/generated/autoconf.h or include/config/auto.conf are missing.
> >   Run 'make oldconfig && make prepare' on kernel src to fix it.
> >
> > (full log attached as build.log)
> >
> > I'm fairly sure my kernel sources and configuration are in place:
> >
> > bolt /usr/src/linux-5.6.14-gentoo # ls -l include/generated/autoconf.h include/config/auto.conf
> > -rw------- 1 root root 26144 May 21 10:13 include/config/auto.conf
> > -rw------- 1 root root 35329 May 21 10:13 include/generated/autoconf.h
>
> Try executing `chmod a+r` on both of those files.

Yes, thanks, it was a permission issue after all; re-setting umask and re-running make mrproper && make && make modules_install fixed it!

All the best,
Jan