Re: [gentoo-user] How to emergency manual install of libffi-compat.
I've been bitten by this too... The problem is that portage does not follow a particular order when rebuilding subslot deps, so when upgrading dev-libs/libffi (in this case from 0/7 to 0/8) it can schedule lots of other packages between when the new libffi is installed and when python is rebuilt. If portage is interrupted for any reason during this period, you're screwed. To recover, I followed these steps:

1. Temporarily bring back /usr/lib/libffi.so.7 (/usr/lib64/libffi.so.7 if you're on amd64) from whatever source you can find (backup, saved binpkg, livecd/usb). You will need to copy both the actual library (libffi.so.7.x.y) and the versioned symlink (libffi.so.7 -> libffi.so.7.x.y); make sure you do not touch the existing libffi.so -> libffi.so.8.z.w symlink.
2. Re-emerge your main python version (the one you use to run portage; see emerge --info).
3. Finish your upgrade.
4. Remove the files you copied in step 1.
5. Do a revdep-rebuild pass, just in case.

I would not call installing dev-libs/libffi-compat a solution -- you still need a working python to do that, and it won't protect you from future breakage like this.

> This may help:
> https://wiki.gentoo.org/wiki/Project:Portage/Fixing_broken_portage

It will not help in this case, since what's broken is python and not portage.

> If that won't work for whatever reason, chroot into your system after you
> boot with the latest Live-USB and try to update @system. Alternatively,
> reinstall.

Neither will this, as you won't be able to execute python (i.e. run portage) inside the chroot.

HTH
andrea
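On amd64, steps 1 and 2 boil down to something like the following sketch. The 7.1.0 version number, the /mnt/rescue source path and the python slot are all made up here -- substitute whatever your backup and `emerge --info` actually show:

```shell
# Restore the old library plus its versioned symlink; the unversioned
# libffi.so -> libffi.so.8.* symlink must be left alone.
cp -a /mnt/rescue/usr/lib64/libffi.so.7.1.0 /usr/lib64/
ln -sfn libffi.so.7.1.0 /usr/lib64/libffi.so.7

# Step 2: rebuild the python that portage itself runs under.
emerge --oneshot dev-lang/python:3.8
```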
Re: [gentoo-user] BIOS can not find boot partition
> Hello, I don't want to use EFI.

Then you probably should not be attempting to boot off an NVMe drive, as that is only possible if the drive has an onboard BIOS-mode boot ROM; AFAIK those are only found on some of the earliest NVMe drives. Moreover...

> grub-install /dev/nvme0n1p2
> Installing for x86_64-efi platform.

You are installing GRUB in EFI mode. My guess is that it's because you're running the command from a system that was booted in EFI mode, so grub-install picks EFI by default. For BIOS you want the 'i386-pc' platform, and you _must_ install GRUB to the block device itself (/dev/nvme0n1). And once again, whether or not you'll be able to boot from that is very much open to debate.

> Am I suppose to put any file system on /dev/nvme0n1p1 (2Mb partition)
> the installation manual did not mention anything.

That partition is only there to reserve space for the initial stages of GRUB when BIOS-booting from a GPT disk. It does not need to be formatted or mounted, and as long as it has the proper flags grub-install should be able to pick it up on its own.

andrea
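A BIOS-mode install along those lines might look like this. Device names are taken from the quoted output; the parted step assumes /dev/nvme0n1p1 is the 2Mb reserved partition and is only needed if it doesn't already carry the bios_grub flag:

```shell
# Flag partition 1 as a BIOS boot partition, then install GRUB for
# the i386-pc platform to the whole device, not to a partition.
parted /dev/nvme0n1 set 1 bios_grub on
grub-install --target=i386-pc /dev/nvme0n1
```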
Re: [gentoo-user] kernel support for: i211 - intel network driver
Hello,

The i219 is a completely different (and much older) chip; the right driver for the i211 is definitely igb. That said, I think the OP should first make sure the onboard LAN is enabled in the BIOS and then post the output of "lspci -tv".

andrea

On 17/11/20 00:59, Adam Carter wrote:
> On Sat, Nov 14, 2020 at 9:18 AM <the...@sys-concept.com> wrote:
>> I have Asus X570-pro MB with Intel I211-AT network
>> When I compiled into the kernel (not as module) the "IGB" network driver
>> but the network is not recognized. lsmod |grep igb is not showing anything.
>
> Not sure how close the I211 is to the I219, but lspci -k shows;
>
> 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (6) I219-V (rev 30)
>         DeviceName: LAN
>         Subsystem: Intel Corporation Ethernet Connection (6) I219-V
>         Kernel driver in use: e1000e
>
> If i were you i'd just compile all these suggestions as modules and see
> what gets loaded.
Re: [gentoo-user] Trouble with backup harddisks
> I think, I feel better if I repartitioning/reformat both drives,
> though.

It's not necessary, but if it makes you feel better by all means do so.

> *GPT/MBR
> From a discussion based on a "GPT or MBR for my system drive" in
> conjunction with UEFI it was said, that GPT is more modern and
> save.

More modern, I concur. For the rest it's mainly about features: >2TB partitions and way more metadata, plus not having to bother with CHS values, which make no sense in today's drives. And being able to define >4 partitions without littering the disk with extended boot records, which is probably the only thing I'd call "safer".

My point was that none of this is relevant for an external drive which is under 1TB and will only hold a single partition starting at sector 1 and spanning the rest of the disk. A system drive, especially if booting from UEFI, is a different case for which GPT absolutely makes sense.

> My question was meant not so much as "MBR or GPT?"
> but more whether there are some variants of GPT (with
> protected MBR for example -- which was completly new to me),
> which I should use or avoid.

There are really no "variants" of GPT. The protective MBR is only there to make all space on the disk look allocated to MBR partitioning tools that are not GPT-aware, and is automatically written for you by all GPT partitioning tools.

In addition to the opaque entry of type 0xee, this MBR can also contain entries pointing to at least some of the actual partitions; this is called a 'hybrid' MBR and allows MBR-only access to partitions that are within the limits of MBR addressing (start and end sector <2TB). These are only useful in very specific cases and I would consider them a hack more than a solution; while gpt-fdisk has some support for creating hybrid MBRs (I don't know about fdisk), you won't get one unless you specifically ask for it.

> But: Are rescue systems for USB-stick more UEFI/GPT aware nowadays
> or "traditionally" based on MBR/BIOS-boot?
I think that anything that's not ancient will have tools and kernel support for both MBR and GPT, and will boot fine in both BIOS and UEFI modes.

> One thing I found is really handy: An USB-stick with an rEfind
> installation. As long as your PC supports UEFI (or can switched to it)
> rEfind is able to boot "everything" without prior configuration.

You can probably do the same with GRUB2, albeit in a way less user-friendly fashion :) But why do you consider the ability to boot anything but the rescue system itself important in a rescue system?

> Some rescue-system which really shines and with which you have made good
> experiences?

My usual go-to is SystemRescueCD (the old 5.x gentoo-based one).

andrea
Re: [gentoo-user] Trouble with backup harddisks
> A very *#BIG THANK YOU#* for all the great help, the research and the
> solution. I myself am back in "normal mode" :)

You're welcome!

> What is the most reasonable setup here: GPT without any hybrid magic and
> ext4 because it is so common?

I would go with MBR and a single ext4 partition. GPT is fine too, but for a 1TB disk with a single partition it has absolutely zero advantage over MBR.

andrea
Re: [gentoo-user] Trouble with backup harddisks
> does my posting from this morning reached you ? ...I did not received
> anything back from the mailinglist...

Nope. Just this night's response to Wol.
Re: [gentoo-user] Trouble with backup harddisks
> Since the disk was only ever accessed through an operating system that knew
> solely about MBR, the GPT data meant nothing to it. It happily wrote data
> past the MBR headers. Because the protective MBR is positioned before GPT
> information, the primary GPT header was destroyed and most likely overwritten
> with the file system. See also [1], the actual file system data probably
> begins somewhere past LBA 0.

If it was created according to the MBR that was shown previously, it will start at the beginning of the protective entry, i.e. at sector 1.

> If the initial assumption is correct, GPT *must not* be restored. Your modern
> PC sees the GPT partition type and assumes the existence of a GPT. It should,
> however, access the MBR layout and interpret the partition marked with the
> GPT ID as a regular partition.

It won't, as long as it recognizes it as a protective MBR. Which is the right thing to do, as a disk with a protective MBR and no valid GPT is inherently broken.

> 1) Boot an older system that only understands MBR, and mount the disk there.
> This was suggested earlier but was dismissed because we assumed the sector
> size had something to do with it. I do not think this is the case anymore -
> the old system should be able to read it.
>
> 2) Boot a VM with a kernel that only understands MBR, pass USB through to the
> virtual machine, mount the disk there.
>
> 3) Try confirming that there exists file system data past the MBR header.
> Maybe something like this:
>
> # dd if=/dev/sdb of=sdb-data bs=512 skip=1 count=16384
> $ file sdb-data

... or just bypass the partition table altogether. The filesystem starts at sector 1, i.e. at offset 1*512B, so:

mount -o ro,offset=512 /dev/sdb /mnt/xxx

But while this will allow you to access your data, you will still have a broken disk until you fix the MBR.

andrea
Re: [gentoo-user] Trouble with backup harddisks
Hi,

> CONFIG_PARTITION_ADVANCED=y
> CONFIG_MSDOS_PARTITION=y
> CONFIG_EFI_PARTITION=y

That's all you need.

> This could be the key. Sector sizes have been changing from 512 to 4096
> over many years. If your kernel has been updated to expect/use 4096 byte
> sectors, it might not be able to read the disk properly.

Sector size is only a (fixed) property of a specific block device, not something expected or required by the kernel. Sector sizes other than 512B have been around for ages without any problems, even in consumer hardware (e.g. CDs and DVDs have 2KB sectors).

> Disklabel type: dos
>
> Device     Boot Start        End    Sectors   Size Id Type
> /dev/sdb1           1 1953458175 1953458175 931.5G ee GPT

That looks... broken. fdisk is recognizing the disk as MBR (a GPT disk would have "Disklabel type: gpt"), but the partition table is a protective MBR, which makes no sense on a non-GPT disk.

My guess is that this disk was at some time partitioned with GPT (possibly it came that way from WD?), but then it was only used in machines with no kernel support for GPT. Such a machine will happily treat that as a normal MBR and allow you to access the protective entry as a normal partition, which means you can create a filesystem on it and fill it with data, destroying the GPT structures. A GPT-aware kernel, on the other hand, will recognize that as a protective MBR and ignore it -- but since the disk does not contain any valid GPT structures, it will not show any partitions.

Try running "gdisk -l /dev/sdb"; for a valid GPT disk it will say:

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

If that's not the case and you have no GPT, you will have to fix things manually. Since the disk is only 1TB, there is no reason to use GPT at all, so your best bet is to use fdisk to make that a standard MBR by changing the partition type from 'ee' to '83'.

andrea
Re: [gentoo-user] Understanding fstrim...
> Have your backup cron job call fstrim once everything is safely backed up?

Well, yes, but that's beside the point. What I really wanted to stress was that mounting an SSD-backed filesystem with "discard" has effects on the ability to recover deleted data. Normally it's not a problem, but shit happens -- and when it happens on such a filesystem don't waste time with recovery tools, as all you'll get back are files full of 0xFFs.

andrea
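The quoted suggestion can be as simple as chaining the two commands in the nightly cron job -- a sketch, with a made-up backup command and mount point:

```shell
#!/bin/sh
# Trim only if the backup completed successfully.
rsync -a /home/ /mnt/backup/home/ && fstrim -v /home
```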
Re: [gentoo-user] Understanding fstrim...
> My SSD (NVme/M2) is ext4 formatted and I found articles on the
> internet, that it is neither a good idea to activate the "discard"
> option at mount time nor to do a fstrim either at each file deletion
> no triggered by a cron job.

I have no desire to enter the whole performance/lifetime debate; I'd just like to point out that one very real consequence of using fstrim (or mounting with the discard option) that I haven't seen mentioned often is that it makes the contents of any removed files Truly Gone(tm). No more extundelete to save your bacon when you mistakenly rm something that you haven't backed up for a while...

andrea
Re: [gentoo-user] Difficulties to install a bootloader for the new system
> rc.log stops here:
>
> * Executing: /lib/rc/sh/openrc-run.sh /lib/rc/sh/openrc-run.sh /etc/init.d/local start
> * Starting local ...
> [ ok ]

So apparently it's booting all the way... Looking at my working config (asus x370 prime, ryzen 7 1700, UEFI boot from an NVMe SSD), you might want to try a couple of other things:

1) recompile your kernel with CONFIG_FB_SIMPLE=y
2) set "GRUB_GFXMODE=auto" and "GRUB_GFXPAYLOAD_LINUX=keep" in /etc/default/grub and rebuild grub.cfg

andrea
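For the second point, the /etc/default/grub fragment and the rebuild command would look like this (grub.cfg path as in a standard Gentoo GRUB2 install):

```shell
# In /etc/default/grub:
#   GRUB_GFXMODE=auto
#   GRUB_GFXPAYLOAD_LINUX=keep
# then regenerate the config:
grub-mkconfig -o /boot/grub/grub.cfg
```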
Re: [gentoo-user] Difficulties to install a bootloader for the new system
On 07/04/20 11:32, tu...@posteo.de wrote:
> When I boot this setup, grub starts and displays:
> booting Linux 5051500-64-RT
> ... and freezes. I have to powercycle the whole thing.

If you're getting there, your firmware was successful in loading GRUB from your system partition, so you can probably rule out problems with partitioning or GRUB setup and concentrate on the actual kernel.

Make sure your kernel has CONFIG_FB_EFI=y (it's under Device Drivers / Graphics support / Frame buffer Devices / Support for frame buffer devices / EFI-based Framebuffer Support), or you won't see any output from the kernel until your video driver is loaded.

andrea
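A quick way to check the option is to grep the kernel config -- on the running kernel this assumes CONFIG_IKCONFIG_PROC is enabled, otherwise grep the build tree's .config:

```shell
# On the running kernel (needs /proc/config.gz support):
zgrep CONFIG_FB_EFI /proc/config.gz
# Or in the source tree that built it:
grep CONFIG_FB_EFI= /usr/src/linux/.config
```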
Re: [gentoo-user] Preparing a blank NVMe as a boot drive...
> Then there was something mentioned about namespaces, which should be
> allocated smaller than the physical drive
> Is this really needed - just to boot from this SSD?

NVMe namespaces are an abstraction layer that allows a controller to present its connected storage as a number of independent volumes. Think LVM LVs, or the way a hardware RAID card presents volumes as multiple SCSI LUNs.

Your run-of-the-mill NVMe "gumstick" SSD by default will expose all of its capacity in a single namespace (and I don't even think it can be configured any other way), so you don't have to worry. Just remember that NVMe storage is always accessed through a namespace, so the equivalent of good old /dev/sda is not /dev/nvme0 (the controller) but /dev/nvme0n1 (the first namespace on the controller).

> Or is it sufficient (and harmless for the SSD) to just partitioning and
> format the drive?

It's not only harmless, it's the way it's supposed to be used. Remember that you will need to boot in UEFI mode, so you will need a system partition (and you really, really want to use GPT). The gentoo handbook has a good section on UEFI booting.

> I found some hints regarding page sizes and erase block sizes when
> partitioning the drive.

I wouldn't bother with that, but you're free to experiment :)

andrea
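The naming scheme is visible directly under /dev; the indices below are for the usual single-controller, single-namespace case:

```shell
ls -l /dev/nvme0      # the controller: a character device, no block I/O
ls -l /dev/nvme0n1    # namespace 1: the "disk", i.e. the /dev/sda equivalent
ls -l /dev/nvme0n1p1  # first partition on namespace 1
```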
Re: [gentoo-user] Still questions concerning a reasonable setup of a new system: UEFI &&/|| MTBR
> The BIOS of any machine old enough to be using FAT-style MBR cannot cope with
> anything newer, so if you have one, you're stuck with it.

The BIOS does not care about partitions at all: it just loads and executes whatever is on the first sector of the disk. As long as you are using a bootloader that understands GPT (such as GRUB2), you can BIOS-boot Linux from a GPT disk just fine.

andrea
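Setting that up amounts to reserving a small BIOS boot partition for GRUB's core image and installing to the whole device -- a sketch, with /dev/sdX as a placeholder:

```shell
# Create a 2MB partition of type ef02 (BIOS boot) on the GPT disk,
# then install GRUB for BIOS (i386-pc) booting.
sgdisk --new=1:0:+2M --typecode=1:ef02 /dev/sdX
grub-install --target=i386-pc /dev/sdX
```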
Re: [gentoo-user] RYZEN 5: Hyperthreading or no hyperthreading...
Hello,

> Thread(s) per core:    1    <<<<<
>
> Does my CPU hyperthread?

Definitely not. Your kernel config is fine; chances are hyperthreading (aka "SMT mode") is disabled in your BIOS settings.

andrea
Re: [gentoo-user] SDD, what features to look for and what to avoid.
> My mobo is just old enough to not support NVMe drives. I checked on that a
> while back. It'll be old school SDDs, well, the ones that mount hardware
> wise like HDDs anyway. It's not like SDDs are really that old.

We're talking about interface standards here, not form factors. SATA drives use the good old SATA interface and for all intents and purposes look just like HDDs to the BIOS and OS. NVMe drives have an onboard controller that attaches directly to the PCI express bus and use a completely different programming model for access.

Most consumer-grade 2.5" SSDs (those that "mount like HDDs") are SATA, while board-mounted M.2 ("gumstick") drives can be either SATA or NVMe, as the M.2 connector supports both standards. If you have no M.2 connectors, you're safely stuck with SATA drives.

> Avoid flash. Got it. A couple I looked at mentioned NAND. I'm somewhat
> familiar with AND/NAND gates so I think those are different from flash.

"NAND flash" (as opposed to "NOR flash") refers to the way memory cells are organized and connected. See for example https://www.embedded.com/flash-101-nand-flash-vs-nor-flash/

AFAIK all SSDs use some variant of NAND flash.

andrea
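On a running system the interface each drive uses shows up in lsblk's transport column (the output will obviously vary per machine):

```shell
# TRAN shows sata, nvme or usb for each whole disk.
lsblk -d -o NAME,TRAN,MODEL,SIZE
```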
Re: [gentoo-user] SDD strategies...
On 17/03/20 10:03, Neil Bothwick wrote:
> On Tue, 17 Mar 2020 09:35:10 +0100, Andrea Conti wrote:
>> The SSD is currently reporting 98% of its rated life left: I feel quite
>> confident it's going to outlast the laptop's useful life.
>
> What are you using to get that niformation?

smartctl -A /dev/sdX

All SSDs I have expose a "life left" attribute, although the specific name and encoding of the value will vary.

andrea
Re: [gentoo-user] SDD, what features to look for and what to avoid.
> So, before I find a SDD to buy, what are some things I should look for it
> to have and what are things I should avoid?

I think the single most important thing is buying stuff from a reputable brand (there's quite a number of those by now). Look at reviews. Top-tier performance probably isn't going to matter much for daily use, but I'd still look for a drive with a good warranty and a high write endurance rating, even if it commands a premium. Avoid drives based on QLC flash (they still have reliability and performance issues, and frankly prices aren't that great either).

Most NVMe drives can only be booted from in UEFI mode (*), so if for any reason you still need to boot from an SSD in legacy BIOS mode -- stay safe and go for SATA, or be sure to buy from a place with a good return policy.

(*) boot-time NVMe access relies on a boot ROM carried on the drive, and most (all?) drives only have a UEFI ROM. While some UEFI firmwares claim to have a "universal NVMe driver", my experience with those has not been good.

> While at it, if I look for a NAS type HDD, would all those be PMR instead
> of SMR?

I would expect any SMR drives sold at retail (i.e. not in USB boxes or the like) to be clearly marked as such, since they are a niche product with abysmal performance on common workloads. You're not going to silently get SMR drives in a NAS product line.

> From my understanding that should be correct. Mostly I buy WD, Seagate and
> Samsung. I've had a WD fail, I've had a Seagate fail. I'm not looking for a
> HDD flame up. O_o I'm starting to look at HGST. I think I got the spelling
> correct. Never had one tho.

While Seagate seems to be the current leader in selling crap, I've had all kinds of drives die on me. Most notable are a couple of high-end WDs literally going up in smoke some years ago, and an HGST going belly up with a good impression of a machine gun just the other day.
In general I've had good luck with 3TB HGST and Toshiba drives, though the Toshibas I have are really HGST drives rebranded following a round of company mergers and subsequent antitrust-driven spinoffs. WD Enterprise drives are quite good, but they do command a sizable premium. I've not had any experience with "NAS" drives, nor with modern helium-filled high-capacity drives. Apart from the unit price, I don't need that much space and I'm not particularly keen on having that much data go poof if a single device decides to stop working. andrea
Re: [gentoo-user] SDD strategies...
Hello,

> SSDs are a common replacement for HDs nowaday -- but I still trust my HDs
> more than this "flashy" things...call me retro or oldschool, but it my
> current "Bauchgefühl" (gut feeling).

The days of shitty JMicron stuff and OCZ drives dropping like flies are long gone... you are not going to encounter write endurance problems with a modern SSD from a reputable brand and any kind of reasonable workload. Stay clear of QLC drives and you'll be fine.

I have a laptop with a 256GB Plextor M5M SSD installed in 2014. I dual boot Gentoo and Windows, and in addition to the normal stuff, on the Gentoo side I do a couple of world updates per week -- which with a full KDE desktop involves quite a bit of compiling and writing around. The SSD is currently reporting 98% of its rated life left: I feel quite confident it's going to outlast the laptop's useful life.

And that's not a single datapoint: every system I have around has used an SSD as a primary disk for years now, and I've yet to see one fail or develop any kind of corruption issue. In the same timespan I've had a fair number of HDD failures.

> The HD will contain the whole system including the complete root
> filesustem. Updateing, installing via Gentoo tools will run using the HD.
> If that process has ended, I will rsync the HD based root fileystem to the
> SSD. Folders, which will be written to by the sustem while running will be
> symlinked to the HD.
>
> This should work...?

It will probably work, if you hack at it long enough :D But seriously, what's the point? Setting up a patchwork of a filesystem like that and maintaining it over time is going to be a complexity and reliability nightmare: if you're going to those lengths because you don't trust SSDs, why have an SSD at all?

andrea
Re: [gentoo-user] To all IPv6-slackers among the Gentoo community
TIM has also been offering "experimental" native IPv6 to all PPPoE-connected customers for years [1]. It works, but they (intentionally?) made it less-than-useful by choosing to give out a dynamic /64.

andrea

[1] https://assistenzatecnica.tim.it/at/portals/assistenzatecnica.portal?_nfpb=true&_pageLabel=InternetBook=consumer_root=/AT_REPOSITORY/876181

On 28/11/19 03:46, Alessandro Barbieri wrote:
> I can switch provider (currently with Vodafone) but in Italy only Fastweb
> has IPv6 (AFAIK) and it's not native but 6RD
>
> On Mon 25 Nov 2019, 15:54 Ralph Seichter <ab...@monksofcool.net> wrote:
>> https://www.ripe.net/ripe/mail/archives/ripe-list/2019-November/001712.html
>>
>> This does not come as a surprise, of course, but I consider it a good
>> point in time to pause and ask oneself what each individual can do to
>> move further towards IPv6. The end is neigh(ish).
>>
>> -Ralph
Re: [gentoo-user] How to flash an LSI SAS controller from IR to IT mode on linux with sas2flsh
> Jesus christ it is 2018 and they still want us to use dos to flash
> hardware >:'[

While DOS is usually the recommended environment for flashing hardware, in my experience the most reliable option for (cross)flashing LSI SAS2 controllers is the EFI version of sas2flash. Specifically, on some recent motherboards the DOS version of sas2flash will simply refuse to start with obscure hardware initialization errors.

If you can build a FreeDOS image with little effort, it might be worth a try; just be warned that it's not a guaranteed path to success, and that your time might be better spent looking for a board with UEFI firmware that you can borrow for the ten minutes needed to flash the card.

And no, I don't like UEFI either, but I do think it's a useful tool for the job at hand.

HTH,
Andrea
Re: [gentoo-user] Partition of 3TB USB drive not detected
Hi,

> ~ # parted /dev/sde print
> Model: WD My Book 1230 (scsi)
> Disk /dev/sde: 6001GB
> Sector size (logical/physical): 4096B/4096B
> [...]
> AFAICS this partition works fine, fsck does not report any problem. The
> funny thing is, it should not have been possible, because of the 2GB limit
> of MBR.

The real limit of MBR is 2^32 sectors, which amounts to 2TB when using 512B sectors. Both your disks are using native 4kB sectors (look at the logical sector size in the parted output), which effectively raises the MBR limit to 16TB.

Not that this answers your question, as the Linux kernel has supported 4kB sectors for years and AFAIK it does not need any special configuration options to do so...

andrea
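The arithmetic checks out in shell: 2^32 sectors times the sector size, expressed in TiB:

```shell
echo $(( (1 << 32) * 512  / (1 << 40) ))   # 512B sectors -> 2  (TiB)
echo $(( (1 << 32) * 4096 / (1 << 40) ))   # 4kB sectors  -> 16 (TiB)
```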
Re: [gentoo-user] iscsitarget or targetcli?
On 29/01/15 11:25, J. Roeleveld wrote:
> Hi all,
>
> I want to set up an iSCSI server (target in iSCSI terminology) running on
> Gentoo. Does anyone know which of the following 2 are better:
> - sys-block/iscsitarget
> - sys-block/targetcli
>
> Both don't seem to have had an update for over 2 years, but targetcli
> seems to be just the config-tool for whatever is in current kernels where,
> I think, iscsitarget is a userspace daemon?
>
> Many thanks, Joost

Hi,

sys-block/iscsitarget is composed of a kernel module and a userspace daemon, both compiled and installed by the ebuild.

I would second the recommendation for SCST, which is actively developed and in my experience is quite a bit more stable and tends to recover better from unexpected events than sys-block/iscsitarget (I have used both for quite some time).

The only downside of SCST is that it requires a bit more work to install, mainly because there is no ebuild for it; moreover, while it can be built against and run on a stock kernel, it comes with a couple of kernel patches which should be applied for optimal performance or are needed for specific features (e.g. the vdisk backend).

andrea
Re: [gentoo-user] iscsitarget or targetcli?
> What is the difference between the kernel-stuff (targetcli is only the
> config-tool) and scst?

http://scst.sourceforge.net/comparison.html

It was written by the SCST team, so it should be taken with a grain of salt; it is nonetheless a useful overview of the alternatives out there.

andrea
Re: [gentoo-user] Mac Mini with Grub booting Mac OSX and Windows?!
> Thanks Andrea. I had though that the MBR was automatically mapped to the
> the first 4 gpt partitions because that's they way it's always been on my
> system. So now I wonder how it's been set that way, because I know i've
> never touched gpt-fdisk and I didn't use bootcamp. Maybe the refit
> installer.

I have no idea. No sane GPT partitioning tool should ever automatically create a hybrid MBR: it's not part of the GPT spec, and if it's not properly updated at every change of the GPT table there's the risk of massive corruption. I know for sure that OSX's Disk Utility doesn't create one.

AFAIK the rEFIt installer just copies the boot manager to the OSX partition and installs a startup item that makes sure rEFIt is blessed (i.e. selected as the active EFI boot manager) at every reboot.

andrea
Re: [gentoo-user] Mac Mini with Grub booting Mac OSX and Windows?!
>> The real problem is that while rEFIt/rEFInd, OSX and Linux have no problem
>> dealing with a GPT partition table, Windows only supports MBR. (Windows 7+
>> supports GPT partition tables but it can only boot from a GPT disk in EFI
>> mode.)
>
> So, let us assume we have in the game: Windows 7 Ultimate Edition, Gentoo
> Linux and Mac OSX (latest version) -- then we are all on the same side
> accessing the same partition table type, no?!

No. :)

While Intel Macs are EFI platforms, they have an early and quirky implementation that cannot properly boot Windows in EFI mode, so you're stuck with booting in BIOS emulation mode, which in turn means that Windows will not use the GPT table. This is a really stupid Windows limitation, but we can't do anything about it.

The Linux kernel can use GPT with no restrictions; booting, however, is another story. Booting directly from GPT requires a GPT-aware bootloader such as GRUB 2. Alternatively you can use GRUB legacy, but then you need an entry in the MBR for the boot partition. The root partition (and any other partitions) need not appear in the MBR, as they are mounted by the kernel.

OSX uses GPT natively and does not need MBR entries for its partition(s). The only exception is if you want read-only access to an HFS+ partition in Windows through the driver provided by BootCamp; in that case you need to ensure that the first entry in the hybrid MBR covers the HFS+ partition you want to access.

andrea
Re: [gentoo-user] Mac Mini with Grub booting Mac OSX and Windows?!
> We can't have more then 4 primary partitions on a hard disk. Gentoo needs
> 2 partitions, /boot and a Virtual partition (that count's as well as one
> primary) with all the other folders. Windows will create 2. and Mac OSX
> minimum 1, am I right?!
>
>> Your Windows partitions have to be in the first four, but OSX and linux
>> partitions can be anywhere thanks to the gpt partition table.

Things are both simpler and more complex than that.

The real problem is that while rEFIt/rEFInd, OSX and Linux have no problem dealing with a GPT partition table, Windows only supports MBR. (Windows 7+ supports GPT partition tables, but it can only boot from a GPT disk in EFI mode. On a Mac, OSes other than OSX must be booted in BIOS emulation mode, therefore the requirement for MBR on the system disk for Windows still stands.)

GPT and MBR, however, are only indexing schemes: they describe how many partitions are on a disk and their location, but apart from providing a high-level 'type' label they have nothing to do with what's inside a partition.

GPT-partitioned disks traditionally have what's called a 'protective MBR', i.e. a dummy MBR which defines a single partition of type 0xEE spanning the whole disk; this is intended to keep partitioning tools that are not GPT-aware from considering the disk uninitialized and inadvertently destroying its contents. However, nothing prevents you from adding to the protective MBR regular entries for some of the partitions, and have the disk look like a 'normal' MBR disk as far as those partitions are concerned. The result is called a 'hybrid MBR', and it's the main trick behind Boot Camp.

There is really nothing special about booting (or installing) Windows on a Mac: it just works, as long as you have both a properly set up hybrid MBR with an entry for the Windows partition and a suitable EFI boot manager.
The former can be done with a tool such as gpt-fdisk (you can easily find a binary package for OSX, and there are directions for dealing with hybrid MBRs on the author's site); rEFInd is your best option for the latter. The standard Apple boot manager will also do, if you only need to boot OSX and Windows. Booting Linux works in a similar fashion. You don't even need a GPT-aware bootloader: good old GRUB 1 is perfectly up to the task, as long as there is an entry for its boot partition in the hybrid MBR. Then you can load a kernel with GPT support, and from there it's just a standard multiboot setup. HTH, andrea
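If you go the gpt-fdisk route, the hybrid MBR is created from gdisk's recovery & transformation menu -- an interactive session sketched below (the disk name is hypothetical; double-check everything before writing):

```shell
gdisk /dev/disk0
# At the gdisk prompt:
#   r   enter the recovery & transformation menu
#   h   build a hybrid MBR; gdisk asks which GPT partitions to mirror
#   o   print the resulting MBR to verify it
#   w   write the changes and exit
```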
Re: [gentoo-user] Fine Tuning NTP Server
Hello,

> server tick.nrc.ca minpoll 64 maxpoll 1024 iburst prefer

Ouch! minpoll and maxpoll should be specified as the log2 of the actual value, i.e. 6 and 10. Those are the defaults anyway.

> disable auth
> broadcastclient
> server ntp.server.com prefer

This looks fine to me; although configuring a broadcast association when your clients also have a unicast association to the same server seems a bit pointless, this should not cause any harm. I think you should first try to fix your server config and see if getting a proper sync on the server also solves the problem with the clients.

> As for /etc/conf.d/ntpd, we have set nothing. To be honest I did not even
> know the file existed till you mentioned it:
>
> NTPD_OPTS=-u ntp:ntp

That is where you put the commandline options you want ntpd to be started with.

> I would have liked to be better prepared for this but the gentoo wiki page
> has been down for a few weeks now. We are not looking for microsecond
> synchronization however, down to the second would be nice!

I doubt you can consistently achieve microsecond-level synchronization with NTP ;) The official documentation of the ntp suite [1] is a good source of information; the man pages of ntpd and ntp.conf are also quite extensive, albeit a bit terse.

andrea

[1] http://www.eecis.udel.edu/~mills/ntp/html/index.html
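The corrected server line in /etc/ntp.conf would therefore read as follows, with the poll bounds given as base-2 exponents (6 = 2^6 = 64s, 10 = 2^10 = 1024s):

```
server tick.nrc.ca minpoll 6 maxpoll 10 iburst prefer
```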
Re: [gentoo-user] Fine Tuning NTP Server
Hi,

The official explanation of the output of ntpq -p is here:

http://www.eecis.udel.edu/~mills/ntp/html/decode.html#peer

although http://tech.kulish.com/2007/10/30/ntp-ntpq-output-explained/ is probably a bit more understandable if you're not familiar with ntpd.

>      remote        refid    st t  when  poll reach   delay   offset  jitter
> *tick.nrc.com     .GPS.      1 u   52h   36h     2  56.036    8.131   0.445
>  tock.nrc.com     .INIT.    16 u     -   36h     0   0.000    0.000   0.000
>  192.168.2.255    .BCST.    16 u     -  1024     0   0.000    0.000   0.000

Your time server is using 'tick.nrc.com', a stratum 1 server, as a reference, and has an estimated 8 ms error. You have a second peer configured, 'tock.nrc.com', but it's not being used as it's never been reached. You also have a broadcast association, which is not needed if all your clients are configured in unicast mode (as the one below is).

Those numbers do look a bit off, though: a 36h poll interval with those reach masks probably means that you manually set minpoll=17 for the peers, which is not a good idea.

Ntpd is not designed to do 'point synchronization' like ntpdate; it runs an algorithm which periodically takes measurements from peers and continually updates an error estimate which is then used to adjust your system time (generally by slewing the kernel timekeeping loop frequency). The poll interval is how often a measurement is taken from (i.e. a packet is exchanged with) a specific peer, not how often your time is adjusted. Since frequent measurements are crucial to maintaining a good error estimate, forcing such a long poll interval basically means that your system time will be finely adjusted to a very coarse reference.

>      remote        refid            st t  when  poll reach   delay   offset  jitter
>  time.server      128.233.154.245   2 u    32    64   377   0.168  25539.9  40.427

Your client is not using the time server as a reference, and their times are 25.5s off. I don't know what the problem is, but if this is the steady-state situation (as a reach mask of 377 would suggest) you probably have a configuration error on one or both ends.
It would definitely help if you could post /etc/ntp.conf and /etc/conf.d/ntpd from the server and one of the clients. Finally, is the refid 128.233.154.245 on the clients pointing to the outside stratum server normal behaviour? Should the refid not be pointing to our NTP server? It's perfectly normal: the peer your client is using is in the 'remote' column, while the 'refid' column contains the peer's upstream reference. HTH andrea
Re: [gentoo-user] can't mount ext4 fs as est3 or ext3
Hi, EXT3-fs (sda5): error: couldn't mount because of unsupported optional features (240) /dev/sda5 / ext4 noatime,discard 0 1 When first mounting the root filesystem the kernel has no access to /etc/fstab and therefore by default tries mounting it with all available FS drivers until one succeeds. ext3 (or ext4 in ext3 mode) is tried before ext4 and you get that error when it fails because the filesystem is using ext4-only features such as extents. You can avoid that by adding rootfstype=ext4 to the kernel command line. Since all my fs are ext4 I could remove ext3 support from the kernel (3.5.4). Is that the recommended procedure? You can remove ext2/ext3 support even if you still have ext2/ext3 filesystems around; the ext4 driver is backwards compatible and can handle those with no problems. You just have to make sure that CONFIG_EXT4_USE_FOR_EXT23 is set in your kernel configuration. HTH andrea
Re: [gentoo-user] DHCP - specific inet no - how to
dhcpcd has a --request option to do just this. One never stops learning... It only works if the address is available; if the server is returning a different address now, it may be that your preferred address is in use. It also depends on the DHCP server understanding the request: I don't know if the option is actually a required part of the DHCP protocol, or whether it's implemented even in cheap telco routers... although many routers these days run some sort of embedded linux with dnsmasq. andrea
Re: [gentoo-user] nfs mounting back and forth...
Hi, As an alternative to quickpkg and friends: Mount the beaglebone's rootfs to /usr/$CTARGET of my Gentoo Linux PC. Then nfs-mount a part of my Linux PC filesystem on /usr/$CTARGET/tmp. No need for NFS, just bind mount /tmp onto /usr/$CTARGET/tmp. Look up the --bind option in the man page of 'mount'. andrea
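For the record, the bind mount might look like this (the CTARGET triplet and mount point below are assumptions based on the quoted setup, so adjust to taste):

```
# one-off, as root (hypothetical paths):
mount --bind /tmp /usr/armv7a-hardfloat-linux-gnueabi/tmp

# or the equivalent /etc/fstab line to make it persistent:
/tmp  /usr/armv7a-hardfloat-linux-gnueabi/tmp  none  bind  0 0
```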
Re: [gentoo-user] DHCP - specific inet no - how to
Hello, Normally, a device tries to get the previous inet number, but sometimes this changes. DHCP clients can ask for a specific IP address, but the server is free to ignore the request, so from the client side there is no way to guarantee which address you'll get. It's usually just the DHCP server giving out the previous IP to the same client, either by chance or because the existing lease hasn't expired yet. But I cannot configure the DHCP server myself since this is provided by my internet provider. Then you're basically out of luck. Since you have few devices and those devices are under your control, just forget about DHCP and configure them statically. andrea
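For the static setup, on Gentoo that's done in /etc/conf.d/net; the sketch below uses netifrc-style syntax and made-up addresses, so check it against your baselayout/openrc version:

```
# /etc/conf.d/net -- static address instead of DHCP (illustrative values)
config_eth0="192.168.1.10/24"
routes_eth0="default via 192.168.1.1"
dns_servers_eth0="192.168.1.1"
```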
Re: [gentoo-user] USERDIR problem with apache on new install (SOLVED)
Hello,

I put in a symlink /home -> /local/allan/gottlieb so that programs looking in /home would be happy. I had /etc/passwd say /local/allan/gottlieb since it is the real directory. apache doesn't like this. There is probably an option to let it do this since it has several options on symlinks

It's not about liking... mod_userdir automatically maps a URL in the form /~foo onto user foo's home dir, as it is recorded in the system's user database. If you put /local/allan/gottlieb there, apache tries to serve files directly from /local/allan/gottlieb. The default mod_userdir configuration (/etc/apache2/modules.d/00_mod_userdir.conf, of which you pasted an excerpt in the other email) only sets an 'Allow from all' for directories in the form /home/*/public_html, which does not include anything under /local. You can either change your home directory, or add

<Directory /local/allan/gottlieb>
    Order allow,deny
    Allow from all
    [whatever other options you need]
</Directory>

in the apache config for the virtualhost you're using. As for the FollowSymlinks and SymlinksIfOwnerMatch options, I'm not sure they apply here: they should only affect whether the server follows symlinks *within* the document root, not symlinks in the path *leading to* the document root.

andrea
Re: [gentoo-user] distcc cross-compiling for OSX
I think I must add an OSX-specific symbolic link. Symlinks are only needed on the distcc client, not on the server running distccd. But that is a trivial matter. Which tools / configuration must be set for cross-compiling OSX code on my Gentoo box? You need to put together a complete OSX cross toolchain. This basically means building cctools (OSX's equivalent of binutils) and apple's compiler from source. Then you have to do some additional plumbing on both ends to get it all to work. An overview of the process for 10.4: http://myownlittleworld.com/miscellaneous/computers/darwin-cross-distcc.html Do note that quite a lot of things have changed since then, so those instructions are probably not going to work. andrea
Re: [gentoo-user] distcc cross-compiling for OSX
Thanks for this link, I had read it before I wrote the post. Did I understand the problem correctly: I need a full OSX-compatible toolchain!? So I download all the Apple developer tools, compile them on my Gentoo box and add all the header files which I used under OSX to my Linux box? According to those instructions, for distcc use you only need cctools and a compiler. You don't need any headers as the code distcc sends to servers is preprocessed on the client. You won't be able to cross-compile directly on the linux box (you're missing headers, libraries and frameworks), but that should not be a problem. The OpenDarwin project died a long time ago, so odcctools is no more. The source packages for cctools and apple's blend of gcc can be downloaded from opensource.apple.com. I have no idea whether they support building on Linux or not (especially cctools)... andrea
[gentoo-user] Re: Some essential packages fail to compile
If I fully follow that wiki page (I did until the wrapper script is added) I would have to change these links:

lrwxrwxrwx 1 root root 16 Sep  6 21:35 c++ -> ../../bin/distcc
lrwxrwxrwx 1 root root 16 Sep  6 21:35 cc -> ../../bin/distcc
lrwxrwxrwx 1 root root 16 Sep  6 21:35 g++ -> ../../bin/distcc
lrwxrwxrwx 1 root root 16 Sep  6 21:35 gcc -> ../../bin/distcc

That's the idea, although now I can see that this is not your problem. Yes, i686-pc-linux-gnu-gcc and i686-pc-linux-gnu-g++ are just symbolic links to the native compilers (because I don't have those binaries). This is what's biting you. Distcc is invoking i686-pc-linux-gnu-gcc on a server and is getting back 64-bit output, because the x86_64 compiler is configured to produce 64-bit output by default. Should I better remove the symbolic links and add scripts there which add -m32 -march=i686 to the parameter list (I could do it because those compiler names are only used on 'laptop')? You can do that, and it's surely better than mucking with default CFLAGS. Be warned, though, that the components of the native Debian toolchain are probably not the same version as those on your laptop. This might expose you to random runtime breakage which will be quite hard to diagnose, especially in case of different glibc versions. This is the main reason why a dedicated toolchain is recommended. So CFLAGS and HOSTCFLAGS must be set to the same in make.conf? It is really confusing. :( Unless you are doing strange things you should never need to touch your HOSTCFLAGS. In your case I think it would simply be better to fix your setup :) andrea
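Such a wrapper can be very small. This is only a sketch -- the install location is up to you, the flags are the ones discussed above, and it (plus a g++ twin) would have to be installed on every Debian server under the exact CHOST-prefixed name that distcc invokes:

```shell
# Create a hypothetical i686-pc-linux-gnu-gcc wrapper that forces the
# native multilib gcc into producing 32-bit i686 code.
cat > /tmp/i686-pc-linux-gnu-gcc <<'EOF'
#!/bin/sh
exec gcc -m32 -march=i686 "$@"
EOF
chmod +x /tmp/i686-pc-linux-gnu-gcc
```

The glibc-version caveat above still applies: the wrapper only fixes the word size, not library mismatches.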
Re: Aw: [gentoo-user] Re: Some essential packages fail to compile
One way is (if you have read any 'environment' files in my tar archive) to set the guest architecture explicitly in /etc/(portage/)make.conf which I did. /etc/make.conf:

CFLAGS="-O2 -march=i686 -pipe -fPIC -m32"
CXXFLAGS="${CFLAGS}"
CHOST="i486-pc-linux-gnu"

That is not setting the guest architecture explicitly, you're just telling whatever compiler gets invoked on the remote host to produce 32-bit output. Cross-compiling with distcc is quite straightforward. As long as distcc is set up correctly on the client (which for cross-compiling involves a manual step, see below), and as long as the ebuilds properly invoke the compiler with the CHOST prefix (i.e. 'i486-pc-linux-gnu-gcc' instead of 'gcc'), the appropriate compiler will be called on the servers, with no need to manually play with your CFLAGS. If you need -m32, it means you are *not* cross-compiling, i.e. you are invoking the native gcc on the remote hosts instead of your cross-compiler. That usually works as any x86_64 gcc with multilib support can produce 32-bit output, but it is just masking the problem and will break if the -m32 flag is lost for some reason. I left the default CHOST as is and on the Debian systems I provided the required compiler. 'Provided the required compiler' should mean that on every server you have a complete 32-bit toolchain (binutils, gcc, glibc and kernel headers) with the version of each component matching those on your distcc client. You should be able to compile a 32-bit executable locally on any of the Debian systems just by invoking 'i486-pc-linux-gnu-gcc'. Setting up such a toolchain can be quite a PITA, so on Gentoo it's usually done with crossdev -- but as long as you get things right that's not a requirement. One of the nodes has compiled a 64 bit object (conf.o) which the linker (running on 32 bit) tried to link to a 32 bit program/library (the output).
So for me, the Makefile in that package (klibc) didn't honour the CFLAGS I configured, which needs fixing, if my assumption is right. I can investigate more deeply here. export HOSTCFLAGS := -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer I think this line only needs to be extended with $(CFLAGS) and then the fix is complete. No. CFLAGS are for the build target, HOSTCFLAGS are for the build host. Building (configuring, actually) klibc involves compiling a tool which is run on the host (i.e. the machine you're building on), before compiling klibc itself for the build target. In your case both the host and the target are the same (i486-pc-linux-gnu), so the difference might not be very clear, but if you were compiling klibc for a different arch (e.g. powerpc) you would have a completely different build target, with its own set of CFLAGS. Back to your problem, the klibc build is correctly picking up your HOSTCC:

Source unpacked in /var/tmp/portage/dev-libs/klibc-1.5.20/work
Compiling source in /var/tmp/portage/dev-libs/klibc-1.5.20/work/klibc-1.5.20 ...
make -j6 defconfig CC=i486-pc-linux-gnu-gcc HOSTCC=i486-pc-linux-gnu-gcc

but distcc seems to be invoking the native x86_64 compiler (i.e. 'gcc') on the remote systems (you can also see all those warnings about differing integer and pointer sizes). My guess is that you didn't properly set up distcc for cross-compiling on your client. Try following the instructions at http://www.gentoo.org/doc/en/cross-compiling-distcc.xml and let us know if it fixes your problem. HTH, andrea
[gentoo-user] Re: Some essential packages fail to compile
http://www.gentoo.org/doc/en/cross-compiling-distcc.xml I also read it well before I wrote my email. In case I wasn't clear in my other email, what you should be paying attention to are the instructions under 'Configuring distcc to cross-compile correctly'. Of course, the instructions are just an example and you should substitute the correct triplet (i.e. 'sparc-unknown-linux-gnu' should become 'i486-pc-linux-gnu'). I have to temporarily disable distcc (the FEATURES variable needs to be commented out) for klibc, emerge klibc and then re-enable distcc. FYI, environment values take priority over those in make.conf, so you can just override FEATURES for the current build by putting it on the command line: $ FEATURES=-distcc emerge klibc andrea
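The priority rule is plain shell behaviour and easy to demonstrate; the file below is just a stand-in for make.conf:

```shell
# A variable set in the command's environment beats one sourced from a
# file -- the same mechanism behind the one-shot FEATURES override.
echo 'FEATURES="distcc"' > /tmp/fake-make.conf
. /tmp/fake-make.conf
echo "$FEATURES"                              # prints: distcc
FEATURES="-distcc" sh -c 'echo "$FEATURES"'   # prints: -distcc
```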
Re: [gentoo-user] GCC : another trap for the unwary
(1) Gcc 4.5.4 seems to require USE=cxx, not the previous -nocxx, which was covered by -* at the beginning of my list in make.conf. http://article.gmane.org/gmane.linux.gentoo.devel/73962 I guess they ended up not putting in the check after all :) andrea
Re: [gentoo-user] new machine : DVD drive
(1) do I need to configure the kernel to find the drive ? It's basically handled exactly the same as a CD drive, so you need the same configuration options you would use for that. Yes. As a minimum have a look at BLK_DEV_SR and BLK_DEV_SR_VENDOR. You may also need SCSI_PROC_FS for legacy applications. The AHCI drivers would probably be enabled for your hard drive SATA controller anyway. BLK_DEV_SR_VENDOR made sense when every drive manufacturer adopted their own standard in designing interface protocols... with every drive made on the planet in the last ten years being MMC-compliant, there is not much point in still using that. Not that it hurts even if it's not needed... (3) are there rewritable DVDs, as there used to be rewritable CDs ? -- among the specs are much slower speeds labelled 'RW'. Yes, +RW and -RW, but I don't know much more on this other than that older DVD writers would only do one format and not the other, and if you didn't pay attention to the specifications/limitations of your hardware you could end up buying the wrong type of DVDs. Someone more experienced with recording media could answer this better. Every modern recorder does both standards; depending on both the burner and the reader you might find that one standard works better than the other (i.e. has lower read error rates). Trial and error seems to be the only working approach... As for the standards, if you're just burning backups they're basically equivalent. The +RW standard is theoretically more flexible as media can be formatted in a packet mode which allows (almost) random r/w access, but in my experience software support and reliability have always been lousy, so forget about it. +RW media cannot be erased in the same way CD-RWs are erased -- you can only overwrite it with new data. -RW behaves the same as CD-RWs in this regard.
If you need rewritable DVD media with reliable random r/w access (but this doesn't seem to be your case), there is a third standard (DVD-RAM) which uses special disks with hardware sector marks. Drive support is not hard to find nowadays (the drive you cited actually supports it), but writing is slow, good media is expensive and the disks cannot be read in most normal dvd drives; I have no idea about the state of software support in Linux. (4) anything else I sb aware of ? DVDs (especially rewritable ones) are much less resilient than CDs. Don't rely on a recorded DVD to be still readable after more than 3-4 years, because it probably won't be. While good quality (i.e. expensive) brand media tends to be a little more durable, DVDs are not the right choice for long-term archival. Given your adoption rate of new technology I suggest you consider buying a BluRay player if not recorder, because I don't know how long it will be before DVDs become obsolete too. I doubt BD-R will ever supplant DVD-R the same way DVD-R did with CD-R. When DVD-R came out there were no practical and affordable alternatives for recording and transporting large quantities of data. Nowadays, on the other hand, flash storage is ubiquitous and cheap enough to satisfy the needs of most people. This slowed the adoption of BD-R a lot, to the point that I'm not sure it will ever become a widespread technology. IOW, I would only consider shelling out the cash for a BD-R drive if it made sense for my current storage needs, not as an investment for the future. my € 0.02, andrea
Re: [gentoo-user] Just a heads-up, I think =sys-libs/glibc-2.14.1-r3 is a stinker.
Ok, yes. This version of glibc, =sys-libs/glibc-2.14.1-r3, is crud. It's the current stable version on amd64, and you seem to be the only one having problems with it. I think the problem is more likely with your setup (e.g. another toolchain component, CFLAGS, ...). At least, if you're doing parallel building. Out of my three machines, the 8-core box got bit by it, the 4-core box got bit by it, but the 2-core laptop sailed past. Been running 2.14.1-r3 on an amd64 16-core server for a couple of weeks. MAKEOPTS is at -j20. No crashes so far, emerge keeps working and the machine had no problems rebooting after a kernel update. andrea
Re: [gentoo-user] new mobo : no Eth0
Does anyone have suggestions ? Your logs show that the interface is being detected and is named 'eth0'. If you can't see eth0 at the end of the boot process, the device node has probably been renamed by udev (you should see it as eth1, e.g. in the output of ifconfig -a). So: # rm /etc/udev/rules.d/70-persistent-net.rules and reboot andrea
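For reference, that file pins interface names to MAC addresses with entries along these lines (the MAC here is made up); a replaced board means a new MAC, so the old rule keeps eth0 reserved and the new card gets a freshly generated eth1 entry:

```
# /etc/udev/rules.d/70-persistent-net.rules -- illustrative entry
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", KERNEL=="eth*", NAME="eth0"
```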
Re: [gentoo-user] The End Is Near ... or, get the vaseline, they're on the way!
This news item is to inform you that once you upgrade to a version of udev >=181, if you have /usr on a separate partition, you must boot your system with an initramfs which pre-mounts /usr. [...] Happy Computer Users, systemd is on your horizon. The problem, if you really want to call this a problem, is with udev, not OpenRC. Switching to systemd is not going to solve it. Personally I stopped bothering with a separate /usr ages ago, so I don't really care. andrea
Re: [gentoo-user] In TTYs, pinguins remain
In TTYs, the penguins you see on top of the booting process remain. Then in less, i cannot scroll upwards, which sucks using man and like that. You're using a framebuffer driver which is either misconfigured or not supported by your system. Most of the time it's just vesafb interacting badly with a broken VGA BIOS; if that's the case you can try playing with the commandline options pertaining to how scrolling is done (read vesafb.txt under Documentation/fb in the kernel source tree) If you have a reasonably recent intel/amd/ati/nvidia card and you're mainly interested in text mode, the framebuffer provided by the relevant in-kernel DRM driver is usually the best choice (and it's a hell of a lot faster than anything using BIOS calls like VESA). If you have older hardware, uvesafb tends to work better than vesafb in a lot of cases, although it requires a bit of work for setting it up. If all else fails, there is always the basic VGA text console :) HTH, andrea
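As an example of those options, vesafb's scroll method can be chosen on the kernel command line (ypan/ywrap/redraw are the methods documented in vesafb.txt; the mode number below is an assumption for your card):

```
# kernel command-line sketch: ywrap is usually the fastest scroll
# method, mtrr:3 enables write-combining
video=vesafb:ywrap,mtrr:3 vga=0x318
```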
Re: [gentoo-user] grub vs grub 2
PS: If you know how to get rid of any background image, could you say how? Remove or comment out any splashimage directives from the config file. *** Re grub2: as long as grub0 works, I really don't care if grub2 is better, cleaner, shinier, more modern or anything else. I don't need a freakin' whole OS to boot linux, and having a configuration that is so convoluted that it *has to* be generated by running a set of scripts makes no sense at all. I thought the days of m4 and sendmail.cf were over a long time ago... I am sure grub2 can be made to work, but for a piece of software as vital as a boot loader, that level of complexity in my opinion is totally unreasonable and impossible to justify. andrea
Re: [gentoo-user] hwclock -- sysclock and the ntp-client
Is your RTC driver compiled into the kernel? CONFIG_RTC_HCTOSYS=y CONFIG_RTC_HCTOSYS_DEVICE="rtc0" Those have nothing to do with the RTC *driver*. AFAIK on a PC the relevant option is CONFIG_RTC_DRV_CMOS andrea
Re: [gentoo-user] HEADS UP - postfix-2.9.0 is broken
BTW postfix 2.9.0 also fails to emerge with USE=vda because it tries to apply the patch for 2.8.5 (a patch for 2.9.0 has not yet been released). And it also fails to start if you have maps in hash format and emerged with USE=-berkdb. Luckily the error messages are informative enough... but let's say that a word of caution in the emerge message would have been welcome. andrea
Re: [gentoo-user] gentoo-sources-3.2.0-r1 and genkernel
On 06/01/2012 10:51, András Csányi wrote: under the /boot directory. Did I miss something? Is there anything new in genkernel? Should I report it? Check /etc/genkernel/genkernel.conf: maybe the option that installs the kernel into /boot is commented out. hth A.
Re: [gentoo-user] Re: -march=native is *EXTREMELY* conservative
Could be. It could also be because of -mfpmath=sse. AFAIK most video decoders (outside of reference implementations) are written using integer math only... -O3 is a much more likely candidate. andrea
Re: [gentoo-user] What happened to OpenRC 0.9.6?
On 27/11/11 16.36, Nikos Chantziaras wrote: sys-apps/openrc-0.9.6 is just... gone? Not even masked, but completely gone from portage. FYI, sys-apps/openrc-0.9.7 is out. Apparently, the solution to the rc_parallel issues was to remove every mention of rc_parallel from the default /etc/rc.conf Brilliant. andrea
Re: [gentoo-user] Re: What happened to OpenRC 0.9.6?
Oh, you just want to test the features *you* use, understood. Guys, I did not want to start a flamewar. I've been running ~arch for years and I've had my fair share of breakage, which I'm perfectly fine with (e.g. I'm not complaining that dev-lang/php-5.4.0_rc2 currently fails to compile with USE=snmp). It's my choice to run unstable, and I only do so on machines where a hosed system is a nuisance rather than an emergency. I write software for a living, so I know perfectly well that covering every possible configuration in your tests is extremely difficult, especially if you're not granted ample resources (i.e. time+$$$) specifically for that purpose. I was just a little surprised that a system package turned out to be completely broken in a scenario that I thought was quite widespread, especially among the devs (as rc_parallel results in _very_ tangible time savings, especially on a desktop with lots of services and frequent boots). Things were handled well: as soon as the issue was reported, the breakage was acknowledged and the offending version was masked and then removed. That's all as far as I'm concerned. No data was lost and no kittens were killed. Let's move on. andrea
Re: [gentoo-user] What happened to OpenRC 0.9.6?
On 27/11/11 16.36, Nikos Chantziaras wrote: sys-apps/openrc-0.9.6 is just... gone? Not even masked, but completely gone from portage. What happened to it? Last time I checked it was hardmasked. Now it's been confined into oblivion, I hope. It had a little problem in resolving the dependencies of a newly introduced boot service that created a cycle and caused the boot process to hang (almost) forever with rc_parallel=YES. With 100% repeatability, mind you, which does raise some questions about the amount of testing done before release. Yes, it's ~arch and rc_parallel is explicitly marked experimental, but it's not expected to be completely and consistently broken, either. If that sounds like I'm ranting, it's because I just spent about an hour getting three machines affected by this problem back into working state. If anyone still has it installed, it's time to sync and downgrade :) andrea
Re: [gentoo-user] java java everywhere
Or maybe the build system is stable enough for general use. If someone can share some experience with the source build, I'd like to hear about it. The build system of the source build, of course. Well, it works, and my impression is that it's a bit faster than icedtea-6 (the build system, I mean). Unless you've got time to spare, though, I wouldn't recommend building from source on anything else than a recent machine. Then there's the usual catch that you need to have a jdk installed in order to build icedtea -- so the first time you cannot use the source ebuild. andrea
Re: [gentoo-user] [OT] laptop desktop serial connection don't work on one direction
I have connected the wires by hand, 3-2 and 2-3 but without 5. I'll try it again today with 5 connected, and post my findings. Not having a common ground reference between the two sides could very well cause the kind of problems you're seeing :) andrea
Re: [gentoo-user] problem with pam
On 29/10/11 13.10, co wrote:

# ls -l /lib/libpam.so*
lrwxrwxrwx 1 root root    11 Oct 21 23:47 /lib32/libpam.so -> libpam.so.0
lrwxrwxrwx 1 root root    16 Oct 21 23:48 /lib32/libpam.so.0 -> libpam.so.0.83.0
-rwxr-xr-x 1 root root 46520 Sep 28 19:37 /lib32/libpam.so.0.83.0

That's not what you were asked for (i.e. /lib32 != /lib, as you seem to be on amd64). Are you by any chance trying to rescue your system by booting from an install cd? If so, make sure you use an amd64 ISO, and don't forget to chroot to your system installation.

Unpacking Linux-PAM-1.1.5.tar.bz2 to /var/tmp/portage/sys-libs/pam-1.1.5/work
tar: Linux-PAM-1.1.5/INSTALL: Cannot open: Invalid argument
tar: Linux-PAM-1.1.5/ABOUT-NLS: Cannot open: Invalid argument

I think this is the real problem. Whatever is causing this probably also has something to do with your openrc issues. What kind of filesystem is your /var/tmp/portage directory on? Is it free of errors? Is there any free space left? Can you create a new file on it? Try

# echo test > /var/tmp/portage/test

andrea
Re: [gentoo-user] problem with pam
On 29/10/11 19.47, co wrote: After I ran e2fsck -c on the /var partition, I re-emerged pam. So I think there is something wrong with my hard disc. Why do you think that? Did badblocks identify any specific problems? A failing hard disk generally shows very obvious symptoms (noises, periodic system lockups due to read retries, I/O errors in the kernel log) before getting to the point of causing the widespread filesystem corruption you seem to be experiencing. Ext4 is generally quite resilient even when handled roughly, so I would tend to suspect a memory issue. That's just a guess, though, since you didn't provide much information. I get some errors and warnings at boot time, and after startx the mouse can't move. So how do I fix the hard disc now? As Mick said, you don't fix a hard disk, you try to salvage whatever is on it and then you go for a replacement. However, your first priority should be to rule out memory issues: doing any kind of data recovery operation on a machine with defective memory is a recipe for disaster. andrea
Re: [gentoo-user] subversion-1.7.0 and layman
Hello, svn: E155036: Please see the 'svn upgrade' command svn: E155036: Working copy is too old. Should I downgrade subversion or just wait till the particular layman repository's format is upgraded? The problem is with your working copy, not with the repository. Subversion 1.7 uses a new format for storing metadata in working copies which is not compatible with the one used up to 1.6. You can upgrade to the new format with $ svn upgrade /var/lib/layman/<overlay name> Be warned, though, that there is no way to convert back to the old format -- if you decide to downgrade subversion later on you will need to delete the overlay and check it out again. andrea
Re: [gentoo-user] Filesystem with lowest CPU load, acceptable emerge performance, and stable?
So, can anyone recommend me a filesystem that fulfills the following needs: Scenario: vFirewall (virtual firewall) that is going to be deployed at my IaaS cloud provider. Disk I/O characteristics: occasional writes during 'normal' usage, once-a-week eix-sync + emerge -avuD. Priority: stable (i.e., less chance of corruption), least CPU usage. My Google-Fu seems to indicate either XFS or JFS; what do you think? IMHO a firewall (physical or virtual) is something that fits strictly into the appliance category. It must do only one thing and do it well, with as little complexity and maintenance overhead as possible. Why in the world anyone would want to run gentoo (which among other things needs portage and a whole compiler stack) -- or for that matter any other full-fledged linux distribution -- on something like that in production is beyond me... That said, XFS and JFS are targeted at completely different use cases and are way too complex for your scenario. Without appropriately-sized hardware I'm not even sure XFS fits in the stable category. Stick to ext3, keeping an eye on the inode count for /usr/portage as the default value on a small partition probably won't be enough. Fs-related CPU usage in a firewall (which has nearly zero disk activity when up and running) is mostly a non-issue unless you need some form of heavy logging or you're doing something wrong. Weekly updates, on the other hand, are exposing you to the risk of random breakages and -- if you compile from source -- are going to cost you a serious amount of CPU. My advice would be to limit updates to those fixing known vulnerabilities, and even then compiling somewhere else and doing binary installs would be preferable. andrea
Re: [gentoo-user] Layman errors. Google ain't helping.
No such file or directory: '/var/lib/layman/make.config' ^^ -rw-r--r-- 1 root root 64 Sep 3 22:49 /var/lib/layman/make.conf That's either a typo or the source of the problem :) andrea
Re: [gentoo-user] Question regarding UTF-8 settings
rc-update You meant env-update, right? The output was -- despite what the guide expected -- The guide shows the output from setting LANG. You set LC_CTYPE, so unless I'm missing something obvious your result is perfectly normal. The wording in the green box under listing 2.4 is perhaps a bit unclear: depending on what you are trying to achieve, setting LC_CTYPE instead of LANG might be enough, but they are definitely *not* the same thing. Follow the link in the same box for an explanation. andrea
Re: [gentoo-user] [Gentoo install] Disk full at 35%?
If you run man mke2fs, you should check out -N and -i. It was trial-and-error (for me, anyway) to find the right number. Consider using reiserfs for /usr/portage. No real performance advantage over ext[234], but works well with lots of small files and there's no inode count to worry about. In my experience the main downside of reiserfs is that fsck.reiserfs is almost never able to recover cleanly if the filesystem metadata does get corrupted in a non-trivial way. But for the portage snapshot this isn't really a problem... andrea
Re: [gentoo-user] [Gentoo install] Disk full at 35%?
Would LVM somehow prevent these sorts of things from happening? LVM doesn't affect inode usage, does it? LVM has nothing to do with inodes. Inodes are a filesystem concept, and filesystems do not really care about the kind of block device they reside on. Well, generally. AFAIK you will gain more inodes when you increase the size. Only because, unless you specify a value, mke2fs allocates a number of inodes proportional to the size of the filesystem, with the default being 1 inode every 16kB (see /etc/mke2fs.conf). But for ext[234] the number of inodes is fixed at filesystem creation, so even if you use LVM you can't increase it by -- say -- growing the underlying LV and then using resize2fs. andrea
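Both knobs (-N for an absolute inode count, -i for a bytes-per-inode ratio) can be tried risk-free on a file-backed image, no root or LVM needed -- assuming e2fsprogs is installed:

```shell
# Create a small image and request an explicit inode count at mkfs
# time; dumpe2fs shows what was actually allocated.
truncate -s 16M /tmp/inodetest.img
mke2fs -q -F -N 1024 /tmp/inodetest.img
dumpe2fs -h /tmp/inodetest.img 2>/dev/null | grep 'Inode count'
```

(mke2fs may round the requested count up to fill whole inode table blocks, so the reported number can be slightly higher than what you asked for.)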
Re: [gentoo-user] [Gentoo install] Disk full at 35%?
Filesystem                  Inodes IUsed IFree IUse% Mounted on
/dev/mapper/weird-inodetest   1024  1024     0  100% /mnt
/dev/mapper/weird-inodetest   2048  1024  1024   50% /mnt

Then I stand corrected. I guess that the man page for mke2fs saying that the inode count of a filesystem cannot be changed does not take resizing into account. I also thought that if resize2fs had the ability to extend the inode table, it would have options to give the user some degree of control over the process. Apparently that's not the case. andrea
Re: [gentoo-user] {OT} Can I retrieve my SSL key?
On 18/08/11 03.23, Grant wrote: I just accidentally overwrote my SSL certificate key. Is there any way to retrieve it? Possibly some sort of export since I haven't restarted apache2 yet? If apache keeps the certificate file open after reading it (I doubt that's the case, but if you have lsof installed you should check just to make sure) and you didn't restart it, you could try this method: http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data Otherwise, assuming you're on ext2/ext3, ext3undel works quite well, *provided that you stop any writes to the affected volume ASAP*, e.g. by remounting it read-only. If the data hasn't been overwritten, carving tools should work too, as the ASCII-armor of the certificate provides an easily recognizable pattern and the file is almost certainly small enough to fit within a single FS block. andrea
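A tiny simulation of the carving idea, with a scratch file standing in for the raw device (in a real recovery you would point grep at the read-only block device instead):

```shell
# The PEM armor is an easily recognizable pattern, so plain grep can
# locate a key inside raw data; /tmp/disk.img plays the device's role.
printf 'junk\n-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----\nmore junk\n' > /tmp/disk.img
grep -a -A 2 -- '-----BEGIN RSA PRIVATE KEY-----' /tmp/disk.img
```

On a real device you would use a much larger -A value and then trim the output down to the END marker by hand.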
Re: [gentoo-user] Firmware exists but fails to load
it does not actually matter how you configured the driver -- built-in kernel or as module: everytime when driver operates the device, it checks whether firmware is loaded. Are you sure about that? AFAIK firmware loading is only attempted once, when the driver is first initialized. functionality of /lib/udev/firmware is controlled by USE=extras. That might have been the case at some point but now sys-fs/udev-164-r2 and sys-fs/udev-171-r1 both install the firmware-related stuff (rules and helper) even with USE=-extras andrea
Re: [gentoo-user] Re: Firmware exists but fails to load
It's not a workaround, but how it's supposed to work. Loading from userspace means using a user-space program to load the firmware. This is not what you're trying to do, since you don't have such a program. ? Udev has been the standard way to service kernel firmware requests for quite some time. The relevant bit is in /lib/udev/rules.d/50-firmware.rules . However, udevd is only started after the kernel is loaded, and therefore will only load firmware for drivers which are built as modules. Firmware for built-in drivers must either be compiled into the kernel or be provided in an initrd along with a suitable helper. Is there a specific reason why the r8169 driver cannot be loaded as a module? AFAIK the only case in which you *need* a built-in net driver is if you're doing root over NFS. Your other option is writing a userspace program that reads the firmware after the kernel has booted and patches it into the hardware. Patching is always done by the kernel driver. The userspace helper only has to answer kernel requests for a specific firmware by providing the correct data. But why would you want to do something like that anyway? Typical reasons are to keep the kernel image size down and to avoid having to recompile the kernel whenever a new firmware version is released. Though I'll admit that kernel releases tend to be more frequent than new firmware versions :) Just my .02€ andrea
Re: [gentoo-user] Torrent with dynamics throttles
Hello, This set/unset is boring... Is there any way to dynamically set my torrent limits, so that when I'm using the internet it frees some bandwidth for me, and when I stop using it it goes back to full speed? You want QoS / traffic prioritisation, which is normally done at the router. http://wiki.openwrt.org/doc/howto/tc/tc.theory http://wiki.openwrt.org/doc/howto/tc The openwrt docs are fine, but apart from the technical details I think it's worth mentioning a couple of important points: QoS done on the user's end can only directly affect *upstream* bandwidth allocation. Having said that, if you're on an asymmetric connection (that is, if your available downstream bandwidth is significantly larger than your available upstream bandwidth) like most home users are, your bottleneck is probably in the upstream direction, so it can be worked on. In your scenario, upstream bandwidth allocation also influences what happens in the downstream direction because HTTP is TCP-based: a full tx queue in your modem will delay both new requests (thus increasing latency) and the ACKs your computer is sending in response to received data (which will effectively throttle the relevant connections, decreasing throughput). Therefore a QoS policy intended to favour web surfing over background bittorrent traffic should aim to minimize the delay incurred by connection setup packets (i.e. DNS and HTTP requests) and by the kind of ACKs usually associated with HTTP downloads (that is, TCP packets that have the ACK flag set and carry no payload -- and are therefore smaller than a certain size, let's say 80 bytes). All the other considerations apply, especially the one about capping the total outgoing bandwidth to something less than the actual available bandwidth so that the modem's tx queue stays empty. 
You can probably do this on your main computer, just by building the iptables kernel modules and by creating a rule (or a handful of them) so that BitTorrent traffic is delegated below everything else. It really depends on network topology. In general, queueing should be set up as close as possible to where the congestion is happening, which means on the router or on the pc to which a modem is connected. Remember that traffic shaping only works if *all* traffic passes through the shaper. Interventions on machines which only originate traffic should be limited to making life as easy as possible for the packet classifier, e.g. by setting up the bittorrent client to use a specific TOS value so that torrent traffic can be identified and assigned a low priority without using complex rules. HTH, andrea
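Such a policy can be sketched with tc roughly as follows (assumptions: the WAN-facing interface is eth0 and the usable upstream is about 1 Mbit/s; the "small packet" filter uses the classic total-length-below-64-bytes match, so adjust the rates and sizes to your line):

```shell
# Cap total egress below the real upstream so the modem queue stays empty,
# then give interactive traffic its own high-priority class.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 900kbit ceil 900kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600kbit ceil 900kbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300kbit ceil 900kbit prio 1

# DNS requests go to the fast class...
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 17 0xff match ip dport 53 0xffff flowid 1:10
# ...and so do small TCP packets (IP total length < 64: bare ACKs and
# connection-setup segments)
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip protocol 6 0xff match u16 0x0000 0xffc0 at 2 flowid 1:10
```

Everything not matched falls into 1:20, where the bulk bittorrent traffic ends up without needing per-application rules.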
Re: [gentoo-user] What's up with the hardened USE flag?
Hello, Everyone will get this. The culprit is a change in the pax-utils.eclass [1], which adds USE=hardened to every consumer of the eclass. That's IUSE, not USE. USE flags are not touched (at least on non-hardened systems), so the change is only picked up by emerge if you use the --newuse option. It changes nothing for non-hardened users but forces a rebuild of the affected packages. If you're positively sure that a package's USE flags did not change since it was last compiled, you can avoid recompiling by adding (or removing) the hardened flag in /var/db/pkg/category/package/IUSE. andrea
Re: [gentoo-user] What's up with the hardened USE flag?
That's IUSE, not USE. IUSE~=USE [1] Um, yes. That's what I wrote. [editing saved IUSE by hand] Please do not use such hacks I know it's a hack, and I was not recommending it as a general-purpose solution. use --changed-use to avoid a rebuild instead of --newuse like Neil suggested. This only works if you *permanently* switch to --changed-use, otherwise you'll just postpone things until the next time you use --newuse. Anyway the change in the eclass was reverted, so everything is fine again. Except for those who were lucky enough to do a sync+rebuild before the change was reverted. I'm not complaining, really, just stating things. andrea
Re: [gentoo-user] Recovering RAID1 after disk failure
Hello, However, the good drive started on sector 63 and the new drive wants to start on sector 2048. Fdisk won't let me create the partition table on the new drive as it is on the old drive. Recent versions of fdisk require partitions to begin on a 1MB boundary; among other things this guarantees that there are no alignment issues with 4k-sector drives. If you really need to use fdisk for this task you can start it in compatibility mode (i.e. fdisk -c=dos). The recommended way of preparing the new drive, though, is to simply use sfdisk to copy the partition table from the existing one: sfdisk -d /dev/<old drive> | sfdisk -L /dev/<new drive> andrea
Re: [gentoo-user] RFC: Implementing a spamfiltering frontend
On 21/05/11 06.13, Pandu Poluan wrote: Hello list! Due to the increase of spam/phishing emails received by my office, I decided to explore the idea of implementing a spamfiltering 'frontend' in front of my email server. Here's how I plan to do it: fetchmail (G) -- postfix (G) -- amavisd+spamassassin+database (G) -- postfix (G) -- current email back-end (WS) -- clients (W) Having a second postfix instance between amavisd and the email server is going to make things way more complicated. Amavisd is perfectly capable of speaking smtp/lmtp by itself, so unless you need to perform complex mail routing you could directly send the filtered mail to the windows server. Other than that, I have a very similar setup (fetchmail-postfix-amavis-cyrus-imap, where all hops but the first are done with lmtp) that has been working quite well for the past few years. HTH, andrea
Re: [gentoo-user] RFC: Implementing a spamfiltering frontend
so unless you need to perform complex mail routing you could directly send the filtered mail to the windows server. Hmm... interesting points. But can it still do the 2nd part of the equation, that is, perform outgoing routing? That's what I meant by complex mail routing :) The problem with having two passes through postfix in the mail routing chain is that you either run two separate postfix instances with independent configurations or you have to figure out a robust way to avoid loops. It can be done, it's just more difficult :) Of course I can have postfix skip amavisd for outgoing emails, but then I guess I'll lose amavisd's automated whitelisting (the so-called 'pen pal' feature). True. In my case that's not really a problem as we only have amavisd add a spam level header to messages; actually deleting spam is left to the clients, and most clients that support user-configurable spam policies and rulesets can do some sort of address whitelisting. andrea
Re: [gentoo-user] Re: Gentoo-based 'filer' / GentooFiler How-To?
Hello, Is it true that tgt will replace IET? Last time someone asked that question on the IET mailing list the answer was no. I can't find the relevant thread right now but I think the general idea was that IET concentrates on code stability (thus no inclusion in the kernel sources) and enterprise-oriented features, while STGT follows kernel development and supports lots of fancy things. Also, I think there is/was at least a partial overlap in the developers behind the two projects. If true, are there any links/howtos/tutorial on using tgt? And tuning it? I would start from the project home page, http://stgt.sourceforge.net/ HTH, andrea
Re: [gentoo-user] Re: checking whether the C compiler works... no Oooops !!
Well, this ain't good. Neither python-updater nor revdep-rebuild can complete. Either it is a missing package or some other error. Am I at the point where I have to reinstall? If you can't sort out the mess manually, try emerge -e system, then emerge -e world. You can also save some time by substituting the above with emerge -eb system followed by emerge -ek world (this will skip a second build of the system set by using binary packages from the first build. If you have FEATURES=buildpkg, you should delete the contents of your binary package directory before starting). Although, depending on your hardware and on the contents of your world file, just reinstalling the whole thing could be faster. andrea
Re: [gentoo-user] Re: checking whether the C compiler works... no Oooops !!
/usr/libexec/gcc/i686-pc-linux-gnu/4.4.4/cc1: error while loading shared libraries: libmpfr.so.1: cannot open shared object file Meaning, run revdep-rebuild :) Yeah, right. So revdep-rebuild does its thing, finds out that gcc is broken and tries to rebuild it with the broken gcc :) In this case the only way out involves backups or binary packages, or doing a binary build of the old mpfr version on another machine with CFLAGS compatible with those in use on the target. What's strange, however, is that AFAIK in order to avoid this kind of breakage system ebuilds such as mpfr never delete old library versions; they just print a warning saying that the old library has been kept around and should be manually deleted after running revdep-rebuild. On all my systems the transition to dev-libs/mpfr-3 was handled this way; note that I am still using portage 2.1, so this has nothing to do with the preserve-libs feature in 2.2. andrea
Re: [gentoo-user] php-cgi must be run as root?
The access permission to /usr/lib64/php5.3/bin/php-cgi is 755, so I think everyone can execute it. Then, what is the problem? Most probably, the nginx user cannot access the .php file you're trying to execute, either because of its permissions or because it cannot traverse one of its parent directories. andrea
[gentoo-user] How to send mail with attachment from a cron job?
Hi, what is the best way to send a mail with a pdf attachment from a cron job? The machine is already set up with qmail + vpopmail + dovecot. I really don't know where to look for instructions on how to use qmail-send from a script. Thank you -- TopperH http://topperh.ath.cx
Re: [gentoo-user] Re: How to send mail with attachment from a cron job? [SOLVED]
Yeah, that works great!! Thank you! On Thu, Feb 17, 2011 at 6:36 PM, Grant Edwards grant.b.edwa...@gmail.com wrote: On 2011-02-17, Andrea Momesso momesso.and...@gmail.com wrote: Hi, what is the best way to send a mail with a pdf attachment from a cron job? The machine is already set up with qmail + vpopmail + dovecot. I really don't know where to look for instructions on how to use qmail-send from a script. I'd use mutt: echo "Here's an attachment" | mutt -s "cron sending a pdf" -a whatever.pdf -- someb...@invalid.com -- Grant Edwards grant.b.edwa...@gmail.com Yow! I just heard the SEVENTIES were over!! And I was just getting in touch with my LEISURE SUIT!! -- TopperH http://topperh.ath.cx
Re: [gentoo-user] Re: Re: Re: Changing boot device with 2.6.36
Had to boot this morning 5 times, since the root device switched arbitrarily between sde3 and sdg3 Try disabling CONFIG_SCSI_SCAN_ASYNC (Asynchronous SCSI scanning under SCSI options). While it is not a solution, this might somewhat reduce the randomness you are experiencing. andrea
Re: [gentoo-user] Re: Re: Changing boot device with 2.6.36
root=LABEL=boot_device_label in the grub config work for you? I hoped so, but actually no. Grub complains at boot time about not finding the root device. Is this available in the grub-0.97 series at all? I am not sure about grub 2, but 0.97 knows nothing about filesystem labels (and neither does the kernel itself). Where the syntax above works, it's because the distribution provides an initramfs containing (among other things) a script which looks at the kernel command line and figures out the correct device node before attempting to mount the root filesystem. The only sensible way to handle the OP's problem is to have everything USB-related (or at least usb-storage) built as a module. Why would you want USB compiled into the kernel, anyway? Even if you were using a USB keyboard, you would not be able to do much if the boot process does not even reach the point where udev starts loading modules... andrea
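A sketch of the modular configuration suggested above (standard kernel Kconfig symbols; enable only the host controller drivers your board actually has):

```
# USB storage as modules: the USB disks only appear once udev loads the
# modules, well after the fixed disks have been enumerated
CONFIG_USB_SUPPORT=y
CONFIG_USB=m
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_STORAGE=m
```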
Re: [gentoo-user] Questions about SATA and hot plugging.
E-SATA != SATA Nah. They are *exactly* the same. Evidently someone realized that the original SATA connector is way too fragile to be regularly used to plug/unplug a cable by hand, so they engineered in some features which make it a bit more resilient. But apart from the shape of the connector there is really no difference. (Well, in the old days of SATA I not many chipsets supported hotplug; often boards came with a couple of eSATA ports wired to a separate chip with hotplug support. But on virtually all new boards all ports support hotplug.) andrea
Re: [gentoo-user] Questions about SATA and hot plugging.
The SATA spec allows for hot plugging, so technically yes ... My recollection of my understanding (multiple disclaimers) was that SATA *allowed* for SATA hot-plugging but didn't *mandate* it. A good summary of the hardware/driver situation wrt hotplugging can be found here: https://ata.wiki.kernel.org/index.php/SATA_hardware_features Short version: most controllers nowadays support hotplug, provided they are not operated in compatibility (IDE) mode. We have quite a number of software RAID setups with SATA disks in hot-swap backplanes; so far we have found that hotplug works quite reliably on Intel (ICH9R/ICH10R), AMD (SB700/SB800) and Silicon Image (sil3132) controllers. andrea
Re: [gentoo-user] FAN-Speed readout/control ???
Just build all the sensor drivers into the kernel, not modules but built in. A simpler way:
- make sure you have CONFIG_I2C_CHARDEV=y and CONFIG_I2C_HELPER_AUTO=y, and select the correct I2C hardware bus drivers for your platform (CONFIG_I2C_I801 for most recent Intel chipsets and CONFIG_I2C_PIIX4 for most recent AMD chipsets; reading the help text of the various drivers should point you in the right direction);
- emerge sys-apps/lm_sensors;
- run sensors-detect;
- enable the drivers for all the things sensors-detect finds. Hopefully you won't have any unsupported chips...
- you can then add lm_sensors to the default runlevel, so that it loads the correct modules during the boot process.
The final step is to configure the software you use to display the sensor readings. It is usually a matter of attaching the correct labels to the various inputs, and possibly tweaking the scaling factors so that the readings match those shown by the BIOS; as the details depend on the specific manufacturer and model of your board, this will usually be a trial and error process, although google might help you. The comments in /etc/sensors3.conf, which controls software using the libraries provided by lm_sensors, are also a useful source of information. cat /sys/devices/platform/ This will miss those sensors which do not appear as a platform device (e.g. the AMD K10 on-die temperature sensor, which is a PCI device). andrea
Re: [gentoo-user] Gigabyte and controlling fans
It's a Gigabyte 770T series mobo. It uses the it8720 chip. You can try writing directly to the pwm control inputs under the platform device node (i.e. /sys/devices/platform/it87.xxx/pwm*); these usually take 8-bit (0-255) values. E.g. to set pwm1 to a value of 127 you just do echo 127 > /sys/devices/platform/it87.xxx/pwm1 You should take a look at the it87 driver source to find out what the various parameters mean. Note that I don't have a board with a modern it87 chip (i.e. one which can do fan PWM control) at hand, so the device names might be different; these were taken from a board using a Winbond w83627dhg chip. Also, if you wish to control the fans manually you should probably turn off any kind of automatic fan control in the BIOS. andrea
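For reference, drivers following the standard hwmon sysfs conventions usually also require switching the output to manual mode before the raw value takes effect; a sketch with the same placeholder device name (treat the exact paths as assumptions for your board):

```shell
# hwmon convention for pwmN_enable:
#   0 = no control (fan at full speed), 1 = manual, 2+ = automatic modes
echo 1   > /sys/devices/platform/it87.xxx/pwm1_enable
echo 127 > /sys/devices/platform/it87.xxx/pwm1     # ~50% duty cycle
```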
Re: [gentoo-user] FAN-Speed readout/control ???
AMD K10 was already in and reports everything -- only the fan stuff was missing, which (normally) the ITE (it87) chip is used for. Current IT87xx chips provide fan, temperature and voltage readings. If you built the drivers as modules, are you sure everything (it87 and the relevant i2c drivers) is loaded? Check the kernel log for any error messages. The version of lm_sensors which is in portage reports the driver as 'to-be-written', but the svn version of lm_sensors seems to support it Support for a specific sensor chip is provided by the kernel driver, not by lm_sensors. sensors-detect only provides advice based on the situation at the time it was released; if a more recent revision says that the chip is supported, it means that a driver for the chip now exists, *not* that the driver provided by the current kernel supports it. However, according to this page the IT8720F chip seems to be supported starting from kernel 2.6.29, so kernel version is not your problem. http://www.lm-sensors.org/wiki/Devices andrea
Re: [gentoo-user] symlinks + rsync backup
rsync -aERPv --progress rsync://localhost/Windows-Vista-backup /mnt/rsync/vista/ Is there a specific reason you are using rsync in daemon mode on the sending side for a local transfer? If the symlinks look right on the mounted windows fs, I guess that rsync -aEPv /mnt/vista/. /mnt/rsync/vista/ would give the correct result. andrea
Re: [gentoo-user] migrating disks (from mounts to disklabels
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
A small caveat -- if this is an advanced format drive, be sure to use fdisk in sector mode (fdisk -uc) and start the first partition on a sector number which is a multiple of 8. (Yes, I know it says 512 bytes physical sector size above, but all five of my 1TB AF WD greens happily advertise a 512 byte physical sector size.) Ok now I was going to use same reiserfs no big deal unless I can use reiser4? good idea? discuss-caveats Assuming you care about your data, my advice is to drop reiserfs for everything but unimportant, easily replaceable stuff (like /usr/portage). Reiserfs undoubtedly has performance advantages in some areas, but its structure is more prone to damage and it has lousy fs utils. Ext4 might be slower at times but it is backed by a very well tested and maintained fsck. Reiser4? Not a chance in hell. Just replace /dev/sdb1 with LABEL=boot Small caveat: labels in /etc/fstab are ok (even for swap partitions, just create them with mkswap -L), but you must still use a device name in the root parameter on the kernel command line. Labels/UUIDs are not supported there. Another approach (less readable but arguably harder to break) is using UUID=. You can find these out with dumpe2fs. I guess something similar exists for reiserfs as well. or just ls -al /dev/disk/by-uuid dd if=/dev/sda of=/dev/sdb bs=32768 Definitely not. Sure, you can grow the fs to fill the partition afterwards (resize2fs or its reiser equivalent), but you will be wasting time and taking unnecessary risks. 
Boot from a livecd, create a new filesystem on the target, mount both filesystems and use rsync -aHPv /path/to/old/mountpoint/ /path/to/new/mountpoint/ or simply tar c /path/to/old/ | tar xvp /path/to/new rsync can show you the progress of the operation, which is nice, but it is not available on all live cds (for example, gentoo-minimal did not ship with it last time I checked). If you use rsync, pay special attention to the -H option as -a (archive mode) does not preserve hard links by default. HTH, andrea
Re: [gentoo-user] migrating disks (from mounts to disklabels
tar c /path/to/old/ | tar xvp /path/to/new Whoops... That should be tar c -C /path/to/old/ . | tar xvp -C /path/to/new/ Sorry, andrea
Re: [gentoo-user] gcc upgrade: Active gcc profile is invalid!
On 19/10/2010 19:45, Jarry wrote: Hi, I just tried to upgrade gcc (stable amd64, from 4.4.3-r2 to 4.4.4-r2) following the procedure recommended in the Gentoo GCC Upgrade Guide: emerge -uav gcc At the end of compilation, I got these strange messages:
in '/etc/env.d/gcc/' !
* Running 'fix_libtool_files.sh 4.4.3'
* Scanning libtool files for hardcoded gcc library paths...
cat: ld.so.conf.d/*.conf: No such file or directory
:0: assertion failed: (gcc -dumpversion) | getline NEWVER)
Original instance of package unmerged safely.
* Switching native-compiler to x86_64-pc-linux-gnu-4.4.4 ...
* gcc-config: Active gcc profile is invalid!
* Your gcc has a bug with GCC_SPECS.
* Please re-emerge gcc.
* http://bugs.gentoo.org/68395
Regenerating /etc/ld.so.cache... [ ok ]
* If you intend to use the gcc from the new profile in an already
* running shell, please remember to do:
* If you have issues with packages unable to locate libstdc++.la,
* then try running 'fix_libtool_files.sh' on the old gcc versions.
* You might want to review the GCC upgrade guide when moving between
* major versions (like 4.2 to 4.3):
* http://www.gentoo.org/doc/en/gcc-upgrading.xml
Regenerating /etc/ld.so.cache...
Recording sys-devel/gcc in world favorites file...
Auto-cleaning packages...
No outdated packages were found on your system.
* Regenerating GNU info directory index...
* Processed 7 info files.
What does that invalid profile mean, and how can I fix it? Well... it means that the active gcc profile is invalid :) You can have several gcc versions installed on your system for a given target arch; each version has an associated profile and you can choose the active one (i.e. which version is run when you type gcc) by using gcc-config. When you do a regular (i.e. -multislot) gcc upgrade, the active profile must be changed from the old version to the new one; the ebuild should take care of this for you but sometimes it chokes in the process. 
First you have some warnings about the old profile being broken -- which is expected, as you just uninstalled the old gcc version:
* gcc-config: Could not locate 'x86_64-pc-linux-gnu-4.4.3' [...]
* gcc-config: Active gcc profile is invalid!
gcc-config: error: could not run/locate 'gcc'
Then the new profile is correctly selected:
* Switching native-compiler to x86_64-pc-linux-gnu-4.4.4 ...
It's not clear to me whether the following warnings (let alone the whole your gcc is broken scary thing) are caused by the new profile being actually broken or -- more likely -- by the problems encountered when trying to do something with the old profile. Try a simple gcc -v in a new shell. If it works, you are fine. If it does not work, try again after doing gcc-config 1. If it still does not work, well, you're in for lots of fun. HTH, andrea
Re: [gentoo-user] Lilo vs. LVM
Quoting Konstantinos Agouros elw...@agouros.de: Hi, I have a VM with a gentoo guest. For testing I set it up with an LVM Volume Group that consisted of only one disk. Now I added a 2nd resized the FS but lilo stopped working. When I call it I get: # lilo device-mapper: table ioctl failed: No such device or address Fatal: device-mapper: dm_task_run(DM_DEVICE_TABLE) failed /boot is also on the LVM volume. Is maybe grub the answer to my problem? Sorry for the late reply, I am way back on this list reading :) I have the same exact problem, which I haven't been able to investigate yet; calling lilo gives me: device-mapper: table ioctl failed: No such device or address Fatal: device-mapper: dm_task_run(DM_DEVICE_TABLE) failed The funny thing is that I can work around that by chrooting from a livecd and invoking lilo from there, as described in the gentoo handbook. The other funny thing is that this problem happens randomly: for example last week I upgraded my kernel, ran lilo from my system and it worked. If I try today it doesn't, but it may work in a few days... I really don't know how to explain that; anyway I always keep a livecd around, just in case. Should you switch to grub? I don't know; for me it's not an option since I'm running a triple boot (osx, windows, and gentoo) on a macbook, and my gentoo partition is under lvm (root included, since I cannot create any more physical partitions) and last time I checked, grub-static (yes, it's a no-multilib profile) didn't support booting directly from an lvm partition. -- TopperH http://topperh.ath.cx This message was sent using IMP, the Internet Messaging Program.
Re: [gentoo-user] Does updating grub require mounting /boot first?
On 09/10/2010 18:30, Tanstaafl wrote: Hello, I'm ready to update grub, but just realized I'm not sure if I need to mount /boot before I emerge update it or not... /boot must be mounted, but the ebuild will mount it for you if it is not. Note however that the ebuild will *not* update the stages installed in the MBR and/or boot sector: after you're done emerging you have to do that either manually or by re-running grub-install. Depending on what changed between the two releases, failure to update the stages might result in an unbootable system. HTH, andrea
Re: [gentoo-user] Copying a file via ssh with no password, keeping the system safe
On 08/10/2010 0:28, Willie Wong wrote: You can't do that on a per-command basis. You'd be trying to control the authentication method accepted by sshd on B according to which command is run on A -- something sshd on B knows nothing about. That's partially false. See my response in this thread. Why? SSH can do a lot of things, but when used in the way we're discussing here it just sets up one or more authenticated and encrypted channels between two endpoints; ignoring the details, each one of these roughly acts as a couple of pipes, mutually connecting stdin/stdout of a pair of processes. When you ssh to a remote host, the remote host's sshd will run a command and attach the newly created process to the remote end of the connection; The name of the command is usually passed by the client, but can be overridden on the server either through the sshd configuration files (globally or per-client) or with a per-key 'command' option. Now, the remote sshd is never sent any information about what is connected to the local end of the pipe (which is not even known to ssh!), so there is no way to alter its behavior depending on that. IOW, nothing in the setup you and I proposed prevents the user from piping an arbitrary command into ssh (or even using a ssh-invoking wrapper such as scp or rsync) and getting successfully authenticated on the server. You are only guaranteed that the server will run tar in place of whatever remote command the client requests, so that the connection will break if the client side sends non-tar data. In my opinion this is quite different from [allowing] only one single command from a single cronjob to operate passwordless, but then I might just be splitting hairs. andrea
[gentoo-user] Copying a file via ssh with no password, keeping the system safe
Hi list, I need to set up a cron job to transfer a file every day from server A to server B. I'd like to do that via ssh and with no user assistance, completely automated. Setting up a public key would do the job, but then all the connections between the servers would be passwordless, so if server A gets compromised, server B is screwed too. Is there a way to allow only one single command from a single cronjob to operate passwordless, while keeping all the other connections secured by a password? Thank you in advance for your help. TopperH http://topperh.ath.cx
Re: [gentoo-user] Copying a file via ssh with no password, keeping the system safe
On 07/10/2010 18:45, Momesso Andrea wrote: Setting up a public key, would do the job, but then, all the connections between the servers would be passwordless, so if server A gets compromised, also server B is screwed. Well, not really... public key authentication works on a per-user basis, so all you get is that some user with a specific key can log in as some other user of B without typing a password. Of course, if you authorize a given key for logging in as r...@b, then what you said is true. But that is a problem with the specific setup. Is there a way to allow only one single command from a single cronjob to operate passwordless, while keeping all the other connections secured by a password? You can't do that on a per-command basis. You'd be trying to control the authentication method accepted by sshd on B according to which command is run on A -- something sshd on B knows nothing about. I would try the following way:
- Set up an unprivileged user on B -- let's call it foo -- which can only write to its own home directory, /home/foo.
- Add the public key you will be using (*) to f...@b's authorized_keys file. You should set the key's options to 'from="address_of_A",no-pty,command="/usr/bin/scp -t -- /home/foo"' (man sshd for details).
- chattr +i /home/foo/.ssh/authorized_keys, so that the file can only be changed by a superuser (you can't just chown the file to root as sshd is quite anal about the permissions of the authorized_keys file).
Now your cron job on A can do scp file f...@b:/home/foo without the need for entering a password; you just have to set up another cron job on B that picks up the file from /home/foo and puts it where it should go with the correct permissions, possibly after doing a sanity check on its contents. If you use something other than scp (e.g. rsync) you should also adjust the command option in the key options above. Note that the option refers to what is run on B, not on A. 
Also, it is *not* an authorization directive à la /etc/sudoers (i.e., it does not specify what commands the user is allowed to run): it simply overrides whichever command is requested by the client side of the ssh connection, so that, for example, the client cannot request a shell or do cat somefile. (*) You can either use the key of the user running the cron job on A, or generate a separate key which is only used for the copy operation. In this case, you will need to tell scp the location of the private key file with the -i option. HTH, andrea
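Putting the pieces together, the resulting authorized_keys entry on B would look roughly like this (a single line; the address and the key material are placeholders):

```
# /home/foo/.ssh/authorized_keys on B -- key options, then the public key
from="192.0.2.10",no-pty,no-port-forwarding,command="/usr/bin/scp -t -- /home/foo" ssh-rsa AAAAB3Nza... cron-copy@A
```

Adding no-port-forwarding on top of no-pty further narrows what a compromised A can do with this key.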
Re: [gentoo-user] Copying a file via ssh with no password, keeping the system safe
Quoting Andrea Conti a...@alyf.net: On 07/10/2010 18:45, Momesso Andrea wrote: Setting up a public key, would do the job, but then, all the connections between the servers would be passwordless, so if server A gets compromised, also server B is screwed. Well, not really... public key authentication works on a per-user basis, so all you get is that some user with a specific key can log in as some other user of B without typing a password. Of course, if you authorize a given key for logging in as r...@b, then what you said is true. But that is a problem with the specific setup. Is there a way to allow only one single command from a single cronjob to operate passwordless, while keeping all the other connections secured by a password? You can't do that on a per-command basis. You'd be trying to control the authentication method accepted by sshd on B according to which command is run on A -- something sshd on B knows nothing about. I would try the following way: - Set up an unprivileged user on B -- let's call it foo -- which can only write to its own home directory, /home/foo. - add the public key you will be using (*) to f...@b's authorized_keys file. You should set the key's options to 'pattern=address_of_A,no-pty,command=/usr/bin/scp -t -- /home/foo' (man sshd for details). - chattr +i /home/foo/.ssh/authorized_keys, so that the file can only be changed by a superuser (you can't just chown the file to root as sshd is quite anal about the permissions of the authorized_keys file) Now your cron job on A can do scp file f...@b:/home/foo without the need for entering a password; you just have to set up another cron job on B that picks up the file from /home/foo and puts it where it should go with the correct permissions, possibly after doing a sanity check on its contents. If you use something else than scp, (e.g. rsync) you should also adjust the command option in the key options above. Note that the option refers to what is run on B, not on A. 
Also, it is *not* an authorization directive à la /etc/sudoers (i.e., it does not specify which commands the user is allowed to run): it simply overrides whatever command is requested by the client side of the ssh connection, so that, for example, the client cannot request a shell or run cat somefile.

(*) You can either use the key of the user running the cron job on A, or generate a separate key which is only used for the copy operation. In the latter case, you will need to tell scp the location of the private key file with the -i option.

HTH, andrea

Thank you all for your fast replies, I think I'll use all of your suggestions:
- create an unprivileged user with no shell access, as Stroller and Andrea suggested
- I'll set up a passwordless key for this user, limited to a single command, as Willie suggested

This sounds pretty sane to me.

TopperH http://topperh.ath.cx This message was sent using IMP, the Internet Messaging Program.
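The recipe above can be sketched as follows. This is a sketch only: the IP address, key material, schedule, and file paths are placeholders of my own, not from the thread.

```shell
# On B: /home/foo/.ssh/authorized_keys -- a single line tying the key
# to A's address, forbidding a pty, and forcing the command to an
# scp sink writing into /home/foo (from=, no-pty, command= are
# standard authorized_keys options; address and key are placeholders):
#
#   from="192.0.2.10",no-pty,command="/usr/bin/scp -t -- /home/foo" ssh-rsa AAAA... cronkey

# Still on B: lock the file down so not even foo can edit it.
chattr +i /home/foo/.ssh/authorized_keys

# On A: the cron job (crontab entry, hypothetical paths), copying the
# file nightly with a dedicated key passed via -i:
#
#   0 3 * * * scp -i /home/cronuser/.ssh/cronkey /path/to/file foo@B:/home/foo
```

A second cron job on B would then move the file out of /home/foo after checking it, as described above.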
Re: [gentoo-user] Re: static-libs
DL libraries aren't really a different kind of library format (both static and shared libraries can be used as DL libraries);

Library archives (.a) and shared objects (.so) differ in several ways. Roughly speaking:

From a file format perspective, .a files are simple collections (man ar) of independently compiled objects, while .so files are complete libraries produced by the linker and contain additional information which tells the dynamic linker (ld.so) how to load and share the code.

More importantly, code which is intended to be used in shared objects must be compiled with specific options as position-independent code, whereas code in an archive need not be.

You can't link dynamically against a library archive, so all DL code on Linux must be compiled as a shared object, whether it's actually shared or not (think plugins).

As for the static USE flag: don't use it unless you have very good reasons to do so. It will result in a system with larger binaries, and in many cases you will *not* get truly statically-linked binaries (e.g. most of the things which link against glibc). As for dynamic linking breakage following upgrades, IMO portage and revdep-rebuild give good enough support for fixing that.

andrea
Re: [gentoo-user] Re: static-libs
So why shouldn't you be able to load an unshared lib (without PIC) dynamically? Sure there still would be some additional steps.

I am not talking specifically about PIC/non-PIC code here. PIC is relevant because when you're doing dynamic loading you generally cannot predict at what (virtual) address in the process space the loaded object will end up. That said, whether you can dynamically load non-PIC code depends on the specific architecture (e.g. on x86 you can have non-PIC code in shared objects, albeit at the price of the dynamic linker having to patch all relocations in the loaded object, while on amd64 you can't, because the ABI allows certain kinds of relocations in non-PIC code which cannot be handled this way).

What I am saying is that there is no way to dynamically load code from a .a file, at least not with the system tools, period. There are reasons for this: first, a .a is not a real library but a collection of independently compiled objects (building a .a does not entail any kind of linking: it's about the same as tar'ing .o files together). Moreover, the dynamic linker (ld.so) needs a certain amount of information about the contents of any object it has to load: this information is stored in specific ELF sections and is computed and written by the standard linker (ld) when it builds the shared object from its components.

andrea
Re: [gentoo-user] Shared libraries in Gentoo
I still try to understand the relation of shared libraries and dynamic libraries. I read that dynamic libraries are linked at runtime. I also read that you can dynamically link against a shared as well as against a normal library.

Pardon me, but in my opinion, if you're asking this kind of question, porting the Gentoo build system (or indeed any non-trivial application without upstream support) to a different and basically unsupported environment is way beyond what you can manage with your current level of technical expertise. In other words, this is not the kind of thing you can solve by iteratively trying to build, looking at what breaks, and applying a point fix.

I am not saying it can't be done, but porting is hard and requires in-depth knowledge of both the source and the target environment, plus a lot of development experience in both. You should begin with that, instead of diving head-first into what is anything but a simple task.

just my €0.02, andrea
Re: [gentoo-user] Pipe Lines - A really basic question
My generic question is: When I'm using a pipe line series of commands do I use up more/less space than doing things in sequence?

When you use a pipe you don't need the space to store intermediate results between the two programs. The pipe is backed by a small system-allocated RAM buffer (64k under Linux since kernel 2.6.11, AFAIK), and program execution is controlled according to the amount of data in the buffer.

Not having to save intermediate results generally means that you will need less disk space: this is especially true in the mysqldump-gzip example, as the uncompressed dump will not be written to disk at any stage.

Note however (this is the "it depends" part :) that piping does not affect whatever the programs might allocate or save internally: in your second example (which does not involve any disk writing in either case), sort needs to see the complete input before producing any output, so it will allocate enough memory to store all of it whether it is invoked alone or as part of a pipeline (in which case it will also stall the downstream pipeline section until the upstream pipe is closed).

HTH, andrea
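The two approaches from the thread, sketched side by side. I've used seq as a stand-in data producer (mysqldump isn't assumed to be available); the file names are placeholders:

```shell
# Two-step version: the full uncompressed intermediate file hits the
# disk before being compressed (seq stands in for mysqldump here).
seq 100000 > dump.txt
gzip dump.txt                     # leaves dump.txt.gz

# Piped version: the data flows straight into gzip through the small
# kernel pipe buffer; the uncompressed form never touches the disk.
seq 100000 | gzip > dump-piped.gz

# Both yield the same decompressed content (the .gz files themselves
# may differ byte-for-byte, since gzip stores the input file name).
zcat dump.txt.gz > a
zcat dump-piped.gz > b
cmp a b                           # exits 0: identical content
```

The sort caveat still applies either way: something like `seq 100000 | sort -n` buffers the entire input in sort's own memory regardless of the pipe.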