[gentoo-user] Simple repos.conf layout please
I'm running an update on my netbook, and I noticed the warning about SYNC in make.conf no longer being supported. I had a look at the documentation, and gave up. It appears to be written BY developers who have several overlays on their machines, FOR developers who have several overlays on their machines, and assumes significant knowledge in managing multiple overlays on a machine. For those of us who run bog-standard machines without all those bells and whistles, it comes as a shock to the system.

What is the quickest-n-dirtiest conversion to the new order? Can someone please post a bog-standard example... *WITHOUT* inheriting a gazillion eclasses and/or cascading multiple overlays?

The SYNC statement in my make.conf is...

SYNC=rsync://rsync.namerica.gentoo.org/gentoo-portage

My /etc/portage directory is...

aa1 portage # ls -l --group-directories-first
total 36
drwxr-xr-x 2 root root 4096 Mar 14 18:48 bin
drwxr-xr-x 2 root root 4096 Mar 14 18:48 postsync.d
drwxr-xr-x 2 root root 4096 May  1 15:20 repo.postsync.d
drwxr-xr-x 3 root root 4096 Sep  9  2014 savedconfig
-rw-r--r-- 1 root root 1891 Mar 26 23:59 make.conf
-rw-r--r-- 1 root root  627 Sep  9  2014 make.conf.catalyst
lrwxrwxrwx 1 root root   49 Sep  9  2014 make.profile -> ../../usr/portage/profiles/default/linux/x86/13.0
-rw-r--r-- 1 root root   87 Mar 24 21:54 package.keywords
-rw-r--r-- 1 root root   12 Mar 17 16:52 package.mask
-rw-r--r-- 1 root root  550 Mar 26 03:22 package.use

--
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications
Re: [gentoo-user] Simple repos.conf layout please
On 05/01/2015 09:40 PM, Walter Dnes wrote:

 What is the quickest-n-dirtiest conversion to the new order? Can someone please post a bog-standard example... *WITHOUT* inheriting a gazillion eclasses and/or cascading multiple overlays?

[snip the SYNC line and /etc/portage listing quoted in full in the original post]

My repos.conf is below.
[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://kwopper/portage
auto-sync = true

[alec]
location = /usr/local/portage
sync-type = git
sync-uri = git://github.com/trozamon/overlay.git
auto-sync = true

Naturally, you can replace the sync-uri in the gentoo section with rsync://rsync.namerica.gentoo.org/gentoo-portage, so yours would be:

[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://rsync.namerica.gentoo.org/gentoo-portage
auto-sync = true

Regards,
Alec
[gentoo-user] Re: Simple repos.conf layout please
On Fri, 1 May 2015 21:40:15 -0400 Walter Dnes waltd...@waltdnes.org wrote:

 What is the quickest-n-dirtiest conversion to the new order?

# mkdir /etc/portage/repos.conf
# cp /usr/share/portage/config/repos.conf /etc/portage/repos.conf/gentoo.conf

Remove SYNC=whatever from make.conf. Relax.
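The two steps above can be rehearsed end to end before touching the real system. This is only a sketch: the root directory is parameterized so it runs against a scratch tree, and the stand-in default config and make.conf contents are invented for the demonstration — only the paths come from the post.

```shell
#!/bin/sh
# Rehearse the repos.conf migration against a scratch root.
ROOT="${ROOT:-/tmp/repos-migration-demo}"
mkdir -p "$ROOT/etc/portage/repos.conf" "$ROOT/usr/share/portage/config"

# Stand-in for the default file portage ships (contents illustrative only).
cat > "$ROOT/usr/share/portage/config/repos.conf" <<'EOF'
[DEFAULT]
main-repo = gentoo

[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://rsync.gentoo.org/gentoo-portage
auto-sync = yes
EOF

# The migration from the post: copy the default into /etc/portage/repos.conf/.
cp "$ROOT/usr/share/portage/config/repos.conf" \
   "$ROOT/etc/portage/repos.conf/gentoo.conf"

# Then drop the obsolete SYNC= line from make.conf (demo make.conf here).
printf 'SYNC=rsync://rsync.namerica.gentoo.org/gentoo-portage\nUSE="bindist"\n' \
   > "$ROOT/etc/portage/make.conf"
sed -i '/^SYNC=/d' "$ROOT/etc/portage/make.conf"
```

Run once with the default ROOT and inspect the scratch tree; only when it looks right, repeat the two real commands against /.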
Re: [gentoo-user] Recommendations for WLAN-AP?
On 27.04.2015 at 20:37, waben...@gmail.com wrote:

 I'm searching for a new WLAN-AP that is fast, powerful and reliable. I can remember that there were some recommendations on this list some weeks/months ago, but I can't find them. Regards wabe

TP-Link Archer series? It can cope with 20 laptops at the same time easily and is not too expensive.
[gentoo-user] [SOLVED] Simple repos.conf layout please
Thanks guys, that was quick. I'll update the system files as soon as the world update finishes. Even with cross-compiling (the *HOST* is an almost-8-year-old Core Duo!), building gcc+glibc+seamonkey takes a while.

--
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications
Re: [gentoo-user] Simple repos.conf layout please
150501 Alec Ten Harmsel recommended:

 [gentoo]
 location = /usr/portage
 sync-type = rsync
 sync-uri = rsync://rsync.namerica.gentoo.org/gentoo-portage
 auto-sync = true

I don't have the final line: what is it for?

--
 ,,          SUPPORT     ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT   `-O--O---'   purslowatchassdotutorontodotca
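Alec's recommendation, laid out as a file with the one knob annotated. This is a sketch of my understanding of the option, not authoritative — see `man portage` for the full story:

```ini
# /etc/portage/repos.conf/gentoo.conf
[gentoo]
location = /usr/portage
sync-type = rsync
sync-uri = rsync://rsync.namerica.gentoo.org/gentoo-portage
# auto-sync: whether `emerge --sync` / `emaint sync --auto` updates this
# repository. Set to false, the repo is only synced when named explicitly,
# e.g. `emaint sync -r gentoo`. As far as I know, omitting it defaults to yes.
auto-sync = true
```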
[gentoo-user] Re: wicd not restarting after hibernation [solved]
On Sat, 25 Apr 2015 21:32:27 +0000 (UTC) james wirel...@tampabay.rr.com wrote:

 »Q« boxcars at gmx.net writes:
  On my amd64 laptop, on resume from hibernation, wicd almost always fails to restart. This problem goes back to when I set up the machine, over a year ago, but it seems to have gotten a lot worse in the last month or so.

 Shot in the dark: check (and google for) issues with obscure BIOS settings for your version of the BIOS. Sometimes a BIOS upgrade|downgrade can help, but my understanding is that hibernate on laptops is so varied that often some brands do not work even with Redmond products. Some marginally useful keywords: PXE and (U)EFI.

I was hoping someone would hand me a magic bullet so I wouldn't have to go fishing for that stuff, heh. Your advice didn't help me directly, but I think just seeing it in writing was enough to prompt me to some better thinking than I was doing before, so thanks!

I procrastinated about the search until my brain cell fired and I tried `hibernate --no-suspend` a few times and found that wicd didn't restart even when the machine didn't actually hibernate. Then with wicd running, I tried just `/etc/init.d/wicd restart` -- every time, it stopped but failed to restart. But if I waited a few seconds after stopping wicd, then it would start. It was a great relief to have isolated the problem from the hibernate process itself.

I've solved it by abandoning hibernate's RestartServices directive and instead stopping and starting wicd more explicitly with

OnSuspend 10 /etc/init.d/wicd stop
OnResume 10 /usr/local/bin/wicd-delayed-start

where wicd-delayed-start is a script with

/usr/bin/sleep 3
/etc/init.d/wicd start

I have no idea what's at the root of the apparent timing issue -- I figure it's that the networking gremlins refuse to be called back to work too soon once they've been told they can take a break.
In case it helps future searchers, I think the relevant hardware info is: Lenovo IdeaPad y510p with Intel Corporation Centrino Wireless-N 2230 (rev c4).
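A slightly more defensive variant of the wicd-delayed-start idea is to poll until the service actually reports stopped rather than sleeping a fixed 3 seconds. This sketch writes such a script to /tmp for inspection; the real location from the post is /usr/local/bin/wicd-delayed-start, and the SERVICE override exists only so the logic can be exercised without a live wicd install.

```shell
#!/bin/sh
# Write a sketch of a poll-then-start replacement for wicd-delayed-start.
cat > /tmp/wicd-delayed-start <<'EOF'
#!/bin/sh
# SERVICE is overridable for testing; the real path is from the post.
SERVICE="${SERVICE:-/etc/init.d/wicd}"
# Wait (up to ~5 s) for the service to finish stopping; a non-zero
# `status` exit is taken to mean "stopped".
for i in 1 2 3 4 5; do
    "$SERVICE" status >/dev/null 2>&1 || break
    sleep 1
done
exec "$SERVICE" start
EOF
chmod +x /tmp/wicd-delayed-start
```

Whether polling `status` this way is reliable depends on the init script; the fixed sleep from the post is the known-working fallback.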
[gentoo-user] Re: CFLAGs for kernel compilation
On 01/05/15 10:44, Andrew Savchenko wrote:

 On Fri, 1 May 2015 05:09:51 +0000 (UTC) Martin Vaeth wrote:
  Andrew Savchenko birc...@gentoo.org wrote:
   That's why the kernel makes sure that no floating point instructions sneak in via CFLAGS; you may see a lot of -mno-${instruction_set} flags when running make V=1.
  So it should be sufficient that the kernel does not use float or double, shouldn't it?
 No. Optimizer paths may be very unobvious, i.e. I'll not be surprised if under some conditions the vectorizer may use float instructions for int code.

The kernel uses -O2 and several -march variants (e.g. -march=core2). Several other options are used to prevent GCC from generating unsuitable code. Specifying another -march variant does not affect the optimizer, though; it only affects the code generator. If you don't modify the other CFLAGS and only change -march, you will not get FP instructions unless you use FP in the code.

Also, I'd be very interested to see *any* optimization that would somehow transform integer code to FP code (note that SIMD is not FP and is perfectly fine in the kernel). In fact, optimizers tend to transform FP into SIMD, at least on x86 (and other architectures that have fast SIMD instructions). If I inspect the generated assembly from GCC or Clang, I cannot find FP anywhere, even for code using float and double operations. They get converted to SIMD on modern CPUs (unless you specify a compiler flag that tells it to use the FPU, for example if you need 80-bit extended precision, which is supported by the x86 FPU).
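That last claim is easy to spot-check. Assuming gcc targeting x86-64 (other architectures will differ), compiling a trivial double-precision function at -O2 and grepping the assembly should show the SSE2 scalar instruction addsd rather than x87 opcodes such as faddp:

```shell
# Spot-check: does -O2 double-precision code come out as SSE (addsd)
# or as x87 FPU code (faddp)?  Assumes gcc on x86-64.
cat > /tmp/fp_check.c <<'EOF'
double add(double a, double b) { return a + b; }
EOF
gcc -O2 -S -o /tmp/fp_check.s /tmp/fp_check.c
grep -E 'addsd|faddp' /tmp/fp_check.s
```

On a typical x86-64 toolchain the grep shows an addsd on %xmm registers and no x87 instructions at all; adding -mfpmath=387 or -m32 changes the picture.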
[gentoo-user] Re: CFLAGs for kernel compilation
Andrew Savchenko bircoph at gentoo.org writes:

 I can hardly imagine that otherwise the compiler converts integer or pointer arithmetic into floating point arithmetic, or is this really the case for certain flags? If yes, why should these flags *ever* be useful? I mean: the context switching happens for non-kernel code as well, doesn't it?

First off, reading this thread, I cannot really tell what the intended use of the highly tuned kernels is to be. For almost all workstation and server purposes, what has been previously stated is mostly correct. If you really want to test these waters, do it on a system that is not in your critical path. You tune and experiment, you are going to bork your box. Water coolers on the CPUs are a good idea when taxing the FPU and other SIMD hardware on the CPU, imho. sys-power/powertop is your friend.

 Yes, context switching happens for all code and has its costs. But for userspace code context switching happens for many other reasons, e.g. on each syscall (userspace - kernelspace switching). Also some user applications may need high precision, or context switching pays off due to mass parallel data processing, e.g. SIMD instructions in scientific or multimedia applications.

Hear, hear -- I knew we had an LU expert in the crowd. Most scientific or highly parallelized number crunching does benefit from experimenting with settings and *profiling* the results; trace-cmd and kernelshark are in portage and are very useful for analysis of hardware timings, context switching and a myriad of other issues. Be careful, you can sink a lifetime into such efforts with little to show for them. The best thing is to read up on specific optimizations for specific codes as vetted by the specific hardware in your processors.
Tuning for one need will most likely degrade other types of performance; that is why, before you delve into these waters, you really need to learn about profiling both target (application) and kernel codes *BEFORE* randomly tuning the advanced numerical intricacies of your hardware resources. Start with memory and cgroups before worrying about the hardware inside your processors (CPU and GPU).

 But unless the special conditions mentioned above apply, fixed point is still faster in userspace; some ffmpeg codecs have both fixed and floating point implementations, you may compare them. Programming in fixed point is much harder, so most people avoid it unless they have a very good reason to use it. And don't forget that the kernel is performance critical, unlike most userspace applications.

Video (MPEG, H.264 and such) massively benefits from the enhanced matrix abilities of the SIMD hardware in your video card's GPU. These bare metal resources are being integrated into gcc-5.1+ for experimentation. But it is likely going to take a year or so before ordinary users of linux resources see these performance gains. I would encourage you to experiment, but *never on your main workstation*. I'm purchasing a new nvidia video card just to benchmark and tune some numerically intensive codes that use sci-libs/magma. Although this will be my fastest video card, it will sit in a box that is not used for visual eye candy (gaming, anime, ray tracing etc).

The mesos clustering codes (shark, storm, tachyon etc) and MPI codes are going to fundamentally change the numerical processing landscape for even small linux clusters. An excellent bit of code to get your feet wet is sys-apps/hwloc. More than the FPU, MPI {sys-cluster/openmpi} and other clustering codes are going to allow you to use the DDR(4|5) memory found in many video cards (GPUs) via *RDMA*. The world is rapidly changing, and many old fixed-point-integer folks do not see the tsunami that is just offshore.
Many computationally expensive codes have development projects to move to an in-memory [1] environment where HD resources are avoided as much as possible in a cluster environment. Clustered resources tuned for such things as a video rendering farm will have very different optimized kernels than your KDE (G*) workstation or web server. media-gfx/blender is another excellent collection of codes that benefits from all sorts of tuning on a special-purpose system. So do you really have a valid need to tune FPU performance due to numerically demanding applications? YMMV.

 Best regards, Andrew Savchenko

hth, James

[1] https://amplab.cs.berkeley.edu/
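As a toy illustration of the fixed-point representation the thread keeps mentioning (integer-only arithmetic standing in for fractional values, which is what kernel code is limited to), here is a 16.16 multiply in plain POSIX shell arithmetic. The format and scale are chosen for the example, not taken from any of the codecs discussed.

```shell
# 16.16 fixed point: the low 16 bits hold the fraction, so every value
# is stored as (real_value * 65536) in an ordinary integer.
SCALE=65536                        # 2^16

a=$((3 * SCALE + SCALE / 2))       # 3.5 in 16.16
b=$((2 * SCALE))                   # 2.0 in 16.16

# Multiplying two scaled values doubles the scale; divide once to fix it.
prod=$(( a * b / SCALE ))          # 7.0 in 16.16

int=$((prod / SCALE))
frac=$(( (prod % SCALE) * 10 / SCALE ))
echo "$int.$frac"
```

The price, as Andrew notes, is that every operation needs this manual bookkeeping about where the binary point sits, which is why most userspace code just uses float or double.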
Re: [gentoo-user] Re: CFLAGs for kernel compilation
On Fri, 1 May 2015 05:09:51 +0000 (UTC) Martin Vaeth wrote:

 Andrew Savchenko birc...@gentoo.org wrote:
  That's why the kernel makes sure that no floating point instructions sneak in via CFLAGS; you may see a lot of -mno-${instruction_set} flags when running make V=1.
 So it should be sufficient that the kernel does not use float or double, shouldn't it?

No. Optimizer paths may be very unobvious, i.e. I'll not be surprised if under some conditions the vectorizer may use float instructions for int code.

 I can hardly imagine that otherwise the compiler converts integer or pointer arithmetic into floating point arithmetic, or is this really the case for certain flags? If yes, why should these flags *ever* be useful? I mean: the context switching happens for non-kernel code as well, doesn't it?

Yes, context switching happens for all code and has its costs. But for userspace code context switching happens for many other reasons, e.g. on each syscall (userspace - kernelspace switching). Also some user applications may need high precision, or context switching pays off due to mass parallel data processing, e.g. SIMD instructions in scientific or multimedia applications.

But unless the special conditions mentioned above apply, fixed point is still faster in userspace; some ffmpeg codecs have both fixed and floating point implementations, you may compare them. Programming in fixed point is much harder, so most people avoid it unless they have a very good reason to use it. And don't forget that the kernel is performance critical, unlike most userspace applications.

Best regards,
Andrew Savchenko
Re: [gentoo-user] xen on new install reboots by itself
J. Roeleveld jo...@antarean.org writes:

 On Friday, April 24, 2015 10:24:06 PM lee wrote:
  J. Roeleveld jo...@antarean.org writes:
   On Thursday, April 23, 2015 11:02:24 PM lee wrote:
    hydra hydrapo...@gmail.com writes: You mean the documentation at Gentoo about Xen sucks, or the upstream documentation? What information are you missing from there? Maybe we can add the missing pieces for Xen being more accessible and easier to use, what do you think? :)
    I mean the documentation they have on their wiki. It's a confusing mess, referring to various versions with which things are being done differently.
   The problem here is the different implementations that exist:
   - Xen (install and configure yourself; toolset: 'xl' -- 'xm' is deprecated)
   - Citrix and XCP (pre-configured, install on a dedicated server; toolset: 'xcp')
   - OVM (Oracle's implementation; not sure which toolset they use)
  Maybe, maybe not; the documentation is so confusing that I can't really tell what it is talking about.
 Where did you look?

Everywhere I could find. The xen wiki is particularly messy.

  Could you add missing pieces about why power management --- as in frequency scaling --- doesn't work
 What doesn't work with this? The following seems quite detailed: http://wiki.xen.org/wiki/Xen_power_management

There was some command to query what frequencies the CPUs are running on, and it didn't give any output. Documentation seems to claim that xen can do power management automagically, yet there was no way to verify what it actually does.
 It works here:

 # xenpm get-cpufreq-para all
 cpu id             : 0
 affected_cpus      : 0
 cpuinfo frequency  : max [3101000] min [160] cur [160]
 scaling_driver     : acpi-cpufreq
 scaling_avail_gov  : userspace performance powersave ondemand
 current_governor   : ondemand
 ondemand specific  :
   sampling_rate    : max [1000] min [1] cur [2]
   up_threshold     : 80
 scaling_avail_freq : 3101000 310 290 270 250 230 210 190 170 *160
 scaling frequency  : max [3101000] min [160] cur [160]
 turbo mode         : enabled
 [snipped identical results for other CPU cores]

 Looks like it's actually working, and I never configured this.

It didn't work for me.

 And the commands listed there (for the hypervisor-based option) work on my server.

  And what to do about keeping the time in sync between all VMs, when you find out that this doesn't work as the documentation would have you think it does?
 In what way doesn't it work? The clocks are all synchronized and I don't need to use anything like 'ntpd'.

The clocks were off by quite a bit after a while, and I had to use ntp to get them in sync. Some documentation claims you don't need ntp or anything; some other documentation apparently tries to explain that keeping the clocks in sync cannot work unless the CPU(s) have some features having to do with clock consistency while they are in sleep states; and yet other documentation seems to say that using ntp cannot work because xen interferes with it. In the end, it was recommended to me to use ntp, which I found to work. There was no way to figure out what xen was actually doing or not doing about this, and nobody seemed to know how to keep the clocks in sync other than by using ntp, which appears to be deprecated.

 Which version did you try? I remember having had clock issues requiring ntp when I first started using Xen over 10 years ago.

The version in Debian --- I don't remember which one it was.
Debian was the only distribution I could get it to work with at all, and the VMs were also Debian, because there isn't a good way to install an operating system in a VM.

--
Again we must be afraid of speaking of daemons for fear that daemons might swallow us. Finally, this fear has become reasonable.
Re: [gentoo-user] xen on new install reboots by itself
J. Roeleveld jo...@antarean.org writes:

 On Friday, April 24, 2015 10:23:01 PM lee wrote:
  J. Roeleveld jo...@antarean.org writes:
   On Thursday, April 23, 2015 11:03:53 PM lee wrote:
    Do you have anything that you find insufficiently documented or is too difficult?
   Sure, lots.
  Have you contacted the Xen project with this?

I've been asking questions on mailing lists. What do you expect? I could tell them "your documentation sucks" and they might say "go ahead and improve it then". I tried to improve it the little bit I could; it's on the wiki, if it's still there.

   Containers. Chroots don't have much when it comes to isolation.
  What exactly are the issues with containers? Ppl seem to work on them and to manage to make them more secure over time.
 Lack of clear documentation on how to use them. All the examples online refer to systemd-only commands.

True, there isn't much, if any, clear documentation. I followed the Gentoo wiki and it's working fine, though.

  Virtualbox is nice for a quick test. I wouldn't use it for production.
 Why not?
 Several reasons:
 1) I wouldn't trust a desktop application for a server.

So that's a gut feeling?

 No, a combination of experience and common sense. A desktop application dies when the desktop dies.

You cannot run it from the command line? It only runs in an X session? If that is so, I'm going to need something else.

 2) The overhead from Virtualbox is quite high (still better than VMWare's desktop versions, though).

Overhead in which way? I haven't done much with virtualbox yet and merely found it rather easy to use, very useful, and to just work fine.

 Virtualbox is easy when all you want is to quickly run a VM for a quick test. It isn't designed to run multiple VMs with maximum performance. In my experience I get on average 80% of the performance inside a Virtualbox VM when compared to running them on the machine directly. With Xen, I can get 95%.
 (This is using normal workloads; let's not talk about 3D inside a VM.)

Someone told me that you may find xen reducing the performance by up to 40%. Compared to containers, the overhead xen requires is enormous.

 Hardly comparable. Containers run inside the same kernel. With Xen, or any other virtualisation technology, you run a full OS.

How is that not comparable? You don't need to run a full OS, and you're not stuck with fixed memory assignments without even the ability to overcommit when you use containers. With xen, you're stuck with what you initially assigned, whether your VM currently uses it or not. If my mail server was a xen VM, I'd have assigned 2GB to it; as a container, it uses less than one. If the machine I'm working on was a xen VM, I'd have assigned at least 16GB to it; as the host of the container, it costs me nothing. So obviously, the overhead required by xen is enormous.

And it doesn't give you a stable system to run VMs on, because dom0 is already virtualized itself.

 Why doesn't it provide a stable system? The dom0 has 1 task and 1 task only: manage the VMs and the resources provided to the VMs. That part can be made extremely stable.

It's already virtualized itself. With containers, I have a non-virtualized system as usual, as stable as they are. A container is like just another service I can start or stop, and I can access it easily because it simply resides under /etc/lxc, while I can use the host for whatever else I'm doing. With xen, I have a virtualized system to begin with, which is wasted because its only purpose is to provide a way to maintain other VMs. I can't fully use any of these VMs, because for what I'm doing, I'd have to pass through my NVIDIA card to one of them. IIUC, I wouldn't even be able to log in to the host because it won't have a graphics card -- provided that I actually could pass the graphics card through, which appears to be pretty much impossible.
The VMs would reside on LVM volumes and be hardly accessible --- though now I'd use ZFS subvolumes, making that as easy as with containers. Power management wouldn't work. The xen documentation sucks. Everything would be difficult and troublesome. It's extremely difficult to install a VM --- I never figured out how to actually do that. I'd be hugely wasting resources. I'd never have the feeling that there is a stable platform to work with, and it's not something I would want to have to maintain.

 My Lab machine (which only runs VMs for testing and development) currently has an uptime of over a year. In that time I've had VMs crashing because of bad code inside the VM. Not noticing any issues there, neither with stability nor with performance. My only interaction with the dom0 there is to create/destroy/start/stop/... VMs.

How can you use it for testing when it's so ridiculously difficult to install a VM? How do you do updates without rebooting when a new kernel version comes along? How do you adjust resource allocations depending on changing