Re: modular-xorg...my latest
On Sat, 16 Mar 2019, Bob Bernstein wrote: Included is my latest 'make' in modular-xorg. Thank you for your hard work, Bob. I have been an on-again, off-again user of the modular build for years. I started back before Nouveau sorted out a lot of nvidia issues, because it worked better for me in that era. If you have time, could you give a quick summary of why someone might consider using modular Xorg over the Xorg dist that ships with a NetBSD release? For me, newer driver support is the big one. Are there other advantages, reasons, or notable options we should be aware of? Thanks again for your years of hard work, Bob. It's definitely benefited me and others. Thanks, Swift
fs-uae and e-uae bork the mouse in X11
Either of these Amiga emulators will try to capture the mouse. Fine, good, that's normal. However, once you exit the emulator the mouse will no longer work in X11. I have to unplug and replug the mouse to get it to "wake up". Anyone know why? Is this a bug or a feature I don't understand? -Swift
USB & PCI Audio Device Problems
I have several NetBSD hosts running 7.1 that I use as workstations. Sound has been a big pain. USB sound never works (anywhere, under any circumstances, on any device). Anything that tries to play sound to a USB audio device says "Audio device got stuck!". I would think it's related to the USB side of things, but I've seen that same error on PCI devices, too. On one machine I've tried four (supposedly supported) PCI sound cards without any luck. Each one either locks up the machine during boot, or says "Audio device got stuck!" when attempting to play audio. However, each one also works fine in Windows on the same hardware (swapping drives and booting Windows 7 works fine, and sound works). So, I have a hard time believing it's a hardware issue. However, machines with an HD audio controller (Intel) tend to work fine. Since I'm having across-the-board failure with anything other than Intel sound, is there something I'm likely overlooking or doing wrong? I've done things like compile custom kernels with only *one* sound card supported, but it doesn't help (nor does a full debug kernel, et al). Any ideas on what to do? Here are the specific devices that fail:

USB (broken on amd64 and i386):
- Plantronics USB Headset
- Sound Blaster Play! v2 USB
- Sound BlasterAxx v1
- Various cheap Crystal Sound USB 1.x dongles

PCI sound cards:
- Sound Blaster X-Fi Platinum PCI
- ESS Maestro 3 PCI
- Sound Blaster Live! PCI
- Ensoniq AudioPCI

Anyone have one of those that works? Anyone have anything USB or PCI that works? My basic issue is that I've got two machines that won't produce sound out of anything besides the mobo's onboard sound (Intel HD audio). Thanks, Swift
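For reference, the "only *one* sound card" kernel was a copy of GENERIC with the competing audio drivers commented out, roughly like the excerpt below (a sketch, not a tested config; the driver names are my reading of the man pages, so double-check them against your GENERIC):

```
# Excerpt from a custom kernel config derived from GENERIC (sketch only;
# driver names are assumptions from the NetBSD man pages)
#eap*    at pci? dev ? function ?   # Ensoniq AudioPCI - disabled
#emuxki* at pci? dev ? function ?   # Sound Blaster Live! - disabled
auich*   at pci? dev ? function ?   # keep only the device under test
audio*   at audiobus?
```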
Re: Pulseaudio & browsers - anyone got something ELSE working?
On Sun, 13 Aug 2017, Chavdar Ivanov wrote: My firefox-54.0 was built with the default 'oss' option, sound is working well, I have never noticed any problems. I'll try that. I didn't realize there was such a thing. That should work well for my purposes. Thanks! -Swift
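For anyone finding this thread later: that pkgsrc build option would be selected in /etc/mk.conf, something like the fragment below (the option name is taken from Chavdar's post above; confirm with "make show-options" in the firefox package directory before relying on it):

```
# /etc/mk.conf - ask pkgsrc to build firefox with OSS audio output
# (option name per the post above; verify with "make show-options")
PKG_OPTIONS.firefox+=   oss
```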
Pulseaudio & browsers - anyone got something ELSE working?
So, just a few years ago, we had to have flash (a security nightmare) set up and working to do things like play a youtube video. That sucked because you never knew when someone was going to bend flash over and 0wn your system. My best defense was click-to-play plugins so flash only loaded when I needed it. That worked, at least. It didn't play nice with the sound device and often wouldn't release it until I closed the browser, but it was serviceable. Fast forward a few years when sites started to pull their head out of their flash and embrace HTML5 and the in-browser streaming video standards that had only been sitting there a decade or so. I'm thinking "YEA!" no more flash, right? Plus, Gecko browsers are open source, so they ought to embrace more than one sound output method, right? ESD, Jack1, Jack2, aRts, OSS, ALSA, NAS, etc. WRONG. Well, I was half right. Sites like Youtube seem to work in just about all our Mozilla-based browsers (Seamonkey, Firefox*). However, there seems to be NO CHOICE about what kind of sound device to output to. It's Pulseaudio or nothing, I guess. Well, my opinion is that Pulseaudio is a miserable failure at everything it does, since that's been my experience. I've got three NetBSD systems it fails to work on altogether, or has severe drawbacks on (like it won't release the sound device - EVER, or it won't work unless it's run as root, despite 666 perms on the audio devs). Plus, nobody seems to want to *fix* Pulseaudio. Anyone who complains is an idiot, according to Poettering or his ilk. Is there ANY way to get sound via a browser without Pulseaudio? Today I resort to downloading with youtube-dl or something similar and playing the resulting file with mplayer, because at least that gives me enough flexibility to choose my sound output and not break it (which Pulseaudio does OFTEN by grabbing the sound device, refusing to release it, and being unkillable even with kill -9 - must reboot after that).
Is there any other option besides taking more abuse from Pulseaudio or doing the plugin-download-play-from-CLI option? I'm using the amd64 and i386 ports. Is there a version in the panoply of firefox versions that has anything-other-than-pulseaudio as an option for sound output and can still do HTML5 video? Has anyone found a formula that works and doesn't ruin the sound device until the end of time just because I played one HTML5 video? I'm even using SeamlessRDP to run browsers from Windows boxes. Ugh. Bleh. Puh. But at least I know 'rdesktop' will release the sound device! -Swift
question on tuning devices with usbhidctl
I fire up usbhidaction and it seems to work. Then, as soon as I try to use the device, usbhidaction dies with this "device busy" error. What am I doing wrong? The actions get executed (once), but then the whole thing collapses. I keep yelling "YEAH! It's BUSY! I'm the one using it!" but that doesn't seem to help, either. :-)

$ usbhidaction -d -c /home/aliver/griffin_powermate.conf -f 7
usbhidaction: /dev/uhid7: Device busy

I'm trying to get my Griffin Powermate knob working. Some extra information follows. This is what I see when I use "usbhidctl -d 7 -lva":

Consumer:Consumer_Control.Button:Button_1=0
Consumer:Consumer_Control.Generic_Desktop:Rx=1
Consumer:Consumer_Control.Consumer:Consumer_Control=0
Consumer:Consumer_Control.Consumer:Consumer_Control=79
Consumer:Consumer_Control.Consumer:Consumer_Control=16
Consumer:Consumer_Control.Consumer:Consumer_Control=0

The top two values change when I manipulate the knob. The bottom four duplicate values don't. Here is what I've set up as a configuration file for usbhidaction:

Consumer:Consumer_Control.Generic_Desktop:Rx 1 mixerctl -n -w outputs.master++
Consumer:Consumer_Control.Generic_Desktop:Rx -1 mixerctl -n -w outputs.master--
Consumer:Consumer_Control.Button:Button_1 0 mixerctl -n -w outputs.mute++

I'm thinking that usbhidaction can't take the state changes as fast as the knob can pump them out. However, that's just a pure hunch. -Swift
SDIO vs ATA vs SCSI
I've been trying to figure out the relationship of SDIO to ATA. The reason is to find more ATA-compatible hardware for DEC Alpha machines. Lots of them had ATA interfaces for CDROMs or system drives. However, most appear to support IDE rather than EIDE. I'm basing that on the lack of a keyed connector with a missing pin. Many SD2IDE devices adapt SD cards to an IDE bus. However, I've noticed that systems with ACER M1543C controllers, like the ones in the Alphastation DS10, will throw CAM errors and generally act out, slow down, and misbehave (under Tru64, Linux, or BSD - I figure it's a hardware thing). However, systems with the CMD PCI0646 controller, like the Alphastation 600a, have no trouble whatsoever with SD2IDE devices. So, can anyone speculate on what is going on? Here's my cockamamie theory. I think SDIO is only a subset of the ATA standard and that those little $10 gizmos are basically only changing the electrical interface. They aren't like a SCSI2SD that actually has to "translate" the SDIO into SCSI CDBs. My guess is that the ACER controller is asking the SD card ATA questions it doesn't know the answer to, so to speak. Does anyone who has worked with SDIO know what the relationship between SDIO and ATA is? Is it the same for CF cards? I ask because CF2IDE devices work even worse - they show up (wrongly) as WORM or CDROM drives on my Alphas. I've skimmed over this document: https://www.sdcard.org/developers/overview/sdio/sdio_spec/Simplified_SDIO_Card_Spec.pdf However, it's difficult to get a "summary" or intuit why some SD2IDE & CF2IDE devices work or don't work. My fundamental question is this: Is SDIO a superset of ATA, or is ATA a superset of SDIO? Something else? I'm also scratching my head wondering whether CAM handles SDIO directly or deals with it via ATA (i.e., in kernel land)? -Swift
Re: pcmcia scsi
On Mon, 31 Jul 2017, Björn Johannesson wrote: pcmcia1: CIS version PCMCIA 2.0 or 2.1 pcmcia1: CIS info: Adaptec, Inc., APA-1460 SCSI Host Adapter, Version 0.01 I have one of these, too. I got it to use with my Amiga 1200. I wonder if NetBSD supports it on that platform. If someone has experience, let me know. I'll probably just try and find out! -Swift
Re: bnx(4) thread consumes 100%+ of CPU.
On Thu, 13 Jul 2017, John D. Baker wrote: Driver issue? Hardware failing? I could switch to its second interface, "bnx1" and see if it does the same thing. I may be mis-remembering, but I think I've seen this kind of kernel-threads-going-nuts behavior with a Broadcom interface before. I hate Broadcom NICs (old war wounds), so I immediately blamed the NIC and switched it out with an Intel server NIC. However, perhaps I should have filed a PR. -Swift
Re: distcc for pkgsrc issue
On Fri, 7 Jul 2017, John Halfpenny wrote: Just an update for posterity that I resolved this issue. Interesting. The wrapper script idea reminds me of another question about distcc and friends. I've noticed that some packages complain with great aggravation about my use of "make -jX" where X=CPUs. The question is: since folks make heavy use of distcc, does it have the same limitations? When you hit a compilation error related to a parallel compiler run, is that how the "dont-use-parallel-make" warnings get there, or are the mechanics of 'make -j8' and distcc so different that errors in one don't mean problems with the other? The reason I'm asking is that occasionally I'd like to use distcc to get a few of my faster NetBSD boxes compiling things like QT or Firefox in something less than 60 minutes. It's just that I've never set it up because of the many failures I've had trying to use make -j ... -Swift
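For what it's worth, pkgsrc does have first-class hooks for both knobs; a minimal mk.conf sketch follows (job count and host names are placeholders, and I haven't battle-tested this setup):

```
# /etc/mk.conf - sketch of chaining distcc into pkgsrc builds
MAKE_JOBS=          8               # parallel make; some packages object
PKGSRC_COMPILER=    distcc gcc      # pseudo-compiler chain, per the pkgsrc guide
# DISTCC_HOSTS is distcc's own knob and is normally set in the build
# user's environment, e.g.:
#   DISTCC_HOSTS="fastbox1 fastbox2 localhost"
```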
Re: Can NetBSD cgd be used for encrypted backup?
On Mon, 19 Jun 2017, Mayuresh wrote: Just curious. How does iscsi compare with NFS? Guess even NFS has a notion of block size, that would help optimize io. Sorry for butting in, but I'd point out that NFS is file-based and layers on top of an existing filesystem. So, the block size of the underlying file system is going to determine the block size. There is also the consideration of network parameters such as send and receive buffer sizes and several others that matter quite a bit (depending on the layer-4 protocol in use and the version of NFS). iSCSI only provides block devices; it can't do file-based I/O natively without a filesystem on top of it. My experience with iSCSI has overall been quite poor. I once did a long whitepaper on iSCSI vs AoE. Being a big fan of SCSI (and not a huge fan of ATA) I was hoping & expecting iSCSI was going to be better than it turned out. However, the experience turned out completely opposite. Not only did AoE stomp it in every performance test I tried, it also scaled better, recovered from failures better, and so forth. iSCSI also has a million dials and settings for mostly useless crap few are going to fiddle with. It feels like some kind of top-heavy machination designed by a committee that never has to use network block storage in real life. I've also seen large-scale iSCSI deployments be fraught with pain and peril simply because network engineers can't be trusted to leave the VLANs it runs on alone and can't be bothered to put it on discrete switches. Of course, AoE runs on top of layer 2 and iSCSI is a layer-5 protocol. The extra layers underneath iSCSI make it routable, but destroy performance. With AoE you don't have to tune TCP/IP (but it's non-routable). I also remember hearing about HyperSCSI, which is supposed to be a hybrid strategy that uses SCSI CDBs over Ethernet frames like AoE does. My guess is, based on AoE's good showing, that approach would rock if they got it off the ground.
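On the NFS block-size point: the client-side I/O transfer sizes are per-mount options, so they're cheap to experiment with. A sketch (values illustrative, not recommendations; tune against your network and server):

```
# NFS read/write transfer sizes are set per-mount with mount_nfs(8);
# illustrative values - measure before and after changing them:
mount_nfs -r 32768 -w 32768 server:/export /mnt
```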
I guess I should also point out that iSCSI is widely supported across a larger number of operating systems than AoE and has much more vendor acceptance since AoE is seen as the domain of the CORAID (or whatever they are called now) folks. Anyhow, based on my bad experience, I wouldn't recommend iSCSI for anyone unless they simply had no other choice. I have seen it be workable, especially with dedicated hardware (Equallogic gear seems to work okay, and it's got NetBSD bits in there too!), but overall, I'd run screaming away. iSCSI does give a block device to use with CGD, though. I bet it would work fine with CGD, despite being kind of a poor idea in general (iSCSI not CGD). -Swift Just my opinions here. If you use iSCSI and love it, YMMV, and more power to you.
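To be concrete about the CGD part: once the iSCSI initiator exposes the target as a disk (say, sd0), layering cgd on top is the usual cgdconfig dance. A sketch only - device names and parameters are placeholders; see cgdconfig(8) and the NetBSD Guide's cgd chapter before trusting data to it:

```
# Sketch: cgd on top of an iSCSI-backed disk (names/params are placeholders)
cgdconfig -g -o /etc/cgd/sd0e aes-cbc 256   # generate a params file, once
cgdconfig cgd0 /dev/sd0e /etc/cgd/sd0e      # configure; prompts for passphrase
disklabel -e -I cgd0                        # label the new pseudo-disk
newfs /dev/rcgd0a                           # make a filesystem, then mount it
```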
Re: Problems with wsconsole
On Tue, 25 Apr 2017, co...@sdf.org wrote: [...] the new graphical acceleration drivers. Ohh, shiny! Is there any information about these new or newly improved drivers? Where did this code come from; is it a port from Linux or a NetBSD specific enhancement? I'm just curious. -Swift
Re: old i386 3.1 packages or upgrading with KVM
On Tue, 14 Mar 2017, Jeremy C. Reed wrote: Does anyone know where I can find old 3.1 packages for i386? Damn, that could be tough. I just looked and my oldest go back to 4.x only. I burn a DVD or write a tape with the pkg_tarup versions of all my packages before I upgrade. However, I didn't have my ***t together back then, I guess. I am working on an old system that the hosting provider only has a Windows-based KVM. I am concerned upgrading it headless. Whoa. Yeah, that would make me nervous, too. Hopefully you at least have console access (I mean to the guest's console via KVM). I know our upgrade docs have tips of upgrade issues, and I could attempt upgrading 3 to 4, 4 to 5, 5 to 6, 6 to 7. But I'd rather not spend days on this. Anyone have any suggestions? Uhhh, yeah, if I had to put my own money down, I wouldn't bet on that being successful. Also, as you say, it's going to be slow. It's going to be REALLY slow on an unaccelerated KVM instance (IIRC, Windows has no acceleration). Maybe easiest is to just install a new system and migrate data and configs over to it. If it were me, I'd try that first. You could use a copy/snapshot to do it and thus if it went horribly wrong/ugly, you could always start over. I guess for me it'd all depend on how hairy the applications were. If it was just static apache or a BAMP stack, then no sweat. I'd upgrade. If it's a situation where I'd have to keep library linkages to ancient libs because of dynamic binaries in the app, I'd sit down and have a good cry, then cross my fingers and try a clobber-install of 7.x. If that didn't work, I'd do the upgrade-script shuffle you are trying to avoid. -Swift
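The DVD/tape snapshot trick I mentioned is just a loop over pkg_tarup; roughly the following (a sketch - pkg_tarup comes from pkgtools/pkg_tarup, and where it drops the .tgz files may depend on your version, so check before relying on it):

```
# Sketch of the pre-upgrade package snapshot mentioned above.
# Some pkg_tarup versions honor PKGREPOSITORY in the environment;
# verify where yours writes the .tgz files.
mkdir -p /var/backup/pkgs
cd /var/backup/pkgs || exit 1
for pkg in $(pkg_info -e '*'); do
    pkg_tarup "$pkg"
done
```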
Re: x86_64 hardware recommendations/warnings?
On Mon, 13 Mar 2017, John D. Baker wrote: I find myself in the position of recommending components for a friend to build a more up-to-date machine on which to run NetBSD. I wish you luck. I've been using NetBSD since 1996 and, though I'm a huge fan, I have never found a great method to find "known good" *systems* with NetBSD. You can easily find known-good components by looking at various man pages for the drivers. I.e., I look up the mobo, then look at each problematic chipset and go digging to see if drivers are there. When I say problematic, I mean network, USB, and sound chipsets. The other method I use is component based. I'll check pcidevs, usbdevs, etc., and then look up the PCI IDs and find out if they are supported that way. This is sometimes tough because you don't always know what a given bit of kit will show up as (PCI ID). It's also easy to get it wrong when they release a "v2" of the same card. BeOS used to have this site (way way back in the day) called the BeOS Hardware Compatibility Matrix. Folks would contribute short reviews for working hardware. I found it invaluable. It's weird: you get *great* hardware info from all the non-x86 ports, but then again, they have a lot smaller surface area to cover. So, it's no surprise. Intel or AMD Radeon graphics Lately my experience has been that if your Intel graphics are supported in Linux KMS 1.3 code they will work great in NetBSD 7.x. I have a number of Radeon cards (newest is R9 Nano) and most do work, also. However, I care mostly about 2D performance and they can't beat my onboard video in gtkperf, so I only use those in gaming machines now. My i7 with built-in Intel graphics does all tests in 1.8 seconds. Drop the Radeon R9 Nano in and it goes up to 4.5 seconds. At this stage Intel vs. AMD is not so important as knowing what is supported and will work. I avoid motherboards with AMD chipsets. It's mostly just superstition, though. I had a helluva time with USB on those under NetBSD 6.
They basically never worked well enough for everyday on my AMD hardware. After having the same experience on every other AMD mobo after that point, I basically gave up since the Intel boards would "just work". In the back of my mind always is the problem: "new but not TOO new". That's spot on. As UEFI support is only now taking shape in -current, is anyone aware of boards which don't support "Legacy Boot" or "Compatibility Support Mode"? Well, once they do it on HP DL & Dell servers, you can bet it'll start happening everywhere. So far, they still support "Legacy" mode in their latest generations (G9 etc..). What is known about whether the intel DRM/KMS support in Netbsd-7.1 will work with these? The associated driver for Xorg? The KMS driver version is what you are looking for. NetBSD 7 uses KMS 1.3, unless I'm mistaken. If they work at all, how do they fare when playing video? Fantastic. XVideo support seems fully baked and works quite well. On the flipside getting things like vdpau to work may be more challenging. I dunno. What is known about the radeondrmkms support for these parts? The associated driver for Xorg? It's the same. It's all tied to the KMS version. If it turns out that the on-board video options are not suitable, the obvious solution is some sort of Radeon-based add-in video card. That or an nVidia card. Some of those work with the Nouveau driver. I have an old GT9800 I got used that works with that driver even under NetBSD 6.x. The Radeon 9800 is also another can't miss card but it's old, has slow 3D, fairly low memory, etc... However, that card works with the regular old X11 drivers we've had for eons. It really depends on if you want to get good 3D performance. Given the above, are there recommendations for things that are new (available) but not TOO new (unsupported)? Man, it really depends on the type of things they will be doing with the machine. 
What I do before I buy a new rig is to load NetBSD onto a USB drive (as in, do a full install with X11). Then I go to my local computer store (Microcenter is the best in my area, unfortunately) and I tell the sales guy exactly what I want to do and why, and explain it won't hurt his machines. Then I start testing all the machines I'm interested in. I just boot them up, check the sound device (make sure it's not one of those PoS's that want to route all the sound down the HDMI interface by default), then make sure it plays clean. Then check the NIC, wifi, and finally give the graphics a try with an intrepid "startx" to see what happens. Then you'll know for sure the hardware works before you even go through the trouble of buying & returning gear. Or conversely, warnings of what to avoid? I avoid Ralink wifi (they always die or go offline on me), Realtek NICs (due to crap iperf performance and occasional hardware flaws), and anything, as you put it, "too new" on the graphics front. YMMV. -Swift
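The pcidevs lookup method described above can be sketched as a one-liner. The sample file here only mimics the format of src/sys/dev/pci/pcidevs (the real entries live in a checked-out source tree), and on a running system pcictl(8) can list the IDs to search for:

```shell
# A sketch of the pcidevs lookup: scan a pcidevs-style table for a PCI
# product ID. The sample lines below only mimic the real file's format;
# point awk at src/sys/dev/pci/pcidevs in a source tree instead.
cat > /tmp/pcidevs.sample <<'EOF'
product INTEL 82540EM 0x100e i82540EM 1000baseT Ethernet
product REALTEK RT8139 0x8139 8139 10/100 Ethernet
EOF
awk -v id=0x100e '$1 == "product" && $4 == id { print }' /tmp/pcidevs.sample
# -> product INTEL 82540EM 0x100e i82540EM 1000baseT Ethernet
```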
Re: NetBSD installer failure
On Fri, 3 Mar 2017, Al Zick wrote: http://datazap.net/sites/14/hang.jpg Does anyone have any ideas as to why? It's hard to say, but it looks like it's failing right before the real root file system is mounted. Did you have them try without SMP and ACPI ? You can disable those from the bootloader. Sometimes it's useful to setup NetBSD on a USB drive (ie.. do a full installation etc..). Then I put some alternate kernels on the file system and one of them I'll have a full debug kernel built that I can boot up on and (hopefully) get an idea of why/where-exactly the failure is occurring. You could probably do that and create a 2G or 4G image for your data center hands & eyes folks to rawrite to a USB drive and boot up the server on. Then FTP / dropbox it to them etc... -Swift
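Disabling those at boot goes roughly like this via userconf (a sketch from memory; check boot(8) and userconf(4), and the device names may need adjusting for the target kernel):

```
# At the NetBSD boot prompt, pass -c to drop into userconf before
# autoconfiguration runs:
> boot netbsd -c
uc> disable acpi
uc> quit
```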
udoo amd64 sbcs
http://shop.udoo.org Can anyone confirm using one of these with NetBSD? At about $100 these pack a lot of firepower vis-a-vis an RPi 3 (at $35, though, dayum). "It looks like it'd probably work fine" -famous last words -Swift
Re: Status of RPi 3 (b) ?
On Fri, 6 Jan 2017, Christian Baer wrote: Good mornin'! :-) Why would you do such a thing? Just for fun. I knew I was going to make multiple mistakes, use multiple SD cards, and basically just screw around. However, I have several other machines running it - primarily because of ZFS. ZFS is a memory hog - and a big fat one at that. This is no big deal on a server or workstation with lots of RAM, but unfortunately, the Raspi doesn't really fit into that category. I totally agree. Most optimizations for ZFS don't even kick in unless you have 4GB of RAM or more. The OpenZFS tuning guide states that 1GB of RAM is sufficient, but more is recommended. ZFS ARC sizes with 1GB are too small for any kind of decent prefetch performance, though. Although I haven't really measured it, I'm pretty sure that ZFS is also pretty heavy on the CPU (compared to FFS). My experience with it on Solaris is that it depends on how you deploy the compression, deduplication, encryption, and built-in file sharing options. Load up on those, start using it heavily, and you can turn the machine into slag. That's also not to mention the checksums ZFS is constantly calculating. I can't prove it either, but I'd bet money you are right vis-a-vis FFS. UFS/FFS with soft updates is probably a better choice for a machine like a Raspi. Yeah, probably. That and I don't even know if they have a boot sector that'd let you put your root disk on ZFS on the ARM platform. I kinda doubt that. However, I wouldn't bet on UFS+softdeps being better than a *well tuned* ZFS rig. One could disable prefetch, lower the arc_max, and then override the TXG write_limit to force it to flush & sync a bit more and you'd probably be just fine and maybe match or beat UFS. It'd be an interesting test, actually. This "multicolor test pattern" is the Raspi's way of telling you that it can't load the OS. I figured it was something like that. That's better than a black screen, I guess. 
At least the monitor power save doesn't kick in and you can tell what happened. You have stated the reason quite well: The Raspi3 only works with NetBSD current. Thanks for the confirmation. My RPi v2 comes today in the mail, so either way, I'll be playing with NetBSD on some kind of Pi in the near future. That depends. IIRC everything works fine apart from the wireless-stuff (Wifi and Bluetooth). I heard some rumor that the OpenBSD guys have eschewed Pis and focused on BeagleBones because of some binary blobs they don't want to distribute. It's always a mess with wifi and bluetooth. That and I can't keep up with who distributes what and who refuses. I think the deal with the OpenBSD crew is they don't want to be on the hook for redistributing blobs, but they'll cope with them if the machine uses them "internally". However, that whole topic gets ranty and endless once it starts. I mostly can't be bothered to care about it. I get a little "consumerish" and I figure folks like Stallman, Theo, and ESR can fight that fight elsewhere without me. I'll just plug in USB dongles as-needed. However, I have always found using any -current OS to be extremely ball-busting. Heh, true. However, I've had okay luck with picking a -current image for whatever weird kit I'm messing with at the time and then just not updating once I find an image that works. Now *tracking* -current, oh man, yeah, that's a challenge. I don't yet have all the chops I need to track down bugs and regressions that crop up. I can code in C, but I'm not well practiced enough to find bugs before others do. I'll grant you that NetBSD is a lot more conservative (even in -current) than FreeBSD and especially most Linux distros (which are completely bleeding edge). But even with NetBSD it can happen that after an update (which you installed because some feature was a little flaky), the system won't work anymore or you just trade one bug for another. Basically, not the best choice for a production system.
I'm just playing around, so no big deal, though I agree on the finer points you make. If I had a "production" application using Pis, I'd probably just buy two of them and keep known-good SD card images laying around. Plus, I tend to test & harden using very common hardware extensively before I put them in a position to lose money. Still, Pis are fun and definitely powerful enough for "real work" under the right circumstances. -Swift
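For reference, the tuning knobs mentioned above (prefetch, ARC size, transaction-group flushing) map to loader tunables on FreeBSD, where I'd actually run ZFS on a box this small, roughly like this (names per FreeBSD; values are illustrative, not recommendations):

```
# /boot/loader.conf - FreeBSD-style ZFS tunables for a low-RAM machine
# (illustrative values only)
vfs.zfs.prefetch_disable=1      # skip prefetch; the ARC is too small anyway
vfs.zfs.arc_max="256M"          # cap the ARC well below total RAM
vfs.zfs.txg.timeout="2"         # flush transaction groups more often
```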
Re: Status of RPi 3 (b) ?
On Thu, 5 Jan 2017, Brad Spencer wrote: You will need to use something -currentish. I mostly run a 7.99.42 kernel with some local mods. Thanks for the tip. I have 3 RPI3b v1.2 boards. Two of them will not boot NetBSD 9 out of 10 times. They will hang between these two lines: That sounds like a bummer. I'll watch for the same issue. Raspbian works fine on those boards [at least it does on one of them, I don't remember if I tried both] and 1 time in 10 NetBSD will get past that point and run fine. The third RPI3 board appears to boot NetBSD just fine, as does my RPI2. I hope you have one that works out for you. Weird. I wonder if it's firmware related... There are some warnings about upgrading firmware on the RPi Wiki page. The two RPI3s that hang were purchased at different times from different places, but as far as I can tell are all the same board. Yeah, that definitely seems like it'd be firmware related. It's the same SoC and other bits. Right now the RPI3 runs like a RPI2, more or less, in terms of devices. I've got an RPi v2 on the way, also. If I have to fight the v3 too much, I'll just downgrade. The downside is that the RPi3 is supposed to be about 2x the speed of the RPi2. Of course, I'm not building a compute cluster with them in any case (yet! hehe). -Swift
Re: Status of RPi 3 (b) ?
On Thu, 5 Jan 2017, Brian Buhrow wrote: I'm using NetBSD-current on an RPI3, using the RPI2 images. Everything seems to work except for the wireless and bluetooth modules. Right on. I was trying a 7.x release, I think. I can't remember. Anyhow, I'll just use a -current image and try again. At least now I know it's worth trying, so thanks for that. The dmesg for this is below. Note that this is -current from the end of November 2016. -Brian Hmm, well, unless something has been broken between now and then, that should be okay. As far as wireless & BT, they are "nice to have". Since the RPi3 has 4 USB ports, I can just use one of those if I need Bluetooth or Wifi to hork up my privacy. :-) -Swift
Status of RPi 3 (b) ?
I just picked up an RPI3. I guess I should have waited. A few congenitally systemd-infected distros work on it, but not much else. FreeBSD was a notable exception. It seems to work, but I managed to hork up the SD card jacking around with ZFS before I could test X11 and other stuff. No big deal. However, before I start over, how about NetBSD? When I tried to boot the gzimg on the SD card it just came up with what looked like a multicolor test pattern, but no boot etc... That was just a quick and dirty test. I could have been doing any number of things wrong. For one, I wasn't using -current. My real question is: Does NetBSD work well enough on the RPi3 to make it worth trying ? Also, could one of the anointed ones update the RPI wiki for the RPi3 ? There is only a brief mention of the RPi3 in the firmware section (nothing that helpful) and another user in the comment section is seeing the exact same thing as me. -Swift
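For anyone repeating my quick-and-dirty test, writing the gzimg from another NetBSD box goes something like this (the image filename is from the evbarm release layout and the sd0 device name is a placeholder - verify both, and double-check with dmesg before dd'ing anything):

```
# Writing the NetBSD Raspberry Pi gzimg to an SD card from another machine
# (rpi.img.gz per the evbarm release layout; sd0 is a placeholder!)
gunzip < rpi.img.gz | dd of=/dev/rsd0d bs=1m
sync
```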
The $0.5M donation to FreeBSD Foundation
https://www.freebsdfoundation.org/blog/freebsd-foundation-announces-new-uranium-level-donation/ Folks, we need to make friends with this Mr. Anonymous guy. He's loaded and generous. I won't forget that the FreeBSD guys are our allies and friends, too. Their rising tide can also help to lift our boat. Some other food for thought just before the New Year:

"Sometimes even to live is an act of courage." -Seneca
"I am a slow walker, but I never walk back." -Abraham Lincoln
"Even if I knew that tomorrow the world would go to pieces, I would still plant my apple tree." -Martin Luther

Chins up, folks; we hang tough and work in the wilderness on less than about 1/100th of the resources. Just blood, sweat, and MajorDomo mailing lists, baby. Hang tough, and here's a big "thanks" from a longtime NetBSD user to The NetBSD Foundation, the NetBSD coders, the pkgsrc guys, and anyone else who helps! -Swift
Re: Xorg vs Wayland (and MIR?) - future for NetBSD X ?
On Wed, 28 Dec 2016, Michael wrote: NetBSD is just about the only OS still using xorg as setuid root. Pretty much everyone else did away with it. We only really need it for /dev/pci*, because that lets you mmap() arbitrary PCI space - things like wsfb or sbus graphics work without it. I'm curious. If you have time, can you explain why those work without it? I'm assuming wsfb because it's in the kernel already, and sbus because it has some kind of smarter & more magical method versus PCI? We could easily do away with it by going back to using ttyE*, ttyF* etc. - that would only allow using graphics hardware with kernel drivers though. I did that back in the xf86 days. As a user, I had zero problem with that. that way every graphics device shows up as its own PCI domain, device 0:0:0, complete with its own IO space, which can be quite confusing since it doesn't correspond to the actual bus layout at all. That makes sense. As a developer, I always hate big lookup tables and TLB-like abstractions. They are helpful, but usually a bit obtuse to work with. -Swift
Re: Xorg vs Wayland (and MIR?) - future for NetBSD X ?
On Wed, 28 Dec 2016, Jonathan A. Kollasch wrote: Is NetBSD going to play with Wayland? 'Cause X.org seems to be a bit shaky and captured by Linux-droids. What makes you think Wayland isn't also captured by the Penguins? Perhaps I wasn't direct enough. I do think that. I think it's even worse, actually; it's pretty well taken over by Fedora (Wayland) and Ubuntu (MIR) folks. It looks totally Linuxsy, and I was hoping one of the wizards on the lists would come out and say "Oh no Swift, you have it all wrong, let me explain how KMS is wonderful and we have no problems at all with Wayland. There's going to be a new nifty add-on that will do proper remote X. Nothing to worry about. It's just like what we'd have done ourselves." That didn't exactly happen, but it's what I expected anyway. -Swift
Re: Aw: Xorg vs Wayland (and MIR?) - future for NetBSD X ?
On Wed, 28 Dec 2016, Carsten Kunze wrote: This will be the time to leave NetBSD and go to OpenBSD. I run OpenBSD, and it's nice. No problems there. I run FreeBSD, too. Or to Linux. I work with a lot of Linux machines. No thanks. Why not using the original when NetBSD would try to copy Linux? Well, I can think of several reasons: 1. It's not that NetBSD is trying to copy Linux; it's that TNF doesn't have the resources to re-invent X11. So we are stuck with what's available. During this discussion I see nobody jumping up and down saying how great Linux's implementation is - just that it's the only viable option. 2. Linux uses systemd and a bunch of other non-Unix-like software I find repugnant. 3. Then I'd be exposed to too many Linux users. That doesn't end well since most of them run Ubuntu and, uhm, it shows. Will systemd on NetBSD be the next step? Negative. That'd be suicide. The project would implode. I highly doubt that'd ever happen. The brain drain would kill TNF, IMHO. It's hyperbolic anyhow. FreeBSD does already try to copy Linux (+ZFS) just to attract users. Negative. As others have pointed out, ZFS comes from Solaris. Also, Linux has a pretty terrible implementation of ZFS which is far behind FreeBSD's. The closest facsimile, BTRFS, is still light years behind ZFS in features, performance, and stability. Linux currently has no real answer to ZFS besides "wait for BTRFS". However, they are making lots of progress in BTRFS, and since Ted Ts'o has worked on ext2, ext3, and ext4 (which I find downright horrible), I figure he's got to have learned something by now. It's likely to emerge as something more usable in the next couple of years. Indeed, Slackware is much more UNIX like than FreeBSD. Well, since that's totally subjective, I'll just go ahead and completely disagree with you. Linux has essentially zero SysV code. I consider early BSD to be more-unixy-than-att-UNIX, and Linux has much less of that, versus FreeBSD. Slackware is cool and all; it's a hold-out from systemd, too.
However, it's still running Linux. One of the reasons that users prefer NetBSD might be that it is UNIX and not like modern Linux. Well, being a bit pedantic here, I'll point out that NetBSD doesn't have SysV code in it, either. It all depends on how you define "UNIX". I think of it as a way of doing things according to the Unix philosophy, best defined by Mike Gancarz. Others see "UNIX" as a copyright or trademark. Still others see it as a code-path and pedigree from AT&T. If this changes, I'm glad that there is still OpenBSD. You have that right. Personally, I'm glad there's more than just OpenBSD. Otherwise, I'd have to put up with Theo more than I do, and nobody wants that. -Swift
Re: Xorg vs Wayland (and MIR?) - future for NetBSD X ?
On Wed, 28 Dec 2016, David Holland wrote: On Tue, Dec 27, 2016 at 03:41:44PM -0700, Swift Griggs wrote: > Is NetBSD going to play with Wayland? 'Cause X.org seems to be a bit > shaky and captured by Linux-droids. I don't know. But all that stuff is shaky and linuxish. Good. I'm glad I'm not the only one who got that impression. ...you use XDMCP? Anyone uses XDMCP, other than to run some vintage X terminals they found in a skip? Or do you mean "remote X display access"? I do not use an X chooser or a display manager (much). I do have a fully functioning SGI box set up to do both, but I rarely use it for anything. I set that up years ago for some NCD X terminals and it's still kicking around in my home lab. Maybe my nomenclature is a bit off and I shouldn't refer to remote X apps that way. However, I do use remote X displays, and that is specifically what will not be supported (I assume along with display managers and choosers, too) with Wayland. KMS is best thought of as "the linux world finally figures out what everyone else knew by around 1990", that is, you should have device drivers for graphics same as for other hardware, and framebuffer devices exposed to userland that don't require reimplementing drivers in every application (read: X server) wanting to use the framebuffer. What you say makes technical sense. However, from a logistics standpoint, doesn't that also mean that the drivers become specific to choices made in kernel-land for whatever OS implements them? I remember the whole XFree86 -> Xorg transition and the eventual emergence of KMS. My big fear back then was that Linux would just focus on KMS, the X projects would wither, and any OSes that didn't have a million monkeys to work on graphics drivers would be out in the cold. It turned out that I was pretty much right. I used some pretty old hardware until NetBSD 7.x and FreeBSD 10 came out and had updated their KMS implementations with new drivers. Suddenly 80% of my new hardware was viable again. 
I didn't have to continue to use AGP graphics cards and expensive funny mobos that allowed newer CPUs with an older graphics bus. My point is that, though you are probably much more knowledgeable about what the "right" architecture is, I did see some advantages to centralizing the drivers in "an application" (X) because at least that creates a common fountain for FOSS to cooperate. Maybe my perception of that whole situation was off and XFree86 just made it harder. I never coded on that project. Except they apparently don't have it right yet, because the drmkms2 Xorg binary is still setuid root. You are probably just making a point about the architecture from earlier. Point taken. However, as an aside, I don't actually care about that particular bit, and I know others who would agree with me (not in the majority, I'm sure). X doesn't listen on TCP by default anymore, and even if it did, it's easily firewalled. Most multi-user server systems don't run an X server. So, it doesn't really impact local security that much, either. Then again, I'm not a "security guy". If I was, I'd be all high on OpenBSD. More power to those guys; they seem to get a helluva lot done. However, to me, security is like handrails on a long flight of stairs. You absolutely need it, but don't confuse that fact with the point of building the stairs, which was to get to the top. You can also add handrails later. It's not the smartest or safest way to go, but it's possible. IRIX was hella insecure and I still use it all the time in a version-locked environment behind my firewall. It still does things I can't find better anywhere else. I'll probably use them till I'm dead and I have zero fear of 37337 h0x0x0rs coming after me. The point in this context is that I think Unix principles are more important and helpful than security principles. Small is beautiful. Simple programs that interoperate are good. 
"Modern X," as Mouse put it, doesn't seem hip to any of that, with or without needing setuid binaries, which is an afterthought (though I think you were probably bringing up the point to illustrate your architectural critique). At least "crufty X" (my term) showed some awareness of that. There were some historical reasons that XFree86 ended up using the MS-DOS model for hardware and drivers, but it was wrong then anyway and there just wasn't a critical mass of people who knew better. Well, having lived through that time, I can tell you that I was a young guy who'd just come from MS-DOS to UNIX in about 1992 or so. I think there were a lot of folks like me who, as you put it, were just too inexperienced to get it right. They were my peers. I had tons of respect and admiration for the acc
Xorg vs Wayland (and MIR?) - future for NetBSD X ?
Is NetBSD going to play with Wayland? 'Cause X.org seems to be a bit shaky and captured by Linux-droids. More questions if anyone feels like answering: * It's obvious we already have KMS. However, is that all we need to support Wayland? * What do the other BSDs do at this point? Is there any cohesion there? * I don't think GTK + Broadway or RDP/VNC is a viable alternative to XDMCP. The Wayland guys really think that's good enough? XDMCP can do things those can't, like display a single application, etc. I've never seen a non-hackish way to do that with VNC or RDP, and Broadway is GTK3-only. If XDMCP goes... well, damn. I like it and use it. I guess I'm screwed because the Wayland guys seem to see XDMCP and drawing operations as "the dumb way" to do things (from reading their interviews). I'd have a lot easier time accepting that if we had a viable XDMCP alternative. That doesn't seem to me to be the case. Since nobody cares what I will "accept", I guess I'll be doomed to old framebuffer hardware like we were before the last KMS update that came with 7.x. Then again, I mostly don't care. I'm old and I like old hardware. However, I'd hate to see a systemd-like event happen to X11. * Anyone remember AtomBIOS? Wasn't stuff like that supposed to solve most of the we'll-never-share-squat-with-anyone problem for the vendors? They could all just make their special-monkey-magic hyperfast graphics calls from BIOS calls (which would suck for non-x86 but at least provide some middle ground for development). I guess it never took off? * Is KMS "just a hack" we support, or is it a future X11 direction TNF embraces? Doesn't Linux do things in kernel-land that we either can't or won't do in NetBSD? I'm thinking of all the stuff provided by ./sys/miscfs/procfs/procfs_linux.c and sys/compat/linux*. Doesn't that mean we are forever going to be worried more about making sure we properly ape Linux rather than making anything novel? 
* How does weird X11 framebuffer code for off-the-wall platforms get built? I'm thinking of things like Amigas with Retina Z3 boards. How is it that these wizards-in-caves can be coaxed out for that, but for x86 we have to beg for a seat at the table with Linux and Microsoft? I'm just ignorant of these dynamics. I'm assuming it's because those older framebuffers are simpler or better documented. For reference: Xorg seems to be losing momentum (or not) http://www.phoronix.com/scan.php?page=news_item&px=XServer-Git-2016 http://mirror.linux.org.au/linux.conf.au/2013/ogv/The_real_story_behind_Wayland_and_X.ogv http://www.phoronix.com/scan.php?page=news_item&px=X.Org-Foundation-Missteps (I know, I know, two of those are Larabel links - but his facts are correct in this case.) Some of my biases for Linux device drivers on BSD come from this: http://info.iet.unipi.it/~luigi/freebsd/linux_bsd_kld.html My only real technical knowledge of AtomBIOS comes from this post: http://tinyurl.com/j2q87y8 Amigas have cool X11 drivers. So do SH3, MacPPC/68k, and others: http://ftp.netbsd.org/pub/NetBSD-archive/NetBSD-1.4.2/amiga/INSTALL.X11 -Swift
Re: Serial console setup
On Fri, 23 Dec 2016, Greg Troxel wrote: You are not really wrong in theory. Heh, whew. There, getty waiting for open to succeed until CD was asserted made sense, especially when the modem/line was shared with outgoing UUCP. Oh, I get what you are saying. Even back in the day I rarely put the console on a modem. Rather, as in my case, we'd just set up a "secure" getty on the modem (usually with mgetty, if possible, since it tended to handle modems quite well). The only time I'd stick a modem on the console was when I had to ship machines to IT deserts in BFE and I had tested the snot outta the rig and had scripts in place to periodically poke-or-reset the modem, as needed. I also wouldn't accept anything less than a top-of-the-line US Robotics Courier because rarely did other vendors meet their level of quality and speed. I had Zoom and Atmel modems that could never be relied upon if they were left to sit for too long with no calls/reset. I'd have never trusted those on a must-be-available-or-I'm-screwed-800-miles-away-console port. A modem console would be beyond bizarre these days. Heh, very true, but as strange as it is, I've seen it now and again even to this day. The main place is on EMC storage arrays where they take incoming calls from support to do things like system checks and firmware updates. They tend to get a lot less hassle from security folks for whatever reason. I haven't seen it on the newer VMAX chassis, but the older Symmetrix "DMX" line used modems quite a bit and people still use them. There is an outfit on the East coast that still maintains them via dial-in, too. Of course, it's not exactly the same thing because I think those machines are either DOS or Windows machines running some kind of remote access package (ie.. not gettys). I've also run into refrigeration controllers and digital signage systems that also still run on modems, but again these are embedded devices, not Unix boxen. 
My point was really that if the cables are not wired up right and DCD ends up not asserted (there are a lot of wrong serial cables out there), then it seems better to just have the console work, rather than not work. I get it now, and I get why you said it and it makes good sense. Ie.. In this case, folks can debate which pins to set high or not, but "working" is considerably more comfortable than "broken" no matter what "standard" is getting violated or who thinks that's not the ideal wiring scheme. Right on. :-) Thanks for explaining. -Swift
Re: Serial console setup
On Fri, 23 Dec 2016, Greg Troxel wrote: Are you saying that the console device itself will refrain from output if either DSR or CD is not asserted? I can see the point of DSR but requiring CD for a console seems non-helpful. Hmm, out of ignorance, ('cause I wouldn't gainsay you, Greg!) why? Carrier detect is pretty modem-ish, but my simple understanding is that when using a null modem, you want to connect DCD to DTR and the same for DSR. I've even built cables this way and they worked. It's all just non-magical 12V low/high. That way you've got a "high" signal telling you "Yeah, it's cool to talk. We're connected." For a modem, that has the additional meaning "You've got a carrier signal" rather than just "The cable got plugged in.". Straighten me out, guys. Am I wrong? -Swift
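P.S. For reference, here's the full-handshake null-modem wiring I'm describing (DB9 pin numbers; double-check against your actual cable, since plenty of "null modem" cables only cross TX/RX and ground):

```
  side A               side B
  3  TXD  -------->    2  RXD
  2  RXD  <--------    3  TXD
  7  RTS  -------->    8  CTS
  8  CTS  <--------    7  RTS
  4  DTR  -------->    6  DSR + 1  DCD   (tied together)
  1+6 DCD/DSR <----    4  DTR
  5  GND  ---------    5  GND
```

That way each end's DTR tells the far end "we're connected" on both its DSR and DCD pins.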
Re: Serial console setup
On Thu, 22 Dec 2016, j...@sdf.org wrote: I'm wanting to connect an actual serial terminal (Wyse 60; VT100 mode) to a small i386 PC running NetBSD 7.99.25 (snapshot w/ GENERIC kernel). I used to have a Wyse 60, as well. I think mine was the "paper white" model. I used it for years as a terminal to watch my filtered log output from about 200 servers at once. I got it for $5 at a used gear auction. The cable is a straight serial, DB9 (PC COM port) to DB9-DB25 (Wyse MODEM port). Hopefully, you specifically remember using that port on your working SPARC rig and you've triple checked your settings on the terminal itself are matching. I added a 'consdev auto' entry to /boot.cfg (see below) which seems to get the serial port address right and I'm able to get the dmesg stuff to display during bootup but no interaction via the keyboard. Okay, I could be totally wrong here, but my understanding is that you flat out can *NOT* interact with the kernel while it's booting. Ie.. while you are seeing "green" text (ie.. the dmesg ring buffer stuff before any scripts run). I'm also not sure you'll be able to do things like ctrl-c out of startup scripts. Linux and other OS's already prevent that from the console (*sigh*), but it's nice that NetBSD doesn't. However, I don't remember being able to do that unless I'm on a "real" keyboard and watching the BIOS (VGA) console. However, I'm not sure exactly why, because I also specifically remember doing some netbsd kernel debugging (ie.. gathering backtraces and what-not) from a serial console that totally worked. However, I believe it was because, on that particular system, it didn't have a framebuffer and the BIOS console *was* the serial port. I tried various combinations of [Full|Half]Duplex and RCV Handshake [none|XON/XOFF|DTR|both] ; none enabled 2-way communication. I always had the best luck with my Wyse using hardware flow control (DTR). IIRC, they will go up to 38,400 pretty reliably. 
You definitely want to keep it in full duplex, too. Current settings are attached. Is there something more I need to do? Not to be an ass, but you do know you've got to fire up a getty for the terminal in /etc/ttys once the system is booted, right? The console stuff you've done basically only enables the kernel displaying its ring buffer output and what-not. To log in et al, you'll need a getty. -Swift
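P.S. A concrete sketch of what I mean, assuming the terminal is on the first serial port (tty00) at 38400 baud; adjust the speed and terminal type to taste and see ttys(5):

```
# /etc/ttys -- turn the serial getty on
tty00   "/usr/libexec/getty std.38400"   vt100   on secure
```

Then `kill -HUP 1` (or reboot) so init re-reads /etc/ttys and spawns the getty.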
Semantic/fuzzy-logic code comparison tool?
Let's say one wants to make a general statement that "This code is 30% the same as that code!" Another example would be someone wanting to make the statement that "XX% of this code really came from project X." In my case I'm only interested in "honest" code, not trying to catch someone stealing/permuting existing code. Oh, and everything I care about is in C. My questions are: * Are there tools that already do this? * What do you do about whitespace, simple variable permutation, and formatting issues? Ie.. times when a tiny thing changes the "checksum" of your content but it's essentially still the same code. I know that this is essentially an AI problem and thus can get complex in a hurry. I was writing some scripts to take a swing at some kind of prototype (and I even made some early progress), but then I thought "surely someone's already done this, genius." Anyone know of any place to start, here? I know it's awfully arbitrary and subjective. However, as long as the algorithm isn't partisan and generates reproducible and at least somewhat defensible results, I can live with the subjectivity. -Swift Now for those that might be somewhat interested, this is what I started with on tissue paper (just notes). Feel free to critique if you have ideas or know of preexisting stuff I should look at. I'd rather not invent this wheel. * Substitute all whitespace for a single space, yeah, for sure. Forget about wrapping characters, too (CR, LF, etc..). * Possibly use something like soundex on variables? Hmm, how to detect when the same variable is used under a new name? Leading/trailing characters? * Count braces and nesting levels? Does this generate a unique enough pattern? Add it to an overall heuristic score ala Bayesian style? * How to solve the problem of old code with a new location? Also when it's slightly permuted? * What will I use for quanta/units to analyze? Going by lines is dumb since it implies whitespace (which is ignored). By function? By sets of braces or parens? 
By scope? Multiple types of quanta? * I'll start with multiple scripts. Each one builds its own score based on a different technique. Then we aggregate the scores and see which ones are most useful/accurate for my use cases. Then see if any track together or diverge in different cases. * What about old K&R code that's simply been updated with a newer function declaration and C99 or C11 stuff? Should be able to regex to detect this? * Probably better to write the tool in a scripting language; too much string handling to dork with it in C. * If one file is 100k and another 50k, make sure that the tools never assert a difference of less than 50%? What if file B is just 2x a bunch of code still found in file A? Grrr... think... Those were just rough notes with my ideas.
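To make the normalization notes above concrete, here's a rough Python sketch (all names here are mine, not an existing tool): strip C comments, collapse whitespace by tokenizing, and rename identifiers to canonical tokens so trivial renames and reformatting don't change the score.

```python
import re
from difflib import SequenceMatcher

def normalize(code: str) -> list:
    """Tokenize C-ish code, ignoring comments/whitespace and
    canonicalizing identifier names (foo -> id0, bar -> id1, ...)."""
    code = re.sub(r'/\*.*?\*/', ' ', code, flags=re.S)   # /* ... */ comments
    code = re.sub(r'//[^\n]*', ' ', code)                # // comments
    tokens = re.findall(r'[A-Za-z_]\w*|\d+|[^\sA-Za-z_\d]', code)
    mapping = {}
    out = []
    for t in tokens:
        if re.match(r'[A-Za-z_]\w*$', t):                # identifier or keyword
            mapping.setdefault(t, 'id%d' % len(mapping))
            out.append(mapping[t])
        else:
            out.append(t)                                # operators, digits, braces
    return out

def similarity(a: str, b: str) -> float:
    """0.0..1.0 similarity of two snippets after normalization."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Renamed variables plus reflowed whitespace still score as identical:
print(similarity("int foo(int x){return x+1;}",
                 "int  bar( int y ) { return y + 1; }"))  # -> 1.0
```

The brace/nesting counts and the per-technique scripts from the notes would just be more scoring functions feeding one aggregate heuristic. One caveat: this canonicalizes keywords like `int` along with variables, which is harmless for same-language comparisons but worth knowing.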
Re: NetBSD 7.02 on APU2 PcEngines
On Fri, 9 Dec 2016, 76nem...@gmx.ch wrote: Finally I have switched between the speed of 9600bps and 115200bps to install NetBSD 7.02 on APU2 from PCEngines. Hi, Alan, I also have one of these little systems from PCEngines. I've been following your thread. It's been a few years since I've messed around with my little 500MHz AMD-based PCEngines board, but I think my method to solve the issue you are having was to hard set the console and baud rate in a custom kernel. From the GENERIC conf file: #options CONSDEVNAME="\"com\"",CONADDR=0x2f8,CONSPEED=57600 If you uncomment that line and nail the baud rate, then use that kernel, you should be good to go. This is uncomfortable but it works. What doesn't work, however, is the installation itself. The kernel boots for a while and stops on root device. Can you do the install on another machine, install the custom kernel, then swap-a-roo the drive back to the PCEngines host? I can type what I want (wm0, wm1 etc.) but the installation hangs (sometimes after a curious message about NFS mount). If you could post how far it got and what you see, that'd help. Any ideas? Do you think that a network boot and install will solve these problems? Why would netbooting help? I'm curious why that idea occurs to you. I thought the issue was the baud rate on the serial port? I guess you could use a custom boot kernel via PXE/TFTP/BOOTP etc.. That might work/help. I have tried to boot from a USB2 CDROM and booting the ISO image from a syslinux system but this changes nothing. Are you passing any custom boot parameters? Can you summarize the specific issue you are having? I can try to help, since I have one of these machines myself and have used it for all kinds of BSD projects. -Swift
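P.S. If hand-rolling a kernel feels too heavy, there are two lighter-weight ways to nail the console speed that should also work here. Check boot.cfg(5) and installboot(8) on your release for the exact spelling; the device, speed, disk, and bootstrap names below are examples:

```
# In /boot.cfg on the target disk (com0 = first serial port):
consdev com0,115200

# Or bake it into the bootstrap itself (bootxx_ffsv1 must match
# your root filesystem type, and rwd0a your boot partition):
installboot -v -o console=com0,speed=115200 /dev/rwd0a /usr/mdec/bootxx_ffsv1
```

Either way, the bootloader and kernel should come up at the right speed without a custom kernel build.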
Re: The old wiki.netbsd.se
On Fri, 2 Dec 2016, matthew sporleder wrote: I imported things I thought were useful into wiki.netbsd.org, mostly here: http://wiki.netbsd.org/tutorials/ Whoa there it is! I see lots of stuff I wrote way-back-when in there. Thanks a bunch! There is a ton of good stuff there. -Swift
Re: The old wiki.netbsd.se
On Fri, 2 Dec 2016, Jan Danielsson wrote: There was a thread about this a long while back, I believe the argument was that it would be better for the community to have one authoritative wiki so all useful information could be centrally managed. Ah, okay. I seem to remember some talk about it, but I think it was before I was on as many of the ML's as I am nowadays. I also seem to recall that there were some voices saying that the official NetBSD wiki broke the spirit of a wiki. Well, like you, I can see both sides of that argument. Oh, and also, thank you for responding and taking the time to catch me up. -- but in the end some consensus was reached. If you search around in the mailing list archives I'm sure you'll be able to find the thread I'm referring to. Yes, I think I did find it after some digging, thanks. I also found this page: https://wiki.netbsd.org/users/asau/ "What happened to wiki.NetBSD.SE Simply put, it was shut down. Stop complaining. Just stop. Better help saving useful information." I guess there was a bit of drama around the discussion. He sounds a bit miffed. Sorry for going over old ground. I never got into the original discussion and I'm not going to get all butthurt about it. The deal is that I have some infrastructure at home and a domain etc... Hosting anything useful for NetBSD sounds like something I'd like, but I don't want to get involved with TNF politics either by re-creating a site that some folks deemed harmful or by trying to rule-follow my way through contributing to the official wiki. I can see both sides of the argument (having a de facto standard unofficial wiki vs not having one); I had some experience with "the" Gentoo unofficial wiki which I think is illustrative: That does indeed sound like a rough time. Sorry that happened to you. As a value-system thing, I guess it's a judgment call between the impact/danger of getting and using bad unofficial information versus the not having the useful bits available at all. 
(couldn't get any gcc to run, which in Gentoo is bad..). Heh, yeah, that would be. As in, "no new portage software for you - ever." I asked on IRC if it was a known issue and was told that one should *never*, under any circumstances, "update world", and that following some random unofficial wiki was a recipe for disaster. It's a rather extreme example, but valid nonetheless. I'm thinking that's *exactly* the kind of scenario TNF wants to avoid by not having a de facto standard wiki which is unofficial. Hmm, okay. Then I think the best way for me to proceed would be that if I do decide to create another NetBSD documentation resource, I'll be very clear that it's unofficial, completely detached from and unrelated to TNF, and that all information is subject to being wrong as heck. No, you're not the only one, I liked it and I know others who did -- but it's a little more complicated than that. Well, thanks again for your time and the explanation. "... But the thought of being a lunatic did not greatly trouble him; the horror was that he might also be wrong." George Orwell, 1984 -Swift
The old wiki.netbsd.se
https://web.archive.org/web/20100527034652/http://wiki.netbsd.se/Main_Page "Dear Users, Thank you for your patience and your contributions over the last 4 years. The time has come to shut down this wiki. Please refer to the official NetBSD wiki in the future." I'm just being nosy. Anyone remember what happened that made the site operator ditch the site? "The time has come" Why did it come? Personally, I found it far more useful and rewarding than the TNF wiki (not knocking TNF). Plus, I had an account and could edit pages etc.. I'm considering snarfing all my content back outta the wayback machine and resurrecting my own version of it. However, I'd first like to understand the original story if anyone knows it. Was it hackers beating on the site? Did spammers take over the mail relay? Did the guy just get a divorce, with the alimony eating up the bandwidth fees? The answer I'd dread to hear is "It was too open and not dryly clinical or academically pedantic enough. So, we shut it down. Too many people had access." Unfortunately, due to the meta-refresh to wiki.netbsd.org appearing right afterward, it's an open question in my mind. Thus, I don't want to end up fussing with TNF if I stood something like it back up. That's the main reason why I'm so curious. Please don't take my speculation the wrong way, it's just guessing/fear. Maybe I was the only one who liked it and I'm just wasting my time. Thanks, Swift
Re: Fwd: pkg_add not working
On Thu, 27 Oct 2016, Saurav Sachidanand wrote: Doesn't work with http either. Also, I'm running NetBSD on VirtualBox on OS X El Capitan, if that's relevant. I'm not sure why you are having the issue with pkg_add, but I can say I've had lots of intermittent problems along the same lines a long time ago. It worked occasionally, but was just too flaky, so I gave up and started using 'pkgin' (pkgsrc/pkgtools/pkgin) as soon as it came out. The bonus is that you will be able to use 'pkgin' to search for packages after doing an update with it. Thanks, Swift
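P.S. For the archives, bootstrapping pkgin looks roughly like this (the URL, release number, and architecture are examples; substitute your own, and see pkgin(1)):

```
# one-time bootstrap of pkgin itself via pkg_add
PKG_PATH="http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/7.0.1/All" \
    pkg_add pkgin

pkgin update          # fetch the repository's package list
pkgin search mutt     # searching then works against the local list
pkgin install mutt    # installs with dependency handling
```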
Any ideas why this kernel thread is going nuts?
I read the xcall(9) man page, but I still don't see why this kernel thread should be taking this much CPU time. It'll go on like this for 30-60 minutes and I can kill every app I'm running - no change. The main reason I know it's happening is that I hear the CPU fan spin up. I can't correlate this to any activity I'm starting or stopping. I've only experienced this with the i386 port on 7.0.1. This is related to me asking the ignorant questions about how to see kernel threads; several folks reminded me to either use ps with options or top with 't'. Here's the 'top' view with threads expanded:

  PID  LID USERNAME PRI STATE      TIME   WCPU    CPU    NAME      COMMAND
    0   34 root     127 xcall/3   89:50  22.71%  22.71%  xcall/3   [system]
    0   68 root     126 RUN/3    223:24  11.43%  11.43%  pgdaemon  [system]
    0   28 root     127 xcall/2   81:43  10.16%  10.16%  xcall/2   [system]
    0   22 root     127 xcall/1   60:01   1.17%   1.17%  xcall/1   [system]
    0    7 root     127 xcall/0   63:15   0.63%   0.63%  xcall/0   [system]
    0   69 root     124 syncer/2  19:39   0.00%   0.00%  ioflush   [system]
 6820    1 sgriggs   85 select/0  19:29   0.00%   0.00%  -         fetchmail
    0   37 root      43 i915/3    14:02   0.00%   0.00%  i915      [system]
    0    9 root     125 vdrain/0   3:23   0.00%   0.00%  vdrain    [system]
    0   70 root     125 aiodon/3   2:57   0.00%   0.00%  aiodoned  [system]
    0   71 root     123 physio/3   1:20   0.00%   0.00%  physiod   [system]
    0    1 root     125 uvm/3      0:20   0.00%   0.00%  swapper   [system]
    0   16 root      96 smtask/0   0:17   0.00%   0.00%  sysmon    [system]
    0   11 root     125 cacheg/3   0:06   0.00%   0.00%  cachegc   [system]
    0   35 root      96 apmev/0    0:06   0.00%   0.00%  apm0      [system]
    0   43 root      96 iicint/1   0:06   0.00%   0.00%  iic0      [system]
    0   10 root     125 vrele/1    0:04   0.00%   0.00%  vrele     [system]
  575    1 root      85 kqueue/0   0:04   0.00%   0.00%  -         syslogd

-Swift
Re: What is the "[system]" process representing ?
On Tue, 4 Oct 2016, Michael van Elst wrote: > [system] is all the kernel threads. In top you can switch to thread > display and get more details. Kernel threads are also displayed with 'ps > -s' and you can augment the display with the thread name using '-o > lname'. Ah, I should have known there would be a switch for 'ps' since there is on several other OSes including Linux. However, IIRC, Linux will show all the threads by default. Thanks for the reminder. > 80% CPU for doing nothing however is bad. The top display probably tells > you which thread is misbehaving. Next time it happens, I'll pay attention and expand the kernel threads to see which one is doing it. I guess there is no way to do a gdb 'attach' to the kernel thread to get a backtrace, though. I'm assuming one has to do this sort of thing with a specialized kernel debugger/profiler. > Saying this, if you run a kernel with LOCKDEBUG on a system with lots of > memory, this adds a ton of overhead to the ioflush function and then > it's not impossible to see a continuous 80% CPU usage for '[system]'. But > that doesn't happen with normal kernels. This is definitely abnormal in my experience. I have machines with less horsepower and they don't have the issue. It's got to be something specific to my configuration or hardware on that one machine. I'll run it to ground eventually, and especially now that I'll remember to drill down to the actual thread in question. I can then put in enough printf()'s or printk()'s that I can find the general problem area enough to report it. > top just can't display CPU usage correctly for processes that are active > for very short intervals, whether kernel threads or not. Well sure, its default refresh is 5 seconds, so by definition if it's shorter than that, it'll basically be invisible. Thanks for the reply. -Swift
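P.S. Partially answering my own question: crash(8) can run ddb-style commands against the live kernel, which is about as close to "attach and backtrace" as it gets without dropping into ddb on the console. Something like the below is what I'll try next time; the exact trace modifiers vary by release, so check crash(8) and ddb(4) before trusting my syntax:

```
# crash                       (as root; examines /dev/mem against /netbsd)
crash> ps                     # find the LWP address of the busy thread
crash> bt/a <lwp-address>     # stack trace for that specific LWP
crash> quit
```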
What is the "[system]" process representing ?
Folks, I recently installed NetBSD on a Lenovo M83 Tiny machine and from time to time, I notice the "[system]" (appears to be a kernel thread?) getting up to 80% of the CPU while the box is doing nothing. No processes are active and a reboot clears the issue (except when it doesn't. I power-cycle *then* it's cleared). The only reason I noticed in the first place was because of the system fan spinning up. FYI, this is just a standard NetBSD 7 install (not -current). What is "[system]" really doing? Is there a way to get a more granular look at what is going on? On another system, I have a question about a 1.8GHz Core Duo-based 32-bit i386 laptop with 2GB of RAM. I noticed that '[system]' accumulates the most time on the host, but it's never "on the board" when I run top or other tools. Its overall usage is trivial. However, I notice that if I install Debian 8.6 on this machine, I see that 'systemd' (yes, I know it's much different and not a kernel process and not apples to apples) is _always_ taking between 5-10% of the CPU and is nearly always the #1 consumer. This is on a fresh installation! I just want to make sure I'm not missing some critical fact like perhaps the '[system]' process on NetBSD is masking its CPU usage and is doing the same amount of work (doubtful, but possible). This is, after all, a pretty old machine. So, the basic question is this. Is my X61 ThinkPad actually getting slapped around by systemd, or is that system just so slow that it's exaggerating an effect that would be hard to detect on a fast/new box? I'm trying to rule out some mistake or misconfiguration on my part that a systemd advocate (ie.. not me) would point out and say "You just didn't do it right." The corollary is, does NetBSD do the same work but just mask the CPU usage? I really really doubt this but I wanted to ask to make sure before I make any kind of "linux vs netbsd" claim in this case. Man, if that's really the "new normal" for Linux, it's hard to believe. 
I'm tempted to install it on my 500Mhz AMD Geode system. It'd probably take up 50% of the CPU if the effect scales... Maybe they don't care because they've already eschewed both sysv-init and systemd for Busybox-init? Great news for embedded BSD developers, I'd think. :-P Thanks, Swift
bozohttpd minor fixes to man page
I like NetBSD's httpd. I noticed a couple of minor inconsistencies in the bozohttpd(8) manual page. Where should I report these? * The -v option appears twice in the options summary. It's shown as both a flag and a switch that takes an argument. Both can't be right. * The -V option is documented in the manual page, but does not appear in the options summary block at the top of the manual page. It's also unclear if "slashdir" is an option for -V or if the text refers to the "slashdir" given as the document root. * -V is also not documented in the usage when you get help directly from the binary (ie.. just run /usr/libexec/httpd to see what I mean). -Swift
Re: Phoronix 8-way-BSD-install - NetBSD bombed
On Wed, 14 Sep 2016, co...@sdf.org wrote: > I feel that for home users, -current may be a good choice. I'd agree. It might be worthwhile to put some link to the releng site where you can download ISOs for -current a little more prominently on the NetBSD website. It's currently a little blurb way at the bottom of the "Get NetBSD" link and it doesn't mention "You might want to use this if you are dead-set on using newer hardware." I'm not trying to be snarky, I'm just agreeing in detail. :-) > netbsd 7.0 is entirely unusable on much of my hardware. desktop was > extra bad. no USB3 means USB keyboard interrupts are lost or something, Hmm, that sucks. I didn't have the same experience, but I can understand how that's frustrating. > need to boot with ACPI disabled (disables hyperthreading), cannot > install from USB, I didn't try installing from USB yet with any 7.x release. The issue with ACPI is nasty, though. > lack of graphical acceleration for nvidia cards means when running old > Xorg it took 1 minute to run a command like 'su', new Xorg can handle > until X is shut down once (all fixed in -current). IMHO, nVidia will probably always be a problem. I've never forgiven them for being so (incredibly) rude to me in the late 1990's when I was trying to get some non-NDA specs from them. They didn't just refuse, they were completely unprofessional about it and basically laughed in my face. However, that's my own problem. In general, though, no matter what the state of ATI vs nVidia is, they seem to have always fought open source, giving ground only when they felt they had to because of the competition or because they wanted CUDA to spread more. Deep down, nothing has changed in them. I doubt they will ever properly cooperate with anyone on the driver front. There is just too much work and logic that goes into their drivers and they don't want to expose it, which further complicates and aggravates their jerkholery. I won't buy anything that uses their hardware ... 
ever. Nonetheless, the situation is still pretty grim, even when looking at other graphics cards. All the old companies like Sigma, Paradise, Matrox, Neomagic, Trident, S3, and Number Nine have either gone out of business, been gobbled up, or fled into a niche. We are pretty much stuck with "the big three" these days. Now that graphics cards are uber-complex compute devices, it's going to take even more work to write drivers and keep them going. None of them seem to be interested in participating in something like AtomBIOS and exposing graphics cards in a similar fashion to how wireless firmware works these days (i.e., move the blobs to the hardware and provide a public API instead). I despair of ever seeing drivers hit a non-partisan project which could be shared among the open source world. I see new X11 display server replacements coming out like Wayland that seem ultra-Linux-centric and drop backward compatibility (and some features) from the core X11 protocol. Color me so-far-unimpressed. I've got a sneakin' suspicion that Wayland will be a de facto schism worse than the systemd one, but without the loud noises from the crowd. > on linux drivers are written before a release, or right after. so a > typical user which has 2-3 year old hardware can afford to use LTS > kernel. Good observation. It's something benchmarking and hardware test folks don't seem to understand at all. Of course, I've never been all that fired up to "enlighten" everyone or for "world domination". If NetBSD has enough folks to keep going - good enough. We've seen what happens to an OS that actually gets to "world domination" and they can keep it. > in netbsd, drivers only end up written after 2-3 developers get the > hardware, and they don't get it on release day. so this is a 2 year > delay in itself. I've adapted to this over the years. My strategies are: * Never buy something with nVidia anything in it.
* Try to buy desktop/laptop gear with Intel hardware, which seems to be the best supported, even if the support does have to come from Linux. * Create a bootable USB stick for testing new gear in the store. When I go in to buy, I make sure to ask the sales staff to let me boot the stick to see if things like the NIC, wifi, or framebuffer are going to give me the bird. * Retrofit newer machines with older framebuffers if you absolutely must have it now. Then re-install the card once it's got support. I'm not suggesting everyone do this (or can do it in all situations). I'm just saying it's what I personally do. > after this many users end up picking netbsd 7.0 release, not knowing it > is effectively like picking old ubuntu LTS, except with the additional > delay until developers (which are normal people and not companies) > obtain the hardware and get around to adding support. There is a cost to running a real open source OS with real principles and an honest-to-goodness UNIX && BSD heritage. I accept it.
Re: Intel Iris Pro 580 under NetBSD
On Sun, 11 Sep 2016, Benny Siegert wrote: > I admit that I don't know much about Intel graphics these days. My > machine has an Intel Iris Pro 580 graphics adapter. I also have a machine with this same adapter. It's a Lenovo Thinkcentre "Tiny" machine. I believe the problem is that NetBSD's intel KMS code is not yet pulled up to the rev that supports Iris graphics. > Running X crashes the machine: I get an empty screen with text mode > cursor, then it becomes unresponsive. That's the same as I get. > I tried setting the driver to "vesa", but the result is the same. I had the same experience. I even tried disabling the framebuffer console, which also didn't matter. > What is the correct incantation to get X on this card? Even > un-accelerated graphics would be preferable to a crash. I don't think that there is a way to do it right this minute. If it helps, I've noticed that FreeBSD Beta 11 is supporting it. I think we'll just have to wait on the NetBSD side of things. It'll happen eventually. Thanks, Swift
Re: Phoronix 8-way-BSD-install - NetBSD bombed
On Fri, 9 Sep 2016, D'Arcy J.M. Cain wrote: > I always dd nulls to a disk that was used for another OS before starting > a new install. Same here. > Specifically I have had issues installing NetBSD to a drive that had > Linux (Red Hat) on it. It's been a while but I think that the behaviour > he saw is what I saw. I ran dd to clean everything off the drive and > then it worked just fine. Hmm. It's one of those ritual-motion things I do now, and I can't even remember what happens when you don't dd-wipe a FreeBSD disk and try to do a fresh install of NetBSD. > In any case I have my doubts about someone who claims to have written > 10,000 articles over the last twelve years. For what it's worth: Well, I'd question the guy's testing methods and his Linux-centric "Phoronix Test Suite". It looks akin to putting gas in a diesel engine and wondering why the engine is such a turd. If he's going to use that script-ware crap on *BSD he might want to do the same level of testing and customization. He writes a lot of one-sentence or one-paragraph articles. The site is more like an RSS feed than a news site. So, I don't doubt the 10,000 number but using the term "article" to apply to all of them is a bit generous. > http://jonimoose.net/2013/moronix-why-amd-wont-take-michael-larabel-seriously-and-you-shouldnt-either/ Well, I have to say, I find this blogger's criticism pretty well placed. -Swift
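For anyone following the thread, the wipe ritual is nothing more than dd'ing zeros over the front of the disk, which clears the MBR, disklabel, and any old metadata a fresh install might trip over. Here's a sketch demoed on a scratch file, since on a real box the target would be the raw whole-disk device (e.g. /dev/rwd0d on NetBSD) and the operation is destructive:

```shell
# Demo of the "dd nulls" wipe on a scratch file. Against real hardware,
# point "of=" at the raw whole-disk device (e.g. /dev/rwd0d) -- only on
# purpose, since this destroys the partition table and disklabel.
disk=/tmp/fakedisk.$$
dd if=/dev/zero of="$disk" bs=1024 count=64 2>/dev/null
bytes=$(wc -c < "$disk" | tr -d ' \t')
echo "wiped $bytes bytes"
rm -f "$disk"
```

Zeroing even just the first few megabytes is usually enough; some people zero the whole drive, which only costs time.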
Phoronix 8-way-BSD-install - NetBSD bombed
http://www.phoronix.com/scan.php?page=article&item=trying-8-bsds&num=2 I wonder what happened in his case. I can tell he's re-using the same box and drive from his FreeBSD install, but I've done that many times and never had a problem (other than being annoyed at 'dk' devices showing up). I just wipe the disk and start over. In his case, the install locks up during the kernel initialization. From the looks of the dmesg/ring-buffer he's screenshotting, I'd say he's never getting to init. Phoronix is a pretty Linux-centric site. I have a suspicion this is just a way for him to say "Aww, look, half the benchmarks I set up specifically for Linux in the Phoronix test suite don't work. BSD must really suck, I guess." Then it gives the kids on slashdot somewhere to link when they need to say "BSD is dead/sucks". I *have* been noticing that Linux's I/O performance edge in the last couple years seems to be really sliding in comparison to FreeBSD. It's still "kinda better" but it used to be "categorically WAY better". I'm not sure if that's more optimization happening in BSD land, or more suck leaking into Linux-land. I do a lot of testing using 'fio' and other benchmark tools and despite all the constant Linux hype of the Next-Great-IO-Scheduler, its position in the high speed I/O peloton is, if anything, sliding (especially versus FreeBSD ZFS and DragonFly BSD + HAMMER). Of course, it's all relative to what and how you test. YMMV. Nonetheless, it pains me to see NetBSD shown in such a bad light. -Swift
Re: installing on a VPS
Al, I have a friend who recently worked at Rackspace. Here is what he said, just in case it helps (I forwarded your original question to him): Swift's Pal Says: > I'm sure it's possible to get the netbsd kernel/basic userland running > echo 'hello world' to the console, seeing as the virtual machine gives > you direct access to a virtual block device. But even then I'm not 100% > sure it'd work because Rackspace uses a lot of custom PV drivers in its > xen implementation. E.g. there's a "supervisor domain" on each xenserver > that notifies an API when a customer dom's 'nova-agent' has started. > They do this over the xenbus. NetBSD won't have that in a vanilla build, > so in the rackspace console you'd see a vm in 'unknown' status; you > can't even see the console when it's in this state. > TL;DR: Rackspace makes sure that the linuxes they support can boot and > be controlled by all their special services - they didn't design wide, > just deep." Perhaps this doesn't apply to you if you are renting a "real" server at Rackspace instead of renting a virtualized guest. YMMV. -Swift
Re: configuring remote headless servers
On Wed, 31 Aug 2016, Steve Blinkhorn wrote: > It took three days for an engineer with sufficiently developed skills to > become available: He solved the problem by switching the server on. Having found no good way to truly address issues like this without some control of my own, I don't deal with an ISP that won't give me power control and console. HP iLOs are a good solution since they can be used for both a hard power cycle and give you a real remote console. If your console stays in text mode, you don't even have to license the iLO. Many server BIOSes have a mode whereby they can provide console support via a dedicated serial port (Tyan comes to mind as one of these). If you combine that functionality with something that can do remote power control (like a Baytech RPC or an APC network PDU) then you've got the same features. > But this led me to wonder how I would cope if, for instance, a server > came up in single-user mode requiring an fsck. If you have true console access it wouldn't matter. You'd do the fsck then keep truckin'. > I can see from the man pages for shutdown(8) and fastboot(8) that there > is provision related to this kind of circumstance. I'll just apologize because I doubt my response was what you were looking for. I'll simply say this: when it comes to hosted systems, the faster the system can bring up the network and ssh with the absolute minimum of dependencies, the better. AFAIK, I've never seen an OS that really "gets" this, as evidenced by the fact that even though OSes *could* use their ramdisk/miniroots to launch OpenSSH (and statically link it), they rarely do (and there are some reasons, but I usually disagree with their importance). For a server without a decent console, having OpenSSH started is de facto the same thing as having a usable server, thus the strategy should place a categorically *premium value* on doing that as soon in the boot process as possible with the least number of dependencies.
Also, NetBSD has the ability to redirect the console to a serial port as soon as the kernel starts booting. You'd need serial console hardware in place first before you can take advantage of that, but in your scenario of the system needing an fsck and stopping the boot process, it'd save you from having to call some data-center hands & eyes at your ISP. -Swift
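If it helps, the knob for that on NetBSD/x86 lives in /boot.cfg, so even the bootloader and single-user fsck prompts land on the serial port. This is a fragment from memory, so double-check boot.cfg(5) on your release:

```
# /boot.cfg fragment: hand the console to the first serial port
# at the bootloader stage (com0, default 9600 baud).
consdev=com0
timeout=5
```

Combined with a networked PDU for power cycling, that covers most "send an engineer to press the button" scenarios.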
Re: still upgrading from 2.0 to 7.0
On Thu, 18 Aug 2016, Steve Blinkhorn wrote: > I can login as an ordinary user across the network, but I cannot su from > there, and on the console if I su to an ordinary account and then try to > su from there, I get authentication failure. You probably already know this, but make sure the users you are trying to 'su' from are members of the 'wheel' group. Otherwise, they will not be allowed to 'su' to root. -Swift
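As a sketch of what su(1) is checking, here's group(5) membership in miniature. The 'swift' user and the sample line are made up for illustration; on a live box you'd just run `groups yourlogin` (or `id -Gn yourlogin`) and look for wheel:

```shell
# A made-up /etc/group line of the kind su(1) consults; the fourth
# colon-separated field is the comma-separated member list.
line='wheel:*:0:root,swift'
members=$(echo "$line" | cut -d: -f4)
case ",$members," in
    *,swift,*) verdict="swift may su to root" ;;
    *)         verdict="swift gets an authentication failure" ;;
esac
echo "$verdict"
```

Adding the user to the wheel line in /etc/group (with vi or usermod) and logging in again is the usual fix.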
Re: upgrading an old system
On Tue, 16 Aug 2016, Steve Blinkhorn wrote: > But the disk layout is sorely in need of revision. I'm not trying to be trite, but have you simply considered using dump(8) to back up your filesystems, install your chosen revision, and then restore? I haven't been closely following the thread, so I apologize if this doesn't work for you for some reason. However, here's a case for backup/restore: 1. You won't have to worry about ABI issues (but NetBSD does well here, as discussed). 2. USB disks and network backup targets are very cheap these days. If you lack good on-site access, a network backup still works if you have some option for that (or simply FTP the compressed dump file somewhere). 3. Most applications keep localized data out of the OS file systems, making it pretty easy to move to a new revision. Using the dump(8) utility will save your permissions etc.. > Is the disk layout configuration tool accessible other than through > running sysinst, or will I have to bite the bullet and edit the disk > label by hand? I don't think the disklabel format has changed in a long time. If it helps you, boot a newer revision of NetBSD and use the tool. Then you can reboot whatever version you want to install and Just Do It. -Swift
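In case it's useful, here's the shape of the backup/relayout/restore dance I'm describing. The paths and the wd0e target are placeholders, so check the dump(8)/restore(8) flags against your own release before trusting this:

```
# 1. Level-0 dump of each filesystem to cheap external storage:
dump -0 -a -f /mnt/usb/root.dump /
dump -0 -a -f /mnt/usb/usr.dump /usr
# 2. Re-install with the new disk layout via sysinst, then restore
#    each filesystem into its new home:
mount /dev/wd0e /mnt/new
cd /mnt/new && restore -rf /mnt/usb/usr.dump
```

Since dump works below the file level, ownership, permissions, and flags all come back intact, which is the main reason to prefer it over tar for this job.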
Re: Proxy server, mode intercept on NetBSD 7.0.1
On Tue, 2 Aug 2016, Manuel Bouyer wrote: > On Tue, Aug 02, 2016 at 01:06:41AM -0400, metalli...@fastmail.fm wrote: > > Only a "troll" because I was disappointed. Otherwise this would be > > known as kern/50629. > > Enabling IPv6 and ipfilter at the same time apparently leads to null > > pointer dereference. > > I'm running ipfilter on ipv6-enabled hosts and I've never seen panics. Same here. I have at least 2 machines that fit that description, run ALTQ also, and have fairly long ipf.conf configs and no panics. -Swift
Re: Proxy server, mode intercept on NetBSD 7.0.1
On Mon, 1 Aug 2016, metalli...@fastmail.fm wrote: > I've been very disappointed with the quality of NetBSD 7.0.1 since I > upgraded from 6.1.5 a few weeks ago. I'm not. I love 7 and I'm grateful to all the volunteers who made it possible for me to have it for FREE. 6.x had much less support for newer display adapters (Radeon, Nvidia, Intel) for X11. When 7 came out and had pulled up the driver versions I found that NetBSD suddenly supported more display hardware than FreeBSD and OpenBSD. Boxes that previously were nearly worthless with any version of BSD became fully usable as workstations. In other words, not all of us were "disappointed" as you put it. I, for one, don't like your disparaging tone, either. > I've been running pretty much the same system config as my home > router/NAT/firewall/server since NetBSD 1.5. I believe that's almost 15 > years of ipfilter/ipnat. It has always worked well for me... until I > moved to NetBSD 7. So what? I've been running NetBSD longer than you have and I don't care (along with everyone else who cares even less) because it isn't germane to squat. I've seen IPF blow up in lots of cases in the past, along with a few issues with ALTQ, but that's probably because I do more than just run some basement server. Stuff happens. Darren and others over the years have fixed lots of bugs with IPF. It happens. Deal. Just because one didn't hit you until now doesn't mean anything special. Just deal with it like anyone else does: articulate the issue, file a bug report, create a patch, etc... Crying about your "disappointment" doesn't help anyone. > I've had several issues with various parts of the OS, but ipf is the one > that causes random kernel panics. There are more choices now if you feel you are not getting your money's worth from NetBSD/IPF. There is always PF, for example. > so the entire home network is offline until I go downstairs, plug > something into the rack, and manually refresh it. Not cool, guys. Not > cool.
Here's me all concerned that you have to walk downstairs *yawn*. Not cool? Since when was it cool to rip a butthurt attitude to a group of developers and volunteers who've been helping you by your own admission for over 15 years? -Swift
Re: UFS fragments
On Wed, 13 Jul 2016, Michael van Elst wrote: > Fragments are just the last (partial) block of a file. Fragments from > multiple files can be combined into a single block to save space, > especially for small files. Thanks for replying. I'm assuming then that the metadata for each file just points to a set of extents, and thus it doesn't matter if the files are tiny and not occupying a full logical block. Of course, I wonder then what the point of a logical block is at all. However, I suspect this will make itself more apparent as I read more code. > When a file is extended, the partial block may need more space than is > available or may even require a full block for itself. Then it is > re-allocated. That makes good sense and is what I was pretty much expecting. I just couldn't identify if the reallocation copied the fragment into a new block or did some kind of pointer magic to truncate the block and point to a new spot where there was more space. I.e., I don't know how it works or what I'm doing and was just groping around trying to understand. :-) > This has little to do with contiguous block allocation. Okay, thanks. > The filesystem code needs to find out whether an operation is allowed. > You need something like 'user id', 'set of group ids' and 'has superuser > privileges' for the entity that started the operation, and that's > abstracted into 'cred'. Ahhh. Okay. After reading this I found the kauth(9) manual page and Elad Efrat does a really great job of documenting the kernel authorization framework. I was Googling and man'ing all the wrong things at first. > ffs_alloc() panics when called with NOCRED but not FSCRED. Heh, that part I figured when I saw this 2-liner: if (cred == NOCRED) panic("ffs_alloc: missing credential"); However, since I'm pretty unfamiliar with the kernel code or even more than primitive concepts, I didn't know about the kauth system.
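The arithmetic behind "fragments are just the last (partial) block" is easy to sketch. Assuming the common 16 KB block / 2 KB fragment geometry (bsize/fsize, an 8:1 ratio; your newfs defaults may differ):

```shell
# How a 20000-byte file is laid out with bsize=16384, fsize=2048:
bsize=16384 fsize=2048 size=20000
full=$(( size / bsize ))                  # whole blocks
rem=$(( size % bsize ))                   # leftover tail bytes, if any
frags=$(( (rem + fsize - 1) / fsize ))    # fragments covering the tail
echo "$full full block(s) + $frags fragment(s)"
```

So the tail occupies two fragments instead of wasting a whole 16 KB block, and when the file grows past what its current fragment run can hold, that tail is what gets re-allocated.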
> The code isn't even used for modern hard drives because it doesn't make > sense with all the hidden buffering and queuing going on. That's understandable. I was doing some more reading based on this trying to figure out how you knew that. I looked at ffs_dirpref() which has a lot of logic and rules around how to hunt for cylinder groups. However, I got lost pretty quickly. Some of the comments were helpful/insightful, but I'm not used to reading code this terse and compact with so many external references and includes. I.e., the kernel is a big project and I've only done small rinky-dink 30-40 .c and .h file projects, so it's stretching my limits (a good thing). My own code uses very long and descriptive variable names and that seems to annoy most other coders. However, it's a good crutch for when I come back to read something I wrote 6 months back... It's just my own style and weaknesses showing. I'm not criticizing the kernel code (I'm not smart enough to do that) ! Right now I'm reading code to understand this: " * 2) quadradically rehash into other cylinder groups, until an * available inode is located. " But it's just part of the tour. I'll figure it out or ask more intelligent questions. Thanks again for your insight and help. -Swift
UFS fragments
I've been studying ffs_alloc.c in the kernel source. I was trying to understand how the logical block allocation in UFS works. I never actually knew about "fragments". I get why they are there, but I have some questions if anyone has time: * I notice that fragments can be re-allocated. Could that create fragmentation issues? Is there a way to change allocations so that they are physically contiguous? Not that this matters much anymore. I'm just curious if it was ever an issue. * I see "cred" (type kauth_cred_t) being passed around a lot in the API calls. What is that doing? Aren't all these calls already going to be operating as root? I couldn't find the meaning of "NOCRED" in /usr/include/*.h or significant info related to kauth_cred_t, but I'm probably just looking in the wrong places. I'm thinking this is puffs-related, but I didn't think puffs and UFS knew each other. * I see some code that seems to be related to spinning rust. Like this line: " * 2) allocate a rotationally optimal block in the same cylinder." Are there many opportunities for optimizing for SSD, nowadays, or is it just too much trouble for not enough payoff? I'm not a kernel dev, so please bear with my ignorance. I'm just wandering through the code. -Swift
Re: slightly OT hardware question
On Thu, 26 May 2016, William A. Mahaffey III wrote: >> FYI, I use the TrendNET TU3-ETG v1.0R with NetBSD. This is a gigabit >> NIC with USB3 (though it uses USB2 in NetBSD). It works well and might >> give you some more options on smaller machines like that. > Hmm. That sounds promising. Pi-B+ ? Pi2/3 ? In service ? What are you > using it for ? The NIC shows up in > ifconfig ? Inquiring minds wanna know ;-). Thanks & TIA I've only used the adapter on AMD64 and i386. I'm only assuming it works on ARM. Sorry, I should have stated that. Here is a bit more info just for fun: % uname -a NetBSD m83.parsec.com 7.0 NetBSD 7.0 (GENERIC.201509250726Z) i386 dmesg: axen0 at uhub3 port 2 axen0: ASIX Elec. Corp. AX88179, rev 2.10/1.00, addr 8 axen0: AX88179 axen0: Ethernet address d8:eb:97:b3:ab:d9 rgephy0 at axen0 phy 3: RTL8169S/8110S/8211 1000BASE-T media interface, rev. 5 rgephy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, auto ifconfig: axen0: flags=8802 mtu 1500 capabilities=3ff00 capabilities=3ff00 capabilities=3ff00 enabled=0 ec_capabilities=1 ec_enabled=0 address: d8:eb:97:b3:ab:d9 media: Ethernet autoselect (10baseT half-duplex) status: no carrier -Swift
Re: slightly OT hardware question
On Wed, 25 May 2016, Hal Murray wrote: > Have you considered adding a USB-Ethernet adapter to a Pi? FYI, I use the TrendNET TU3-ETG v1.0R with NetBSD. This is a gigabit NIC with USB3 (though it uses USB2 in NetBSD). It works well and might give you some more options on smaller machines like that. Thanks, Swift
Re: debugging a memory leak
On Fri, 20 May 2016, Manuel Bouyer wrote: > I though ElectricFence would only detect things like use after free or > out of bound access, but not memory leaks ? Hmm, I thought it did. Like if you try to malloc() over a pointer and clobber it before you free()'d the previous one (say in a loop/iterative situation). A ghetto reference counter + LD_PRELOAD works pretty well for that, too. The only problem is that it makes your app fairly slow for testing if it does a lot of dynamic allocation. Neither approach is near as good as Valgrind, for sure. -Swift
Re: debugging a memory leak
On Fri, 20 May 2016, Manuel Bouyer wrote: > what tools do we have on NetBSD to find a memory leak in a userland > program (actually OpenCPN - which is a large C++ program with dynamic > libraries and uses dlopen()) ? Manuel, I'm guessing you are a much better C programmer than I, but I can relate what I use. I do three different things: 1. If it'll compile on Linux, I'll test it with valgrind and just apply the results on the NetBSD side. 2. ElectricFence, but I doubt it'd work with libs dlopen()'d. Then again, if you used it with LD_PRELOAD you might be able to pre-empt the symbols even in the dynamic libs (in theory). 3. I created a cheeseball reference counter similar to the one in glib by just wrapping malloc(). It just adds/drops/prints a linked list of structs which are the entries I'm tracking. When everything is working perfectly, I remove it. Voila, clean memory management without krufty GC overhead! > The memory usage of the process is slowly growing, until the system gets > out of ram/swap and kills it (on my evbarm which has no swap it takes > about 2 days). My guess (which ain't worth much; take it with a grain of salt) is that you'd find it pretty darn fast with valgrind. It's never failed me, but it's limited to Linux, Darwin, and Solaris. -Swift
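The bookkeeping behind option 3 is nothing fancier than balancing two counts. Here's the idea in miniature with a faked trace (a real wrapper would emit one line per malloc()/free() call and you'd match up the pointers, not just the totals):

```shell
# Fake allocation trace; a leak shows up as allocs minus frees > 0.
log='alloc 0x100
alloc 0x200
free 0x100
alloc 0x300'
allocs=$(echo "$log" | grep -c '^alloc')
frees=$(echo "$log" | grep -c '^free')
leaked=$(( allocs - frees ))
echo "outstanding allocations: $leaked"
```

If the "outstanding" figure keeps climbing while the program is in a steady state, you've found your leak, and the unmatched pointers tell you where to look.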
Re: uefi boot?
On Tue, 17 May 2016, Patrick Welche wrote: > > gptmbr.bin is just MBR code. You also need to put bootxx code in the > > partition at index 1 with installboot. I doubt it's going to help or be what you are interested in, but most systems have the ability to turn on "legacy BIOS" in the EFI interface that allows the standard old rig to work. I do this with HP DL gen9 servers because it cuts the post/boot time in half vis-a-vis EFI. However, I realize a lot of folks want EFI to work etc.. Perhaps you are just testing. My understanding is that EFI generally wants a FAT file system with a /boot directory and it uses naming conventions to grab what it believes is the first-stage loader. However, I know very little about EFI other than "it usually doesn't work at this point". I'm sure there are good things about it, I'm just not up-to-date with it. -Swift
Re: IrDA
On Tue, 10 May 2016, Andy Ball wrote: > I am telling irdaattach which serial port the IrDA adaptor is connected > to but then irdaattach has that port busy, so it's not free for use with > slattach. That makes sense, but then again, it doesn't make sense that pppd or slattach wouldn't let you use the returned TTY as a valid link. With Linux there is an "ircomm" driver that will re-present the IrDA link as a serial port. That's what you use to attach SLIP or PPP there. I'm guessing there must be something similar with NetBSD. In fact I just looked at an old script (for Linux) that's doing this: pppd /dev/ircomm0 115200 noauth 192.168.99.1:192.168.99.2 So, there must be something equal to that "ircomm0" device for use with either slip or ppp. I'm just not sure what it is or if there is some kind of special tool to give it an ioctl() to work as a serial/character device. -Swift
Re: IrDA (fwd)
On Tue, 10 May 2016, Andy Ball wrote: > That returns /dev/irframe0, which apparently is not something that I > can use with slattach... Hmm, yes, it seems it's telling you the framing device, not the TTY. Are you giving it a TTY name when you invoke irdaattach? From looking at the man page, that would seem to be the norm. Then you'd give that same TTY to pppd or slattach. -Swift
Re: IrDA
On Tue, 10 May 2016, Andy Ball wrote: > slattach: /dev/tty524288: No such file or directory I doubt that device really exists. I'd use the '-f' flag with irdaattach so it will tell you the real name of the TTY to use. > Do I need to use mkdev to make device files for these ports? I doubt it. I'm guessing that after a little more fiddling with irdaattach you can go back to pppd and it'd work. -Swift
Re: IrDA
On Tue, 10 May 2016, ball@mwo-c9kdw31.localdomain wrote: > On the desktop PC, "irdaattach -d tekram -h /dev/dty00"... Good. That gets you frame-level attachment to the tty device. Now you can run ppp or slip on it. > Can I run IP over irframe0? Are tty524288 and tty524289 devices that I > could use to configure SLIP? Do I need something (ircomm?) in between > SLIP and irframe0 to make this work? I would welcome some guidance. I think all you need to do at the point where your IrDA framing driver is attached to the TTY is to fire up slip or ppp. I'd recommend PPP since it's more flexible, supports compression, etc.. However, there isn't any reason you couldn't use slip, too. Something like this: server: pppd 192.168.0.1:192.168.0.2 ttyname /dev/tty524288 client: pppd noipdefault ttyname /dev/tty524289 That should give you a ppp0 on both sides. -Swift
Re: compression in dump(8)
On Mon, 2 May 2016, Christos Zoulas wrote: > It does; the dump format puts the directory info first so that it can > restore the stuff you selected in a single pass (it does not need to > seek backwards). Gotcha. Sorry for piping up about something that already works then! I'll be sure to try it next time I need to. > Of course if you select some, restore, try to select some more, > restore... It will not work since it will need to seek backwards then. Understood. That makes good sense. > Remember all this stuff was designed with tapes in mind, not random > access devices. Roger that. -Swift
Re: compression in dump(8)
On Mon, 2 May 2016, Christos Zoulas wrote: > Doesn't that work? zcat dump.gz | restore -f - Yes it does. What I don't believe will work is this (interactive restores when I only want to restore a few files): restore -i -f /path/to/mydump -Swift
compression in dump(8)
I notice that the dump command in NetBSD doesn't feature the use of any internal compression. If one compresses the dump file, then you can't use it as the basis for a restore. Is that because the compression functions aren't in libc (i.e., they are off in libz or liblzma)? Perhaps it's just a matter of someone doing the work? It'd be a big help for admins who want to be able to use a modern compression algorithm without having to uncompress the dump to use it. I, for one, would use it on all my systems. -Swift
Rumpkernel comments by Linus
Linus seems to frown on the rumpkernel efforts since he believes it'll put the OS into a straitjacket (my words, not his). The original post is below. However, what say you folks? Is he reacting to something he doesn't know anything about based on his general instincts or is he making a legitimate critique? Is it a value-system judgment or is he missing some fact about the AnyKernel approach that negates his preconceptions? Is it simply a case that Linux already gets lots of DriverLove[tm] from the vendors and they just don't have to care or is that too cynical? -Swift ---[ Slashdot Snippet ]--- https://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds-answers-your-question "anykernel"-style portable drivers? by staalmannen What do you think about the "anykernel" concept (invented by another Finn btw) used in NetBSD? Basically, they have modularized the code so that a driver can be built either in a monolithic kernel or for user space without source code changes ( rumpkernel.org ). The drivers are highly portable and used in Genode os (L4 type kernels), minix etc... Would this be possible or desirable for Linux? Apparently there is one attempt called "libos"... Linus: So I have bad experiences with "portable" drivers. Writing drivers to some common environment tends to force some ridiculously nasty impedance matching abstractions that just get in the way and make things really hard to read and modify. It gets particularly nasty when everybody ends up having complicated - and differently so - driver subsystems to handle a lot of commonalities for a certain class of drivers (say a network driver, or a USB driver), and the different operating systems really have very different approaches and locking rules etc. I haven't seen anykernel drivers, but from past experience my reaction to "portable device drivers" is to run away, screaming like a little girl. As they say in Swedish "Bränt barn luktar illa".
Re: Prevent firefox from making noise
On Thu, 21 Apr 2016, co...@sdf.org wrote: > It does. Great. I'll switch to pkgsrc-current and shut my whine-hole. > You may need to delete graphite2. Do you have version 1.2.4 and it > doesn't try to update it? What I've got currently with my pkgsrc-2015Q4 rig is: graphite2-1.3.5 Cross-platform rendering for complex writing systems Swift
Re: Prevent firefox from making noise
On Wed, 20 Apr 2016, co...@sdf.org wrote: > my graphics/graphite2 update that broke it (back and forth) I don't know > what should have been changed but didn't happen, missing revbump? it will > not build with graphite2 older than 1.3.5. Ah, okay. I'm not crazy, then. > sorry Thanks a ton for confirming. Now I can quit squirming around trying to figure out what I did wrong. I'll just wait for a fix. Do you happen to know if it's building in -current? I can swap out my pkgsrc rev easy enough if it does. Thanks! -Swift
Re: wireless configuration
On Wed, 20 Apr 2016, g...@duzan.org wrote: >Do you have wpa_supplicant configured and running? See the man page, > https://wiki.netbsd.org/tutorials/how_to_use_wpa_supplicant/ , and/or > /usr/share/examples/wpa_supplicant/*. IIRC, you don't strictly need wpa_supplicant to do WEP (though it does work). You'd at least have the option to simply use ifconfig(8) for it. Example: ifconfig ipw0 inet 192.168.0.20 netmask 0xffffff00 nwid my_net nwkey 0x01020304050607080910111213 Just FYI. -Swift (Let me be clear: this rant isn't directed at anyone on the list, nor is it germane to the question.) PS: Note in the content above the total lack of "Duuuhhh, why do you want to use WEP. OMG OMG OMG it's insecure. My friends at Def Con will laugh at you. Theo de Raadt will haze you. You'll never be able to smoke with us behind the gym again. etc..". That gets so old, and there are still legit reasons/scenarios to use WEP, as well as a whole constellation of other "insecure" or "obsolete" tech, IMHO. Yes. It's crackable. We ALL KNOW that. Sometimes people do seemingly unwise things for very good reasons that are hard to guess. I wonder why folks even respond with a trite non-answer just to publicly derail the person's question and personally critique their reasoning with their silly-mad 31337 s3cur1ty h0xx0ring skillz? Is there a secret fund for 15-year-old security consultants to save us from ourselves that nobody told me about? Do they really think the question bearer is going to take that onboard as useful info? Notice how they always act like you couldn't have possibly already known and intentionally disregarded the security 101 "best practice" fact they spout off? Perhaps this was the announcement: https://xkcd.com/386/ People do this all the time when I talk about IRIX. They will go off on some security rant before they even know if I've got the machine on a _network_ as if they have a prayer of educating me on IRIX, when most of them have never even seen it. "Oh, noes! 
It's insecure? Well, I'd better just throw all this hardware away, then." By the same token this is why I don't run OpenBSD. I'll probably catch hell for saying this, but security is **NOT** my #1 concern. Doing real work is, and over-securing your resources lowers the resource's accessibility. Plus, I'd like my machine to not run like a complete dog for the "sake" of security. Sorry for the overt negativity toward security maniacs, but I'm betting I'm not the only one who has felt this way. Please forgive me, I'm a working sysadmin with real operational requirements for what I do.
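If you want those WEP settings to survive a reboot, NetBSD can also read them from a per-interface file at boot. A sketch, assuming the same ipw0 example and a /24 netmask; check ifconfig.if(5) for the exact format on your release:

```
# /etc/ifconfig.ipw0 -- hypothetical persistent version of the ifconfig example
inet 192.168.0.20 netmask 0xffffff00
nwid my_net nwkey 0x01020304050607080910111213
```

Each line is handed to ifconfig as arguments when the interface comes up via rc.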
Re: Prevent firefox from making noise
On Wed, 20 Apr 2016, Roy Bixler wrote:

> I tried a quick search on it and the articles all say that firefox doesn't
> have a built-in way to disable audio (surprising, given the sheer number
> of available variables to tweak.)

That is surprising, now that you mention it.

> However, the articles mention available addons to either mute or alter the
> volume.

Thanks for checking. That makes sense. I'll try it. I should have looked there already.

I still doubt/wonder about building firefox (today, not in the past) under pkgsrc-2015Q4. It appears broken, and that brokenness appears to extend to the FTP site and thus to pkgin as well. Just wondering how we got to a pkgsrc-2015Q4 stable freeze with a busted firefox build. So, I figure it's got to be on my end, but then there's the "Why isn't it on the FTP site, then?" question. Those are auto-built, I believe. That leads me to speculate that the build server is hitting the same build failure for firefox as I am. Perhaps there is some problem with dependencies and I need to rebuild them all? Anyone know if there is a pkgsrc target to do this? "make replace", perhaps (or does that just rebuild the current package)? Thanks, Swift
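On the "rebuild them all" question: pkgsrc does have targets for this. From memory, and worth double-checking against the pkgsrc guide for your branch:

```
cd /usr/pkgsrc/www/firefox
make replace   # rebuild and replace just this one package, in place
make update    # deinstall this package and its dependents, then rebuild them all
```

"make replace" answers the parenthetical: it only rebuilds the current package. "make update" is the one that chases the packages depending on it.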
Prevent firefox from making noise
First of all, I love pkgsrc, and give hella credit to the team. Let's just get that straight before I start whining about what are possibly my own self-inflicted problems.

Can one disable or prevent sound from playing from one specific application (at the OS level)? If not, then is there any way to simply prevent Firefox from having sound abilities? I notice options to disable PulseAudio (the dirty turd that it is), but I'm not sure that's going to help if it simply switches to OSS, and Firefox has been utterly broken in pkgsrc-2015Q4 for me on every system I try to compile it on for some time now (posted about that about two weeks ago on pkgsrc-users with no replies). So, I can't test that theory. I'm using Firefox24 since it still works. Perhaps I've just horked up my systems and everyone else is fine?

I have zero reason to use Firefox or anything it would spawn to play audio. If I want to watch a Youtube video, I use youtube-dl. If I want to stream music, I'll use a real streaming utility like mplayer. I'm one of those curmudgeons that doesn't like s*** just playing willy-nilly from my infernal browser. I flat out just don't want Firefox _touching_ my sound device, the reason being that it (and/or Flash) rarely releases the device in a timely or easy fashion afterward. The same goes for Flash or nspluginwrapper or some bit of that Rube Goldberg machine. I find that I also get into a situation where, after Firefox horks up my sound device, I can never get it to function again without a reboot (and yes, I've used fuser and lsof to look for open file pointers on every damn sound device - fail: they don't exist).

Has anyone already solved this? Does anyone else get a big nasty compile failure when trying to build pkgsrc/www/firefox out of pkgsrc-2015Q4? If it's working for folks, then why is it missing from pkg_summary.bz2, so that I can't even snag it with pkgin from ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.0_2015Q4/All?
Any help / answers would be appreciated. -Swift
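Re the PulseAudio option mentioned above: pkgsrc package options are set in /etc/mk.conf, so a build without PulseAudio would look something like the following sketch. The exact option names vary by branch; `make show-options` in the package directory lists what's available:

```
# /etc/mk.conf (sketch)
PKG_OPTIONS.firefox=	-pulseaudio
```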
Re: scp dropping connections
On Thu, 7 Apr 2016, Christos Zoulas wrote:

> >I attached gdb on sparc64 to sshd process and after 30 seconds got the
> >following

> Do you have a NAT/firewall and you don't have keep state in your pass rules?

I've also seen misconfigured NIDS systems that are set up for TCP "shootdown" (i.e. sending RSTs to both sides with valid SEQ numbers, causing an instant disconnect). Occasionally they will see something in the encrypted data stream (or just the fact that it's encrypted) and shoot down the connection because it violates some network policy (usually they're just misconfigured to think that). If that's the cause, it's very easy to see in a packet trace, because all of a sudden, out of nowhere, you just see an RST hit you and kill the connection. Then on the opposite (client) side, if you take a trace at the same time, you won't see it actually _sending_ the RST. Thus, you know a NIDS spoofed it. -Swift
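A quick way to look for that, sketched with tcpdump (the interface name here is hypothetical; run the same capture on both ends at the same time):

```
# Watch for RSTs on the ssh connection; if one end "receives" an RST
# that the other end never sent, a middlebox spoofed it.
tcpdump -n -i wm0 'tcp port 22 and (tcp[tcpflags] & tcp-rst) != 0'
```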
Re: Graphics wiki page
On Thu, 7 Apr 2016, co...@sdf.org wrote:

> Others have recommended using such a live image when purchasing hardware
> like a laptop at a store.

That is exactly what I do. I take a USB stick with a copy of NetBSD installed on it into the store and boot it up. Then I simply run startx and watch the action from there. BTW, if there is work to be done in C or shell script to help with any kind of test suite + website idea like this in the future, I'd be willing to help out a bit. The same goes for hardware. PCIe-wise, I have about 20 discrete Radeon and 6 Nvidia graphics cards. Most of them work, but some don't. I'd be willing to test them directly or mail them to folks who want to help. There are only 1 or 2 I'd care to get back. Thanks, Swift
Re: Graphics wiki page
On Sun, 3 Apr 2016, co...@sdf.org wrote:

> It would be more fruitful to provide a page where people list hardware
> that works for them, http://dmesgd.nycbug.org is close.

That's very cool. The idea, in general, reminds me of the supported hardware database for BeOS back in the day. I got a lot of mileage out of that. Currently, none of the BSD distros have anything like that which I'm aware of. I was hoping some of that info would appear on the wiki at netbsd.se, but then it got taken down and replaced with the official NetBSD wiki, which doesn't seem to get nearly the same amount of love / content for whatever reason. -Swift
Re: PAE + i7 + 24G RAM + i386 kernel = panic
On Fri, 1 Apr 2016, Benny Siegert wrote:

> > I just took the GENERIC kernel, modified one line to enable PAE and then
> > rebooted my i7 with 24G of RAM. I'm using NetBSD 7.0 i386.

> This may not be the point, but: If you have that much RAM, why do you use
> a 32-bit kernel in the first place?

I know what you mean, but I have one lame-but-valid reason: Wine. It doesn't work on NetBSD amd64; it works only on i386. I hate this fact, but I use it a lot. There are several little Windows programs that do specialized things that are pretty "must have" for me on this specific system (I'm not willing to justify or rationalize to others why I need this - just understand it's must-have in my use case). Running them over VNC or something like that isn't an option for these specific apps. They need to run on my machine at near-native speed, locally.

That reminds me, I wish there was something like a "SunPC" card for a regular old PC. That way I could run Windows or some other heinous crap natively on a little PCIe card, while leaving my "real" OS untouched. Anyone ever hear of such a thing? I've seen little transputer boards, but nothing you can pass a VGA display through, ala the SunPC card for Solaris.

> In benchmarks for Go, the devs found that 64-bit code gives a 10-15%
> speedup across the board. This is because there are more processor
> registers in 64-bit mode, so more variables can be kept in registers.

I don't doubt it a bit. I normally run amd64 on most machines, but I have the one that needs to run Wine, and when it only sees 2GB of RAM the box will swap even just compiling stuff in pkgsrc. I'm using an SSD and I don't want it to swap (write cycles). So, I want the extra RAM to work so I don't run into swapping issues. -Swift
PAE + i7 + 24G RAM + i386 kernel = panic
I just took the GENERIC kernel, modified one line to enable PAE, and then rebooted my i7 with 24G of RAM. I'm using NetBSD 7.0 i386. As soon as the bootloader passes off to the kernel, it crashes and drops into the debugger. Trying to do a 'bt' (backtrace) causes an instant reboot. So, I recompiled again and turned on all the debugging features. This time I was able to get a backtrace (once; typing it a second time caused another instant reboot). I can transcribe what I saw on the screen and file a PR if that's helpful. However, I'm curious:

1. Is this a known issue?
2. Is this reproducible for anyone else?

Thanks, Swift
Re: linking issue - what am I doing wrong?
Some folks who have had similar issues asked what I ended up doing and if I'd post it. Here's the skinny. I was doing this:

gcc -g -Wall -I/usr/pkg/include -I/usr/X11R7/include -lXm \
    -L/usr/pkg/lib -o hello hello.c

I switched it to this:

gcc -Wl,-rpath,/usr/pkg/lib -Wl,-rpath,/usr/X11R7/lib -g -Wall \
    -I/usr/pkg/include -I/usr/X11R7/include -lXm \
    -L/usr/pkg/lib -o hello hello.c

That bakes the library search path into the resulting binary. Very helpful, actually. I just never really knew this was an alternative to something like always having to set LD_LIBRARY_PATH, but since I saw other programs that managed to pull it off, I thought I'd ask. I'm glad I did. There are so many smart folks on this list. Thanks, Swift
Re: linking issue - what am I doing wrong?
On Fri, 25 Mar 2016, Rhialto wrote:

> It looks like you need to give the runtime library path to the linker.
> See ld's -rpath option.

Yep. J. Hannken-Illjes sent me a note about the same issue and I was able to make it work.

> Unfortunately different compilers have slightly different ways of
> specifying this (and passing it on to the linker). I think -Wl,-rpath,arg
> is a common variant.

Well, I'm glad I'm learning about this. I'm sure it'll be useful on NetBSD and other platforms that don't use something like ldconfig.

> I have a rant somewhere about how this is better than a global
> system-wide search path such as used by lunix or freebsd, but I'd have to
> look it up :)

No sweat, brother, I have no dog in that fight. I'm just trying to make my simple little tutorial programs work. I used to be a lot better with C, and I've been striving lately to get better so I can participate in some projects I care about. I'm not surprised that I'm tripping over the linker after not writing any real C in 10 years. I guess this is how pkgsrc has to compile a _lot_ of stuff. The syntax for GCC is a bit clumsy, but it works. Once it's compiled in, it pretty well stays working. So, at least there is that. I don't have to mess around with ldconfig et al. Learning this also helps me understand the differences between platforms. -Swift
linking issue - what am I doing wrong?
I'm doing some tutorials on Motif. I'm really just getting started. I'm doing something ignorant while linking and I'm not sure what it is. What happens is that I'm able to get Motif and the X toolkit linked to my little test program, but the program won't run unless LD_LIBRARY_PATH is set. Yet other programs linked to Motif run fine without even setting that env variable. So, my fundamental question is: why can ldd resolve Motif for 'plan' but not for my silly hello program? Understand that if I set LD_LIBRARY_PATH to include /usr/pkg/lib, everything works fine. Here's what it looks like (let me know if you care what's in my silly little program; I believe this to be my own problem with linking):

sgriggs@m83 ~/code/motif $ gcc -g -Wall -I/usr/pkg/include -I/usr/X11R7/include -lXm -L/usr/pkg/lib -o hello hello.c
sgriggs@m83 ~/code/motif $ unset LD_LIBRARY_PATH
sgriggs@m83 ~/code/motif $ ./hello
Shared object "libXm.so.4" not found
sgriggs@m83 ~/code/motif $ ldd hello
hello:
        -lXm.4 => not found
        -lgcc_s.1 => /usr/lib/libgcc_s.so.1
        -lc.12 => /usr/lib/libc.so.12
        -lXt.7 => not found
sgriggs@m83 ~/code/motif $ ldd /usr/pkg/bin/plan
/usr/pkg/bin/plan:
        -lXm.4 => /usr/pkg/lib/libXm.so.4
        -lXmu.7 => /usr/X11R7/lib/libXmu.so.7
        -lXt.7 => /usr/X11R7/lib/libXt.so.7
        -lX11.7 => /usr/X11R7/lib/libX11.so.7
        -lxcb.2 => /usr/X11R7/lib/libxcb.so.2
        -lXau.7 => /usr/X11R7/lib/libXau.so.7
        -lgcc_s.1 => /usr/lib/libgcc_s.so.1
        -lc.12 => /usr/lib/libc.so.12
        -lXdmcp.7 => /usr/X11R7/lib/libXdmcp.so.7
        -lSM.7 => /usr/X11R7/lib/libSM.so.7
        -lICE.7 => /usr/X11R7/lib/libICE.so.7
        -lXext.7 => /usr/X11R7/lib/libXext.so.7
        -lXrender.2 => /usr/X11R7/lib/libXrender.so.2
        -lXft.3 => /usr/X11R7/lib/libXft.so.3
        -lfontconfig.2 => /usr/X11R7/lib/libfontconfig.so.2
        -lexpat.2 => /usr/lib/libexpat.so.2
        -lfreetype.17 => /usr/X11R7/lib/libfreetype.so.17
        -lz.1 => /usr/lib/libz.so.1
        -lbz2.1 => /usr/lib/libbz2.so.1
        -lXrandr.3 => /usr/X11R7/lib/libXrandr.so.3
        -ljpeg.9 => /usr/pkg/lib/libjpeg.so.9
        -lpng16.16 => /usr/pkg/lib/libpng16.so.16
        -lm.0 => /usr/lib/libm.so.0

Any ideas? -Swift
Re: slow disk in NetBSD-7.0 under qemu
On Thu, 24 Mar 2016, co...@sdf.org wrote:

> Hi, I'm running NetBSD-7.0 in qemu with host being Debian Linux. Disk
> operations are very slow (1MB/s unpacking local files, a bit better for
> the disk ones).

It's totally un-accelerated. No kqemu or other kernel modules are available for NetBSD, AFAIK. It's just plain slow. There isn't much you can do about it, since all the I/O is being done via CPU dynamic translation. You could try a different I/O subsystem driver, but I doubt it would help. While qemu is a nice, stable tool, it's slower than frozen molasses on NetBSD. It's barely usable for most OSes even on a 4.0GHz i7.

> I am invoking it with: qemu-system-x86_64 -m 512 -hda netbsd.img -cdrom
> NetBSD-7.0-amd64.iso -curses

You're already using -curses, which often helps a bit. The only other thing I can think of is to try with more memory. Perhaps a bigger buffer cache will help. You could also try running the image file from a RAM disk, though again, I doubt it'll help much, since the problem is just that qemu is 100% software emulation.

Since it doesn't look like folks have ever taken on porting something like FreeBSD's kernel module (and I'm sure it'd be a lot of work), I'm just hoping hard that FreeBSD's bhyve or OpenBSD's new "native hypervisor" will be ready at some point and turn out to be portable. NetBSD's Xen support is pretty good, but it's still Xen, which I find to be unstable (even on Linux), invasive due to too much kernel futzing without modularity, and too painful to use on the desktop (no direct graphics options). The Python scripts they ship for management, and the service daemons, like to blow up with tracebacks a lot for me, but YMMV. I'm not trying to slam Xen. I'm sure some people have a decent time with it, just not me. I'd prefer qemu over Xen, but I'm not skilled enough to port the kernel module, so I acknowledge I have no right to that desire. -Swift
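On the "different I/O subsystem driver" idea: NetBSD 7 does ship virtio guest drivers, so one cheap experiment is more guest RAM plus a virtio disk, which tends to cost a bit less per I/O than emulated IDE even under pure software emulation. A sketch (flag spellings per qemu's manual; gains may be modest):

```
qemu-system-x86_64 -m 2048 \
    -drive file=netbsd.img,if=virtio \
    -cdrom NetBSD-7.0-amd64.iso -curses
```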
Re: NetBSD's LVM works great for me
On Wed, 23 Mar 2016, Stephen Borrill wrote:

> It's been around since the -6 era, so it's not that new.

Ah, I see. For some reason, I never messed with it in 6.x. Sorry for the misinformation.

> I don't think any major development has been done on LVM since it was
> committed which is a shame as, while solid, it has missing features as
> you describe.

From what I can tell, it was done on a contract to Sistina Software UK. At least, that's what's in the man page. Maybe the maintainers/devs are $$$-only, and so nobody has picked the code up and given it a big hug yet. I'm not complaining, just observing. I know it's a lot of work.

> I would add multipathing as a wish-list item too.

Oh, that would rock. Yes, indeed. On a more spacey note, I wish multipathing could be better handled by the hardware, i.e. some firmware setting on a dual/quad-port HBA. I've seen this on crappy hardware-based iSCSI Ethernet HBAs, but never for fiber. I guess that doesn't help much if you use discrete cards. Plus, software MPIO probably gives you a lot more options for load balancing et al.

> Have you looked at HAST in FreeBSD? https://wiki.freebsd.org/HAST

I had when they announced it, but I forgot about it. Thanks for reminding me. Their page says they now have an async mode. I wonder how much it can buffer, ala DRBD-proxy. I guess it's time to test it out and get some time on the metal so I know for sure. In my opinion, the holy grail for database servers is to have full-speed local access to storage while staying replicated to somewhere in Timbuktu in near-real-time for DR etc. -Swift
Re: Silly shell question
On Tue, 22 Mar 2016, Eric Haszlakiewicz wrote:

> The special OS glue/magic that happens is that the kernel takes the list
> of environment variables passed to the execve() system call, and copies
> them to the stack of the newly executing binary (kern_exec.c if you want
> to go look).

Thank you, Eric. Your information is spot-on, similar to what Manuel and Johnny gave me. I read quite a bit of source code yesterday. The key for me was looking at the args to execve(), just like you suggest. However, there were bits in the Bourne shell source that also helped me understand how and when the list is unrolled and used by the child shell.

> Then, the code that runs before main() (which comes from crt0.o and
> automatically gets included in your program when you link)

Ahh, this is low-level and also interesting. I like learning about things we all take for granted sometimes. I've been coding in C for a while, and I never even knew the linker pulls this in and that it runs before main(), etc. The Wikipedia page says 'crt0' stands for "C runtime" and the zero means "very first!". Neat.

> If your program (/bin/sh in this case) calls one of the exec* functions
> that don't take an explicit environ pointer, the implementation in libc
> (e.g. execl.c) actually calls execve() and passes along the current
> environment for you (which is available as the global variable
> "environ").

I noticed in the shell source (var.c and others) that the shell takes some pretty strict measures in how it unpacks all the data from the array of character pointers it gets passed as "envp". I'm now understanding how you could get yourself into a real mess by not taking great care to sanity-check what you get. It appears to be a bit of a pain in the butt.

> Shell local variables aren't stored in environ at all and aren't passed
> along to the execve() call when the shell forks+execs some new process,
> they're just arbitrary data that the shell process keeps in its memory

Just for fun, I was hunting around in the Bourne shell code for the data structure it uses to store the fact that "hey, this variable has been exported". I'm guessing it's a linked list of pointers somewhere, signifying "these are special; if you fork() or execve() something, pass this stuff along". Since they are pointers, the data changes appropriately as needed without any copies being involved (I'm guessing).

> Actually, there's no reason that the shell needs to store even exported
> variables in its own environ global variable either, it just needs to
> make sure that it generates an appropriate array of variables when it
> starts a new process and calls execve().

Yes, I believe you, and from what I'm seeing in the source, that's exactly what happens. I remember that there used to be some security problems with 'sudo' (and also with telnet) because it didn't properly sanitize some of the environment it accepted from the parent process. Now I understand more of the why. Thanks to everyone who was gracious enough to explain what is probably really elementary stuff to most of you. I just wanted a "lower level" understanding of what was going on. Now I have it (or at least enough hubris to believe I do)! -Swift
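The whole mechanism can be seen from the shell in a couple of lines; only the exported name makes it into the envp array that execve() hands the child:

```shell
unset FOO BAR        # start clean in case these were inherited
FOO=local            # plain shell variable: stays in the shell's own memory
BAR=exported         # this one gets the export flag...
export BAR
# ...so a freshly forked+exec'd child sees BAR but not FOO:
child=$(sh -c 'echo "FOO=[$FOO] BAR=[$BAR]"')
echo "$child"        # prints: FOO=[] BAR=[exported]
```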
Re: Silly shell question
On Tue, 22 Mar 2016, Jeremy C. Reed wrote:

> Yes just use getenv. See the manpage. I wouldn't call it a "client" but
> either a child or replacement.

Got it. That makes sense. Johnny described it as "inheriting the environment", and that concept makes a lot of sense.

> The concept doesn't exist at that level. Have a look at the execve
> manpage. Also have a look at the src/bin/sh source too: execcmd in
> eval.c, environment in var.c, tryexec in exec.c

Juicy. This is perfect. I've got all those open right now, and I'm learning a ton. I see what you guys are saying about "environment" at a low level. Thanks a lot! -Swift
Re: Silly shell question
On Tue, 22 Mar 2016, Johnny Billquist wrote:

> Only environment variables are propagated to child processes.

Thanks for the info, but do you happen to know the actual mechanism by which the child process is able to "import" the exported variable? I.e. is it some special OS glue/magic, or is it just straight getenv() calls by the child shell/app? I don't see anything magical in the man page for getenv() that would distinguish an exported versus non-exported variable. -Swift
Silly shell question
In ksh, when you use the 'export' keyword, what is actually going on? Does it create a copy of the variable in memory? I doubt it, since I tried a test and I could see the exported version changing even if I just change the original variable:

# FOO=abc
# export FOO
# ksh
# echo $FOO
abc
# exit
# FOO=123
# ksh
# echo $FOO
123
# exit

So, what is it that really happens with exported variables? Does the shell/app that's starting up check for exported variables and somehow import them? How can it tell? I noticed there are a LOT of checks around exported variables in ast-ksh. It appears to be a dangerous proposition in some cases. Sorry if this seems like a dumb question. I'm just curious about the internals of how it works. I looked at the source code for ast-ksh, but it's pretty huge and hard to follow to answer this question. I'm just wondering what the key mechanism is. Curiosity only. -Swift
NetBSD's LVM works great for me
I've been using LVM under NetBSD now in 7.0 since the release. I have found it to be remarkably stable for such a newly implemented set of features. Maybe I just haven't been doing enough to beat on it. How possible/likely is it that NetBSD's LVM could get: * LVM caching devices ala Linux. RHEL 7.1 intro'd this and it works quite well. Ie.. using a fast disk to cache & front-end slower ones. This way you can get a ZIL-like feature without having ZFS or any specific file system. * Some kind of DRBD-async-proxy-alike feature. I know this is the killer pay-only feature in DRBD. It's also extremely wonderful. I'm probably dreaming, but this would be awesome. It could be implemented in LVM ala VxVM's "Volume Replicator" (which is also a "pay us a bit more because we know you want this awesome feature"). It could also be done standalone in a separate subsystem. Thanks a ton for the hard work. Having LVM in BSD is blowing my mind in a good way. Thanks, Swift
Per-user/group memory limits
I noticed this page: https://wiki.netbsd.org/projects/project/tmpfs-quotas/ Does that only apply to tmpfs (ie.. RAM disks) or would it be usable to limit total process memory usage by a given user? I'm just curious as to what portion of linux's "Control Groups" / "CGroups" is already present in the BSDs, and of course most notably within NetBSD. Thanks, Swift
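For the per-user process-memory half of the question, the traditional BSD answer is login classes rather than anything cgroup-like. A sketch of what that looks like; the capability names should be checked against login.conf(5) on your release, and the class still has to be assigned to users (e.g. with usermod):

```
# /etc/login.conf (sketch): a class with capped per-process memory limits
limited:\
	:datasize=512M:\
	:memoryuse=1G:\
	:tc=default:
```

Note this caps resources per process for users in the class; it is not the full per-user aggregate accounting that Linux cgroups provide.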
Re: Random lockups on an email server - possibly kern/50168
On Wed, 16 Mar 2016, D'Arcy J.M. Cain wrote:

> Can anyone suggest any other avenues to investigate?

Have you tried running a kernel with DDB enabled? If the machine will handle it, horsepower-wise, I'd turn that on and make sure all your debugging symbols are rolled up into your kernel image (i.e. cc -g, which you can set by un-commenting the makeoptions line in your kernel config). Then, when the thing falls over again, get a backtrace from it. If you get really angry and motivated, you might try a serial-line kernel debugger, since it seems the bug might lock up the keyboard and mouse. There are some instructions for that here: http://www.netbsd.org/docs/kernel/kgdb.html Swift
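For reference, the kernel-config side of that suggestion is only a few lines; these are sketched from a GENERIC-style config and worth checking against options(4):

```
options 	DDB			# in-kernel debugger
options 	DDB_HISTORY_SIZE=512	# enable history editing in DDB
makeoptions	DEBUG="-g"		# compile with full debugging symbols
```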
Re: Any way to "passive commit" to RAM without syncing ?
On Tue, 8 Mar 2016, Hal Murray wrote:

> I had a setup for a while to keep working log files in RAM. I have
> forgotten the details.

I have done something like that with XFS under Linux. The thing is, WAPBL logging doesn't seem to have an option (or perhaps I'm just ignorant of it) for logging to a separate block device. I.e. if I could point it at an MFS device for that, I think that would be pretty close to what I want. At the very least, it'd be a lot like ZFS's ZIL, theoretically.

I've also wondered if I'd be setting myself up for disaster by going into the kernel code and stretching the flush timers for UFS way, way out. I'm looking in /usr/src/sys/ufs/ffs at ffs_vnops.c and ffs_vfsops.c for any kind of timers or tunables, but I wasn't finding much. I found where fsync() and ffs_full_fsync() get defined, but nothing jumped out at me. Looking at /usr/src/sys/ufs/ufs, there seems to be more relevant code. The problem is that I can see several places where I'm going to shoot myself in the foot trying to delay or defer a write. Not having done my Ph.D. thesis in file systems, I find I'm a bit intimidated (but I'll try it anyhow, what the hell).

So, the bottom line is that it looks like the folks who've suggested some kind of work-in-RAM, periodically-copy-to-disk approach are onto the only working setup like this. It seems that if there is some general way to perform asynchronous "logging" to a RAM-based device, it's not easy. I'm assuming this because so few implementations exist. The one that comes to mind right away is the DRBD async log-shipping mode, which is a for-$$$-only killer feature in DRBD. In the best-case scenario, it can theoretically give you full-speed access to a local block device while still allowing you to have a mirrored block device in another state with 80ms latency or something. All you have to do is have a big enough buffer and enough bandwidth to handle any backlog that accumulates. That approach seems like one that could be generalized into a cache layer for any file system. Sprite did that, but only for its native file system; CacheFS for Solaris does that, too. RAM is now pretty cheap, and the volume is big enough that I can operate out of my 2-3 GB home directory; if that was fully cached, it still wouldn't matter on my systems with 16-24 GB of RAM. It's just an interesting thought. -Swift
Any way to "passive commit" to RAM without syncing ?
I know NetBSD supports RAM disks. I also know it uses them for installation, but I'm not sure if they are overlays on top of the static disk images or if they get loaded/populated with the image after the system boots. The effect is the same either way: you end up with a writable file system in RAM which is seeded from a disk image.

I was thinking about this in the context of another problem. Let's say I've got a machine with a boatload of RAM, and I have an application which wants extremely fast writes but doesn't really care that much about data integrity; in this case, neither do I. Having the system come back in the same state after rebooting is "nice to have" in this thought experiment.

So, here is the question: since a RAM disk is probably the fastest option, is there a way to basically leave data uncommitted until I run "sync" or something calls fsync()? Even better, is there a way to fake a sync until I run something else? In other words, operate out of a RAM disk until my human operator gives the go-ahead to flush the writes out to disk, or until the RAM disk gets full. I think Sprite did something like this back in the Jurassic Period. I also remember CacheFS for Solaris doing something similar for NFS.

I can think of a few applications:

- Internet-cafe-style browsing on older hardware. Those machines might benefit from a small RAM disk overlay rather than having to use something like a PATA disk for the browser cache.
- Editing high-bandwidth data like video or audio. Most of the time my editing sessions don't last more than an hour, and once I get a final cut I just save the project to a server and don't care much about the recording or editing workstation.
- Any kind of temporary storage that needs RAM disk speeds but still needs an option to sync/flush the file system back to long-term storage.

I'm guessing this would be a little tricky with something like UFS, since a process could alter/write to a file and it'd need to reflect that when another process wanted to do the same, fseek(), or dozens of other operations. However, I just wonder if there is already a way to do this and I'm just not aware of it. I wonder if the Linux LiveCDs that let you flush your changes to USB do something similar. I doubt it. I'm guessing they just load a RAM disk image and you operate on that until you reboot and it's lost. -Swift
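Short of a true write-back overlay, the closest stock building block is tmpfs: a RAM-backed working area with a manual sync-back step. A sketch (paths are hypothetical; size syntax per mount_tmpfs(8)):

```
mkdir -p /scratch
mount_tmpfs -s 2g tmpfs /scratch
# ...work at RAM speed, then "flush" by hand when ready:
rsync -a /scratch/project/ /home/me/project/
```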
Re: Reformatting little USB-harddisks
On Tue, 8 Mar 2016, Andrew Cagney wrote:

One gotcha! They can be slow to come online and may not be ready if you're trying to use or mount them during boot. This is especially slow, for some reason, on WD drives with SES. I have a 2TB WD USB3 drive for backups that has this issue. It's fine otherwise. NetBSD used to be a bit flaky with the SES operations (a long time ago), but it's great these days. My only wish is that it ran at USB3 speeds, but most USB3 drives can't exceed the USB2 spec anyway (except in brief bursts). So, it's not a big deal to me. -Swift
Re: SSD TRIM / "discard" works after a remount with mount -a?
On Tue, 8 Mar 2016, Benny Siegert wrote:

> I don't think that's a big deal. For the record, I have run my SSD
> without TRIM on NetBSD for a long while now. In fact, your email taught
> me about the "discard" option :)

Well, so have I on other systems. I've also read that some SSDs don't actually need TRIM at all. I use Samsung 850 Pros.

> Otherwise, to enable TRIM without rebooting, try: mount -uo discard /

I think that mount -a does the same thing. What I did to test was to try it on one system: after editing /etc/fstab and running mount -a, I see that "discard" is listed in the "mount" output. However, I'm a little suspicious that it's not actually on. Paranoia.

> > 3. dd if=/dev/zero bs=4096k of=/my/affected/file/system/DELETEME.000
> > I'm assuming short blocks get written as partials.

> I would skip this step and the following ones. They likely do more harm
> than good.

I'd like to know where you are coming from on that. I get that SSDs have to be wear-leveled, but I'm wondering what's going to happen the next time I write to a block that was deleted but not properly TRIM'd. Wouldn't that cause a pause / blocking while the TRIM operation had to complete? Maybe I have a jacked-up understanding of the dynamics, but hey, that's why I asked :-) Thanks for the feedback. -Swift
SSD TRIM / "discard" works after a remount with mount -a?
I have a scenario where I have several NetBSD systems which were just upgraded to SSDs. I forgot to turn on discard (TRIM) support on the drives when I first put them in. So, here is my plan; anyone see a problem with my assumptions (FYI, these are all NetBSD 7 x86_64)?

1. Edit /etc/fstab and add the "discard" option to all FFS file systems.
2. Run "mount -a", which should re-mount the affected file systems with the newly added option. I'm assuming this works for the trim/discard option.
3. dd if=/dev/zero bs=4096k of=/my/affected/file/system/DELETEME.000 (I'm assuming short blocks get written as partials.)
4. "sync", which I'm assuming will make sure that every byte actually gets written to the device and not just cache.
5. "rm /my/affected/file/system/DELETEME.000", and I'm assuming that the discard operations will kick in after the delete.

My goals:

1. Enable TRIM / discard on the fly without rebooting.
2. Fix the non-TRIM'd blocks by filling the device with a big zero-file and then deleting it.

Thanks, Swift
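For step 1, the fstab entry ends up looking like this sketch (device name and the other options are hypothetical; adjust to your actual layout):

```
# /etc/fstab -- FFS root with discard added
/dev/wd0a  /  ffs  rw,log,discard  1 1
```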
ssh-copy-id scriptae non grata?
Just a question. I know that 'ssh-copy-id' is just a dumb script you could write yourself or copy from another system. I know it's a 'contrib' item with openssh-portable. However, it's still kinda nice. Is there any chance of seeing it hit NetBSD? If not, I'll just check the script into my own home-directory CVS and be done with it. I'm just curious. -Swift
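For anyone who wants to carry it around in the meantime, the core of ssh-copy-id fits in a small shell function. This is a sketch only; the real contrib script also handles default key locations, ports, and various sanity checks:

```shell
# Minimal stand-in for ssh-copy-id: append a local public key to the
# remote authorized_keys, creating ~/.ssh with safe permissions first.
ssh_copy_id() {
    key=$1 host=$2
    if [ -z "$key" ] || [ -z "$host" ]; then
        echo "usage: ssh_copy_id keyfile user@host" >&2
        return 1
    fi
    ssh "$host" 'umask 077; mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys' < "$key"
}
```

Usage would be `ssh_copy_id ~/.ssh/id_rsa.pub user@host`; it prompts for the remote password once, and key auth works from then on.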
Re: bta2dpd - advanced audio distribution profile bluetooth daemon IMPROVED
On Fri, 4 Mar 2016, Nathanial Sloss wrote: Call for testers of the next installment of bta2dpd. Yay! Nice one. I'll test it out this week. It looks fun. It allows you to stream music or pad(4) output to bluetooth stereo headphones or speakers using the advanced audio distribution profile (a2dp). Ahhh, that's going to rock, quite literally. I've got a SoundBlaster BlasterAxx and a new set of Sony BT headphones. I'll try it with both. NEW!!! bta2dpd can be used also as an audio sink, so you can stream music from your phone/computer etc. to a file or audioplay to hear it on your computer speakers. See below. Hmm, does it have any noticeable delay? Ie.. could it be used to send output to a recording monitor without any significant delay (for live monitoring while playing music)? If you don't know off hand, it's no big deal. I'll just try it. I have two mixers hooked up to my main NetBSD workstation (though I sometimes move them to an SGI workstation). One for input, one for output. I'd love to be able to add a high bitrate BT DAC on a channel for both sides (know of any good ones you like?). To use this daemon requires 44100Hz stereo signed 16 bit little endian wav files or the pad(4) device. Just like a CDDA converted to PCM WAV would be, right? Most of the options change the way audio is encoded and are not necessary; however, if you have skipping audio or no audio try the following: Does that buffering result in more jitter or delays? Again, I'll fiddle, but just curious about your experience. What fun! Great work. -Swift
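For anyone whose source material isn't already in that format, sox from pkgsrc should be able to produce it. A sketch, assuming sox is installed (to_a2dp_wav is a made-up wrapper name):

```shell
# Convert arbitrary audio input to the 44100 Hz / stereo / signed
# 16-bit little-endian WAV that bta2dpd expects. Requires sox (pkgsrc).
to_a2dp_wav() {
    sox "$1" -r 44100 -c 2 -b 16 -e signed-integer -L "$2"
}
```

Usage would be something like `to_a2dp_wav song.flac song.wav`; sox inserts the rate and channel conversions automatically when the output format options differ from the input.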
Re: ZFS
On Thu, 3 Mar 2016, Aaron B. wrote: Yes, NetBSD has these, but it's a lot easier on ZFS. As an example, I don't have to worry about shrinking or growing partitions, because they are all pulling space from a common pool. You are right and this is true. I wouldn't say it's "much" easier, but it's more coherent to me, too. I haven't used fss(4) yet, but ZFS snapshots appear to be a lot more powerful. I use zfs clone and zfs send/recv extensively. They are. I use send/recv and totally agree. I've run into significantly more limitations with fss(4). On the other hand, ZFS on anything other than Solaris has always felt like a square peg in a round hole. Agreed. I think that feeling will grow as ZFS's core code continues to drift. I think it makes a lot more sense to put energy into porting Hammer. HAMMER is different, since it doesn't have volume management built-in. I actually like that, but then again I'm old and I've learned to love LVM in my attendance at the School of Hard Knocks. I still think the layered approach offers more flexibility without sacrificing too much usability. Another interesting thing about HAMMER vis-a-vis ZFS is that HAMMER has this ambitious scheme to be distributed and the master-slave code already works today (and I suspect will get more distributed and bad-ass). AFAIK, ZFS can't do that and has no plans to go distributed+clustered or anything like that. There is definitely something sexy about that idea/feature. It reminds me of Amoeba or Sprite ideas which sadly don't get much play anymore. This gets us a filesystem with a future that can be controlled and guaranteed, as well as integrating cleanly. Damn straight. Those are both Good Things[tm]. -Swift
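For anyone who hasn't tried it, the send/recv workflow mentioned above looks roughly like this (replicate is a made-up wrapper; the dataset, snapshot, and host names are placeholders):

```shell
# Snapshot a ZFS dataset and replicate it to another machine.
# All names are placeholders; assumes zfs on both ends and ssh access.
replicate() {
    ds="$1"; snap="$2"; host="$3"; dest="$4"
    zfs snapshot "$ds@$snap"                     # point-in-time snapshot
    zfs send "$ds@$snap" | ssh "$host" "zfs recv -F $dest"  # stream it over
}
```

E.g. `replicate tank/home nightly backuphost tank/home` would be the shape of a nightly job.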
Re: ZFS
On Thu, 3 Mar 2016, co...@sdf.org wrote: Someone on IRC implied that he is using ZFS. Still struggling to believe, so I gotta ask - is there anyone out there using it? I'm not. I never knew it got past the idea stage for NetBSD. It's listed on the projects page here: https://wiki.netbsd.org/projects/project/zfs/ Did it ever get to the point where it was usable? The last note on the project page says "It is not finished. Ask on tech-kern for more information about the current state." but it's dated 2014. Like you, I wonder about ZFS in general. Didn't the NetBSD project pay someone to complete it, or am I wrong and it was a purely volunteer effort? Was it part of a Google SoC project? Can it ever be pulled into the mainline code (ie.. isn't the CDDL incompatible with the BSD license?)? ZFS has some great features. I've used it quite a bit under Solaris and FreeBSD. However, it's also proven to be a source of a lot of crashes I've seen under both (but especially FreeBSD). I'm guessing it'd be a really big challenge to re-implement things like deduplication or other key ZFS features, but then again I look at HAMMER and wonder... Since HAMMER is BSD-focused (and BSD licensed) maybe that's a better choice to throw in with? Then again, I don't know the politics of that, either. A few tidbits about ZFS: * Solaris never has had LVM. Sun had a history of making spectacularly bad choices before ZFS came along. Those of you who have tried to recover a mis-matched SDS/SVM rig know what I mean. They had a chance to get VxVM on the cheap in the 90's and pooched it (while HP-UX, Tru64, and UnixWare snatched it up). So, NetBSD has RAIDframe which works nicely, and now a very sweet LVM. The urgency to have something like ZFS in NetBSD shouldn't be quite as acute as it was in Slowlaris. * I don't consider having block storage commands separated from file system commands a weakness. Ie.. LVM + FS == ZFS && ! ZFS > everything. 
I think that puts me on the outs with the cool kids. * If you put all the storage features in NetBSD up against ZFS, the main things you come up missing on are: - Huge sizes on all the upper limits. The "zettabyte" part is real. - Deduplication - online fsck (scrub) - An unprecedented level of paranoia about data integrity However, of those only deduplication is all that exciting. Other ZFS-like features we have (if you consider the full NetBSD toolset): - Block device encryption - Ability to grow a file system (I don't think UFS in NetBSD can shrink) - Block device abstraction, aggregation / RAID (LVM / RAIDframe) - File system snapshots - Root disk mirroring Other stuff I'm not sure about: - Can NetBSD do RAID6 / RAID-DP in software? - What about the ZIL and L2ARC, are there already NetBSD equivalents? - Does NetBSD's LVM support multiple copies of an LV on physical PVs a la ZFS ditto blocks? - Does anyone care that ZFS is now closed source and the CDDL forks are now going to drift significantly from the parent core? I don't, really, but I'm just wondering if that's a concern. - Do we have anything against HAMMER? I personally think it's cool. It's an interesting topic to me. -Swift
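As a concrete example of the "grow a file system" point, the layered NetBSD approach is something like this sketch (grow_fs and the device paths are hypothetical; resize_ffs wants the filesystem unmounted):

```shell
# Grow an LVM logical volume, then the FFS sitting on top of it.
# Hypothetical names; run as root with the filesystem unmounted.
grow_fs() {
    lv="$1"; newsize="$2"; rawdev="$3"
    lvm lvextend -L "$newsize" "$lv"   # grow the logical volume
    resize_ffs "$rawdev"               # grow FFS to fill the enlarged LV
}
```

Usage would be along the lines of `grow_fs /dev/vg0/home 20G /dev/mapper/rvg0-home`; two commands instead of one `zfs set quota`, but each layer stays independently scriptable.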
Re: "No route to host" in Alpine
On Wed, 2 Mar 2016, Marco Beishuizen wrote: Could be, I don't know. Is there a way to make NetBSD try IPv4 first? I'm not sure there is any way to change that. Something like "ifconfig inet6 down" might be cool, but doesn't work. It'll take down the whole interface (including ipv4). What you could do, just to be sure, is compile a custom kernel without any ipv6 support in it. -Swift
Re: Qemu + tiny core linux = poor man's chromium on NetBSD
On Sat, 27 Feb 2016, Mayuresh wrote: What doesn't change is, whether you like it or not, you have no option but to work with such websites. Well, had that not been the case I'd use elinks almost everywhere... I feel the same way. I've used Chrome enough on other platforms to see that it's clearly faster than most and has some nice features. However, I'm distrustful of Google and Chrome seems to lack some of the privacy and anti-tracking features of others. Best option is of course to have it natively on NetBSD and I am sure it will come some day. I'm sure it will. FreeBSD and OpenBSD already have it. Hopefully, it's just a matter of time. Linux emulation layer could be an option. But I personally do not have sufficient expertise to try Linux version of chromium through NetBSD Linux emulation layer. I have tried it. It seems to fall apart somewhere in emulation-land. Maybe if someone has, kindly submit a package to pkgsrc or wip. (Just like libreoffice has native as well as Linux emulation version we might have both in future.) That'd be nice. I like how that's done with Firefox. Linux as Xen DomU is an option and I know how to use it, but I have several other difficulties with xen, particularly related to graphics. I have two problems with Xen. Lack of a console is problem #1. The second is that the Xen tools that are written in Python love to fail for a variety of reasons and when they don't fail, they often don't work properly. I've had terrible experiences with the toolset over the years on both NetBSD and Linux. So I come to Qemu. Qemu is categorically awesome, if you ask me. Fabrice Bellard is a genius. The only problem is the lack of any acceleration via kqemu and/or friends. Even on very fast systems, the performance hit is massive. Tiny Core Linux can be a very good option as it is lightweight and has the chromium browser in its repository. Are you using some kind of Docker version? I'm just curious. Getting it to work is pretty easy as well. 
After getting the terminal I set the DISPLAY to that of the host and launch chromium perfectly fine. You're lucky then. I've done the same and it doesn't always work. Many times it complains about lack of some X11 extensions and XDMCP won't work. Sounds like your combination works well. I tried to create a nested X server with Xnest and launched chromium in that. I suppose you could also try to launch it within an X11 session running in Xvnc. Of course, that sucks because you can't simply launch Chrome and redirect the display, you have to show the whole "desktop". That's my #1 complaint about VNC - you can't share out apps (with the not-very-good exception of SeamlessRDP which is pretty breaky). Now the freezing behavior is gone. However the Xnested application does not render properly. Many of the widgets, text elements simply do not appear properly under Xnest. That almost always happens to me when using Xnest. Above was on NetBSD 7.0 amd64 Qemu 2.4.0.1, but I had witnessed similar behavior with older NetBSD and Qemu versions in the past. Does the whole VM lock up or just Chrome within it? -Swift
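For reference, the kind of invocation I'd use to boot the Tiny Core ISO under Qemu with user-mode networking (run_tinycore, the memory size, and the NIC model are my own choices, not anything from the thread):

```shell
# Boot a Tiny Core Linux ISO under qemu with user-mode networking.
# No kqemu-style acceleration, so expect it to be slow. ISO path is
# a placeholder.
run_tinycore() {
    qemu-system-i386 -m 512 -cdrom "$1" \
        -netdev user,id=n0 -device e1000,netdev=n0
}
```

From inside the guest you can then set DISPLAY to the host (user-mode networking exposes the host as 10.0.2.2 by default) and launch chromium.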
Re: create keys and certificates for postfix/tls
On Mon, 29 Feb 2016, Martin Husemann wrote: I am currently using free certificates from StartSSL. Interesting that they even offer such a thing. I had to look them up. I looked at letsencrypt, but I couldn't make any sense of it - can somebody explain (from an admin point of view) how that is supposed to work? It's a science project, for sure. I was playing with it recently under FreeBSD. My impression of how it's supposed to work is this: 1. You install a Python script using git. 2. You run the script and it tries to autoconfigure for your system. It's a script, so of course, that's mostly going to fail. The script tries to detect things like your cert locations in your Apache config. It does claim to be able to manage raw certs. 3. The script, in conjunction with back-end tools on their site, checks your domain's TXT records for a special record with some special sauce to auth your CSR or whatever. Of course I will NOT install arbitrary 3rd party server side software (where my server OS isn't even officially supported) to handle important things like certificate renewals when it is a very simple task to do just once a year. Their intention is, I believe, for you to run this Python script every day until the end of time and it'll handle cert updates automagically. They don't issue certs for any longer than 90 days as far as I can tell. So, I'm guessing you'll be doing a lot of updating and it'd definitely need to work. They have a protocol for the crypto ops called ACME. So, I suppose the Python script is the first (and only?) implementation of that. Given all the hype about it, I am sure I must be missing something. What is it? My take is that it's a way to get a quick domain cert if you have control over your domain's DNS. I don't like the script approach since it threw all kinds of warnings and errors, then failed to work under FreeBSD; I'm guessing it'll fail even worse for NetBSD. In short, Linux Foundation + overly ambitious Python script = meh. -Swift
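If you want to watch step 3 happen by hand: ACME's DNS-based validation publishes a TXT record under _acme-challenge.<domain>, which you can inspect yourself. A sketch assuming dig (from BIND) is available (check_challenge is a made-up helper, and the domain is a placeholder):

```shell
# Look up the TXT record an ACME DNS validation checks for.
# Requires dig and network access; domain is a placeholder.
check_challenge() {
    dig +short TXT "_acme-challenge.$1"
}
```

E.g. `check_challenge example.com` shows whatever token the validator would see, which makes the "special sauce" step a lot less mysterious.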
Re: pf or npf?
On Thu, 25 Feb 2016, John Nemeth wrote: You didn't ask, but I'll add that the third option is ipfilter. It sits somewhere in the middle. It hasn't seen a lot of maintenance or enhancement lately, but it is still much newer than pf. Just FYI, the last version was 4.1.33 and was released 2013-04-24 according to SourceForge. Looks like Darren Reed still runs the project, but as you say, there isn't any action lately. It is also quite stable and usable. I still use it on Tru64 5.1B as it is the only realistic and free option available that I'm aware of. I've also used it on Solaris 8, IRIX 6.2 and 6.5, UnixWare 7, QNX, and HP-UX. I don't know much about all the bitchery and crying that went on between Darren and Theo. *shrug*. I will just say ipfilter works amazingly well considering some of the challenging and crappy situations I've put it in. Years ago I ran a firewall with IRIX 6.2 that was up for about 3 years with no issues at all (yeah, laugh it up at IRIX, but it was beat on constantly and nobody hacked it). All that said, I'm excited about NPF, too. Finally our own code we can go fine-grain or lockless on. That should help us push the turbo-button on the filtering performance. Congrats to Mr. Rasiukevicius and friends on a great job so far! -Swift
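Purely as a reference point for anyone weighing the three options, here is the shape of a minimal ipf.conf like the ones I'd run on a box like that (a sketch, not a drop-in config; no interface or NAT rules):

```
# Minimal ipf.conf sketch: deny inbound by default, allow ssh in
# with state, allow everything outbound with state.
block in all
pass in quick proto tcp from any to any port = 22 keep state
pass out quick all keep state
```

Load it with `ipf -Fa -f /etc/ipf.conf` and the stateful rules handle the return traffic.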
Linux emulation - chroot always?
When a Linux-binary runs, what does it "see" in terms of the root file system? So, for example, if I run 'ldconfig', does it see Linux libraries in /emul/linux/lib or just "/lib" ? Also, how does this play out when I want to run Linux binaries from my home directory? Ie.. if I wanted to run foobar.exe and it expects to find some shared lib in /usr/local/lib does that need to be relative or absolute? What about 32 vs 64 bit binaries, is there any automatic translation or chrooting for /emul/linux vs /emul/linux32 ? -Swift
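My (unverified) understanding is that the kernel tries the emulation root first and falls back to the real path, roughly like this userland approximation (emul_resolve is a made-up illustration, not the real kernel routine):

```shell
# Userland sketch of the path lookup order I believe NetBSD's Linux
# emulation applies: try /emul/linux/<path> first, fall back to <path>.
emul_resolve() {
    if [ -e "/emul/linux$1" ]; then
        echo "/emul/linux$1"
    else
        echo "$1"
    fi
}
```

Under that model, a Linux binary asking for /usr/local/lib/libfoo.so would get /emul/linux/usr/local/lib/libfoo.so if it exists, and the native path otherwise.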