More fun with the new smtpd.conf syntax
I'm trying to upgrade a trivial config:

    table aliases file:/etc/mail/aliases

    listen on lo0 inet4
    limit mta inet4
    bounce-warn 1d

    accept from local for any relay via 192.168.7.2

The aliases file contains, aside from the defaults:

    root: daia
    daia: daia@[192.168.7.2]

Basically this machine shouldn't receive mail, and all messages generated locally should be relayed to a certain user at some other machine. This has worked well for a few years.

My naive attempt to achieve the same with the new syntax doesn't work:

    table aliases file:/etc/mail/aliases

    listen on lo0 inet4
    set bounce warn-interval 1d

    action "local" mbox alias <aliases>
    action "relay" relay host 192.168.7.2

    match for local action "local"
    match from local for any action "relay"

This delivers locally generated messages to the local user "daia"; the forward alias "daia@[192.168.7.2]" is ignored. Commenting out the "match for local" line doesn't work either: it relays messages to the remote machine, but obviously aliases are not resolved.

Other combinations involving expand-only, forward-only, and virtual are mentioned by name, without actually being documented in any obvious place.

So, is there any way to make this work again?

Regards,

Liviu Daia
Re: Split zone DNS?
On 28 July 2017, Steve Williams <st...@williamsitconsulting.com> wrote:
> Hi,
>
> I recently upgraded to 6.1 and am trying to (finally, after many OpenBSD
> versions over 10 years) fine tune my home network.
>
> I would like to run a local resolver on my internal network that will
> resolve all my hosts on my local network to IP addresses on my local
> network(s) rather than resolving to their public IP addresses.
>
> I believe it's called a "split zone" DNS, where my domain is resolved
> locally, but everyone else is resolved using normal resolution processes.
>
> I set this up at one of my previous jobs using BIND, but that was 7 years
> ago. I've never gone to the trouble of doing it at home, but I would like
> to exercise my brain a bit as well as having my home network set up
> "better".
>
> What is the best tool to accomplish this these days? Is NSD the "modern"
> tool to be using on OpenBSD?
>
> Are there any hooks for dhcpd to update records?
>
> I've read the NSD(8), nsd.conf(5) man pages and that seems to be the way to
> go, but I thought I'd check the wisdom here to see if there is a better
> approach.

unbound(8) probably does exactly what you want. It's mainly a recursive resolver, but it can also answer authoritatively for "local" zones, or simply override addresses for given hosts (think anti-spam). Unless you also want to answer queries for your domain coming from the Internet, you don't need a separate authoritative server.

Regards,

Liviu Daia
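[Editor's note: a minimal split-zone unbound.conf sketch of the approach described above; the zone name and addresses are made-up examples, not from the original post.]

```
# /var/unbound/etc/unbound.conf -- split-zone sketch
server:
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow

    # Answer these names locally instead of recursing
    local-zone: "home.example.org." static
    local-data: "nas.home.example.org. IN A 192.168.1.10"
    local-data-ptr: "192.168.1.10 nas.home.example.org"
```

With a `static` local-zone, names under the zone that have no matching local-data get NXDOMAIN; `transparent` would instead fall through to normal resolution for the missing names.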
Re: OpenBSD IPSec setup
On 29 June 2017, Liviu Daia <liviu.d...@gmail.com> wrote:

[...]

> On the server:
>
> # iked -d
> ikev2_recv: IKE_SA_INIT request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 0, 510 bytes
> ikev2_msg_send: IKE_SA_INIT response from x.y.z.t:500 to 89.136.163.27:500 msgid 0, 471 bytes
> ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 1, 1520 bytes
> ikev2_msg_send: IKE_AUTH response from x.y.z.t:500 to 89.136.163.27:500 msgid 1, 1440 bytes
> sa_state: VALID -> ESTABLISHED from 89.136.163.27:500 to x.y.z.t:500 policy 'sb1'
> ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
> ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
> ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
> ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
>
> On the home router:
>
> # iked -d
> set_policy: could not find pubkey for /etc/iked/pubkeys/ipv4/x.y.z.t
> ikev2_msg_send: IKE_SA_INIT request from 89.136.163.27:500 to x.y.z.t:500 msgid 0, 510 bytes
> ikev2_recv: IKE_SA_INIT response from responder x.y.z.t:500 to 89.136.163.27:500 policy 'home' id 0, 471 bytes
> ikev2_msg_send: IKE_AUTH request from 89.136.163.27:500 to x.y.z.t:500 msgid 1, 1520 bytes
> ikev2_recv: IKE_AUTH response from responder x.y.z.t:500 to 89.136.163.27:500 policy 'home' id 1, 1440 bytes
> ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG
> ikev2_msg_send: IKE_AUTH request from 89.136.163.27:500 to x.y.z.t:500 msgid 2, 1520 bytes
>
> The warning about pubkey doesn't go away if I copy the server's
> certificate to /etc/iked/pubkeys/ipv4/x.y.z.t, nor if I install it in
> /etc/iked/certs.
And then there's this, which doesn't look normal:

> ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG

[...]

Ok, this post sent me on the right course:

    http://www.going-flying.com/blog/mikrotik-openbsd-ikev2.html

Here's what I did:

    cd /etc/ssl/vpn/private
    openssl rsa -in x.y.z.t.key -pubout -out ~/x.y.z.t

... copy ~/x.y.z.t to /etc/iked/pubkeys/ipv4 on the home router.

After that the VPN works, I can send packets from a machine at home and I'm seeing them on enc0 on the remote server:

    # tcpdump -n -i enc0
    tcpdump: listening on enc0, link-type ENC
    05:14:04.103254 (authentic,confidential): SPI 0xd51e3910: 192.168.7.2 > 10.0.0.102: icmp: echo request (encap)
    05:14:05.134106 (authentic,confidential): SPI 0xd51e3910: 192.168.7.2 > 10.0.0.102: icmp: echo request (encap)
    05:14:06.137831 (authentic,confidential): SPI 0xd51e3910: 192.168.7.2 > 10.0.0.102: icmp: echo request (encap)
    ...

However, I'm now running into what seems to be a firewall problem, and I'm getting no answer. I do have "pass quick inet proto esp" on both VPN ends. Any idea where / how to fix this?

Also, IPs aren't assigned automatically to the VPN ends. I can add them to hostname.enc0, but is this the right thing to do? I tried adding a line

    config address 10.0.0.102

to /etc/iked.conf, but that's rejected as a syntax error. A clue stick again please?

Regards,

Liviu Daia
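[Editor's note: the key-extraction step described above, written out as a standalone sketch. The paths follow the post; the verification step is an addition, not something the original author ran.]

```shell
# Derive the RSA public key that iked(8) expects from the server's
# private key (paths follow the post above; adjust to taste)
openssl rsa -in /etc/ssl/vpn/private/x.y.z.t.key -pubout -out ~/x.y.z.t

# Sanity-check that the result parses as a public key before copying it
# to /etc/iked/pubkeys/ipv4/x.y.z.t on the peer
openssl rsa -pubin -in ~/x.y.z.t -noout -text | head -1
```

The point of the exercise: iked's pubkey authentication wants the bare public key under /etc/iked/pubkeys/ipv4/, not the X.509 certificate, which is why copying the certificate there had no effect.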
Re: OpenBSD IPSec setup
On 28 June 2017, Rupert Gallagher <r...@protonmail.com> wrote:
> You need a server-signed certificate.

Ok, let me redo this from scratch:

(1) On the server:

    ikectl ca vpn create
    ikectl ca vpn install
    ikectl ca vpn certificate x.y.z.t create
    ikectl ca vpn certificate x.y.z.t install
    ikectl ca vpn certificate 10.0.0.1 create
    ikectl ca vpn certificate 10.0.0.1 export

    ... copy 10.0.0.1.tgz to the home router

(2) On the home router:

    tar -C /etc/iked -xzpf 10.0.0.1.tgz

Nothing seems to have changed. On the server:

    # iked -d
    ikev2_recv: IKE_SA_INIT request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 0, 510 bytes
    ikev2_msg_send: IKE_SA_INIT response from x.y.z.t:500 to 89.136.163.27:500 msgid 0, 471 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 1, 1520 bytes
    ikev2_msg_send: IKE_AUTH response from x.y.z.t:500 to 89.136.163.27:500 msgid 1, 1440 bytes
    sa_state: VALID -> ESTABLISHED from 89.136.163.27:500 to x.y.z.t:500 policy 'sb1'
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes

On the home router:

    # iked -d
    set_policy: could not find pubkey for /etc/iked/pubkeys/ipv4/x.y.z.t
    ikev2_msg_send: IKE_SA_INIT request from 89.136.163.27:500 to x.y.z.t:500 msgid 0, 510 bytes
    ikev2_recv: IKE_SA_INIT response from responder x.y.z.t:500 to 89.136.163.27:500 policy 'home' id 0, 471 bytes
    ikev2_msg_send: IKE_AUTH request from 89.136.163.27:500 to x.y.z.t:500 msgid 1, 1520 bytes
    ikev2_recv: IKE_AUTH response from responder x.y.z.t:500 to 89.136.163.27:500 policy 'home' id 1, 1440 bytes
    ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG
    ikev2_msg_send: IKE_AUTH request from 89.136.163.27:500 to x.y.z.t:500 msgid 2, 1520 bytes

The warning about pubkey doesn't go away if I copy the server's certificate to /etc/iked/pubkeys/ipv4/x.y.z.t, nor if I install it in /etc/iked/certs. And then there's this, which doesn't look normal:

    ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG

I'm using 6.1 release on the server, and the current snapshot on the home router:

    OpenBSD sb1.x.net 6.1 GENERIC#10 amd64
    OpenBSD router.x.net 6.1 GENERIC.MP#44 amd64

Regards,

Liviu Daia
Re: OpenBSD IPSec setup
On 28 June 2017, Philipp Buehler <e1c1bac6253dc54a1e89ddc046585...@posteo.net> wrote:
> Am 28.06.2017 11:18 schrieb Liviu Daia:
> >
> > set skip on { lo, enc }
> > pass in quick on egress inet proto udp to any port { isakmp,
> > ipsec-nat-t }
>
> needs (on both) a 'pass quick inet proto esp', too

I added that, and still no dice. Logs on the server:

    # iked -d
    ikev2_recv: IKE_SA_INIT request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 0, 510 bytes
    ikev2_msg_send: IKE_SA_INIT response from x.y.z.t:500 to 89.136.163.27:500 msgid 0, 471 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 1, 1520 bytes
    ikev2_msg_send: IKE_AUTH response from x.y.z.t:500 to 89.136.163.27:500 msgid 1, 1440 bytes
    sa_state: VALID -> ESTABLISHED from 89.136.163.27:500 to x.y.z.t:500 policy 'sb1'
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes
    ikev2_recv: IKE_AUTH request from initiator 89.136.163.27:500 to x.y.z.t:500 policy 'sb1' id 2, 1520 bytes

Logs on the home router:

    # iked -d
    set_policy: could not find pubkey for /etc/iked/pubkeys/ipv4/x.y.z.t
    ikev2_msg_send: IKE_SA_INIT request from 89.136.163.27:500 to x.y.z.t:500 msgid 0, 510 bytes
    ikev2_recv: IKE_SA_INIT response from responder x.y.z.t:500 to 89.136.163.27:500 policy 'home' id 0, 471 bytes
    ikev2_msg_send: IKE_AUTH request from 89.136.163.27:500 to x.y.z.t:500 msgid 1, 1520 bytes
    ikev2_recv: IKE_AUTH response from responder x.y.z.t:500 to 89.136.163.27:500 policy 'home' id 1, 1440 bytes
    ikev2_ike_auth_recv: unexpected auth method RSA_SIG, was expecting SIG
    ikev2_msg_send: IKE_AUTH request from 89.136.163.27:500 to x.y.z.t:500 msgid 2, 1520 bytes

Regards,

Liviu Daia
OpenBSD IPSec setup
I'm trying to create a VPN between my home network (sitting behind an OpenBSD router) and a remote server (also an OpenBSD machine). After reading many man pages and searching previous posts, I'm still thoroughly confused. What I have so far:

(1) On the remote server:

    - fixed IP, let's call it x.y.z.t

    - pf.conf:

        set skip on { lo, enc }
        pass in quick on egress inet proto udp to any port { isakmp, ipsec-nat-t }

    - iked.conf:

        ikev2 "sb1" passive esp \
            from 10.0.0.102 to 10.0.0.1 \
            local x.y.z.t peer any \
            srcid x.y.z.t

(2) On the home router:

    - the internal network is 192.168.7.0/24, the external IP is dynamic

    - pf.conf:

        set skip on { lo, enc }
        pass in quick on egress inet proto udp to any port { isakmp, ipsec-nat-t }
        match out on enc inet to 10.0.0.102 nat-to 10.0.0.1
        match out on egress inet from !(egress:network) nat-to (egress:0)

    - iked.conf:

        ikev2 "home" active esp \
            from 10.0.0.1 (192.168.7.0/24) to 10.0.0.102 \
            local egress peer x.y.z.t \
            srcid 10.0.0.1

Anyone, a clue stick please?

Regards,

Liviu Daia
Re: An AR9280 as an Access Point
On 12 October 2016, Liviu Daia <liviu.d...@gmail.com> wrote:
> On 11 October 2016, physkets <physk...@tutanota.com> wrote:
> > Hello!
> >
> > I'd asked a related question on the OpenBSD subreddit, and someone
> > pointed me here. Hope this is appropriate.
> > https://www.reddit.com/r/openbsd/comments/56lzhu/which_wifi_card_to_make_an_access_point
> >
> > Does anyone know how good a WiFi Access Point I could make of the
> > Atheros AR9280 card (Compex-wle200nx) offered by the guys at PC Engines:
> > http://www.pcengines.ch/wle200nx.htm
>
> I've been using one at home for maybe years. It mostly works, but

s/years/two years/

> my experience with it hasn't been great. For whatever reasons the rate
> of packet loss increased steadily over time. I've since re-purposed an
> old Netgear WNDR 3800 as a bridged AP, and I'm much happier with it.
> 802.11n, full power management, and no dropped connections ever, despite
> it being located in the exact same spot as the old AP.
>
> Regards,
>
> Liviu Daia

Regards,

Liviu Daia
Re: An AR9280 as an Access Point
On 11 October 2016, physkets <physk...@tutanota.com> wrote:
> Hello!
>
> I'd asked a related question on the OpenBSD subreddit, and someone
> pointed me here. Hope this is appropriate.
> https://www.reddit.com/r/openbsd/comments/56lzhu/which_wifi_card_to_make_an_access_point
>
> Does anyone know how good a WiFi Access Point I could make of the
> Atheros AR9280 card (Compex-wle200nx) offered by the guys at PC Engines:
> http://www.pcengines.ch/wle200nx.htm

I've been using one at home for maybe years. It mostly works, but my experience with it hasn't been great. For whatever reasons the rate of packet loss increased steadily over time. I've since re-purposed an old Netgear WNDR 3800 as a bridged AP, and I'm much happier with it. 802.11n, full power management, and no dropped connections ever, despite it being located in the exact same spot as the old AP.

Regards,

Liviu Daia
Re: choosing OpenBSD for fileserver instead of FreeBSD + ZFS
On 20 July 2016, Miles Keaton <mileskea...@gmail.com> wrote:
> Got a fileserver with a few terabytes of important personal media,
> like all old home movies, baby photos, etc. Files that I want my
> family to have access to when I die.
>
> Really it's more of a file archive. A backup. Just rsync + ssh.
> Serving it isn't the point. Just preserving it forever.

[...]

Don't rely on your machines alone. As other people have pointed out, a fire can ruin your backup in a few minutes. There are online storage services; make copies of your backups to two or more separate systems like this, make sure your family know about them, and know how to restore your files from them. Only when you have that sorted out should you spend time optimizing your local backup system.

Regards,

Liviu Daia
Re: ntpd tries to connect via ipv6
On 31 May 2016, Lyndon Nerenberg <lyn...@orthanc.ca> wrote:
> > On May 31, 2016, at 3:58 PM, Ted Unangst <t...@tedunangst.com> wrote:
> >
> > If we're talking about timeframes long enough for network
> > connectivity to come and go, that's long enough for IP addresses to
> > come and go as well.
>
> This is an interesting problem, in general.
>
> In my MTA development days, we would cache the targets of the MX
> record(s) we found in a queued message's metadata. For each host
> target, we included the absolute time the data would expire, based on
> the original MX lookup. Expired records were ignored, and when we ran
> out of hosts we would re-run the MX lookup and update the metadata.
> This worked quite well, considering the underlying DNS data didn't
> change all that often. But SMTP sessions are not long-lived, so this
> just worked.
>
> These days I wish I had similar functionality in pf. And not for
> mobile hosts. E.g., at work we need to open up access to things like
> Paypal payment API hosts. For those rules we can either hardwire IP
> addresses, or use their hostnames. But they inevitably move their API
> hosts around. In the first case, our list of hardwired IP addresses
> gets stale. In the second, the addresses returned by the A(AAA)
> record lookup gets stale. I would really like to be able to say
> "build the rule from this hostname, but refresh the A(AAA) record
> results as the underlying data's TTL expires."
>
> pf isn't special - this is the same problem as the ntpd example. I've
> puzzled over how to deal with this, but I can't see a solution that
> doesn't involve some sort of proxy that isolates the process from the
> network changes. And even then, you're dealing with at least a TCP
> connection reset if an existing address vanishes. For some things,
> that's not an issue. For others, ... ?

For pf there's an easy solution: put the IP in a persistent table, and have a cron job resolve the name and update the table when the IP changes. Obviously this only works with rules that can take tables to begin with, but that's good enough in many situations.

Regards,

Liviu Daia
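[Editor's note: the table-plus-cron approach suggested above, sketched as a script. The table name, hostname, and schedule are illustrative assumptions; the pf.conf side would declare `table <api_hosts> persist` and reference `<api_hosts>` in a pass rule.]

```shell
#!/bin/sh
# refresh-pf-table.sh -- resolve a hostname and sync a persistent pf
# table to the current A records. Intended to run from cron, e.g.:
#   */10 * * * * /usr/local/sbin/refresh-pf-table.sh
# "api.example.com" and "api_hosts" are illustrative names.
HOST="api.example.com"
TABLE="api_hosts"

# Keep only plain IPv4 addresses; dig may also print CNAME targets
ADDRS=$(dig +short A "$HOST" | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$')

# On resolution failure, keep the existing table entries rather than
# flushing the table
[ -n "$ADDRS" ] || exit 0

# Atomically replace the table's contents with the fresh addresses
pfctl -t "$TABLE" -T replace $ADDRS
```

Because the table is `persist`, it survives with its last-known contents when DNS is unreachable, which is exactly the failure mode you want for a firewall rule.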
USB problem
I have an APU1.C router monitoring a UPS through a USB cable. Today I upgraded the UPS to a different model, and the router can't see it:

    uhub3: port 5, set config 0 at addr 2 failed
    uhub3: device problem, disabling port 5

(full dmesg below). The USB port on the router and the cable are fine, since they worked with the old UPS. The USB port on the new UPS seems fine too, since I can see it on a Linux machine:

    $ lsusb -s 1:9 -v
    Bus 001 Device 009: ID 10af:0001 Liebert Corp. PowerSure PSA UPS
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               1.10
      bDeviceClass            0
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0         8
      idVendor           0x10af Liebert Corp.
      idProduct          0x0001 PowerSure PSA UPS
      bcdDevice            0.14
      iManufacturer          19
      iProduct                1
      iSerial                 2
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           34
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0
        bmAttributes         0xa0
          (Bus Powered)
          Remote Wakeup
        MaxPower              100mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           1
          bInterfaceClass         3 Human Interface Device
          bInterfaceSubClass      0
          bInterfaceProtocol      0
          iInterface              0
          HID Device Descriptor:
            bLength                 9
            bDescriptorType        33
            bcdHID              10.01
            bCountryCode            0 Not supported
            bNumDescriptors         1
            bDescriptorType        34 Report
            wDescriptorLength     511
            Report Descriptors:
              ** UNAVAILABLE **
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81 EP 1 IN
            bmAttributes            3
              Transfer Type            Interrupt
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0008 1x 8 bytes
            bInterval             232

Any idea?

Regards,

Liviu Daia

OpenBSD 5.9-current (GENERIC.MP) #1981: Thu Mar 31 14:51:55 MDT 2016
    dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2098520064 (2001MB)
avail mem = 2030616576 (1936MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev.
2.7 @ 0x7e16d820 (7 entries) bios0: vendor coreboot version "4.0" date 09/08/2014 bios0: PC Engines APU acpi0 at bios0: rev 0 acpi0: sleep states S0 S1 S3 S4 S5 acpi0: tables DSDT FACP SPCR HPET APIC HEST SSDT SSDT SSDT acpi0: wakeup devices AGPB(S4) HDMI(S4) PBR4(S4) PBR5(S4) PBR6(S4) PBR7(S4) PE20(S4) PE21(S4) PE22(S4) PE23(S4) PIBR(S4) UOH1(S3) UOH2(S3) UOH3(S3) UOH4(S3) UOH5(S3) [...] acpitimer0 at acpi0: 3579545 Hz, 32 bits acpihpet0 at acpi0: 14318180 Hz acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: AMD G-T40E Processor, 1000.12 MHz cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,MWAIT,SSSE3,CX16,POPCNT,NXE,MMXX,FFXSR,PAGE1GB,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,IBS,SKINIT,ITSC cpu0: 32KB 64b/line 2-way I-cache, 32KB 64b/line 8-way D-cache, 512KB 64b/line 16-way L2 cache cpu0: 8 4MB entries fully associative cpu0: DTLB 40 4KB entries fully associative, 8 4MB entries fully associative cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges cpu0: apic clock running at 199MHz cpu0: mwait min=64, max=64, IBE cpu1 at mainbus0: apid 1 (application processor) cpu1: AMD G-T40E Processor, 1000.00 MHz cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,MWAIT,SSSE3,CX16,POPCNT,NXE,MMXX,FFXSR,PAGE1GB,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,IBS,SKINIT,ITSC cpu1: 32KB 64b/line 2-way I-cache, 32KB 64b/line 8-way D-cache, 512KB 64b/line 16-way L2 cache cpu1: 8 4MB entries fully associative cpu1: DTLB 40 4KB entries fully associative, 8 4MB entries fully associative cpu1: smt 0, core 1, package 0 ioapic0 at mainbus0: apid 2 pa 0xfec0, version 21, 24 pins acpiprt0 at acpi0: bus -1 (AGPB) acpiprt1 at acpi0: bus -1 (HDMI) acpiprt2 at acpi0: bus 1 (PBR4) acpiprt3 at acpi0: bus 2 (PBR5) acpiprt4 at acpi0: bus 3 (PBR6) acpiprt5 at acpi0: 
bus -1 (PBR7) acpiprt6 at acpi0: bus 5 (PE20) acpiprt7 at acpi0: bus -1 (PE21) acpiprt8 at acpi0: bus -1 (PE22) acpiprt9 at acpi0: bus -1 (PE23) acpiprt10 at acpi0: bus 0 (PCI0) acpiprt11 at acpi0: bus 4 (PIBR) acpicpu0 at acpi0: C2(0@100 io@0x841), C1(@1 halt!), PSS acpicpu
Re: who(XXXXX): syscall 54 in the last few snapshots
On 12 October 2015, Sebastien Marie <sema...@openbsd.org> wrote: > On Mon, Oct 12, 2015 at 08:02:11AM +0300, Liviu Daia wrote: > > > > I get something similar without nagios: > > > > $ grep syscall /var/log/messages > > Oct 10 07:50:26 router /bsd: tty(2446): syscall 54 > > Oct 10 07:50:33 router /bsd: tty(29826): syscall 54 > > Oct 10 07:54:15 router /bsd: tty(10733): syscall 54 > > Oct 10 07:54:15 router /bsd: tty(19344): syscall 54 > > Oct 10 07:58:59 router /bsd: tty(5574): syscall 54 > > Oct 10 07:59:05 router /bsd: tty(14634): syscall 54 > > Oct 10 08:02:47 router /bsd: tty(12313): syscall 54 > > Oct 10 08:02:47 router /bsd: tty(5281): syscall 54 > > Oct 10 08:06:23 router /bsd: tty(9186): syscall 54 > > Oct 10 08:06:23 router /bsd: tty(9710): syscall 54 > > Oct 11 01:30:01 router /bsd: tty(6080): syscall 54 > > Oct 12 01:30:01 router /bsd: tty(15518): syscall 54 > > > > $ uname -a > > OpenBSD router.lcd047.linkpc.net 5.8 GENERIC.MP#1449 amd64 > > > > > > I'd tentatively correlate most of them with login(1) run in a serial > > console. But the last two entries seem to be triggered by /etc/daily. > > > > Regards, > > > > Liviu Daia > > > > It should have been fixed by deraadt@ commit on > src/sys/kern/sys_generic.c (rev 1.107) > > Please rebuild a new kernel (or wait for snapshots) for testing. This does indeed seem to fix the problem, thank you! Regards, Liviu Daia
Re: who(XXXXX): syscall 54 in the last few snapshots
On 12 October 2015, Atanas Vladimirov <vl...@bsdbg.net> wrote: > On 11.10.2015 21:18, Theo de Raadt wrote: > >> I rebuild who(1) with DEBUG and add 'abort' in all pledge calls. > >> Also I changed kern.nosuidcoredump=3 and made /var/crash/who but I > >> can't > >> find who.core. > >> Meanwhile I got syscall 54 every 5 min. Is it possible another > >> process/daemon to generate this errors? > >> How can I find it? > >> > >> ~$ tail /var/log/messages > >> Oct 11 19:54:37 ns /bsd: who(5929): syscall 54 > >> Oct 11 19:59:37 ns /bsd: who(6769): syscall 54 > >> Oct 11 20:04:37 ns /bsd: who(13907): syscall 54 > >> Oct 11 20:09:37 ns /bsd: who(27822): syscall 54 > >> Oct 11 20:14:37 ns /bsd: who(25574): syscall 54 > >> Oct 11 20:19:37 ns /bsd: who(8480): syscall 54 > >> Oct 11 20:24:37 ns /bsd: who(28849): syscall 54 > >> Oct 11 20:29:37 ns /bsd: who(11423): syscall 54 > >> Oct 11 20:34:37 ns /bsd: who(20946): syscall 54 > > > > I have no explanation for this. You'll have to keep digging to find > > it. > I think that I found it - Nagios. Now the question is how to debug it > further? I get something similar without nagios: $ grep syscall /var/log/messages Oct 10 07:50:26 router /bsd: tty(2446): syscall 54 Oct 10 07:50:33 router /bsd: tty(29826): syscall 54 Oct 10 07:54:15 router /bsd: tty(10733): syscall 54 Oct 10 07:54:15 router /bsd: tty(19344): syscall 54 Oct 10 07:58:59 router /bsd: tty(5574): syscall 54 Oct 10 07:59:05 router /bsd: tty(14634): syscall 54 Oct 10 08:02:47 router /bsd: tty(12313): syscall 54 Oct 10 08:02:47 router /bsd: tty(5281): syscall 54 Oct 10 08:06:23 router /bsd: tty(9186): syscall 54 Oct 10 08:06:23 router /bsd: tty(9710): syscall 54 Oct 11 01:30:01 router /bsd: tty(6080): syscall 54 Oct 12 01:30:01 router /bsd: tty(15518): syscall 54 $ uname -a OpenBSD router.lcd047.linkpc.net 5.8 GENERIC.MP#1449 amd64 I'd tentatively correlate most of them with login(1) run in a serial console. But the last two entries seem to be triggered by /etc/daily. 
Regards, Liviu Daia
Re: cdio(1) cdrip (error correction / why WAVE).
On 23 August 2015, Geoff Steckel <g...@oat.com> wrote:

[...]

> Also, drives wear out. After reading about 1000 discs each, two drives
> failed without any obvious error indications.

[...]

After reading about 1000 discs, chances are the lens in your drive has gathered dust. You can try to clean it gently with isopropyl alcohol or similar.

Regards,

Liviu Daia
Re: GROUP CHANGED
On 15 June 2015, Raimo Niskanen <raimo+open...@erix.ericsson.se> wrote:
> On Mon, Jun 15, 2015 at 09:53:56AM +0900, Joel Rees wrote:
> > My memories of Debiandora are fading slightly, but, ...
> >
> > ... I think the numeric id for wheel group in Linux is not 0.
>
> At least on Ubuntu 12.04 there is no wheel group and the numeric id
> for the root group is 0.

Yeah, renaming wheel to root makes for increased security, too. :) It prompted people to write recommendations, HOWTOs, and security cheat sheets about creating groups named "staff", "admins", and the like, and give _those_ special privileges. You can't make these things up, I'm telling ya.

Regards,

Liviu Daia
Re: Backup of OpenBSD to Linux box
On 15 June 2015, Nick Holland <n...@holland-consulting.net> wrote:

[...]

> In the first case, an rsync-based backup is probably almost impossible
> to beat. Combine with the --link-dest option (google for it. the man
> page is accurate, but you probably won't understand the full
> implications of this. When you are grinning from ear-to-ear and saying
> "oh wow" over and over, you got it), you can have rotated backups with
> minimal BW and disk usage, AND files on the backup system are directly
> viewable and usable (and every backup after the first is incremental,
> and every backup directory is a full). I've used systems like this for
> over a decade now, and I can't over-state how powerful and useful they
> are BEYOND simple backup and restore, I keep finding new uses for this
> type of system.
>
> Downside: can't really do bare-metal restores, and when crossing OSs,
> I've had issues with ownerships and permissions.

[...]

The other downside, if you use the --link-dest option, is that there's always only one copy of each file. A few days ago there was a post on SO by somebody who used that system, and found out that his backup disk had bad sectors in the middle of some large files. He wasn't amused.

Regards,

Liviu Daia
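[Editor's note: the --link-dest rotation described above, sketched as a small script. The source/destination paths and dated-directory naming are assumptions for illustration, not from the original posts.]

```shell
#!/bin/sh
# Rotated rsync backups: each run creates a new dated directory;
# unchanged files are hardlinked against the previous run, so every
# directory looks like a full backup while costing only the delta.
SRC="/home/"
DEST="/backup/home"
TODAY=$(date +%Y-%m-%d)

mkdir -p "$DEST"

# Most recent previous backup, if any
LAST=$(ls -1d "$DEST"/????-??-?? 2>/dev/null | tail -1)

# If LAST is empty (first run), --link-dest is simply omitted
rsync -a --delete ${LAST:+--link-dest="$LAST"} "$SRC" "$DEST/$TODAY"
```

Note that unchanged files share a single inode across snapshots, which is exactly the single-copy weakness pointed out in the reply above: one bad sector can damage that file in every snapshot at once.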
Re: disk change-out and packages
On 4 March 2015, Manuel Giraud <man...@ledu-giraud.fr> wrote:
> Ed Ahlsen-Girard <eagir...@cox.net> writes:
> > I decided to upgrade the internal drive, so I hooked up the new one
> > on the CD's usual SATA channel and installed, having adjusted the
> > disklabel more to suit me (the auto partition of /usr left it really
> > tight on space, and home was not big enough).
> >
> > First method: mount all the slices in /tree and run a series of
> > cp -R as root. Files seemed to get there but something was not right
> > with permissions when I tried booting the new disk, so I dropped
> > back and did some research.
>
> For this kind of thing, dump/restore is a good way too that won't mess
> anything up. AFAIK your different source directories (/, /home, ...)
> have to already be different partitions, then you can go like this:
>
> # mount -o async /dev/sd?a /tree
> # cd /tree
> # dump -0a -f - / | restore -rf -
> # mount -o async /dev/sd?d /tree/home
> # cd /tree/home
> # dump -0a -f - /home | restore -rf -

+1 for dump / restore. cp doesn't handle hardlinks, tar / pax have limitations on maximum path length, rsync is resource-hungry if you tell it to deal with hardlinks, and cpio has other limitations on file names. Really, dump / restore is the only viable choice for this kind of task.

Regards,

Liviu Daia
Re: New x86, 4,5W Hardware Fit-PC Fillet
On 15 January 2015, Jan Lambertz <jd.arb...@googlemail.com> wrote:
> Hi, as i am always searching for new (low power) hardware, today i
> found something new. It sounds quite nice for running openbsd as a
> router/firewall. It is possible that not everything is supported right
> now in openbsd but the low power and number of nics made me smile. It
> might be available around march 2015. Hopefully someone will try
> running openbsd on it. Some highlights:
>
> AMD A4-6400T SoC 64-bit quad core 1.0GHz (boost up to 1.6GHz) 4.5W
> 1x SO-DIMM 204-pin DDR3 SDRAM memory slot
> Up to 8GB DDR3-1333
> 1x mSATA slot up to 6 Gbps (SATA 3.0)
> AMD Radeon R3 Graphics
> 2x GbE LAN ports (RJ-45)
> LAN1: Intel I211 GbE controller
> LAN2: Intel I211 GbE controller
> Warranty 5 years
> Pricing ?? (other models available)

According to a page linked from the comments on Phoronix, fitlet-B will sell for $129 (without storage or RAM):

    http://liliputing.com/2015/01/compulab-fitlit-small-fanless-amd-mullins-pc-for-linux-windows.html

There are also a few pictures of the inside on that page. I'd say it looks rather cramped; I'd wait for the first reports of how hot it runs with mSATA and wifi.

> link to product
> http://www.fit-pc.com/web/products/specifications/fitlet-models-specifications/?model%5B%5D=fitlet-B+%28TBA%29model%5B%5D=fitlet-X+%28TBA%29model%5B%5D=fitlet-i+%28TBA%29
>
> link to news
> http://www.phoronix.com/scan.php?page=news_itempx=CompuLab-Fitlet-Linux-PC
>
> as always, other/similar choices:
> APU1D4
> soekris net6801-xx

Regards,

Liviu Daia
Re: AMD64 packages
On 10 December 2014, Stan Gammons <sg063...@gmail.com> wrote:
> When will new packages be built for AMD64? I'm getting library errors
> with the latest snapshot and the current packages.

There are bigger problems with the latest snapshot:

    $ ldd /usr/sbin/unbound
    /usr/sbin/unbound:
    /usr/sbin/unbound: can't load library 'libssl.so.30.0'
    /usr/sbin/unbound: exit status 4

    $ ls -l /usr/lib/libssl*
    -r--r--r--  1 root  bin  1518902 Oct 29 03:25 /usr/lib/libssl.so.27.2
    -r--r--r--  1 root  bin  1512855 Nov 16 09:49 /usr/lib/libssl.so.28.0
    -r--r--r--  1 root  bin  1518550 Dec  8 07:54 /usr/lib/libssl.so.29.0

    $ dmesg | head -1
    OpenBSD 5.6-current (GENERIC.MP) #668: Wed Dec 10 12:43:55 MST 2014

Regards,

Liviu Daia
Re: AMD64 packages
On 11 December 2014, Theo de Raadt <dera...@cvs.openbsd.org> wrote:
> > On 10 December 2014, Stan Gammons <sg063...@gmail.com> wrote:
> > > When will new packages be built for AMD64? I'm getting library
> > > errors with the latest snapshot and the current packages.
> >
> > There are bigger problems with the latest snapshot:
> >
> > $ ldd /usr/sbin/unbound
> > /usr/sbin/unbound:
> > /usr/sbin/unbound: can't load library 'libssl.so.30.0'
> > /usr/sbin/unbound: exit status 4
> > [...]
>
> Look, this is rather simple. If you don't understand that snapshots
> get built, that libraries crank, that there are PEOPLE building this,
> that the data takes time to get to the mirrors, and that this is a
> non-static situation, that small catch-up synchronization errors are
> made, that they get fixed by real people, then PLEASE DON'T RUN
> SNAPSHOTS.
> [...]

Oh, I wasn't accusing anybody, or pointing fingers, or anything like that. I was just saying it's currently broken, that's all. Sorry if it came across any other way.

Regards,

Liviu Daia
Re: smtpd: mail stuck in queue
On 29 November 2014, Liviu Daia <liviu.d...@gmail.com> wrote:

[...]

> Not sure about Postfix being right, but it does solve the initial
> problem: you fix the relay, you run postfix -r ALL, and the messages
> go on their way.

[...]

s/postfix -r ALL/postsuper -r ALL/

Regards,

Liviu Daia
Re: smtpd: mail stuck in queue
On 29 November 2014, Gilles Chehade gil...@poolp.org wrote: On Sat, Nov 29, 2014 at 02:13:46AM +0200, Liviu Daia wrote: On 28 November 2014, Gilles Chehade gil...@poolp.org wrote: On Thu, Nov 27, 2014 at 10:00:19PM -0500, Hugo Villeneuve wrote: [...] No, it is not proper behavior. As a store and forward system with potentially 4-5 days between submission and delivery, any MTA needs to be able to adapt in configuration changes across a long period. So, because a MTA configuration may change across a long period of time and that during these changes sometimes someone will decide that a mail that used to be ok to relay no longer is or should no longer be relayed the same way, you're advocating that we should add logic to the MTA for taking guesses at what it should do and adapt to all possible changes ? [...] No: the original HELO, FROM, and RCPT TO should be saved in the queue file, and there should be a command to re-queue the message. Then when a message is re-queued the entire envelope is resolved again from scratch, according to the current config: problem solved. Well, that depends how you define the problem ;-) I don't, but I believe the initial poster did, in the message that started this thread: messages routed to invalid relays, clogging the queue. I'm not against having a command to re-enqueue, that is actually what I suggested myself. YES, WE SHOULD HAVE A COMMAND TO REENQUEUE. Excellent! :) I'm against having the daemon execute the command automatically, Yup, I never said it should run automatically. and the reason is that in the if you resolve again from scratch case, you have some things that are going to be changed and others that aren't. Since problem is solved according to you, can you tell me which ones ? See above: mail stuck in queue, with nowhere to go? I mentionned aliases yesterday, with your method we would resolve again, so we would go through aliases expansion again. 
Let's put aside the fact that everyone would get a mail twice, because we're going to be adding an insane amount of kludge anyway, so a few more to work around this special case isn't an issue anymore. You have resolved again so that the routing method changes, yet you have decided that expansions caused by the previous routing method should not be rolled back. How do we decide what needs to be rolled back and what doesn't? Right, I suppose I should have explained my point, rather than assume people around me are mind readers. ;) What I mean by re-queue is something like this: the content of the old message is injected as if it were a new message, and the old message is deleted. It doesn't matter what the old routing was; for all intents and purposes it's a new message, which just happens to have the same content and the same envelope as the old one. There is nothing to roll back, only the current configuration counts. Does it start to make some sense now? The only notable problem with this approach would be that some recipients might receive the message twice, but not for the reason you mention: the message might already have been sent successfully to some of its recipients before it gets re-queued. This is unavoidable (or at least a very hard problem), since old and new alias expansions might resolve to the same final recipient starting from different inputs. But I'd say receiving a message multiple times is preferable to not receiving it at all. Can you provide a list we can all agree upon? Are you comfortable with taking decisions away from the admin and making the software take them instead... to cope with changes the admin will make to the configuration? Nope, I never said this should happen any other way than on explicit demand, when the admin runs smtpctl requeue ID, or something like that. Is it OK if the software decides that all mail is going to /dev/null because of a config change, without the admin having an option to actually validate it? 
This is essentially what Postfix does, and I have yet to hear anybody arguing it should do something else. :) Are you confident that this behaviour in Postfix is the same as that of Sendmail, Exim, and Qmail, to name the most common? I haven't used Sendmail for the last maybe 15 years, I don't know about Exim, and Qmail is all but dead. :) I'm not sure why this matters; the way each mailer solves a problem depends (among other things) on its architecture, and Postfix seems to me close enough to your smtpd from this point of view to offer at least *some* philosophical inspiration. Which is why I (rudely) interrupted this thread with my suggestion. :) Just asking because you seem to imply that Postfix is doing things right, and I'd like to hear what makes its "resolve again" strategy any better than the others, then. Not sure about Postfix being right, but it does solve the initial problem: you fix the relay, you run
Re: smtpd: mail stuck in queue
On 28 November 2014, Gilles Chehade <gil...@poolp.org> wrote: On Thu, Nov 27, 2014 at 10:00:19PM -0500, Hugo Villeneuve wrote: [...] No, it is not proper behavior. As a store-and-forward system with potentially 4-5 days between submission and delivery, any MTA needs to be able to adapt to configuration changes across a long period. So, because an MTA configuration may change across a long period of time, and because during these changes sometimes someone will decide that a mail that used to be OK to relay no longer is, or should no longer be relayed the same way, you're advocating that we should add logic to the MTA for taking guesses at what it should do and adapting to all possible changes? [...] No: the original HELO, FROM, and RCPT TO should be saved in the queue file, and there should be a command to re-queue the message. Then when a message is re-queued the entire envelope is resolved again from scratch, according to the current config: problem solved. This is essentially what Postfix does, and I have yet to hear anybody arguing it should do something else. :) Regards, Liviu Daia
Re: HEADS-UP: issues with chromium in -current
On 23 October 2014, Marc Espie es...@nerim.net wrote: This has been discussed internally, but chromium is partly broken these days. Most specifically, windows refresh does strange things under some circumstances. The circumstances are well-known (thanks to matthieu@): modern systems use some composition manager for eye-candy on their display. So if you're using a shiny window manager, you won't see an issue. Old-style window managers, such as fvwm, fvwm2 (from ports) and cwm don't. Hence the breakage. Work-around: start a composition manager, such as xcompmgr from base xenocara. Cry since you lost your background image or moire pattern (fvwm-root, from ports, does know about composition managers). We're currently in the process of reporting the problem upstream. According to Linux guys, there's a world of pain in that general direction: http://www.iuculano.it/linux/apt-get-purge-chromium/ Some of the comments there are enlightening too. Outside of OpenBSD, most people don't use primitive window managers, so they don't see the issue. It probably started around when chromium switched to Aura for its gfx system... Regards, Liviu Daia
Re: Why are there no PKG_PATH defaults?
On 24 September 2014, Mihai Popescu <mih...@gmail.com> wrote: I thought this kind of suggestion is not answered anymore on this list ... @Ingo Schwarze: why don't you remove the files in /etc/examples and put some examples in man pages, for the apps that have no such thing yet? I believe the new sysmerge looks at /etc/examples? Regards, Liviu Daia
Re: low power device
On 18 September 2014, Stan Gammons <sg063...@gmail.com> wrote: Yes, the APU has a serial console. Baud rate is 115200. To install OpenBSD, boot from a CD. At the boot prompt, before it times out and continues to boot, type stty com0 115200 and press return. Then type set tty com0 and press return. Then press return at the boot prompt to continue the boot process. Choose yes when the installer asks if you want to use com0 as the console. In order to boot from a CD you need a USB CD-ROM. If you don't have a USB CD-ROM you can just make a live USB flash disk (see FAQ 14.17.3), boot from it, and install the system over the network. Alternatively, you can take the mSATA / SD card out of the APU, put it in an enclosure / card reader, connect it to a computer, install the system on it, then put it back in the APU. But, on a side note, a live USB flash disk is a useful thing to have around anyway. You can take it with you, and turn (almost) any Windows PC into a useful terminal in less than a minute. :) Regards, Liviu Daia
Re: low power device
On 19 September 2014, Richard Toohey richardtoo...@paradise.net.nz wrote: On 09/19/14 14:26, Steve Litt wrote: On Thu, 18 Sep 2014 19:22:32 -0500 Chuck Burns brea...@gmail.com wrote: On Thursday, September 18, 2014 7:52:38 PM Steve Litt wrote: I just remembered a third question: I can plug in a USB keyboard, but how do I view the computer's output while installing OpenBSD or troubleshooting? Ssh is good when it's running smoothly, but not for preboot stuff. Thanks, Usually, it's a serial console Thanks Chuck, I didn't see a serial port listed as an IO device. Ugh, none of my laptops have a serial port either, so I'd need to use an old desktop running minicom to act as a serial port. Unless I get a serial terminal from a junkyard. Use USB and a USB-to-serial cable ... something like this: http://www.dicksmith.co.nz/tv-video-cables/dse-serial-usb-adaptor-dsnz-xh8290 Yes, but you also need to make sure it's supported by the OS on your laptop. Something based on Prolific PL-2303 is probably a good choice, on OpenBSD it's supported by uplcom(4). Regards, Liviu Daia
Re: low power device
On 12 September 2014, Zé Loff <zel...@zeloff.org> wrote: On Fri, Sep 12, 2014 at 05:31:01PM +0200, Martijn van Duren wrote: On Fri, 2014-09-12 at 16:22 +0100, Zé Loff wrote: On Fri, Sep 12, 2014 at 04:28:46PM +0200, Lars wrote: On 12.09.2014 15:27, Martijn van Duren wrote: Hello misc@, Hi, Currently I have an old desktop PC running as a home server/media center, which runs OpenBSD. Most of the time it's idling, but it does run (open)ssh/(open)smtpd/imap(dovecot)/http(nginx/apache+subversion)/minidlna, which I want to keep available. Is there any board/device known to support these requirements that is fully (within the requirements) supported by OpenBSD? As a personal preference I would avoid ARM boards and go with x86/amd64 boards instead. I don't know how well those ARM devices are supported on OpenBSD (I have only a little experience with running Linux on them), but the performance was pretty disappointing (with Linux). *I* would decide for the APU from PC Engines. http://pcengines.ch/apu.htm My first thought too, but it has no video, which is probably required by the OP. Video is not a requirement, since I primarily use it for streaming via DLNA, so network and storage are sufficient. Oh, in that case, I agree with Lars. I have an APU (the 2 GB model) running a bunch of light services for my small LAN (pf, dhcpd, unbound, nsd, ntpd, wifi AP), and apart from heating up a lot (passive cooling through the enclosure) it runs fine. [...] +1 for the APU.1C, it's a nice machine, much faster than ARM boards. I'd also buy a small mSATA disk for the system (pretty much any model would do, except the Chinese thing sold by PC Engines), and an external 3.5" USB disk with an external power brick for DLNA. Don't try to mount a 2.5" SATA disk inside the case; it would overheat, and it would need more power than the power brick that comes with the APU.1C can provide. 
For similar reasons, you should probably avoid external disks powered over USB (that is, most 2.5" external disks these days). Also make sure to upgrade to the latest APU firmware if you want to run OpenBSD. Regards, Liviu Daia
Re: httpd URI rewriting / try_files
On 28 August 2014, Christopher Zimmermann <chr...@openbsd.org> wrote: On Thu, 28 Aug 2014 14:37:34 +0300 Gregory Edigarov <ediga...@qarea.com> wrote: Hello, are there any plans to implement URI rewriting, or something in the manner of the 'try_files' configuration option of nginx? I plan to add a URL stripping option, somewhat more powerful than the nginx alias directive:

    root [strip number] directory
        Set the document root of the server. The directory is a pathname
        within the chroot(2) root directory of httpd. If not specified, it
        defaults to /htdocs. If the strip option is set, number path
        components are removed from the beginning of the URI before
        directory is prepended.

This would allow you to do, for example:

    location /wiki/ {
        strip 1
        root /dokuwiki
        directory index doku.php
        fastcgi socket /tmp/php.sock
    }

What about redirect, say from http://mumble to https://mumble? Regards, Liviu Daia
Re: tmux mutt and f1
On 26 August 2014, frantisek holop <min...@obiit.org> wrote: Tobias Ulmer, 26 Aug 2014 15:41: On Mon, Aug 25, 2014 at 10:42:57PM +0200, frantisek holop wrote: does anyone know of a way to make urxvt play together nicely with mutt (and tmux) regarding the f1 key? it works in xterm...

    macro index,pager <f1> "<shell-escape>less /usr/local/share/doc/mutt/manual.txt<enter>" "help"

Works in urxvt for me. You're probably using the wrong TERM/termName setting. Should be rxvt-256color, and screen inside tmux. hmm. .tmux.conf: set -g default-terminal screen-256color .Xdefaults: urxvt.termName: screen-256color ... XTerm.termName: screen-256color you are right. changing it to urxvt.termName: rxvt-256color makes it work. (xterm works with screen-256color) what kept confusing me is that other programs like midnight commander and vim had no problems. shrug. thank you. Midnight Commander and Vim have mechanisms to override termcap / terminfo, Mutt doesn't. The termcap / terminfo clusterfuck was much worse 20+ years ago. It has slowly improved over time, but IMO things like Vim and Midnight Commander mostly working out of the box have kept people from fixing it sooner. Regards, Liviu Daia
Re: OpenBSD 5.5 on mSATA SSD unit in PC Engines APU.1C - bad dir ino 2 at offset 0: mangled entry kernel panic
On 20 June 2014, Zé Loff <zel...@zeloff.org> wrote: On Fri, Jun 20, 2014 at 11:40:02AM +0200, Roger Wiklund wrote: No problems so far with the Intel mSATA 525 30GB. On a side note, I'm a bit worried about the CPU temperature: almost 70 degrees C during normal load. Same here: 70-75 C, for a 0.2 average load. The case gets pretty hot, so I'm guessing I installed the heatsink correctly... The case itself is the sink. :) Does anyone have (much) lower figures? No, this seems to be common. I managed to lower the temperature a little by raising the case legs, so that there's better air circulation below it. Room temperature makes a big difference too. Regards, Liviu Daia
Re: OpenBSD 5.5 on mSATA SSD unit in PC Engines APU.1C - bad dir ino 2 at offset 0: mangled entry kernel panic
On 9 June 2014, Mattieu Baptiste <mattie...@gmail.com> wrote: On 8 June 2014 13:38, Nick Ryan <n...@njryan.com> wrote: [...] Didn't the PC Engines mSATA drive have problems in general? There's a mention on here about issues with the A version - is that yours? http://pcengines.ch/msata16b.htm Theoretically, I should have the new firmware (that's what my vendor told me). You do, according to the dmesg you posted in your first message. But it seems there are still problems with these. [...] This time it's the particular model of the disk shipped by PC Engines that has problems, not the firmware. A quick search reveals that many other people had to replace it with something else. Just make sure to search before you buy. Regards, Liviu Daia
Re: Gnome 3, toad and my android phone
On 24 May 2014, Stuart Henderson <s...@spacehopper.org> wrote: On 2014-05-24, Jona Joachim <j...@joachim.cc> wrote: gphoto2 copies videos and maybe audio (at least there is a --get-all-audio-data option). libmtp has some command-line tools too (though subject to big delays with my phone; I don't know whether this is a common problem). I settled on using ftpdroid for now though (an Android port of pure-ftpd). MTP is crap: it's slow, it can lead to corrupted files, and according to supposedly knowledgeable people, that can be traced to the protocol itself. Anyway, just about anything is better than it in practice. The easy solution is CIFS: either install Samba Filesharing on your phone, export /sdcard, and mount it on OpenBSD with Samba, or export a directory from your computer with Samba and mount it on your phone with one of the many file managers that can do SMB (f.i. ES File Explorer, or Total Commander with the LAN plugin). If you want encryption, things involve slightly more work. There are various SFTP and FTPS Android apps, and even a plain rsync over SSH, but you'll typically need to convert SSH keys between OpenSSH and PuTTY's formats (which you can do with puttygen). Regards, Liviu Daia
sshd broken in today's snapshot?
Unless I'm doing something stupid, sshd seems to be broken in today's snapshot. From a Linux machine: $ ssh testing Connection to testing closed by remote host. Connection to testing closed. From the server's point of view: # dmesg | head -1 OpenBSD 5.5-current (GENERIC.MP) #95: Fri May 2 06:31:18 MDT 2014 # /usr/sbin/sshd -d debug1: sshd version OpenSSH_6.7, OpenSSL 1.0.1g 7 Apr 2014 debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type ECDSA debug1: private host key: #2 type 3 ECDSA debug1: private host key: #3 type 4 ED25519 debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-d' debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. debug1: fd 4 clearing O_NONBLOCK debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 4 out 4 newsock 4 pipe -1 sock 7 debug1: inetd sockets after dupping: 3, 3 Connection from 192.168.56.1 port 57650 on 192.168.56.102 port 22 debug1: Client protocol version 2.0; client software version OpenSSH_6.6 debug1: match: OpenSSH_6.6 pat OpenSSH_6.5*,OpenSSH_6.6* compat 0x1400 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.7 debug1: permanently_set_uid: 27/27 [preauth] debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 [preauth] debug1: SSH2_MSG_KEXINIT sent [preauth] debug1: SSH2_MSG_KEXINIT received [preauth] debug1: kex: client-server aes128-ctr umac-64-...@openssh.com z...@openssh.com [preauth] debug1: kex: server-client aes128-ctr umac-64-...@openssh.com z...@openssh.com [preauth] debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth] debug1: SSH2_MSG_NEWKEYS sent [preauth] debug1: expecting SSH2_MSG_NEWKEYS [preauth] debug1: SSH2_MSG_NEWKEYS received [preauth] debug1: KEX done [preauth] debug1: userauth-request for user daia service ssh-connection method none [preauth] debug1: attempt 0 failures 0 [preauth] debug1: userauth-request for user daia service ssh-connection method publickey [preauth] debug1: attempt 1 failures 0 [preauth] debug1: test whether pkalg/pkblob are acceptable [preauth] debug1: temporarily_use_uid: 1000/1000 (e=0/0) debug1: trying public key file /home/daia/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: restore_uid: 0/0 Failed publickey for daia from 192.168.56.1 port 57650 ssh2: RSA 3b:30:77:5c:8b:55:cf:a1:f6:f6:81:27:73:d8:2e:3e debug1: userauth-request for user daia service ssh-connection method publickey [preauth] debug1: attempt 2 failures 1 [preauth] debug1: test whether pkalg/pkblob are acceptable [preauth] debug1: temporarily_use_uid: 1000/1000 (e=0/0) debug1: trying public key file /home/daia/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /home/daia/.ssh/authorized_keys, 
line 1 DSA 3b:87:9f:8e:ef:ea:cf:a5:2e:9a:a4:bb:c7:b6:86:f6 debug1: restore_uid: 0/0 Postponed publickey for daia from 192.168.56.1 port 57650 ssh2 [preauth] debug1: userauth-request for user daia service ssh-connection method publickey [preauth] debug1: attempt 3 failures 1 [preauth] debug1: temporarily_use_uid: 1000/1000 (e=0/0) debug1: trying public key file /home/daia/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /home/daia/.ssh/authorized_keys, line 1 DSA 3b:87:9f:8e:ef:ea:cf:a5:2e:9a:a4:bb:c7:b6:86:f6 debug1: restore_uid: 0/0 debug1: ssh_dss_verify: signature correct Accepted publickey for daia from 192.168.56.1 port 57650 ssh2: DSA 3b:87:9f:8e:ef:ea:cf:a5:2e:9a:a4:bb:c7:b6:86:f6 debug1: monitor_child_preauth: daia has been authenticated by privileged process debug1: Enabling compression at level 6. [preauth] debug1: monitor_read_log: child log fd closed User child is on pid 11401 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: do_cleanup At this point, sshd exits. Regards, Liviu Daia
Re: sshd broken in today's snapshot?
On 2 May 2014, Jeremy Evans jeremyeva...@gmail.com wrote: On Fri, May 2, 2014 at 8:42 AM, Liviu Daia liviu.d...@gmail.com wrote: Unless I'm doing something stupid, sshd seems to be broken in today's snapshot. From a Linux machine: $ ssh testing Connection to testing closed by remote host. Connection to testing closed. From the server's point of view: # dmesg | head -1 OpenBSD 5.5-current (GENERIC.MP) #95: Fri May 2 06:31:18 MDT 2014 # /usr/sbin/sshd -d debug1: Enabling compression at level 6. [preauth] Try disabling compression and see if that fixes it. Yes, it works with compression disabled, thank you. Regards, Liviu Daia
Re: adsl card advice
On 25 April 2014, Stuart Henderson <s...@spacehopper.org> wrote: [...] Personally I use an external router configured as a bridge, and configure pppoe on the OpenBSD side (with baby jumbos and RFC 4638 where possible, to avoid getting a restricted MTU). That way the router doesn't have external IP connectivity, thus avoiding many of the problems you might run into, and meaning that any complex configuration is done on the OpenBSD box; it's then also pretty easy to swap in a spare router in case of hardware failure (which in my experience is more likely to occur for something that connects to a phone line). [...] Tangentially related: I used to have this exact setup a few years ago. It worked well, with two notable quirks. First, it was actually easier to make it work with the userspace pppd first, then duplicate the setup with the kernel pppd. That's because the diagnostics produced by the userspace pppd were much better than the kernel's, and they allowed me to figure out the exact combination of switches required by my ISP. The diagnostics from the kernel pppd were much less useful, and every single change in config required a reboot (or at least that's what I thought at the time). I believe the userspace pppd is gone these days. Second, the interface would simply disappear when the line went down, and that was mildly annoying. Various applications didn't like that; they would typically crash if I bound them to the interface and said interface went away under their feet. :) I haven't checked in a long while whether this is still the case, but it's something you might want to keep in mind. Regards, Liviu Daia
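For reference, the kernel pppoe(4) side of such a setup amounts to an /etc/hostname.pppoe0 along these lines (a sketch modelled on the pppoe(4) manual's example, not anyone's exact config; the interface name, auth protocol, user, and password are placeholders):

```
inet 0.0.0.0 255.255.255.255 NONE \
        pppoedev em0 authproto pap \
        authname 'myisp-user' authkey 'secret' up
dest 0.0.0.1
!/sbin/route add default -ifp pppoe0 0.0.0.1
```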
Re: Ralink mystery usb mini WiFi adapter
On 21 April 2014, Juan Francisco Cantero Hurtado <i...@juanfra.info> wrote: On Sun, Apr 20, 2014 at 10:09:19PM +0200, Benjamin Baier wrote: It's advertised as an EP-N8508. It is most likely a rebrand, which uses the rtl8188cus (very low cost chip). This should be supported by the urtwn driver. It just needs to recognize the USB device number. In this case it's idVendor 0x148f idProduct 0x7601. No, 0x148f is Ralink. A quick search turns up this post: http://www.raspberrypi.org/forums/viewtopic.php?t=49864 According to this guy, the chipset is a Ralink 5370. Supported (badly) on Linux as mt7601, not supported on OpenBSD. Regards, Liviu Daia
Re: How to apply a patch in OpenBSD?
On 15 April 2014, ohh, whyyy <ohhwh...@postafiok.hu> wrote: Hey, thanks! Yes, it looks like the sys.tar.gz was missing. I created a small howto for it (for patching 5.4): cd /root ftp http://ftp.openbsd.org/pub/OpenBSD/`uname -r`/src.tar.gz [...] Nit pick: with automatically chosen partitions:

    # df -h /root
    Filesystem     Size    Used   Avail Capacity  Mounted on
    /dev/wd0a      129M    110M   12.9M    89%    /

Regards, Liviu Daia
Re: How to apply a patch in OpenBSD?
On 16 April 2014, Kenneth Westerback <kwesterb...@gmail.com> wrote: On 16 April 2014 19:20, Liviu Daia <liviu.d...@gmail.com> wrote: On 15 April 2014, ohh, whyyy <ohhwh...@postafiok.hu> wrote: Hey, thanks! Yes, it looks like the sys.tar.gz was missing. I created a small howto for it (for patching 5.4): cd /root ftp http://ftp.openbsd.org/pub/OpenBSD/`uname -r`/src.tar.gz [...] Nit pick: with automatically chosen partitions:

    # df -h /root
    Filesystem     Size    Used   Avail Capacity  Mounted on
    /dev/wd0a      129M    110M   12.9M    89%    /

Regards, Liviu Daia What nit are you picking? Don't work in /root; chances are you'll fill up / in no time.

    $ ls -lh src.tar.gz
    -rw-r--r--  1 daia  daia  150M Aug  7  2013 src.tar.gz

Start seeing any problem? Also, avoid downloading files as root. Regards, Liviu Daia
Re: How to apply a patch in OpenBSD?
On 16 April 2014, ohh, whyyy <ohhwh...@postafiok.hu> wrote: [...] So... is there no documented way of checking the build of OpenSSL (after, e.g., patching it)? If I issue openssl version -a then there is a "built on" line, and it says date not available (before and after patching). [...] Check for the bug that was supposedly fixed by the patch. In this particular case: run an SSL server with openssl s_server, and use one of the many Heartbleed checkers out there to see if the problem is still there. Regards, Liviu Daia
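A minimal sketch of that check (the certificate file name is a placeholder; any self-signed cert+key pair will do):

```shell
# Start a throwaway TLS server on port 4433 with a test certificate.
# server.pem here is a hypothetical file containing both cert and key.
openssl s_server -accept 4433 -cert server.pem -key server.pem -www

# Then point one of the Heartbleed checkers at host:4433 from another
# machine and see whether it can still read server memory.
```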
Re: ssh and relayd
On 4 December 2013, Predrag Punosevac <punoseva...@gmail.com> wrote: Hi misc@, This is a trivial question, but I am having a hard time wrapping my head around the possible use of relayd for redirecting ssh traffic. Namely, I have a situation where I have multiple hosts behind a firewall which I would like to make available for ssh login. [...] You can do that with ssh alone:

    Host internal_machine
        ProxyCommand ssh -A -q -l %r -W %h:%p firewall

Regards, Liviu Daia
Re: general question about usb stack and ups
On 19 September 2013, Gregory Edigarov <ediga...@qarea.com> wrote: On 09/19/2013 12:20 PM, Gregory Edigarov wrote: Hello, everybody. A few days ago I bought a new UPS, as a replacement for my old one, which took its last trip to the junkyard. The old one had RS232, and the new one is a USB UPS. I've been trying different ways to connect it to OpenBSD, but everything I've tried fails. The UPS reports itself as:

    uhidev2 at uhub3 port 2 configuration 1 interface 0 "ATCL FOR UPS ATCL FOR UPS" rev 2.00/0.00 addr 4
    uhidev2: iclass 3/0
    uhid2 at uhidev2: input=8, output=8, feature=0

I've connected it to Windows via USB, installed the software which came with it, and snooped the protocol, and I am dead sure it is the old and frayed Megatec/Q1, which should work with the blazer_usb driver from nut. But it doesn't. It seems I've tried nearly every option and allowed option combination, with no result. I cannot get you the usbdevs / usbhidctl output right now, because I left it connected to Windows, and it is at home. So, my question is: could it be differences in the USB stack between the various OSes that are giving the trouble? I will try connecting it to Linux and NetBSD later, but I am willing to solve the puzzle with OpenBSD. Oh, and another question: is there a way to quickly change a USB device's attachment? I.e., having a device that is attached as uhid, is there a way to reattach it as ugen? For nut on OpenBSD you need ugen(4). The quick and dirty way to achieve that is to disable uhidev* in the kernel. The cleaner way is to patch usb_quirks.c, as pointed out by somebody else. You also need r/w permissions for group _ups to /dev/usb* and /dev/ugen0*, and possibly other things (use ktrace to find out). Regards, Liviu Daia
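The permissions part amounts to something like this (a sketch only; the exact device nodes depend on where the UPS attaches, so check dmesg first):

```shell
# Hypothetical: let nut, running as group _ups, open the device nodes
# for the controller and the UPS attached as ugen0.
chgrp _ups /dev/usb0 /dev/ugen0.*
chmod 660 /dev/usb0 /dev/ugen0.*
```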
Re: sqlite3 got slow?
On 22 August 2013, patrick keshishian <pkesh...@gmail.com> wrote: Hi, Anyone else notice that sqlite3 in base got slower somewhat recently? I have a fairly large database, roughly 55M in size. I had to repopulate it from a source SQL file recently, and it took more than twice the time I remember it taking with an older snapshot. This is on a newer snapshot[1]:

    $ time sqlite3 the.db < in.sql
    50m13.15s real 3m57.25s user 8m15.78s system

[...] I recently had to populate a SQLite database with ~500k records, the end result being a ~240 MB file. I can't answer your question about sqlite3 getting slower, but I can tell you that tuning bulk operations makes a huge difference. I suggest something along these lines:

(1) set some pragmas:

    PRAGMA synchronous = OFF
    PRAGMA temp_store = MEMORY
    PRAGMA journal_mode = MEMORY
    PRAGMA page_size = 65536

(2) use transactions, and commit every 10k inserts (or more), rather than after each new record (which is the default);

(3) drop all indices, push the data, then re-create the indices.

Each of these has a dramatic effect on speed. Other optimisations are possible too, but I believe these are the important ones. In my case, I cut database creation time from more than an hour to 80 seconds, on a relatively slow machine. FWIW. Regards, Liviu Daia
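The three steps above can be sketched in Python with the standard sqlite3 module (the table, column, and file names are made up for illustration):

```python
import os
import sqlite3
import tempfile

# isolation_level=None puts the connection in autocommit mode, so the
# BEGIN/COMMIT below are under our control rather than the module's.
db_path = os.path.join(tempfile.mkdtemp(), "bulk.db")
conn = sqlite3.connect(db_path, isolation_level=None)
cur = conn.cursor()

# (1) Relax durability for the bulk load; page_size must be set before
# the first table is created.
cur.execute("PRAGMA page_size = 65536")
cur.execute("PRAGMA synchronous = OFF")
cur.execute("PRAGMA temp_store = MEMORY")
cur.execute("PRAGMA journal_mode = MEMORY")

cur.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

# (2) One big transaction instead of one implicit transaction per INSERT.
cur.execute("BEGIN")
cur.executemany("INSERT INTO records VALUES (?, ?)",
                ((i, "row %d" % i) for i in range(100000)))
cur.execute("COMMIT")

# (3) Create indices only after the data is in place.
cur.execute("CREATE INDEX records_payload_idx ON records (payload)")

count = cur.execute("SELECT COUNT(*) FROM records").fetchone()[0]
conn.close()
```

Remember to restore synchronous and journal_mode to their defaults (or simply reopen the database without the pragmas) once the load is done.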
Re: sqlite3 got slow?
On 22 August 2013, patrick keshishian <pkesh...@gmail.com> wrote: On 8/22/13, Liviu Daia <liviu.d...@romednet.com> wrote: On 22 August 2013, patrick keshishian <pkesh...@gmail.com> wrote: Hi, Anyone else notice that sqlite3 in base got slower somewhat recently? I have a fairly large database, roughly 55M in size. I had to repopulate it from a source SQL file recently, and it took more than twice the time I remember it taking with an older snapshot. This is on a newer snapshot[1]: $ time sqlite3 the.db < in.sql 50m13.15s real 3m57.25s user 8m15.78s system [...] I recently had to populate a SQLite database with ~500k records, the end result being a ~240 MB file. I can't answer your question about sqlite3 getting slower, but I can tell you that tuning bulk operations makes a huge difference. I suggest something along these lines: (1) set some pragmas: PRAGMA synchronous = OFF PRAGMA temp_store = MEMORY PRAGMA journal_mode = MEMORY PRAGMA page_size = 65536 Thanks for this info! Adding only PRAGMA journal_mode = MEMORY was a tremendous help: Don't forget to take it out after you're done populating the database. It doesn't buy you all that much for normal operations, and it greatly increases the chances of data corruption. [...] Still hinting at a slowdown between the two snapshots / sqlite3 versions. As I said, I don't really know how to answer that. I'd still go with transactions, as those would take most of the disk thrashing out of the picture. With a 55 MB database, it's probably fine to put everything in a single transaction: just put a BEGIN at the beginning, and a COMMIT at the end. The other thing that comes to mind as potentially relevant is locales, both inside and outside sqlite. For the database encoding: PRAGMA encoding = "UTF-8" (or whatever is appropriate for your database). I'd also compile both versions of sqlite on the same machine, and do some profiling. 
If you still can't get an idea of what's going on from comparing profile traces, you should probably ask on a sqlite forum... Regards, Liviu Daia
Re: mysql.sock location
On 18 August 2013, Guy Ferguson <guyfergu...@tpg.com.au> wrote: [...] But I am trying to handle the chroot issue by using a hack I found online:

    ### in /etc/rc.local:
    # mysql server
    if [ -x /usr/local/bin/mysqld_safe ] ; then
            su -c mysql root -c '/usr/local/bin/mysqld_safe > /dev/null 2>&1 &'
            echo -n ' mysql'
            mkdir -p /var/www/var/run/mysql
            chown www:daemon /var/www/var/run/mysql
            # wait for a socket to appear
            for i in 1 2 3 4 5 6; do
                    if [ -S /var/run/mysql/mysql.sock ]; then
                            break
                    else
                            sleep 1
                            echo -n .
                    fi
            done
            ln -f /var/run/mysql/mysql.sock /var/www/var/run/mysql/mysql.sock
    fi

[...] You shouldn't believe everything you read, especially what you read on the Internet. :) Broken record: linking only works until you restart the server manually, as mysqld removes the socket and re-creates it when starting. The location of the socket is configured in /etc/my.cnf. To use mysql with a chrooted Apache / Nginx, either use TCP connections, or set both /etc/my.cnf and /var/www/etc/my.cnf to point to a place inside the chroot jail. On a tangentially related topic: the official way to start mysql on 5.3 is to add mysqld to pkg_scripts in /etc/rc.conf.local. See: http://www.openbsd.org/faq/faq10.html#rc Regards, Liviu Daia
Re: mysql.sock location
On 18 August 2013, Guy Ferguson <guyfergu...@tpg.com.au> wrote: Livia, If you want to address me by name, s/Livia/Liviu/ please. It might not be much, but it's my name, and I kind of became attached to it over the years. :) Thanks for your help. I modded /etc/my.cnf to add in the extra /run directory. A few other tweaks here and there and I can now get a test.php to connect to the default host mysql ($conn=mysql_connect...). So now I'm confident that mysql is working and connectable... I just have to sort out why Drupal is unhappy, which no doubt is a chroot issue. [...] Like I said, the easy solution to that is to use TCP connections. As others have pointed out, just set the hostname to 127.0.0.1 in your Drupal config, and you should be fine. If you insist on using UNIX sockets, you probably want to set socket = /var/www/run/mysql.sock in /etc/my.cnf, then copy /etc/my.cnf to /var/www/etc/my.cnf, and set socket = /run/mysql.sock in the client section of /var/www/etc/my.cnf. There is no advantage in doing things like this, though; you'd just be looking for future trouble. Regards, Liviu Daia
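The socket settings described above would look roughly like this (a sketch of just the relevant stanzas; the same path on disk, seen from outside and inside the chroot):

```
# /etc/my.cnf -- read by mysqld, outside the chroot
[mysqld]
socket = /var/www/run/mysql.sock

# /var/www/etc/my.cnf -- read by clients inside the chroot
[client]
socket = /run/mysql.sock
```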
Re: Upgrade to 5.0 from 4.x broke Apache+PHP's ability to talk to mysql.sock
On 12 August 2013, Damon Getsman damo.g...@gmail.com wrote: [...] Last night, however, when I decided to take another stab at things, googling turned up a result that I hadn't seen previously (I am google-tarded, so I will accept the possibility that I'd not done as straightforward an attempt to look for the answer of this issue as I'd thought). The link was at http://philihp.com/blog/2008/connecting-to-mysql-with-php-in-apache-on-openbsd/ (2008? Certainly I must not have googled as well as I thought!), and referred to a permanent (although kludgy) solution found at http://www.openbsdsupport.org/e107_CMS.html . The solution was, indeed, dealing with creating a hardlink to somewhere within the chroot'ed jail; in this case under /var/www/var/run/mysql/mysql.sock after the appropriate path was created. [...] Please, stop repeating this nonsense. This solution works until you restart the server manually, since mysqld removes the socket before re-creating it. The real solution is either to use TCP connections, or move the socket inside the jail and make /etc/my.cnf and /var/www/etc/my.cnf point to it accordingly. Regards, Liviu Daia
Re: Management of pf.conf
On 11 July 2013, Andy a...@brandwatch.com wrote: Hi, I use 'puppet' for this to manage over 20 OpenBSD firewalls now. [...] If you're shopping for configuration management tools, people also seem to like Ansible, Salt, and Chef: http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software https://news.ycombinator.com/item?id=5983918 https://news.ycombinator.com/item?id=5932608 https://news.ycombinator.com/item?id=3090800 Regards, Liviu Daia
Re: offline mail setup for road warrior
On 9 March 2013, Stuart Henderson s...@spacehopper.org wrote: On 2013-03-08, frantisek holop min...@obiit.org wrote: hi there, i am fishing for ideas from others regarding how to read/send email in my current life situation (=being on the road all the time connecting once in a while with 3rd world wifi). i have my own mail server, that i can setup as i want. i am travelling with my notebook. my preferred setup would be something that downloads my mails when i am connected, then i can write answers locally even when being offline, and these would be sent automatically (through my server) when i come online again. my mail client is mutt. any road warriors living like this with a rock solid well tested setup? -f How about UUCP over TCP? Since your Received: headers show postfix, see http://www.postfix.org/UUCP_README.html (but of course it's possible with other MTAs). +1 for UUCP over TCP. Unlike newer protocols such as POP3 and ETRN, it was specifically designed to cope with crappy lines. You'll never lose mail with it, and you won't end up with duplicate messages or corrupt mailboxes, regardless of how many times your connection goes down during transfers. Use stunnel and relayd to wrap it in SSL, and you're done. Regards, Liviu Daia
Re: Verizon FIOS, OpenBSD, and DHCP
On 6 February 2013, bofh goodb...@gmail.com wrote: On Tue, Feb 5, 2013 at 11:18 PM, Jay Hart jh...@kevla.org wrote: Solved this. It took Verizon three tries (three calls by me), to actually get the RJ-45 port working on the ONT. Hmm... I had to set my MAC address to the Actiontec's. $ cat /etc/hostname.em0 !ifconfig $if lladdr 00:0f:b3:aa:aa:aa dhcp For what it's worth, it's probably useful to keep around a packet capture of a successful DHCP negotiation with your ISP. DHCP is a complicated protocol, and ISPs do weird things with it. A known-good packet capture might save you a lot of time when switching equipment. Regards, Liviu Daia
Re: VPN on OpenBSD: OpenSSH or OpenVPN?
On 16 April 2012, Kostas Zorbadelos kzo...@otenet.gr wrote: [...] Should I go for OpenSSH with its tun(4) VPN features or do you think an OpenVPN solution would be more appropriate? [...] You should probably avoid SSH. Without actually looking at the code, I'd say SSH VPNs are prone to TCP-over-TCP meltdown. The better options are OpenVPN and IPsec. OpenVPN is relatively straightforward to set up, and it mostly works. IPsec is more robust, and can interoperate with more systems, but setting it up involves a deeper understanding of what you're doing, and possibly more fiddling. Regards, Liviu Daia
Re: Mysql connection from within php
On 2 June 2010, Eugene Yunak e.yu...@gmail.com wrote: On 1 June 2010 16:30, What you get is Not what you see wygin...@gmail.com wrote: Freshly installed on openbsd 4.6 mysql, php and php5-mysql packages. Done the configs. Now php and mysql work. But I couldn't make it connect to mysql from within php with such a command mysql_connect(localhost,user,pass) It used to give a "Can't connect to mysql through socket" error till I changed the command to mysql_connect(127.0.0.1,user,pass) I want to learn why? As you've been already told, this is because the default apache is chrooted and thus cannot access the mysql socket. To correct it, just do # mkdir -p /var/www/var/run/mysql # ln -f /var/run/mysql/mysql.sock /var/www/var/run/mysql/mysql.sock Please, stop perpetuating this nonsense. This only works until you restart mysqld. The reason is that mysqld removes the socket when it starts, before creating it anew. If you really must use a socket instead of TCP, then move the socket into the jail and give programs different views of it from inside and outside the jail, using my.cnf. Not tested: - in /etc/my.cnf: socket = /var/www/var/run/mysql/mysql.sock - in /var/www/etc/my.cnf: socket = /var/run/mysql/mysql.sock Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Mysql connection from within php
On 2 June 2010, Eugene Yunak e.yu...@gmail.com wrote: On 2 June 2010 20:48, Liviu Daia liviu.d...@imar.ro wrote: On 2 June 2010, Eugene Yunak e.yu...@gmail.com wrote: On 1 June 2010 16:30, What you get is Not what you see wygin...@gmail.com wrote: Freshly installed on openbsd 4.6 mysql, php and php5-mysql packages. Done the configs. Now php and mysql work. But I couldn't make it connect to mysql from within php with such a command mysql_connect(localhost,user,pass) It used to give a "Can't connect to mysql through socket" error till I changed the command to mysql_connect(127.0.0.1,user,pass) I want to learn why? As you've been already told, this is because the default apache is chrooted and thus cannot access the mysql socket. To correct it, just do # mkdir -p /var/www/var/run/mysql # ln -f /var/run/mysql/mysql.sock /var/www/var/run/mysql/mysql.sock Please, stop perpetuating this nonsense. This only works until you restart mysqld. The reason is that mysqld removes the socket when it starts, before creating it anew. If you really must use a socket instead of TCP, then move the socket into the jail and give programs different views of it from inside and outside the jail, using my.cnf. Not tested: - in /etc/my.cnf: socket = /var/www/var/run/mysql/mysql.sock - in /var/www/etc/my.cnf: socket = /var/run/mysql/mysql.sock I fail to see how this is nonsense or what stops one from creating this hardlink in rc.local (which would normally be used to start mysql anyway). Like I said, it stops working when you restart mysqld. This doesn't necessarily happen at boot. If, for whatever reason, you restart mysqld manually, will you remember to re-create the link? Your solution however works as well, of course. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Comparing large amounts of files
On 11 December 2009, STeve Andre' and...@msu.edu wrote: I am wondering if there is a port or otherwise available code which is good at comparing large numbers of files in an arbitrary number of directories? I always try to avoid wheel re-creation when possible. I'm trying to help someone with large piles of data, most of which is identical across N directories. Most. It's the 'across dirs' part that involves the effort, hence my avoidance of thinking on it if I can help it. ;-) Try this tiny Perl script: http://hqbox.org/files/fdupe.pl It's still faster than all of its competitors I'm aware of (most of them written in C). :) Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
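The same grouping-by-content idea can be sketched in plain shell. This is not the fdupe.pl from the link, just an illustration; cksum(1) CRCs can collide, so confirm any candidates with cmp(1) before deleting anything:

```shell
# find_dupes DIR ...: print groups of files whose contents match.
# A sketch, not the fdupe.pl from the link above.
find_dupes() {
    find "$@" -type f -exec cksum {} + |
    awk '{
        key = $1 "," $2                         # CRC and size form the group key
        sub(/^[0-9]+[ \t]+[0-9]+[ \t]+/, "")    # strip them, leaving the filename
        files[key] = files[key] "\n  " $0
        n[key]++
    }
    END { for (k in files) if (n[k] > 1) print "duplicates:" files[k] }'
}
```

Run it as e.g. `find_dupes /home /var/backup`; each group of identical files is printed under a `duplicates:` banner.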
Re: USB CD-ROM support
On 3 November 2008, Bob Hope [EMAIL PROTECTED] wrote: [...] DHCPD server setup: CentOS 5.2 dhcpd configured to point to file 'pxeboot' tftpd with server root at /tftpdroot and all files (pxeboot, bsd, bsd.rd etc) placed in here When I try booting the machine that I want OpenBSD on, it loads the pxeboot file (I see this in the logs) but it keeps timing out looking for the kernel file bsd (there aren't any log messages that tell me anything here). It's my understanding that all the files needed to boot using PXE should be placed in the root of the tftp server. [...] Create a file etc/boot.conf in your TFTP root directory, with the contents boot tftp:/bsd.rd If that still doesn't help, enable logging to see what the TFTP server is trying to do and where it's looking for the files, and move bsd.rd accordingly. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
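Spelled out as commands, assuming /tftpdroot is the TFTP root as in the post (the helper name is made up for illustration; the boot.conf contents are from the post):

```shell
# make_pxe_bootconf TFTPROOT: drop an etc/boot.conf into the TFTP root
# so pxeboot(8) fetches bsd.rd instead of looking for the default kernel.
make_pxe_bootconf() {
    mkdir -p "$1/etc" &&
    echo 'boot tftp:/bsd.rd' > "$1/etc/boot.conf"
}

# e.g.: make_pxe_bootconf /tftpdroot
```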
Re: Can't start Apache... MaxCPUPerChild is invalid??
On 3 September 2008, Ted Unangst [EMAIL PROTECTED] wrote: On 9/3/08, Chris Smith [EMAIL PROTECTED] wrote: On Wednesday 03 September 2008 09:04:01 am Dave Wilson wrote: If you find that the build test fails, and then find that memtest succeeds, then you can deduce that the problem lies with your hard drive Only if memtest is infallible. I may be mistaken but I've long held the opinion that while a memtest failure almost certainly means defective memory, a memtest pass does not carry quite the same weight. I had a computer with bad ECC that would pass memtest. It did make little notes in the BIOS log, but memtest itself issued no complaints. Attempting to compile something though would cause all sorts of mystery errors. memtest is hardly representative of real world usage patterns. Yes. FWIW, according to a friend who is a hardware designer and cuts open memory chips for a living, you simply can't test memories in software. That is, you can prove them broken, but you can't reliably prove them fine. You need some really expensive hardware for that. Also, just like disks, there is no such thing as perfect, error-free memory. So the answer to any conceivable test will be a statistic, not a definitive true / false. The difference between memtest and a hardware tester is how accurate this statistic really is... Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: zombies
On 12 March 2008, Lars Noodén [EMAIL PROTECTED] wrote: [...] And, is there a generic way to prevent them? The cause is a perl CGI called by apache2 Depending on what you're doing, make the parent wait(2) for the processes or setsid(3). Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: zombies
On 12 March 2008, Hannah Schroeter [EMAIL PROTECTED] wrote: Hi! On Wed, Mar 12, 2008 at 12:05:29PM +0200, Liviu Daia wrote: On 12 March 2008, Lars Noodén [EMAIL PROTECTED] wrote: [...] And, is there a generic way to prevent them? The cause is a perl CGI called by apache2 Depending on what you're doing, make the parent wait(2) for the processes or setsid(3). setsid(2) (yes, it's section 2 on OpenBSD) Yes, sorry. doesn't make the child lose the connection to the parent. No, it actually makes the calling process a session leader. See the source of daemon(3) for how to use setsid in connection with fork and exit (in fact _exit) to make a process disconnect from its parent and its controlling terminal etc. Actually, there's a bunch of other things to take care of, like signals and pipes. A more complete answer would be something like: read a book about UNIX process management; I was trying to provide a hint in the right direction, not abstract a book in a sentence. :) Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
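For a CGI that is shell, or that shells out, the fork/setsid/fork dance in daemon(3) can be approximated with a subshell and a background job: the intermediate shell exits at once, init(8) adopts the worker, and the CGI never has a dead child waiting to be reaped. A rough sketch, not from the thread:

```shell
# launch_detached CMD [ARG ...]: run CMD without the caller ever owning
# a long-lived child.  The subshell backgrounds CMD and exits immediately;
# the caller reaps only that short-lived subshell, and init(8) adopts CMD,
# so no zombie accumulates no matter how long CMD runs.
launch_detached() {
    ( "$@" </dev/null >/dev/null 2>&1 & )
}
```

Note this only covers the reaping side; the daemon(3) source also handles the controlling terminal, which a CGI normally doesn't have anyway.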
Intel S5000VSA motherboards?
Any experiences with Intel S5000VSA motherboards? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: openldap with dbv4 crash
On 1 January 2008, Daniel [EMAIL PROTECTED] wrote: Vijay Sankar wrote: [...] there's support in 2.4 but iirc it's not a simple thing to backport. Why should we backport the db4.6 support? We just need to use 2.4. [...] (1) Historically, upgrading existing OpenLDAP databases to new formats has always been a PITA; (2) The 2.4 branch is still unstable; historically, previous branches haven't become (somewhat) usable until about minor version 20; and guess what: the new branch is not exactly less complex than the older ones; (3) Historically, none of the new branches have been backward compatible; many applications don't support 2.4 yet. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Using Mail(1)
On 26 December 2007, Hannah Schroeter [EMAIL PROTECTED] wrote: Hi! On Wed, Dec 26, 2007 at 09:28:33AM +0200, Liviu Daia wrote: On 25 December 2007, Girish Venkatachalam [EMAIL PROTECTED] wrote: [...] I just checked out the 'wl=72' stuff in vi. Works exactly like 'tw' in vim. I then did an fmt in the end. The result looks much better of course. But there is a problem. The quoting gets goofed up. One has to do it with little more care I guess. [...] Or use Par instead of fmt; textproc/par in ports. $ cat bin/wrap_quote #! /bin/sh sed 's/^> //' | fmt | sed 's/^/> /' ... except Par can also handle multilevel quotes, like above. :) Take a look at it, you'll be impressed. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Using Mail(1)
On 25 December 2007, Girish Venkatachalam [EMAIL PROTECTED] wrote: [...] I just checked out the 'wl=72' stuff in vi. Works exactly like 'tw' in vim. I then did an fmt in the end. The result looks much better of course. But there is a problem. The quoting gets goofed up. One has to do it with little more care I guess. [...] Or use Par instead of fmt; textproc/par in ports. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: removing sendmail
On 3 December 2007, Amarendra Godbole [EMAIL PROTECTED] wrote: On Nov 30, 2007 4:32 PM, Liviu Daia [EMAIL PROTECTED] wrote: On 30 November 2007, Amarendra Godbole [EMAIL PROTECTED] wrote: Please note that postfix does not undergo the rigorous code scrub that sendmail goes through. [...] Will you please cut the crap? Thank you. Unlike Sendmail, Postfix was written from scratch with security in mind. It had only one published security flaw since its first public release in 1998. The author, Wietse Venema, is also the author of SATAN and tcpwrappers. He knew one or two things about writing secure code long before OpenBSD came into existence. The objections people occasionally have against Postfix are related to its license, not the code quality. [...] I guess my statement was misinterpreted - I did not question the security of postfix, but asserted that sendmail, being in base, was code audited by OBSD developers. I surely trust stuff from the base more than something that gets installed through a port. Actually, what you did was imply Postfix doesn't undergo a code audit as rigorous as the version of Sendmail in base, without having any idea about the internals of either Postfix or Sendmail, their development processes, and their security histories. That is, you dismissed Postfix based on your fuzzy feelings. As a second note, postfix as a standalone entity may be secure, but I am not sure how secure it will be if it starts interacting with some other piece of software. Sorry, I can't parse this. Software doesn't live in Plato's Paideia, every program interacts one way or another with some other software. Also, off the top of my head I can say that postfix's 'mailq' gets me the status even as a normal user, while that of sendmail does not (maybe I am wrong, and defaults have changed now). YMMV. (1) Sendmail did the same for at least 25 years; (2) As somebody else pointed out, it's configurable; (3) This has nothing to do with either security, or code audit.
Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: removing sendmail
On 30 November 2007, Geoff Steckel [EMAIL PROTECTED] wrote: Liviu Daia wrote: On 30 November 2007, Amarendra Godbole [EMAIL PROTECTED] wrote: Please note that postfix does not undergo the rigorous code scrub that sendmail goes through. [...] Will you please cut the crap? Thank you. Unlike Sendmail, Postfix was written from scratch with security in mind. It had only one published security flaw since its first public release in 1998. The author, Wietse Venema, is also the author of SATAN and tcpwrappers. He knew one or two things about writing secure code long before OpenBSD came into existence. The objections people occasionally have against Postfix are related to its license, not the code quality. I have seen several installations of Postfix go catatonic due to spam overload, large messages, mailing list expansions, and other undiagnosed problems. These were run by Postfix lovers, so I have always assumed that the installation was correct. In the one case I saw tested replacing Postfix with Sendmail resulted in no further problems. Given this anecdotal history I would suggest not running Postfix in a large production environment. Well, the point I was trying to make was about Postfix code being audited. But since I'm never the man to turn down a pissing contest, here we go: I have seen several installations of Sendmail go catatonic due to spam overload, large messages, mailing list expansions, and other undiagnosed problems. These were run by Sendmail lovers, so I have always assumed that the installation was correct. In the many cases I saw tested replacing Sendmail with Postfix resulted in no further problems. Given this anecdotal history I would suggest not running Sendmail in a large production environment. A story just as valid as yours. :) Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: removing sendmail
On 30 November 2007, Amarendra Godbole [EMAIL PROTECTED] wrote: Please note that postfix does not undergo the rigorous code scrub that sendmail goes through. [...] Will you please cut the crap? Thank you. Unlike Sendmail, Postfix was written from scratch with security in mind. It had only one published security flaw since its first public release in 1998. The author, Wietse Venema, is also the author of SATAN and tcpwrappers. He knew one or two things about writing secure code long before OpenBSD came into existence. The objections people occasionally have against Postfix are related to its license, not the code quality. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Craig Skinner [EMAIL PROTECTED] wrote: RW wrote: What I was getting looked like backscatter and smelled like backscatter it is just that some of the IPs sending it didn't check out as MTAs. i.e. they were not listed MXs for the domain they came from AND the domain was not likely someone with separate outbound senders. They all retried too and when I had them as TRAPPED entries the logged data included typical failed-to-deliver messages. 'bots getting smart eh? Bugger! If that is the trend, greylisting starts to lose its value as spammers adapt to the RFCs. [...] Greylisting is trivial to bypass, with or without a queue: just send the same messages twice. Some spammers have figured that out long ago. Ever wondered why sometimes you receive 2 or 3 copies of the same spam, from the same IP, with the same Message-Id etc., a few minutes apart? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Damien Miller [EMAIL PROTECTED] wrote: On Wed, 26 Sep 2007, Liviu Daia wrote: Greylisting is trivial to bypass, with or without a queue: just send the same messages twice. Some spammers have figured that out long ago. Ever wondered why sometimes you receive 2 or 3 copies of the same spam, from the same IP, with the same Message-Id etc., a few minutes apart? That doesn't work, at least not against spamd. How does spamd distinguish between a legitimate retry and a re-injection of the same message with the same Message-Id, sender etc.? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Craig Skinner [EMAIL PROTECTED] wrote: Liviu Daia wrote: How does spamd distinguish between a legitimate retry and a re-injection of the same message with the same Message-Id, sender etc.? It doesn't. Just what you described would probably be within the default 25 mins grey period. Why should it? The second copy is sent in a separate run, that's the whole point. The only thing the bot has to figure out is how long to wait until the second run. A smart one would send a second copy after 10 minutes, and a third one after, say, 35 minutes. Another delivery attempt would be needed after this time to pass spamd. Moral: randomize the greylisting time... Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
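As far as I know spamd(8) has no knob for a randomized passtime, but the fixed window can at least be moved off the well-known default with the -G option; a sketch for /etc/rc.conf.local (the values are examples only, check spamd(8) on your release):

```
# spamd -G passtime:greyexp:whiteexp  (minutes : hours : hours)
spamd_flags="-v -G 50:4:864"
```

Raising the passtime from the default 25 minutes defeats a bot that retries on a schedule tuned to the default, at the cost of delaying first-contact legitimate mail by the same amount.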
Re: SMTP flood + spamdb
On 26 September 2007, Luca Corti [EMAIL PROTECTED] wrote: On Wed, 2007-09-26 at 17:02 +0300, Liviu Daia wrote: Another delivery attempt would be needed after this time to pass spamd. Moral: randomize the greylisting time... Between which min/max values? Keep in mind that this corresponds to the (minimum) delay introduced in delivering a good message to the mailbox. That's up to you. The minimum should be large enough to keep away naive bots, as it does now. The maximum should be as large as you can afford without being too anti-social. :) Some crap will still pass through anyway. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Liviu Daia [EMAIL PROTECTED] wrote: On 26 September 2007, Luca Corti [EMAIL PROTECTED] wrote: On Wed, 2007-09-26 at 17:02 +0300, Liviu Daia wrote: Another delivery attempt would be needed after this time to pass spamd. Moral: randomize the greylisting time... Between which min/max values? Keep in mind that this corresponds to the (minimum) delay introduced in delivering a good message to the mailbox. That's up to you. The minimum should be large enough to keep away naive bots, as it does now. The maximum should be as large as you can afford without being too anti-social. :) Some crap will still pass through anyway. The maximum should also leave plenty of time before expiry. Some mailers use queue backoff algorithms, which means some legitimate messages might never get a chance to pass if the window is too small... Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Peter N. M. Hansteen [EMAIL PROTECTED] wrote: Liviu Daia [EMAIL PROTECTED] writes: Why should it? The second copy is sent in a separate run, that's the whole point. The only thing the bot has to figure out is how long to wait until the second run. A smart one would send a second copy after 10 minutes, and a third one after, say, 35 minutes. *BZZT!* Assuming facts not in evidence: a *smart* spambot /and/ a spammer who actually *cares* about the delivery of individual messages. My point is it doesn't have to. The third copy passes regardless of what happens with the first two. [...] Moral: randomize the greylisting time... Random numbers can be fun, but I'd like to see real world data which support your theory. Ok, since you ask, here's a recent one. The message passed all my filters, so it was received three times. Please note the identical message-id. First run: Sep 25 18:06:16 ns1 postfix-localhost/smtpd[27143]: 9FAE1142A7: client=unknown[212.239.40.101] Sep 25 18:06:17 ns1 postfix/cleanup[3734]: 9FAE1142A7: message-id=[EMAIL PROTECTED] Sep 25 18:06:18 ns1 postfix/qmgr[1554]: 9FAE1142A7: from=[EMAIL PROTECTED], size=2545, nrcpt=2 (queue active) Sep 25 18:06:18 ns1 postfix/pipe[25075]: 9FAE1142A7: to=[EMAIL PROTECTED], relay=uucpz, delay=1.8, delays=1.7/0/0/0.06, dsn=2.0.0, status=sent (delivered via uucpz service) Sep 25 18:06:18 ns1 postfix/local[7260]: 9FAE1142A7: to=[EMAIL PROTECTED], relay=local, delay=1.9, delays=1.7/0/0/0.24, dsn=2.0.0, status=sent (delivered to command: /usr/local/sbin/gather_stats.pl /usr/local/share/Mail_stats) Sep 25 18:06:18 ns1 postfix/qmgr[1554]: 9FAE1142A7: removed The same message, sent 8 minutes later: Sep 25 18:14:14 ns1 postfix-localhost/smtpd[8404]: 1649714331: client=unknown[212.239.40.101] Sep 25 18:14:15 ns1 postfix/cleanup[21622]: 1649714331: message-id=[EMAIL PROTECTED] Sep 25 18:14:15 ns1 postfix/qmgr[1554]: 1649714331: from=[EMAIL PROTECTED], size=2547, nrcpt=2 (queue active) Sep 25 18:14:15 ns1 
postfix/pipe[25075]: 1649714331: to=[EMAIL PROTECTED], relay=uucpz, delay=1.4, delays=1.4/0/0/0.05, dsn=2.0.0, status=sent (delivered via uucpz service) Sep 25 18:14:15 ns1 postfix/local[7260]: 1649714331: to=[EMAIL PROTECTED], relay=local, delay=1.6, delays=1.4/0/0/0.25, dsn=2.0.0, status=sent (delivered to command: /usr/local/sbin/gather_stats.pl /usr/local/share/Mail_stats) Sep 25 18:14:15 ns1 postfix/qmgr[1554]: 1649714331: removed Same, 28 minutes later: Sep 25 18:42:52 ns1 postfix-localhost/smtpd[13055]: 72BCD142A7: client=unknown[212.239.40.101] Sep 25 18:42:53 ns1 postfix/cleanup[21622]: 72BCD142A7: message-id=[EMAIL PROTECTED] Sep 25 18:42:53 ns1 postfix/qmgr[1554]: 72BCD142A7: from=[EMAIL PROTECTED], size=3724, nrcpt=2 (queue active) Sep 25 18:42:53 ns1 postfix/pipe[25075]: 72BCD142A7: to=[EMAIL PROTECTED], relay=uucpz, delay=0.81, delays=0.75/0.01/0/0.05, dsn=2.0.0, status=sent (delivered via uucpz service) Sep 25 18:42:53 ns1 postfix/local[7260]: 72BCD142A7: to=[EMAIL PROTECTED], relay=local, delay=1, delays=0.75/0.01/0/0.24, dsn=2.0.0, status=sent (delivered to command: /usr/local/sbin/gather_stats.pl /usr/local/share/Mail_stats) Sep 25 18:42:53 ns1 postfix/qmgr[1554]: 72BCD142A7: removed Had I used spamd, the first two copies would have been discarded, but the third would have passed. That said, randomizing the greylisting time is probably a lot of trouble, for little added value (it still doesn't solve the problem). I'm beginning to think that this is another one of those 'I refuse to believe greylisting works because I refuse to understand it' episodes. Oh, I'm not saying it doesn't work. What I'm saying is, greylisting is trivial to bypass, and some spammers have figured that out. Amazingly, most of them still haven't, which is why it still works in a significant number of cases. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Jeremy C. Reed [EMAIL PROTECTED] wrote: On Wed, 26 Sep 2007, Liviu Daia wrote: Same, 28 minutes later: Sep 25 18:42:52 ns1 postfix-localhost/smtpd[13055]: 72BCD142A7: client=unknown[212.239.40.101] Sep 25 18:42:53 ns1 postfix/cleanup[21622]: 72BCD142A7: message-id=[EMAIL PROTECTED] Sep 25 18:42:53 ns1 postfix/qmgr[1554]: 72BCD142A7: from=[EMAIL PROTECTED], size=3724, nrcpt=2 (queue active) Sep 25 18:42:53 ns1 postfix/pipe[25075]: 72BCD142A7: to=[EMAIL PROTECTED], relay=uucpz, delay=0.81, delays=0.75/0.01/0/0.05, dsn=2.0.0, status=sent (delivered via uucpz service) Sep 25 18:42:53 ns1 postfix/local[7260]: 72BCD142A7: to=[EMAIL PROTECTED], relay=local, delay=1, delays=0.75/0.01/0/0.24, dsn=2.0.0, status=sent (delivered to command: /usr/local/sbin/gather_stats.pl /usr/local/share/Mail_stats) Sep 25 18:42:53 ns1 postfix/qmgr[1554]: 72BCD142A7: removed Should I have used spamd, the first two copies would have been discarded, but the third would have passed. Not good example. As that would still hit spamd (default 25 minutes and your earlier one was too fast). Now it is whitelisted. Do you have a fourth email sent? (Which will have passed.) Not at hand, but I haven't been looking for one either. Does spamd really behave like that? That is, ignore retries during the greylisting period, and whitelist messages only on subsequent attempts? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 26 September 2007, Bob Beck [EMAIL PROTECTED] wrote: Oh, I'm not saying it doesn't work. What I'm saying is, greylisting is trivial to bypass, and some spammers have figured that out. Amazingly, most of them still haven't, which is why it still works in a significant number of cases. greylisting does what it does. It delays the initial email for 30 minutes or more. what you do with that 30 minutes will decide on how effective it is for you. In that 30 minutes) [...] Ok, brain dump: That's an interesting idea, a lot of slow operations could be offloaded to those 30 minutes. Your greyscanner script does DNS checks on the envelope. A lot more could be done, should the script have access to the body. I think it's legal to reply with 4xx (that is, simulate a queue error) to the final '.'. That could be used to gather and inspect the data; basically do at greylisting time what Postfix does with the after-queue filters. I suppose gathering everything would be prohibitive though, and against the entire philosophy of greylisting. Which brings me to a different approach: use a pre-queue filter instead of spamd (which is what I'm doing now). Now, the problem with pre-queue filters is they can take too long to scan a message. Thus, the better mouse trap: a pre-queue filter, which can send feedback to spamd, and use spamd's database to keep some state across messages. I need to ponder on that some more. spamd is designed to get the low hanging fruit. It is *NOT* designed to stop all possible spam, forever. attempting to do so there is wrong. Spamd is a *tool* - it's good for what it's good for - stopping stuff that is easily identifiable in the smtp dialogue. It is not intended for other things. We are in violent agreement here... Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: SMTP flood + spamdb
On 25 September 2007, RW [EMAIL PROTECTED] wrote: [...] My defence was to write a couple of scripts. One parsed the output of spamdb looking for GREY with sender and then tested the intended recipient against the postfix valid mailbox database. [...] With Postfix you can use anvil(8) to control concurrency. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
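For reference, the anvil(8) limits are set in Postfix's main.cf; a sketch with example values (tune to your own traffic, and see postconf(5) for the defaults):

```
# /etc/postfix/main.cf -- per-client limits, enforced via anvil(8)
smtpd_client_connection_count_limit = 10
smtpd_client_connection_rate_limit = 30
```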
Re: SMTP flood + spamdb
On 26 September 2007, RW [EMAIL PROTECTED] wrote: On Tue, 25 Sep 2007 14:14:46 +0300, Liviu Daia wrote: On 25 September 2007, RW [EMAIL PROTECTED] wrote: [...] My defence was to write a couple of scripts. One parsed the output of spamdb looking for GREY with sender and then tested the intended recipient against the postfix valid mailbox database. [...] With Postfix you can use anvil(8) to control concurrency. Yep, you could. BUT 1- why let it get to postfix? This is crap that spamd can deal with, with a bit of scripting help for extra functionality. 2- What concurrency? We had a mailstorm of backscatter from hundreds of IPs each trying to send one or two messages. We had over a thousand IPs marked TRAPPED in spamdb at one time. What I've been seeing here the last few weeks is somewhat different: robots trying to determine how many connections I'll accept concurrently. Left alone they can get to 100+ connection attempts per second from the same IP, they go on until I'm running out of resources and start delaying the accept(2). When that happens, only one or two of these connections are subsequently used to try to send the crap, the rest are closed immediately. Limiting concurrency at SMTP level seems to actually reduce the number of bots that try that (presumably the information that my site is way too uninteresting is propagated across the bot net). This has nothing to do with backscatter, but FWIW, backscatter alone has never been a real problem with Postfix until recently. Resource exhaustion because of insane concurrency as I described can be, and anvil(8) is a first attempt to a solution (it's not THE solution because it also hurts legitimate sites like Yahoo). Postfix would just be rejecting them and filling its logs. Oh come on, these days you're probably rejecting 95% of messages anyway. :) As far as I'm concerned filling the logs of mailservers that are backscatter generators is A Good Thing . 
Unfortunately, the people in charge of these servers either don't have a clue or don't care. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Is OpenBSD good/best for my 486?
On 21 March 2007, Travers Buda [EMAIL PROTECTED] wrote: * Douglas Allan Tutty [EMAIL PROTECTED] [2007-03-21 22:37:01]: Hello, I've got a 486DX4-100 with 32 MB ram, ISA bus, with two drives: 840 MB and 1280 MB IDE. Currently running Debian GNU/Linux Sarge. *snip* Is there any reason that OpenBSD wouldn't be my best choice for this box? I've run OpenBSD on a 486DX2 with 20 megs of ram. When you're talking about the 486es, you're going to want an FPU with OpenBSD. [...] The DX series did have an FPU. The SX didn't. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
4.0: AMD64 MP can't finish booting
VT83C572 USB rev 0x81: irq 10 usb2 at uhci2: USB revision 1.0 uhub2 at usb2 uhub2: VIA UHCI root hub, rev 1.00/1.00, addr 1 uhub2: 2 ports with 2 removable, self powered uhci3 at pci0 dev 16 function 3 VIA VT83C572 USB rev 0x81: irq 10 usb3 at uhci3: USB revision 1.0 uhub3 at usb3 uhub3: VIA UHCI root hub, rev 1.00/1.00, addr 1 uhub3: 2 ports with 2 removable, self powered ehci0 at pci0 dev 16 function 4 VIA VT6202 USB rev 0x86: irq 5 ehci0: timed out waiting for BIOS usb4 at ehci0: USB revision 2.0 uhub4 at usb4 uhub4: VIA EHCI root hub, rev 2.00/1.00, addr 1 uhub4: 8 ports with 8 removable, self powered viapm0 at pci0 dev 17 function 0 VIA VT8237 ISA rev 0x00 iic0 at viapm0 unknown at iic0 addr 0x18 not configured lm1 at iic0 addr 0x2f: W83791SD pchb6 at pci0 dev 24 function 0 AMD AMD64 HyperTransport rev 0x00 pchb7 at pci0 dev 24 function 1 AMD AMD64 Address Map rev 0x00 pchb8 at pci0 dev 24 function 2 AMD AMD64 DRAM Cfg rev 0x00 pchb9 at pci0 dev 24 function 3 AMD AMD64 Misc Cfg rev 0x00 isa0 at mainbus0 isadma0 at isa0 com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo pckbc0 at isa0 port 0x60/5 pckbd0 at pckbc0 (kbd slot) pckbc0: using irq 1 for kbd slot wskbd0 at pckbd0: console keyboard, using wsdisplay0 pmsi0 at pckbc0 (aux slot) pckbc0: using irq 12 for aux slot wsmouse0 at pmsi0 mux 0 pcppi0 at isa0 port 0x61 midi0 at pcppi0: PC speaker spkr0 at pcppi0 lpt0 at isa0 port 0x378/4 irq 7 lm0 at isa0 port 0x290/8: W83627THF fdc0 at isa0 port 0x3f0/6 irq 6 drq 2 fd0 at fdc0 drive 0: 1.44MB 80 cyl, 2 head, 18 sec uhub5 at uhub4 port 2 uhub5: NEC product 0x013e, rev 2.00/0.07, addr 2 uhub5: 4 ports with 4 removable, self powered, multiple transaction translators ugen0 at uhub0 port 1 ugen0: Cambridge Silicon Radio Bluetooth, rev 1.10/4.43, addr 2 dkcsum: wd0 matches BIOS drive 0x80 wd1: no disk label dkcsum: wd1 matches BIOS drive 0x81 wd2: no disk label dkcsum: wd2 matches BIOS drive 0x82 root on wd0a 
rootdev=0x0 rrootdev=0x300 rawdev=0x302 FWIW, the unconfigured device at pci0 dev 14 is a Quicknet PhoneJACK (FXS phone card). Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: hints for scanning msdosfs patterns?
On 6 July 2006, vladas [EMAIL PROTECTED] wrote: [...] I was not clear enough in the first place: due to the first 10Mb being gone, I do not expect to find any valid fs anymore. What I still hope for are individual files from the 3Gb image file that I have. I mean e.g. exe's, or dll's, zip's, lha's etc should have their size written in them or their data structures, not only fs, as well. So that e.g. for exe's I would find their MZ beginning chars, size after them and seek until the end by the size. [...] There are normally two copies of FAT. I'm too lazy to check how large they should be for a 3 GB fs, but I guess you erased both. Looking for signatures like MZ and PK will get you the first block in a file. Without FAT however you won't be able to locate any subsequent blocks. Depending on how fragmented the fs was when you erased the FAT, there is a tiny chance some of the blocks are contiguous, but that's just about all you can hope for. You can try lazarus from Wietse Venema's Coroner Toolkit: http://www.porcupine.org/forensics/tct.html However, like I said, I doubt you'll get very far without FAT. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
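Scanning for such magic bytes takes only a few lines of code. A minimal sketch (the function name and the signature table are my own illustrative choices, not an actual recovery tool — and, as said above, each hit only gives you the *first* block of a file):

```python
# Signature-based carving sketch: scan a raw image buffer for known
# magic bytes (DOS/PE "MZ", ZIP "PK\x03\x04") and report the offsets.
# Without the FAT, blocks after the first are recoverable only if the
# file happened to be stored contiguously.

SIGNATURES = {
    b"MZ": "exe",
    b"PK\x03\x04": "zip",
}

def find_signatures(data, signatures=SIGNATURES):
    """Return sorted (offset, kind) pairs for every signature hit."""
    hits = []
    for magic, kind in signatures.items():
        start = 0
        while True:
            pos = data.find(magic, start)
            if pos == -1:
                break
            hits.append((pos, kind))
            start = pos + 1
    return sorted(hits)
```

In practice you would read the 3 GB image in large chunks (with some overlap, so a signature straddling a chunk boundary isn't missed) rather than loading it whole.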
Re: Filesystem using tags, not folders?
On 9 June 2006, Kyrre Nygard [EMAIL PROTECTED] wrote: Hello! Just a wild thought here ... After noticing how much simpler it is using tags, for instance with my bookmarks at http://del.icio.us -- compared to hours of frustration trying to find the right combination of folders and sub-folders in my Firefox's bookmarks.html, I was wondering if the same approach could be used to arrange the UNIX filesystem hierarchy, from the root and up. This is just a radical thought, not yet an idea even -- but if somebody would be willing to think with me -- maybe we could make a big change. If all you want is some kind of file organizer for human use, you don't need a new filesystem. Just start a web server on localhost and install a small wiki. You get tags, links, permissions, text notes associated to nodes, and a lot more. You can also publish everything on the Internet should you need it. If OTOH you want to extend this model to the entire system, you'll need a lot more than a new kind of filesystem. Also, as somebody else pointed out, UNIX is probably not the right place to start. Perhaps you should look at plan9 / inferno first. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Problems trying to log on squirrelmail - part 2.
On 1 June 2006, João Salvatti [EMAIL PROTECTED] wrote: Hi all, Thank you very much for the tips you sent me. I could finally put squirrelmail to work. Now everything is almost fine, but there is still a little problem: I can send and receive e-mail through squirrelmail, but when an e-mail is received, it arrives at my mailbox (/var/spool/username) but it doesn't appear in my INBOX. But when I send an e-mail it appears in my sent items folder. Does anyone know what's happening? Just to remember: OpenBSD 3.9 postfix procmail cyrus-imapd Squirrelmail folders are placed at /var/spool/imap/user/myusername/Sent, Drafts, Trash Leave SquirrelMail out of the picture for now. The problem is Cyrus imapd uses its own backend storage rather than the system mailboxes. You can instruct Postfix to deliver to Cyrus imapd via LMTP (see Postfix docs), you can use the deliver script that comes with Cyrus, or you can do that from Procmail. Better yet, if you're not too far in this process to back off, just use Courier imapd instead of Cyrus. You'll need a script to convert your users' mailboxes to Maildir, but that's about the only problem you're likely to have with it. Some time ago I used mb2md to convert some 300 GB of mailboxes to Maildir, and I was happy with the result: http://batleth.sapienti-sat.org/projects/mb2md/ Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
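For the LMTP route, a minimal sketch of the Postfix side (the socket path here is an assumption — it has to match the lmtp entry in your cyrus.conf):

```
# /etc/postfix/main.cf -- hand local deliveries to Cyrus over LMTP.
# The socket path below is an assumption; check cyrus.conf for yours.
mailbox_transport = lmtp:unix:/var/imap/socket/lmtp
```

After changing main.cf, run "postfix reload".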
Re: Splitting xbaseXY.tgz - stupid idea?
On 20 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Sat, May 20, 2006 at 10:09:15AM +0300, Liviu Daia wrote: I have a simpler question: is there any plan to make installing xbase a requirement in the foreseeable future? no. nothing in {base,comp,man,misc,game,etc}XX.tgz depends on anything from xbaseXX.tgz, and that is extremely unlikely to ever change. [...] Ok, let me rephrase this. How realistic will it be to run an OpenBSD firewall or router without xbase a few years from now? With the release of 3.9, there seems to be a new trend among port maintainers to make running a system without xbase a PITA: packages of console applications now depend on X at run time even though that could be avoided with minimal fuss (example: mrtg), compiling ports that don't depend on X at run time now requires X (example: nmap-no_x11), and building ports without xbase is now unsupported (FAQ 15.4.1). So what I'm asking is: is all this an accident, or the new official policy? Will there be any effort put into making sure ports don't depend on X when that's reasonably feasible? Does anybody still care? What's the official take on this? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, Lars Hansson [EMAIL PROTECTED] wrote: On Monday 22 May 2006 17:27, Liviu Daia wrote: Ok, let me rephrase this. How realistic will it be to run an OpenBSD firewall or router without xbase a few years from now? Very, in my opinion. With the release of 3.9, there seems to be a new trend among port maintainers to make running a system without xbase a PITA: Eh, 2 examples aren't really a PITA and don't really make a trend. The consistent answer I got on ports@ was that it has been decided that installing X is not a showstopper, and a number of personal attacks for suggesting otherwise. :-) Which is why I'm now asking if this is the official position. packages of console applications now depend on X at run time even though that could be avoided with minimal fuss (example: mrtg), It's even easier to *not* run mrtg *on* the router/firewall. SNMP, symon and pfstatd exist for a reason. Sure, but that's not what I'm asking. Also, so far I've only seen 1 console application that requires X at runtime. [...] Mrtg, rrdtools, pftop, everything else depending on GD. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, Marc Balmer [EMAIL PROTECTED] wrote: * Liviu Daia wrote: The consistent answer I got on ports@ was that it has been decided that installing X is not a showstopper, and a number of personal attacks for suggesting otherwise. :-) Which is why I'm now asking if this is the official position. Yes, that _is_ the official position. Thank you. Please also answer my initial questions: On 22 May 2006, Liviu Daia [EMAIL PROTECTED] wrote: [...] Will there be any effort put into making sure ports don't depend on X when that's reasonably feasible? Does anybody still care? What's the official take on this? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Mon, May 22, 2006 at 12:27:18PM +0300, Liviu Daia wrote: On 20 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Sat, May 20, 2006 at 10:09:15AM +0300, Liviu Daia wrote: I have a simpler question: is there any plan to make installing xbase a requirement in the foreseeable future? no. nothing in {base,comp,man,misc,game,etc}XX.tgz depends on anything from xbaseXX.tgz, and that is extremely unlikely to ever change. [...] Ok, let me rephrase this. How realistic will it be to run an OpenBSD firewall or router without xbase a few years from now? it will be just as it is now: you don't need xbase as long as you aren't also installing packages that depend on something from xbase. This is not how things used to be for many years, and it does make a difference if no_x11 flavors are being slowly phased out. With the release of 3.9, there seems to be a new trend among port maintainers to make running a system without xbase a PITA: packages of console applications now depend on X at run time even though that could be avoided with minimal fuss (example: mrtg), who is deciding what minimal fuss and PITA are? oh, yeah, you, Well, passing two options to configure when building GD qualifies as minimal fuss to me. Do you need a committee to decide that? Would the named committee reach a different conclusion after careful deliberation? the same person who doesn't install xbase for space reasons, but then builds ports instead of installing packages. IMO, you aren't qualified to define minimal fuss and PITA, since you choose the PITA of building ports over the minimal fuss of installing packages. and that's not just me deciding minimal fuss and PITA, that's from FAQ 15. Will you please cut the crap and address the point I'm making rather than my wording, my reasons, my experience, my qualifications, and the color of my dog? Thank you. [...] Does anybody still care? 
on ports@, you've already accused people who disagree with you of being highly political. Please check your attributions. I never said that. [...] What's the official take on this? what again is the problem with installing xbase, if you are installing packages that depend on things from xbase? [...] At this point, it's only a philosophical one. I just want to make an informed decision when choosing my OS and my hardware configuration, and I think a definitive statement about that would be useful for other people too. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, steven mestdagh [EMAIL PROTECTED] wrote: Liviu Daia [2006-05-22, 12:27:18]: Ok, let me rephrase this. How realistic will it be to run an OpenBSD firewall or router without xbase a few years from now? Huh? You do not and will not need xbase to run a firewall/router. With the release of 3.9, there seems to be a new trend among port maintainers to make running a system without xbase a PITA You are completely blowing up your own gd/xbase/no space left on device problem beyond proportion, and accusing/insulting port maintainers for it? I don't have a no space left on device problem, and my point was never about that. If you still don't get it, my problem is that, with the current policy, three years from now there will be 50+ other ports depending on X for no reason. At that point, the disks, CFs, and everything else will be larger and cheaper, and somebody will notice that we already need to install xbase to do anything non-trivial on a router, so why not get rid of no_x11 altogether? That would free a lot of maintainer resources, and people have waited forever for antialiased fonts and true colors. What would you answer when that happens? This is why I'm asking for an official statement. compiling ports that don't depend on X at run time now requires X (example: nmap-no_x11), and building ports without xbase is now unsupported (FAQ 15.4.1). You do not understand. Unsupported does not mean impossible, it means you are on your own to do it. If you can't do this, just use the no_x11 package, as has been said many times now. Like I said, there are many ways around that, including compiling from sources outside the ports. But that's not relevant to what I'm asking. what I'm asking is: is all this an accident, or the new official policy? Will there be any effort put into making sure ports don't depend on X when that's reasonably feasible? Does anybody still care? What's the official take on this? Clearly, this no_x11 stuff has a low priority. 
Oh yes, I'm aware of that, the thread on ports@ made it clear. If you are still talking about making no_x11 flavors for the gd library and everything that depends on it, I doubt this will happen. I'm aware of that too. At this point, some people are probably willing to go out of their way to keep those two options out of Makefile, just because I asked for it. :-) This doesn't really disprove what I said above. [...] Now please stop wasting people's time with this. In the 7+ years I've been using OpenBSD I haven't bothered this list very often. So don't worry, I'll go away once this thread is over. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, Can Erkin Acar [EMAIL PROTECTED] wrote: On Monday 22 May 2006 Liviu Daia wrote: On 22 May 2006, Lars Hansson [EMAIL PROTECTED] wrote: On Monday 22 May 2006 17:27, Liviu Daia wrote: Ok, let me rephrase this. How realistic will it be to run an OpenBSD firewall or router without xbase a few years from now? Extremely realistic. Base install of OpenBSD is an excellent firewall and router platform, without the need for X. what is *your* definition of a router or a firewall? Please address my point, not my wording. Sometimes it may be useful to install a few programs on a firewall or router that are not in base. packages of console applications now depend on X at run time even though that could be avoided with minimal fuss (example: mrtg), It's even easier to *not* run mrtg *on* the router/firewall. SNMP, symon and pfstatd exist for a reason. Sure, but that's not what I'm asking. Sure, but that is not how it works in OpenBSD. *You* submit patches, and make sure that these patches *work* not only on your setup, but for everyone and every supported architecture. (1) My question was about the official policy. When / if I'm pointed to a written form of that policy (which is basically what I'm asking for), I might consider submitting a patch to it. :-) (2) If you're actually referring to my previous posts on ports@, somebody else has already posted a patch for the problem I was describing. I did test it on all architectures I have access to, and posted my results to the list. *You* work to get these patches committed. And everyone gets a better system as a result. Now, taking over somebody else's work would be downright rude, wouldn't it? Not contributing and only whining only gets you this far. Perhaps making other people aware of what I perceive as a policy problem affecting everybody is an acceptable form of contributing too. Also, so far I've only seen 1 console application that requires X at runtime. [...] 
Mrtg, rrdtools, pftop, everything else depending on GD. I am sure pftop does _not_ depend on GD. Sorry, s/pftop/pfstat/ . Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, Jacob Yocom-Piatt [EMAIL PROTECTED] wrote: my reply to this thread has no references b/c most of the prior stuff is irrelevant to the contents of this reply. the reason nobody wants to accommodate liviu is that it takes WORK, namely other people's valuable time that could be spent working on code that 1 person is agitating about. in all the time it took liviu to generate the heap of negative karma bitchy replies that he has sent to ports@, he could have likely coded and patched his system to work in the manner he wants and have avoided pissing off so many people. Should you bother to read the beginning of this thread, you'll find out I'm perfectly happy with the way my system works, and I'm not asking for code. by endlessly complaining fix it! accommodate me, i'm important! liviu has demonstrated his opinion that the developer time it takes to prepare things as he suggests is less valuable than his own (not to mention the time wasted arguing with him). despite this, daniel has made a point to be nice and give you a patch and you still won't STFU. if you want something that other people aren't actively campaigning for, do it yourself. i can't imagine liviu has gotten as far as he has in mathematics without experiencing this phenomenon. liviu's argument makes sense in the neighborhood of liviu's space-time, but it does not patch to our own neighborhoods very well ;). the net human energy and man hours wasted with liviu's complaining far exceeds the time it would have taken to find what is needed from xbaseXY.tgz to make such a setup work, i.e. for liviu to hammer this out himself, as he should have after the first few posts. understanding the policies of the ports maintainers is a good thing. putting people on the defensive when they are, IMO, doing a great job, is a really poor way to get people to help you out. a picking apart of my post based on minutiae that seem relevant in liviu's local frame is what i'm fully expecting. 
i feel stupid having spent the time i just did writing this, another waste of human energy. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Splitting xbaseXY.tgz - stupid idea?
On 22 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Mon, May 22, 2006 at 02:52:59PM +0300, Liviu Daia wrote: On 22 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Mon, May 22, 2006 at 12:27:18PM +0300, Liviu Daia wrote: On 20 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Sat, May 20, 2006 at 10:09:15AM +0300, Liviu Daia wrote: I have a simpler question: is there any plan to make installing xbase a requirement in the foreseeable future? no. nothing in {base,comp,man,misc,game,etc}XX.tgz depends on anything from xbaseXX.tgz, and that is extremely unlikely to ever change. [...] Ok, let me rephrase this. How realistic will it be to run an OpenBSD firewall or router without xbase a few years from now? it will be just as it is now: you don't need xbase as long as you aren't also installing packages that depend on something from xbase. This is not how things used to be for many years, beg pardon? please show me when and where. Building the no_x11 flavor of ports didn't use to require xbase. AFAICT, 3.9 is the first release when that happens. If it did happen earlier, I was lucky enough to miss it. and it does make a difference if no_x11 flavors are being slowly phased out. [...] doesn't appear to be any phasing-out trend to me. They are receiving less support. According to Steven Mestdagh: On 22 May 2006, steven mestdagh [EMAIL PROTECTED] wrote: [...] Clearly, this no_x11 stuff has a low priority. [...] Which is why I was asking if the actual intention behind all this is to phase them out. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: efficiency
On 23 May 2006, [EMAIL PROTECTED] wrote: Original message from prad [EMAIL PROTECTED]: suppose that you have 2 conditions A and B where B takes a lot of effort to determine (eg looking for a string match in a huge file). either A or B needs to be true before you can execute 'this'. the 2 if statements below are equivalent i think: if A or B: do this if A: do this elseif B: do this now, do they work the same way? Both of these forms are equivalent only in languages which short-circuit Boolean expressions (not all languages implement short-circuiting...). C/C++ both support this feature. [...] The only language I can think of that (1) does complete evaluations, (2) is still in use today, and (3) has a significant amount of code written in it, is Pascal. The Wirth-Jensen definition of Pascal specified complete evaluations. The once popular Borland Pascal implemented that as an option. Don't know about gpc. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
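The short-circuit behaviour is easy to demonstrate. A small Python sketch (the function names are illustrative, not from the original post):

```python
# Demonstrate short-circuit evaluation of "or": the cheap test runs
# first, and the expensive one runs only when the cheap one is false.

calls = []

def cheap_a(value):
    calls.append("A")
    return value

def expensive_b(value):
    calls.append("B")  # stands in for, say, scanning a huge file
    return value

# A is true: B is never evaluated.
if cheap_a(True) or expensive_b(True):
    pass
assert calls == ["A"]

# A is false: B must be evaluated.
calls.clear()
if cheap_a(False) or expensive_b(True):
    pass
assert calls == ["A", "B"]
```

In a language that evaluates both operands completely, both forms would pay the cost of B every time, and the explicit if/elseif version would be the only way to avoid it.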
Re: Splitting xbaseXY.tgz - stupid idea?
On 19 May 2006, Jacob Meuser [EMAIL PROTECTED] wrote: On Sat, May 20, 2006 at 02:43:36AM +0200, viq wrote: Sorry if it sounds otherwise, I have no intention of telling anyone what to do and how, just sharing some idea I had that could possibly satisfy both sides of the argument, and maybe help avoid the bi-weekly recurring question. Seeing all those why can't I compile port XX? install xbase but I don't want to install X on my firewall/server/whatever arguments - maybe it would be possible to split xbase into xbase and xlibs packages, with the latter having just some base libraries? I wonder, if xbase were a port, would there have ever been a complaint? what I mean is, if 'make package' or pkg_add just worked, would anyone who has complained have even noticed/cared that xbase got installed? it seems that at least a few people who have complained are perfectly happy installing other stuff they don't really need. I have a simpler question: is there any plan to make installing xbase a requirement in the foreseeable future? no, I'm not suggesting that xbase be a port; I'm just offering some perspective. as far as the biweekly question, that should be a clue that the people asking the question aren't doing their homework/paying attention (i.e. they probably would not have noticed/cared if xbase had been installed automatically anyway.) as far as making a new install set, that's a lot of continual work for very little gain. not to mention, it would add more bytes of text to the installation scripts :( So what you're saying here is that installing 30MB of xbase without the user requesting it is acceptable, but making an install script some 30 bytes larger isn't, right? Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: Linux UFS write support ??
On 20 May 2006, Jérôme Loyet [EMAIL PROTECTED] wrote: Hello, I'm trying to mount an OpenBSD image locally with write support on linux. [...] Don't do this; it will trash your filesystem. While read-only UFS on Linux is relatively safe these days (it used to produce frequent kernel panics), read-write has never worked properly. I also doubt there is much interest in fixing it. Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
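For the record, the safe read-only direction looks roughly like this on Linux (a sketch: the image name is a placeholder, and ufstype=44bsd selects the 4.4BSD-style FFS variant that OpenBSD uses):

```sh
# Read-only loopback mount of an OpenBSD FFS image on Linux.
# "openbsd.img" is a placeholder; keep "ro" -- never change it to "rw".
mount -t ufs -o ro,loop,ufstype=44bsd openbsd.img /mnt
```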
Re: Keyboard and cd not working on a dual EM64T with amd64.SMP
On 15 May 2006, Srebrenko Sehic [EMAIL PROTECTED] wrote: Some Dell boxes have this issue. The problem is not specific to Dell boxes, or to AMD64 kernels. I have the same issue on an HP LC2000, with i386 GENERIC.MP. Workaround is to use a USB keyboard. ... provided your motherboard has USB ports. Mine doesn't. :-) Regards, Liviu Daia -- Dr. Liviu Daia http://www.imar.ro/~daia
Re: tar(1) problem with long file names.
On 22 October 2005, Hannah Schroeter [EMAIL PROTECTED] wrote: Hello! On Sat, Oct 22, 2005 at 01:43:03PM +, Christian Weisgerber wrote: Hannah Schroeter [EMAIL PROTECTED] wrote: Use a more apt data format in your use case. Ehm correcting myself: According to pax(1), 100 is the limit for pathnames in the old tar format, while the limit for ustar is 250. For *pathnames*! Perhaps you can use cpio (or pax with -x cpio). Actually, it's the SVR4 cpio format (sv4cpio or the variant sv4crc) you want. 1024-char file/path names, 32-bit inode and device numbers, and even reasonably portable. If the plain cpio format itself isn't up to the task, perhaps the pax manual page should document its limitations. I went by the manual page and saw no mention of restrictions there for cpio, either. Still good to know about that recommendation, I might have some use for it too. See also the classic articles by Elizabeth Zwicky: http://berdmann.dyndns.org/doc/dump/zwicky/testdump.doc.html http://www.usenix.org/events/lisa03/tech/full_papers/zwicky/zwicky_html/ Regards, Liviu Daia -- Dr. Liviu Daia e-mail: [EMAIL PROTECTED] Institute of Mathematics web page: http://www.imar.ro/~daia of the Romanian Academy PGP key: http://www.imar.ro/~daia/daia.asc
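The ustar name limit is easy to reproduce with, for example, Python's tarfile module (illustrative only — this is not the code path pax(1) uses): a single path component longer than 100 characters cannot be stored in a ustar header, while the pax extended format stores it without trouble.

```python
# Contrast ustar's member-name limit with the pax extended format.
import io
import tarfile

# A single 154-char component: too long for ustar's 100-char name
# field, and with no "/" it cannot be split into prefix + name.
long_name = "d" * 150 + ".txt"

def try_format(fmt):
    """Try to archive an empty member named long_name in format fmt."""
    buf = io.BytesIO()
    info = tarfile.TarInfo(name=long_name)
    info.size = 0
    try:
        with tarfile.open(fileobj=buf, mode="w", format=fmt) as tf:
            tf.addfile(info, io.BytesIO(b""))
        return "ok"
    except ValueError:
        return "too long"
```

Here `try_format(tarfile.USTAR_FORMAT)` fails while `try_format(tarfile.PAX_FORMAT)` succeeds, which is the same class of limitation the thread ran into with tar(1).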