Re: [arch-general] Tiny webserver to run as root
On Sun, Jan 3, 2010 at 10:04 AM, RedShift redsh...@pandora.be wrote:
> Hi all
>
> Does anyone have a suggestion for some software, a tiny webserver that
> is able to run as root and execute CGI scripts?

I speak under correction, but that seems wildly dangerous, and something that a secure webserver would be designed specifically *not* to do.

--
Ryan W Sims
Re: [arch-general] A universal Operating System API - why don't we have it?
On Fri, Dec 18, 2009 at 4:24 AM, RedShift redsh...@pandora.be wrote:
> Hi all
>
> It dawned on me that lots of industries have standards and companies
> generally keep to them. For example, slabs of aluminium have standard
> sizes, building materials have well-defined specifications, or take
> electrical components: there's a huge list of standardized components.
> You can expect between 220 and 240 VAC from your wall socket, fuses
> have standard formats and ratings, 1 meter here is exactly the same as
> 1 meter in another country, etc. Even CDs, which have been around for
> decades by now, have always been created using the same format (albeit
> extended somewhat over time, but a normal CD pressed now should still
> play in a CD player that's 20 years old). It allows for a very
> competitive market where choices are made based on price, quality,
> availability, etc.

I look at it this way: an OS is a *tool,* whereas electricity, CDs, and such are commodities, and need to be fungible. Tools are *not* fungible; the way you interface with a tool is very tightly coupled to the purpose of that tool, which is why you should never use a hammer to pull a screw. The abstractions OSs (and also programming languages) present represent what they're designed to do, so making a one-size-fits-all tool is worse than useless. The desktop wars and similar arguments all commit the fallacy that OSs are a pretty shell over computer hardware, whereas they are (or should be) tools targeted at more or less specific problems.

--
Ryan W Sims
Re: [arch-general] Making pacman check multiple repos
2009/12/11 Ng Oon-Ee ngoo...@gmail.com:
> On Sat, 2009-12-12 at 02:13 +0100, Heiko Baums wrote:
>> On Sat, 12 Dec 2009 08:58:17 +0800, Ng Oon-Ee ngoo...@gmail.com wrote:
>>> Because sometimes all the mirrors listed in mirrorlist will not have
>>> the file, if it's just been uploaded. Also, not everyone stays
>>> up-to-the-minute with updates, judging by the "updated after a month"
>>> posts we see once in a while.
>>>
>>> I'm concerned about the last bit: if a package was just uploaded and
>>> only exists on one mirror, everyone who updates and has that package
>>> in the period between its upload and its appearance on their local
>>> mirror will fall back on varying mirrors (lengthening the update
>>> process) and all end up on the poor main server (or Tier 1/2
>>> mirrors). That's bad for mirror bandwidth, and most probably much
>>> slower for the user, who could probably just wait a day or so for the
>>> update to reach his (faster, presumably) local mirror.
>>
>> Wouldn't it be possible to upload the packages first, and only update
>> the db files once the packages on the mirrors (at least on several
>> mirrors) are updated? If I have the problem that a package is on no
>> mirror, which doesn't happen often, I usually abort the system update
>> and wait one day. I think that's the normal and easiest way of solving
>> this issue.
>>
>> Greetings, Heiko
>
> The few mirrors which sync first would have quite a lot higher
> bandwidth usage =).

It may be that natural selection is already producing this result: for instance, I just recently tried a bunch of different mirrors until I found one that was up to date. So people may be doing this manually already.

--
Ryan W Sims
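For anyone wanting to script that manual freshness check, here's a rough sketch. It assumes each mirror exports a lastsync file containing a Unix timestamp (many Arch mirrors do); the mirror URLs in the comment are placeholders, not real hosts:

```shell
# Given lines of "<timestamp> <mirror-url>" on stdin, print the URL of
# the mirror with the newest timestamp.
pick_freshest() {
    sort -rn | head -n 1 | cut -d' ' -f2-
}

# Gathering the timestamps might look like (hypothetical mirrors):
#   for m in http://mirror1.example/archlinux http://mirror2.example/archlinux; do
#       ts=$(curl -fsS "$m/lastsync") && printf '%s %s\n' "$ts" "$m"
#   done | pick_freshest
```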
Re: [arch-general] Bug reports for out of date packages?
2009/4/9 Allan McRae al...@archlinux.org:
> hollun...@gmx.at wrote:
>> jack-audio-connection-kit, qjackctl and ardour, all in extra, have
>> been out of date... snip
>
> So, does anyone have working updated PKGBUILDs for these that I can
> push to [extra]?
>
> Allan

This one for ardour2 from AUR works fine on my machine:

# Maintainer: Philipp Überbacher hollunder at gmx dot at
pkgname=ardour-lv2
pkgver=2.8
pkgrel=3
pkgdesc="Ardour is a digital audio workstation."
arch=('i686' 'x86_64')
url="http://ardour.org/"
license=('GPL')
depends=('rubberband' 'liblrdf' 'libgnomecanvas' 'liblo' 'libusb' 'aubio' 'slv2')
makedepends=('boost' 'ladspa' 'scons' 'gettext' 'libtool' 'pkgconfig')
options=('!libtool')
conflicts=('ardour' 'ardour2')
provides=('ardour' 'ardour2')
source=("http://releases.ardour.org/ardour-$pkgver.tar.bz2")
md5sums=('24bd768dbe08f1f2724dc97704ee0518')

build() {
  cd ${startdir}/src/ardour-${pkgver} || return 1
  scons PREFIX=/usr \
    FREESOUND=1 \
    DESTDIR=${pkgdir} || return 1
  scons PREFIX=/usr \
    DESTDIR=${pkgdir} \
    install || return 1
}

--
Ryan W Sims
[arch-general] kernel oops after recent upgrade
Just upgraded to kernel26 2.6.28.2-1, and now I get an oops when trying to mount my ntfs drive at boot. After boot I could manually mount everything else, but mount /mnt/ntfs hangs, unkillable. Since /mnt/ntfs was first in fstab, nothing else mounted; I have since commented out that line and everything's fine.

Attached the relevant kernel dmesg... can anyone shed any light on this, or should I open a bug?

--
Ryan W Sims
Re: [arch-general] problems with ivman? hal? pmount?
On Thu, Dec 4, 2008 at 7:02 AM, James Rayner [EMAIL PROTECTED] wrote:
> On Thu, Dec 4, 2008 at 3:56 PM, Ryan Sims [EMAIL PROTECTED] wrote:
>> I'm working on switching from KDE to openbox, and I'm playing around
>> with ivman automounting. After reading the wiki page[1] and following
>> the instructions, I find that I still can't get USB devices to
>> automount as a user. I can run ivman as a daemon, and it automounts
>> just fine, except that it creates the mount points as root:root, so I
>> can't access them. If I run ivman as a user, nothing gets automounted.
>> When I run ivman -d, or when I run pmount-hal
>> '/path/listed/in/ivman/debug/output', I get a lot of these messages:
>>
>> process 17673: The last reference on a connection was dropped without
>> closing the connection. This is a bug in an application. See
>> dbus_connection_unref() documentation for details. Most likely, the
>> application was supposed to call dbus_connection_close(), since this
>> is a private connection. D-Bus not built with -rdynamic so unable to
>> print a backtrace
>
> ivman has been unmaintained upstream for over 18 months. I now use the
> thunar volman plugin instead.

Aha, hadn't realized that.

> Install thunar and the thunar-volman plugin, then run thunar --daemon
> in your startup scripts.
>
> James

I'll give that a shot... I'm still wondering about the dropped D-Bus connection errors, though; I get the same thing when I run pmount-hal by hand, while plain pmount works as advertised. Can anyone else reproduce this?

--
Ryan W Sims
Re: [arch-general] problems with ivman? hal? pmount?
On Thu, Dec 4, 2008 at 7:02 AM, James Rayner [EMAIL PROTECTED] wrote:
> On Thu, Dec 4, 2008 at 3:56 PM, Ryan Sims [EMAIL PROTECTED] wrote:
>> I'm working on switching from KDE to openbox, and I'm playing around
>> with ivman automounting. After reading the wiki page[1] and following
>> the instructions, I find that I still can't get USB devices to
>> automount as a user. I can run ivman as a daemon, and it automounts
>> just fine, except that it creates the mount points as root:root, so I
>> can't access them. If I run ivman as a user, nothing gets automounted.
>> When I run ivman -d, or when I run pmount-hal
>> '/path/listed/in/ivman/debug/output', I get a lot of these messages:
>>
>> process 17673: The last reference on a connection was dropped without
>> closing the connection. This is a bug in an application. See
>> dbus_connection_unref() documentation for details. Most likely, the
>> application was supposed to call dbus_connection_close(), since this
>> is a private connection. D-Bus not built with -rdynamic so unable to
>> print a backtrace
>
> ivman has been unmaintained upstream for over 18 months. I now use the
> thunar volman plugin instead. Install thunar and the thunar-volman
> plugin, then run thunar --daemon in your startup scripts.
>
> James

I did just that, and it works a beaut. Still wish I knew what was up with pmount-hal and friends, but you gotta pick your battles, I guess. Thanks for the help.

--
Ryan W Sims
[arch-general] problems with ivman? hal? pmount?
I'm working on switching from KDE to openbox, and I'm playing around with ivman automounting. After reading the wiki page[1] and following the instructions, I find that I still can't get USB devices to automount as a user. I can run ivman as a daemon, and it automounts just fine, except that it creates the mount points as root:root, so I can't access them. If I run ivman as a user, nothing gets automounted.

When I run ivman -d, or when I run pmount-hal '/path/listed/in/ivman/debug/output', I get a lot of these messages:

process 17673: The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details. Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection. D-Bus not built with -rdynamic so unable to print a backtrace

Any help?

[1] http://wiki.archlinux.org/index.php/Ivman

PS. I've also been exploring automounting with udev, but I run into the same permissions problem with pmount: users can't access the files. I would also welcome advice in this direction, if anyone's interested.

--
Ryan W Sims
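In case it helps anyone else poking at the udev route: one direction that might work is mounting from the rule itself, with ownership forced via mount options rather than relying on pmount. This is only a sketch; the rule path, UID/GID, and mountpoint are made up for illustration, and the uid=/gid= options only exist for filesystems like vfat that don't carry Unix permissions:

```
# /etc/udev/rules.d/10-automount.rules (hypothetical)
KERNEL=="sd[b-z][0-9]", ACTION=="add", SUBSYSTEM=="block", \
    RUN+="/bin/mount -t vfat -o uid=1000,gid=1000,umask=022 /dev/%k /media/usb-%k"
```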
Re: [arch-general] Dealing with Info documentation
On Fri, Jun 13, 2008 at 8:59 AM, Frédéric Perrin [EMAIL PROTECTED] wrote:
> Hello Archlinux,
>
> I like reading documentation in the Info format (especially when it is
> the preferred / only form of documentation). However, Archlinux
> decides to strip the Info documentation from its packages by default.
> I am not going to contest that decision, but I'd like to install Info
> docs. Now, what would be the good way to do it? Manually installing
> Info packages? Patching and locally rebuilding the packages for which
> I want the documentation? Creating a PKGBUILD that will install a
> bunch of docs? It seems this is the way to go. But what will happen if
> Arch ever decides to include Info files; how are conflicting files
> going to be handled?
>
> -- Fred

http://wiki.archlinux.org/index.php/Keeping_Docs_and_Info_Files

--
Ryan W Sims
Re: [arch-general] Dealing with Info documentation
On Fri, Jun 13, 2008 at 11:43 AM, Frédéric Perrin [EMAIL PROTECTED] wrote:
> Hello,
>
> Alessio Bolognino [EMAIL PROTECTED] writes:
>> I recall a recent thread in arch-dev-public about a new policy on
>> info docs; I think the devs decided to include info docs in new
>> packages, but I'm not 100% sure.
>
> After some manual searching, I found the following thread:
>
> [arch-dev-public] OMG info pages
> Aaron Griffin aaronmgriffin at gmail.com
> Tue Apr 22 13:05:12 EDT 2008

Here's the link: http://archlinux.org/pipermail/arch-dev-public/2008-April/005920.html (it also continues into May). Doesn't look like anything's been decided.

FWIW, the GNU info docs are all available online, or you can download them in various formats. I prefer HTML browsing to info myself, but some people really like info.

--
Ryan W Sims
Re: [arch-general] X errors...
On Fri, Jun 13, 2008 at 12:24 PM, Allie Daneman [EMAIL PROTECTED] wrote:
> Has anyone had any issues like this with X?
>
> X Error of failed request: BadAccess (attempt to access private resource denied)
> Major opcode of failed request: 102 (X_ChangeKeyboardControl)

What are you doing when you get the error? Are you by any chance using X over ssh? A little googling found this:

http://www.cs.rochester.edu/twiki/bin/view/Main/LinuxChangesAugust2005

From the link: "This is a result of the openssh upgrade, which adds some security measures to X11 forwarding. The solution to ridding yourself of these error messages is to add the line ForwardX11Trusted yes to your ~/.ssh/config file. X applications will then forward just fine."

If you still have trouble, make sure you don't have an inadvertent alias of emacs to emacs -nw, and if that doesn't work, use the -Y flag to ssh.

--
Ryan W Sims
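For reference, the relevant ~/.ssh/config stanza would look something like this (the Host * block applies it to every host; a per-host section works just as well):

```
Host *
    ForwardX11 yes
    ForwardX11Trusted yes
```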
Re: [arch-general] [arch-dev-public] maintainers wanted
On Fri, Jun 13, 2008 at 2:27 PM, Daenyth Blank [EMAIL PROTECTED] wrote:
>> Take this as a mini-announcement too. I figure that anyone I'd
>> actually want on board is already following the arch-dev-public list
>> anyway, so if anyone is interested in maintaining a handful of
>> packages in [extra], let me know.
>
> I'm up for it. I have only i686 boxes at the moment though. Also,
> would you want me to be using [testing], or is that more on a
> case-by-case basis?
>
> Thanks for your consideration
> --Daenyth

I'm in if you need somebody. I have an i686 and an x86-64 box. Let me know how I can help.

--
Ryan W Sims
Re: [arch-general] cups depend on ghostscript? WAS: troubles with Brother HL-2040
On Fri, May 16, 2008 at 1:03 PM, Andreas Radke [EMAIL PROTECTED] wrote:
> On Thu, 15 May 2008 17:20:31 -0400, Ryan Sims [EMAIL PROTECTED] wrote:
>> [snip]
>>
>> Interesting, I found the problem: I had neglected to install
>> ghostscript. It seems to me like that should be an explicit
>> dependency for cups; is there a scenario where you could print from
>> cups without ghostscript? I also think it's interesting that cups
>> didn't say anything about gs failing to run except at the debug log
>> level, and the failure certainly wasn't considered fatal, even though
>> it really was.
>
> File a bug please!
>
> -Andy

Will do, just wanted to make sure I wasn't missing something obvious.

--
Ryan W Sims
[arch-general] troubles with Brother HL-2040
I just installed Arch on my print server at home; it had been running another distro and chugging along happily. It's got a Brother HL-2040 attached to it via USB, and I use cups to share it around the network.

Well, I got the server up and running, so I followed the instructions in the wiki[1] for installing the printer. Everything seems to have gone OK: the printer was detected and installed correctly, hooray. However, it doesn't print the test page (or anything else). When I ask it to print, the printer wakes up and starts spinning, thinks for a couple of seconds, and then goes back to sleep. The cups logs seem to be clean, just some errors about DNSSD registration, which doesn't seem related. I'm working on trying alternative drivers, but nothing has helped so far. I know the printer has worked before; my desk is covered in paper it's printed, so I'm probably just doing something stupid. Help?

--
Ryan W Sims
[arch-general] cups depend on ghostscript? WAS: troubles with Brother HL-2040
On Thu, May 15, 2008 at 3:53 PM, Thayer Williams [EMAIL PROTECTED] wrote:
> On 5/15/08, Ryan Sims [EMAIL PROTECTED] wrote:
>> I just installed Arch on my print server at home; it had been running
>> another distro and chugging along happily. It's got a Brother HL2040
>> attached to it via USB, and I use cups to share it around the
>> network. Well, I got the server up and running, so I followed the
>> instructions in the wiki[1] for installing the printer. Everything
>> seems to have gone OK: the printer was detected and installed
>> correctly, hooray. However, it doesn't print the test page (or
>> anything else). When I ask it to print, the printer wakes up and
>> starts spinning, thinks for a couple of seconds, and then goes back
>> to sleep.
>
> Are you trying to print directly from the server, or from a remote PC
> on the network? Get it working locally first before worrying about
> anything else. Are you running a GUI on the server? If so, what does
> http://localhost:631/printers show for printer status?

Interesting, I found the problem: I had neglected to install ghostscript. It seems to me like that should be an explicit dependency for cups; is there a scenario where you could print from cups without ghostscript? I also think it's interesting that cups didn't say anything about gs failing to run except at the debug log level, and the failure certainly wasn't considered fatal, even though it really was.

--
Ryan W Sims
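For anyone else who hits this, a quick sanity check. This is a sketch only; the log path is the CUPS default and may differ on your install:

```shell
# Report whether ghostscript is on the PATH; cups needs it to rasterize
# jobs for most non-PostScript printers, but fails quietly without it.
check_gs() {
    if command -v gs > /dev/null 2>&1; then
        echo "ghostscript present"
    else
        echo "ghostscript missing"
    fi
}

check_gs
# If missing, install it (on Arch: pacman -S ghostscript). Then, with
# "LogLevel debug" in /etc/cups/cupsd.conf, filter failures show up via:
#   grep -iE 'ghostscript|filter failed' /var/log/cups/error_log
```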
Re: [arch-general] [arch-dev-public] policy on desktop files?
On Thu, May 8, 2008 at 9:27 AM, Grigorios Bouzakis [EMAIL PROTECTED] wrote:
> On Thu, May 08, 2008 at 03:53:29PM +0300, Dimitrios Apostolou wrote:
>> On Thursday 08 May 2008 15:58:51 bardo wrote:
>>> On Thu, May 8, 2008 at 2:29 PM, Grigorios Bouzakis [EMAIL PROTECTED] wrote:
>>>> Hi, I wanted to note that there is
>>>> http://wiki.archlinux.org/index.php/Desktop_Project maintained by
>>>> bardo, a TU, which mentions absolutely nothing about upstream.
>>>> Instead it says "I (bardo) will write/modify the necessary files
>>>> and notify the corresponding maintainers so they can be added to
>>>> the packages." Additionally there are links to the bug tracker. I
>>>> have no idea if bardo submits them upstream as well, but I doubt it.
>>>
>>> At the moment I don't. When I announced the project it got a good
>>> reaction, so I carried on with it, but after a while of creating and
>>> providing desktop files through the bug tracker I started receiving
>>> the "upstream problem" answer. This has become pretty frequent, so
>>> lately I haven't been doing very much on the Arch side. The few
>>> developers I tried to contact didn't do anything, so I suppose
>>> there's little to no interest from them. The whole thing has started
>>> to become more frustrating than anything, so at the moment I'm not
>>> working on it anymore.
>>
>> IMHO, even though I'm aggressively against patching, I don't consider
>> the .desktop files patches. They are some extra, *non-code* files
>> that it is fair for the distro to provide (like other configuration
>> files). I don't really blame the app developers that don't include
>> them upstream.
>
> True, they are not patches, but they should be part of the
> application's source. Requiring packagers to write/include a .desktop
> file in their distros for actively developed projects is IMO
> unacceptable today, with Linux being part of the desktop market. If
> they don't provide one, it's quite clear they don't want one. If I
> submit a .desktop file upstream and it gets rejected, it should be
> treated the same as a rejected patch. The exception to the above rule
> would be projects not being actively developed anymore, and only them.
>
> Greg

We needn't get bogged down in another "is this the ARCH WAY?!?" conversation here; I don't think it needs to be a policy decision. If neither Arch nor upstream wants to deal with .desktop files (and they both seem to have their reasons), would it be possible to host some space somewhere that users could post their own? It wouldn't need to be Arch-hosted; perhaps this is a SourceForge project waiting to happen: a searchable repository for orphaned .desktop files. I'd be happy to go download .desktop files from somewhere if they aren't already included.

--
Ryan W Sims
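For anyone who hasn't written one, a .desktop file is tiny. Here's a minimal example in the shape the freedesktop.org Desktop Entry specification describes, with all the values made up:

```
[Desktop Entry]
Type=Application
Name=Example App
Comment=Placeholder description
Exec=example-app %U
Icon=example-app
Categories=Utility;
```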
Re: [arch-general] Compiling my own kernel: IDE, SATA...
At a guess, it sounds like Arch is loading a module that's a specific driver for your chipset, while your own kernel is using the generic ATA drivers. Take a look at the output of hwd, lspci, and such. You also might get some mileage out of googling your motherboard, or poke around on the forums (you could also check Gentoo's) for your chipsets. Do an lsmod under an Arch kernel to see what modules it's loading; that'll help you configure your own. It would help if you posted more specifics about your rig, and which options you're selecting in the block devices part of the kernel config.

On Wed, May 7, 2008 at 1:13 PM, Carotinho [EMAIL PROTECTED] wrote:
> Hi!
>
> I'm sure this is an already-answered question, but the problem is that
> I don't know which question I need the answer to. :) After this
> prologue, the problem is: the currently running system, with the
> Arch-supplied 2.6.24 kernel, has the disk devices all mapped to a
> /dev/sd* scheme, even though 3 are IDE and another is SATA. When I
> compile the kernel myself, I get the traditional /dev/hd* scheme,
> which conflicts with the content of /etc/fstab.
>
> The main question is: how can I obtain the right behaviour from my own
> compiled kernel? Is this due to some misconfiguration of the kernel at
> compile time, or is it obtained through some other kind of magic? The
> real problem here is that I cannot give a name to this problem, hence
> being unable to search for it! :) I've always used a Slack system with
> traditional disk mapping; it's the first time I've come across this
> problem. :)
>
> Thanks in advance!
> Carotinho

--
Ryan W Sims
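To make the lsmod comparison concrete, here's a sketch; the capture file names are whatever you choose when saving lsmod output under each kernel:

```shell
# List modules that appear in the first lsmod capture but not the
# second: likely candidates for options missing from the custom config.
modules_only_in_first() {
    awk 'NR > 1 { print $1 }' "$1" | sort > /tmp/mods_a.$$
    awk 'NR > 1 { print $1 }' "$2" | sort > /tmp/mods_b.$$
    comm -23 /tmp/mods_a.$$ /tmp/mods_b.$$
    rm -f /tmp/mods_a.$$ /tmp/mods_b.$$
}

# Usage: modules_only_in_first arch-lsmod.txt custom-lsmod.txt
```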
[arch-general] kernel oopses when using modules
I posted[1] to the forums about this when I thought it was an nvidia problem, but now it seems to be more general. I recently upgraded to kernel26-2.6.23.9-1 from 2.6.23.8-1, and now I get oopses when certain modules are accessed. For example:

BUG: unable to handle kernel NULL pointer dereference at virtual address 000b
printing eip: c016c4da
*pde =
Oops: [#1] PREEMPT SMP
Modules linked in: ext2 w83627ehf hwmon_vid ipv6 ohci1394 ieee1394 firewire_ohci firewire_core crc_itu_t tsdev usbhid hid ff_memless usb_storage ide_core intel_agp agpgart ppp_generic sky2 sg evdev thermal processor fan button battery ac kqemu i2c_i801 i2c_dev i2c_core coretemp snd_hda_intel snd_pcm snd_timer snd_page_alloc snd_hwdep snd soundcore slhc skge rtc ext3 jbd mbcache sd_mod sr_mod cdrom ehci_hcd uhci_hcd usbcore ahci ata_generic pata_jmicron libata
CPU: 1
EIP: 0060:[c016c4da] Not tainted VLI
EFLAGS: 00210206 (2.6.23-ARCH #1)
EIP is at find_vma+0xa/0x70
eax: 0003  ebx: af09d000  ecx: af09d000  edx: af09d000
esi: 0003  edi: af09d000  ebp: f5d94000  esp: f490de1c
ds: 007b  es: 007b  fs: 00d8  gs: 0033  ss: 0068
Process qemu (pid: 7434, ti=f490c000 task=f5d94000 task.ti=f490c000)
Stack: 0114 af09d000 c016cc1d 0114 f959 af09d000 c016b084 f5d94000 0003 0003 0022 f959 f959 f959 0114 f959 f959 0002 f935345f 0001 0001 f490de80
Call Trace:
 [c016cc1d] find_extend_vma+0x1d/0x70
 [c016b084] get_user_pages+0x44/0x2d0
 [f935345f] kqemu_lock_user_page+0x3f/0x80 [kqemu]
 [f93549d7] mon_user_map+0xe7/0x110 [kqemu]
 [f93552cb] kqemu_init+0x7eb/0xe20 [kqemu]
 [c016ade1] handle_mm_fault+0x501/0x760
 [c016e051] mmap_region+0x311/0x440
 [f93531b9] kqemu_ioctl+0x109/0x120 [kqemu]
 [c018ae58] do_ioctl+0x78/0x90
 [c018b09e] vfs_ioctl+0x22e/0x2b0
 [c018b17d] sys_ioctl+0x5d/0x70
 [c0104482] sysenter_past_esp+0x6b/0xa1
 [c036] wait_for_completion+0x30/0xa0
===
Code: 00 89 d1 8b 50 20 39 ca 73 05 89 48 20 89 ca 8b 48 14 39 d1 73 03 89 48 20 f3 c3 8d b6 00 00 00 00 56 85 c0 53 89 c6 89 d3 74 51 8b 50 08 85 d2 74 05 39 5a 08 77 35 8b 4e 04 85 c9 74 3e 31 d2
EIP: [c016c4da] find_vma+0xa/0x70 SS:ESP 0068:f490de1c

I don't want to waste the bandwidth, but I have another oops very much like it from nvidia trying to run OpenGL stuff; the dmesg above is from the kqemu module and qemu. I'd suspect hardware, but I don't have any other reason to, and everything was working fine before the upgrade. Also, it's only these two modules (so far), and the BUG always happens at the same place, which seems a little too deterministic to be a heat issue. I'm not familiar enough with the kernel changelog to be able to narrow things down that way. Any help?

--
Ryan W Sims